Exploring How AWS Serverless Web Apps Work

Posted by Kimi Mahajan on Sep 7, 2019 4:10:00 PM

2014 saw the breakthrough release of AWS Lambda, offering a powerful new way of running applications in the cloud. However, it soon became clear that a full application needs structure, and that managing all of the functions and containers Lambda introduces is difficult.

This gave rise to the Serverless Framework, arguably the most powerful framework for building applications exclusively on AWS Lambda.

With an increasing number of organizations riding the serverless wave, the way they develop and deliver software has undergone a drastic transformation. Let's get into the details of serverless web apps and explore how they work.

The What and Why of Serverless

Serverless refers to an application framework for building web applications without having to deal with servers. The servers are managed by the cloud provider, which takes care of provisioning and allocation.

This lets the application run in stateless compute containers that are ephemeral and event-triggered. Developers' productive efforts can be channeled in the right direction, saving them from getting caught up in the intricate web of modern, complex infrastructure.

Pricing is based on a pay-per-use model rather than pre-purchased compute capacity. Serverless offerings are among the most practical in the market, and serverless is anticipated to be one of the most used cloud services in the coming years.

We have already mentioned in our blog - 5 Reasons To Consider Serverless AWS For Drupal - why AWS is considered the best hosting provider for Drupal websites.

If we compare the architecture of a multi-user, mobile-friendly app requiring user authentication built as a monolith with that of the same app built serverless, the monolith would look somewhat like the one shown below:

Monolithic Architecture (Source: Dzone)

The serverless architecture would look something like the one shown below:

serverless architecture

Serverless Architecture (Source: Dzone)

With serverless, application development relies on a combination of third-party services, client-side logic and cloud-hosted remote procedure calls, which is why this model is also referred to as Functions as a Service (FaaS).

FaaS refers to an implementation of serverless architecture in which a piece of business logic processes individual requests. Functions are independent, server-side, logical units: small, separate pieces of logic that take input arguments, operate on them, and return a result (an AWS Lambda function, for example). FaaS is stateless, which means any two invocations of the same function could run in completely different containers.

AWS Lambda, Azure Functions, IBM OpenWhisk and Google Cloud Functions are the best-known FaaS solutions available, supporting a range of languages and runtimes, e.g. Node.js, Python, etc.
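As a rough illustration, a FaaS unit in Python is just a stateless handler that takes an event and returns a result; the event fields here are hypothetical, not tied to any particular provider's template:

```python
import json

def handler(event, context):
    # A FaaS unit: take input arguments, operate on them, return the result.
    # No state is kept between invocations; any two calls may run in
    # completely different containers.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```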


Composition of Serverless App

Assembling a modern application means creating a solution by combining SaaS with managed and serverless services. This makes the process faster but also complex, since it requires a lot of manual work to bring all the pieces together.

However, with serverless components, it becomes much simpler. Every project uses AWS resources, divided into three groups:

  • AWS Lambdas
  • AWS API Gateway REST API
  • AWS other resources such as DynamoDB tables, S3 Buckets, etc.

Serverless projects live exclusively in the cloud (i.e. on AWS) and don't have a specific environment. Serverless isolates the AWS resources a project uses for development, testing and production purposes through stages.

Stages can be thought of as environments, except for the fact that they exist merely to separate and isolate your project's AWS resources.

Each serverless project can have multiple stages, and each stage can have multiple AWS Regions.

  • AWS Lambdas

Lambda functions on AWS can be replicated across each region your project uses. When you deploy a function in your project to a stage, it deploys a Lambda. That Lambda can be triggered by events from other AWS services, but not by direct HTTP requests.

  • AWS API Gateway REST API

If your functions expose endpoints, a REST API on AWS API Gateway will automatically be created for your project, and it can be replicated across each region.

When you deploy an endpoint to a stage, it builds it on your API Gateway REST API and then creates a deployment in that API Gateway stage.

  • AWS Other Resources

Your project's other AWS resources have separate deployments for each stage which can be replicated across each region your project uses.
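To make this concrete, here is a minimal sketch of a Serverless Framework serverless.yml that groups exactly these three things: a Lambda function, its API Gateway endpoint, and one other AWS resource, with the stage set per deployment. The service, handler and table names are placeholders:

```yaml
service: notes-api

provider:
  name: aws
  runtime: python3.9
  stage: ${opt:stage, 'dev'}       # dev / test / prod isolate AWS resources
  region: ${opt:region, 'us-east-1'}

functions:
  getNote:
    handler: handler.get_note      # an AWS Lambda
    events:
      - http:                      # creates an API Gateway REST API endpoint
          path: notes/{id}
          method: get

resources:                         # other AWS resources (DynamoDB, S3, ...)
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: notes-${self:provider.stage}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```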

Components present a single experience for provisioning infrastructure and code across all cloud and SaaS vendors, saving development time.

Creating a Serverless Application

Let's take a look at how you can use serverless components to create an entire serverless application. A serverless solution consists of a web server, Lambda functions (FaaS), a security token service (STS), user authentication, and a database.


  • Client Application: The UI of your application is rendered client-side.
  • Web Server: Amazon S3 acts as a robust and simple web server, serving the static HTML, CSS and JS files for our application.
  • Lambda Function (FaaS): The key enabler in a serverless architecture. In the framework shown above, AWS Lambda handles logging in and reading and writing data from your database, returning JSON responses.
  • Security Token Service (STS): Generates temporary AWS credentials (API key and secret key) for users of the application to invoke the AWS API (and thus invoke Lambda).
  • User Authentication: An identity service integrated with AWS Lambda can add user login to mobile and web apps. It can also authenticate users through social identity providers and SAML identity solutions.
  • Database: AWS DynamoDB provides a fully managed NoSQL database. DynamoDB is used here as an example; a minimal sketch of the Lambda piece follows this list.
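A rough sketch of such a Lambda, reading a record from DynamoDB and returning a JSON response; the `users` table and the API Gateway-style event fields are hypothetical:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")        # hypothetical DynamoDB table

def handler(event, context):
    # Read the requested user record and return it as a JSON response.
    user_id = event["pathParameters"]["id"]
    result = table.get_item(Key={"id": user_id})
    item = result.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```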

Any cloud service can be packaged as a serverless component.

Understanding Through An Example

You want to write a serverless image processing API that pulls images from S3. To do so, you might create an AWS API Gateway endpoint to call an AWS Lambda function, which then pulls an image from the AWS S3 bucket and modifies it.

All serverless components can be nested in a larger component to create a serverless image processing API, as shown in the image below:

Here's why this is important: when you create this image processing API, you have to configure each component. However, this can be avoided by nesting those infrastructure-level components in a higher-order component, which can expose simpler configuration and be reused elsewhere.

Composing components to form an entire application

The complete web application can be built by nesting the serverless components.


The Serverless Framework believes in making infrastructure more invisible, enhancing developers' ability to focus on outcomes, and fostering a community that shares and reuses those outcomes.

CloudFormation is an AWS service that bundles together all the necessary pieces that let a Lambda do actual work. It treats a complete serverless 'stack' as a configuration file that can be moved and deployed in different environments.

How do we tell our Lambda where it's running, and how do we give it the configuration it needs to interact with other services?

We need secrets to authenticate to our DB, but we also need our Lambda to know that it’s running on staging so that it doesn’t try to update the production database during our test runs.

So we can identify three key sections of our serverless app: our function, its resources, and the secrets/configuration that make up its environment.
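One common pattern, sketched below, passes the stage and a secret reference into the function as environment variables set at deploy time, so the same code can find the right database for whichever stack it is running in. The variable names are illustrative:

```python
import os
import boto3

# Deploy-time configuration: the same function code runs in every stack,
# and only these environment variables change per stage.
STAGE = os.environ.get("STAGE", "dev")          # e.g. dev, staging, prod
DB_SECRET_NAME = os.environ["DB_SECRET_NAME"]   # stage-specific secret name

def get_db_credentials():
    # Fetch the stage's database secret at runtime instead of baking
    # credentials into the deployment package.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=DB_SECRET_NAME)
    return response["SecretString"]
```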

In a highly virtualized environment, it can be difficult to pinpoint where a particular piece of code is running. The 'environment', the stack, its configuration, and its secrets will collectively exist across multiple zones or even multiple services.

An active stack refers to a complete set of functions, resources, and environment. Dev, test and prod can be three active stacks where you're running your code. If your production stack is distributed across three different AWS regions, you again have three active stacks.

To adopt a serverless model for part of your architecture, it is important to have a plan for managing all of these pieces. You must have:

  • Programmers to write functions and also manage their source code
  • Cloud professionals to manage the resources those functions need
  • Operations and security to deploy these stacks in the right environments

Srijan can help you assemble matching stacks and their environments, and easily define complete applications and re-deploy them in different AWS regions. Contact our experts with your requirements.

Topics: Microservices, AWS, Cloud

Event Sourcing Microservices and Deploying with Docker

Posted by Gaurav Mishra on Jul 22, 2019 11:47:00 AM

The microservices architecture, while the right choice for enterprises looking to build scalable, future-ready applications, also comes with a set of challenges. Moving from monolithic applications to a microservices-based architecture means dealing with a set of independent services that could range from 10 to 100s, depending upon the complexity of the application. And managing this distributed system is naturally more nuanced than managing an application packaged as a single unit.


The key challenges with a microservices architecture are:

  • Complexity in developing and deploying the microservices architecture with all its moving parts
  • Complex testing owing to inter-service dependencies
  • Managing inter-service communication
  • Programming each service to respond to failures in other services
  • Ensuring database consistency even as each service ideally uses an independent database
  • Complexity in developing functions that span multiple services

 

This blog will concentrate on the best solutions for two of the challenges listed above, namely

  • Complexity of deploying a microservices architecture
  • Ensuring database consistency

We will also take a look at how to do this using AWS services.

Docker for Deploying Microservices

Deploying and orchestrating a host of different microservices to cohesively deliver an application experience is extremely complex. Add to it the fact that there are a few specific prerequisites for deploying these services:

  • Services must be deployed independently of, and isolated from, each other
  • The deployment process must be fast if the application is to be truly scalable
  • The deployment process has to be viable, easily repeatable, and cost effective

 

Once you take all this into account, it might begin to look like you were better off with the monolithic architecture. But while progressing with monoliths simply has you banging against a wall of problems, the challenges with microservice deployment actually have a solution.

It’s Docker

How Docker helps

Microservices that make up an application can be written in different languages, and each service can have multiple instances that need to be deployed. With Docker:

  • Each service instance is hosted in a separate Docker container
  • Containers are self-contained packages holding the exact environment a service needs to run uninterrupted. They can be hosted on any EC2 instance, moved around at will, and still run perfectly.
  • Because they are lightweight, several containers can be hosted on a single virtual machine, making them extremely resource-efficient
  • Going from building a container image, to pushing it to a registry, to launching a Docker container in a production environment can be done in under a minute

All of this put together makes a microservices-based application not only simpler to deploy and manage, but also highly available with minimal downtime.
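As a sketch of what one such self-contained package looks like, here is a Dockerfile for a hypothetical Python-based service; the base image and commands would differ per service:

```dockerfile
# Each service instance ships with the exact runtime environment it needs.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The container runs one service, listening on one port.
EXPOSE 8080
CMD ["python", "service.py"]
```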

A few other advantages with Docker are:

  • Manually setting up a new development environment with the exact configurations of your application can be extremely difficult. But with Docker Compose, replicating the infrastructure is as easy as deploying a configuration file
  • Faster end-to-end testing of the entire application can be automated, with a Jenkins pipeline that tests every single container image that's created, to ensure it's working as it's supposed to.

Ensuring Database Consistency by Event Sourcing Microservices

An optimal microservice architecture is one where each service is designed to be completely independent of the others. That is what keeps the entire application agile, scalable, and fail resistant. A key element of ensuring this independence is that each microservice has its own separate database. This keeps the services loosely coupled and prevents any coordination nightmares between different microservice teams.

However, in any application, there are scenarios where services need to access common data, or information from other services' databases, to fulfil a task.

For example, if a social network application has a 'user profile' service and a 'social post' service, they would need to access each other's data. Sharing a post is handled by the 'social post' service, but the action also has to be reflected in the user's profile, with an increase in the number of posts. And for that to happen, the 'user profile' service needs information from the 'social post' service's database.

So, how do you ensure that both databases remain consistent while still keeping them isolated from each other?

The answer is event sourcing microservices.

How event sourcing works

With event sourcing, the process becomes something like this:

  1. Service 1 completes a certain task and updates its database, like when the ‘social post’ service publishes a new post
  2. This creates a particular event denoting a change in state of the application. For example: 1 new post has been created
  3. This event is consumed by Service 2, like the ‘user profile’ service
  4. Based on this event, the Service 2 database is updated, let’s say with the new number of posts published by this user

In applications, this whole process happens by way of an event table where every single change in the application's state is logged sequentially, as a series of events. Each microservice has its own event stream, and all other services that depend on it can subscribe to that stream. Each service can then consume one or more of the events in the stream and use the information to update its own database accordingly.
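Here is a stripped-down, in-memory sketch of that pattern; a real system would back the event stream with a durable log or stream service rather than a Python list:

```python
from dataclasses import dataclass, field

@dataclass
class EventStream:
    """Append-only log of state changes; other services subscribe to it."""
    events: list = field(default_factory=list)
    subscribers: list = field(default_factory=list)

    def publish(self, event: dict) -> None:
        self.events.append(event)          # log the state change sequentially
        for handler in self.subscribers:
            handler(event)                 # notify each subscribed service

# The 'social post' service publishes; the 'user profile' service consumes.
post_stream = EventStream()
profile_db = {}                            # the profile service's own database

def on_post_event(event: dict) -> None:
    if event["type"] == "PostCreated":
        user = event["user_id"]
        profile_db[user] = profile_db.get(user, 0) + 1   # update post count

post_stream.subscribers.append(on_post_event)
post_stream.publish({"type": "PostCreated", "user_id": "alice", "post_id": 1})
print(profile_db)                          # {'alice': 1}
```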

And that’s how event sourcing can help all service databases maintain consistency at all times, with each state change in the application.

Besides this, there are a few other advantages to event sourcing:

  • The reliable list of events can be used to enable other functionality in the application, like customer notifications tied to certain events, or predictive analytics of application usage patterns based on historical event streams.
  • The event streams also become a reliable audit log of all state changes in the application, giving valuable information for tracing an error's point of origin in case of application failure, or knowing the application state at any given time.
  • The preserved history of all state changes in the application can be fed into any new feature, which will immediately be in sync with the rest of the application

How to achieve event sourcing with AWS

With AWS, you get a set of solutions to easily set up event sourcing for your microservices. Here’s a quick look:

  • Use Amazon Kinesis to set up event streams. Though it comes with certain limitations in terms of customization when compared to Kafka, Amazon Kinesis is extremely reliable for event streams. It’s capable of handling most enterprise application requirements while the limitations ensure that you don’t try to design something that’s very customized but ultimately too costly to maintain.
  • Set up Lambda Kinesis subscriptions to get services to tap into event streams. AWS can invoke Lambda functions to periodically pass records from the event stream to the interested services. It can also keep track of the record last read by a service, and initiate the next batch of records from that point onwards. 
  • Leverage Amazon Kinesis Data Firehose to load event data into data repositories, preferably Amazon S3 buckets. You can ensure that the Kinesis Firehose delivery stream is one of the subscribers to any event stream, and it can route the data to S3. The data can be stored there indefinitely and used whenever you need to play it back.
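Sketching the middle step, a Lambda subscribed to a Kinesis stream receives records in batches, with each payload arriving base64-encoded; the event types and fields here are hypothetical:

```python
import base64
import json

def handler(event, context):
    # AWS invokes this function with a batch of records from the stream;
    # each record's payload is base64-encoded.
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        evt = json.loads(payload)
        # Update this service's own database from the event, e.g.:
        # increment_post_count(evt["user_id"])
        print(evt.get("type"), evt.get("user_id"))
```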

 

And that's how you address two of the most common challenges that enterprises face when shifting from monoliths to future-ready microservices.

Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes. Srijan is also an AWS Advanced Consulting Partner, with AWS certified teams that have the experience of working with a range of AWS products and delivering cost-effective solutions to global enterprises.

Ready to modernize your application architecture with microservices? Just drop us a line and our expert team will be in touch.

Topics: Microservices, Architecture

Understanding Cloud Native Applications - What, Why, How

Posted by Nilanjana on Jul 10, 2019 1:23:00 PM

What are Cloud Native Applications?

Cloud native applications are those designed to optimally leverage the benefits of the cloud computing delivery model. The applications live in the cloud rather than in an on-premise data centre. However, merely existing on the cloud does not make an application 'cloud native'. The term refers to a fundamental change in how applications are developed and deployed, not just where they are hosted.

Cloud native applications are best described by a set of key characteristics that differentiate them from traditional applications:

  • Microservices architecture: They are built as a collection of loosely coupled services that handle different functions of the applications. Using the microservices architecture instead of the monolithic approach is what gives cloud native applications much of their speed and scalability.

  • 12 Factor applications: This refers to a set of 12 design principles laid out by Heroku's founder to help create applications well suited for the cloud. These include defined practices around version control, environment configuration, isolated dependencies, executing apps as stateless resources, and more.
  • Platform-as-a-Service: Because cloud native apps run on microservices, which can number in the 100s for any given application, provisioning new environments for each service in the traditional way is time and resource intensive. Using Platform-as-a-Service (PaaS) simplifies this process and can handle rapid provisioning for numerous microservice instances. This is also key to ensuring the scalability of cloud native applications.
  • API-based: Independent microservices in a cloud native application communicate via API calls. This preserves their loosely coupled nature and keeps the application fast and scalable.
  • Robust: Cloud native applications are robust, with minimal to zero downtime. Once again the microservices architecture, coupled with being on a highly available cloud environment, makes this possible.

Why go for Cloud Native Applications?

The manner in which cloud native applications are developed brings with it a distinct set of advantages for enterprises. These are:

Speed

In a disruption heavy market, the time-to-market for new products and services is extremely crucial to success. Reaching potential customers before your competitors means achieving a faster go-to-market, and that’s possible with cloud native applications. The microservices architecture makes them easy to develop, test and deploy, as compared to monolithic applications. 

These applications also work with smaller but more frequent release cycles, that are easily reversible. So you can constantly introduce new features, functions and bug fixes for your applications, while also having the option of quick rollbacks if needed. 

Finally, with independent microservices, updates to a service need not be integrated with the code of the rest of the services. With the integration time eliminated, new functionalities can be quickly rolled out for these applications.

Scalability

The microservices architecture makes cloud native applications extremely scalable. This is because each microservice handles a specific function within an application. In cases of increase in demand, the application can be scaled by creating more instances of only those services that are needed to handle that demand. And provisioning new instances of a microservice can be done in seconds because the application is based on the PaaS model. 

Besides this, with cloud providers like AWS you get auto-scaling and elastic load balancing solutions that make it easier to dynamically scale resource utilization for cloud native applications.

Cost Efficiency

For monolithic applications, scaling to meet new demand involves creating a new instance of the entire monolith, and that is both a time and resource intensive process. It also means paying for more hardware resources in the cloud, even though the actual demand spike is only for a limited set of features.

With cloud native applications, scaling means increasing instances for only specific microservices. And that saves money as it eliminates the need to consume resources that will not be utilized. Also, it’s easy to turn off your consumption of extra resources once the spike in demand subsides.

There are also secondary cost savings generated with cloud native apps, in the form of multitenancy. Several different microservices can dynamically share platform resources leading to reduced expenditure.

Availability

Cloud native applications are extremely available and that’s also because of their microservices architecture. This works at two levels:

  • If one service goes down, the rest of the application continues to be available. This is because the application is designed with failsafes, and can always provision another working instance of the failed microservice.
  • The containerized nature of microservices means they are packaged with their runtime environment. This makes them self-sufficient and able to work uninterrupted no matter where they are hosted. So if an entire availability region of your cloud goes down, the application can simply be moved to a different region, and it will continue to be available, with your users none the wiser.

How to get started with Cloud Native Applications?

Building cloud native applications involves a large scale change in how applications are developed and deployed within the organization. So getting started with it will require some preparation on the part of the enterprise. 

Some of the key aspects to consider would be:

Create your enterprise strategy

The shift to cloud native applications is being considered because it serves specific business goals - creating new products and services, gaining new market share, or increasing revenues. And these business goals are what should be kept front and center while creating your strategy for going cloud native.

This will also help you avoid the trap of going down the technology-first route. Yes, cloud native applications will involve the use of new technology - languages, frameworks, platforms - by your team. But deciding to first lock down the technology aspects can be disastrous. That’s because the technology you choose should be able to serve your business goals. And if you haven’t figured those out first, the initiative will not be successful or sustainable.

So a good order of priority here is identifying:

  • Business goals to achieve with going cloud native
  • Right teams that can lead this, both in-house and as partners/vendors
  • Technology solutions that best suit your requirements

Transition away from the monolithic application

If you are working with a fairly complex monolithic application that has been put together over time, resist the temptation of a simple lift-and-shift to the cloud. Because of the tight coupling and the myriad dependencies that have developed over the years, it’s unlikely the monolith will run well on the cloud. So you need to plan for breaking down the monolith into constituent services that can be shifted to the cloud.

Moving towards a microservices architecture can seem daunting at first because you are dealing with 100s of different services instead of a single one. However, with practices like event sourcing microservices, deployment with docker, and a host of other design guidelines of building an optimal microservices architecture, the process can be well understood and executed. 

CI/CD approach

Adopting a continuous integration/continuous delivery (CI/CD) approach is key to leveraging the speed benefits of cloud native applications. A system for rapidly developing and testing new features and pushing them out for use, along with breaking down traditional software development team silos, is crucial for cloud native applications. Frequent, well-tested releases keep your cloud native application updated and allow for continuous improvement.

So that was a quick look at understanding cloud native applications, their advantages, and where to get started. Moving forward, you would also need to identify your cloud platform of choice, and our take on building cloud native applications with AWS might be helpful.

Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes. Srijan is also an AWS Advanced Consulting Partner, with AWS certified teams that have the experience of working with a range of AWS products and delivering cost-effective solutions to global enterprises.

Ready to modernize your application architecture with microservices? Just drop us a line and our expert team will be in touch.

Topics: Microservices, Cloud, Architecture

Why enterprise applications need a Service Mesh Architecture

Posted by Ashish Thakur on Mar 15, 2019 4:05:00 PM

As enterprises increasingly shift from a monolithic to a microservices architecture, IT teams are faced with the problem of effectively orchestrating these microservices. When a single application is created with a few different containerized services, communication between them can be easily managed. However, enterprise applications with 100s or 1000s of different microservices need a better solution for load balancing, monitoring, traffic routing and security.

Enter the service mesh architecture.

Service Mesh Architecture

A service mesh is an infrastructure layer that manages service-to-service communication, and provides a way to dynamically route, monitor, and secure microservice-based applications.

Previously, the logic governing inter-service communication was coded into each microservice. But that’s not a feasible option when dealing with a large volume of microservices, or scaling applications by adding new services.

The solution is to have proxies that manage the service-to-service communication, running beside each microservice rather than within it. These are also known as 'sidecar' proxies, and together they form the abstracted mesh architecture that manages the microservices communication.

Why is this Needed?

The objective with microservices was to build applications as a collection of independent services that can essentially fail without causing system-wide outage. In practice however, most microservice-based applications began operating with direct communication between services. As the application complexity and number of microservices increased, this created greater interdependence between services, thus lowering agility and system resilience.

And hence, complex enterprise applications with a large number of microservices need a service mesh architecture.

Isn't that what APIs did?

Yes, APIs perform a similar function to a service mesh, i.e. governing the flow of information. The key difference lies in what kind of communication they govern.

An API gateway manages the communication between an application and other applications within and outside the enterprise architecture. It provides a single entry point into an application for requests from all external clients, and handles user authentication, routing, monitoring and error handling. It also abstracts the underlying complexity of an application, with its component microservices, from external clients.

A service mesh architecture on the other hand manages the communication between the microservices within an application.

All the proxy sidecars that make up the service mesh are listed in a service registry. Each microservice that wants to request information (the client microservice) has its proxy sidecar look up the registry to find the available proxies associated with the target microservice. It then uses the defined load-balancing algorithm to direct its request to the right proxy.
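In effect, each sidecar performs a lookup-and-balance step like the toy sketch below; real meshes delegate this to dedicated proxies such as Envoy, and the registry entries here are made up:

```python
import itertools

# Toy service registry: target service name -> available proxy addresses.
registry = {
    "user-profile": ["10.0.0.5:15001", "10.0.0.6:15001", "10.0.0.7:15001"],
}

# Round-robin is one simple load-balancing algorithm a sidecar might use.
cursors = {name: itertools.cycle(addrs) for name, addrs in registry.items()}

def route(target_service: str) -> str:
    """Pick the next available proxy for the target microservice."""
    return next(cursors[target_service])

print(route("user-profile"))   # 10.0.0.5:15001
print(route("user-profile"))   # 10.0.0.6:15001
```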

What problems does a service mesh solve?

The service mesh primarily resolves concerns around increasing interdependence that creeps into microservice-based applications as they scale in complexity. Here’s how:

Deploying multiple microservice versions simultaneously

Canary releases, or introducing a new version of a microservice to a select number or type of requests, are a standard way to ease in new feature additions. However, effectively routing requests between old and new versions can be difficult when the logic is coded within each service, because services tend to have interdependencies. Similarly, A/B testing microservice versions requires dynamic routing capabilities that are best delivered by a service mesh.

The service mesh architecture has the routing rules, and can make the decision to direct source service queries to the right version of the target services. This decoupled communication layer reduces the amount of code written for each microservice, while still better managing inter-service routing logic.

Detailed visibility into inter-service communication

In a complex microservices architecture, it can be difficult to pinpoint the exact location of a fault. But once all communication is routed through a service mesh, there is a way to gather logs and performance metrics on all aspects of the microservices. This makes it easier to generate detailed reports and trace the point of failure.

The logs from the service mesh can also be used to create standardized benchmarks for the application. For example, how long to wait before retrying a service that’s failed. Once these rules are coded into the service mesh, microservices operation becomes optimized as the system doesn’t get overloaded with unnecessary pings to a failed downstream service before the requisite time-out period.

Microservice testing

Testing each microservice in isolation is critical to ensure application resilience. There are also instances where you need to test service behaviour when faults are introduced in downstream services. And that’s difficult and risky to do if we are forcing those faults to actually occur in the services.

The service mesh is the perfect way to simulate these faults in the systems and study the associated response.

Fault Tolerance

Resilience is a key reason why the microservices architecture is preferred, and elements like circuit breakers, load balancing, rate limiting and timeouts are what make this possible. These rules are usually coded into each microservice, increasing complexity in the system, besides being time-consuming to create.

Once again, the service mesh can be used to improve fault tolerance by taking these functionalities out of the microservices and adding them to the mesh. These can be implemented via a set of rules that will govern all microservices within the application, without actually cluttering the microservice implementation.

So that was a quick rundown on the service mesh architecture and why it's becoming a crucial infrastructure requirement for enterprise applications. Upcoming blogs will explore service mesh implementation in depth, and evaluate various tools like Istio, Linkerd and more for service mesh architecture implementation.

Srijan’s teams have expertise in decoupling monolithic systems with elegant single-responsibility microservices, as well as testing, managing and scaling a microservices architecture.

Looking to modernize legacy systems? Drop us a line and our enterprise architecture experts will be in touch.

 

Topics: Microservices, Architecture, Enterprises

How to design an optimal Microservice Architecture

Posted by Nilanjana on Nov 27, 2018 12:34:00 PM

A microservice architecture has its set of advantages, and we’ve also talked a bit about when it is the best choice for your business. What’s next is understanding how best to create one for your enterprise applications. While the exact architecture will vary, there are certain best practices that help design an effective and optimal microservice architecture.

Domain Driven Design

The entire point of microservices is to break up a unified architecture into smaller, more manageable parts. But this break up has to be done in a manner that makes sense to all stakeholders.

A few common approaches to defining the scope of individual microservices:

  • Create microservices that correspond to the different operational teams in the application development process. For example, one team could be working on user authentication, another on data collection, and each of them would be responsible for creating a set of microservices that achieve that particular task.

  • Create microservices corresponding to specific functionalities. For example: an analytics application can have a chatbot functionality, a visual dashboard, a data analytics functionality and more. Each of these can be created as separate pieces that converse via APIs.

The idea is to create an independent set of services that offer a focussed value add. The challenge is to avoid overstuffing too many functionalities into one microservice. It’s usually advised to keep the code simple enough to be easily re-deployed when required.

Independent Microservices

The independence of microservices is the key to their effectiveness. The fact that they are loosely coupled and work without complex interdependence makes sure that entire applications don’t go down due to single point failures. Hence, there are a few key areas where independence has to be ensured:

  • Independent teams: Each microservice should ideally have its own dedicated team comprising a product manager and a DevOps team. One team taking each service from development to deployment helps in frequent releases and maximum uptime.

  • Automate for independent deployment: Automating build and release cycles, and making sure each microservice can be deployed independent of others is critical for well designed applications. That way they can be up and running on any environment.

  • Separate storage: Each microservice should own its specific database, and any other services requiring that data should access it through APIs only. Sharing databases might look like a convenient option in the short run. But as microservices scale, this sharing causes them to become coupled with each other, defeating the purpose.

  • Isolate failure: To ensure that applications continue to run even as one service fails, a microservice architecture needs to isolate failure. A common way to do that is to establish circuit breakers in the application. When a service fails beyond a set number of times, the circuit breaker trips and immediately fails all requests to that service for a given time period. The functionality offered by that service remains unavailable for that time period, even as the rest of the application keeps working. After the timeout, the circuit breaker allows a few requests to pass, and if they succeed, reverts back to normal operation (a minimal sketch of this pattern follows this list).
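A bare-bones sketch of that trip-and-timeout behaviour; the thresholds are illustrative, and real breakers add a half-open state and per-call timeouts:

```python
import time

class CircuitBreaker:
    """Fail fast after repeated failures, then retry after a timeout."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # timeout elapsed: allow requests again
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()   # trip the breaker
            raise
        self.failures = 0
        return result
```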

Similar to this, other design patterns such as asynchronous communication, and event-driven architecture can be used for failure isolation.

Immutable Infrastructure

Immutable infrastructure divides services into data and 'everything else', and the 'everything else' part is replaced at each release. Rather than updating the current version in place, a new version of the microservice is created. The new version is tested and fine-tuned while the older version continues to serve the application. Only when the new version is stable should it replace the previous one.

Standards Creation

As organizations rely more on microservices architecture, with different teams working on different services, processes and practices can start to vary across teams. Each team could start doing development, deployment, and error handling differently. This could lead to a lot of code being repeated by different teams, impacting efficiency and turnaround times.

Hence, it’s advisable to create organization-wide standards that teams can adhere to. Processes for microservice creation and deployment, as well as their corresponding APIs should be well documented. This also allows different teams to understand other microservice APIs being used, and how to best use them.

Best practices shared, our next blog will concentrate on event sourcing microservices and handling deployment with docker.

Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes.

Looking to drive your digital transformation with microservices? Just drop us a line and our expert team will be in touch.

Topics: Microservices, Architecture

When Is The Right Time For Microservices

Posted by Nilanjana Dey on Oct 10, 2018 12:11:00 PM

Enterprises are increasingly adopting microservices architecture to create applications that are easily scalable. In a previous post, we covered the basics of microservices and a look at how they compare with a monolithic architecture. We also discussed some quick pointers of why a microservices architecture is a better choice.

However, the superiority of microservices is heavily dependent upon the application that it is meant for.  

For example: if your application is a relatively simple one, or is in the early stages of development, a monolithic architecture is comparatively easier to design. It just requires a basic understanding of exactly what the application is meant to do, and you can create it as a tightly-coupled architecture.

On the other hand, a microservices architecture requires greater developmental and design awareness, to break down an application concept into its constituent parts. And that could lead you to invest more time and resources than is actually required on a simpler or smaller application.

So, when to choose microservices?

While that’s just one example, there are two factors to consider when it comes to choosing between monolithic and microservices architecture - Application & Business Size and Resource Availability.

Application and Business Size

Monolithic architecture is well suited for small businesses with applications that have a limited scope. The application handles fewer requests, has fewer points of failure, and often uses a single database, and hence can be managed with fewer resources.

As the business grows in size, both in customer base and code base, a monolithic architecture becomes inefficient. With heavier loads on servers, a monolithic architecture would take too long to process several requests and render output within a reasonable amount of time.

With a rising requirement for additional functionalities and features, branching out into a microservice architecture makes sense because:

  • You gain the flexibility to develop and deploy each service individually, without affecting other services.
  • Each microservice can be simultaneously developed by different teams, and hence the application can be built faster
  • Teams can make frequent updates to the application without the risk of altering other functionalities, unlike a monolithic architecture where every change can impact everything else.


Resource availability

The availability of resources is an important deciding factor for monolithic vs. microservices architecture.

Small team

If you have access to only a small team, monolithic applications are the right choice. A monolithic architecture gives you fewer points of failure, making issues easier for a small team to identify and resolve. However, each member of the team has to have a sound understanding of the overall context of the program and the purpose of each module. By the nature of tight coupling, any and all edits or additions to the program will affect the overall functioning of the architecture.

Larger teams

Opting for a microservices architecture is wise only if you have greater resources at your disposal. As each service in the application is refactored into an independent entity, you will need different teams to develop, test, deploy, update, and maintain each service. So even though microservices introduce multiple points of failure in the application, failures do not impact the whole application, and having service-specific teams makes it easier to isolate and fix the exact cause of failure.

Onboarding new resources

If your application development is outsourced to a third party, or is a long-term project, chances are you will onboard new developers frequently. With a monolithic architecture, each new member has to understand the complete application before they can even begin to work on it.

However, with a microservice architecture, adding new developers to the team is much easier. Because each microservice is self-contained, they don’t need to have an in-depth knowledge of the entire application architecture.

Opted for a microservices architecture. What next?

Post an evaluation of all aspects, let's say you do decide to adopt a microservices architecture for your next application. How do you make sure you achieve successful application development with it?

There are a set of best practices to follow, both at the level of application design and organizational design.

Organizational best practices

Conventional monolithic architectures have organizational structures in which the development team is separate from the testing and security team, which in turn is separate from the deployment and maintenance teams. And each of these teams works on the entire application.

However, with the adoption of microservices, a few changes are required:

Individual teams: Each microservice is independent and dedicated to achieving a particular goal. Each needs to go through all the stages of development, testing, deployment and maintenance. Hence, each individual microservice should have its own dedicated team, comprising people assigned to all of the above stages.

Autonomous teams: Since the microservices are meant to be autonomous, the teams themselves should also be autonomous to a certain degree. This is what allows different teams to simultaneously build different microservices and bring in faster delivery times, without being held up by dependencies on other teams. While communication is essential between all the teams, it should ideally be a function of the degree of coupling between their services.

Once these organizational best practices are met, there’s another set of design practices that can help you create an optimal microservices architecture. And that’s what we will be covering in our next installment of this post. We will also be taking a deep dive into understanding event sourcing microservices and handling deployment with docker.

Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes.

Looking to drive your digital transformation with microservices? Just drop us a line and our expert team will be in touch.

Topics: Drupal, Microservices

Break It Down: The Advantages of Microservices

Posted by Nilanjana on Aug 29, 2018 2:01:00 PM

Enterprises using monolithic systems to support large applications find it increasingly difficult to respond to evolving business priorities and rising customer expectations. Every functionality is built together as one single block, and it's almost impossible to change or update a portion of it without overhauling the complete monolith. And that is why enterprise IT teams are rapidly exploring the advantages of microservices.

Legacy systems are inflexible, hard to scale, and unable to keep pace with emerging technologies. One large code base makes it difficult to update or modify the application for a specific feature request. This steepens the learning curve, lengthens deployment cycles and project delivery timelines, slows down time-to-market, and complicates the delivery of new capabilities.

If your enterprise systems are based on a monolithic architecture, here's what you need to consider:

  • How to achieve faster go-to-market with new applications, and new updates on existing applications, while minimizing downtime
  • How to optimize scalable application development, even as you deploy multiple teams across diverse platforms

The answer is microservice architecture. Let’s take a closer look.

What is Microservice Architecture?

The microservice architecture essentially breaks down a complex application into its component functionalities, and creates each of them as an independent, self-contained service.


With microservices, an application is built as a suite of small services, each running in its own process and communicating through lightweight mechanisms, often an HTTP resource API. Each of these microservices focuses on completing one task only (representing a small business capability) and does that task really well.

These microservices can be developed in any programming language, and they have a bounded context, which means they don't need to know anything about the underlying implementation or architecture of other microservices.
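For a feel of how small such a service can be, here is a toy single-purpose service exposing one HTTP endpoint, using only the Python standard library; the endpoint and port are made up:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    """One microservice, one job: respond to GET /greeting."""

    def do_GET(self):
        if self.path == "/greeting":
            body = json.dumps({"message": "hello"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GreetingService).serve_forever()
```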

Advantages of Microservices Over a Monolithic Architecture

The microservice architecture has several advantages over a monolithic architecture, specifically focused on the code base, delivery and deployment.

 

|   | Monolithic Architecture | Microservice Architecture |
| --- | --- | --- |
| Code | A single code base for the entire application | Each microservice has its own code base |
| Understandability | Often confusing and hard to maintain | Much better readability and much easier to maintain |
| Deployment | Complex deployments with maintenance windows and scheduled downtimes | Simple deployment, as each microservice can be deployed individually with minimal or zero downtime |
| Language | Entire application is developed in one programming language | Each microservice can be developed in a different programming language |
| Scaling | Requires you to scale the entire application even though bottlenecks are localized | Enables you to scale bottlenecked services without scaling the entire application |
| Architecture | Tightly coupled, with interdependent components that cannot be deployed without following a defined procedure | Loosely coupled, demanding self-sufficient services that avoid hard-coded references to other services |
| Delivery | Applications are developed, and updates are made on a schedule (quarterly or annually) | Microservices are delivered continuously; ideal for teams with apps that have constantly changing needs |

Check out our webinar on transitioning from a monolithic to microservice architecture.

The Business Benefits

Microservice architecture is helping enterprises deliver richer experiences for customers as well as employees.

  • Responsive - Companies that adopt a microservice architecture can continuously deliver capabilities according to customers' changing needs.
  • Continuous Delivery - Microservices built on the principles of Agile and DevOps help software teams run parallel projects while iterating quickly on discrete capabilities.
  • Scalability and Reliability - A successful microservice architecture prevents system-wide failure. Repeatable automation, scalable services, and resilient design patterns keep the system running.

 

While microservices have become an increasingly popular service architecture, they may not necessarily be the best fit for your project. The monolithic architecture still works better for simple, lightweight applications. However, enterprise digital transformations typically demand complex, evolving applications, and for those, the microservice architecture is undoubtedly a better choice.

Srijan modernizes enterprise architecture with microservices & API management for faster turnaround, scalability, and resilience. Drop us a line to learn a few success stories.

Topics: Microservices, Architecture

Making Docker Compose work with Drupal

Posted by Nilanjana on Jul 21, 2016 1:34:00 PM

My last post talked about how Docker microcontainers speed up the software development workflow. Now it's time to dive into how all this applies to Drupal. 

I created a collection of Docker configuration files and scripts to make it easy to run Drupal. If you want to try it out, follow the steps in the README file.

The repository is designed using the microcontainers concept, so each Drupal site ends up with 3 containers of its own (Apache, MySQL and Drush), linked together to run our application. If you want to serve a new site, you need to create a separate set of containers.

In theory you could re-use containers for different web applications. In practice, however, Docker containers are resource-cheap and easy to spin up. So it's less work to run separate containers for separate applications than it is to configure each application to play nice with the other applications running in the same container (e.g. configuring VirtualHosts and port mappings). Or at least this is what my colleague M Parker believes.

Plus, configuring applications to play nice with each other in the same container kind of violates the “create once, run anywhere” nature of Docker.

How it works


My repository uses the docker-compose program. Docker Compose is controlled by the docker-compose.yml file, which tells Docker which containers to start, how to network them together so they serve Drupal, and how to connect them to the host machine. That last part means serving the Drupal repository filesystem and mapping a port on the host machine to one of the ports in one of the containers.

A useful tip to remember is that docker-compose ps will tell you the port mappings. This is useful if you don't map them explicitly to ports on the host machine.


Networking


If you've ever tried setting up a bunch of containers manually (without docker-compose), it is worth noting (and not very well documented in the Docker docs, unfortunately) that you don’t need to explicitly map port 3306:3306 for the mysql container, because docker-compose sets up a miniature network for containers run from the same docker-compose.yml. It also sets up hostnames between each container in the same docker-compose.yml. This means that the web container can refer to the mysql-server machine with the hostname mysql-server, and, even if you implicitly map 3306 to some random port on the host machine, web can talk to mysql-server on port 3306.

Note in this case that the container running MySQL is named db, so, when you're installing Drupal, on step 4 ("Database configuration") of the Drupal 7 install script, you have to expand "Advanced options" and change "Database host" from localhost to db!

Filesystem


It is possible to put the Drupal filesystem into a container (which you might want to do if you wanted to deploy a container to a public server). However, it doesn't really make sense for development, because you're changing the files quite frequently.

To get around this for a development environment, we mount the current folder (often referred to as '.') to /var/www/html in the container, which matches where the current directory is mounted in all three containers. This is done with the 'volumes' directive in the docker-compose.yml file. The 'working_dir' directive says "when you run the Drush command in the Drush container, pretend it's running from /var/www/html", which is the equivalent of running 'cd /var/www/html' before you run a drush command.

So when you run a Drush command in the Drush container, it sees that it's currently in a Drupal directory and proceeds to load the database connection information from sites/default/settings.php, which tells it how to connect to the MySQL server on the `db` container with the correct credentials. (Recall that the `links` directive makes sure the `drush` container can access the `db` container, so it can connect to it on port 3306.)
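Putting those directives together, a simplified docker-compose.yml along the lines described here might look like the sketch below (illustrative, not the repository's exact file):

```yaml
version: '2'

services:
  web:                           # Apache + PHP serving Drupal
    image: drupal:7-apache
    ports:
      - "8080:80"                # map a host port to the container's port 80
    volumes:
      - .:/var/www/html          # mount the Drupal codebase from the host
    links:
      - db

  db:                            # MySQL; Drupal's "Database host" is simply: db
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: drupal

  drush:                         # runs a single drush command, then exits
    image: mparker17/mush
    volumes:
      - .:/var/www/html
    working_dir: /var/www/html   # run drush as if from the Drupal root
    links:
      - db
```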

The Drush container


The drush container is a bit special because it runs a single command, and is re-created every time a Drush command is used.

If you look at step 9 of my https://github.com/lhuria94/docker-drupal-lamp you’ll see it says…

  • # Run Drush commands with:
  • # USER_ID=$(id -u) docker-compose run --rm drush $rest_of_drush_command

… i.e. run `docker-compose run --rm drush`, which starts the container named `drush` and passes it `$rest_of_drush_command`.



If you look at the Dockerfile for https://hub.docker.com/r/mparker17/mush/~/dockerfile/, you'll see it contains a line saying 'ENTRYPOINT ["drush"]'. ENTRYPOINT is a variant of the CMD instruction which passes all the rest of the 'docker run' parameters to the command specified by the ENTRYPOINT line.

So what happens when you run that 'docker-compose run' line is that it creates a new container from the 'mparker17/mush' image, with all the configuration from the 'docker-compose.yml' file. When that container runs, it automatically runs the 'drush' command, and docker-compose passes '$rest_of_drush_command' to it. When the 'drush' command finishes, the container stops, and the '--rm' flag we specified deletes the container afterwards.

Running `USER_ID=$(id -u)` before a command sets an environment variable that persists only for that command; i.e. when `docker-compose` runs, an environment variable $USER_ID exists, but it goes away when docker-compose is finished running. You can leave out the `USER_ID=$(id -u)` if you add that variable to your shell's configuration. Essentially, this environment variable sets the user account that the Drush command runs as. If you don't specify the user account, Docker defaults to root.

The main reason I do this is so that if I ask Drush to make changes to the filesystem (e.g. download a module, run `drush make`, etc.), the files are owned by me, not root, so I don't have to go around changing ownership permissions after running the drush command.

It may only be necessary on Windows/Macintosh, because the virtual machine that Docker runs in on Win/Mac has different user IDs. I think if you run a Docker command from a Linux machine, your user ID is already correct. But because a Docker command on Mac/Win runs with your Mac/Win user ID (e.g. 501) and gets passed to the Docker VM's 'docker' user (which runs as user 1000), some problems arise unless you're explicit about it.

Acknowledgements


Lastly, I would like to thank Matt Parker, who has been mentoring me since I began setting up Docker, showing me better ways to do it. He also recommends reading the Docker book if you want to explore this further.

Topics: Drupal, Microservices, Architecture

Docker: A microcontainer for faster software workflows

Posted by Nilanjana on Jul 11, 2016 2:10:00 PM

Docker, a container-based technology which I just came across, is great for setting up environments. It was first introduced to the world by Solomon Hykes, founder and CEO of dotCloud, at the Python Developers Conference in Santa Clara, California, in March 2013. The project was quickly open-sourced and made available on GitHub, where anyone can download and contribute to it.

Containers vs. Virtual Machines

You might be wondering, "What is the difference between containers (like Docker) and virtual machines?"

Well, virtual machines (VMs) work by creating a virtual copy of a computer's hardware and running a full operating system on that virtual hardware. Each new VM you create results in a new copy of that virtual hardware, which is computationally expensive. Many people use VMs because they let you run an application in a separate environment with its own versions of software and settings, different from the host machine.

Container technologies like Docker, on the other hand, isolate the container's environment, software, and settings in a sandbox, but all sandboxes share the same operating system kernel and hardware as the host computer. Each new container results only in a new sandbox. This lets us pack a lot more applications into a single physical server than virtual machines do.

Docker containers are isolated enough that the root process in a container cannot see the host machine's processes or filesystem. However, it may still be able to make certain system calls to the kernel that a regular user could not, because the kernel is shared with the host machine. This is also why Docker containers are not virtual machines, and why they're a lot faster.
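You can see both properties from the command line. A quick sketch using the public `alpine` image:

```sh
# Containers share the host's kernel: this prints the host kernel version
# (on Mac/Windows it prints the Docker VM's kernel instead).
docker run --rm alpine uname -r

# ...but processes are sandboxed: inside the container, only the
# container's own processes are visible.
docker run --rm alpine ps
```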

Note, however, that Docker relies on a technology which is only available in the Linux kernel. When you run Docker on a Windows or Macintosh host machine, Docker and all its containers run in a virtual machine.

That said, there are two projects trying to bring Docker-style containers natively to OS X: Dlite and Xhyve. But last I heard, these projects were still very experimental, so consider yourself warned.

When you are done with a container on a Mac host machine, it's probably good to stop the containers, because they run in a virtual machine and that carries a lot of overhead. On a Linux host machine there's no need to stop them, because they don't create (much) additional overhead (no more than, say, MAMP).
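With docker-compose that's one command each way (a general sketch, not tied to any particular project):

```sh
# Stop the project's containers without deleting them...
docker-compose stop

# ...then start them again later; no rebuild required.
docker-compose start
```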

Docker is a tool that promises to scale into any environment, streamlining the workflow and responsiveness of agile software organizations.

Docker’s Architecture

This is a diagram explaining the basic client-server architecture which Docker uses.

Source: http://www.docker.com

Important Terminology

  • Docker daemon: The Docker engine, which runs on the host machine, as shown in the image above.
  • Docker client: The Docker CLI, used to interact with the daemon.

Workflow components

  • Docker image: A read-only disk image in which the environment and your application reside.
  • Docker container: A read/writeable instance of an image, which you can start, stop, move, and delete.
  • Docker registry: A public or private repository to store images.
  • Dockerfile: A set of instructions for building a single image. You can think of a Dockerfile as a kind of Vagrantfile, or a single Chef cookbook, or an Ansible script, or a Puppet script. (The sketch after this list maps each term to a CLI command.)
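To tie the terminology together, here is a rough mapping to the CLI (the image and repository names are hypothetical):

```sh
# Dockerfile -> image: build an image from the Dockerfile in this directory.
docker build -t myapp .

# Image -> container: create and start a writeable instance of the image.
docker run -d --name myapp-instance myapp

# Image <-> registry: tag the image for a repository, then share it.
docker tag myapp myuser/myapp
docker push myuser/myapp
docker pull myuser/myapp
```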

Microservices


Because Docker allows you to run so many containers at the same time, it has popularized the idea of microservices: a collection of containers, each of which contains a single program, all of which work together to run a complex application (e.g., Drupal).

Taking Drupal as an example, every Drupal site has at least two dependencies: an HTTP server (Apache, Nginx, etc.) running PHP, and MySQL. The microservices approach packages Apache+PHP separately from MySQL, as opposed to most Drupal virtual machine images, which bundle them together into the same VM. For more complicated setups you could add another container for Solr, another container for LDAP, and so on; the sketch below shows the idea.
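As a sketch of that separation on the command line (the image tags and container names are illustrative, not a tested setup):

```sh
# One container per dependency:
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --link db:mysql -p 8080:80 drupal

# More complicated setups just add more containers:
docker run -d --name solr solr
```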

For me, the main advantage of using microservices is that it’s easier to update or swap one dependency of an application without affecting the rest of it. Another way of looking at this is that microcontainers make it easier to modify one piece without waiting a long time for the virtual machine to rebuild.

When I was using a virtual machine on a particularly complex project, if I needed to change a setting, I had to make that change in the Puppet config, then run `vagrant destroy && vagrant up` and wait two hours for it to tell me that the new configuration wasn't compatible with some other piece of the system. At that point I had to repeat the two-hour process, which wasted a lot of time.

If I had been using Docker (properly), I could have just changed the setting for that one program, rebuilt that program's container (about 5 seconds), and not had to worry that one piece of the machine needed at least Java 6 while another piece could not work without Java 5.
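In Docker terms, that targeted rebuild looks something like this (again with hypothetical names and paths):

```sh
# Rebuild only the image whose configuration changed (seconds, not hours)...
docker build -t myapp-web ./web

# ...then replace just that one container; db, solr, etc. keep running.
docker stop web && docker rm web
docker run -d --name web --link db:mysql myapp-web
```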

Now that you know the possibilities with Docker, watch this space to find out how all this applies to Drupal.

Topics: Drupal, Microservices

Microservice architecture is Not for You

Posted by Nilanjana on Mar 10, 2016 4:28:00 PM

Lately, I have been working on a Rails + Sinatra project based on a microservices architecture. The architecture was chosen at the very start of the project, and I think this is where things started getting painful. I am not really against microservices architecture, but I do find it very overwhelming and cumbersome.


I felt we could have built an MVP in a single Rails app first, and then added more features to it as components or modules. Components can have their own controllers, views, models, etc., and can even talk to their own databases. I was watching a video on component-based architectures in Ruby and Rails by Stephan Hagemann (http://youtu.be/-54SDanDC00), and it seemed simpler to start your application as a collection of components and then slowly move to a service-oriented architecture (SOA). Your services can be component-based too, but at that point your components would be small.


You can design each service as a suite of small services; as you scale your application higher and higher, the whole system becomes a suite of small services, and you have your microservices architecture in place.


My take on microservices architecture is: don't start out with it. It can negate all the benefits Rails offers for speedy development and quick prototyping. Such an architecture also requires good developers, and not everyone may be comfortable doing things this way. As an analogy, think of how many people can speak and write a particular language, like English, yet not everyone is comfortable writing poetry in it.


So, how would I approach my next application? I would probably design different components according to business needs and create a well-organized monolith first; then separate components into services if need be; and then carve small services out of a big service if there were any need to do so.

If you are a PHP or Java developer, you might want to figure out how to design your application with a component-based architecture and slowly take it from there.

Topics: Microservices, Architecture, Enterprises
