The microservices architecture, while the right choice for enterprises looking to build scalable, future-ready applications, also comes with its own set of challenges. Moving from a monolithic application to a microservices-based architecture means dealing with a set of independent services that can range from ten to hundreds, depending on the complexity of the application. And managing this distributed system is naturally more nuanced than managing an application packaged as a single unit.
The key challenges with a microservices architecture are:
- Complexity in developing and deploying the architecture with all its moving parts
- Complex testing owing to inter-service dependencies
- Managing inter-service communication
- Programming each service to respond to failures in other services
- Ensuring database consistency even though each service ideally uses its own database
- Complexity in implementing functions that span multiple services
This blog concentrates on solutions to two of the challenges listed above, namely:
- Complexity of deploying a microservices architecture
- Ensuring database consistency
We will also take a look at how to do this using AWS services.
Docker for Deploying Microservices
Deploying and orchestrating a host of different microservices to cohesively deliver an application experience is extremely complex. Add to that the fact that there are a few specific prerequisites for deploying these services:
- Services must be deployed independently of, and isolated from, each other
- The deployment process must be fast if the application is to be truly scalable
- The deployment process has to be viable, easily repeatable, and cost-effective
Once you take all this into account, it might begin to look like you were better off with the monolithic architecture. But while sticking with monoliths simply has you banging against a wall of problems, the challenges of microservice deployment actually have a solution.
How Docker helps
Microservices that make up an application can be written in different languages, and each service can have multiple instances that need to be deployed. With Docker:
- Each service instance is hosted in a separate Docker container
- Containers are self-contained packages that carry the exact environment the service needs to run uninterrupted. They can be hosted on any EC2 instance, moved around at will, and still run exactly the same.
- Because they are lightweight, several containers can be hosted on a single virtual machine, making them extremely resource-efficient
- The entire flow, from building a container image and pushing it to a registry to launching a container in a production environment, can be done in under a minute
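As an illustration, the image for a single service might be described by a Dockerfile along these lines (the base image, file names, and entry point are all hypothetical):

```dockerfile
# Hypothetical Dockerfile for one Python-based microservice.
# The image bakes in the exact runtime environment the service needs.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # install the service's exact dependencies
COPY . .
CMD ["python", "app.py"]              # hypothetical service entry point
```

Building this image once (`docker build`) produces an artifact that runs identically on any host with a Docker runtime.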
All of this put together makes a microservices-based application not only simpler to deploy and manage, but also highly available with minimal downtime.
A few other advantages of Docker are:
- Manually setting up a new development environment with the exact configuration of your application can be extremely difficult. But with Docker Compose, replicating the infrastructure is as easy as deploying a configuration file
- Faster end-to-end testing of the entire application can be automated, with a Jenkins pipeline that tests every container image that's created to ensure it's working as it should
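A minimal sketch of such a Compose file, for two hypothetical services (names, ports, and images are illustrative):

```yaml
# docker-compose.yml — hypothetical two-service setup
version: "3.8"
services:
  social-post:
    build: ./social-post        # each service has its own Dockerfile
    ports:
      - "8001:8000"
  user-profile:
    build: ./user-profile
    ports:
      - "8002:8000"
    depends_on:
      - profile-db
  profile-db:
    image: postgres:15          # each service keeps its own database
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker compose up` then recreates the whole environment on any developer's machine.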
Ensuring Database Consistency by Event Sourcing Microservices
An optimal microservices architecture is one where each service is designed to be completely independent of the others. That is what keeps the entire application agile, scalable, and failure-resistant. A key element of ensuring this independence is giving each microservice its own separate database. This keeps the services loosely coupled and prevents coordination nightmares between different microservice teams.
However, in any application, there are scenarios where services need to access common data, or information from other services' databases, to fulfil a task.
For example, if a social network application has a 'user profile' service and a 'social post' service, they would need access to each other's data. Sharing a post is handled by the 'social post' service, but the action also has to be reflected in the user's profile, with an increase in the number of posts. And for that to happen, the 'user profile' service needs access to the 'social post' service's database.
So, how do you ensure that both databases remain consistent while still keeping them isolated from each other?
The answer is event sourcing microservices.
How event sourcing works
With event sourcing, the process becomes something like this:
- Service 1 completes a certain task and updates its database, as when the 'social post' service publishes a new post
- This creates an event denoting a change in the state of the application, for example: one new post has been created
- This event is consumed by Service 2, such as the 'user profile' service
- Based on this event, the Service 2 database is updated, say with the new number of posts published by this user
In applications, this whole process happens by way of creating an event table where every single change in the state of the application is logged sequentially, as a series of events. Each microservice has its own event stream, and all other services that depend on it can subscribe to this stream. Each service can then consume one or more of the events in the stream and use the information to update its own database accordingly.
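The flow above can be sketched in a few lines of Python. This is a deliberately minimal, in-memory model, with the blog's hypothetical 'social post' and 'user profile' services standing in for real ones; a production system would use a durable event store or stream rather than a list:

```python
from collections import defaultdict

class EventStream:
    """An append-only event log that subscribers can consume."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        self.events.append(event)          # log the state change...
        for handler in self.subscribers:   # ...then notify all consumers
            handler(event)

class SocialPostService:
    """Owns its own database; emits an event on every state change."""
    def __init__(self, stream):
        self.db = []                       # this service's own database
        self.stream = stream

    def create_post(self, user_id, text):
        self.db.append({"user": user_id, "text": text})
        self.stream.publish({"type": "PostCreated", "user": user_id})

class UserProfileService:
    """Keeps its own database consistent by consuming events."""
    def __init__(self, stream):
        self.post_counts = defaultdict(int)  # this service's own database
        stream.subscribe(self.on_event)

    def on_event(self, event):
        if event["type"] == "PostCreated":
            self.post_counts[event["user"]] += 1

stream = EventStream()
posts = SocialPostService(stream)
profiles = UserProfileService(stream)

posts.create_post("alice", "hello world")
posts.create_post("alice", "second post")
print(profiles.post_counts["alice"])   # 2
```

Note that the 'user profile' service never touches the 'social post' database; its own store stays consistent purely by consuming events.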
And that’s how event sourcing can help all service databases maintain consistency at all times, with each state change in the application.
Besides this, there are a few other advantages to event sourcing:
- The reliable list of events can be used to enable other functionality in the application, such as customer notifications tied to certain events, or predictive analytics of application usage patterns based on historical event streams
- The event stream also becomes a reliable audit log of all state changes in the application, giving you valuable information to trace the point of origin of an error in case of application failure, or to know the application state at any given time
- The preserved history of all state changes can be fed into any new feature, which will then immediately be in sync with the rest of the application
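That last point is worth a concrete sketch: a new feature catches up simply by replaying the preserved history. The event shape below matches the earlier hypothetical 'post created' example:

```python
# Replaying a preserved event history to bootstrap a brand-new read model.
# The event dicts are illustrative, mirroring the 'social post' example.

history = [
    {"type": "PostCreated", "user": "alice"},
    {"type": "PostCreated", "user": "bob"},
    {"type": "PostCreated", "user": "alice"},
]

def rebuild_post_counts(events):
    """Replay all past events to build a fresh, fully in-sync state."""
    counts = {}
    for event in events:
        if event["type"] == "PostCreated":
            counts[event["user"]] = counts.get(event["user"], 0) + 1
    return counts

print(rebuild_post_counts(history))   # {'alice': 2, 'bob': 1}
```

Because the log is the source of truth, the new feature's state is guaranteed to agree with every other consumer of the same stream.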
How to achieve event sourcing with AWS
With AWS, you get a set of solutions to easily set up event sourcing for your microservices. Here’s a quick look:
- Use Amazon Kinesis to set up event streams. Though it comes with certain limitations in customization compared to Kafka, Amazon Kinesis is extremely reliable for event streams. It's capable of handling most enterprise application requirements, and the limitations ensure that you don't design something highly customized but ultimately too costly to maintain.
- Set up Lambda subscriptions to Kinesis streams to get services to tap into event streams. AWS invokes your Lambda functions with batches of records from the event stream, keeps track of the last record read by a service, and starts the next batch from that point onwards.
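A consuming Lambda function might look like the sketch below. The event structure is the standard shape Lambda receives from a Kinesis stream (base64-encoded record payloads under `Records[].kinesis.data`); the `update_post_count` helper is hypothetical and stands in for a real database write by the 'user profile' service:

```python
import base64
import json

def update_post_count(user_id):
    # Hypothetical placeholder: a real implementation would update
    # the 'user profile' service's own database.
    print(f"incrementing post count for {user_id}")

def handler(event, context):
    """Lambda entry point invoked with a batch of Kinesis records."""
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("type") == "PostCreated":
            update_post_count(payload["user"])
    return {"processed": len(event["Records"])}
```

Because Lambda checkpoints the stream position per consumer, each invocation picks up exactly where the previous batch left off.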
- Leverage Amazon Kinesis Data Firehose to load event data into data repositories, preferably Amazon S3 buckets. You can make a Kinesis Data Firehose delivery stream one of the subscribers to any event stream and have it route the data to S3. The data can be stored there indefinitely and played back whenever you need it.
And that's how you address two of the most common challenges that enterprises face when shifting from monoliths to future-ready microservices.
Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes. Srijan is also an AWS Advanced Consulting Partner, with AWS-certified teams that have the experience of working with a range of AWS products and delivering cost-effective solutions to global enterprises.
Ready to modernize your application architecture with microservices? Just drop us a line and our expert team will be in touch.