Avoiding Common Pitfalls During Vendor Selection

Posted by Kimi Mahajan on Oct 25, 2019 3:22:44 PM

Choosing a service provider to get a job done is something many of us do on a regular basis - at home or at the workplace. Whether it is picking the right broadband provider, the right vehicle insurance company, or the right coffee vending machine.

Selecting a vendor that can offer a solution to meet your software or service-related needs is as crucial as choosing a partner for your business. It is a critical decision for an organization as the right vendor will have a considerable impact on the success of your business, and a wrong one will be nothing more than a hindrance.

When Selecting Your Vendor

When choosing a vendor, it is necessary to ensure that both you and the vendor understand the business, the desired outcomes, and the expectations, to avoid wasting resources.

The vendor selection criteria may vary from organization to organization, but they must be included in the inventory management plan.

1. Assess Your Requirement

Before choosing any vendor, first analyse what your project or organization wants from the vendor. Hiring a vendor can greatly reduce the operational cost and effort of building the product yourself. Document your expectations and needs from the vendor; this will help you outline the benefits of hiring one. Later, when deciding between vendors, you can compare these expectations with their offerings, so that you can match what you need with what they can deliver.

2. Check The Offered Product

Next, do your own research and identify which vendors can provide the right solution for your requirement. Provide an introduction to your business and a summary of your requirement for a one-on-one discussion. Demo the products by porting a sample of your company's data into them, so you can test the entire workflow and see whether it will work for your organization. This will give you a fair idea of what the product offers and how it would integrate with your company's existing business processes. By this stage, having tested the products, you will be able to weed out vendors and understand their value to your business.

3. Ensure Product Complies With Security Standards

It is important to check whether the offered product complies with your organization's regulatory standards, in terms of how data is handled and stored. Ensure that the vendor takes on the onus of helping you with the security and regulatory compliance of the product.

4. Review Past Performance

Once you have identified the right solution for your project within your organization's budget, investigate the past performance of your chosen vendor before making the final decision. Consult online business references to see past reviews. It is possible that the vendor you're looking at is new and untested and has no online reviews; in that case, ask for third-party referrals. And where vendors do get negative reviews, it is important to see how professionally they deal with them.

5. Total Cost Of Ownership

By now you will be left with three or four vendors, and you will decide on one of them after this crucial stage of judging the total cost of ownership involved. Have final discussions with your shortlisted choices on the upfront costs as well as the long-term cost of maintenance. Compare the costs of all competitive choices to ensure they fall within the organization's budget. This is also the right time to mention any special requirements and needs you have from the vendor. Provide as much information as possible so as to select the best vendor in terms of price and service.

It is a time-consuming process for the project manager to work out which vendor will deliver the well-maintained, scalable product as promised, at an affordable cost. And no matter how much you strategize, you won't know whether you have found a reliable vendor until you give them a chance to prove themselves. When the decision makes good business sense and the vendor delivers exactly what you're looking for, nurture the relationship with them as you do with your employees.

Topics: Project Management, Enterprises

DIY Bot platforms or build Bots from scratch - What to choose for Your Enterprise?

Posted by Gaurav Mishra on Jun 28, 2019 5:30:00 PM

Enterprises are constantly investing in solutions that can help scale up their operations and automate their internal as well as customer-level interactions. Deploying chatbots across different enterprise use cases - accessing data from a repository, handling customer queries, collecting feedback, booking tickets etc. - has emerged as one of the key ways to optimize operations. It is estimated that 80% of enterprises will be using chatbots by 2020, to solve a diverse range of business challenges.

While that’s a great number, there are things you need to consider before deploying bots for your enterprise. Here’s a look:

Why Do You Need a Chatbot?

Before you start with what kind of chatbot to deploy, and which platform to use, it is important to answer the first basic question: why do you need a chatbot? What is the business problem that you are trying to solve? Is it to conduct research, answer queries, give reminders, or something else? Starting with a clear definition of your business problem will give you clarity on how chatbots can solve that problem for you.

Clearly defining the 'why' will involve specifying:

  • The exact use cases of your bot. This will help define the first set of features and capabilities your bot should have.
  • The users of the bot. This will help define additional features that might be valuable for the intended users. It will also help create the right conversation flow for the bot.

Once the 'why' is answered, the next question is 'how'. Based on your tech stack capabilities and the above factors, you can decide whether you want to build a DIY (do-it-yourself) drag-and-drop chatbot using any of the available bot platforms, or a customised bot from scratch.

We take a look at the two ways to build a chatbot, and which one you should choose.

Proprietary Vs Open Source Platforms

Chatbots make use of machine learning and natural language processing engines to perform enterprise tasks, and solve related business problems. While typically this would involve a skilled team of developers, there are a number of DIY chatbot platforms that are gaining popularity.

Understanding Proprietary DIY Bot Platforms 

Beginners and non-technical users can simply use platforms like Chatfuel, Motion.ai, Aivo, Botsify etc. to build and deploy bots without any coding. The key aspects of machine learning and natural language processing are incorporated into the platform, and all you have to do is create the conversation flow and the tasks you want the bot to perform. Designing these bots is as simple as dragging and dropping from a set of pre-defined functionalities, with some scope to modify and customize them for your specific business objectives.

For example, on Chatfuel, all you need to do is write use cases and user stories, follow tutorials, and run some testing. These kinds of chatbots can be built using a drag-and-drop interface, and also integrate easily with third-party services like Salesforce, Zendesk, WhatsApp etc.

Using these platforms, you can create a basic bot in minutes and then tailor it for your use case. But even with these capabilities and this ease of deployment, a DIY platform may not always be the right choice for your business. Why, you ask?

DIY bot platforms come with certain challenges:

Limited Functionality: Building chatbots on these platforms means limiting your bot's capabilities to what the platform can do. There is a high chance of your bot missing out on elements like self-learning, responding based on user intent, or carrying out contextual conversations.

And this can severely affect your customer experience, especially if compared to competing organizations that deploy self-learning and intelligent bots.

Limited Extensibility: Most enterprise solutions need to take into account concerns around integration, scalability and extensibility. While your current chatbot use case might be a simple one, adequately served by a DIY platform, is it scalable in the long run? Given that most DIY platforms offer only a specific set of functionalities, it becomes challenging to scale a DIY bot to perform tasks of greater complexity.

Compounding this is the fact that DIY platform bots also have limited integration options. In a scenario where an enterprise has used different DIY platforms to build bots for different tasks, the complete bot ecosystem becomes a jumble of different systems straining to work cohesively. Frequent integration challenges with each other as well as with the existing enterprise architecture will likely become a major drain on enterprise resources and productivity.

Building Intelligent Bots from Scratch

Companies like Google and Amazon are investing heavily to develop extraordinary capabilities in their voice assistants. Alongside, they have created products that bring powerful machine learning and NLP capabilities to developers. AWS solutions like Amazon Lex and Amazon SageMaker, along with Alexa skills, give enterprise development teams a complete toolbox to conceptualize and design bots from scratch, with a wide range of features.

What's more important is that these solutions are focused on delivering capabilities like self-learning, understanding user intent, and advanced analytics, and can even be customized for people with speech disabilities. The level of fine-tuned customer experience you can generate with these tools, if you build your bot from scratch, cannot be matched by DIY bot platforms.

Yes, building a chatbot from scratch can seem like a complex and time-consuming task upfront, but the gains for your business intelligence processes, operations, and user interactions are also higher. With code-based frameworks like AWS, Wit.ai, API.ai, or Microsoft Bot Framework, a skilled team of developers can help you create a bot that's tailored to your organization's needs. It can work across multiple platforms, solve complex use cases, generate analytics, and integrate closely with your enterprise IT infrastructure.
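As a rough illustration of what building 'from scratch' can look like on AWS, here is a minimal sketch that defines a custom intent programmatically through the Amazon Lex (V1) model-building API via boto3. The intent name, sample utterances, slot, and Lambda ARN are illustrative placeholders, not a recommended configuration.

```python
# Minimal sketch: defining a custom Lex (V1) intent in code via boto3.
# All names and the Lambda ARN below are hypothetical placeholders.
import boto3

lex_models = boto3.client("lex-models", region_name="us-east-1")

response = lex_models.put_intent(
    name="GetAssetPerformance",              # hypothetical intent
    description="Returns performance data for company assets",
    sampleUtterances=[
        "How did our assets perform on {ReportDate}",
        "Show me asset performance for {ReportDate}",
    ],
    slots=[{
        "name": "ReportDate",
        "slotConstraint": "Required",
        "slotType": "AMAZON.DATE",           # built-in slot type
        "priority": 1,
        "valueElicitationPrompt": {
            "messages": [{"contentType": "PlainText",
                          "content": "For which date?"}],
            "maxAttempts": 2,
        },
    }],
    fulfillmentActivity={
        "type": "CodeHook",                  # hand fulfilment to a Lambda function
        "codeHook": {
            "uri": "arn:aws:lambda:us-east-1:123456789012:function:asset-bot-fulfilment",
            "messageVersion": "1.0",
        },
    },
)
print(response["name"], response.get("version"))
```

From here, the conversation flow, additional intents, and the bot itself would be refined iteratively, through further API calls or in the Lex console, before wiring the bot into your channels.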

Summing up, here's a look at proprietary DIY bot platforms vs. building bots from scratch:

DIY Bot Platforms vs Building from scratch

What Should You Choose?

Choosing between these two depends largely on your enterprise requirements, team skills, and project constraints. If you need a chatbot for a simple task, like feedback collection or setting reminders, it might make sense to use a DIY platform. But its benefits are largely short-term: in the long run you cannot scale up your bots, support a growing number of use cases, integrate with other platforms, or solve complex enterprise problems with it.

There is also a chance that, in an effort to keep all bots interoperable, you create all of them on the same platform. But then you get locked into a walled garden in terms of functionality, hindering the scalability of your bot ecosystem.

So if you want to ensure that your bots are future-ready, and create a foundation that can scale with your enterprise requirements, it makes sense to build your bots from scratch, using an advanced set of machine learning and NLP solutions. And if you do not have a team of developers who can do that, you can always engage qualified third-party development teams.

Srijan's expert team of certified AWS engineers works with machine learning and NLP to create enterprise chatbots for diverse industry use cases. We recently built chatbots to access asset performance data for a large cleaning and hygiene solutions enterprise, leveraging AWS solutions like Amazon Lex, Amazon Cognito, AWS Lambda, Amazon Translate and Amazon S3 - an engagement that eventually helped the client upsell into business worth USD 90 million.

Looking to develop an effective enterprise bot ecosystem? Just drop us a line and our team will get in touch.

Topics: Machine Learning & AI, Enterprises

Can CIOs manage Digital Transformation while cutting costs?

Posted by Rajat Lal on Jun 3, 2019 5:50:00 PM

Early Encounters with Duality

Managing with Dual Strategies, a book by Derek F. Abell, was the recommended reading suggested to me early in my career by a senior executive of ABC Consultants, India's premier people management consulting company. I had been in a pure sales representation role till then, and was moving into a new Corporate Planning function with my then company. This book was supposed to act as my Bible on the challenges faced by senior management.

I've forgotten most of the content I read then (to confess, I never read the book fully, save for a presentation I did to my Corporate Planning colleagues). But the book's title has stayed with me all these years, as has the subtitle, Mastering the Present, Pre-empting the Future. I would see CIO equivalents in the 90s struggle with early-age technology initiatives, in a quandary whether to stay put in the air-conditioned environs where the Unix servers were hosted, or go down to the business teams and figure out how to make life easier for them.

As was usual at that time, management wisdom placed more faith in maximal utilisation of the former. The would-be CIOs had their KRAs quite clear - loads of report generation, and upkeep of the servers. And as happens with most such concise pieces of wisdom that stick around in your head, I ended up applying this beyond work situations, to my personal life.

So ‘Mastering the Present’ became an early start to personal financial planning, on a self-generated spreadsheet. Jotting down all expenses each day, then doing ‘small-data’ analytics at the end of each month on where the money went. Over months and years, the sheet evolved to ‘Pre-empting the Future’ - tracking stocks, mutual funds and tax planning. The sheet is now an unwieldy behemoth with some 20 tabs, possibly tending to big-data proportions.


The Contradicting Challenges Before CIOs

Coming back to work: this dual-strategy approach applied more to my customers - largely CIOs or business-technology heads. The issues they've faced can largely be put under the 'Present' bucket of 'managing' resources, or costs, and the 'Future' bucket of all that goes in the name of transformation, or 'pre-empting' its effects.

The challenges CIOs face are essentially two: reducing or optimising total cost of ownership, while creating value via transformation. But the contradiction between these two goals is perhaps as vast and unbridgeable for CIOs today as it once seemed, in Derek Abell's book, to a then greenhorn.

CIOs, today, deal with constant requests, expectations, demands, directives - to cut down on resources, merge functions or teams, pare down, or harmonise or consolidate vendor contracts. All the while launching into new digital transformation initiatives that seemingly come out of a parallel universe of meetings, conclaves, or business directives.

The CIO’s role seems to straddle the two universes, chasing one objective at the cost of the other.

Can Contradictions be Resolved?

Yet as a recent customer conversation revealed, this contradiction is not insurmountable. If the business goals and expected operational benefits of any initiative are clearly outlined - CIOs can bring about digital transformation and tie it in with reducing costs.

We undertook a focused study for a manufacturing company CXO whose remit was to ‘Optimise TCO in IT’:

  • we researched the company’s complete IT landscape
  • identified areas where costs were being sunk into servicing licensed software or infrastructure
  • identified where these were being deployed
  • and then came up with a list of business use cases where these licensed IT assets could be replaced by open source alternatives

 

In the process, we generated feasible options for mini, stand-alone digital innovation projects limited to a business unit or division - each acting as a proof of concept for a larger transformation.

So what might ‘innovation’ mean, especially since ‘innovation’ and ‘transformation’ are heavily overused terms?

Being an Indian, I can't help but try - indeed, I suppose I'm expected to try - to explain 'innovation' in terms of 'jugaad'.

‘Jugaad’ vs ‘Frugal Innovation’ - What’s the Underlying Objective?

There are lots of books out there eulogising 'jugaad', but since I haven't read any, I won't refer to them here. Also, the usual positive sense of the word - this or that entrepreneur finding a cheap alternative and making billions (they usually go bust soon after) - holds true less and less.

It's more likely to be in line with this comment (even if it's highly obnoxious) of the infamously sarcastic Prince Philip. While India has seen technology and economic boom since liberalisation in the 1990s and especially the last few years, a ‘we are like this only’ attitude prevails, and in many ways, the nation remains tied to its socialist past.

Despite recent efforts at a national level, this is the attitude that has spawned and nurtured a culture of circumventing laws, evading taxes, soliciting favours, lax service standards, and public litter...  

… from being a stepping stone to 'frugal innovation', 'jugaad' has come to mean 'finding short-cut ideas to cut costs/corners, no matter if the objective is compromised' - as opposed to what 'innovation' means: 'translating ideas into replicable solutions at reduced costs, to help achieve the (customer's) objective'.

As long as the objective is remembered and respected - “the business goals and expected operational benefits” - and cost reduction scenarios worked out accordingly, we are on track with innovation. So CIOs/CXOs can manage with dual strategies, or indeed combine seemingly contradictory strategies into one.

Enterprise Transformation - Experiences, and Examples

Some examples to reference, on Digital Innovation and Transformation delivered using open source platforms, while reducing costs / TCO -

  • An Omnichannel CMS for a Telco's Digital Stores - with a global template framework enabling client teams in 21 EMEA countries to structure pricing and product information as per regional market needs, plus an infrastructure solution for the client to shift their hosting to new servers, cutting annual costs by hundreds of thousands of pounds
  • A Video Analytics Solution for an Industrial Cleaning Solutions Company - a machine learning solution using Amazon SageMaker, Lambda and S3, on a scalable, pay-as-you-go model, analyzing recorded video feed data to log time-stamped asset performance - delivering cost benefits in terms of reduced asset recall and repair costs, as well as reduced IT infra costs

And again, being an Indian, I can’t help but relate to some examples of successful, ‘frugal’  innovation - ‘jugaad’ gone right (or wrong..?) so to speak -  

We are yet to ‘Master the Present’ - inefficiencies still run dark and deep, many streets and lanes yet to sweep, and some more years to sow before we reap (my modest take on Robert Frost).

But we’ve made progress in ‘Pre-empting the Future’ - leaders globally in areas transformative in reach and scale - universal healthcare & insurance, digital payments and space technologies, for example. Things that might make us proud, and perhaps wake up Prince Philip from his sleep.

...

The views expressed in this blog are solely of the author and do not reflect, by any means, the view of Srijan Technologies. 

Topics: Digital Experience, Enterprises

Multisite architecture - Is it the right choice for your business?

Posted by Akshita Rawat on Apr 9, 2019 5:25:00 PM

For enterprises looking to provide highly targeted online experiences for different product lines, a multisite architecture is a no-brainer. However, the challenges can be overwhelming: from managing different web content and different technology stacks, to increased costs, and the most dreaded of all - an incoherent online experience for visitors. And this raises several questions:

How well can Drupal manage your multi-region sites from a single codebase?

What options does it offer to govern the web footprint better while also securing the data?

And most importantly - is Drupal the right choice for your multisite architecture?

Let us delve into these questions one by one.

Multisite architecture: A closer look

Multisite is an out-of-the-box feature in Drupal that lets businesses run different websites from the same codebase. These websites share the same configuration and sometimes even the same modules and themes.

Utilizing the multisite features of Drupal can be the perfect fit for the needs of several businesses. Here are 6 reasons why you should opt for a multisite architecture:

  1. Knocks off the Extra Cost: A multisite architecture saves your business the extra cost of running separate sites by sharing resources (server, database, codebase). Licensing fees, website build costs, duplication issues and hosting expenses are all reduced drastically.
  2. Easy to Manage with one Super Admin: With a distributed system of websites, content management can get tougher. A multisite platform, however, provides a unified database and a single dashboard. This gives the admin centralized control to manage the creation and deployment of websites. It also makes building and maintenance easier with cloned templates, modules, and themes.

  3. Easy Maintenance: A common site architecture allows organizations to add new modules and to update or replace existing ones. With a unified multisite platform, visibility, access, and control of your sites remain consistent. Easy maintenance ultimately leads to better site experiences for your audience.

  4. Upgrade Only Once: Multisite is particularly useful for managing code, since each upgrade needs to be done only once. This saves a lot of time, particularly when the number of websites runs into double digits.

  5. Speed up the Website: Different and incompatible CMS platforms can be fatal to your digital strategy. Even while your content gets the right traction, heavy traffic can slow down your website, eventually pushing visitors away as your SERP rankings drop. Breaking the content into separate, interlinked subdomains, intranets and wikis is a better option - and it works wonders for media websites.

  6. Consistency in Brand: Often organizations opt for multisite because they need to build websites with the same features. With a single dashboard to manage them all, admins can create and control templates and sites to build brand consistency. How else will visitors connect with three unique themes on three different websites?

Game of Codebase

One key element of multisite platforms is the codebase. By defining and maintaining a superset of functionality in a single codebase, the platform can provide great efficiencies.

A common site architecture allows new modules to be added, existing modules to be updated, and any other change you want to roll out across your sites. This not only saves time during the build process but also when launching new sites.

What type of multisite do you need?

The key to effectively using a multisite platform is to understand the requirement before adding features and functionalities. Businesses opting for a multisite often fall into one of three cases:

  1. Clones: These are sites with the same look and feel. They have only minor changes to their branding and content. Clones are perfect for multi-regional or multilingual websites where a template can be set and then deployed many times over without any significant rework.
  2. Feature Flexible: These are the sites which require slightly more consideration when it comes to the features of the different websites. They are more complex than clones. Feature flexible sites require a deeper level of change management and communication for development partners who are working on the platform as well.
  3. Snowflakes: These kinds of sites have modifications unique for each of them. They provide a unique experience and have an entirely unique theming and backend development. This often leads to these sites needing their own unique site architecture that may change frequently.

A snowflake will most likely be the main organization’s website, where the major value of the digital business, such as e-commerce or customer support, will be housed.

Now that you know the kind of multisite you need, let us delve deeper into knowing how and why Drupal is the right choice for your multisite architecture.

Is Drupal the right choice for multisite?

Whether it is a university with different departments, a government serving its citizens, or a corporate business, each has its own reasons for a multisite. What is common is that none of them wants sites that cannot operate independently, a single shared database, or a security lapse. Which is where Drupal comes into the picture.

Here’s what it offers:

  • Enhanced Workflow for Distributed Teams: Organizations can eliminate the time and opportunity for error introduced when users enter content in multiple places. Content staging and syndication are possible with Drupal's Workflow and Deploy modules.
    The Workflow module allows defining multiple workflows with fixed states and the transitions allowed between them. Drupal 8.6 ensures easy staging and preview of content in different environments.
    You can define multiple workspaces (such as "staging" and "live") which are copies of your site, and create content (or modifications to existing content) that is visible only within that workspace. Then, once the changes have been verified, you can "deploy" the content to another workspace at the click of a button.
  • Secured Network: When opting for a multisite platform, inconsistency across technologies can create security loopholes. Each CMS requires its own bug fixes, security scans and updates, making it challenging for your organization to minimize risk across the board, and thus leaving you vulnerable.
    Drupal has a dedicated team of volunteers who track security-related bugs and release updates. This includes resolving reported security issues and publishing security advisories that explain vulnerabilities, offering documentation for writing secure code and safeguarding Drupal sites, and providing technical assistance to module maintainers.
    With a unified multisite platform, all site updates and maintenance can be done once to the codebase.
  • Faster Time to Market: The more CMSs you use across different websites, the tougher it gets to manage your digital properties, since you need more resources to maintain them.
    By consolidating them into one, organizations can reuse themes and templates, reducing deployment time. To support organizations' varying needs, Drupal also provides multisite management tools for easy customization.
    Not just this: each time a new Drupal core update is released, organizations only need to perform the update on one codebase instead of on each site separately.

All of this and more makes Drupal the optimum choice for your business, whatever kind of multisite you need.

Conclusion

Opting for multisite is a big decision, but it doesn't have to drain your budget and time. By adopting a unified platform for your technical and business needs, you can take the hassle out of multisite management while reducing costs and bringing brand and design consistency to all of your sites.

Planning to opt for multisite but still have doubts? Drop us a line and our experts can help explore your options.

Topics: Architecture, Enterprises

Leveraging AWS Solutions to solve High-Value Enterprise Challenges

Posted by Sanjay Rohila on Mar 29, 2019 4:31:00 PM

The AWS ecosystem is an invaluable asset for enterprises driving their digital transformation. While AWS Cloud infrastructure powers a huge slice of enterprises, there are several other AWS solutions, especially in the realm of computation and machine learning, that are enabling enterprises to leverage emerging technologies.

Here’s a look at some interesting projects and PoCs that Srijan has delivered for enterprise clients, using AWS solutions.

Chatbots powered by Amazon Lex and AWS Lambda

As a leading provider of intelligent cleaning solutions, the client wanted to be able to analyze and optimize the performance of their products. They had a set of data visualization dashboards that track this data in real time. However, these were not easily accessible and involved some effort before stakeholders could extract relevant insights.

The solution was to build enterprise chatbots that could deliver the same insights without taking up too much time or effort on the part of the client stakeholders. They could just type their query into the chatbot, and receive an appropriate response.

Srijan leveraged Amazon Lex as the core conversational interface framework to design the chatbot. Lex's deep learning functionalities enabled the chatbot to identify the intent behind a particular question, understand the context, and give back an appropriate response.

The other key solution used was AWS Lambda, which handled the backend extraction of data from the client databases, and the computation needed to generate the correct response. The business logic defined atop Lambda determined how the raw data from various sources would be interpreted and presented to the user as a final answer.
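The client's actual business logic is not reproduced here, but the shape of such a fulfilment function is standard: a Lex (V1) code hook on Lambda receives the parsed intent and slots and returns a 'Close' dialog action. The intent, slot names, and lookup helper below are hypothetical.

```python
# Rough sketch of a Lex (V1) fulfilment handler running on AWS Lambda.
# 'GetAssetPerformance' and fetch_performance() are hypothetical stand-ins
# for the client's actual intents and data-access logic.
def fetch_performance(asset_name, report_date):
    # Placeholder for querying the client's databases / dashboards.
    return f"{asset_name} ran at 94% uptime on {report_date}."

def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent == "GetAssetPerformance":
        answer = fetch_performance(slots.get("AssetName"), slots.get("ReportDate"))
    else:
        answer = "Sorry, I can't help with that yet."

    # Lex V1 expects a dialogAction telling it how to continue the conversation.
    return {
        "sessionAttributes": event.get("sessionAttributes") or {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": answer},
        },
    }
```

Lex then renders the returned message back to the user on whichever channel the bot is deployed.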

Other AWS services used were:

  • AWS Cognito for user authentication
  • Amazon Translate to ensure the chatbot could be used by client stakeholders in any location
  • Amazon S3 to store relevant asset images and performance graphs that could be accessed solely by the chatbot users.

 

READ COMPLETE CASE STUDY

Video Analytics powered by Amazon SageMaker

The cleaning solutions enterprise was also receiving increasing complaints that their floor cleaning machines were not performing as expected. The client wanted detailed logs of machine performance across all locations, so as to validate or refute these customer claims, and prevent unwarranted expenditure on recalls and repairs.

Srijan proposed a video analytics algorithm capable of identifying the machine and verifying its performance at given locations. The approach was focussed on recording real-time footage of the machines operating at different customer locations and then automatically analyzing the video feed to identify and verify if the machines are performing as expected.

This was achieved with a deep learning model designed to analyze video feed data. The key objective of the model, built on a convolutional neural network, was to accurately identify the machine in a video stream at 5-second intervals. These sightings are then timestamped and put together in a JSON file, creating a continuous log of whether a machine is working or not, in any given location.

Amazon SageMaker was the core solution used for this model. As a managed platform, it allowed for:

  • Creating the deep learning algorithm with TensorFlow (a rough sketch of this step follows below)
  • Data augmentation and training the algorithm to accurately recognize the machines in a video stream
  • Quick and efficient scaling of training data to create a more accurate machine learning model
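The project's actual training code is not public; the sketch below only illustrates how a TensorFlow model of this kind is typically trained and deployed with the SageMaker Python SDK of that era (v1-style parameter names). The entry-point script, S3 paths, role ARN, and hyperparameters are placeholders.

```python
# Illustrative sketch of training a TensorFlow model on Amazon SageMaker
# (Python SDK v1-era parameter names). 'train.py', the S3 paths and the
# hyperparameters are placeholders, not the actual project configuration.
from sagemaker.tensorflow import TensorFlow

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = TensorFlow(
    entry_point="train.py",              # script defining the CNN and training loop
    role=role,
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",  # GPU instance for the CNN
    framework_version="1.12",
    py_version="py3",
    hyperparameters={"epochs": 20, "batch_size": 32},
)

# Labelled video frames, already uploaded to S3 by a data-augmentation step.
estimator.fit({"train": "s3://example-bucket/frames/train",
               "validation": "s3://example-bucket/frames/val"})

# Deploy an endpoint that a Lambda-based pipeline can invoke per video frame.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")
```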

 

Once the model was in place, Srijan used Amazon S3 and AWS Lambda to create the workflow for collecting video feed from various customer locations, analyzing them, and creating detailed logs of machine performance.

READ COMPLETE CASE STUDY

Enterprise Data Analytics Platform with AWS Lambda

OnCorps offers a PaaS that provides insight into enterprise data, helping teams make better decisions using predictive analytics, machine learning and peer comparison. They wanted to create a platform that could do a lot of the heavy lifting when it came to data - right from gathering, to processing, to analytics and visualization.

While the platform was built on Drupal, Srijan leveraged a host of AWS solutions to deliver some powerful functionalities:

Amazon EC2: This offered an easily scalable and cost-effective computation solution. It gave the team the ability to run data analysis and compute workloads to aggregate data, as well as deliver predictive insights.

AWS Lambda: The frontend interface of the platform needed structured data to work with, preferably in JSON format. Lambda was used to transform the data coming in from various sources into a standard format.
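The actual transformation rules are specific to OnCorps' data sources; the sketch below simply illustrates the pattern of a Lambda handler normalising heterogeneous records into one JSON shape, with field names invented for illustration.

```python
# Minimal sketch of the normalisation pattern: a Lambda handler that maps
# records arriving in different source formats onto one standard JSON shape
# the front end can consume. Field names here are invented for illustration.
import json

def normalise(record):
    return {
        "id": record.get("id") or record.get("record_id"),
        "timestamp": record.get("timestamp") or record.get("created_at"),
        "metric": record.get("metric") or record.get("kpi_name"),
        "value": float(record.get("value", record.get("amount", 0))),
    }

def lambda_handler(event, context):
    records = event.get("records", [])
    payload = [normalise(r) for r in records]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```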

Amazon S3: This was used to host the single page application built on AngularJS. S3 was also used as storage for all files and content assets for the platform.

AWS Cost Explorer: One of the Srijan team’s primary objectives was to keep product development costs on track. AWS Cost Explorer was used to get a clear visualization of operation costs across all solutions, and optimize the budget as much as possible.

With these solutions in place, OnCorps was able to roll out a scalable platform with >99% performance reliability across enterprise customers.

READ COMPLETE CASE STUDY

Cloud Native Storage with Amazon S3

TheRecordExchange (TRX) offers a SaaS-based workflow management application to capture and access recordings of legal proceedings. Their business model is dependent upon frictionless uploading and downloading of text and media files. The application frontend is designed so that transcription agents can easily upload files of legal proceedings, and customers can download them at will.

Given this, they needed a cloud-based server that could efficiently manage all file-related requests on the applications, and robustly render them on the frontend.

With Amazon S3, an object storage solution, Srijan was able to deliver flexible cloud-native storage for TRX. S3 enabled:

  • Addition of any number of files to the application, without worrying about capacity constraints. Since the application layer didn't have to handle file processing, it was lighter and delivered a better user experience.
  • Dynamic spacing, which allowed TRX to scale up or scale down space usage as and when required. With no minimum usage requirements and availability of on-demand usage, S3 proved to be a highly cost-effective solution for the client.
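The case study does not spell out the upload mechanism, but a common pattern for this kind of frictionless, capacity-unconstrained file exchange is to have the application hand out time-limited pre-signed S3 URLs, so file bytes never pass through the application layer. A minimal sketch, with a placeholder bucket:

```python
# A common pattern (not necessarily the exact one used for TRX): the
# application issues time-limited upload and download links so that file
# bytes go straight to and from S3. Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-trx-recordings"          # placeholder bucket name

def upload_url(key, expires=3600):
    """URL a transcription agent can PUT a recording to."""
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )

def download_url(key, expires=3600):
    """URL a customer can GET the finished transcript or media file from."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )

print(upload_url("hearings/2019-03-01/case-123.mp3"))
```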

READ COMPLETE CASE STUDY

Srijan is an Advanced Consulting Partner for Amazon Web Services (AWS). It is currently working with enterprises across media, travel, retail, technology and telecom to drive their digital transformation, leveraging a host of AWS solutions.

Looking for an experienced AWS certified team to aid your digital growth strategy? Just drop us a line and our team will get in touch.

Topics: AWS, Cloud, Enterprises

Why enterprise applications need a Service Mesh Architecture

Posted by Ashish Thakur on Mar 15, 2019 4:05:00 PM

As enterprises increasingly shift from monolithic to microservices architectures, IT teams are faced with the problem of effectively orchestrating these microservices. When a single application is created with a few different containerized services, communication between them can be easily managed. However, enterprise applications with hundreds or thousands of different microservices need a better solution for load balancing, monitoring, traffic routing and security.

Enter the service mesh architecture.

Service Mesh Architecture

A service mesh is an infrastructure layer that manages service-to-service communication, and provides a way to dynamically route, monitor, and secure microservice-based applications.

Previously, the logic governing inter-service communication was coded into each microservice. But that’s not a feasible option when dealing with a large volume of microservices, or scaling applications by adding new services.

The solution is to have proxies that manage the service-to-service communication, running beside each microservice rather than within it. These are also known as 'sidecar' proxies, and together they form the abstracted mesh layer that manages communication between microservices.

Why is this Needed?

The objective with microservices was to build applications as a collection of independent services that can essentially fail without causing system-wide outage. In practice however, most microservice-based applications began operating with direct communication between services. As the application complexity and number of microservices increased, this created greater interdependence between services, thus lowering agility and system resilience.

Hence, complex enterprise applications with a large number of microservices need a service mesh architecture.

Isn't that what APIs did?

Yes, APIs perform a similar function to a service mesh, i.e. they govern the flow of information. The key difference lies in what kind of communication they govern.

An API gateway manages the communication between an application and others, within and outside the enterprise architecture. It provides a single entry point into an application for requests from all external clients, and handles user authentication, routing, monitoring and error handling. It also abstracts the underlying complexity of an application, with its component microservices, from external clients.

A service mesh architecture on the other hand manages the communication between the microservices within an application.

All the proxy sidecars that make up the service mesh are listed in a service registry. Each microservice that wants to request information (client microservice) will have its proxy sidecar look up the registry to find the available proxies associated with the target microservice. It then uses the defined load balancing algorithm to direct its request to the right proxy.
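To make that mechanism concrete, here is a toy, mesh-agnostic sketch of what the client-side sidecar effectively does: consult the registry for the target service's endpoints and apply a load-balancing rule (round robin in this illustration). Real meshes implement this inside the proxy itself and configure it declaratively; the registry contents here are invented.

```python
# Toy, mesh-agnostic illustration of sidecar routing: consult a service
# registry for the target service's endpoints and pick one using a
# load-balancing rule (round robin here). Registry contents are invented.
import itertools

# Simplified service registry: service name -> known sidecar endpoints.
REGISTRY = {
    "orders":  ["10.0.1.11:9000", "10.0.1.12:9000"],
    "billing": ["10.0.2.21:9000", "10.0.2.22:9000", "10.0.2.23:9000"],
}

# One round-robin cursor per target service.
_cursors = {name: itertools.cycle(endpoints) for name, endpoints in REGISTRY.items()}

def route(target_service: str) -> str:
    """Return the endpoint the client-side proxy would forward the request to."""
    if target_service not in _cursors:
        raise LookupError(f"no endpoints registered for '{target_service}'")
    return next(_cursors[target_service])

for _ in range(4):
    print("billing ->", route("billing"))
```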

What problems does a service mesh solve?

The service mesh primarily resolves concerns around increasing interdependence that creeps into microservice-based applications as they scale in complexity. Here’s how:

Deploying multiple microservice versions simultaneously

Canary releases, or introducing a new version of a microservice to a select number or type of requests, is a standard way to ease in new feature additions. However, effectively routing requests between old and new versions can be difficult when the logic is coded within each service, because services tend to have interdependencies on other services. Similarly, A/B testing microservice versions also requires dynamic routing capabilities that are best delivered by a service mesh.

The service mesh architecture has the routing rules, and can make the decision to direct source service queries to the right version of the target services. This decoupled communication layer reduces the amount of code written for each microservice, while still better managing inter-service routing logic.

Detailed visibility into inter-service communication

In a complex microservices architecture, it can be difficult to pinpoint the exact location of a fault. But once all communication is routed through a service mesh, there is a way to gather logs and performance metrics on all aspects of the microservices. This makes it easier to generate detailed reports and trace the point of failure.

The logs from the service mesh can also be used to create standardized benchmarks for the application. For example, how long to wait before retrying a service that’s failed. Once these rules are coded into the service mesh, microservices operation becomes optimized as the system doesn’t get overloaded with unnecessary pings to a failed downstream service before the requisite time-out period.

Microservice testing

Testing each microservice in isolation is critical to ensure application resilience. There are also instances where you need to test service behaviour when faults are introduced in downstream services. And that’s difficult and risky to do if we are forcing those faults to actually occur in the services.

The service mesh is the perfect way to simulate these faults in the systems and study the associated response.

Fault Tolerance

Resilience is a key reason why microservices architecture is preferred, and elements like circuit breakers, load balancing, rate limiting and timeouts are what make this possible. These rules are usually coded into each microservice, increasing complexity in the system, besides being time-consuming to create.

Once again, the service mesh can be used to improve fault tolerance by taking these functionalities out of the microservices and adding them to the mesh. These can be implemented via a set of rules that will govern all microservices within the application, without actually cluttering the microservice implementation.

So that was a quick rundown of the service mesh architecture and why it's becoming a crucial infrastructure requirement for enterprise applications. Upcoming blogs will explore service mesh implementation in depth, and evaluate tools like Istio, Linkerd and others for implementing a service mesh architecture.

Srijan’s teams have expertise in decoupling monolithic systems with elegant single-responsibility microservices, as well as testing, managing and scaling a microservices architecture.

Looking to modernize legacy systems? Drop us a line and our enterprise architecture experts will be in touch.

 

Topics: Microservices, Architecture, Enterprises

Amazon Lex and the possibilities it holds for Enterprises

Posted by Sanjay Rohila on Feb 28, 2019 2:52:00 PM

Amazon Lex is an AWS solution that allows developers to publish voice or chat bots for use across different mobile, web and chat platforms. It can listen, understand user intent, and respond to context. Powered by deep learning functionalities like automatic speech recognition (ASR) and natural language understanding (NLU), Lex is also the technology behind Alexa devices. Now generally available, it can easily be leveraged by enterprises to build their own digital assistants.

Amazon Lex for Enterprises

For enterprises, Lex-powered applications can become a key competitive advantage, allowing them to optimize processes and enable cost savings. A few key aspects where Amazon Lex can assist are:

Performing User-based Applications

Lex can help build bots capable of providing information, or addressing user requests and queries. It can handle tasks like ordering food, booking tickets, and accessing bank accounts.

Made possible by ASR and NLU, these capabilities can help create powerful interfaces for customer-facing mobile applications. Such a voice or text chat interface on mobile devices can help users perform tasks that involve a series of steps played out in a conversational format. Further, the integration of Lex with Amazon Cognito helps developers control user management, authentication, and sync across all devices.

For example, healthcare enterprises can enable patients to schedule appointments at their facility with Lex-powered bots. A patient can send a text request via the mobile application for “an appointment on Monday”.

  • Amazon Lex will recognize that an appointment has been requested, and will ask the user for a “preferred time on Monday”.
  • The user responds with a text, say, “1 pm”.
  • Lex will reserve this appointment time for the user once the account information is retrieved.
  • It will further notify the patient that “an appointment time of 1 pm has been finalised on Monday”.

 

Similarly, tasks like opening bank accounts, ordering food, or finding the right dress at a retail store can all be accomplished via Lex-powered bots.
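For illustration only, this is roughly how a patient-facing application could drive that conversation through the Lex (V1) runtime API; the bot name, alias, and user ID are placeholders for whatever the provider actually deploys.

```python
# Rough sketch of how a patient-facing application would drive this
# conversation through the Lex (V1) runtime API. Bot name, alias, and user ID
# are placeholders, not a real deployment.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

def say(text, user_id="patient-42"):
    resp = lex.post_text(
        botName="AppointmentBot",     # hypothetical bot
        botAlias="prod",
        userId=user_id,
        inputText=text,
    )
    print(f"patient: {text}")
    print(f"bot:     {resp.get('message')}  [{resp['dialogState']}]")
    return resp

say("I need an appointment on Monday")   # Lex elicits the missing time slot
say("1 pm")                              # slot filled; fulfilment confirms the booking
```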

Enabling Device Interactions

Lex also helps you build highly interactive and conversational user experiences for connected devices ranging from vehicles, to wearables, and other appliances.

For example, a wearables company can have Lex powered bots installed on its products for providing information like day, date and weather. So when the user makes a request like, “temperature in California”, Amazon Lex on the device recognizes it and responds in an appropriate manner.

  • It can further inquire, “Celsius or Fahrenheit?” 
  • And on receiving an answer “Celsius”, it will retrieve the information with the help of other AWS services involved

This ability to imbue everyday accessories with an intelligent digital assistant allows brands to always exist in their customers' immediate environment. And that means an exponential rise in brand recall and customer retention.

Enhancing Enterprise Productivity

Whether it is checking your sales data from Salesforce, marketing performance from HubSpot, or customer service status from Zendesk, you can do it all and more, directly with your chatbots. Lex enables you to build bots that connect to a variety of such enterprise productivity tools via AWS Lambda functions.

So, if an employee wants to access the “sales numbers for the month of December”, he can simply ask the bot on his system. Lex will recognize this as a request, and pull data from relevant enterprise systems like Salesforce or proprietary BI dashboards. Once the data is received, it will deliver it to the executive on his device and platform of choice.

This helps enterprises streamline their operations, and improve organizational productivity. 

Benefits of Deploying Lex for Your Enterprise

Ease of usage: Amazon Lex lets you build your own bot in minutes, no deep learning expertise required. Once you have the basic objective of the bot mapped out, you can specify the conversation flow, and Lex will build a natural language model to ask and respond to user queries.

Seamless deployment and integration: A Lex powered bot has native interoperability with other AWS services like Cognito, Lambda, and CloudWatch. It can scale automatically, and you need not worry about provisioning hardware or managing infrastructure to power your bot experience.

High quality ASR and NLU: Lex enables your bots to understand the intent behind the input, and then fulfil that intent by invoking the appropriate response.

Multi-turn conversations: With the help of Lex, you can build multi-turn conversations for your bots. This means that once an intent has been identified, users are prompted with a series of follow-up questions to extract the information needed to give the right answer. For example, if “book hotel” is the intent, the user is prompted for the location, check-in date, number of nights, etc.

Cost effectiveness: Amazon Lex has no upfront costs or minimum fees. With a pay-as-you-go model, users are charged only for the text or speech requests made. And with the Amazon Lex free tier, you can try it without any initial investment.

How Srijan can Help

AWS has a broad range of AI and Deep Learning solutions to help enterprises build and deploy intelligent products and services. But you also need a skilled team that can evaluate your business requirements, and choose the right AWS deep learning solutions that fit the bill. That’s where Srijan teams get into the game.

Srijan's teams are adept at leveraging Amazon Lex to deliver a range of services.

Ready to leverage conversational interface for your enterprise? Let's brainstorm to explore where your enterprise can best leverage Lex-powered bots.

Topics: Machine Learning & AI, Enterprises

Why Fortune 500 Companies Use Drupal

Posted by Gaurav Singh on Feb 5, 2019 2:58:00 PM

With so many options available in the market, picking the right content management system (CMS) for your needs can be challenging. A good CMS must allow even a non-technical person to be able to make changes and add content without having to depend on a technical team.

Drupal is the leading CMS for developing sophisticated, flexible and robust web applications, and it allows unlimited customization. Many Fortune 500 companies choose Drupal because it is built for editors, is powerful for administrators, and has secure and accessible features. Some of these companies include General Electric, Warner Media, Johnson & Johnson and eBay.

Drupal is suitable for complex, content heavy, and high traffic websites, as well as those with large resource libraries and databases.

Why Choose Drupal?

Drupal has evolved into more than just a CMS. It's a digital experience management system for creating content and personalized customer experiences, helping your organization connect with customers wherever they are.

Here are a few top-level advantages of using Drupal CMS:

1.    Flexible Content Architecture

One of the key strengths of Drupal is its flexible content architecture with essential tools for content creation and publishing, like a customizable WYSIWYG editor. This allows anyone to create entirely new types of content (landing pages, blog entries, surveys and polls), with complete control over where and how the content is displayed.

Drupal also handles multiple users with different roles and permission sets and lets administrators control access to tasks.

2.    Super Scalable

Drupal supports some of the world's highest-traffic sites, such as the Emmys and eBay, and is capable of handling both traffic spikes and regular streams of high-volume traffic.

3.    Seamless Integration Capabilities

One of Drupal’s greatest strengths is that it is highly flexible and seamlessly integrates with other solutions and services. It uses an open standard which provides an extensible framework to integrate with data sources. This gives organizations a great opportunity to implement functionality in the most appropriate technology or platform, and then simply connect to it via web services or other means.

Thanks to this seamless third-party integration, Drupal can be introduced as a new part of your ecosystem, consuming existing services and thus minimizing the scale of architectural change needed to introduce new services.

Drupal also integrates with “internal” services - services that you don't intend to expose to the public (CRM, accounting software, etc.) - which makes it suitable for government websites.

4.    Secure CMS

Security is a major concern for businesses, and Drupal is a proven, secure CMS and application framework that stands up to critical vulnerabilities.

A dedicated security team with dozens of experts from around the world regularly reviews releases and security issues. The team fixes security problems and publishes advisories that explain vulnerabilities, along with steps to mitigate them.

With regular updates and security reports regarding users and software installations, organizations around the world - including leading corporations, banks, and governments that deal with a lot of sensitive personal information - rely on Drupal's ability to prevent security threats.

5.    IT Cost Reduction

Open source frees businesses from the “the more you grow, the more you pay” model. Drupal's open source licensing spares companies and organizations from costly licensing fees and vendor lock-in.

The community-driven nature of the platform allows innovation through code sharing by hundreds of thousands of developers—the code is available to others to use in their projects or to find ways to make it better. With Drupal, an open source system, the focus is on community and collaboration. As more people and organizations participate, more features are available, thus resulting in less expenditure on development, while accelerating innovation.

6.    Highly Extendable

Drupal allows you to configure and add the appropriate modules to take the platform beyond its core capability. The numerous modules in Drupal help you build impressive sites with little programming skill. And if an existing module doesn't support your specific business need, you can build it through custom code.

The most recent upgrade from Drupal 7 to Drupal 8 made the integration and use of contributed modules easier and better than before. There are tens of thousands of modules currently available.

7.    The Community

Drupal is beloved by a thriving development community. The tagline rightly says, “Come for the Software, Stay for the Community”. Drupal has a large user community which regularly contributes to its repositories. Here are a few figures to illustrate this:

  • 38,695 active developers
  • 1.2 million accounts
  • 2,000+ commits / week
  • 31,199 modules and 2,141 themes
  • 2,400+ events each year

The community maintains a platform for digital innovation and ideas.

While the battle between Drupal and other CMSs is guaranteed to be a longstanding one, the benefits of choosing Drupal for complex, content-heavy, and high-traffic websites, as well as those with large resource libraries and databases, are unmatched. With immense flexibility, extensibility, huge open source community involvement, massive database accommodation capabilities, and impressive security measures, Drupal is a powerful choice for your content-based website.

Looking to migrate your enterprise websites to Drupal? Speak to our Drupal experts for an efficient, scalable and cost-effective way to do this.

Topics: Drupal, Drupal Planet, Enterprises

Ensuring Vendor Diversity - Why Enterprises should work with small IT vendors

Posted by Gaurav Mishra on Sep 19, 2018 11:22:00 AM

Large IT vendors are great. They have huge teams they’re able to put at your disposal, they’ve been around and will be around for a long time, and more often than not they have the experience of working on a project like yours. In short, they are safe, reliable, and they will get the work done. 

Smaller technology companies, on the other hand, are never the first choice for enterprises because of several concerns:

  • Do they have a team big enough to serve my project?
  • Haven’t really heard about them. Are they any good?
  • They make a good point but what if they can’t deliver? Is that a risk worth taking?

 

These are all valid questions that an enterprise procurement team has to consider. But amidst these doubts, there are a few key advantages that you might be missing out on.

Here are four unique benefits that smaller IT firms bring to the table:

Niche Expertise

While large vendors may have teams offering a lot of different technologies, each is a small part of their overall skill set. So you get a team that has a range of skills but often not enough depth of skills. On the other hand, small IT companies typically build deep expertise in niche technologies, which makes them capable of delivering more customized and sophisticated solutions with those technologies. 

For example, we work with a global consulting giant because they wanted a team with deep expertise and experience of working with Drupal. They were already working with a large IT vendor, but were not really convinced of their Drupal skills. We, however, had a team of Drupal developers experienced in deploying large-scale Drupal implementations. And that fit perfectly with what the client was looking for. We have been working with them for six years now, and to date are the only high-skilled Drupal team they trust.

Interestingly, our engagement has also led them to expand their previously limited Drupal footprint. Currently, they run a vast array of systems on Drupal, including internal as well as customer-facing websites that leverage decoupled Drupal.

De-Risk Project Delivery

Not putting all your eggs in one basket is widely accepted wisdom, and it should apply to hiring an IT vendor as well. That is why you need vendor diversity - a mix of different vendors working on your project. Entrusting the entire project, or all simultaneous independent or interdependent projects, to a single vendor is taking a huge risk on delivery. In a scenario where the initial engagement is plagued by project management or delivery challenges, the impact cascades to downstream projects as well, severely delaying project timelines. The high initial investment and time spent on onboarding a large vendor also make it difficult to quickly change vendors mid-project.

Introducing small IT vendors into the project, especially for parts of it that demand niche technology expertise, is a good way to de-risk delivery. This could have two key advantages:

  • Segments of the project get delivered even if there are challenges with the large vendor, so you are not left with a huge bill and nothing to show for it.
  • Since you already have a different vendor in the mix that understands the project (even if just a part of it), they can easily be brought in to either assist or replace the large vendor’s teams, if necessary. You avoid critical expenditure in the middle of the project while bringing it back on track.

Agile and Transparent Project Management

Small companies have tighter teams, shorter hierarchies, and less red tape, which allows them to move faster on projects.

While Agile is an accepted project delivery method across most IT teams, it is easier to follow rigorously with smaller, tightly integrated teams. And that ensures a more streamlined, flexible, and faster product development and delivery process.

For example, one of our clients, a global financial services firm, had stalled their product development owing to the US$ 1 million budget and six-month delivery timeline quoted by a large IT firm.

We came on board and offered to do a PoC delivering the most complex piece of their product. We did it at an investment of US$ 50,000 and in just four weeks. The lower investment and shorter delivery time gave the client confidence to move ahead with the project. We successfully delivered the complete product and have been maintaining it for a few years now.

In addition to speed, teams brought in by smaller vendors are more amenable to working in close collaboration with your internal teams and aligning with your delivery processes. This gives your stakeholders better visibility into the project, and hence greater control. Given the shorter hierarchies, you also have easier access to vendor-side decision makers in case you need to escalate certain challenges and concerns.

Superior Service and Delivery

Smaller IT vendors specializing in a specific set of technologies are small by design. They choose to work on a limited number of projects at a time, and that creates a huge advantage for their clients.

Teams from smaller vendor firms are highly focused on their respective projects, and not spread too thin across multiple engagements. This means greater attention to your particular project, with teams making the effort to identify and develop the right technology solutions, and taking the time to truly innovate to meet your requirements. With a fully dedicated team, you can also be assured of timely delivery and a more transparent project management process.

Yes, a small IT vendor may not be the immediately obvious choice for enterprise procurement teams. But these advantages make them a viable contender, especially when it comes to developing highly customized solutions leveraging specific technologies. What they might lack in numbers, they more than make up for in skill, flexibility, and accessibility.

Srijan’s 250+ strong technology team is currently working with global enterprises across seven countries. Our extensive expertise with Drupal, as well as skilled teams for complementary technologies, enable us to successfully deliver a range of transformative digital solutions to our enterprise clients.

Working across media, travel, retail, telecom, and pharmaceutical industries, we typically kickstart our engagements with small PoCs. That helps our clients assess if our technology expertise fits their requirements, and how our teams work and deliver.

Got a digital transformation project in the works? How about we do a quick PoC to demonstrate the benefits of working with a smaller vendor?

Tell us a bit about your project, and our solution experts will be in touch.

Topics: Project Management, Enterprises

xAPI: Toward ROI of Enterprise learning & development

Posted by Shashank Merothiya on Aug 29, 2018 3:24:00 PM

Enterprises spend on training in many ways: offline, online, through learning management systems (LMSs), coaching, mentoring, and so on. But often there’s no real way to track how the training has impacted the business. And if someone doesn’t seem to have learned what they were trained for, they are slotted into more training sessions. And it just continues.

But that’s now changing: enterprises are measuring many activities and have a lot of data at hand that can be put to use for training.

Unleashing Data to Enable Learning

Take the example of a janitor cleaning a washroom. The company can track when he came in and when he signed out. And if there are sensors attached to the various soap dispensers and cleaning machines, there’s data on whether the right levels of soap were maintained, and whether the machines were operated at the right settings. To gather customer feedback, you can have simple button presses capture a Happy or Sad rating.

What has that to do with training?

Let’s say the time taken to clean a particular washroom is 10 minutes on average. By looking at the time each janitor spends, you know who is likely rushing through the job, and who is taking too much time. Map that to the customer feedback received, and you have more realistic data to go on. And you also have information on soap levels and machine settings.

You have all this data that can tell you which janitor is doing a great job, and which one’s not. And so, who needs to be trained? And in what area, and for what purpose.

Let’s say Paul, a janitor, doesn’t keep the soap dispensers at the prescribed levels. Instead of putting him through a training module that teaches him “How to Clean Washrooms”, you just create a micro-learning module for him: maybe a video that shows him exactly how to check the soap dispenser levels and do the refills. And it’s made available on his phone, so he doesn’t have to be called in for in-person training, or log into a straitjacketed LMS.

Did he check the module sent to him? How many times? Did he see the video? Did he see it on mute? You have data on all this, which enables your learning system to tell him what he is learning/missing.

All this can be automated, except for the micro-learning module creation, of course. So for a company that has tens of thousands of janitors across the world, training now becomes more personalized, and far more impactful.

xAPI, the Enabler

That’s the power of xAPI, or the Experience API. It’s a standard that defines how any application can interface with a system that stores learning data. So in the example above, the application that captures soap dispenser sensor data picks up the instances that pertain to learning and stores them in a learning record store (LRS), which is then pulled in by the company’s LMS.

Any activity that can be observed or recorded can be mapped into your LMS through xAPI. So potentially anything an enterprise has deployed as part of its IT setup can be used to extract data that serves as input for identifying further learning needs. This could be the ERP, collaboration platforms, helpdesk systems, performance management systems, and so on.
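To make this concrete, here is a minimal sketch of what pushing one such event into an LRS could look like. The statement structure (actor, verb, object) and the X-Experience-API-Version header come from the xAPI specification; the endpoint URL, credentials, and the verb and activity IRIs below are purely illustrative assumptions, not references to any real system.

import requests

# Hypothetical LRS endpoint and credentials -- replace with your own LRS details.
LRS_STATEMENTS_URL = "https://lrs.example.com/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")  # most LRSs accept HTTP Basic auth

# A minimal xAPI statement: "Paul refilled the soap dispenser in Washroom 3."
# The actor/verb/object triple is defined by the xAPI spec; the IRIs are made up.
statement = {
    "actor": {"name": "Paul", "mbox": "mailto:paul@example.com"},
    "verb": {
        "id": "https://example.com/verbs/refilled",
        "display": {"en-US": "refilled"},
    },
    "object": {
        "id": "https://example.com/activities/washroom-3/soap-dispenser",
        "definition": {"name": {"en-US": "Soap dispenser, Washroom 3"}},
    },
    "result": {
        # Sensor reading sent along as an extension, so the LMS can see fill levels.
        "extensions": {"https://example.com/extensions/fill-level": 0.85}
    },
    "timestamp": "2018-08-29T09:30:00Z",
}

# Every xAPI request must carry the version header.
response = requests.post(
    LRS_STATEMENTS_URL,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
print("Statement stored with id:", response.json()[0])  # the LRS returns the statement id(s)

The LMS, or any reporting tool, can later query the same LRS for Paul’s statements and decide, for example, whether to push him the soap-dispenser micro-learning video.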

That changes the way you look at learning & development, doesn’t it? No doubt, it is a great idea to have all learning modules up on the LMS, complete with quizzes, assignments, and scores, to ensure people spend time on what they need to know to do their job well. But with xAPI coming into play, you don’t have to force-fit everyone into the same training module. Someone with prior experience in the job can take an assessment test and, if she clears it, be put on the job right away. The data captured on her work can then be reviewed to see what areas she needs to work on so she can deliver the business impact the enterprise is aiming for.

Yes, that’s right. The entire learning focus can now be zeroed in on business impact. It could be about increasing sales or improving the bottomline. It could be ensuring safety at all times. The enterprise moves from a broad-based training scope to laser-focused micro-learning moments that reflect in business results.

With xAPI, enterprises now have a way to measure how their learning and development efforts are tangibly impacting the topline or bottomline.

Srijan is now helping enterprises deliver these systems and put them squarely on the road to L&D ROI. Our teams are also working with enterprises to revamp their existing learning systems and make them more effective.

Wish to drive greater ROI from your enterprise learning systems? Let's start the conversation and explore how Srijan can help. 

Topics: Drupal, Enterprises
