Why you need a DevOps Audit

Posted by Pavan Keshavamurthy on Dec 21, 2017 3:19:00 PM

You have convinced your team to start adopting DevOps practices. They have automated some processes, and are using tools like Jenkins and Travis CI for continuous integration. You see the team working more efficiently than before, but you have yet to see the promised “significant acceleration in the delivery pipeline”.

Sound familiar?

That could be because your team has gone the DevOps way, but not in the right manner. According to Mike Kavis, VP at Cloud Technology Partners, here’s what could be happening:

  • A New Silo: You hire a new set of people, the DevOps engineers, whose job is to take care of all the automation and work with the new tools. But that’s just adding another silo, and not really solving anything.

  • No Ops Needed: The Dev team decides to take up the infrastructure-as-code principle, and provision their own environments. Their work gets faster, but the challenges in networking, security compliance, and support still remain.

  • Rebranding Ops: The Ops team gets rebranded as a DevOps team. They’ve solved the problems with provisioning and deployment, but the Dev team’s challenges around configuration management, testing, and continuous integration still remain.

 

Your DevOps practice could have fallen into any, or all of these traps. 

How do you get from here to a stage where DevOps is bringing in measurable ROI?

You need to identify where you stand on the DevOps maturity model, and how to move forward. And that’s when you need a DevOps audit.

What to Expect Out of Your DevOps Audit

All enterprises, in their quest for adopting DevOps, go through the DevOps maturity model.


They usually move from being a siloed organization to slowly adopting different DevOps practices, starting with automation. And finally, they become a DevOps practitioner with one-touch deployment, faster bug resolutions, and reduced system failures.

At the initial stages, the team itself is concentrating on collaborating with each other, and integrating automation. But there’s no bird’s-eye view of where their DevOps practice is headed. 

A DevOps audit at this stage helps enterprises:

Benchmark their DevOps efforts against industry best practices

This involves identifying your position on the DevOps maturity model and assigning a DevOps maturity score for your enterprise. It’s helpful for the team if this score is presented as a breakdown of different aspects of the delivery cycle. For example: individual scores for performance and processes in terms of version control, deploy automation, lead times, failure notification, etc.
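As a rough illustration, such a breakdown can be rolled up into a single score. The practice names, weights, and 1–5 rating scale below are all illustrative assumptions, not part of any formal maturity model:

```python
# Hypothetical roll-up of per-practice scores (1-5 scale) into one
# weighted DevOps maturity score. Practices and weights are invented
# for illustration only.

PRACTICE_WEIGHTS = {
    "version_control": 0.2,
    "deploy_automation": 0.3,
    "lead_time": 0.3,
    "failure_notification": 0.2,
}

def maturity_score(scores):
    """Weighted average of per-practice scores, each rated 1-5."""
    return round(sum(PRACTICE_WEIGHTS[p] * s for p, s in scores.items()), 2)

print(maturity_score({
    "version_control": 4,       # solid Git practices
    "deploy_automation": 2,     # mostly manual deploys
    "lead_time": 2,             # slow commit-to-deploy
    "failure_notification": 3,  # partial alerting
}))  # → 2.6
```

The per-practice breakdown is what makes the score actionable: the low deploy-automation and lead-time numbers above point directly at where to focus next.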

Understand the current bottlenecks in their delivery pipeline

The audit should identify sections of your pipeline where you require better processes, or those that could cost you a successful roll-out. It should also present actionable insights that tell your team:

  • which processes to automate

  • what are the right tools for the job

  • what are the flaws in their current risk management and rollback strategies

 

The idea is to give your team a set of immediate next steps to work on.

Identify low-hanging fruit

You should expect your audit to pinpoint sections of the pipeline where DevOps practices can generate the highest initial impact. 

For example: If the QA team never gets adequate time to test each build, leading to increased bug reports, the audit should showcase that as your low-hanging fruit. 

In this case, you can have an automated script that performs a standard set of tests for each commit, before pushing the code forward. That will significantly speed up development, while ensuring fewer errors. 
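A minimal sketch of what such a per-commit gate could look like, wired into a CI job or a git hook. The test commands and directory layout are assumptions; substitute whatever suite your project actually uses:

```python
# Sketch of a per-commit test gate: run a fixed set of smoke checks and
# block the commit from moving forward if any of them fail. The commands
# and paths below are placeholders.
import subprocess
import sys

SMOKE_TESTS = [
    ["python", "-m", "pytest", "tests/smoke", "-q"],  # assumed test layout
    ["python", "-m", "flake8", "src"],                # assumed lint step
]

def gate(exit_codes):
    """The commit passes only if every check exited with status 0."""
    return all(code == 0 for code in exit_codes)

def run_gate():
    codes = [subprocess.call(cmd) for cmd in SMOKE_TESTS]
    if not gate(codes):
        print("Smoke tests failed -- blocking this commit.")
        sys.exit(1)
    print("Smoke tests passed -- pushing the code forward.")

# run_gate() would be invoked by the CI server or a pre-push hook.
```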

Chart out a Roadmap 

The audit should lay out a plan for your teams to advance on the DevOps maturity scale. You should expect a DevOps playbook that lays out the right practices and toolchains, to make your teams competent and competitive.

For enterprises to retain their competitive edge against industry disruptors, DevOps maturity is critical. Teams with a high-performing DevOps methodology can deploy code 46X more frequently, and have a 96X faster mean time to recovery. Those are hard-to-ignore gains.

So, are you ready to audit your DevOps practice? Drop us a line and explore how our DevOps consulting team can help.

Topics: CI/CD & DevOps, Framework and Libraries

How to chalk out your Enterprise DevOps adoption strategy

Posted by Pavan Keshavamurthy on Dec 15, 2017 5:45:00 PM

There are a lot of steps from product build to deployment, and also a lot of things that can, and do, go wrong. A few of these might sound familiar:

  • The Dev team adds a new feature right before release and the QA team does not have enough time to run the whole gamut of tests. The code is pushed to live and it’s too late before you realize that you missed a few bugs.

  • You update your product and release a new version. It worked fine for the developers and testers, but not when deployed on the production servers, because the Dev team missed informing the Ops team about updating a library or the database on the servers.

  • Your code is ready to go, but the Ops team says they will need a couple of days to configure all the environments to the given specifications. Or maybe a server goes down, and they take hours to get it back up, manually configuring it all over again.

 

Every product team has faced these challenges at some point in time. If your product development and delivery pipeline is largely manual, the only way to avoid these challenges is to work steadily and double-check to make sure you haven’t missed anything.

Meanwhile, that new disruptive start-up in your industry has already released the second version of a competing product. They are able to do this because of shorter and faster release cycles, ability to deploy quick fixes for bugs, and recover faster from system failures.

And this is probably why your enterprise has to start looking towards DevOps adoption. If you wish to stay ahead of the curve with your products and services, this is no longer a matter of choice. Besides resolving some of the current challenges your siloed teams face, DevOps is key to a faster product delivery pipeline.

However, enterprise-wide DevOps adoption is easier said than done. There is, of course, the initial resistance to changing established practices. Additionally, convincingly showcasing the value of DevOps to the entire organization is a challenge.

Mirco Hering, Principal Director at Accenture, points out one of the key reasons why DevOps adoption is derailed: “Some group goes off and implements DevOps practices (for example the testing center of excellence or the operations team) and they have great successes. Their individual results improve so that the automated regression now runs in minutes not hours and giving developers a development environment from the cloud takes hours not weeks. Yet the organization as a whole fails to see the benefits because the different practices are not compatible or too many dependencies continue to hinder real results.”

Hence, what is needed is a well-planned and executed DevOps adoption strategy that will produce measurable results. What we suggest is a two-phase roadmap:

Phase 1: Showcasing DevOps ROI
Phase 2: Identifying and Side-stepping the Trip Wires

Showcasing DevOps ROI

To accelerate time to market with DevOps, it has to be adopted by all teams across the organization. And that is easier to achieve when stakeholders get behind the idea of DevOps adoption. So the first step is to demonstrate to all stakeholders how DevOps can bring in significant benefits.

Here’s how to do that:

Evaluate Your Delivery Pipeline

Understanding the existing delivery pipeline is the first step. Get together all the process stakeholders. Map out your pipeline in complete detail, understanding each process, highlighting if it’s manual or automated, and how long it takes to complete it. Identify any inherent cause-effect relationships and dependencies in the pipeline.

Identify Process Bottlenecks

The next step is to identify all the existing process bottlenecks. 

This could be in terms of the time taken by the Ops team to configure a production environment. Or the time taken by the QA team to thoroughly test every feature addition. Or a mismatch in configurations between development and deployment servers. 

Identifying these bottlenecks helps you and other stakeholders realize that there is a need for adopting better practices. It also demonstrates where the distinct DevOps practices like infrastructure-as-code, automated test scripts, configuration management etc. will fit in.

This is also the right time to identify certain base performance metrics, which will help showcase the improvements made with DevOps.
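One simple way to make both steps concrete is to represent the mapped pipeline as data and let a script flag the slowest manual steps. The steps and timings below are invented for illustration; a real map comes out of the stakeholder exercise described above:

```python
# Toy model of a mapped delivery pipeline: (step, is_manual, hours).
# The entries are illustrative, not from a real engagement.

PIPELINE = [
    ("commit + code review",     False,  2),
    ("build",                    False,  1),
    ("QA regression testing",    True,  16),
    ("configure staging server", True,   8),
    ("deploy to production",     True,   4),
]

def bottlenecks(pipeline, top=2):
    """Return the longest-running manual steps, slowest first."""
    manual = [(step, hours) for step, is_manual, hours in pipeline if is_manual]
    return sorted(manual, key=lambda x: -x[1])[:top]

print(bottlenecks(PIPELINE))
# → [('QA regression testing', 16), ('configure staging server', 8)]
```

The hours per step also double as base performance metrics: re-measure them after each DevOps intervention to quantify the improvement.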

Choose Your Experimental Set

Often, applying DevOps principles to an isolated process is not enough to convince stakeholders of the benefits. It gets viewed as a one-off incident, and not something that would be replicable across the organization.

Your experimental set, i.e. the specific set of processes on which you want to apply DevOps practices, has to be chosen carefully. 

Ideally, it should meet the following criteria:

  • The processes are integrated and need to change together in order to work

  • They are important to the enterprise, and their improvement will deliver significant benefits

  • They can be optimized within a short period of time

 

For example, let’s say you choose server configuration as your experimental set, which will involve:

  • How you configure a development, testing, or production server, and how long it takes

  • How you update the servers when new product versions are released

  • How fast you can reconfigure a server that went down

 

This inter-related set of processes can be tackled with the DevOps practices of infrastructure-as-code and configuration management. And an improvement in your server configuration times means a huge acceleration in your delivery pipeline. 
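The property that makes infrastructure-as-code work for this experimental set is idempotence: applying the same configuration twice leaves the server in the same state. Here is a toy sketch of that idea; real teams would reach for a tool like Ansible, Chef, or Terraform rather than hand-rolled dicts:

```python
# Toy illustration of idempotent, code-driven server configuration.
# The desired state lives in code; applying it once or ten times
# converges on the same result.

DESIRED_STATE = {
    "packages": {"nginx", "postgresql"},
    "open_ports": {80, 443},
}

def apply_state(server, desired=DESIRED_STATE):
    """Converge a server (modelled as a dict) onto the desired state."""
    for key, wanted in desired.items():
        server.setdefault(key, set()).update(wanted)
    return server

server = {"packages": {"nginx"}}   # a half-configured box
apply_state(server)
apply_state(server)                # re-running changes nothing
print(server)
```

Because re-running is safe, the same code handles first-time provisioning, updates, and rebuilding a server that went down.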

Practise DevOps

With your experimental set of processes identified, start integrating DevOps practices from the ground up. The earlier in the process you introduce them, the better. The key here is to be conscious of the process changes and to make sure the team sticks with them. Tight feedback schedules and the team’s complete alignment with the final goal are crucial. 

As you work on optimizing these processes, the benefits of DevOps will start showing: in the time taken to deliver, the number of bugs reported, and the automations introduced and their impact on performance. These benefits must be documented and mapped against the base metrics identified earlier. 

Now your work is ready to be showcased to stakeholders as an example where DevOps created significant ROI for the enterprise. Once they see the measurable value additions, it’s easier for them to buy into the idea and make a strong push for DevOps adoption across the enterprise. 

Identifying and Side-stepping the Trip Wires

With key stakeholders on-board, the next phase is to make sure you foresee the challenges that can come up, and how to get over those. This involves taking a look at three key factors:

Organizational Practices

DevOps demands close collaboration between Dev and Ops teams, and a work culture that is focused on accepting and correcting mistakes rather than pointing fingers. This kind of cross-team cooperation could be difficult to get right on the first go. But recognizing the need for it, and enabling it across the organization, is the first step here.

Enterprise teams should also consciously extend agile development methodologies to the operations teams. This allows both teams to communicate in stand-ups and retros, building greater understanding and collaboration.

Legacy Modernization

Here’s where most enterprises trip up because they feel:

  • DevOps practices would only be effective with modern applications and platforms

  • Legacy modernization would mean completely doing away with the old architecture, which will involve significant expenditure

 

However, DevOps practices can work as easily with legacy platforms as with modern ones. Continuous integration, one-touch deployments, and agile release cycles are all possible on legacy platforms, with the right tools.

Moreover, legacy modernization does not always mean an expensive upheaval. In most cases, it can be achieved through adapting your heritage systems to modern methods. And DevOps practices can actually let you do this faster and in a more efficient manner.

System Reliability

The speed of delivery achieved through DevOps often raises concerns around system reliability.

As a concept, DevOps embraces failure as inevitable and concentrates on designing systems that can get back to work fast after a breakdown. There are tools like Netflix’s Chaos Monkey, which are designed to trigger randomized system failures. This pushes teams to build systems that are capable of healing fast in such conditions. 

So DevOps moves beyond reliability, towards achieving more resilient systems.

While this two-phase roadmap is a great starting point as enterprises start thinking about DevOps, you will have to tweak it to suit your particular organization. 

There are, of course, a lot of decisions to be made once you start down the DevOps way. The most important of those is choosing the right DevOps toolchain for your teams. This is also where DevOps consulting services like ours could lend a hand. 

Let's start the conversation about how we can power your competitive advantage with DevOps.

Topics: CI/CD & DevOps, Enterprises

How DevOps solves the challenges faced by siloed teams

Posted by Pavan Keshavamurthy on Dec 5, 2017 12:27:00 AM

The most critical factor that determines the quality of your product or service is how your Dev and Ops teams work together. If your Dev team is responsible just for building the product, and the Ops team is concerned only with maintaining availability, the product is in trouble.

DevOps integrates these two teams, making them jointly responsible for delivering a product to your customers’ satisfaction. Adopting the core DevOps principles, you achieve:

  • Robust, quality software, built and maintained using the best practices
  • Faster time to market with smaller but frequent releases
  • Proactive monitoring that identifies bugs and issues before they become baked into the product

 

Besides these key business benefits, DevOps principles also solve certain common challenges that siloed teams face:

Challenge 1: Whose fault is it?

With siloed Dev and Ops teams, every time something breaks, the first question is “Whose fault is it?” With skill-based teams like this, people tend to believe their skills are above reproach, and hence the problem must be with the other team. 

Solution: Culture

With DevOps, teams adopt a culture that’s open, trusting, and collaborative. This is best achieved by switching to project-based teams working towards a single goal.

Project-based teams make sure that the entire team, whether they are developing the product or maintaining it, are communicating with each other. The Ops people know how the Devs are working, their issues, and timelines. The Devs understand the kind of challenges that Ops face, making for easier release management. And when something breaks, the team’s first priority is fixing the problem, not pointing fingers.


Challenge 2: The Genius Bottleneck

All teams have that one person who takes the initiative to fix a problem whenever they see one. Maybe someone on your dev team is tired of asking the Ops team for new environments. They write scripts to quickly set up the virtual environments that they want. 

It’s an efficient solution but not a scalable one. Because only the person who wrote the scripts can troubleshoot them, and the team is stuck when that person is not around.

Solution: Automation

With DevOps, automation becomes a key characteristic of how a team works. The entire team adopts toolchains that ensure standardized automation for repetitive tasks. And each team member is capable of understanding and troubleshooting these automated processes.

Two of the most common automations adopted are:

Continuous Delivery: Every piece of code is run through a series of automated tests, and error-free code is packaged and deployed. That is why teams adopting DevOps have smaller releases, where it’s easier to identify and fix bugs.

Infrastructure as code: Provisioning becomes as simple as running modular code multiple times to set up identical instances. Any change in configuration, or even reprovisioning an instance, becomes a quick and reliable process, making life easier for the Ops team.
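The continuous-delivery flow described above can be compressed into a few lines: each change runs through automated stages in order, and only error-free code moves on. The stage names and checks here are illustrative stand-ins for real test, packaging, and deployment steps:

```python
# Sketch of a staged delivery pipeline: stop at the first failing stage,
# deploy only if everything passes.

def run_pipeline(change, stages):
    """Run a change through each stage in order; stop at the first failure."""
    completed = []
    for name, check in stages:
        if not check(change):
            return completed, "failed at " + name
        completed.append(name)
    return completed, "deployed"

STAGES = [
    ("unit tests",        lambda c: c["tests_pass"]),
    ("package",           lambda c: True),
    ("deploy to staging", lambda c: c["config_ok"]),
]

print(run_pipeline({"tests_pass": True, "config_ok": True}, STAGES))
print(run_pipeline({"tests_pass": False, "config_ok": True}, STAGES))
```

Because every change goes through the same stages, a failure is caught at the earliest stage that can detect it, which is what makes small, frequent releases safe.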


Challenge 3: Unplanned Work and Dealing with Failures

For any product or service, your systems are bound to break down at some point. That means a lot of firefighting and unplanned work for both the Dev and Ops teams. And while they are busy dealing with such immediate problems, there’s no time to think about developing strategies and practices that help anticipate and prepare for system failures.

Solution: Lean and Agile

While your development team already follows the agile and lean way of working, DevOps extends those practices to your Ops team as well. 

For the Ops team, being lean would mean learning to accept system failures as inevitable and preparing for them. Every emergency is not just a mad scramble to get things fixed, but also a close collaboration between the Dev and Ops teams to improve the system and prevent a repeat occurrence.

Additionally, with project-based teams, the Ops team is now also a part of the regular stand-ups and retros by the dev team. So they are always in the loop about the kind of work they can expect and plan accordingly.

If you aim to build great products and services, DevOps is non-negotiable. Srijan's DevOps consulting teams can help your teams assess current practices and identify the correct DevOps adoption roadmap. Let's start the conversation about how we can power your competitive advantage with DevOps.

Topics: Project Management, CI/CD & DevOps

The key elements of a High-Performing DevOps methodology

Posted by Pavan Keshavamurthy on Sep 4, 2017 3:45:00 PM

What should an enterprise do to improve the performance of their DevOps practice? More, or less communication? Remove policies that keep the Dev and Ops teams in silos? More, or less automation?

There are a hundred different ways of looking at this. And we might still not have clarity on what one can do to build a higher-performing DevOps methodology. But here’s something that can give us clues: the 2017 State of DevOps Report by Puppet and DORA (DevOps Research and Assessment).

The report, now in its sixth year, received responses from 3,200 people in the domain around the world. It “looks at the statistical relationships between IT performance, organizational performance, technical practices, cultural norms, and management”. While transformative leadership has emerged as key for DevOps success in this edition of the report, we will keep our focus on the technical aspects that the report highlights.

Let’s look at what the report has identified as key parameters that drive high performers in the DevOps space. But first, let’s look at how high performers do better than low performers.

 

The survey found that, in comparison to low-performing DevOps teams, the high performers have:

  • 46X more frequent code deployments
  • 440X faster lead time from commit to deploy
  • 96X faster mean time to recover from downtime
  • 5X lower change failure rate

 

What does that translate to? The table from the report illustrates:

[Table: DevOps methodology survey questions, from the 2017 State of DevOps Report]

Given the metrics we see above, there is a lot to be learned from the high-performance DevOps teams and their DevOps methodology. What does the report suggest?

Elements of a High-Performance DevOps Methodology

Loosely coupled architecture to practice continuous delivery

If the team is replacing or modifying a component or service, does it require them to make corresponding changes to the services or components dependent on it? If yes, then that’s a bottleneck for you to consider. For example, do minor changes to an application cause breakage? Or is the continuous integration/continuous deployment process crippled by deployments that are big and time-consuming? These are some warning signs that should make you reassess your architecture. A couple of approaches for this include:

  • Use of APIs, bounded contexts to decouple large domains
  • Use of test doubles and virtualization to test services or components in isolation
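As an example of the second bullet, a component can be tested in isolation by replacing the service it depends on with a test double. The inventory-service interface below is invented for the example:

```python
# Testing a checkout component in isolation using a test double for the
# inventory service it depends on -- no network, no waiting on another
# team's environment. The reserve() API is made up for illustration.
from unittest import mock

def checkout(cart, inventory_client):
    """Succeed only if every item in the cart can be reserved."""
    return all(inventory_client.reserve(item) for item in cart)

# A Mock stands in for the remote inventory service.
fake_inventory = mock.Mock()
fake_inventory.reserve.return_value = True

print(checkout(["book", "pen"], fake_inventory))  # → True
```

The `checkout` logic gets exercised without the real inventory service ever being deployed, which is exactly the decoupling the report associates with continuous delivery.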

 

Similarly, can delivery teams test, deploy, and change their systems without depending on other teams? This could be for additional work, resources, or approvals. And can this be done with less back-and-forth communication? Do they need to check schedules with many people, just to get their job moving? The report found that high-performing teams have less dependency on other teams, thereby speeding up release cycles.

Daily code merging in trunk-based development

Periodically merging code to trunk is part of the development workflow for most teams. However, what should be the periodicity or frequency of the merging? To get higher software delivery performance, merge code into trunk on a daily basis, have branches or forks with lifetimes less than a day, and don’t have more than three active branches at a time.

That may sound contrary to the software development workflows most teams follow, but it works. 

The survey found that teams that do not have code lock periods have higher software delivery performance. Developers in high-performing teams work in small batches and develop off of trunk or master rather than long-lived branches, says the report. High performers keep branch integration times to just a few hours, while low performers typically take days. 
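A team could police these limits with a small script. In practice the branch list would come from something like `git for-each-ref`; here the data is passed in directly so the logic stays visible:

```python
# Flag branches that have outlived the trunk-based-development limit.
# Branch names and timestamps are illustrative.
from datetime import datetime, timedelta

def stale_branches(branches, now, max_age=timedelta(days=1)):
    """Return names of branches whose last commit is older than max_age."""
    return [name for name, last_commit in branches if now - last_commit > max_age]

now = datetime(2017, 9, 4, 12, 0)
branches = [
    ("feature/login",  datetime(2017, 9, 4, 9, 0)),  # 3 hours old -- fine
    ("feature/search", datetime(2017, 9, 1, 9, 0)),  # 3 days old -- over the limit
]
print(stale_branches(branches, now))  # → ['feature/search']
```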

More automation

The report says that high performers are doing significantly less manual work than low performers, and so have automated:

  • 33 percent more of their configuration management.
  • 27 percent more of their testing.
  • 30 percent more of their deployments.
  • 27 percent more of their change approval processes.

 

With more automation comes more time for the team to do value-creating activities, like building out new features. The report cites that “by undertaking a continuous improvement initiative and investing in automation — including significant investments in automated testing — HP LaserJet was able to increase time spent on developing new features by 700 percent.”

As the report’s authors put it: “What we've also learned, in our work with teams and in conducting research, is that people are not very good at estimating how automated they are. What they are good at is estimating the percentage of their work that is still done manually. That's not surprising: Manual work is painful, so people are highly aware of it. Once work is automated, it's no longer painful and tends to disappear from people's attention.”

[Table: percentage of work done manually, by performer group]

The table shows high IT performers report the lowest amount of manual work across all practices — and therefore, the highest amount of automation.  

While medium performers do more manual work than low performers when it comes to deployment and change approval processes, the authors regard this as a temporary stage, in which organizations add more manual controls around changes, inevitably slowing them down. The recommendation is not to give in to that temptation, and instead to move the change review process earlier in the development cycle.

Quality and security

It is one thing to be able to deploy code rapidly, and on demand. But what of quality and security? The report uses unplanned work and rework as proxies for quality, and found that high-performing organizations spend 21 percent less time on unplanned work and rework. So they can spend 44 percent more time on new work, such as new features or code. They also spend 50 percent less time remediating security issues than low performers. These results point to the fact that dev teams need to involve security and quality teams early in the development process.

These are some of the key technical intervention areas that can lead to a higher-performing DevOps team.

What are some of the issues your teams face in their DevOps processes? Drop a mail here to brainstorm about your challenges and explore how our DevOps consulting services can help you resolve them. 

Topics: CI/CD & DevOps, Architecture

The basics of Jenkins

Posted by Deepshikha Singh on Apr 19, 2017 6:14:00 PM

Continuous Integration (CI) is a process where developers integrate their code at regular intervals. Code is regularly committed to a shared repository, and a build is triggered. It is important that when new code is committed, existing functionality does not get broken. And that is ensured by Jenkins.

The Jenkins tool is a popular open source automation software used for the continuous integration and continuous deployment process. It triggers builds and runs functional tests, UI tests, and smoke tests. It can be used for deployment and static analysis as well.

At Srijan, we have used the Jenkins tool in almost every project to smooth the CI/CD process for smart and quick delivery. Some of the key areas where Jenkins was the perfect tool to use are:

  • Jenkins was used as a gatekeeper for pushing code to the GitHub repository. The client had put in place certain checks related to code standards, plus some smoke tests. So any code pushed to the GitHub repo has to pass those checks before it is merged.
  • The Srijan team has scheduled Jenkins jobs to trigger at predefined times and send across reports via email and Slack.
  • A Jenkins job/build was triggered on every pull request (PR) raised by a team member. A pull request is raised by a team member who pushes their code to the GitHub repo to get it merged with the master branch, so that we have stable code in the master branch.
  • Jenkins is very useful in sending reports to the client on a daily basis, so we configure the job to run at a predefined time. We don’t need to trigger the job manually; Jenkins does this for us and sends out the reports itself.

Now that we know what’s possible with Jenkins, let’s take a closer look at what the tool is, and how it works during the development lifecycle.

Basic Features of Jenkins

  • It is coded in Java and can be used across operating systems like Windows, Ubuntu, Debian, Mac OS X, Red Hat, FreeBSD, OpenBSD, Gentoo, Fedora, and CentOS.
  • It triggers a build when new code is committed to the Git repository. Once the build is triggered, it shows the number of builds that succeeded or failed, the time of the last build success and failure, and preferably which parts of a pipeline succeeded or failed.
  • Jenkins deploys code on the test server (or any desired server) and triggers automated tests.
  • It generates test results that can be emailed to the concerned teams.
  • It is a dominant tool with numerous active installations and users across the world, so there is a lot of support within the community.

How Does Jenkins Work

Jenkins can be used for automating the entire CI process. Let us look at how Jenkins works in the software development lifecycle:

  • A developer commits code to the source code repository. The Jenkins tool detects the changes that have occurred in the source code repository as it checks the repository at regular intervals.
  • If there is a commit, Jenkins will start preparing a new build with the new changes.
  • If the build is successful, Jenkins deploys the build on the test server. If the build fails, then the concerned team is notified. It gives a basic report of build successes and failures, and the build trend.
  • After testing, Jenkins generates feedback. It generates a test report for the development and QA teams indicating the test results. 
  • It will continue to check for new source code. As a result, the entire CI process is automated, leading to higher efficiency and faster software delivery process.
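The cycle above boils down to a detect → build → deploy-or-notify loop. The sketch below mirrors that decision flow only; it is not Jenkins code, and the build/deploy/notify callables are stand-ins for the real steps:

```python
# One pass of the CI decision flow a server like Jenkins automates:
# if there's a new commit, build it; deploy on success, notify on failure.

def ci_cycle(new_commit, build, deploy, notify):
    """Process one (possibly absent) new commit and report the outcome."""
    if new_commit is None:
        return "idle"              # nothing changed since the last poll
    if not build(new_commit):
        notify("build failed for " + new_commit)
        return "notified"
    deploy(new_commit)
    return "deployed"

# Minimal stand-ins for the real build, deploy, and notification steps:
alerts = []
print(ci_cycle("abc123", build=lambda c: True,
               deploy=lambda c: None, notify=alerts.append))  # → deployed
print(ci_cycle("def456", build=lambda c: False,
               deploy=lambda c: None, notify=alerts.append))  # → notified
print(alerts)  # → ['build failed for def456']
```

Jenkins repeats this loop automatically on every repository change, which is what makes the CI process hands-off.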

Advantages of Jenkins

  • The Jenkins tool is widely used, and hence has very strong community support. This means help is readily available for all issues and bottlenecks.
  • It provides about 1,000 plugins that integrate with different applications in the DevOps process. This makes Jenkins quite extensible. The best part is, you can write your own plugin if you do not find one that suits your requirements.
  • For example, it can integrate with version control systems like Git and Subversion. It can work with build tools like Maven and Ant. It has plugins to integrate with tools like JIRA and Bugzilla, for reporting issues and tracking the builds where those issues appear.
  • There are constant incremental iterative improvements.
  • It is an open source, cross-platform tool.
  • Jenkins can generate test reports that can be viewed on the web UI.
  • It can deploy directly to production or test environments making the CI process automated.

Disadvantages of Jenkins

  • Jenkins is not compliant with Sarbanes-Oxley, ITIL, or COBIT, which might be required by some clients.
  • Jenkins might not be the best tool for .NET projects, as there are other tools that provide more features and tighter integration with .NET tooling.
  • It does not support two-factor authentication.

A robust CI/CD system needs to provide all the functions required for DevOps, and also be flexible enough to run in different environments. All said and done, the Jenkins tool is great for the CI/CD process, as it automates most DevOps tasks and lets teams add plugins as per their requirements. 

You can also check out our posts on working with Behat, and Selenium, and our webinar on how to manage QA tasks in an agile project. Hope you find them insightful.

Topics: CI/CD & DevOps

Continuous development: ensuring dedicated support & maintenance on flexible budgets

Posted by Rudra Singh on May 20, 2016 4:51:00 PM

In the web development industry, hourly support models are the accepted practice when dealing with typical support & maintenance projects. Usually, companies opt for a certain number of hours of support per month, and get billed accordingly.

Till some time back, Srijan also offered the hourly support model. This meant that we dealt with client issues as and when they came up, on an ad hoc basis. However, we have come to realize that this model might not be a very efficient one, both for our clients as well as for us. Here’s why: 

  • Since we are dealing with issues on an ad hoc basis, it is difficult to predict the number of hours of support that will be required in any given month. This means we are unable to plan client support hours across various clients and hence unable to serve them to the best of our ability.
  • There is no dedicated team working for specific clients. Hence things take longer to get fixed, as teams take time to first understand the problem before getting started on the solution.
  • Our resources and team members are also not optimally utilized in this model.

The Continuous Development Model

In order to put this dilemma to rest, and to serve our clients better, we adopted the Continuous Development Model. Now Srijan teams work with our clients for a set number of weeks every month, rather than the entire month. 

[Diagram: the continuous development model]

For example, once the team is deployed, they could work with the client in monthly cycles: a two-week development sprint followed by two weeks of planning. 

The development sprints cover new feature buildouts, bug fixes, quality assurance, module upgrades, and security patches. Throughout the sprints, teams are in constant communication with the client, with daily review calls and scheduled weekly demos. During the two-week planning phase, we look at all the work that needs to be taken up in the next sprint, and ensure that there is a rolling backlog for us to work on every sprint.

The Continuous Development Model has three distinct benefits for our clients:

  • Dedicated Teams: Since all the work for a particular client is done in a set timeframe, we can ensure that the same set of people works on the project. We assign a dedicated team that has all the skills the project requires and works exclusively on it, every two weeks. They don’t have to spend time getting familiar with the project before attempting to resolve any issues. This is a team that knows the project inside out, and is thus able to provide faster and smarter solutions.

  • Flexibility: Not all projects involve the same amount of work. We work with a lot of clients who only require a few bug fixes or one new piece of functionality in an entire month, and most clients do not have enough backlog to keep a development team engaged for the whole month. In the continuous development model, clients get a flexible team comprising only the skill sets they require in any given sprint. There is also flexibility in the duration of the development sprints: depending on the amount of work, clients can agree upon a one- or two-week sprint.

  • Budget: Our teams work with the client for the entire duration of the project, and are ready to provide assistance at crucial moments, even outside their allotted sprints. However, the cost incurred by our client is only for the one or two weeks of sprints per month, throughout the project. This is significantly lower than the cost of deploying a team for the entire period.

 

The efficacy of the model also depends upon the industry the client operates in, and the exact requirements of the project. We still work with a few of our clients on the hourly support model, since that is what works best for them.

Besides continuous deployment & development, Srijan also works with enterprises on the distributed agile development model.

Srijan strives to optimally serve our clients. And that is why we make sure that we first understand the requirements of every client project, and then agree upon the kind of development model that suits them best. If you feel that the Continuous Development model is something that would suit your business needs, just drop us a line below, and we will be glad to have a conversation with you.

Topics: CI/CD & DevOps

Client wrangling: How to deliver successful Drupal projects

Posted by Nilanjana on Jul 8, 2015 3:19:00 PM

You know a keynote is well received when the audience is in splits and breaking into applause every now and then. That, in short, describes Jacob Singh’s keynote at Drupal Camp Delhi 2013. Jacob is the Regional Director, Acquia India. While Jacob kept the mood light, he made some powerful points about Drupal and its acceptance across the world.

 

Successful, profitable projects are hard to achieve, but Client Wrangling makes them much easier. The main issue for a business is to extract enough information from the client to build the basic architecture, so Client Wrangling is the first step towards understanding client needs and expectations. Srijan invited Susan Rust, Drupal Business Consultant at Drupal Anywhere, to lead a Srijan Wednesday Webinar, where she emphasized the need to build processes that help Drupal shops scale their services while running profitably. You can catch the recording of the webinar here.

According to Susan, there are five main issues that come under Client Wrangling.

 

Close the Gap: The first step is to manage client expectations, and coders and developers usually do not understand this gap. It is more of a communication gap than a technology one, and it can keep widening if false expectations are set. Coders and developers communicate in Drupal, but the client does not understand Drupal; they only know their business. A company can end up losing a lot of money due to this gap, so training plays a vital role here.

 

Reduce Risks: It’s important to define the goals beforehand, and how your team is going to deliver the project. You have to give the client the best possible description of the project. If a client is not listening to your advice, it can be risky for both the client and you. There have been times when clients start dictating what modules they want, and that creates a lot of problems; they should instead just tell you their business needs. You should know their business model and how it will be translated into Drupal. There have been instances when the client simply changes the framework of the final product! That’s why it’s so important to manage risks: it will save you a lot of time and money in the end.

 

Build Trust: You can only develop trust with the client if your team understands the client’s business and requirements. Give your team the responsibility and authority to make decisions; with the right process, everything falls into place. The project manager’s role is very important here. Most of the time they are handling a lot of projects, and they fail to deliver and build trust because they get caught up in micromanaging. Clients trust velocity, not chaos. Velocity can only be delivered by the right team, one that is clear about its goals, where everyone has a say and anyone can raise concerns and issues at any time.

 

Communicate with Customers: The client and you have different expectations, and communicating about Drupal is a complex task, so sometimes you need to be direct with clients. You have to understand which clients you need to say No to, and whose unrealistic expectations you need to lower. If you use the correct terminology to communicate, you will save a lot of money.

 

Create Processes: Developing the right processes is extremely critical, and generating tickets is a major step in this, usually done through a ticketing tool. Say a developer builds a view under a ticket, which is then passed to the project manager, who hands it to the themer, and it finally gets tested by the QA person. If, after all this, the client finds a fault, the ticket goes back into the process. Hence a correct and transparent process needs to be followed from the beginning, or the project loses money.
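The hand-off described above can be sketched as a simple state machine. The role names below are illustrative, not tied to any particular ticketing tool:

```shell
# Each ticket moves through a fixed chain of roles; a fault found by
# the client sends it back to the start of the chain.
next_stage() {
  case "$1" in
    "Developer")       echo "Project Manager" ;;
    "Project Manager") echo "Themer" ;;
    "Themer")          echo "QA" ;;
    "QA")              echo "Client Review" ;;
    "Client Review")   echo "Done" ;;
  esac
}

reopen() {
  # A client-reported fault restarts the whole process.
  echo "Developer"
}

next_stage "Themer"   # QA
reopen                # Developer
```

Making the stages explicit like this is what makes the process transparent: everyone can see where a ticket is, and where it goes next.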

Topics: Drupal, CI/CD & DevOps

Learning to be a ScrumMaster

Posted by admin on Feb 7, 2015 5:28:00 PM

Though I had a fair knowledge of Scrum, I had never had an opportunity to practice it, and had never felt the heat of being a Scrum team member. But recently, I got a chance to work as a ScrumMaster for a team in Srijan’s Goa office.

When I joined the Goa office, Avienaash (our Agile coach) gave me three books on Scrum to read in my spare time. I read the one that looked the thinnest and started practicing it. Srijan already has mature Scrum and Agile practices, all thanks to Avienaash. Hence, it was easy for me to get started.

I had two challenges in front of me even before I took up the role of ScrumMaster (SM): I had not practised Scrum before, and I did not have a technical background. Coming from an entrepreneurial background, I did have the advantage of understanding the market and the product easily. Getting into the shoes of the customer has always been easy for me, but the ScrumMaster role demanded more.

According to Mike Cohn: 

“Many who are new to the Scrum Master role struggle with the apparent contradiction of the Scrum Master as both a servant-leader to the team and also someone with no authority. The seeming contradiction disappears when we realize that although the Scrum Master has no authority over Scrum team members, the Scrum Master does have authority over the process.”

Different people can have different sets of challenges as per their ability and mindset. I came across the following challenges:

  • Fear of not knowing something which could be obvious to the team - The only way to deal with this is to read as much as you can. As an SM, I should have complete knowledge of Scrum, so I started by reading Mike Cohn’s Succeeding with Agile, and it gave me enormous confidence. Trust me, when you practice along with reading, it becomes easier to connect the dots and the learning curve gets shorter. Technical jargon is still a problem for me, but I do try to go back and read up on it.

  • Understanding Impediments - This has been a problem for me, especially with technical bottlenecks, because I don’t have a tech background. As an SM, I should know the impediments, but it is even more important to understand how to find the right solution. There is a chance that team members have not given enough thought to other consequences, so asking questions is an important aspect of being an SM, to help the team understand the real problem.

  • When to seek help outside - The team’s ‘never give up’ attitude is good, but it can be a hindrance when strict project timelines are also a factor of success. At some point, external help becomes unavoidable, but that is not an easy call to take. External help can dent team morale, because when the team solves a problem by themselves, they get a sense of achievement, and a good team tends to grow closer in times of crisis. So trusting your team’s ability is important here; let the team decide when they need help.
  • Having an effective daily standup - The daily Scrum meeting at the start of the day is important, as it brings discipline to the team and sets the tone for the day. Some usual problems are:

    • Coming late
    • Connecting with distributed team members
    • Low energy levels
    • Not listening attentively to others’ updates
    • Team members facing the SM while giving their update, instead of the team

The essence of a daily standup is not just to give an update to the SM, but to sync up with each other on how we are doing as a team. As the ScrumMaster, I play the role of a moderator in this daily standup meeting. Most of the above-mentioned problems got resolved by consistently bringing them up in our retrospection meetings.

  • Sharing my update in the daily standup in the format of “What I did yesterday” & “What will I do today” - I work on multiple things throughout the day, but filtering out what is relevant and valuable for the daily standup is tricky. Committing to a task for the day in the daily standup has not been as easy for me as for my fellow team members. They can pick a task from the sprint backlog, but as the SM, I have to go with the flow, as the situation demands.

In Scrum, every day unfolds something new, hence identifying tasks and prioritising them is quite challenging. I am not convinced that an SM should give updates in the same format as the team does, since the SM has a completely different role. However, an SM must give updates on the impediments identified the previous day, change requests, budget, new developments, the product backlog, the status of sprint planning, UAT, etc.

  • Writing user stories on behalf of a PO who is far away - Acting as a proxy PO is not easy, because the team expects the SM to have 100 percent clarity on user stories. It requires intensive thinking to ask the PO all the probable questions before implementation starts. I managed this by setting the team’s expectations at the beginning of the sprint: around 20 percent ambiguity in stories is recommended, to keep them negotiable at implementation time.

You can also gain clarity by discussing stories with the team while writing them, and by involving QA in writing the acceptance criteria. Doing the story estimation exercise with the team also helps, as the team gets an idea of the backlog in advance. The point the team needs to understand is that in Scrum, discussion is valued over excessive documentation (a detailed story).

  • Engaging the designer - Designers always work one sprint ahead of the team, hence it is important to plan dependencies early. Sometimes you don’t realise which user story will trigger a change in the UX, so it is important to evaluate every task from a designer’s perspective. Cohesion between the development team and the designer is also very important.
  • Focussing on my own work while keeping an eye on how we are doing as a team - This becomes even more difficult when team members are not co-located and work across different time zones. To deal with this, I created a checklist for myself, which helped me keep an eye on every aspect. Later, I found the Scrum Master’s Checklist, which is very useful for evaluating how I am doing as an SM.
  • Estimation vs committing - Whenever we could not finish all the committed stories in a particular sprint, we would blame estimation. However, that was not always the reason, because estimation and commitment of stories in a sprint are different things. So, while committing to stories, the following should be considered:

    • Actual productivity fluctuates due to external factors, hence 70 percent is the maximum output we should expect, not 100 percent
    • A good mix of tasks between developers and QA, which depends upon the ratio of developers to QA in the team
    • Offs and holidays, because we sometimes forget these while planning the sprint
    • The nature of stories/tasks and their dependencies
    • The frequency of deployment/release/UAT

  • Better Sprint Planning - Planning becomes cumbersome when you try to do it at the last moment, so planning two sprints in advance is important, because it takes some time to get clarity on the stories. For example, some user stories require a POC to be performed even before estimation, hence that POC needs to be picked up at least one sprint earlier. Understanding such dependencies is key for a smooth implementation. It is important to involve the team in backlog estimation from time to time. I kept 10 percent of my team’s time in each sprint for planning the next sprint, which helped the team deal with last-minute surprises. Consistent grooming of the product backlog is key for better sprint planning.

  • Effective Retrospection - To do an effective retrospection, it is important to understand what we want to achieve as a team, and what needs to be done to achieve it. For example, ask yourself, "How are we doing on XP practices?" "How can the team be more productive?" My contribution to the retrospection meeting improved drastically when I learned how an ideal agile team should work.
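The considerations listed under "Estimation vs committing" can be turned into a rough capacity check before committing to stories. The numbers below are illustrative assumptions, not figures from our team:

```shell
# Rough sprint-capacity sketch: available person-days, capped at the
# 70 percent focus factor suggested above (integer arithmetic).
members=5        # team size (assumed)
sprint_days=10   # working days in the sprint (assumed)
days_off=3       # person-days lost to offs and holidays (assumed)

available=$((members * sprint_days - days_off))
capacity=$((available * 70 / 100))

echo "commit at most $capacity person-days"   # commit at most 32 person-days
```

The point of the 70 percent cap is to leave room for the external factors that always eat into a sprint, rather than committing the team to its theoretical maximum.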

My constant endeavour to find the root cause of every problem has helped me overcome most of them. I would retrospect on my performance almost every day, which kept me on my toes to improve. I think when you know the exact problem statement, the solution comes easily. Avie has played a great role by providing constant feedback on where I lag. I am still learning and trying to improve.

Topics: Project Management, CI/CD & DevOps, Agile

A new deployment method: Structural Level Migration

Posted by Nilanjana on May 16, 2013 4:47:00 PM

One of our long-running projects follows an approach of releasing several new features together. This means that hundreds of features get developed and tested, but do not get deployed for many months. Since developers move across projects, tracking the features and how they were developed proves challenging, especially when it comes to deployment.

So recently, when the time for deployment came, our team was in a fix. We tried a manual deployment, but it proved extremely time-consuming.

The second option was to handle the deployment through Drupal’s Features module. When we tried that, we realized we needed to write more code just to deploy, and even then, many of the new features were not reflected. So deploying through Features was not only time-consuming, it also failed to deploy the features properly.

That’s when we thought of doing a structural-level migration of information from one database to another. This is not the usual method for a deployment, but with so many features to be deployed, it seemed like a good way to go.

And we were right. We looked for the places where the information resided and moved blocks of those to enable the deployment. Here’s what we did.

We collected the names of the schemas/tables holding the information about views, blocks, panels, rules, content types, permissions, etc. Our strategy was to carry over only those tables which had structural data populated in them, and we identified the tables holding this information.

STRUCTURAL DUMP

FileName: structure.sql

views_display, views_view, block, block_custom, block_role, blockcache_alter,
panels_display, panels_pane, custom_breadcrumbs_panels, page_manager_handlers,
page_manager_pages, context, facebook_rules_user_tokens, rules_config,
rules_dependencies, rules_tags, rules_trigger, facetapi, quicktabs, menu_router,
menu_links, i18n_string, role_permission, locales_target, locales_source,
image_effects, image_styles

Since there were many content-type-level changes, we compiled the information from the variable table and exported it. If you are going to try this, please take care that you bring all the settings into the Dev server and compare them with the variable table on the live server.

VARIABLE DUMP

Filename: variable.sql

SELECT * FROM variable
WHERE (name LIKE '%academy%' OR name LIKE '%opportunity_follow%'
  OR name LIKE '%apps_type%' OR name LIKE '%ask_question%' OR name LIKE '%campaign%'
  OR name LIKE '%club_rank%' OR name LIKE '%competition%' OR name LIKE '%competition_group%'
  OR name LIKE '%competition_winner%' OR name LIKE '%group_invites%' OR name LIKE '%project%'
  OR name LIKE '%report_abuse%' OR name LIKE '%volunteering_hours%' OR name LIKE '%volunteering_rank%'
  OR name LIKE '%htmlmail_postfilter%' OR name LIKE '%mail_system%' OR name LIKE '%htmlmail%'
  OR name LIKE '%ant_pattern_%' OR name LIKE '%privatemsg%' OR name LIKE '%print_pdf_%'
  OR name LIKE '%rules_forms_event_info%' OR name LIKE '%vh_state_%' OR name LIKE '%r4032login_%'
  OR name LIKE '%date_format_%')
  AND name NOT LIKE '%solr%'

Note: We replaced INSERT INTO with REPLACE INTO to do a forceful entry.
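Assuming the dump comes from mysqldump, where each INSERT statement starts at the beginning of a line, that rewrite can be done in one pass with sed; the sample line below is only a stand-in for the real dump:

```shell
# variable.sql normally comes from the query above; create a one-line
# sample here so the command can be demonstrated standalone.
printf "INSERT INTO variable VALUES ('site_name','Srijan');\n" > variable.sql

# Turn INSERT statements into REPLACE statements, so existing rows on
# the live server are overwritten instead of raising duplicate-key errors.
sed 's/^INSERT INTO/REPLACE INTO/' variable.sql > variable_replace.sql
cat variable_replace.sql
```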

Here are the steps for the deployment on the live server from the staging server.

1. Take the live Site into the offline mode.
Admin Path: en/admin/config/development/maintenance
2. Take the live server database backup
3. Deploy a tag on the live server from the staging server.
4. Export the content types from the staging server and import it to the live server.
5. Enable the required modules on the live server.
6. Apply the structural changes to the live server.
drush sqlc < structure.sql
7. Apply the variable level settings changes to the live server.
drush sqlc < variable.sql
8. Run the updates on the live server.
drush updb
9. Clear the cache from the live site.
drush cc all
10. Take the site off from the maintenance mode.
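The ten steps above can be collected into a single script. This is a sketch, not the script we actually ran: the drush commands are the Drupal 7-era ones (`vset maintenance_mode`, `sql-dump`), the code and content-type steps are site-specific and left as comments, and DRY_RUN is a hypothetical safety switch that only prints the commands (the default here):

```shell
#!/bin/bash
set -e
DRUSH="${DRUSH:-drush}"   # assumption: drush is configured for the live site

run() {
  # With DRY_RUN=1 (the default) the script only prints what it would do.
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else eval "$@"; fi
}

run "$DRUSH vset maintenance_mode 1"            # 1. take the site offline
run "$DRUSH sql-dump --result-file=backup.sql"  # 2. back up the live database
# 3. deploy the tag from the staging server (VCS step, site-specific)
# 4. export the content types from staging and import them (site-specific)
# 5. enable the required modules (drush en <module> -y)
run "$DRUSH sqlc < structure.sql"               # 6. apply the structural changes
run "$DRUSH sqlc < variable.sql"                # 7. apply the variable-level settings
run "$DRUSH updb -y"                            # 8. run the updates
run "$DRUSH cc all"                             # 9. clear the cache
run "$DRUSH vset maintenance_mode 0"            # 10. take the site out of maintenance mode
```

Run it once with the default DRY_RUN to review the command sequence, then with DRY_RUN=0 to execute.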

The deployment is done! It took us just 20 minutes to complete all these steps and make the site go live.

Topics: CI/CD & DevOps, Architecture

Jenkins - the swiss knife for continuous integration

Posted by Nilanjana on Mar 24, 2011 3:08:00 PM

Before discussing the advantages of using Jenkins, let’s understand the term Continuous Integration.

 

Continuous Integration 

(as defined at Wikipedia): 

"Continuous integration (CI) implements continuous processes of applying quality control — small pieces of effort, applied frequently. Continuous integration aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development"

Objective

Our objective at Srijan has been to improve the quality of the code we write. As part of this initiative, we have been taking several baby steps, including getting the help of external consultants to write better Test Case documents. A large part of these Test Cases is being automated in our projects going forward, using Selenium RC.

However, one major bottleneck has been code reviews.

Enter Jenkins!

Our objective has been to solve basic code-writing problems within the Drupal environment. Here are the basic checks we felt we could automate as part of code reviews:

  • Check the PHP syntax [though IDEs like Eclipse and NetBeans already give you warnings]
  • Check for the Drupal coding standards
  • Check if a core module has been hacked (not permitted at Srijan)
  • Check the performance of the site
  • Check which pages are taking more memory
  • Run the Selenium test cases already written (on-demand or via cron); [we’re shortly going to be working on clustering, to ensure server load does not increase on the staging and/or production environments]

 

Jenkins is a Java-based CI engine used to automate developer and QA jobs [excluding UI testing], with notifications when the agreed code standards are deviated from. Jenkins was originally designed for Java developers using Ant or Maven.

To tame Jenkins into obeying the rules of Drupal, the following modules have been used:

  • Drush
  • Coder
  • Hacked
  • A fork of Droptor

Jenkins Vocabulary

Job: Any automated task run by Jenkins
Workspace: The directory where all source files are checked out
Build: The steps which are executed when a job is run
Status: A build has one of two statuses, Success or Failure

Process

The Jenkins reporting is based on a shell script which uses Jenkins’ environment variables for flexibility. The script starts by setting environment variables for the PHP binary, drush, the Drupal path, the modules directory, and the site URL.

#!/bin/bash
#set -x

PHP=/usr/bin/php
DRUSH_PATH=/var/www/drupal/drush
DRUPAL_PATH=/var/www/drupal/thesite
MODULES_DIR=$DRUPAL_PATH/sites/all/modules
SITE="http://example.com/"

 

We now set the drush command alias so it is accessible throughout the script:

DRUSH="$PHP $DRUSH_PATH/drush.php -n -r $DRUPAL_PATH -i $DRUPAL_PATH -l $SITE"

EXITVAL=0

 

Now that the basic environment is set up, we begin by checking the basic PHP syntax of the themes. This is done by finding all files in the themes folder with a .tpl.php extension that contain <?php:

THEMES_PHP_FILES=`/usr/bin/find $THEME_DIR -type f \( -iname "*.tpl.php" \) -exec grep -q '<?php' {} \; -print`
for theme_file in $THEMES_PHP_FILES; do
  $PHP -l $theme_file
  if [ $? != 0 ]; then
    let "EXITVAL += 1"
    echo "$theme_file failed PHP lint test."
    exit $EXITVAL
  fi
done
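The same check can be extended to custom module code. Below is a sketch that reuses the variable names from the top of the script, with fallback values so it can run on its own; the file extensions checked are an assumption about where module PHP lives:

```shell
#!/bin/bash
# Lint custom module code (.module, .inc, .install) the same way the
# theme templates are linted above. PHP and MODULES_DIR fall back to
# the values used earlier in this post.
PHP="${PHP:-/usr/bin/php}"
MODULES_DIR="${MODULES_DIR:-/var/www/drupal/thesite/sites/all/modules}"
EXITVAL=0

MODULE_PHP_FILES=$(find "$MODULES_DIR" -type f \
  \( -iname "*.module" -o -iname "*.inc" -o -iname "*.install" \) 2>/dev/null)
for module_file in $MODULE_PHP_FILES; do
  if ! "$PHP" -l "$module_file"; then
    EXITVAL=$((EXITVAL + 1))
    echo "$module_file failed PHP lint test."
    exit $EXITVAL
  fi
done
echo "custom modules passed PHP lint"
```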

 

The same logic needs to be applied to the custom modules. Once the PHP syntax is checked and passes, we proceed to collect some valid URLs for benchmarking, either through a crawler or a simple shell command:

LINKS=`/usr/bin/wget -q -O - $SITE | egrep -i '<a href="[a-zA-Z/-]+"' "$@" | sed -e 's|.*<[aA][  ][  ]*[hH][rR][eE][fF][ ]*=[  ]*"*\([^">][^">]*\)"*[^>]*>.*|\1|g'`

for L in $LINKS; do
  CRAWLURL=`echo "$L" | grep -v ".pdf" | grep -v "http:" | grep -v ".jpg"`
  echo "$SITE$CRAWLURL" >> /tmp/"$URLFILE.txt"
done

 

Before benchmarking, we make sure all caches are cleared, which is easily done with a drush command:

$DRUSH cc all

 

We warm up the cache by hitting each of the URLs at least once.

/usr/bin/siege -r "$NUMREQ" -c 1 -f /tmp/"$URLFILE.txt"

 

Once the cache is built up, we benchmark with 20 and 50 concurrent users for 5 minutes each, and send the report to users.

/usr/bin/siege -t 5M -c 20 -f /tmp/"$URLFILE.txt" > /tmp/siege-20"$URLFILE.seq" 2> /tmp/siege-20"$URLFILE.log" &
sleep 300
/usr/bin/siege -t 5M -c 50 -f /tmp/"$URLFILE.txt" > /tmp/siege-50"$URLFILE.seq" 2> /tmp/siege-50"$URLFILE.log" &
sleep 300
echo "*********************  Siege Report with 20 concurrent users  *****************************************"
cat /tmp/siege-20$URLFILE.log
echo "*********************  Siege Report with 50 concurrent users  *****************************************"
cat /tmp/siege-50$URLFILE.log

In the same way, we get a report on Drupal coding standards using the Coder module, and check whether the core has been hacked using the Hacked module.

Future Plans:

  1. Utilise the benchmarking data to generate graphs using JMeter.
  2. Generate reports on contributed module updates and security releases.

Topics: CI/CD & DevOps
