Why Platform as a Service (PaaS) is the answer to high-performance hosting

Posted by Kimi Mahajan on Sep 24, 2019 3:10:00 PM

Running, compiling or configuring your web application on a single virtual server instance can be complex as well as time-consuming.

However, with new technologies emerging and evolving, the entire cloud computing process is getting simplified.

Let’s look at why Forbes has termed Platform as a Service (PaaS) the dominant cloud service model, and why it is well suited to your high-performance hosting needs.

Understanding Platform as a Service

The PaaS service delivery model evolved from the Software as a Service (SaaS) cloud offering. It allows customers to use virtualized servers without purchasing them outright, renting them instead to design, develop, test, deploy and host web applications.

Along with the core cloud offering, PaaS vendors provide:

  1. Specific software development tools such as a source code editor, a debugger, a compiler, and other essential tools which developers need to build their applications.
  2. Middleware, which acts as an intermediary between user-facing applications and the machine's operating system.
  3. An operating system on which developers build applications.
  4. Databases that store data, which developers can administer and maintain.
  5. Infrastructure to manage servers, storage, and physical data centers.

Why choose PaaS over IaaS and SaaS?

Before comparing PaaS with Infrastructure as a Service (IaaS) and SaaS, it is important to understand what each service means and how it helps users achieve their goals.

Let’s understand each one by comparing them with modes of transportation.

On-premises IT infrastructure is like owning a car. When you own a car, you take responsibility for its maintenance.

IaaS is like renting a car. You choose the car as per your own preference and drive it wherever you wish. And when you think of an upgrade, you can simply rent a different car. SaaS is like taking public transport, where you share the ride with fellow passengers on a common route.

However, PaaS can be thought of as taking a cab: you don’t drive the car yourself, but pay the driver to take you to your destination.

Now that we understand what each model means, let’s compare IaaS, PaaS and SaaS on the basis of which services you manage (✔) and which you don’t (╳).

Services             IaaS   PaaS   SaaS
Applications          ✔      ✔      ╳
Data                  ✔      ✔      ╳
Runtime               ✔      ╳      ╳
Middleware            ✔      ╳      ╳
Operating System      ✔      ╳      ╳
Virtualization        ╳      ╳      ╳
Server                ╳      ╳      ╳
Storage               ╳      ╳      ╳
Networking            ╳      ╳      ╳

Examples: IaaS - Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure; PaaS - AWS Elastic Beanstalk, Windows Azure, Google App Engine; SaaS - Gmail, Google Docs, GoToMeeting

As per Gartner, the global public cloud services market is expected to grow to over $383 billion by 2020.

Perfectly suited to software developers, PaaS lets them deploy, test and manage applications without having to set up all the related infrastructure.

It’s very different from traditional forms of web hosting like shared or Virtual Private Server hosting, where the developer is responsible for ensuring the production environment is good enough to host the application, and must set up the application server, database, run-time platform and server configuration before beginning to code.

With HTTP caching servers, PaaS ensures faster application loading and keeps latency and downtime low even if one server goes down. Applications can be deployed to the servers with a single command. This is useful for high-traffic websites (when your server may be under heavy load) which face performance issues in a shared environment.

PaaS can be thought of as a multi-server, high-performance solution that distributes web traffic across multiple servers, keeping your site performance at its peak.
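To make the idea concrete, here is a toy Python sketch of round-robin distribution, one simple way traffic can be spread across servers. It is purely illustrative; real PaaS platforms use managed load balancers, and the server names below are placeholders.

```python
from itertools import cycle

# Toy illustration of round-robin traffic distribution across application servers.
# Real PaaS platforms do this with managed load balancers; the hostnames below
# are placeholders, not a real deployment.
servers = ["app-server-1", "app-server-2", "app-server-3"]
rotation = cycle(servers)

def route_request(request_id: int) -> str:
    """Send each incoming request to the next server in the rotation."""
    target = next(rotation)
    return f"request {request_id} -> {target}"

if __name__ == "__main__":
    for i in range(6):
        print(route_request(i))  # requests alternate evenly across the three servers
```

Because no single server takes all the traffic, a spike in load (or the loss of one server) degrades performance gracefully instead of taking the site down.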

High-speed hosting not only improves the user experience of your site, it also has a positive impact on search engine rankings, and users are likely to stay longer on the site when pages and resources load quickly.

Here are 5 advantages that PaaS offers over other cloud offerings:

1. Helps Build Applications Quickly
PaaS allows developers to build applications more quickly than they could if they had to build, configure, and provision their own platforms and backend infrastructure. With PaaS vendors providing web servers, storage and networking resources, developers gain instant access to a complete software development environment without needing to configure or maintain it, and can focus on delivering projects quickly.

2. Minimal Development and Cost-Effective Approach

PaaS services offer templates and code libraries that allow rapid development by providing prebuilt backend infrastructure and other resources. They offer new capabilities to your in-house development team without hiring additional staff, thereby reducing the costs of building applications from scratch.

3. Easy Collaboration on Pre-Built Sophisticated Tools
PaaS offers an advantage over traditional hosting in that it lets developers in distributed teams collaborate. It allows them to create applications using pre-built, otherwise expensive development tools to develop, test and iterate.

4. Scalability and Future-Proofing

The reusable code not only facilitates easy app development and deployment but also increases the opportunity for scalability. This allows businesses to scale and modify their product or processes efficiently and focus on core business initiatives rather than maintaining underlying IT infrastructure.

5. Cross-Language Support

PaaS cloud services let developers build applications in multiple programming languages.

How is PaaS different from Serverless Computing?

PaaS and serverless computing are similar in that the developer only has to worry about the code, while the vendor handles all backend processes. However, the two differ as shown in the table below:

Features         Serverless Computing                              PaaS
Scale            Automatically scales                              Will not scale unless programmed to
Startup time     Instantaneous                                     Is running most of the time to be available to users
Offerings        Does not provide development tools/frameworks     Provides development tools/frameworks
Pricing model    Precise                                           Not precise

Any demerits?

However, before deciding to opt for PaaS, it is important to understand your business needs in order to find a solution that is a good fit.

Firstly, choose your PaaS provider wisely, as you might not be able to switch vendors easily after an application is built. Not every vendor supports the same languages, libraries, APIs, architecture, or operating system used to build and run applications. Although it is possible to switch PaaS providers, the process can be time-consuming and may even mean rebuilding the application to fit the new platform.

Another thing to keep in mind is that the external vendor will store most or all of an application’s data, along with hosting its code, and may actually store the databases via a third party. So it is important to vet the security measures of the service provider and to know their security and compliance protocols before making a decision.

Srijan can help you evaluate the various options and make the strategic choice of opting for PaaS, so you can potentially deliver more with better functionality. Contact us to get the conversation started.

Topics: Cloud, Agile, Architecture

Data Lake Strategy:  6 Common Mistakes to Avoid During Implementation

Posted by Nilanjana on Aug 29, 2019 5:42:00 PM

While we have talked a lot about the rising need for data lakes, it’s probably as important to talk about how easily they can go wrong in the absence of a good data lake strategy. While most businesses expect phenomenal insights, not enough attention is paid to actually setting it up in the right manner. And that is where it can all start to unravel. 

It's not uncommon to see scenarios where businesses have invested a lot of time, money and resources into building a data lake but it’s actually not being used. It can be that people are slow to adopt it or it could be that faulty implementation actually made the data lake useless. 

So here, we take a brief look at six common data lake strategy pitfalls, and how to avoid them. 

Challenges involved in Loading Data 

There are two challenges involved when loading data into a data lake:

Managing big data file systems requires loading an entire file at a time. While this is no big deal for small files, doing the same for large tables and files becomes cumbersome. Hence, to minimize load time for large data sets, you can load the entire data set once and then load only the incremental changes: identify the source data rows that have changed, and merge those changes with the existing tables in the data lake (see the sketch below).
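As a rough illustration of the incremental approach, here is a hypothetical Python sketch using a "high watermark" column; the `source_db`, `lake`, table and column names are all assumptions for the example, not a specific tool's API.

```python
from datetime import datetime

def incremental_load(source_db, lake, table="orders",
                     watermark_column="updated_at", key="order_id"):
    """Hypothetical sketch: load only the rows changed since the last run."""
    # 1. Find how far we got last time (the "high watermark").
    last_watermark = lake.get_watermark(table) or datetime.min

    # 2. Pull only the source rows that changed since then, not the whole table.
    changed_rows = source_db.query(
        f"SELECT * FROM {table} WHERE {watermark_column} > %s",
        (last_watermark,),
    )

    # 3. Merge (upsert) the changed rows into the existing table in the lake.
    if changed_rows:
        lake.merge(table, changed_rows, key=key)

        # 4. Advance the watermark so the next run only sees newer changes.
        lake.set_watermark(table, max(row[watermark_column] for row in changed_rows))
```

The full load happens only once; every subsequent run moves just the delta, which keeps load windows short even as the source tables grow.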

Loading data from the same source into different parts of the data lake consumes too much capacity on the source systems. As a result, the data lake gets a bad reputation for interrupting the operational databases that are used to run the business. Strong governance processes are required to ensure this doesn’t happen.

Lack of Pre-planning

Data lakes can store an unfathomable amount of data, but not assessing the value of data before dumping it in is one major reason for their failure. While the point of a data lake is to have all of your company’s data in it, it is still important that you build data lakes in accordance with your specific needs. Balancing the kind of data you need with the amount of data you dump into the data lake ensures the challenges of data lake implementation are minimized.

Uncatalogued Data

When you store data in a data lake, you also need to make sure it is easy for analysts to find it. Merely storing all the data at once, without cataloguing it, is a big mistake for a few key reasons:

  • It can lead to accidentally loading the same data source more than once, eating into storage.
  • Ensuring metadata storage is key to a data lake that’s actually useful. There are several technologies available to set up your data cataloguing process, and you can also automate it within your data lake architecture with solutions like AWS Glue, as sketched below.
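For example, a crawler can be created and run with boto3 so that new data landing in the raw zone is catalogued automatically. This is only a minimal sketch; the bucket path, IAM role, database and crawler names are placeholders, and error handling is omitted.

```python
import boto3

# Minimal sketch: create and start an AWS Glue crawler that catalogues raw data in S3.
# The bucket path, IAM role, database and crawler names below are placeholders.
glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="raw-zone-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # role Glue assumes to read S3
    DatabaseName="data_lake_raw",                           # catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://my-data-lake/raw/"}]},
    SchemaChangePolicy={                                    # how schema drift is handled
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
)

# Once the crawler finishes, tables appear in the Glue Data Catalog,
# where analysts can search and query them (e.g. via Athena).
glue.start_crawler(Name="raw-zone-crawler")
```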

Duplication of Data

When Hadoop distributions or clusters pop up all over the enterprise, there is a good chance you’re storing loads of duplicated data. As a result, data silos are created which inhibit big data analytics, because employees can’t perform comprehensive analyses using all of the data.

All of this essentially re-creates the data proliferation problem data lakes were created to solve in the first place.

Inelastic Architecture

One of the most common mistakes organizations make is building their data lakes with inelastic architecture. Several of them start out with one server at a time, slowly and organically growing their big data environment, and adding high-performance servers to keep up with business demands. While this decision is taken because data storage can be costly, it proves to be a mistake in the long run: the growth of data storage outpaces the growth of computing needs, and maintaining such a large physical environment becomes cumbersome and problematic.

Not the Right Governance Process

Not using the right governance process can be another obstacle to your data lake implementation. 

  • Too much governance imposes so many restrictions on who can view, access, and work on the data that no one ends up being able to access the lake, rendering the data useless
  • Not enough governance means that organizations lack proper data stewards, tools, and policies to manage access to the data. Unorganized and mismanaged data lakes can lead to an accumulation of low quality data, which is polluted or tampered with. Eventually the business stops trusting this data, rendering the entire data lake useless

Implementing a good governance process and documenting your data lineage thoroughly can help illuminate the actions people took to ingest and transform data as it enters and moves through your data lake.

While this is by no means an exhaustive list, these are some of the most common mistakes that businesses make. Plugging these holes in your data lake strategy sets you up for better returns from your initiative right out of the gate. It also ensures that your data lake does not become a data swamp where information and insights disappear without a trace.

Working on a data lake strategy for your enterprise? Or building the right data lake architecture to leverage and monetize your data?

Tell us a bit about your project and our experts will be in touch to explore how Srijan can help.

Topics: Project Management, Agile, Data Engineering & Analytics

Preparing For A Data Lake Implementation

Posted by Kimi Mahajan on Aug 29, 2019 5:39:00 PM

Data remains a giant value generator and reinforces your enterprise’s ability to stay ahead of the competition.

However, managing, securing and storing data for its continued relevance and using that voluminous information to your advantage is difficult at times, and requires a streamlined process flowchart.

So, how do you make data more useful to you and benefit from its infinite possibilities? What are the cutting-edge tools you need to keep your enterprise future-ready?

We have already discussed the basics of Data Lake and  the expected stages of data lake implementation. Let’s dig deeper as to when and why to implement data lakes and how to strategize the implementation process.

When Should You Opt for a Data Lake

Here are a few scenarios you could be looking at, when it comes to enterprise data:

  • You’re working with a growing amount of unstructured data
  • You want to leverage big data across your offerings
  • Your organization needs a unified view of information
  • You need to be able to perform real-time analysis on data
  • Your organization is moving towards a culture of democratized data access
  • You need access to data, analytics and applications
  • Your organization can benefit from elasticity of scale

If one or more of these look familiar, then it’s time to formulate a phased transformational process.

Traditionally, an Enterprise Data Warehouse (EDW) has served as the foundation for data discovery and functioned well in defining the data according to its quality. However, EDWs are restricted in scope and ability, and are unable to handle data complexities.

So a data lake is required to expand the possibilities of what you can do with your data. You can take a look at the whole data lake vs. data warehouse discussion, and see how they are actually complementary.

That said, you can take a call on whether now is the right time to start with a data lake, or whether you can invest in it a few months or years down the line. That depends mostly on your current business goals and challenges, and the kind of data that’s currently most valuable to you.

Here’s a list of pointers to consider before preparing to implement data lake architecture:

Type of Data

Data lakes are best used to store constantly generated data, which often accumulates quickly.

Streaming data commonly runs to tens of billions of records totalling hundreds of terabytes. If you’re handling such huge amounts of data, you should definitely consider a data lake, since the costs of structuring and storing it in a relational database will be too high.

Staying with a data warehouse could be the better choice if you’re mostly working with traditional, tabular information, e.g., data generated by financial, CRM or HR systems.

Understanding the Intent

One of the great things about data lakes is the flexibility with which data can be ingested and eventually used, following the principle of ‘store now, analyze later’.

A data lake could be a good fit for a project where a higher level of flexibility is required.

Complexity of Data Acquisition Process

The process of adding newly acquired data to your warehouse can often be resource-intensive. It gets even more complex with unstructured or semi-structured sources, which carry a serious ETL overhead to ingest the data into a format that your data warehouse can work with.

If this complex process is making you consider giving up on some sources altogether, it’s time to consider a data lake – which will allow you to store all the data with minimal overhead, and then extract and transform the data when you want to actually do something with it.

Existing Tools and Skills

A data lake typically requires big data engineers, who are difficult to find. If such skills are lacking, consider sticking with your data warehouse until the prerequisite engineering talent is hired to manage your data lake.

Data Management and Governance

Both data lakes and data warehouses pose challenges when it comes to governance. Data warehouses pose the challenge of constantly maintaining and managing all the data, whereas data lakes are often quite difficult to effectively govern. Whichever approach you choose, make sure you have a good way to address these challenges as per your project.

The above points will help you decide whether or not to opt for a data lake.

Once you decide to go ahead with a data lake, blindly plunging into its implementation won't necessarily benefit your organization. A big picture of what you want to achieve with your data, and a strategy for a cohesive data infrastructure, are crucial.

Strategy for Implementing Data Lake

A haphazard approach may lead to several challenges hampering the use of a data lake to support big data analytics applications.

In the absence of an overarching strategy, a lot of data handling best practices can get overlooked, causing challenges and bottlenecks further down the line. For example, not documenting the relevance of data objects stored in a data lake might make it difficult for data scientists to find relevant data, track who accesses which data sets, and determine what level of access privileges are needed on them.

So, here are seven steps to avoid such concerns when implementing a data lake.

  1. Create a taxonomy of data classifications
    Classification of data objects plays an important role in how they’re organized. Identify the key dimensions of the data such as data type, content, usage scenarios, groups of possible users and data sensitivity as part of your classifications.
  2. Design a proper data architecture
    Apply the defined classification taxonomy to direct how the data is organized. Include file hierarchy structures for data storage, file and folder naming conventions, access methods and controls for different data sets. 
  3. Employ data profiling tools
    The segregation of data going into a data lake can be easily done by analyzing its content. Data profiling tools can help by gathering information about what's in data objects, thereby providing insight for classifying them. They can also help in identifying data quality issues to ensure analysts are working with accurate information.
  4. Standardize the data access process
    Using diverse data access methods to obtain different data sets often poses difficulties. Standardizing the procedure with a common, straightforward API can simplify data access and ultimately allow more users to take advantage of the data (a minimal sketch follows this list).
  5. Develop a searchable data catalog
    Prospective users might not be aware of what's in a data lake and where different data sets are located. A collaborative data catalog allows the users to know the details about each data asset and provides a forum for groups of users to share experiences, issues and advice on working with the data.
  6. Implement sufficient data protections
    Aside from the conventional aspects of IT security, utilize other methods to prevent the exposure of sensitive information contained in a data lake. This includes mechanisms like data encryption and data masking, along with automated monitoring to generate alerts about unauthorized data access or transfers.
  7. Raise data awareness internally
    Ensure the users of your data lake are aware, through appropriate training, of the need to actively manage and govern the data assets it contains. Knowing how to use the data catalog to find available data sets, and how to configure analytics to access the data they need, will help impress upon them the importance of proper data usage.
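To make step 4 (and the catalog in step 5) concrete, here is a small, hypothetical sketch of a single access layer in front of a data lake on S3. The `DataLakeClient` class, the catalog structure and the bucket/key names are illustrative assumptions, not any particular product's API.

```python
import io

import boto3
import pandas as pd

class DataLakeClient:
    """Hypothetical single entry point for reading data sets out of the lake.

    Instead of every analyst hand-crafting S3 paths, they request a data set
    by its catalog name and always go through the same call.
    """

    def __init__(self, catalog: dict):
        # `catalog` maps a logical data set name to its storage location, e.g.
        # {"orders": {"bucket": "my-data-lake", "key": "curated/orders.parquet"}}
        self.catalog = catalog
        self.s3 = boto3.client("s3")

    def get_dataset(self, name: str) -> pd.DataFrame:
        entry = self.catalog[name]  # raises KeyError if the data set isn't catalogued
        obj = self.s3.get_object(Bucket=entry["bucket"], Key=entry["key"])
        return pd.read_parquet(io.BytesIO(obj["Body"].read()))

# Usage: every team reads data the same way, regardless of where it lives.
# lake = DataLakeClient({"orders": {"bucket": "my-data-lake",
#                                   "key": "curated/orders.parquet"}})
# orders = lake.get_dataset("orders")
```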

Organizations are increasingly attempting to innovate processes, driving heightened service excellence and delivery quality. Interested in knowing how data lakes represent a smarter opportunity for effective data management and usage for your organization?

Contact us and let our experts do the talking.

 

 

Topics: Project Management, Agile, Data Engineering & Analytics

Championing the Art of Managing API as a Product

Posted by Kimi Mahajan on Jul 16, 2019 3:01:00 PM

The API economy is on an unprecedented uptrend. When we think about the business side of managing an API, it becomes clear that we need to treat the API as a product in order for it to succeed.

Just like any other product, APIs as products have a market with a segment of customers, and enterprises leave no stone unturned to maximize their return on investment.

Let’s understand the entire concept of treating APIs as a product and how to manage them as one.

Understanding the Concept

Like any other product, the concept of API as a product aims to help users solve their concerns and get their job done, by outlining its target audience, marketing strategy, development plans, support processes and sales strategy.

Though APIs are often seen as a technical layer that makes developers’ lives easier, they rightfully facilitate business offerings and value.

By viewing the API as their own product, enterprises can focus on creating sales strategies that are relevant and appeal to more potential users.

Watch the video here to know why your API deserves to be treated as a product:

 

 

Usually, enterprises expose their APIs to consumers without thinking about their broader strategic value. Once an organization creates an API, it needs to focus on maintenance, promotion, easy consumer onboarding, the right security, testing and performance monitoring, developer documentation and consumer support.

How to Manage Your API as a Product

Applying a product strategy to your API will enable it to flourish and generate revenue. The best way to treat potential APIs as products is to maintain them vigorously and strive for continued improvement by setting up the business side of managing the API. This includes:

  • Building a team of professionals who can advocate the use of the API as a product to a large audience, not just developers, and gain clarity on what is expected from the API. A Product Manager is the best person to interact with consumers and collect requirements that help identify the best workflow, keeping the final business results in mind.

  • Encouraging cross-organizational support so that internal teams use APIs collaboratively once APIs become the preferred internal technology for connecting various services and functionality.

  • Identifying developer personas and mapping their customer journeys by documenting and categorizing the problems for different types of developers.

  • Defining a business model and aligning it with the overall business mission to open up organizational culture to new business model opportunities.

There are three basic principles for managing your API as a product which can enable your business to meaningfully expand its ecosystem, accelerate development, and improve efficiency. Here’s how you can champion the art of managing an API as a product for a prosperous outcome:

Be Customer Centric

This approach focuses on developers’ problems, and on providing products that help solve those problems.

“Customers don’t care about your solution; they care about their problems.” APIs need to focus on the customer and keep their needs in mind in order to thrive as products.


Assume APIs Will Become Public

The best practice is to build an API in such a way that it can be transformed into a public API product in the future. Public or open APIs are created to give a wider population of developers access to an organization’s information assets.

Many of today’s successful APIs were built as private APIs that became so valuable that their owners decided to open them up to external developers and monetize them.

Focus on Strategic Outcome

To have a successful strategic outcome for an API as a product, you must arm your APIs with a tactical plan and make all decisions to align your business successfully with your API strategy.

You must have a clear vision of how you want your APIs to succeed and should be equipped with a strong plan for them to grow and become profitable.

To extract maximum ROI out of the API, we need to stay focused on setting up the business side of managing an API. It’s important to sell the benefits of APIs to the right customer and tailor the pitch accordingly.

Conclusion

Businesses have embraced APIs as a way to expose business capabilities to both external and internal developers. They see APIs as a way to reduce cost, ensure faster time-to-market for new services and products, quickly launch their own digital services, and integrate with partners.

Srijan's API Management teams offer API products tailored as per your business needs and can help you with customized developer portals. Drop us a mail and let our experts do the talking.

 

Topics: Agile, API Management

How to follow the Kanban process for software development using JIRA

Posted by Siddharth Goswami on Dec 18, 2017 1:44:00 PM

The word 'Kanban' means 'signboard' or 'billboard' in Japanese, a visual signal that conveys information. It was introduced by Taiichi Ohno (Industrial Engineer, Toyota) to achieve “just-in-time” (JIT) manufacturing.

In the 1940s, Toyota started studying the supermarkets' self-stocking technique which they thought could be applied to their factory floor to improve manufacturing efficiency.

Fig 1: Self-Stocking Technique of Supermarket Supply Chain


One of the essentials for any supermarket is to keep track of the stock maintained on the shelves. Ideally, the stock on the floor should never run out, and the product must always be available for customers to buy.

Here’s how it was done:

  • Whenever stock on a supermarket shelf fell below a certain number, a red Kanban card was placed on the shelf.

  • This card signaled the need for new stock to be brought in from the store inventory

  • This, in turn, signaled the inventory-in-charge to get the same quantity of stock from the factory

  • A red Kanban card is added for each unit that falls below the defined minimum stock. An upper limit on the number of cards is defined for each product type. Once there are that many cards on a shelf, it’s a signal for immediate restocking, without any delay.

 

This was the Pull Technique of stock replenishment, driven by customer demand.

The system is considered efficient if more and more shelves are well-stocked, and able to meet customer demand. This means, the goal of the Kanban process is to decrease the number of Red Kanban cards in circulation.

How to Set Up a Kanban Board in JIRA

Agile software development teams started leveraging the same JIT principles by matching the amount of "Work in progress" (WIP) work items to the team's capacity. 

Let’s understand Kanban Process in a software development project through an illustration. Let’s assume that there is a team, with two developers and one QA, who must complete 200 tasks to meet the project goals. 

To complete a task, it must traverse through the following statuses: 

  1. To-Do

  2. In Development

  3. Code Review

  4. Testing 

  5. UAT

  6. Done

The software team needs to define the ideal number of work items that they need to pick, in accordance with the team’s capacity. They need to have some kind of a signaling mechanism like a Red Kanban Card. In order to achieve that, the software development team started using Kanban Boards with WIP limits (max and min) for each of the above mentioned statuses. 


To-Do: With the current understanding of the project and the given maturity of the team members, it is discovered that a developer will only be able to focus on 7 tasks in each release cycle. Hence, the WIP limit (Max) for the To-Do column becomes 14. 

This is added to the Kanban board (see screenshot):

Fig 2: WIP Limit (Max) Added to the 'To-Do' Column

Also, to keep the work ticking, there must be at least 3 tasks for each developer, which brings in a WIP limit (Min) of 6.

Whenever the ‘To-Do’ column exceeds this max limit of 14 tickets or goes below the min limit of 6, the column will turn red to notify the team to remove or add a work item. The red colour of the column indicates a bottleneck in the workflow.

Please see the screenshot below for reference:

Fig 3: WIP Limit (Min) Added to the 'To-Do' Column

In-Dev: Multitasking kills efficiency. When you overload the team members with more work, there are more chances of introducing errors into the system. 

So, if you are using the Kanban board, you can make a rule that a developer never works on more than two tasks at a time. Therefore, the WIP (Max) for ‘In-Dev’ = 2. This reduces the impact of context switching and helps the team complete tasks faster.

Testing: Similarly, there should not be more than 3 tasks in the ‘Testing’ column for a single QA to work on at any given point in time. If there are more than 3 tasks in the ‘Testing’ column, it implies that there is a bottleneck in the flow which calls for an improvement.

Therefore, the WIP (Max) and WIP (MIN) for the ‘Testing’ column becomes 3 and 1, respectively. 

In the same manner, WIP limits can be defined for all the significant columns. The Product Owner (PO) and Scrum Master (SM) can together study the capacity of the team and decide the WIP limits they want to set on the planning board columns. Let’s take a look at the execution board after the WIP limits for all the statuses have been set:

Fig 4: The complete view of the Execution Board with WIP Limits for the different columns

At any given point during the execution of the project, whenever a WIP limit is exceeded, the corresponding column turns red to notify the team of the blockage in the workflow, thus ensuring the smooth flow of tasks on the execution board. A simple model of this rule is sketched below.
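As a toy model of the rule described above (not JIRA's internal logic), the sketch below flags a column as 'red' whenever its task count falls outside the configured WIP limits. The To-Do and Testing limits come from the example in this post; the In-Dev column limit is an assumption.

```python
# Toy model of the WIP-limit rule described above. The To-Do and Testing limits
# come from the example in the text; the In-Dev column limit is assumed
# (2 tasks max per developer x 2 developers).
WIP_LIMITS = {
    "To-Do":   (6, 14),
    "In-Dev":  (1, 4),
    "Testing": (1, 3),
}

def column_status(column: str, task_count: int) -> str:
    """Return 'red' when a column breaches its WIP limits, otherwise 'ok'."""
    low, high = WIP_LIMITS[column]
    return "red" if task_count < low or task_count > high else "ok"

# Example board snapshot: Testing holds 5 tasks, so it turns red and signals
# a bottleneck the team should address.
board = {"To-Do": 9, "In-Dev": 3, "Testing": 5}
for column, count in board.items():
    print(f"{column}: {count} tasks -> {column_status(column, count)}")
```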


The boards need to be monitored on a regular basis to ensure that work flows smoothly. As the project environment keeps changing over the course of time, the workflow and the WIP limit of the board are supposed to change accordingly. 

For example, a senior developer exits the team and a new developer comes in to work on the project. The new team member comes with a different skill set and limited knowledge of the project, which might lower the WIP limit. The Scrum Master needs to monitor the control chart regularly during this period until a new equilibrium WIP limit is established.

Once the workflow (the columns of the board) and the WIP limits have been decided, the project is set to roll. With this information in hand, the SMs will be able to monitor, control and act upon the blockages identified on the execution board.

This is a good start for working with a Kanban board. In my next post, I will cover the Kanban metrics to monitor, and how to measure project progress.

Meanwhile, do you have any tips or suggestions on how you use the Kanban board for your team? Or any questions about starting off with Kanban? Please let me know in the comments below.

Topics: Project Management, Agile

Distributed Agile best practices: An expert round-up

Posted by Nilanjana on Oct 19, 2016 12:57:00 PM

Distributed Agile teams are the future of high-performing global enterprises. They are lean, efficient, and get you the best people on the job, all while being extremely cost-effective. That answers the question of why enterprises need distributed scrum teams.

But it's not an easy thing to get started with. The challenges range from timezone differences to collaboration issues. So we decided to ask the experts, "What would you do?" That led to this great round-up post on distributed agile best practices. Here we have seven agile coaches, including Jeff Sutherland, the co-creator of Scrum, sharing their expertise on resolving the key challenges faced by distributed scrum teams: communication and team building.

If your enterprise is contemplating working with offshore delivery teams, or if you already work with such a team; here are the distributed agile best practices that will make it a pleasant and productive experience for everyone involved.

Ready? Let's go!

Jeff Sutherland

CEO, ScrumInc.

Jeff is the inventor and co-creator of Scrum. He launched the first Scrum team in 1993 and has shepherded its growth into almost every industry: finance, healthcare, higher education and telecom.

"For over 20 years we have scaled Scrum using a Scrum of Scrums as the release team. So any delivery problems in any team are resolved at the Scrum of Scrums level. Scrum is fractal in nature so if there are many Scrum of Scrums they role up into a Scrum of Scrum of Scrums. Saab Defense has four levels to run the Saab Grippen fighter aircraft factory with the top Scrum of Scrums being the senior management team that meets every day an 8:30. We also developed other proven methods 20 years ago for virtual teams which are implement at Spotify and ING Bank as chapters. I regularly coach both of these companies. For really high performing teams we allow backlog to move across teams during a sprint to automatically level loading but that is beyond the scope of this short comment. In Scrum@Scale we also reduce dependencies and carefully manage any that remain across teams. We eliminate all bottlenecks (there are no release teams, test teams, hardening sprints, operations teams, or other waste that cripples the system) and go to a pure Scrum release every sprint or better yet, every day. For teams that can't do this, their top priority process improvement is to implement it."

Bryan Jacobson

Bryan Jacobson

Scrum Master, Dealertrack Technologies

Accomplished leader with a track record delivering outstanding products. Strong communicator who connects stakeholders to developers and collaborates across all parts of the organization.

 

"First of all, we should be honest with ourselves and admit that a “distributed” agile team will never be as effective as a co-located team.

In fact, people see improvement from seating team members within view and within a few steps of each other, versus in different rooms or on different floors of the same building. When you’re in the same place, questions get answered within seconds instead of minutes or hours. You overhear conversations you need to hear. Multiple team members can consult with each other quickly. You can tell if it is a good or bad time to interrupt another team member.

However, for many reasons, sometimes you have a distributed team or remote team members. What to do?

1. Travel

It makes a vast improvement to spend at least a day or two face to face, at least once, and ideally two or more times a year. Communication improves. Teamwork improves. “Us vs them” is reduced.
If you can’t do regular travel, doing it at least once also makes a significant improvement.
I realize that some teams, due to budget constraints, cannot get travel approved. As Agile experts we should advocate that we don’t make a one-sided analysis that assumes “travel costs money” and “not traveling is fine”. We should find ways to quantify the impact of missed communication, teamwork difficulties, misunderstanding, rework to correct problems, missed customer deliveries due to time spent correcting problems, etc.

(Obvious, and everyone does this): If time zones differ, make good use of the times when both teams are in the office. In extreme cases some team member may need to come in early or late.

Top recommendation: If you have a team that is largely co-located, but has one or a few remote team members, the remote team members will be at a disadvantage.

2. Remind everyone to over-communicate

  • Make it the responsibility of the remote team members to make sure the communication happens. They should:

  • Call on the phone (over email) as much as possible. Video call is even better.

  • It is the remote person’s responsibility to ask for information.

  • It is the remote person’s responsibility to ask to be included.

  • It is the remote person’s responsibility to pick up the phone when they are not sure.

  • Categorize email as “non-communication”. A growing percentage of people are simply not reading their email.

 

Top recommendation: Video calls are very beneficial. Seeing the other person makes the other person seem more human. You engage better. They engage better. It forces the remote person to be engaged and participate (not sit there partly listening, but partly reading their email or whatever).

3. Use a chat-based team collaboration system like HipChat or Slack

  • Require everyone to monitor it.

  • Require everyone to use it for questions.

  • Use team rooms for most conversations.  That way everyone sees the conversation, knows what the problem is, what the solution is, etc.

  • Have special channels for critical communication if the main team room gets “too chatty” and people miss important notices.

 

Benefits: Everyone sees critical conversations. Less interruptive than the phone. If someone is out on the break or in a meeting, they see messages when they get back. Faster than email, harder to ignore than email.  There is a record of conversations that can be checked later.

Top recommendation: Have an online system of information record, such as a wiki or issue tracking system. That way everyone has the same view of information. 
Use the rule: If it’s not in the wiki, it didn’t get decided. A conversation that some people heard, and others didn’t, doesn’t qualify as a team decision"

Daniel Mezick

Agile Coach & Author

Coaching executives and teams since 2006, Daniel Mezick is an expert on extending adaptive Agile culture beyond software. A published author, his books The Culture Game (2012) and Open Space Agility Handbook (2014) talk about his Agile philosophies and help teams and enterprises work better.


"In general, distributed-Agile teams that want to be successful are subject to the very same Agile principles as co-located teams. This means (for example) that "working software is the primary measure of progress," and that "business people and developers must work together daily throughout the project," and that "the most efficient and effective method of conveying information...is face to face conversation."

Distributed-Agile teams that keep Agile Manifesto principles in mind are the teams that will win big with Agile. The actual methods or practices are far less important than the principles that power them. So for example, a team may use one of several methods to practice the following principle: "business people and developers must work together daily throughout the project." As long as the practice aligns with (and does not directly violate) Agile Manifesto principles, the team, distributed or otherwise, is going to be OK. And do great work. And deliver working software, our primary measure of progress."

Suzanne Prince

Director, Product Management, ThoughtWorks

Suzie Prince works with ThoughtWorks Studios as Director of Product Management. She has ten years experience designing, building, and delivering software for large and small organisations in a variety of domains.

 

"Distributed teams require specific and purposeful management that is different than co-located teams. Clear, articulate and specific communication is deeply important. I highly recommend peer-to-peer collaboration between teams or individuals who are not located in the same place. Removing silos and exchanging information frequently is paramount to success. At the same time I recommend having full stack teams and minimizing dependencies between teams. When dependencies do occur creating clear contracts and hand offs is important. Tools that support visual representations of work in and between teams as well as tools that support synchronous and asynchronous communication are essential.

Avienaash Shiralige

Agile Coach, Srijan Technologies

Avienaash Shiralige is an agile coach and trainer, and has been working in close conjunction with Srijan. For him, agile is a way of life, and he helps businesses and teams understand and implement agile methodologies in the way they function. He also shares his expertise and experiences on his blog, “Agile Buddha”.

"Information sharing and communication are well known challenges when you offshore. A great aspect of Agile is the open synchronization of the team on a daily basis. But, how is the remote development team synchronized with the local team or with customer?

Here are a few distributed agile best practices that you can incorporate into your work culture:


1. Streamline offshore team communication and development infrastructure

The offshore team must be able to communicate seamlessly with the onshore team. You could use anything from email to video conferencing. Go for a trial before you start a project. If you experience any technical issues during the trial, demand an upgrade and/or additional systems. Fix them before you dive into the project.

2. Meet face-to-face to build trust
If the offshore team is small, bring the entire team onshore for some up-front project activities such as establishing the shared project vision, requirements elicitation, initial planning and execution of the first 2-3 sprints. If the team is big and the budget is limited, then have key members of the team onsite. Additionally, plan for the onsite team to travel to the offshore location and spend a few sprints together.
Travelling onsite is a big investment, so make sure it brings tangible returns. Define co-location goals and track them closely. Teams should spend time together outside office hours to build informal relations.

3. Reduce work disruptions due to each other
Establishing a continuous integration approach with good test coverage is a must. Good CI ensures teams get a successful green build when they start their day. This reduces work or productivity loss caused by one team committing bad code and the next team being unable to use that code base. You could also have a common email list between teams, where you post your issues. This will get you support from other team members not just during common office hours, but outside office hours too.

4. Shared project Vision
When you are starting a project, and all along the project, keep the offshore team completely involved in all activities. Share customer feedback, involve them in release planning, and do all scrum ceremonies together. The Product Owner (PO) has to make an extra effort to communicate the product vision, roadmap and his conversations with product users to the offshore team. A Product Owner spending a sprint or two with the offshore team makes the team feel important, and a lot of domain knowledge gets transferred in the process.

5. Synchronize your working hours to get at least 1-2 hour overlap
Plan at least an hour of overlap between local and remote teams. Use this hour for synchronization and information flow between teams. Plan your joint team activities like pair programming, reviews, joint meetings and distributed stand-ups during this overlap. Even in extreme situations, like a 12-hour delta between teams, the local and remote teams can alternately extend their day to get sufficient overlap.

Some Agile purists say that Agile is contradictory to multi-shore development because of its inherent reliance on face-to-face communication. I consider agile thinking (short sprints, working software, a focus on people and collaboration, and more) a natural solution to the above offshore challenges. Yes, it requires a new way of thinking and doing."

You can also watch our webinar by Avienaash and discover why distributed agile teams are necessary for global enterprises.

Ken Collier

Director of Data Science & Engineering, ThoughtWorks

Dr. Collier leads a team of brilliant data scientists, data engineers, and data analysts. The focus of this practice is on advanced “big data” analytics solutions that combine adaptive data pipelines, modern data engineering, rigorous data science, and data savvy business analysis to create maximum value from data.

"Cross-team communications in distributed/remote agile teams is a function of three key factors - distance (timezone difference), language difference, and cultural gaps. Each of these decreases the effectiveness of cross-team communication/coordination, but distance is perhaps the most significant.

  • Timezone: When a team is separated by 6 timezone hours or more, asynchronous communication becomes the norm. At ThoughtWorks we seek to shrink this separation by using near-shore distribution whenever possible (e.g., US/Brazil, India/Australia, etc.)

  • Language: Language barriers are the next most significant factor. The distributed team as well as the customers must have at least one shared language in which everyone is fluent.

  • Agile Terminology: The "agile language" must be common for everyone. Does everyone have a shared understanding of what "pairing" means, the rigors of test-driven development, and the definition of "done"?

  • Culture: Cultural differences, both regional and corporate, are impactful. The team should establish a shared set of cultural norms and expectations. For example, is there a culture of hierarchical subordination (i.e., "pecking order") held by the remote team that differs from that of the host team? At ThoughtWorks we routinely rotate distributed team members from the remote location (where the delivery team is) to the host location (where the customers and stakeholders are) for a few iterations at a time. In this way everyone on the distributed team gets some face-to-face collaboration with customers, stakeholders, and other team members, thereby building a shared culture of trust, familiarity, and clarity about the project."

Ben Linders

Agile Consultant & Author

Ben Linders is an Independent Consultant in Agile, Lean, Quality and Continuous Improvement, based in The Netherlands. Author of "Getting Value out of Agile Retrospectives", "Waardevolle Agile Retrospectives", and "What Drives Quality and Continuous Improvement".

"If you are a new distributed Scrum team getting ready to become productive, a sail boat futurespective helps you to get to know each other and agree upon the way of working for your team. You start by stating your goal and imagine how your team should look to reach it, next you'll explore what you can do to get there. Alternatively you can play a core qualities game to learn about each team member's strengths and find ways to collaborate effectively. You can build a new team using these retrospective exercises.They can easily be done with distributed teams, all you need is an audio/video connection between the team members and an online drawing tool in the retrospective."

Jeff Sutherland

CEO, ScrumInc.

Jeff is the inventor and co-creator of Scrum. He launched the first Scrum team in 1993 and has shepherded its growth into almost every industry: finance, healthcare, higher education and telecom.

 

"Scrum Inc is totally run by Scrum - marketing, sales, finance, consulting, training, software development, everything we do. Everyone is on a single Scrum team. I am on a distributed team and it is essential for us to fly everyone into to a two day quarterly meeting face to face where we do quarterly planning, retrospectives, and build team agreements. We do one week sprints and meet on Google hangout for all other Scrum meetings. We are heavily into Legos these days and my Scrum Master ran the quarterly retrospective this week by having team members build lego structures representing the best and worst experiences they had. It was deeply insightful."

So that was seven Agile experts sharing their take on distributed agile best practices. You could also take a closer look at all the different challenges that can arise in a distributed scrum model, and how to resolve them. We hope you found this post useful, and that you will now be able to take an informed decision on starting to work with distributed agile teams.

If you are looking to plug a distributed team into your enterprise, Srijan scrum teams could be just the thing you need. And in case your enterprise is just transitioning to agile, we have curated 6 of the best blog posts by agile coaches, to guide you through a smooth transition.

Topics: Project Management, Agile

6 Agile Coaches' Take on Transitioning Your Enterprise to Agile

Posted by Nilanjana on Aug 31, 2016 12:31:00 PM

As enterprises strive to stay lean and efficient, outsourcing part or all of your development process has become a common practice. But it requires a lot of coordination and collaboration between in-house and outsourced teams to build a great product. And the first step to getting there is making sure both your onshore and offshore teams have the same method of approaching and handling the project.

One of the industry-wide best practices for software development is to adopt the Agile Scrum Methodology. The team that you outsource to will be, more often than not, an offshore agile team. And so, it’s best that your in-house development team is also well versed in agile. And the best people to aid you in this process are agile coaches, who will understand your business challenges and help you transition accordingly.


And so, we got down to bringing together six great agile coaches, and their take on the various aspects of the transition process. Their ideas and strategies will help you kickstart and manage agile at your enterprise:

Transitioning to Agile

Breaking set patterns and adopting a new way of work can seem difficult at first. But onboarding your team to agile and scrum can actually be a fun exercise.

Stefan Wolpers has a great post where he shares how he transitioned his start-up team to agile and formed scrum team #1. What’s best about this post is that it tells you how to approach the transition process: starting from why to transition, to identifying current patterns and dysfunctions, to finally forming the first scrum team.

Stefan goes into greater detail, laying down the exact steps to getting your first scrum team. And while the post outlines what he did for a fast growing start-up, the steps wouldn’t be different when you are trying to form the first scrum team at your enterprise.

Evolving Project Managers

Most enterprises have a team of project managers who make sure that all projects are on track. From addressing roadblocks to ensuring delivery, they see to it that the team is working as well as it should. But with agile teams, things work a little differently and so the role of the project manager changes as well.

Mike Cohn adeptly addresses this in his post “The Roles of a Project Management Office in Scrum”. Adopting scrum does away with a lot of a project manager’s conventional responsibilities; assigning them to the ScrumMaster, product owner and the team. But Mike explains that project managers still have a very critical role to play here, especially at large enterprises.

Since enterprises will have a few different projects running simultaneously, there will be various scrum teams in place. Project managers now have the role of facilitating training and ensuring consistency across all teams. Mike’s post is a comprehensive list of all the new responsibilities that project managers now have, to make sure that scrum teams can function effectively.

Managing Scrum Team Challenges

As an enterprise, if you have just started working with the agile scrum methodology, challenges are bound to crop up. It is a new way of work, with new tools and requirements, and there will be a lot of questions and unforeseen problems.

Luis Goncalves shares a great way to address these challenges in his post “How to Manage Organisational Scrum Impediments?” He shares a framework that enables organizations to continuously learn and improve with each project. He shares a tool termed as the ‘Organisational Improvement Board’ and outlines how to use it. Luis shares every step of effectively using this tool, in great detail.

Pivoting at the Right Time

Enterprises usually go large when they start off with a product, investing huge amounts of time and money. There is obviously a lot at stake and building the wrong product can be disastrous. But when you are so involved in building the various parts of a product, it is possible for the team to lose sight of the big picture.

Avienaash Shiralige’s post on “Lean Product Development: Dealing with Business Emergency” tackles this exact problem. He outlines why it is essential to include frequent feedback loops in the development process.

But just in case teams realize that they are indeed headed towards the wrong product, Avienaash describes a phase termed the ‘Pivot’. He takes you through the steps of the ‘Pivot’ phase, which will help teams realign their actions with the desired end product. Not adding new features without validating the market, and choosing to hard-code some things, are a few of the steps that can help teams get back on track.

Running Effective Retros

In the agile way of doing things, retrospectives are a way of taking stock after each sprint. It’s an important learning tool and helps the teams improve the way it works. Agile coaches can’t stress the importance of this often enough. But the catch is, retros are useful only when done the right way.

And Ben Linders' post on “Retrospective Exercise: Few Vital Actions”, shows you the right way to do it. First up, he lists out some of the critical things that retros should cover. Besides that, Ben also takes you through the value of each of these retrospective actions and how it helps the team become more streamlined and efficient.

Ben also shares another post, “Getting Business Value out of Agile Retrospectives”. This post focuses on how retrospectives can help create value for businesses and customers. Ben lists a set of pointers which will ensure that teams come out of a retro with a set of doable actions for the next sprint.

If you are just starting out with a scrum team, Ben's retrospective exercises will also be a great resource.

Balancing Between Development and Delivery

Agile teams work well because they are quick to respond to changes. Changed requirements are accepted, non-performing strategies are fast discarded, and teams start moving in new directions. But it is a legitimate fear that too many changes, and responding to those, might delay the delivery of anything usable.

Stephanie Ockerman takes up this issue in her post titled “Balancing Emergence and Delivery”. While admitting that it is a tight-rope walk, she looks at three key factors that can cause an imbalance. She then goes on to provide a set of possible solutions that scrum teams can adopt, to ensure that agile development and delivery go hand in hand.

So, this is what some of the best agile coaches have to say about transitioning your enterprise to the agile way of project delivery. If you are already reading up on agile, are there any agile coaches whose ideas  you like, but we missed? What are some of your favorite posts on agile?

If you are an enterprise looking to augment your scrum teams, Srijan can help you do that. Explore how we can help you scale up your Drupal enterprise teams, and deliver according to distributed agile models.

Topics: Project Management, Agile

First Agile and Scrum Meetup at Srijan Goa

Posted by Nilanjana on Apr 11, 2016 4:40:00 PM

Having started the first Agile meetup in Goa, I thought it would be a great idea to write about it. The theme of this meetup was Introduction to Agile & Scrum. The meetup took place at the offices in north Goa and, with three hours at hand, there were plenty of discussions and takeaways from this session.

We were joined by Anuj Khurana, Co-founder of Ajency.in. He talked about his experience with agile, why agile comes naturally to all small teams, and the current flavour of agile followed in his organization. He shared his missteps during the process of adopting agile and the lessons learned from them. He also emphasized that the agile methodology should not be confused with a lack of planning.

Anuj also shed light on:

  • Team sprints vs. individual sprints.
  • The fact that the value of agile might not be visible to stakeholders in the initial phase of a sprint and in some cases, even for a few sprints.

Another interesting talk was "What does it really mean when you say you are an agile company?" by Avienaash Shiralige, where he talked about the 12 principles behind the Agile Manifesto. Here are a few key points that I remember:

  1. Agile processes harness change for the customer's competitive advantage.
  2. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  3. Working software is the primary measure of progress.
  4. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  5. Continuous attention to technical excellence and good design enhances agility.
  6. The best architectures, requirements, and designs emerge from self-organizing teams. 

Avienaash shared a lot of real-life instances and examples to explain the 12 principles; here is one example that stood out for me:

NASA User Story:

As an <astronaut> I want to <write easily with a ballpoint pen while in zero gravity> so that <I can record key information that I might otherwise forget.>

NASA specified and developed, at great expense, a ballpoint pen that Apollo astronauts could use in space, where gravity would not aid the flow of ink. On the other hand, Russian cosmonauts used crayons.

Moral: Specify what you want to achieve, not how to achieve it.

 

He also shared a few critical points that are important to the agile philosophy:

  • Produce Value Early: He explained the importance of first delivering those parts of the software which provide the highest value to the client, to help keep their business running. He also talked about building vertical slices of features rather than building features horizontally.
  • Welcome Change: Avie discussed accepting the changing requirements that come in during the software development life cycle. To succeed in today’s fast-evolving business environment, it is crucial to accommodate change requests that emerge from market and competitive needs. He gave a real-life example where he was developing software for a foreign law department, and it took more than a year to deliver the software following a traditional development life cycle like the waterfall model. After the software was delivered, the client realized it was not useful to them, as most of the relevant laws had changed during the development period. The whole effort proved to be a waste of resources.
  • Deliver Frequently: Here, he talked about the importance of delivering parts of the software often, so as to collect timely feedback and act on it.
  • Cross-Functional Teams: He emphasized that teams should be cross-functional, with multiple skill sets, in order to move together towards the end goal with greater efficiency.
  • Trust Motivated People: Here he talked about trusting motivated team members so that they can be empowered to make decisions on their own. The traditional master-slave relationship is too slow to keep up with rapid delivery schedules.
  • Face-to-Face Communication: This is crucial to minimize communication time and improve the clarity of the points under discussion. It also helps reduce the team’s turnaround time. He discouraged corporate cubicle culture and favored face-to-face seating.
  • Working Validated Software: The discussion on this principle revolved around providing fully functional and thoroughly tested software to the client at the end of each sprint. Anything that is not working cannot be termed ‘Done’. An agile team must agree upon a single definition of ‘Done’.
  • Keep It Short and Simple (KISS): The idea is to cut a task, story, or feature down to its simplest form where it is measurable, and most importantly to rule out waste. We should be able to say “No” to irrelevant tasks, or to those which do not provide much value and can complicate the user experience. The idea is to ask the client why something is needed and what value it will provide to their business.
  • Self-Organization: The team should be motivated enough to self-organize around tasks and move towards a common goal. A very good example was football or rugby, where every team member strives to reach the goal irrespective of whether they are a defender, a striker, or anyone else.
  • Inspect and Adapt: Here he talked about shortening and utilizing feedback loops, and adapting as per the feedback received during each phase of a sprint in agile development, i.e. planning, estimation, development, UAT, retrospection, etc.

At the end of the day, the meetup was very well received, and the interactive session helped attendees strengthen their understanding of the agile methodology. A lot of attendees were excited to invite their colleagues to future sessions. The post-session excitement motivated us to come up with a tentative date for the next meetup, and we are definitely expecting much larger participation.

Srijan teams constantly update themselves with agile best practices, so as to serve our clients better. This is what qualifies Srijan to help you scale up your enterprise Drupal teams and deliver according to distributed agile methodologies.

Topics: Community, Agile, Event

Estimation: From Waterfall to Agile

Posted by Nilanjana on Sep 23, 2015 5:27:00 PM

At Srijan, we transformed our methodologies from a classic waterfall approach to a semi-agile process, and now we are implementing a full-on, Scrum-inspired agile approach to work on Drupal projects. At the ongoing DrupalCon Barcelona, there was a session on Waterfall and Agile.

Agile for clients who still don't understand the concept.

 

Presenters of the Session.

 

Interestingly @adaptdk used the very same example that I had given in my talk at DrupalCamp Australia!

Don't build a car all at once; build a shippable product every sprint. Start with a unicycle and work up to the car.

 

Manauwar from our team also shared a tidbit on agile.

 

The waterfall model means the client is going to pay for features that they might not even use: a valid point which I also used in my talk.

 

Sprint planning is a good time to make and review the estimates.

Talk in terms of story points, at least for the stories!

 

However, there are times when you might not want to try agile:

    • You don't have a mature team.
    • The project is very small.

Topics: Project Management, Agile

Learning to be a ScrumMaster

Posted by admin on Feb 7, 2015 5:28:00 PM

Though I had fair knowledge of Scrum, I never had an opportunity to practice it and had never felt the heat of being a Scrum team member. But recently, I got a chance to work as a Scrum Master for a team in Srijan’s Goa office.

When I joined the Goa office, Avienaash (our Agile coach) gave me three books on Scrum to read in my spare time. I read the one that looked the thinnest and started practicing it. Srijan already has mature Scrum and Agile practices, all thanks to Avienaash. Hence, it was easy for me to get started.

I had two challenges in front of me even before I took on the role of ScrumMaster (SM): one, not having practised Scrum before, and two, not having a technical background. Coming from an entrepreneurial background, I did have the advantage of understanding the market and the product easily. Stepping into the shoes of the customer has always been easy for me, but my role as a Scrum Master demanded more.

According to Mike Cohn: 

“Many who are new to the Scrum Master role struggle with the apparent contradiction of the Scrum Master as both a servant-leader to the team and also someone with no authority. The seeming contradiction disappears when we realize that although the Scrum Master has no authority over Scrum team members, the Scrum Master does have authority over the process.”

Different people can have different sets of challenges as per their ability and mindset. I came across the following challenges:

  • Fear of not knowing something that could be obvious to the team - The only way to deal with this is to read as much as you can. As an SM, I should have complete knowledge of Scrum, so I started by reading Mike Cohn's Succeeding with Agile, and it gave me enormous confidence. Trust me, when you practice alongside reading, it becomes easier to connect the dots and the learning curve gets shorter. Technical jargon is still a problem for me, but I do try to go back and read up on it.

  • Understanding Impediments - This has been a problem for me, especially in the case of technical bottlenecks, because I don’t have a tech background. As an SM, I should know the impediments, but it is even more important to understand how to find the right solution. There is a chance that team members have not given enough thought to other consequences. So, asking questions is an important part of being an SM, to help the team understand the real problem.

  • When to seek help outside - The ‘never give up’ attitude of a team is good, but it can be a hindrance when the project’s strict timelines are also a factor of success. At some point, external help becomes unavoidable, but taking that call is not easy. External help can dent team morale, because when a team solves a problem by itself, it gets a sense of achievement, and a good team tends to grow closer in times of crisis. So it is important to trust your team’s ability, and let the team decide when it needs help.
  • Having an effective daily standup - Daily Scrum meetings at the start of the day are important, as they bring discipline into the team and set the tone for the day. Some usual problems are:

    • Coming late
    • Connecting with distributed team members
    • Low energy levels
    • Not attentively listening to others' updates
    • Team members facing the SM while giving their update, instead of the team

The essence of a daily standup is not just to give an update to the SM, but to sync up with each other on how we are doing as a team. As a Scrum Master, I play the role of a moderator in this daily standup meeting. Most of the above-mentioned problems got resolved by consistently bringing them up in our retrospective meetings.

  • Sharing my update in the daily standup in the format of “What I did yesterday” and “What will I do today” - I work on multiple things throughout the day, but filtering out what is relevant and valuable for the daily standup is tricky. Committing to a task for the day has not been as easy for me as it is for my fellow team members. They can look at the sprint backlog to choose a task, but as an SM I have to go with the flow, as the situation demands.

In Scrum, every day unfolds something new, hence identifying tasks and prioritising them is quite challenging. I am not convinced that an SM should give updates in the same format as the team does, since the SM has a completely different role. However, an SM must give an update on the impediments identified the previous day, change requests, budget, new developments, the product backlog, the status of sprint planning, UAT, etc.

  • Writing user stories on behalf of a PO who is far away - Acting as a proxy PO is not easy, because the team expects the SM to have 100 percent clarity on user stories. It requires intensive thinking to ask a PO all the probable questions before implementation starts. I managed this by setting the team’s expectations at the beginning of the sprint: around 20 percent ambiguity in stories is recommended, to keep them negotiable at the time of implementation.

You can also gain clarity by discussing stories with the team while writing them and involving QA in writing the acceptance criteria. Doing the story estimation exercise with the team also helps, and the team gets an idea of the backlog in advance. The point the team needs to understand is that, in Scrum, discussion is valued over excessive documentation (overly detailed stories).

  • Engaging the designer - Designers always work one sprint ahead of the team, hence it is important to plan dependencies early. Sometimes you don’t realise which user story will trigger a change in the UX, so it is important to evaluate every task from a designer’s perspective. Cohesion between the development team and the designer is also very important.
  • Focussing on my work while keeping an eye on how we are doing as a team - This becomes even more difficult when team members are not co-located and work across different time zones. To deal with this, I created a checklist for myself, which helped me keep an eye on every aspect. Later, I found the Scrum Master’s checklist, which is very useful for evaluating how I am doing as an SM.
  • Estimation vs committing - Whenever we could not finish all the committed stories in a particular sprint, we would blame estimation. However, that was not always the reason, because estimating stories and committing to them in a sprint are different things. So, while committing to stories, the following points are important to consider (a rough capacity sketch in Python follows this list):
  • Actual productivity fluctuates due to external factors, hence 70 percent is the maximum output we should expect, not 100 percent
  • A good mix of tasks between developers and QA, which depends upon the ratio of developers and QA in a team
  • Considering leave and holidays, because we sometimes forget these while planning the sprint
  • Nature of Stories/Task and dependencies
  • Frequency of Deployment/Release/UAT

  • Better Sprint Planning - It becomes cumbersome when you try to do it at the last moment. Planning two sprints in advance is important, because it takes time to get clarity on the stories. For example, some user stories require a POC (proof of concept) even before estimation, so that POC needs to be picked up at least one sprint earlier. Understanding such dependencies is key to a smooth implementation. It is also important to involve the team in backlog estimation from time to time. I kept 10 percent of my team’s time in each sprint for planning the next sprint, which helped the team deal with last-minute surprises. Consistent grooming of the product backlog is key to better sprint planning.

  • Effective Retrospection - To run an effective retrospective, it is important to understand what we want to achieve as a team and what needs to be done to achieve it. For example, ask yourself, "How are we doing on XP practices?" or "How can the team be more productive?" My contribution to retrospective meetings improved drastically once I learned how an ideal agile team should work.
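To make the estimation-vs-committing point above concrete, here is a minimal sketch of the capacity arithmetic it describes, written in Python. The 70 percent productivity factor, the allowance for leave and holidays, and the idea of committing stories only up to the adjusted capacity come from the points above; the function names, the eight-hour working day, and the sample backlog are illustrative assumptions, not an actual Srijan planning tool.

# A minimal sketch of sprint-commitment arithmetic; numbers and names are illustrative.

def sprint_capacity_hours(team_members, sprint_days, holidays,
                          productivity=0.7, hours_per_day=8):
    """Rough sprint capacity in person-hours.

    team_members -- people available for sprint work
    sprint_days  -- working days in the sprint
    holidays     -- person-days lost to leave and public holidays
    productivity -- fraction of time realistically spent on sprint work
                    (expect roughly 70 percent, not 100 percent)
    """
    available_person_days = team_members * sprint_days - holidays
    return available_person_days * hours_per_day * productivity

def stories_to_commit(backlog, capacity_hours):
    """Pick stories (name, estimated_hours) from a prioritised backlog
    until the adjusted capacity is used up."""
    committed, used = [], 0
    for name, estimate in backlog:
        if used + estimate <= capacity_hours:
            committed.append(name)
            used += estimate
    return committed, used

if __name__ == "__main__":
    # 5 people, a 10-day sprint, 3 person-days of planned leave
    capacity = sprint_capacity_hours(team_members=5, sprint_days=10, holidays=3)
    backlog = [("Login flow", 60), ("Search API", 80),
               ("Reports UI", 120), ("Email alerts", 40)]
    picked, used = stories_to_commit(backlog, capacity)
    print(f"Adjusted capacity: {capacity:.0f} hours")
    print(f"Committed: {picked} ({used} hours)")

On these sample numbers, the adjusted capacity works out to roughly 263 hours, so the team would commit to the first three stories (260 hours) and leave the fourth for a later sprint rather than over-committing against the raw, unadjusted capacity.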

My constant endeavour to find the root cause of every problem has helped me get past most of them. I would retrospect on my own performance almost every day, which kept me on my toes to improve. I think once you know the exact problem statement, the solution comes easily. Avie has played a great role by providing constant feedback on where I lag. I am still learning and trying to improve.

Topics: Project Management, CI/CD & DevOps, Agile
