Valuable Agile Retrospectives: How to Do Them?

Guest post from Ben Linders, Netherlands-based Sr. Consultant, InfoQ editor and bilingual (Dutch & English) blogger

At the end of an iteration, typically two meetings are held: The sprint review (or demo) which focuses on getting product feedback, and the agile retrospective which focuses on the team and the process used to deliver software. Agile retrospectives are a great way for teams to continuously improve the way of working. Getting workable actions out of a retrospective and getting them done helps teams to learn and improve.

To run agile retrospectives well, it’s important to understand what they are and why you would want to do them. This helps you motivate team members to take part actively and openly. Many exercises exist that retrospective facilitators can use to design and run a retrospective.

This blog post is based on Getting Value out of Agile Retrospectives, a pocket book by Luis Gonçalves and Ben Linders that contains many exercises that you can use to facilitate retrospectives, supported with the “what” and “why” of retrospectives, the business value and benefits that they can bring you, and advice for introducing and improving retrospectives.

What is an Agile Retrospective?

The agile manifesto proposes that a “team reflects on how to become more effective”. Teams use agile retrospectives to inspect and adapt their way of working.

A retrospective is normally held at the end of each iteration, but teams can do it as often as needed. It focuses on the team and the processes used to deliver software. The goal of retrospectives is helping teams to improve their way of working.

All team members attend the retrospective meeting where they “inspect” how the iteration has gone and decide what to improve and how they want to “adapt” their way of working and behavior.

Typically a retrospective meeting starts by checking the status of the actions from the previous retrospective to see if they are finished, and to take action if they are not finished and still needed. The actions coming out of a retrospective are communicated and performed in the next iteration.

To ensure that actions from a retrospective are done they can for instance be added to the product backlog as user stories, brought into the planning game and put on the planning board so that they remain visible to the team.
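This carry-over step can be sketched in a few lines; the `Action` type and its field names below are invented for illustration, not part of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """An improvement action agreed on in a retrospective."""
    description: str
    done: bool = False
    still_needed: bool = True

def carry_over(previous_actions):
    """At the start of a retrospective, keep only the actions that are
    unfinished and still needed; finished or obsolete actions are dropped."""
    return [a for a in previous_actions if not a.done and a.still_needed]

previous = [
    Action("Automate the regression suite", done=True),
    Action("Rotate the facilitator role"),
    Action("Buy a second whiteboard", still_needed=False),
]
open_actions = carry_over(previous)  # only the facilitator rotation remains open
```

In practice the same check happens on the planning board: finished actions come off, obsolete ones are dropped, and the rest stay visible to the team.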

Why Do We Do Retrospectives?

Organizations need to improve to stay in business and keep delivering value. Classical organizational improvement using (large) programs takes too long and is often inefficient and ineffective. We need to uncover better ways to improve and retrospectives can provide the solution. Many agile teams use retrospectives: to help them solve problems and improve themselves!

What makes retrospectives different from traditional improvement programs? It’s the benefits that teams can get from doing them. The team owns the agile retrospective. They can focus where they see the need to improve and solve those issues that hamper their progress. Agile retrospectives give the power to the team, where it belongs! When the team members feel empowered, there is more buy-in from the group to do the actions which leads to less resistance to the changes needed by the actions coming out of a retrospective.

Another benefit is that the team both agrees upon actions in a retrospective and carries them out. There is no handover, the team drives their own actions! They analyze what happened, define the actions, and team members do the follow up. They can involve the product owner and users in the improvement actions where needed, but the team remains in control of the actions. This way of having teams leading their own improvement journey is much more effective and also faster and cheaper than having actions handed over between the team and other people in the organization.

Retrospective Exercises

How can you run a retrospective meeting that delivers business value? A valuable agile retrospective identifies the most important things that a team wants to work on to improve their process. But what is most important? It can be the biggest, most current impediment your team has. Maybe something is disrupting your team’s atmosphere and they can’t get a hold of it. Or it could be finding the reason why the current iteration failed, or why it was such a big success.

Teams differ, and the things a team deals with can differ from iteration to iteration. That is why there is no single retrospective exercise that always gives the best results. There is also the risk that teams get bored when they always do retrospectives in the same way. A solution is to introduce variation by using different retrospective exercises. So before starting a retrospective, think about which exercise would be most suitable.

The retrospective facilitator (often the scrum master) should have a toolbox of possible retrospective exercises and be able to pick the most effective one given the situation at hand. Here are some examples of retrospective exercises:

  • An easy but powerful exercise is Asking Questions. There are many different questions that you can ask in retrospectives. The trick is to pick the ones that help the team gain insight into the main and urgent issues and identify improvement potential. Asking more detailed questions then lets the team dive even deeper into the retrospective.
  • The Star Fish is a variant on the “What went well? What did not go so well? What can be improved?” exercise. The Star Fish retrospective uses a circle with five areas to reflect on which activities the team should stop right away, which should continue in a reduced role, which should be kept as they are, which should play a bigger role in the future, and which activities the team should start.
  • The Sail Boat is an exercise to remind the team of their goal (the product they need to deliver), the risks they might face, what is slowing them down and, most importantly, what helps them deliver great software. It uses a metaphor of a boat, rocks, clouds and islands.
  • The moods of team members are often affected by problems encountered while working together. Having team members state their feelings in a retrospective using the Happiness Index helps to identify possible improvements. This exercise uses a graphic representation of team members’ emotions.
  • If there are significant problems that a team wants to avoid in the future, you can use the Five Times Why exercise. This exercise uses root cause analysis to get to the deeper causes of problems and to define actions that address them.
  • A Strengths-Based Retrospective visualizes the strengths that your team members and teams have using a solution-focused approach. It helps to explore ways to use strengths as a solution to the problems that teams are facing.
  • When you have an agile project with multiple teams, you can do a Retrospective of Retrospectives to improve collaboration between teams. This is an effective way to share learnings across a project and to solve problems that the project is facing.
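To illustrate how the Five Times Why exercise from the list above drills down, here is a toy sketch; the problem statement and the chain of answers are invented:

```python
def five_whys(problem, answers):
    """Walk a chain of 'why?' answers from a problem statement down to a
    candidate root cause; the classic form stops after five whys."""
    chain = [problem] + list(answers[:5])
    return chain, chain[-1]

chain, root_cause = five_whys(
    "The build broke on release day",
    ["The regression tests were skipped",
     "The test server was down",
     "Nobody owned the server",
     "There is no ops rotation",
     "Team responsibilities were never agreed on"],
)
# root_cause is the last answer: the deeper issue to define an action for
```

The value of the exercise is of course in the facilitated conversation, not the bookkeeping; the sketch only shows the shape of the drill-down.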

Our advice to retrospective facilitators is to learn many different retrospective exercises. The best way to learn them is by doing them. Practice an exercise, reflect how it went, learn, and improve yourself. Feel free to ask any question about retrospectives.

Valuable Agile Retrospectives

Agile retrospectives are a great way to continuously improve the way of working. We hope that this blog post helps you and your teams to conduct retrospectives effectively and efficiently to reflect upon your ways of working, and continuously improve them!

Our book Getting Value out of Agile Retrospectives and the related blog posts and articles mark the beginning of a journey. We are growing a small ecosystem to release more exercises, how-tos, retrospective advice and many other things in the future. If you want to stay up to date, the best way is to subscribe to our Valuable Agile Retrospectives mailing list.

You can download the book Getting Value out of Agile Retrospectives free of charge from:

About the Author 

Ben Linders is a Senior Consultant in Agile, Lean, Quality and Process Improvement, based in The Netherlands. As an advisor, coach and trainer, he helps organizations by deploying effective software development and management practices. He focuses on continuous improvement, collaboration and communication, and professional development, to deliver business value to customers. Ben is an active member of several networks on Agile, Lean and Quality, and a frequent speaker and writer. He is an editor for Agile at InfoQ and shares his experience in a bilingual blog (Dutch and English). You can also follow him on Twitter at @BenLinders.

Posted in Agile Adoption, Agile Methodologies, Lean, Scrum Development, Scrum Methodology

SAFe and Other Frameworks for Scaling Agile: A Case Study

The following is a guest post from Brad Swanson, VP and Sr. Agile Coach at agile42

Scaling agile is one of today’s top challenges for many. I hear it from our customers all of the time. When agile and non-agile worlds collide within an organization, time to market and software quality often suffer. There are a number of fans in favor of the Scaled Agile Framework® (SAFe™) as the solution. I want to point out that while SAFe is highly effective for some organizations, it may not be the solution that’s best for you — OR you may be going about it the wrong way.

To demonstrate my point, I’d like to share this case study…

Introduction

We recently worked with a leading cable TV company that faced long and challenging development cycles with software quality problems. Guided by a small team of coaches*, they successfully implemented the Scrum framework with SAFe to scale up to 150 people delivering their set-top box/DVR software and server-side systems to support the DVRs. The following challenges drove the need for change:

  • 12+ month release cycle; unable to respond to a rapidly changing marketplace
  • Missed delivery dates; schedule slippage
  • Quality problems due to late integration and 3 concurrent versions in development

Agile methods and SAFe reduced time-to-market for major releases from 12+ to only 4 months, the shortest practical timeframe, given the cost of deploying firmware to over 11 million DVRs nationwide. Releases changed to fixed-date; scope was managed and prioritized to ensure that all business capabilities were delivered on time, even though some low-priority features (‘bells and whistles’) were cut to meet the delivery date. Quality improved significantly as a result of earlier and more frequent integration testing, which is fundamental to the agile approach. SAFe was tailored to the organization’s unique needs after piloting elements of it on a smaller scale.

Here’s what we learned:

  • The Agile Release Train model is effective for coordinating efforts of multiple, tightly integrated teams toward a short-term delivery.
  • Many elements of SAFe can be eliminated or scaled back when teams are working on decoupled or only loosely integrated products, features, or components; the Program level in particular may be excessive.
  • SAFe is sometimes implemented in its entirety in a “big-bang” change. This is possible, but extremely challenging and risky. Our recommendation is to implement elements of SAFe in pilot mode to address known pain points, empirically determining which elements work and how, rather than pushing unproven changes to large swaths of the organization.

Scaled Agile Framework overview

The Scaled Agile Framework web site thoroughly describes the SAFe model. SAFe defines three levels for scaling an Agile organization:

  1. Portfolio
  2. Program
  3. Team

At the portfolio level, Lean-Agile principles are applied to balance workload with delivery capacity and optimize value delivered, while aligning architectural efforts. At the Program level, product features are integrated and tested often by a System team. At the team level, multiple agile teams build and test small features within a single business domain or system component, and deliver running, tested features (user stories) on a set cadence (usually every 2 weeks) to the System team. SAFe prescribes fixed release dates with variable scope using the release train metaphor; if a feature misses the train (the date), it has to wait for the next release train.

Other frameworks for scaling agile methods are also useful in many contexts, including Disciplined Agile Development, Large Scale Scrum from Larman/Vodde, and Scrum of Scrums.

Managing change: evolution or revolution?

At agile42, we recommend an incremental and empirical approach to introducing agile practices at scale, rather than prescribing one particular framework to implement. Scaling lean-agile practices is a complex problem and every organization’s context is unique. Long-term success is more likely with an empirical and evolutionary approach, as described in agile42’s Enterprise Transition Framework™.

  1. Assess challenges to identify specific needs for improvement
  2. Pilot changes in a low-risk way
  3. Empirically measure the results of the change
  4. If the pilot succeeds, expand the practice more broadly
  5. Repeat…

In the cable TV case study, agile practices were first piloted by 2 teams. We tried many of the SAFe practices throughout the pilot efforts and used the lessons learned to guide the expansion of agile and SAFe.

 

Where the full SAFe framework was excessive

A different agile42 client, a financial institution, issued a corporate mandate to implement SAFe. In this case, teams did their best to implement all of SAFe in a “big-bang” rollout. It became clear after a few releases (4 months) that significant portions of SAFe were unnecessary, and even counter-productive in their context. Their 5 teams were each working on mostly independent applications, and there was no need for the overhead and coordination of a Program-level agile release train so they abandoned it, allowing teams to operate more independently. An evolutionary approach could have helped this organization learn what parts of SAFe were applicable in their context, in a less disruptive manner.

 

Reducing time to market

In our case study, the cable TV company changed their release cycles as shown in Figure 1.


Figure 1 – Product development cycle before and after

The agile development cycle uses the release train concept from SAFe. Releases have a fixed date, and scope is selected — and adjusted if necessary — in order to meet the deadline. If a feature misses the train, it has to wait for the next train. By aggressively prioritizing scope throughout development, and frequently integrating and testing, this model ensures that a viable product with the most important features will be ready on the planned date.

Portfolio Planning

The R&D organization started with a list of over 150 requests for features (projects) from the business. Senior leadership formed a Product Council consisting of 10 Product Owners (product managers), each of whom was aligned with a particular business area, plus R&D Directors. The Product Owners each made a ‘sales pitch’ for the highest value projects/features in their own domain, and the Product Council stack-ranked all the requests that might fit into a 4-month development cycle based on ballpark estimates. Ranking was accomplished by first scoring each request on a number of criteria: importance to business stakeholders, alignment with strategic initiatives, and cost of delay (urgency). This objective scoring cut the number of ‘contenders’ down to a more manageable number, from which point the Council members used a multi-voting technique to arrive at a final ranking.
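The first-cut scoring step above can be sketched in a few lines. The criteria names, weights, and feature requests here are illustrative assumptions, not the actual values the Product Council used:

```python
# Rank feature requests by a weighted criteria score, as in the Product
# Council's first cut. Criteria, weights, and scores are made up.
CRITERIA_WEIGHTS = {
    "stakeholder_importance": 3,
    "strategic_alignment": 2,
    "cost_of_delay": 2,
}

def score(request):
    """Weighted sum of a request's per-criterion scores (e.g. 1-5 each)."""
    return sum(CRITERIA_WEIGHTS[c] * request["scores"][c] for c in CRITERIA_WEIGHTS)

def rank(requests, cutoff=None):
    """Sort requests by score, highest first; optionally keep only the top
    `cutoff` 'contenders' for the Council's multi-voting round."""
    ranked = sorted(requests, key=score, reverse=True)
    return ranked[:cutoff] if cutoff else ranked

requests = [
    {"name": "DVR cloud sync", "scores": {"stakeholder_importance": 5, "strategic_alignment": 4, "cost_of_delay": 3}},
    {"name": "New guide UI",   "scores": {"stakeholder_importance": 3, "strategic_alignment": 5, "cost_of_delay": 2}},
    {"name": "Log cleanup",    "scores": {"stakeholder_importance": 2, "strategic_alignment": 1, "cost_of_delay": 1}},
]
contenders = rank(requests, cutoff=2)  # shortlist for multi-voting
```

The objective score only trims the field; the final ranking still came from the Council members’ multi-voting, which this sketch does not model.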

Agile team structure

See Figure 2 below for a description of team structure before and after. Before agile was introduced, most of the people worked in large teams organized around technology components: the DVR (client) component and several back-end server components. Most of the business features, however, required both client and server. As a result, there was no clear ownership of the end-to-end business value. In the agile model, most of the people were organized into smaller feature teams (purple in Figure 2 below), each one owning features across client and server for one area of the business. One component team on the server side remained focused on building a major new architectural service. To maintain design integrity across feature teams, virtual platform teams coordinated designs across all the feature teams, as shown by the dotted-line boxes in Figure 2.

At first, the management team thought it wouldn’t be possible to form small cross-functional feature teams because each one would require too many people across too many specialties. So they put the name of every person on a separate card and began moving them around, trying to form feature teams of 10 people maximum. The managers were surprised to find that they could form feature teams with only a few gaps in skill sets and a handful of specialists (such as DBAs) who would need to serve multiple agile teams. Some organizations have accomplished the same structure through self-organization: allowing all the team members to collectively choose teams, rather than having a few managers do it. This organization wasn’t quite ready to embrace that idea.


 Figure 2: Team structure before and after

 

Release train (4-month) planning

Figure 3 below gives an overview of the release train timeline.


 Figure 3: Overview of the release trains from portfolio planning to delivery

  • 4 weeks of portfolio planning
  • 2 weeks for each team’s independent release train planning
  • 1 day for all agile teams to build a combined plan for the release
  • 4 months for building and testing – using 2-week sprints/iterations

With the portfolio priorities clear and team structure decided, each new team spent about 2 weeks doing high-level release train planning. Each release train was a 4-month period culminating in an integrated delivery from all the agile teams. Each Product Owner decomposed high-level business requests (features or projects) into smaller pieces (called stories), and prioritized the stories. The newly formed teams independently estimated the scope they could deliver in 4 months and identified dependencies on other teams.

The entire R&D organization (about 120 people) gathered in one room for the 1-day release planning event, except for one team that joined remotely by video conference.

1-day release planning agenda:

  • VP of R&D shared the vision and goals for the upcoming 4-month release train
  • Marketplace of collaboration: Each of the 10 teams had a large, visible timeline of features they planned to deliver in 4 months. People circulated between teams to better understand synergies and negotiate dependencies. (See Figure 4)
  • Each team adjusted their plan to reflect newly discovered dependencies and adjusted scope
  • All agile teams combined their release plans into a single visible timeline covering the 4-month period. (See Figure 5)
  • A retrospective on the 1-day event: lessons learned to improve the next one.

Figure 4: One team’s release plan on the wall; collaboration with other team members

Figure 5: Combined release train plan for all 10 teams

Delivery Sprints

Each team worked in 2-week sprints (development iterations) throughout the 4-month release train. The system test team integrated the work of all teams every sprint to test new features and run a regression test on the entire system. Some tests were automated but many required manual validation of video. The Product Owners from each team met biweekly (once per sprint) to coordinate their work; additional team members participated when necessary. The final 2-week sprint was a ‘hardening sprint’ with all hands on deck to perform final regression testing.

Results

  • The release was delivered on time with 100% of planned business capabilities delivered and 95% of planned low-level features included.
  • Quality was higher than in previous long-cycle releases: fewer total defects and fewer severe defects were discovered post-release.
  • The 1-day release planning event was an overwhelming success. People really appreciated the opportunity to understand the big picture and quickly reach a common understanding of the goal and scope of the release.

Challenges:

  • Forming feature-oriented teams was initially viewed as impractical due to the large number of specialists required to build the perfect team. Through many rounds of name-swapping, we arrived at a set of teams, each focused on a single business value stream and consisting mostly of full-time dedicated people. A small number of specialists spread their time between multiple teams to fill specific gaps.
  • Regression testing every 2 weeks was possible only because the organization had invested in test automation. Still, some testing was manual and incremental testing was a significant shift for the system testing team.
  • One of the feature teams struggled to integrate the client-side and server-side developers into a truly unified team. The reporting structure and culture separated those two disciplines, and in practical terms they worked as 2 separate teams.

Conclusions

SAFe was an appropriate model for the cable TV company because multiple teams are all building a single integrated and complex product. Prior to adopting SAFe, the organization had already piloted Scrum on 2 teams with the help of experienced coaches, and learned how to make Scrum work in their context. This evolutionary approach to adopting Agile and SAFe was a critical factor in learning how to succeed in delivering on-schedule with high quality.

The experience of the financial institution, on the other hand, where SAFe was mandated, demonstrates the risk of wholesale adoption of a prescriptive framework without first piloting changes on a smaller scale and measuring the results. The financial institution learned that much of SAFe was overkill in their context.

Key takeaways

  • The release train model is effective for coordinating efforts of multiple, tightly integrated teams toward a short-term delivery.
  • Many elements of SAFe can be eliminated or scaled back when teams are working on decoupled or only loosely integrated products, features, or components; the Program level in particular may be excessive.
  • SAFe is sometimes implemented in its entirety in a “big-bang” change. This is possible but extremely challenging and risky. Our recommendation is to implement elements of SAFe in pilot mode, evolving as you learn which elements work and how, rather than pushing unproven changes to large swaths of the organization. The agile42 Enterprise Transition Framework™ takes the evolutionary approach.

*Many thanks to the team of coaches who joined me on this effort: Manny Segarra, Deanna Evans, and Ken McCorkell.

Posted in Agile Methodologies, Agile Teams, Enterprise Agile, Scaling Agile

The Agile Coach on Failure

They say the best way to learn is sometimes through failure. I couldn’t agree more.

Now you may be thinking, “Yeah, that sounds nice, but my boss doesn’t like failure. And I don’t like getting called on the carpet. The culture in my company doesn’t encourage risk taking. In fact, we get penalized for failing via negative performance reviews.”

Welcome to the culture of command and control; management by fear.

Now don’t get me wrong. I agree that we should try to mitigate risk. But on the other hand, we might not come up with that new groundbreaking technology if we never take a chance, step outside our comfort zone every now and then.

Funny thing; agile both mitigates and encourages risk at the same time.

It mitigates risk by employing short, 2-week timeboxes; by allowing the customer to see what we’ve built for them at the end of those timeboxes; by holding team retrospectives to identify what we do well and where we can improve; and by applying agile technical practices.

An agile culture encourages risk taking by making clear that we will never win in the long-term by sticking to the status quo. We must take calculated risks and invest in them. R&D isn’t the only place that spends money on crazy ideas. The IT divisions are becoming increasingly important to a company’s success or failure.

As an Agile Coach, I find one of the hardest things to learn is to let the team fail. Watching and waiting is not an easy thing to do. That said, as a Coach, I’m not going to allow them to make an epic failure that would cost the company millions of dollars. But I do want the team to feel comfortable taking chances and using failures as an opportunity to learn.

If you’re overly protective of your team’s failures, they will recover from failure more slowly, learn less, and become weaker as a team. And, as you might expect, the opposite holds true.

And an epic failure at the end of a long cycle (6, 9, 12 months) is frowned upon even more, as it should be. Accordingly, this is the dig on the waterfall approach to developing software. At the end of the 12-month project, we may think we got it all right. We may have even worked double-time at the end to get there. But when the day comes to go live, we discover that what we created doesn’t work like it should, isn’t quite what the customer asked for 12 months ago, or the market has changed — and this thing we spent so much time creating is no longer valuable.

It’s not uncommon to end up only using 20% of the original requirements list. That’s a pretty large failure risk, if you ask me. Over my 10 years of managing waterfall projects, I’ve had more than a few projects fail in this way. Of course, as the Project Manager, the fingers were usually pointed at me. I loved my job. NOT!

Enter agile development. I made the transition from Project Manager to ScrumMaster very naturally. For some PMs, it’s not a good fit. Hard to shake that command-and-control mentality. But I liked this new and refreshingly realistic way of working and getting stuff done. If we failed in agile, we failed at the 2-week mark, not at 12 months. Simple. Brilliant. The finger pointing ceased. We succeeded or failed as a team. And if we did fail, we got better the next time, or we pivoted and went a different direction. ‘Why hadn’t we all been doing this the past 4 decades,’ I thought to myself. But that’s another blog topic.

Bill Cosby once said, “The desire to succeed must be greater than the fear of failure.” Ruminate on that.

Is your energy focused on succeeding or failing?

Posted in Agile Adoption, Agile Benefits, Agile Coaching, Agile Management, Agile Teams

Scope Creep. It’s What’s for Dinner.

Once upon a time there was a whiny Product Owner, two Team Leads, a Dev Director, an executive, and only 2 days to go in the sprint.

Enter the unplanned feature. The villainous Scope Creep we all know and hate.

But he pays our bills so we stay here late at night. And take it like a champ.

Can you guess what happens?

Watch more Prevent Agile Pandamonium videos like this

Posted in Agile Management, Agile Project Management, Agile Teams, Scrum Software

Make It So… If Only Software Project Management Were That Simple

What if you could successfully execute your software projects like Captain Jean-Luc Picard of the Starship Enterprise just by commanding, “Make it so?”


Sounds silly, doesn’t it? Yet many traditional command-and-control organizations think they can work this way.

Manager types go behind closed doors and, through a series of many meetings, come forth with “The Plan.” The plan directs what is to be done, how it is to be done, when it is to be done, and by whom it will be done. The software developers who must implement the plan are informed of the plan and directed to “Make It So.”

Almost immediately, it seems, a question is raised with the realization: “Oops, that was not considered… it is not in the plan.”

But the Project Manager’s job is to create and then deliver based on “The Plan.” They will be evaluated based on that ability. Their performance and success are measured by variances to the plan, and they report these variances every month to their superiors. As is typical of human behavior, it does not take long for a savvy Project Manager to realize, “Hey, if I want to keep my job, or better yet get a good raise or promotion, I had better not have variances.” Variances to a plan are bad. So the project management function seeks to control and, thus, essentially prevent change. Change becomes a threat to the success of the plan.

So, you must follow “The Plan” and stay on schedule and on budget – deliver the hours and the planned activities and events.

But, the Customer wants everything they asked for, so you must deliver all content as well.  No, they cannot add anything that will change the plan. 

If the developers cannot stay on task as defined, they simply must work harder and longer because “The Plan” must be good.  Then they must explain in detail why they could not accomplish what they were told to do within their budgeted hours.  Just Make It So.

Waterfall project management wants fixed schedule, budget, and content.  Actually what they really want is that big raise for demonstrating how great their plan was and how well they controlled the project through the plan.

If the project is actually well understood up front and what’s to be accomplished is pretty stable, then this approach may actually work. Project managers are smart people too, and they can effectively apply experience and judgment to make good plans under the right conditions. Some industries are very heavily regulated, and only very precise and specific requirements are acceptable. This approach can work well there, and if it does for you, then great… Make It So.

But what if the software product must evolve to meet changing conditions and needs? Today’s rapidly advancing technology means end-users are increasingly fickle about what they want. Much software now models business processes and interactions, either for internal or external customers. Those processes are under constant pressure to change and improve, which means the supporting software must change too. Companies cannot maintain a competitive edge unless they change and evolve rapidly. Software projects must be prepared to do the same. Does it really make sense to tell your customer they cannot have the thing they discovered they really need now because it was not in the plan you made months ago?

Like science fiction, plans rarely mirror actual reality.  Trying to fix schedule, budget, and content is a real challenge for a number of software projects in organizations today.  In reality, one of these parameters must be allowed to flex.  End-user software is very hard to define up front.  Users often do not know what they want, and even when they think they do, they quickly evolve to need something else.  Plans must be able to adapt and yet still provide the business with the insight that it needs to manage teams effectively.

This “Make It So” approach must evolve and, with it, the behaviors of project management and the measurements of project/plan success.  Even really good project managers cannot control the future.

Maybe it is time to “boldly go” and explore a new “agile software development” approach – one that provides immediate visibility into the health and status of the project real-time, and readily allows for a changing parameter (content, schedule or budget) to be understood immediately. 

What do you think?

 

Posted in Agile Adoption, Agile Benefits | 3 Comments

Self-Assessing How Agile You Are

A guest post from Ben Linders’ blog: Sharing My Experience. Ben is a Quality, Agile, Lean & Process Improvement expert, co-author of Getting Value out of Agile Retrospectives, and editor @InfoQ.

Do your teams want to know how agile they are? And what could be the possible next steps for them to become more agile and lean? In an open space session about Agile Self-Assessments organized by nlScrum we discussed why self-assessments matter and how teams can self-assess their agility to become better in what they do.

Becoming Agile over Doing Agile

There are many checklists and tools for agile self-assessments. Some of them focus on “hard” things like agile practices, meetings, and roles, while others cover “soft” aspects like an agile mindset and values, culture, and the conditions for agile adoption in organizations to be successful.

At the nlScrum open space we discussed self-assessing a team’s agility. One conclusion was that most attendees had a strong preference for assessing based upon agile values and mindset, to explore if and how their teams are becoming agile. This way of assessing distinguishes teams whose professionals have really internalized what agile is, know why they do it, and know how it helps them deliver value to their customers and stakeholders, from teams who are only doing agile or Scrum because they have been told to by their managers or organization.

Assessing values and mindset involves asking why certain agile practices and rituals are done. It empowers the agile team by developing a shared understanding of the weaknesses and strengths of their way of working and to decide which steps they will take to become better.

Effective agile teams understand the agile culture, mindset and values. That makes it possible for them to improve their development processes in an agile way. They can use the golden rules for agile process improvement to improve by continuously doing small but valuable improvement actions.

Can teams assess themselves?

As the name suggests, agile self-assessments are intended to be tools for agile teams. The result of an assessment helps a team to know how they are doing to help them improve themselves. Therefore the results of an assessment are intended to be used by the team alone. They should not be used by managers to evaluate the team’s performance or to compare and rate teams.

The question is whether you can expect a team to assess itself. As usual, it depends :-). Teams that have just started with agile can find it difficult to take a step back and explore how they are doing.  They also might not have enough understanding of the why and how of agile to really assess themselves. In such cases an (external) facilitator can help teams do their first assessments.

Agile retrospectives are another great way for teams to reflect on and improve their way of working (read more on how to do them in our book Getting Value out of Agile Retrospectives). They help teams learn to observe and analyze their way of working and to define their own improvement actions.  Many skills that team members develop doing retrospectives are also useful for self-assessments, so investing in retrospectives makes sense.

Finally, an agile coach can help a team develop assessment skills, enabling them to do their own assessments in the future. Soft skills matter in IT, and agile coaches can help people learn and improve those skills, which is also an effective way to help a team become agile in an agile way.

Agile self-assessments

I like the Open Space Technology (OST) approach; it’s a great way to get people together and discuss the things that really matter to them. At the nlScrum Meetup about Scrum Maturity Models hosted by Xebia we did an open space session where we exchanged our experiences with agile self-assessments. This is what we came up with during our stand-up meeting:


Photo taken by Doralin on February 6, 2014 at the nlScrum meetup hosted by Xebia

I already talked about assessing values over practices and why self-assessments are intended to be used only by the team and not by their managers. In our discussion in the open space, and afterward on the meetup forum, several tools and checklists were brought up for doing self-assessments, and several models and frameworks were mentioned that can be used to develop your own assessment. Some of them were already on my list of agile self-assessment tools and checklists, but there were also some new ones, which I added (thanks guys!):

Self assessing your agility

Have you done agile self-assessments? Did they help you to improve and become more agile and lean? I’d like to hear from you!

Posted in Agile Adoption, Lean, Lean Software Development | Leave a comment

Retrospectives Might Be a Waste of Time If…

Retrospective:  looking back on or dealing with past events or situations.  

In projects, it is a meeting to discuss what was successful, what could be improved, and how to incorporate successes and improvements into future initiatives.  In Scrum, the purpose of a retrospective is to inspect and adapt with regards to people, relationships, process, and tools seeking to implement improvements to the way the Scrum Team does its work.

If you believe that you are at the very top of your industry and your competition cannot ever touch you, this article will not help you.  Hope you are right…. To everyone else, keep reading.

So you acknowledge that maybe you might need to improve and grow… but here are 7 signs that you’re not doing sprint retrospectives right. Retrospectives might still be a waste of time if:

  1. You think you are smarter than your employees or peers and thus believe they have nothing to offer you.
  2. You just like to be in control.  After all you worked hard to get to the position you are and they should just respect what you tell them to do.
  3. You think that people are the problem and you really just get tired of their complaints and whining.
  4. You think that if somebody has an issue with the way we do things around here, they are just a trouble-maker and should get with the program, or just move on.
  5. You think that if somebody thinks they can do something better, then they should just go do it themselves to prove their worth and stop asking for help all the time like they cannot cut it on their own.
  6. You promote continuous improvement, as long as nobody makes your area look bad, or expects you to have to do anything new or different.  After all we really just want to look good and promote what we are doing.
  7. You really are just too busy.

You are right then!  Retrospectives and continuous improvement initiatives are in fact a waste of time for your organization or team.  Your employees and peers will not contribute anything of substance anyway out of fear.  Sadly, I suspect there are elements of this thinking more often than we would like to admit.  But there is hope, and these attitudes can change.

To succeed, continuous improvement requires a supportive culture – one of safety and respect.  Establishing that culture can be very hard.  The vision and expectations for the culture must be explicitly clear and communicated effectively to all.  Those who are hindering the adoption of the culture must be refocused immediately.  More on this later….

In a culture of safety, ideas and opinions are respected and not criticized. Dissenting opinions are welcome and people are not punished and devalued for them.   It is understood that healthy debate of conflicting views often results in better solutions and sharing of new knowledge.  People are not viewed as the problem, and problems are viewed as opportunities.

It takes time to establish a healthy continuous improvement mentality.  I suspect that is why Scrum came to build it into the process framework – to ensure that new teams were focused on getting this culture established and healthy.  But this is very hard to do well and often gets abandoned too early.

Over time, high performing empowered teams and organizations just build it into how they work and it is not so much a “scheduled reoccurring event”, as just how we do things here.  It becomes the culture.  It occurs spontaneously all the time as part of the norm without requiring a focused activity to ensure that it gets done.

So, if you think a scheduled meeting called a retrospective is a waste of time or not needed because you have achieved a true culture of spontaneous improvement as a way of working, then I applaud you!!!  To everyone else, I suspect you still have some work to do.

Perhaps the first retrospective your business or team should conduct is to determine why they think they do not need an improvement program.

Leadership should immediately resolve impediments that are holding them back from realizing the benefits of the ideas that those closest to the work can offer.

Posted in Agile Teams, Scrum Development | 4 Comments

How Often Are You Wrong?

While we have made great strides in challenging late integration, lack of collaboration and the obvious need for automation, we still have a ways to go when it comes to ideation, and the dangerous amount of certainty we have when it comes to products and people.

Assume Delivery is a Constant

I once had a physics teacher who often said, “Assume acceleration is a constant,” just before he took us into the land of big thinking. Before we stepped too far into the land of complex learning, he tried to reduce the number of variables so we could focus on the more complex aspects ahead. I use this same idea when working with product teams by helping them to work towards delivery as the constant.

Of course delivery is never a constant, but it is tangible and often deterministic. Teams that build and sustain adaptive ecosystems with well-structured code, high levels of automation, rich collaboration and strong visualizations tend to do well with delivery, and often learn to deliver more than ever before. Thoughtful and aware teams (and programs) quickly realize that as product delivery becomes a constant, product discovery looms large on the horizon, and it is a land that’s messy, clumsy, non-linear and non-deterministic. Best practices rarely help.

Product Arrogance

So, for a minute, assume you’ve made delivery a constant. How sure are you that you are producing the right thing? How do you decide what your next best investment is, and how do you validate your choice? As a tool to help you, I offer the idea of product arrogance. Inspired by Nassim Taleb‘s use of Epistemic Arrogance in The Black Swan, or “the difference between what you know and what you think you know,” Product Arrogance is simply defined as “the difference between what people really need and what you think they need.”

Now, while you are still assuming delivery is a constant (which is no small challenge on its own), ask yourself, “How wrong are you?” as it relates to the product ideas you are chasing. Or examine the flip side: “How sure are you that you are building the right thing?” What makes you sure, and what makes you unsure, are areas of thinking and learning that confront teams when they’ve worked hard to smooth out delivery. It does not matter if they are using Scrum, Kanban or NonBan.

The Myth of Certainty and the Measures of Realities

Many teams I coach talk about a “definition of done,” one of many emergent ideas from the agile community that has helped people learn to deliver. Work deemed done, in the form of working product in a meaningful environment, improves measures and learning, but sometimes induces a false sense of certainty and a dangerous level of confidence that success is near.

Unfortunately, products are only done when they are in use. Watching users in the wild often teaches teams that what they were certain about (“I am sure people will need to …”) is not what people need. It may be that one person’s arrogance, or fear of “not being a good product owner,” is the issue. It could also be the simple fact that the product ideas were right and the market changed the game. When this happens, shedding arrogance and embracing evidence is your best tool for building less of the wrong thing (which allows you to learn fast and spend less).

Embracing Wrongness

Product development, which goes far beyond product delivery alone, is an act of being wrong often. Like science, ideas are tools for learning and need to be viewed with less certainty than an automated test. Where people are involved, as opposed to code, automation is more difficult. People are beautifully chaotic and take unexpected journeys into interesting and uncharted territories. Being ready to be wrong is one way to be ready to learn, and product learning is something we all need to practice, and practice, and practice.

If you have practical experiences to share, please chime in so we can collaboratively learn from being wrong collectively.

Posted in Kanban, Scrum Development | 2 Comments

Cure Your Agile Planning and Analysis Blues: Top 9 Pain Points

Guest post by Ellen Gottesdiener, Founder and President of EBG Consulting

If you’re on a team that’s transitioning to lean/agile, have you experienced troubling truths, baffling barriers, and veritable vexations around planning and analysis? We work with many lean/agile teams, and we’ve noted certain recurring planning and analysis pain points.

Mary Gorman and I shared our top observations in a recent webinar. Our hostess, Maureen McVey, IIBA’s Head of Learning and Development, prompted us to begin by sharing why we wrote the book Discover to Deliver: Agile Product Planning and Analysis and then explaining the essential practices you can learn by reading the book.

As we work with clients—product champions and delivery teams for both IT and commercial products—we strive to learn continually. And that learning is reflected in the book. It tells you how to take actions that will accelerate your delivery of valuable products and will increase your enjoyment in the work.

9 Pain Points to Prevent, Mitigate, or Resolve

Here’s what you need to know, in a nutshell — the 9 pain points we most often see in planning and analysis. (Note: when you read “team,” it means the product partnership: business, customer, and technology stakeholders.)

Inadequate Analysis: Teams start to deliver and then realize they don’t know what to build. Some teams, making a pendulum swing to agile, abandon analysis, trying so hard to go lightweight that they go “no weight.”

Poor Planning: Teams waste a lot of time in planning and meeting without first having a shared understanding of the product vision and goals or the product needs for the next delivery cycle. Planning might be taking too long, or, on some teams, the product champion and delivery team mistakenly think they have sufficient information to plan and deliver.

Frazzled Product Champion: The product champion (what Scrum calls the Product Owner)—the person who makes decisions about what to deliver and when—is frayed, frustrated, overwhelmed, and overstressed. These people, the keepers of the vision and the holders of political responsibility for the value of the product, often struggle mightily to balance their strategic product-related responsibilities with their tactical ones.

Bulging Backlog: Teams accumulate monster, huge backlogs (baselines) of requirements, often in the form of user stories. Every possible story or option for building the product is weighing down the backlog and squeezing or obscuring the highest-value stories.

Role Silos: The team members are acting according to their formal roles, and not focused on the goal. For example, someone always writes the stories, someone else does the testing, and someone else develops. They don’t have a shared way to communicate or a shared understanding of the product needs.

Blocked Team: Teams. Just. Get. Stuck. Waiting. On hold. It even happens to teams using high-end agile project management tools, which are supposed to help them stay organized and efficient. Some of these teams are overwhelmed by the plethora of requirements (see “Bulging Backlog”). Or they have unclear decision rules or don’t know how to define, quickly analyze, and act on value-based decisions. We’ve also observed teams with too few “fresh,” well-defined requirements, ready to pull into delivery.

Erroneous Estimates: Estimates are way off (dare I remind you, most of us underestimate our work). We’ve observed teams that, even after three or four iterations, can’t stabilize their cycle time or speed. Often, they lack clarity about complex business rules and data details, or about the product’s quality attributes (such as usability or performance). That often contributes to our next observation.

Traveling Stories: Traveling stories (no, not traveling pants) are ones planned for a given iteration or release that end up being pushed to a later date. (As you may know, a story is a product need expressed as a user goal. Many agile teams use them, following the canonical format: “As a…I need to…so that…”) Occasionally stories travel due to unexpected technical issues. More often it’s because the stories are “too big” to be completed in a given release. Or at the last minute the team discovers they need an interface. Or they find unexpected business rules for an unexplored regulation. Or data dependencies pop up. Teams are not thin-slicing their stories based on value, and so they’re unable to finish.

Oops: Teams find unpleasant surprises during demonstrations and reviews, or weeks (or months) after delivery. Or worse, they aren’t delivering the right thing, the right value.

Context-Conversation-Collaboration: Pain Relief

You may have heard of card-conversation-confirmation, originated by Ron Jeffries and his coauthors. These “3Cs” explain the critical aspects of user stories, a part of the planning cycle.

Borrowing from Ron, we’ve found 3Cs of our own: agile product planning and analysis means attending to context, conversation, and collaboration. And these practices relieve the 9 pain points we’ve outlined.

Watch the Video
Hear more about our observations of development teams, learn about the underlying principles that we’ve seen work in all kinds of teams, and see how Mary and I integrated them into Discover to Deliver. The link to the video is here. Let us know what you think.

Troubling Truths, Baffling Barriers, and Veritable Vexations. What are your pain points around agile planning and analysis? Share them with us in the comments section below.

Posted in Agile Management, Agile Metrics, Agile Project Management, Enterprise Agile, Lean, Scaling Agile | Leave a comment

Agile Metrics: Measuring Process Value

One of the things I emphasize in my executive engagements is the need to focus on measuring results rather than expectations, since expectations tend to focus more on operational adherence rather than value delivery.

Couched within this conversation is the idea that Agile is not just about efficiency, it’s also focused on effectiveness (value delivery). After all, what good is a process/method that helps you to do what you do faster/cheaper, but ultimately fails to deliver more value to your end users? Agile, therefore, offers the promise to both decrease costs and to increase revenue.

So, if your emphasis, and maybe your sole reason for choosing to go Agile, is to be more efficient, you will likely see the results you expected; your stuff is going out the door faster. But, if this excludes a focus on increasing revenue/value (i.e. product results), then ultimately you’re really just delaying the inevitable death of your organization. No amount of cost reduction can make up for a lack of revenue production. This idea may suggest that what we produce is more important than how we produce it, but I’ll leave that for another discussion.

Dave Gunther, a colleague at VersionOne, pointed out that process tools don’t really measure product results, and I’m not sure they should ever attempt to do this directly. However, if we view the “efficiency” or process/operational side of this equation as a product (something that attempts to solve a customer problem), then we may be able to find something of value in our process worth measuring.

Consider “pirate metrics”, which offer 5 key metrics to consider for a subscription based business model. Ultimately, these are focused on increasing revenue, which is the final metric. Here they are:

  • Acquisition
  • Activation
  • Retention
  • Referral
  • Revenue

Basically, each of these metrics builds on the previous one: if you aren’t increasing your acquisitions, then there is no way you will increase your activations, and therefore you won’t see increased retention or referrals, and ultimately no increased revenue. Or, another way to look at it: you may have received a referral, which is good, but if you aren’t actively tracking how you came to receive that referral, you don’t have a very good chance of recreating the result.
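Because each stage is a subset of the one before it, the five metrics form a funnel, and the stage-to-stage conversion rates show where the funnel leaks before revenue. Here’s a minimal Python sketch of that idea; the stage counts are entirely hypothetical, just to illustrate the calculation:

```python
# Hypothetical pirate-metrics (AARRR) funnel: each stage's count is a
# subset of the previous stage, so adjacent conversion rates reveal
# where prospects drop off before they ever generate revenue.
funnel = [
    ("Acquisition", 10_000),  # found the product
    ("Activation", 4_000),    # signed up, good first experience
    ("Retention", 1_500),     # came back and kept using it
    ("Referral", 300),        # recommended it to someone else
    ("Revenue", 120),         # became paying customers
]

def conversion_rates(stages):
    """Return (from_stage, to_stage, rate) for each adjacent pair."""
    return [
        (a[0], b[0], b[1] / a[1])
        for a, b in zip(stages, stages[1:])
    ]

for src, dst, rate in conversion_rates(funnel):
    print(f"{src} -> {dst}: {rate:.1%}")
```

With numbers like these, the biggest leak jumps out immediately (here, retention to referral), which tells you which “dial” to work on first.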

Eric Ries, in his book The Lean Startup, posited that we make guesses about what will actually turn these metric “dials”, and so proposed that instead of investing a bunch of money to fully develop our “assumptions”, we would be better off applying the scientific method to prove, or rather disprove, them. For example, if you were actively measuring acquisitions, then, hypothetically, every option you choose to implement in your product would affect that measurement. This being true, we can compare the results of each option against the others to determine which options best get us to our goal of increasing revenue (i.e. value). Essentially, this process is an A/B testing model.
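One common way to run that comparison is a two-proportion z-test on the conversion counts of the two options. Here’s a minimal sketch, assuming entirely invented numbers (a control option A and a new option B, each shown to 5,000 visitors, measured on acquisitions):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did option B's conversion rate
    really differ from option A's, or is it just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 200/5000 conversions for A vs 260/5000 for B.
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

The point isn’t the statistics; it’s that each product “assumption” becomes a cheap, falsifiable experiment against the dial you care about, rather than a fully funded build.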

With this model as an example, if we apply it back to the operational/process side of Agile Transformation, what would we consider to be our metrics? Would the revenue equivalent be velocity (I know, don’t compare!)? Would acquisition be equivalent to number of people trained in a boot camp?

What do you think these measurements should be? How do you measure the value of your process options, or do you even measure them at all? Please leave your comments, I’d love to hear what you all think.

Posted in Agile Adoption, Agile Management, Agile Metrics, Agile Portfolio Management, Agile Project Management, Enterprise Agile, Lean | 5 Comments