Better Visibility of Complex Agile Projects: An Interview with CareerBuilder

At an Agile Day Atlanta event, we had the opportunity to interview Andy Krupit, manager of agile development, and Thomas Connell, team lead of the corporate applications support team, at CareerBuilder about why they selected VersionOne Ultimate edition.

In the video below, Krupit talks about how they significantly improved tracking and reporting of complex projects for a global organization, which helped them decrease defects by 25%.

Here are some key takeaways from the videos:

Highlights

  • Improved visibility into projects
  • Enhanced tracking of key metrics
  • Decreased defects 25%

Challenges

CareerBuilder operates the largest online job site in the U.S., is the global leader in human capital solutions, and has agile teams around the world. Initially, CareerBuilder’s agile teams were using whiteboards and post-it notes to manage projects. The CIO saw the success of these teams and decided to scale agile across the entire IT organization. This created a lot of remote teams, and the whiteboards and post-it notes just weren’t working anymore. They tried to use some non-agile project management tools they already had in place, but nothing replicated the success they had using whiteboards and post-it notes. They needed an online solution that reproduced the visual and tactile benefits of physically moving cards across a whiteboard in front of their teams.

Solution

After an extensive evaluation of several leading agile lifecycle management solutions, CareerBuilder was confident that VersionOne provided the best combination of online boards, custom workflows and access to the data. The online boards and custom workflows enabled the remote teams to replicate the success they had with colocated teams and the data allowed them to track progress in ways they couldn’t with whiteboards and post-it notes.

Benefits

“VersionOne provides everyone, from executives to developers, visibility into how we are progressing toward our business goals,” says Krupit. “Before VersionOne they could not efficiently track metrics, but now CareerBuilder is able to help individual teams continually improve quality. In fact, since CareerBuilder implemented VersionOne, defects have decreased 25%.”

Please visit VersionOne’s YouTube page for more video interviews.

Posted in Agile Project Management

Improving Product Quality with Agile: An Interview with ABB Enterprise Software

At an Agile On Deck event, we had the opportunity to interview Scott Madden, senior director of product operations at ABB Enterprise Software, to find out why the organization selected VersionOne Ultimate edition.

In the video below, Scott talks about how they increased on-time delivery to 91%, decreased the defect backlog 40%, and decreased defects released to the customers 30%.

Highlights

  • ABB transitioned 800 team members to a single enterprise agile platform and agile methodology in seven weeks
  • On-time delivery has increased to 91%
  • Defect backlog has decreased 40%
  • Defects released to customers have decreased 30%

Challenges

ABB is a world leader in electrical engineering, composed of nine separate business units. Each of ABB’s business units was run by a product manager who had their own processes, architecture, and tools. Management was manually collecting and consolidating spreadsheets from disparate teams all around the world. In addition, ABB’s siloed product management organizations made visibility into the progress of the entire enterprise portfolio extremely difficult. The senior leadership team recognized that they needed more visibility across the nine business units to improve on-time delivery and product quality.

Solution

ABB transitioned 800 team members from using different tools and development processes to using a single enterprise agile platform and agile methodology in seven weeks. After an extensive evaluation of several leading agile solutions, ABB was confident that VersionOne provided the best combination of enterprise agile software and guidance from enterprise agile transformation experts to help them go from multiple teams with different methodologies and ways of reporting to a single system that brought them all together.

Benefits

Since ABB implemented VersionOne, the defect backlog has decreased 40%, defects released to customers have decreased 30%, and on-time delivery has increased to 91%. VersionOne gives the ABB leadership team greater visibility into how individual teams are progressing. Before implementing VersionOne, it was nearly impossible to track quality on a team-by-team basis. But now ABB is able to help individual teams continually improve quality and accelerate delivery within the context of the enterprise portfolio.

According to Madden, “VersionOne is not just a vendor. They are a partner. From implementation all the way through the life of our relationship with VersionOne, I believe it will be a world-class experience.”

Please visit VersionOne’s YouTube page for more video interviews.

Posted in Agile Project Management

DevOps Trends: Adoption Expanding in Enterprises

During a recent webinar series on “Building a DevOps Culture & Infrastructure for Success,” we asked the audience to rate their confidence with knowing what features and/or fixes are in any given release. We were surprised to find out that only 12% were really sure that they could describe the functional changes within any given release with precision.

To help us further understand and overcome the barriers to realizing the vision of DevOps, we recently surveyed enterprise IT leaders to get their insights on DevOps adoption.

Here are the DevOps Adoption Survey results:

DevOps Adoption Moving Into the Mainstream

Approximately 73% of the respondents are currently using DevOps for production systems, pilot programs, or they plan to adopt DevOps in the next 24 months.

#1 Driver for DevOps – Improve Quality, Consistency & Repeatability

Historically, the need to increase deployment frequency has been cited as the primary factor driving DevOps initiatives. However, this seems to be changing. In our survey, the desire to improve quality, consistency and repeatability was the highest rated DevOps driver (88%). The need to increase deployment frequency has dropped to the second most common driver (62%), followed by the need to reduce the failure rate of new releases (57%). As DevOps practices move further into the enterprise, increasing the overall quality of software delivery may be overshadowing the need for speed.

DevOps Adoption Increasing, But We’re Still in the Early Learning Stages

Only 33% of the respondents said that DevOps has been successful, while 54% said that DevOps has been moderately successful, and 13% said that it has not been successful.


Need to Improve Ability to Track the Flow of Business Value

Perhaps one of the biggest takeaways from our DevOps survey was the need to increase overall organizational proficiency in tracking the flow of business value – from idea to production. Approximately 88% of the respondents gave their organization a moderate or low efficiency rating for their ability to track and manage features and/or fixes.

The Number of Systems Is Part of the Problem

Part of the challenge may be the number of systems that need to be accessed in order to efficiently manage and track features and/or fixes. Nearly 85% of the respondents are reconciling multiple systems to identify the business work items included within a specific environment or release at any given point in time.

Cumbersome Manual Processes Add Extra Effort

In addition, 87% said that pulling a list of features and/or fixes is manual – either very manual with spreadsheets, etc. (29%) or partially manual by combining and/or aggregating data generated by automated tools (58%).
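
To make "partially manual" concrete, here is a minimal sketch of the kind of aggregation script many teams end up writing; the file names and column names are hypothetical, not taken from the survey. It joins an issue-tracker export against a deployment log to answer which fixes actually went out in a given release.

```python
import csv

def load_rows(path):
    """Read a CSV export into a list of dicts (columns are illustrative)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def fixes_in_release(issues_csv, deploys_csv, release_id):
    issues = load_rows(issues_csv)    # assumed columns: id, title, commit
    deploys = load_rows(deploys_csv)  # assumed columns: release, commit
    deployed = {d["commit"] for d in deploys if d["release"] == release_id}
    # A fix "made it" into the release if its commit shows up in the deploy log.
    return [i for i in issues if i["commit"] in deployed]

if __name__ == "__main__":
    for fix in fixes_in_release("issues.csv", "deploys.csv", "2015.08.1"):
        print(fix["id"], fix["title"])
```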

Challenges Due to Disconnected and Fragmented Delivery Tools

So what are some of the challenges that organizations have experienced due to the disconnected and fragmented delivery tools? Here are some of the specific quotes from the survey respondents:

“Missed priorities, missed opportunities and rework”

“Delayed delivery and compromised quality”

“Bad code, bad data, no change management, lack of understanding”

“No traceability and no one knows what is in given releases, raising necessary questions”

“Manual deployment of the features has a high margin of error”

“Unknown and untested code making it to production. Incomplete functionality being delivered.”

“Re-work, extra effort, release delivery risks”

“System in off line for many minutes”

“Functional outages and degraded performance due to not deploying an integrated set of changes – code, database, and configurations – with specific features”

“Conflicts between delivery teams working on related (known or unknown) systems”

“Lack of confidence about possible errors”

Introducing Unified Software Development and Delivery

In my last blog post, I explained that while propagating the vision of DevOps has been a success, executing against it often remains challenging – especially in the enterprise. I believe that successfully unifying plan, develop, validate, deploy and run workflows is still challenging for two fundamental reasons:

  1. Plan and develop work items (features, fixes, stories, etc.) are not directly linked to operational outputs (builds, artifacts, environments, etc.)
  2. Lots of fragmented automation makes it difficult to orchestrate and creates many pockets of siloed data.

In order to make the vision of DevOps a reality, a truly unified platform that supports the end-to-end delivery stream – from idea to production – is a primary requirement. The VersionOne® Continuum for DevOps solution is one example of this type of platform.  For more information, visit https://www.versionone.com/product/devops/

Posted in DevOps

How Unified Software Development and Delivery Makes the Vision of DevOps a Reality

2 Barriers to Unifying Dev and Ops

What are the two barriers to unifying Development and Operations?

Are you finding that DevOps is more vision than reality? Here’s how you can unify the systems that DevOps workflows depend upon to help make your DevOps vision a reality.

DevOps Can Be More Vision Than Reality

The DevOps movement has provided organizations building software with a vision of increased deployment frequency, higher product quality and shorter mean time to recovery, gained from improved collaboration and automation.

While propagating that vision has been a success, executing against it often remains challenging – especially in the enterprise. Ultimately the DevOps movement seeks to tightly unify Dev and Ops workflows, but so far two systemic barriers have kept these functions from becoming truly unified.

2 Barriers to Unifying Dev and Ops

I believe successfully unifying plan, develop, validate, deploy and run workflows is still challenging for two fundamental reasons:

  1. Plan and develop work items (features, fixes, stories, etc.) are not directly linked to operational outputs (builds, artifacts, environments, etc.)
  2. Lots of fragmented automation makes it difficult to orchestrate and creates many pockets of siloed data.

1. Development Workitems Are Not Directly Linked to Operational Outputs

[Figure: agile development workitems on the left, loosely linked to builds, artifacts and environments on the right]

In any software delivery process, there is an inherent disconnect between development workitems and delivery outputs. The image above highlights a common pattern that organizations adopting DevOps face regardless of their level of DevOps maturity. This platform disconnect between functional workitems and delivery outputs makes it very difficult to truly unify development and operations.

Starting with the green box on the left, you have a simple representation of the agile development process. The main units of flow moving through the development organization’s storyboards have traditionally been workitems such as features, fixes, stories, epics, etc… However, once these development initiatives get converted into builds or artifacts and deployed into environments, the linkage gets muddy. At that point, “release” or “deployment” units of flow are only loosely affiliated with their corresponding workitems back in the agile storyboard on the left.

Feature attributes such as cycle time and current status can be tracked accurately while moving within context of the development storyboard, but manual updates to that data are required during downstream delivery. This creates a very weak understanding of the real-time flow of value once you get beyond the planning tool and into the downstream and more “operational” software delivery process.

According to a recent DevOps Survey conducted by VersionOne, more than 87 percent of respondents indicated that multiple systems are required to manually cross-reference features and fixes with their corresponding builds, artifacts and environments. This problem then gets magnified as functional changes “queue up” in later stage environments between release events. This lack of automated manifest reporting makes it increasingly difficult to express with certainty which workitems are included within specific artifacts and deployed into specific environments at any given point in time.

Here are a few questions that are typically difficult to answer with absolute certainty:

[Figure: example questions about which features and fixes are in which builds, artifacts and environments]

It will continue to be difficult for all stakeholders across the end-to-end delivery pipeline to collaborate at the highest level if Dev and Ops platforms are not truly unified. Building DevOps maturity mandates a tight linkage between functional workitems and corresponding delivery outputs to streamline the flow of value and simplify cross functional collaboration.
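
As a thought experiment, here is a minimal sketch of what that tight linkage could look like; the names are illustrative only and not a description of VersionOne's data model. The point is that once a build records the workitems it contains, and an environment records the build it runs, the question "which stories are deployed where?" becomes a lookup instead of a manual cross-reference.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    id: str      # e.g. "S-1042"
    title: str

@dataclass
class Build:
    number: str
    workitem_ids: list   # workitems whose changes went into this build

@dataclass
class Environment:
    name: str            # e.g. "staging", "production"
    deployed_build: Build = None

def workitems_in(env, catalog):
    """Answer 'which workitems are running in this environment right now?'"""
    if env.deployed_build is None:
        return []
    return [catalog[wid] for wid in env.deployed_build.workitem_ids]

# Example: story S-1042 went into build 311, which is deployed to staging.
catalog = {"S-1042": WorkItem("S-1042", "Password reset flow")}
staging = Environment("staging", Build("311", ["S-1042"]))
print([w.title for w in workitems_in(staging, catalog)])
```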

2. Automation Processes and Tools Are Fragmented

[Figure: a fragmented landscape of delivery and automation tools]

A clear and positive outcome of the DevOps movement is the emergence of a plethora of point process automation tools. These tools have been important enablers of DevOps practices and have dramatically reduced the amount of time required to validate, deliver and manage new software. However, the primary data models of these DevOps automation tools are wholly unaware of concepts such as features and fixes. Since these workitems represent the actual “content” flowing thru automation, visibility and traceability at the feature/fix level is critical to driving efficiency in a DevOps setting.

The image above depicts the fragmented delivery environment that frustrates our ability to link delivery outputs with functional workitems. This graphic was shared with me recently by an organization trying to enhance their ability to track the flow of value, in real-time, thru their delivery pipelines. If DevOps is a priority at your organization, this example is probably similar to what you have now or what you will have in the not too distant future.

As this very busy diagram indicates, the DevOps automation tools we depend upon to move value from the initial commit all the way out to production are continuously generating important audit, test and deployment data at every stop across the delivery pipelines. However, this data is often under-leveraged and buried deep inside tools completely unaware of the features and fixes flowing thru them.

Because of this fragmentation and lack of context, it is very difficult to provide critical status and audit data back to DevOps stakeholders. Without a unified development and delivery platform, correlating data generated through delivery pipelines back to specific features and fixes will continue to be a largely manual, error prone and time-consuming process.

4 Costs of Dev and Ops Not Sharing a Unified Platform

The cost of development and delivery not being unified is a missed opportunity. While small and incremental gains toward end-to-end unification have yielded progress, the reality is that most enterprise software development organizations are still struggling to improve:

1. Value Stream Efficiency

Because of the units of flow problem, stakeholders don’t have automated visibility into the status and/or deployed location of the features and fixes flowing thru a delivery pipeline. As a result, manual effort is required to perform continuous business-to-operational cross-reference reporting and analysis that introduces material and unnecessary overhead into the software delivery value stream.

2. Opportunities for Continuous Improvement

The plethora of fragmented point automation generates siloed data that is difficult to access and correlate back to a discrete set of features and fixes without significant human intervention. This fragmentation makes it difficult to collect meaningful statistics that can identify bottlenecks across the entire software delivery chain. This data is the crucial fuel required to drive the kind of continuous process improvements needed to materially increase delivery frequency and shorten time to market.

3. Software Quality & Failure Rate of New Releases

The lack of end-to-end visibility into the entire value stream makes it difficult to know with absolute precision which functional changes have been included in any given build or artifact. This reconciliation process is almost always manual and is susceptible to errors that increase the odds of deploying unstable or incomplete “work-in-progress” into critical environments.

4. Mean Time to Recovery & Slower Analysis

The lack of detailed end-to-end delivery accounting and audit history, at the business level, frustrates the ability to find root causes and issue repairs for defects once they are uncovered. Additionally, this uncorrelated data makes it difficult to perform the detailed analysis needed to identify the system or process failures that caused the introduction of critical production defects in the first place.

What Is a Unified Software Delivery Platform?


In order to make the vision of DevOps a reality, a truly unified platform that supports the end-to-end delivery stream – from idea to production – is a primary requirement. A crucial capability to achieve platform unification is the ability to link together all of the data generated throughout the delivery process. If data can be gathered and correlated at the time of creation, a comprehensive dashboard can be created that supports real-time collaboration across stakeholders.

Most organizations that have multiple agile teams are already using some sort of agile lifecycle management platform to manage priorities and coordinate development activities. By reimagining our storyboards as development, validation, and deployment orchestration hubs, we can unify the planning and development platforms with the infrastructure required to support downstream automation – without ripping out or replacing any of the tools and technology you’ve already implemented.

By leveraging centralized pipeline orchestration, you can better track work items as they move from one stage to the next in your storyboard. Because the orchestration layer understands automation in context of the features and fixes flowing thru it, stories can now be directly associated with the artifacts, builds, config files or deployments, linking these two traditionally decoupled platforms.

When your storyboard is linked with all the DevOps automation tools that move changes from the first commit all the way out to production, you can begin to capture and associate the important audit, test and deployment data generated at each and every point within your delivery pipelines. This is the type of unified software delivery platform that can help make the vision of DevOps a reality.
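
One common way an orchestration layer can make that association, shown here only as a sketch and not as a description of any particular product, is to parse workitem IDs out of the commit messages between two build tags and attach them to the build record it publishes; the ID pattern and tag names below are assumptions.

```python
import re
import subprocess

STORY_ID = re.compile(r"\b[A-Z]+-\d+\b")  # assumes IDs like "S-1042" or "DEV-77"

def stories_in_build(previous_tag, current_tag):
    """Collect workitem IDs mentioned in commit messages between two build tags."""
    log = subprocess.run(
        ["git", "log", "--pretty=%s", f"{previous_tag}..{current_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(set(STORY_ID.findall(log)))

# A pipeline step could publish this alongside the artifact it just produced, e.g.:
# {"build": "311", "stories": stories_in_build("v1.4.0", "v1.5.0")}
```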


Here are a few characteristics of a Unified Software Development and Delivery Platform:

  • Unified DevOps repository that can support robust cross-referencing between business value (features/fixes) and operational objects (builds, artifacts, deployments).
  • Ability to visualize, measure and optimize the journey of features and fixes from idea all the way to production deployment.
  • Robust pipeline orchestration that leverages existing DevOps automation and eliminates or minimizes the need for manual handoffs.

The 5 Benefits of Unified Software Development and Delivery

1. Increased Collaboration Across All Disciplines

Product Owners, Project Managers, Developers, Testers, and Ops team members can more easily collaborate because business work items are linked to delivery outputs providing visibility, traceability and clarity across the entire value stream.

2. Increased Automation and Streamlined Value Streams

The formerly fragmented point DevOps automation tools are now orchestrated by the unified DevOps orchestration engine, reducing the need for human intervention.


3. Increased Deployment Frequency & Shorter Time to Market

With clear visibility of the entire value stream, it is much easier to make the continuous process improvements that can increase delivery frequency and shorten time to market.

4. Improved Software Quality & Reduced Failure Rate for New Releases

The ability to automatically cross-reference any build or binary to the features and fixes included within it – with absolute precision – greatly reduces the chances of testing in the wrong environment or accidentally promoting work in progress or items with unmet dependencies. This capability results in higher release quality with less wasted manual effort.

5. Shorter Mean Time to Recovery & Faster Analysis

Unified audit and traceability throughout the entire software delivery process – from idea to production – will make it much easier to uncover issues prior to deployment. When defects do reach end users, post-mortem root cause analysis can occur in minutes instead of weeks, uncovering the root cause and preventing issues from recurring.

Conclusion

The independent evolution of planning platforms, build automation, testing and release management tools has created a profound and systematic data division between Dev planning platforms and Ops automation. As long as these disconnects persist, achieving the key DevOps ideal of cross-functional collaboration and streamlined process flow will remain a challenge.

Unified Software Development and Delivery is the process of merging these two universes to provide a comprehensive and end-to-end value stream that documents the flow of business value from idea to production. The VersionOne® Continuum™ for DevOps solution is one example of this type of platform.  For more information, visit https://www.versionone.com/product/devops/

Posted in Continuous Delivery, continuous improvement, Continuous Integration, DevOps

“When will it be done?”

Sometime last year, I started working with a Fortune 100 company on a large, distributed product development effort. There were many “refactoring opportunities” – a term a friend once used to describe my code. Like many large efforts spread across locations, there were many constraints.

One day, towards the beginning of the engagement, we were pragmatically introducing agile practices and principles when one of the executives decided to pay us a visit. After a few friendly greetings, he walked up to me and said, “So you’re the agile guy,” using a tone which sort of left me feeling like a suspect who had just been targeted and painted with lasers. He then asked the question at the forefront of his mind: “When will it be done?”

For the first time, I suddenly realized the power of “It.” Without much thought, I quickly replied with the facts: “From what I know, no one has done a good job answering that to date.” Not knowing much about the project, but wanting to provide context, I followed up by saying, “Agile methods will help us be able to tell you what is done, which is the strongest evidence we might have as to when it will be done.”

From Project to Product

For years, I have been helping leaders understand how to use agile methods by reframing the discussion. In this case, I might defend this executive by guessing that “When will it be done?” was the only question he felt he could ask. It could be that he had no other questions in mind, or it could be that past progress data has been so weak or non-existent that an all-or-nothing investigation was the path of least resistance with the best results to date. It could also be that the question comes from years of conditioning around asking “When will it be done?”

All-or-nothing thinking is deep in the ethos of many companies. It may be that this is merely an organizational, or industry, norm that is well established. If, like me, you’ve been in the game for a bit, there is an interesting and unnamed progression that contains the agile movement and provides a challenge for its future.

If the 1990s were the decade of project (on time and on budget), then the 2000s could be viewed as the decade of process (or progress). The rebels who spawned the various methodologies later branded as “agile” were frustrated by a lack of real progress. You could think of this progress as moving from 60% of 100% in the 1990s to 100% of 20% in the 2000s.

[Figure: a continuum showing a progression of certainty, from 60% of 100% toward 100% of x%]

This change is so much more important than “Who is agile and who is waterfall?” This change allows investors to use the whole product (100% of x%) as a way to validate or invalidate their investment, and possibly to change their overall portfolio investments. Or, put another way, it allows for a shift from “on budget” to “is valuable.”

What’s the next best investment?

Often overlooked and under-discussed, agile practices have provided a way to shift toward questioning investments based on incremental evidence of completion. Teams that are earnestly embracing and practicing agile methods often move toward progress as more of a constant. With less worry about “What will we get done?” the new, and more ambiguous, question becomes “What should we get done?”

Scan the figure from left to right again, and you’ll see a progression of certainty. As cycle times for learning decrease, in the form of iterative product increments, we are able to more quickly assess how wrong we were. Using an analogy, if your car is (or was) an unreliable piece of junk, you head out on a journey wondering if you will make it to your destination. If your car is a trusted delivery vehicle, you are freer to wonder about other questions like “Do we still want to go there?” or “How are the passengers doing?” or other more valuable, non-progress-based questions.

Refactoring Our Rhetoric

Changing the dialog from “When will it be done?” to “What is done?” provides an alternative question and new perspective. It challenges both investors (the executive) and producers (the teams) to shift towards validating products and user experiences that are “good enough.”

Concretely, let’s explore a few refactorings that surface when you make the switch to 100% of x%:

  • Planning for complete user experiences supports customer empathy as a guiding force
  • Validation over completion introduces a sort of test-driven product which roots out waste
  • 100% of x% injects the idea of evaluating value returned for product increment investment

While there are many others, let’s explore these three, starting with customer empathy. Thinking in chunks of product, like user experiences, and the validation of each chunk tends to more quickly surface “the who” aspect of product development.

Customer Empathy: The product community in this example was building a game. Games provide a nice basis for validation because play is part of the product. Being overly certain about who might like what is a great way to build the wrong game. Simple tools like pragmatic personas now become powerful validators that can stop the building of the wrong thing simply by challenging the experience a player might have.

Incremental Validation: There are more companies than I’d like to admit who are working hard to build the wrong thing faster. Or, put another way, they are so overly certain that they need to “get it done” that they fail to validate it until there is a ton of it in play. Moving away from “it” and towards incremental validation of a meaningful user experience helps learning happen sooner. It’s not mutually exclusive with agile practices, but learning from meaningful user experience does not simply happen just because you are working in sprints.

Iterative Evaluation: The best way to measure (evaluate) is to test in the market. This is easier for some products than others. For example, it’s easier to deploy and validate a mobile-ready web app than it is to do the same for a pacemaker. As these are obvious extremes, your product most likely sits on a continuum between the two. Asking yourself what you could do to slide toward faster market validation, sooner, is a strong, simple takeaway that you can reflect on immediately.

More pragmatically, when you shift toward 100% of 8% (as an example), you can then ask, “If the first 8% was a poor return, should we still do the other 92%?” Or, you might find that by simply asking how you are going to evaluate that first 8%, you step into a deeper level of early product validation thinking that is often missed when people over focus on “How much can we get done in this sprint?” or as was the case with the executive in my experience, staying stuck in the land of “When will it be done?”
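
As a back-of-the-envelope illustration of that question, with purely made-up numbers, you can compare what each increment cost against the value it demonstrably returned before committing to the next one:

```python
def keep_investing(increments, threshold=1.0):
    """Each increment is (cost, measured_return); flag when the ratio dips below threshold."""
    for n, (cost, value) in enumerate(increments, start=1):
        ratio = value / cost
        print(f"Increment {n}: spent {cost}, returned {value} (ratio {ratio:.2f})")
        if ratio < threshold:
            return f"Re-plan before increment {n + 1}"
    return "Keep going"

# Hypothetical: the first 8% of the product cost 80 units and returned 40 in validated value.
print(keep_investing([(80, 40)]))
```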

But so many people are all about “It”?

After reading this, you’ll find your awareness of “It” as a singular measure is more prevalent than you knew. Most common “Its” live in larger planning efforts where investors are not aware of the power of incremental validation, or in ecosystems where all the investors hear is agile speak instead of product speak or validation language.

If you are an executive, an influencer, or a big boss type, I challenge you to refactor the “its” you hear towards smaller chunks of meaningful investments. If the word smaller vexes you, then shift to an investment mindset: assuming that some investments pay more return than others, what is the right place to invest just enough to learn where you should invest next?

If, deep in your brain, you are still thinking about building software like buying bonds, you need to refactor that metaphor towards hedge fund trading, where a series of small failures are wildly overwhelmed by the large returns around them. If you knew which stocks to buy, you would. Since you don’t, you are forced to engage investments with a measure of certainty, or an awareness of uncertainty, and an eye toward measuring and adjusting based on the evidence and your experience.

If none of that works, buy a copy of Antifragile by Nassim Taleb. He seems to know more about agility than most coaches I know. I mean, look at the title – it contains both fragile and agile in one word.

Posted in Agile Analytics, Agile Project Management

Executive Visibility in Successful Agile Enterprises

A colleague recently asked me, “What if developers only got paid when features ship?”

“They’d only do the easy ones,” I replied, only half-joking.

But as I thought about it more, I asked myself, “What if nobody in the entire value stream – executives included – got paid until features ship?”

Now, that might sound like a far-fetched idea, until you realize that that is exactly the position your business is in – especially if you work in a product company.   Who pays for things they haven’t received?

So, what if that was the case in your company?

How much more collaborative would your organization be?

How many meetings would you NOT have?

And what would your executives care about?

My bet is that your executives wouldn’t worry about tracking anything that doesn’t help them know whether or not features are getting finished and delivered quickly enough.

This bet isn’t based on just an internal hunch.  Over the last couple of years, I’ve asked every executive I’ve spoken with, as well as individuals who have to regularly report to executives, just what it is that they really care about.

The consistency of the answers I’ve received is remarkable:

“Time to market”

“Speed to cash”

“System lead time”

They might be using slightly different terms, but they’re all saying the same thing.  They’re saying that the most important thing they can know is how long it takes from the time they realize the need for some capability to the time that capability gets into the hands of their customers.

Why is this so, even though the “you get a paycheck only when features ship” policy isn’t in effect?

Well, we could point to the abundance of “lean business” awareness out there today.   Or we could talk about how, in today’s business climate, you’re either disrupting your competitors or they are disrupting you.  And we’d be right.

We could reason that small batch sizes and short lead times actually reduce costs and increase opportunities for revenue, along with providing the continual feedback that helps us make sure that we’re building the right things.  Again, we would have a strong case.

But if we step back and just think about metrics, we can see that the things we could measure, like internal cycle times, WIP defect trends, escaped defects, and low-level burnup and burndown rates, have one thing in common:  they all affect the lead time of your software development and delivery system.

That’s why, in my opinion, if you could only measure one thing, system lead time would be it.  And that’s why I believe I’m not hearing executives of successful agile enterprises asking for low-level metrics – at least not for their primary decision-making information.
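
If system lead time is the one measure you keep, the calculation itself is simple. Here is a minimal sketch, assuming you record when a need was identified and when the capability reached customers; the field names and dates are hypothetical.

```python
from datetime import datetime
from statistics import median

def lead_time_days(identified, delivered, fmt="%Y-%m-%d"):
    """System lead time: from recognizing the need to putting it in customers' hands."""
    return (datetime.strptime(delivered, fmt) - datetime.strptime(identified, fmt)).days

# Hypothetical records of shipped capabilities.
shipped = [
    {"feature": "single sign-on", "identified": "2015-03-02", "delivered": "2015-05-20"},
    {"feature": "audit export",   "identified": "2015-04-10", "delivered": "2015-06-01"},
]
times = [lead_time_days(s["identified"], s["delivered"]) for s in shipped]
print(f"Median system lead time: {median(times)} days")
```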

Start High, Drill Down Only When Necessary

Executives need visibility into the measures that are relevant to their responsibility for business outcomes. This visibility also needs to be easily accessible.

If, as more executives are telling us, the measures that best indicate organizational performance are those related to the speed with which they can deliver, then those measures are the ones that need to be at their fingertips.  If all is well with those, there probably won’t be a need to drill down further.

This doesn’t mean that lower-level measures aren’t valuable.  It’s in those measures that we often find clues to what needs to be tweaked to continually improve lead time.

But starting low requires triangulation and analysis, which is time-consuming and subject to misinterpretation.  Said another way, it’s expensive and confusing, and who wants that?

By focusing reporting at a high, outcome-oriented level, executives can concentrate on what’s immediately important to them.  If those higher-level measures start to trend negatively, then they can explore the underlying data in more detail.

Configuring your executive views with this high-to-low progression in mind will allow you to help your executives make better decisions more quickly.  And better, quicker decisions can make the difference between an organization that thrives and one that doesn’t.

Which kind of organization do you want to belong to?

Learn more about Scorecards and other executive visibility options in VersionOne.

Posted in Agile Analytics, Agile Executive, Agile Leadership, Agile Metrics

Measuring Agile Success?!?#?

About six months ago, I wrote a blog post called Top 10 Tips for Measuring Agile Success, and the reality is that it wasn’t so much a set of tips as a blog about the top ten ways people responded to the VersionOne State of Agile survey, along with some related metrics that support them. Way before that blog was ever published, the question of how to measure agile success was a common one that I and many other agile coaches would receive when working with organizations and executives. Since the blog was published, I’ve had more questions and, in some cases, some rather odd reactions to the concept of measuring agile success. Some questions are very direct — “which metrics really work?” Or, “which metrics should be used at the various levels of the organization?” Then there are the reactions or questions like, “aren’t you aware of the impact of metrics?” Or the statement, “suggesting the one way is ridiculous.” Or, the best reaction, “dude, I hate metrics.”

Okay, I can accept all this and I get the confusion and general concern, and trust me — I share some of these sentiments. Instead of looking at the question from the standpoint of which metrics are best, let’s explore how we measure agile success and why it is important.

Let’s start with the “why”, and I think the primary “why” is obvious — the cost of change can be significant. There’s not only a tangible investment in training, coaching, reorganization, staff changes, and even re-engineering the physical environment, but there’s also the significant intangible cost associated with productivity loss due to teams reforming, working through the chaos, and emerging through the change, usually with something that looks much different than what you started with. I don’t think I’ve been around a team or organization going through the change associated with adopting agile that hasn’t had staff turnover, fits-and-starts, and a brief time of general struggle, both for the people and the software output, as everyone comes up to speed. So, trying to understand the return, or the offsetting value gained, is an important reason to measure agile success. To that end, it’s not really measuring agile success; it is better stated as measuring the success of the process change investment that the organization is embarking upon or has recently spent six months enduring.

Another “why” for measuring agile success is to enable the PDCA loop. The PDCA loop (a.k.a. the Deming Circle or Plan-Do-Check-Act [Adjust]) is a core business and leadership practice, and it is called out in all lean and agile approaches. The concept is simple — establish a goal, decide what you are going to do, get it done, inspect the results, make adjustments based on observations, and then do it all over again as you march to the goal — the essence of iterative development and continuous improvement. Measuring the organization’s progress and performance allows for the inspection to occur; thus, you adapt and get better the next time around.

So, we need to ensure that the organizational change we’ve embarked on is making the positive impact we expect and a key part of ensuring this is measuring to enable continuous improvement.

How we measure our agile success is a bit more complex — mostly because there are two things to measure. First, we need to measure the adoption of agile principles, processes, and practices. Second, we need to measure how our organization is performing to assess the impact of changing to agile.

The approach to measuring agile process success is generally based on agile assessments, which aim to identify where your organization is on an “agile maturity” spectrum. There are several long-established approaches that internal and external coaches use. The concept of measuring maturity is simple: conduct a self-assessment based on both quantitative and qualitative measures in several areas, including team dynamics, team practices, requirements management, planning activities, and technical practices (just to name a few). For these measures to mean anything, you need to start with a baseline (how mature are you today?) and then select a reasonable cadence to re-assess on your road to … more maturity? There are some very useful existing maturity assessments out there, including Agility Health, the classic Nokia Test, and about 20+ others listed on Ben Linders’ blog.

Agile assessments do have some aspects of measuring impact; however, the focus is generally isolated to certain areas and/or used to reflect the success back to the process. Measuring agile success from the standpoint of impact on the organization should be more focused on the Moneyball metrics of the business. Measuring impact is sometimes much more difficult because it can be hard to tie a direct correlation between the agile delivery metrics and the traditional business metrics. It is also difficult because of the lack of understanding of the agile delivery metrics. Making matters worse is how people sometimes focus on the wrong ones, which takes me back to the Moneyball reference. It’s important for organizations to select the right metrics to focus on and the right ones to tie together. As mentioned by Michael Mauboussin in his HBR article The True Measures of Success, leadership needs to understand the cause and effect of metrics. What this means is that metrics, if not selected correctly, can provide misdirection and can result in misbehaviors — basically, people will make bad decisions and game the metric.

To give you an example of a [not so solid] agile success impact metric, let’s look at a common metric that people often argue about – sales revenue tied to the delivery organization’s velocity based on story points (e.g., revenue / velocity). The first challenge with this is using the terms story points [and velocity]: you tend to lose or confuse people not familiar with the concepts and, if they do know them, an argument about estimation generally ensues and people often change their point measuring stick. To avoid this challenge, go with safer, lean metrics or, simply put, the count of stories or things (great advice from Jeff Morgan – @chzy). The next challenge with this metric is that it may be too generalized and not really lead to better results. There may be better goal-focused measures, such as publication mentions after a release that lead to an increase in the number of product trials, or possibly a goal of reducing support tickets that leads to improvements in customer retention or renewals. All of these are good, but alone they don’t necessarily provide an ability to measure agile success. To help assess your agile success, correlate the impact metrics with the lean, agile metric — the number of stories delivered during the same period. For example, use the number of stories delivered to normalize product revenue, the number of web visitors, the number of trials, and the number of support calls. Watch and assess these trends over six months and see the impacts.
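
As a sketch of that normalization, with made-up numbers and "stories delivered" standing in for whatever lean count you settle on, the idea is simply to divide each business measure by the stories delivered in the same period and watch the trend:

```python
# Quarterly snapshots: business measures alongside the count of stories delivered.
quarters = [
    {"period": "Q1", "stories": 40, "revenue": 200_000, "trials": 300, "support_calls": 120},
    {"period": "Q2", "stories": 55, "revenue": 240_000, "trials": 420, "support_calls": 110},
]

for q in quarters:
    per_story = {k: round(v / q["stories"], 1)
                 for k, v in q.items() if k not in ("period", "stories")}
    print(q["period"], per_story)
# Watching these per-story ratios over roughly six months shows whether the change is paying off.
```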

I recently read a book called RESOLVED: 13 Resolutions for LIFE by Orrin Woodward. Although the book is aimed at leadership development, one of the resolutions talks about establishing and maintaining a scoreboard. The idea is that we should have a set of metrics that we constantly visit that help to power our PDCA loop. This is a long running practice in business, and if you don’t already, I suggest you establish a scoreboard that helps you measure your agile success. It should include metrics from your process adoption assessment as well as your organization’s agile-adapted Moneyball metrics. In agile we often talk about big-visible charts, your agile success scorecard should be one. Share the results of your agile journey and the impact it is having on your organization, and help people understand what the metrics mean and what decisions or actions should be made based on the indications of the metrics. There will be times things don’t look good, but done right, your agile success scorecard should help spur and inspire an environment of continuous improvement that embraces agile principles and practices you’ve embarked on implementing.

Although I don’t call out any specific examples of agile success scorecards, it would be great if you would share your examples, metrics you like, or resources that can help others.

There are many worthy reads on this topic, but a couple more that I like are Agile Fluency, established by Diana Larsen and James Shore, as well as this article by Sean McHugh, How To Not Destroy your Agile Teams with Metrics.

Posted in Agile Analytics

Frameworks for large agile projects?

Things are getting more and more interesting with the use of agile in larger and larger projects. We now have a number of frameworks that we can use, such as LeSS, Scaled Agile Framework® (SAFe®), DAD and Scrum at Scale. These frameworks can all be investigated with a few clicks of your mouse. And, in true internet style, a number of people are telling us that these frameworks are bad – that they are prescriptive, or that they lack flexibility. If you look, you can find the flame wars with messages such as:

  • Frameworks are bad and you should simply make up your own approach
  • My framework is better than your framework
  • Frameworks are not agile
  • And indeed many others

I have a different view. These frameworks contain many years of experience from people who have been working in the software industry and have a rich experience. They have recorded their ideas and given us information about things that work. They are uncovering better ways of developing software by doing it and helping others. Where have I heard that before? If you think that you have more experience than all these people combined, then of course you should go your own way. But if that is true, please tell us what your experiences are!

So which framework matches your needs best? Now that really is something that only you can answer, although you can take advice. All of the frameworks have something to recommend them, and while they are all built on what turns out to be very similar foundations, they do sometimes assume a different starting point. Some are for people who are more experienced, while some offer more structure to help you get started.

All of the frameworks include the principle of continuous improvement, meaning that they should all be seen as a starting point. As you learn, you will apply your lessons through inspect and adapt, or the familiar Deming cycle of PDCA. You own the framework that you adopt!

The warning is that frameworks are not a software development silver bullet. They will need investment and effort to establish and grow. How to design your framework, how to build it, and how to get the people ready are really key questions. Are you at a starting point for a framework or do you need to spend more time establishing your basic agile teams, educating the people or exploring your lean process?

The experience is that framework implementations that are nurtured and supported exceed expectations, while those that are established in the hope of a quick and easy miracle deliver as expected.

Good luck!

Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.

Posted in SAFe, Scaled Agile Framework, Scaling Agile

How ADLM Gobbles Up DevOps

In the 1980’s and 90’s, the business software landscape was dominated by a diverse list of cutting-edge companies such as Best Software, i2, Brock Control Systems, Mapics, Ross Systems, Infinium, FBO Systems, Manugistics and MSA (of course I could go on and on). Now long gone, these and hundreds more really great companies have been gobbled up or rendered obsolete by the rising class of ERP giants. In this article I’ll explain why history will repeat itself leading to the extinction of most DevOps tools as leading ADLM platforms continue to assert their dominance across the diverse software development and delivery automation ecosystem.

History Informs Our Future and The Evolution of ERP

You already know that for the past twenty years, most companies have leveraged some form of ERP system to manage virtually every core business process. One benefit of this tightly integrated solution is a powerful inter-functional data flow that enables corporate agility and provides the highest level of visibility. This super-integrated architectural model has become the standard adopted by virtually every enterprise around the globe. What you may not know is that today’s “ERP model” is the result of four distinct evolutionary generations that I believe help predict the next major evolution of automated software delivery.

Phase 1 – Automation
Enterprise Information Systems (EIS) – In the 1960’s, early automation systems were developed to support important individual business functions such as general ledger, inventory management, billing, payroll, etc. These systems were architected completely independently of each other and added little value to the enterprise beyond their narrow scope.

Phase 2 – Core Data Model
Manufacturing Resource Planning (MRP) – Then in the 1970’s, the idea of a master production schedule was devised so that a few of these isolated systems could gain greater visibility into future inventory and product requirements of the organization. The big idea behind the master production schedule was the creation of an open data model that could be leveraged by other systems impacted by the manufacturing production schedule.

Phase 3 – Expansion
Manufacturing Resource Planning II (MRP II) – In the 1980’s, software vendors began to build and sell off-the-shelf packages that promised “best of breed” process design. These solutions provided tightly integrated versions of the key manufacturing processes required to produce products.

Phase 4 – Business Process Domination
Enterprise Resource Planning (ERP) – Finally, in the 1990’s, business software vendors expanded well beyond the manufacturing scope by creating highly-integrated solutions that now cover just about every standard business function imaginable. These systems leverage a unified data model to dramatically improve visibility, consistency, accuracy and planning capabilities enterprise-wide.

What Does ERP Have To Do With Automated Software Delivery?

The evolution of ERP has taught us how natural pressures forced the creation of a unified and comprehensive “business data model” spanning the entire enterprise. Those software vendors with enough influence to dictate that data model were the ultimate winners in the ERP space.

In the very first generation of ERP (EIS), software was leveraged to deliver a high degree of automation to many business processes that were previously manual. Over time, it became clear that these newly automated processes were interconnected and the evolution towards a tightly integrated and unified data model was underway. The business objective that fueled each successive generation outlined above was the need to design more efficient business processes that increased organizational visibility and agility.

As Marc Andreessen famously said in 2011, “…more and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense.” (Why Software Is Eating The World). That statement resonates even stronger four years later. Today, virtually every corporate organization is seeing the familiar pressure to deliver software more efficiently and reliably. If history is indeed our guide, any highly fragmented and/or isolated process required to deliver incremental software change will face mounting pressure to be merged into an integrated end-to-end enterprise-grade platform that can deliver improved cross-functional visibility with even greater efficiency.

Why Application Delivery Lifecycle Management Will Win

In my view, there are really only two broad solution categories in the realm of automated software design and delivery – Application Development Lifecycle Management (ADLM) and the catch-all term DevOps (which I’ve hijacked here to describe any other type of process automation that assists the software delivery process).

DevOps tools are often narrow point solutions that have sprouted from open source projects, in-house development or commercial vendors. Organizations rely heavily upon a diverse collection of these DevOps tools to help document, validate and automate a steady flow of software change along its path to end-users.

Here’s the problem DevOps tools are beginning to face: Like the EIS systems of the 60’s, fragmented DevOps tools have little or no visibility into the overall end-to-end process; however, they do generate lots of important data that is often locked away and isolated. This isolation creates a clear barrier to efficiency, visibility and agility across the software delivery process. Because of the limited function each individual DevOps tool performs, none have the gravitational pull required to define the larger data model. As comprehensive enterprise software delivery platforms emerge elsewhere, standard DevOps tools will face ever-increasing pressure to fold inside them.

Currently, Application Development Lifecycle Management (ADLM) solutions provide a platform to manage development projects, team resources and all manner of development activities. ADLM platforms also contain the “master production schedule” for every development initiative – past, present and future. The data contained within ADLM is now at the core of a quickly emerging software delivery data model, and leading ADLM vendors are expanding their footprint well beyond traditional use cases. When it comes to ownership of this software delivery data model, I see no other solution category across the entire ecosystem with enough enterprise clout to pose a serious challenge to leading ADLM vendors.

Unified Software Delivery Platforms Are Already Emerging

Each of the five current ADLM leaders (according to Gartner’s most recent Magic Quadrant) is now racing to bring to market an enterprise software delivery platform that integrates many key DevOps capabilities.

Here’s my two cents on each…

The giants in the space – IBM and Microsoft – both have plenty of muscle and IP today. Clearly both are moving down the path toward a comprehensive software delivery platform. IBM acquired DevOps vendor urban{code} several years ago and is hard at work building its developerWorks platform. Seemingly every day, Microsoft is adding some kind of DevOps capability to its Visual Studio product suite. Still, I don’t see either vendor gaining much traction outside of their traditional (albeit very large) customer bases. Perhaps more importantly, neither seems to have bona fide credentials within the super-influential agile development community, and I believe this kind of street cred (at least for now) is a must-have to dominate this space.

Atlassian does enjoy wide support among the agile community and no doubt has the broadest adoption footprint of any of the current ADLM leaders. Atlassian is in a strong position to mount a serious threat. However, Atlassian’s core product, JIRA, is widely believed to lack heavyweight depth across the ADLM feature spectrum, and it is often implemented as a departmental or “team tool”. They’ll have to develop deeper strategic planning and multi-team project capabilities to beat the rivals.

This May, software giant Computer Associates announced a definitive agreement to purchase ADLM heavyweight Rally and its agile development platform. In its announcement, CA said it intends to leverage Rally’s capabilities to “complement and expand CA’s strengths in the areas of DevOps and cloud management”. With the crucial addition of Rally, CA is now in a strong position to assemble its diverse capabilities into a single unified and enterprise-caliber software delivery platform. Now… can they seamlessly integrate all of the pieces-parts into a cohesive solution with a unified data model? If so, how long will it take?

Finally, I believe VersionOne may have a slight edge over the other ADLM vendors in the race toward a unified software delivery platform. I may be a bit biased because of my direct involvement in a joint project currently underway – nonetheless, here are four reasons why they will absolutely be a dominant force to reckon with:

Vision: Robert Holler, VersionOne CEO, is clearly buying into the “enterprise software delivery platform” vision. He and his team have a well thought out strategy and they are actively executing against that strategy.

DevOps Automation: VersionOne has partnered with ClearCode Labs, and both teams have been hard at work integrating ClearCode’s Continuous Delivery Automation framework into the VersionOne core product. This integration provides VersionOne the ability to orchestrate virtually any DevOps tool or platform and (just as importantly) incorporate all related data across VersionOne’s product suite to feed its quickly expanding data model.

JIRA Integration: VersionOne has just announced a tight integration with the JIRA platform. This integration will give them the ability to fold fragmented JIRA installations across the enterprise into the unified VersionOne platform, providing a more strategic and enterprise-grade solution.
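
The mechanics of that integration are not described here, so treat the following as a purely conceptual sketch of the “folding” idea: pulling open issues from several separate JIRA instances into one consolidated list via JIRA’s standard REST v2 search endpoint. The instance URLs and credentials are placeholders.

```python
# Conceptual sketch (assumptions): aggregate open issues from multiple
# departmental JIRA installations into a single consolidated backlog view.
import base64
import json
import urllib.request

# Hypothetical departmental JIRA instances and read-only credentials.
JIRA_INSTANCES = [
    {"base_url": "https://teamA.example.com", "user": "reader", "token": "..."},
    {"base_url": "https://teamB.example.com", "user": "reader", "token": "..."},
]

def fetch_open_issues(instance: dict) -> list:
    """Query one JIRA instance for open issues via the REST v2 search API."""
    auth = base64.b64encode(
        f"{instance['user']}:{instance['token']}".encode()
    ).decode()
    url = instance["base_url"] + "/rest/api/2/search?jql=statusCategory!=Done&maxResults=50"
    req = urllib.request.Request(url, headers={"Authorization": "Basic " + auth})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [
        {"source": instance["base_url"], "key": i["key"], "summary": i["fields"]["summary"]}
        for i in payload.get("issues", [])
    ]

if __name__ == "__main__":
    unified_backlog = []
    for jira in JIRA_INSTANCES:
        unified_backlog.extend(fetch_open_issues(jira))
    print(json.dumps(unified_backlog, indent=2))
```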

Availability: VersionOne’s automated delivery platform is available now and they are demonstrating their comprehensive solution to the eager agile community this week at the sold-out Agile2015 conference in Washington, DC.

Summary

The top 5 ADLM vendors are already well on their way toward developing enterprise-grade software delivery platforms that will consume many of the current “DevOps” automation solutions. Soon, development organizations will benefit from a comprehensive platform that can deliver increased efficiency, visibility and agility when compared to the heterogeneous solutions that have been cobbled together today.

About the Author

Dennis Ehle is a pioneer and thought leader in continuous delivery automation and agile delivery methodologies. Dennis is passionate about helping agile teams dramatically reduce the transaction cost associated with delivering incremental change. His company, ClearCode Labs, does just that by helping organizations continuously deliver high-quality software releases more frequently, merging proven methodologies with empowering tools and technology. Twitter: @DennisEhle

Article originally posted on DevOps.com

Posted in DevOps | Leave a comment

Agile 2015 Conference Highlights: Saluting Enterprise Agility

I am just returning from a fantastic week at the 2015 Agile Alliance Agile conference, held August 3-7 just outside Washington, D.C., and wanted to share some highlights with those who were unable to attend. This conference attracts international interest and was attended by over 2,300 participants, including experienced practitioners looking to refine their game as well as novices seeking to join in and reap the powerful benefits of the mainstream set of values and principles that we call “agile”.

As a title sponsor, VersionOne featured the latest innovations in its Enterprise Agile Platform to help enterprises succeed with scaling agile, our support for the Scaled Agile Framework® (SAFe®), and new capabilities such as TeamSync™ for JIRA.

The industry focus on DevOps continues, as do discussions on navigating barriers to change and scaling successfully. VersionOne featured a unified DevOps solution, showcasing demonstrations of the new ClearCode integration that enables an automated visual flow of change throughout the software cycle, from discovery through final delivery.


The VersionOne theme, “Enterprise Agility: Revolutionizing How Teams at All Levels Work Together,” resonated with the conference sessions and discussions focused on scaling agile across enterprises. At VersionOne, we know that revolutionary change, change that really matters, can only be achieved by people working together at all levels. Conference sessions and experience reports discussed keys to successful transformations, including the importance of executive support and of addressing the underlying culture and the soft skills needed to succeed. Conversations at the VersionOne booth included Dean Leffingwell, creator of the Scaled Agile Framework® (SAFe®), sharing insights on scaling agile. Jeff Sutherland, Scrum co-creator, was also spotted sharing insights at the booth.


VersionOne toasted 10 years of the State of Agile™ survey, the industry’s longest-running agile survey, by serving champagne during the Wednesday evening vendor show. A very popular tribute, needless to say! And if you have not done so yet, please take a few minutes to participate in this year’s State of Agile survey (you might win an Apple Watch). Go to www.stateofagile.com.

VersionOne consultant Susan Evans gave an inspirational experience talk about following your beliefs to ensure your happiness and motivation at work. Write your own career user story with job-satisfaction acceptance criteria. Are you in the right job? Do you love your job? Read her three-part blog series on this topic: http://blogs.versionone.com/agile_management/2015/01/05/99-problems-but-a-coach-aint-one-part-1-of-3/

Steve Ropa, VersionOne consultant, presented “Agile Craftsmanship and Technical Excellence: How to Get There”. To change your organization, set an example of “this is what we do here”. Seek to become a mentor to others and to engage in continuous learning. Read his related blog post: http://blogs.versionone.com/agile_management/2015/08/06/how-to-become-a-software-craftsman/

Also, Satish Thatte, another VersionOne consultant, gave a light-hearted talk on “Scaling Agile Your Way,” based on his blog series: http://blogs.versionone.com/agile_management/2014/10/14/scaling-agile-your-way-how-to-develop-and-implement-your-custom-approach-part-4-of-4/

Then, of course, there were evening festivities. The best party to be invited to was hosted by VersionOne at Bobby McKey’s Dueling Piano Bar, featuring very talented musicians and songs we all knew and loved. A great time was had by all. No walking out early here!

The conference party theme on Thursday evening was Super Heroes. Of course, the real heroes attending that night were the industry leaders who had the vision and the courage to guide their organizations and teams to a winning strategy focused on a culture of agility and lean principles. One of the sessions, presented by Michael Hamman, described agile (transformational) leadership as the ability to grow adaptive capability across all aspects of the organization. In another session, Doc Norton encouraged adopting an experimentation-oriented mindset by challenging assumptions, compliance, and fear of failure. In the closing keynote, James Tamm encouraged us to examine our own personal defensiveness as a way to overcome conflict and unhealthy cultural dynamics so we can move into an open, trusting, and collaborative culture.

Not surprisingly, given the venue, a number of session topics focused on agile in government, dispelling once and for all the myth that agile cannot be successfully applied in the government sector. Government agencies often face more ingrained cultural challenges to agile adoption than their commercial counterparts, including:

  • Federal policies that agencies are audited against, and contractor relationships dictated by contractual requirements that assume traditional, “waterfallish” approaches
  • Earned value reporting and accounting driven by artifacts and activity rather than outcomes
  • Contract competitions that stifle collaboration
  • Command-and-control hierarchies that restrict the flow of information and innovation

However, this is changing, and many government agencies are overcoming these barriers and realizing the benefits of agility. Having come from the government sector myself and knowing that agile works, I hold this success near to my heart. And frankly, who should want to see this success more than taxpayers: a government delivering a continuous stream of value efficiently.

To summarize key takeaways:

  • Scrum is more than a set of processes and activities; it is about the continuous delivery of value and getting things done.
  • Large organizations across all industries are scaling agile across the entire enterprise and discussing how to optimize results.
  • To streamline delivery cycle time and improve time to market, you must tackle DevOps; this is the new focus of improvement in many organizations.
  • True agile transformation must address individuals and interactions, establishing a culture of trust, collaboration, and alignment to vision and goals. This requires executive-level commitment and action.

The closing keynote cited a 755% difference in net income improvement between collaborative and adversarial work environments. We need more leaders willing to tackle these challenges and deliver. It is never too late.

Whether you want to initiate an enterprise-level agile transformation or just revitalize your practices, visit http://www.versionone.com/customer-success/ for information on getting started with our solution programs.


Finally, mark your calendars for next year’s conference. It will take place July 25-29, 2016, in Atlanta, Georgia, home base for the VersionOne family. It promises to be even bigger and better! Hope to see you there next year for another great learning opportunity and a chance to reconnect with old friends and to meet new associates who share your passion for enterprise agility.

Posted in Agile Project Management | 1 Comment