We recently interviewed Kel Koenig, release train engineer at Dean Health Plan, to find out why the organization selected VersionOne and the Scaled Agile Framework® (SAFe®) to accelerate its agile transformation and achieve its business goals.
In the video below, Koenig talks about how the company is increasing IT throughput, improving software quality, and increasing collaboration with its business partners.
Accelerated agile transformation
Increased IT throughput
Improved software quality
Better collaboration with business partners
Dean Health Plan, one of the largest integrated health care systems in the U.S., needed to improve the efficiency of IT and reduce the time spent designing upfront requirements. The company decided that SAFe provided the best opportunity to meet these goals. As Dean Health Plan began its agile transformation, it also realized it needed an enterprise agile platform that supported SAFe.
After an extensive evaluation Dean Health Plan chose the VersionOne Enterprise Agile Platform. The organization decided that the combination of SAFe and VersionOne provided the best opportunity to increase the throughput in IT, improve the quality of their software, and increase collaboration with their business partners. In addition to extensive SAFe support, VersionOne was selected for its outstanding customer service, and ease of use to help their teams with the transition.
“The combination of SAFe and VersionOne provided the best opportunity to help us reach our business goals.”
— Kel Koenig, Release Train Engineer, Dean Health Plan
Six months after transitioning to SAFe and VersionOne, Dean Health Plan had increased collaboration across the organization. Business partners began participating in program increment planning and sprint planning, teams started receiving more feedback during biweekly sprint demos, and the IT teams began collaborating more both within and across teams.
“The VersionOne partnership has been very helpful with our SAFe implementation. The company has always been very responsive to any questions we have on the product,” said Koenig. “We’ve also been working with one of VersionOne’s partners, ICON Consulting, to help us with the enterprise transformation that’s been going on at Dean Health Plan.”
Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.
Are you measuring the value, risk, and quality flowing through your DevOps pipelines? Here is a value-based approach to measuring DevOps performance that will help your organization better evaluate the effectiveness of its DevOps initiatives.
As organizations become increasingly value-stream conscious, new metrics and insights are required to continuously streamline the flow of value through DevOps. Merging data generated during DevOps with the critical business data generated earlier in the software lifecycle can produce new and powerful insights that can help dramatically improve how DevOps teams create, validate, and deliver business value.
1- DevOps Flow
DevOps flow metrics describe the ease with which potential value, in the form of backlog items, flows through each DevOps "delivery phase."
These DevOps metrics identify bottlenecks in the value stream and highlight the amount of time value spends in non-value adding “wait states” at any point between developers and end users. Leveraging flow metrics can help organizations eliminate waste and accelerate time to value by decreasing the amount of time backlog items spend moving between developers and end users.
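To make the idea concrete, here is a minimal sketch of how a flow metric like "time in wait states" might be computed from a backlog item's state-transition history. The phase names, data shape, and wait-state list are illustrative assumptions, not any specific tool's schema:

```python
from datetime import datetime

# Hypothetical state-transition log for one backlog item:
# (phase, entered_at). The phase names are assumptions for illustration.
transitions = [
    ("committed",       datetime(2024, 1, 2, 9, 0)),
    ("built",           datetime(2024, 1, 2, 9, 30)),
    ("awaiting-test",   datetime(2024, 1, 2, 10, 0)),   # wait state
    ("tested",          datetime(2024, 1, 4, 16, 0)),
    ("awaiting-deploy", datetime(2024, 1, 4, 16, 30)),  # wait state
    ("deployed",        datetime(2024, 1, 8, 11, 0)),
]

WAIT_STATES = {"awaiting-test", "awaiting-deploy"}

def phase_durations(transitions):
    """Hours spent in each phase, measured to the next transition."""
    durations = {}
    for (phase, start), (_, end) in zip(transitions, transitions[1:]):
        hours = (end - start).total_seconds() / 3600
        durations[phase] = durations.get(phase, 0.0) + hours
    return durations

durations = phase_durations(transitions)
wait_hours = sum(h for p, h in durations.items() if p in WAIT_STATES)
total_hours = sum(durations.values())
flow_efficiency = 100 * (total_hours - wait_hours) / total_hours
```

Even in this toy example, the item spends nearly all of its elapsed time idle between phases, which is exactly the kind of waste flow metrics are meant to expose.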
2- Deployment Risk
Deployment risk metrics objectively describe the relative risk of any incremental deployment compared to previous deployments.
These DevOps metrics analyze dependencies, detect technical anomalies, and identify changes to fragile code, as well as a whole slew of additional proven factors that drive deployment risk. Leveraging risk metrics can help organizations prioritize testing activities and identify costly defects before they are deployed, while improving audit and compliance data to support quicker deployment cycles.
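A relative risk score of this kind might be sketched as follows. The inputs (change size, touches to known-fragile files) and the weights are illustrative assumptions only; a real model would calibrate against historical deployment outcomes:

```python
def risk_score(deploy, history, fragile_files):
    """Relative risk of a candidate deployment vs. the historical average.
    deploy/history entries: {"files": set, "loc_changed": int}.
    The 0.6/0.4 weights are illustrative, not calibrated."""
    avg_loc = sum(d["loc_changed"] for d in history) / len(history)
    size_factor = deploy["loc_changed"] / avg_loc          # churn vs. baseline
    fragile_hits = len(deploy["files"] & fragile_files)    # touches to fragile code
    return round(0.6 * size_factor + 0.4 * fragile_hits, 2)

history = [{"files": {"a.py"}, "loc_changed": 100},
           {"files": {"b.py"}, "loc_changed": 300}]
candidate = {"files": {"billing.py", "a.py"}, "loc_changed": 400}
score = risk_score(candidate, history, fragile_files={"billing.py"})
```

The point is not the formula itself but the comparison: a deployment twice the usual size that also touches fragile code scores well above the baseline, flagging it for extra testing before release.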
3- Code Quality
Code quality metrics help measure the overall quality of code over time as well as the effectiveness of testing activities — manual or automatic.
These DevOps metrics highlight known defects in any potential deployment, document the ratio of new effort vs. repair effort for any given release, and connect defects with the underlying source code to support “cluster reporting,” as well as other metrics that precisely track relationships between defects and source code. Better code quality metrics help businesses maintain high levels of customer satisfaction and help the organization strike a comfortable balance between speed and quality.
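The "new effort vs. repair effort" ratio mentioned above can be sketched very simply. The work-item shape and point values here are assumptions for illustration:

```python
def effort_split(work_items):
    """Ratio of new-feature effort to repair effort in a release.
    work_items: list of {"type": "story"|"defect", "points": int}.
    The item shape is an assumption for illustration."""
    new = sum(w["points"] for w in work_items if w["type"] == "story")
    repair = sum(w["points"] for w in work_items if w["type"] == "defect")
    total = new + repair
    return {"new_pct": 100 * new / total, "repair_pct": 100 * repair / total}

release = [{"type": "story", "points": 21}, {"type": "story", "points": 13},
           {"type": "defect", "points": 5}, {"type": "defect", "points": 1}]
split = effort_split(release)
```

Tracked release over release, a rising repair percentage is an early signal that speed is being bought at the expense of quality.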
*Keep an eye out for our upcoming articles that will dive into the specific value-based metrics you can use to measure your DevOps flow, deployment risk, and code quality.
Why Your DevOps Metrics Must Be Value-Aware
As software development organizations focus on increasing enterprise agility, the importance of software value streams inevitably bubbles to the forefront. Software value streams describe the flow of business value, in the form of new software, from idea all the way to the end user.
In order to measure DevOps performance and enterprise agility, you need metrics that explicitly describe the inner workings of your value streams between developers and end users.
For more information about value streams, check out my recent video interview with Dean Leffingwell, co-founder and chief methodologist at Scaled Agile, Inc.
The three categories of DevOps Performance Measurement — DevOps flow, deployment risk, and code quality — provide end-to-end visibility of your entire value stream. It’s important to note that none of the metrics described above can be derived using only data generated from DevOps. To produce these metrics, DevOps must be tightly integrated into the overall value stream, utilizing a single unified and correlated data model.
The Problem with Traditional DevOps Metrics
Each week I talk to enterprise leaders who share frustration about the lack of actionable DevOps metrics. To be sure, DevOps tools can generate tons of data and hundreds of novel metrics. Yet traditional metrics provide limited strategic insight for one critically important reason:
DevOps pipelines are not the start of the software value stream; in most cases they are really just the last mile. Worse, these pipelines are profoundly unaware of the underlying business value, in the form of new features and fixes, flowing through them. The result is a crippling misalignment between DevOps and the value-centric data generated by the business upstream.
Until these two distinct segments of the enterprise value stream are joined together into a single end-to-end flow with correlated data, DevOps metrics will continue to provide very little strategic value to the business. To help illustrate this point, I’ve assembled a list of standard DevOps performance metrics based on direct customer feedback and my own real-world experience. If you are familiar with DevOps, you’ve probably already dealt with many of these metrics at one point or another.
Outdated DevOps Metrics
Commits per day or period
Commits per developer per day or period
Number of unique code contributors
Change volume — ratio changed vs. static code
Builds per day or period
Average build duration
Average time to deploy
Percentage of failed deployments
Ratio of development vs. test effort (time, cost, headcount, etc.)
Requirements/test coverage ratio
Total number of automated tests
Ratio of manual tests vs. automated tests
New defect/ticket volume or rate
Defect/ticket resolution rate
Mean time to discover defects
Mean time to restore/repair defects
Broken build rate
Mean time to restore/repair broken build
Number of broken build commits by developer
Note: For this post, I have intentionally omitted system performance and feedback metrics such as system availability, usage, and response time.
5 Shortcomings of Traditional DevOps Metrics
While the metrics above can be extremely helpful to DevOps teams, especially when adopting a “whole system” perspective, they can also have some major shortcomings:
When viewed independently, they can easily miscommunicate problems and promote actions that negatively impact the software value stream.
They do little to promote and extend enterprise agility.
They provide few actionable insights back to the business.
They make no attempt to describe the business value that is constantly flowing through DevOps.
They don’t describe value flow, code quality, or deployment risk.
You can easily collect endless streams of information about source code changes, build/CI executions, manual and automated testing activities, virtual infrastructure being built and torn down, artifacts as they are deployed into environments, and on and on.
The problem with all this data is that it is rarely correlated back to the flow of business value. For example, your deployment automation tool can probably tell you that it just successfully deployed some binary code into some environment, but a much more important question is what backlog items did it just deploy and what “net-new” value was injected into the target environment as a result.
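Once deployments are correlated back to backlog items, answering "what net-new value was injected?" becomes a simple set difference. This sketch assumes the correlation work (binaries back to commits, commits back to stories) has already been done:

```python
def net_new_value(deployed_items, environment_items):
    """Backlog items newly introduced to an environment by a deployment.
    Both arguments are sets of backlog item IDs, assumed already derived
    by correlating deployed binaries back to commits and stories."""
    return deployed_items - environment_items

in_build = {"S-1042", "S-1050", "D-881"}   # items carried by the new build
already_live = {"S-1042"}                  # items already in the environment
injected = net_new_value(in_build, already_live)
```

The deployment tool alone can only say "a binary was deployed"; with correlated data, the same event reads as "one new story and one defect fix just reached this environment."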
At the end of the day, the two primary objectives of DevOps within any organization are fairly basic:
DevOps should streamline the flow of business value, in the form of incremental software enhancements, between developers and end users.
DevOps should facilitate fast feedback to further support business decision making.
DevOps relies on a combination of people, process, and tools to accomplish these primary objectives. A quick scan of the metrics above makes it clear that they tell the business very little about how well DevOps is “performing” against this high-level mission.
Metrics are fed by data, and you can’t produce meaningful metrics without meaningful data. As enterprise organizations increasingly focus on streamlining their software value streams end-to-end, the disconnect between value creation and value delivery is short-circuiting our ability to provide metrics that really matter to the business. Value-based performance metrics allow organizations to measure DevOps in the context of its ability to build, validate, and deploy potential business value.
I hope this has encouraged you to take a deeper look at how your organization is tracking the value that is being delivered to customers. These three value-based DevOps performance measures are just the tip of the iceberg, so if you want to learn more, check out the first article in this series, How Measuring DevOps Performance Increases Enterprise Agility.
Do you know exactly how much value you delivered to your customers last week, month, or quarter? Can you describe, with precision, the cumulative business value contained within each incremental deployment? Here’s how you can start measuring DevOps performance to gain end-to-end visibility from concept to cash and why you can no longer afford not to.
Why You Should Measure DevOps Performance
Here are 10 reasons you can’t afford not to measure DevOps performance. Measuring the performance of software delivery enables you to:
1- Monitor the flow of business value through delivery
2- Identify and remove bottlenecks and impediments
3- Determine where business value spends time and measure improvement
4- Measure the improvement and reduction of deployment batch size
5- Identify potentially risky packages, releases, and deployments
6- Automate delivery compliance reporting
7- Perform root-cause and post-mortem analysis in real time
8- Quantify delivery performance improvement
9- Prioritize DevOps investments more intelligently
10- Validate the ROI of software initiatives
DevOps Performance Measurement (DPM) allows you to track, measure, and analyze the business value flowing through delivery pipelines. Starting with a single commit in a source code repository, DPM methodically traces the granular bits that collectively represent user stories, defects, and features as they progress through delivery streams until finally they can be consumed by your customers.
Having insight into how business value is flowing through delivery empowers you to reduce the risk of deployments, automate compliance, optimize bottlenecks, determine the root cause of failures, and make more intelligent decisions about portfolio investments.
How Do You Measure DevOps Performance?
Simply put, you measure DevOps performance by:
1- Integrating the data generated from software delivery’s manual processes and automated tools
2- Mapping the software delivery data collected across a unified timeline
3- Correlating each bit of the software delivery data back to the originating user stories or defects
This includes information that describes source code commits, the continuous integration process, manual and automated testing, artifact deployment, manual approvals, and any other step required to advance value to end users.
Unfortunately, gathering this data is not enough. This highly fragmented data must be consolidated into a unified timeline and tightly correlated back to the underlying business value. This final step of associating each bit of data back to a value driver is a critical step — without correlation, you simply cannot measure the end-to-end flow of value.
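One common way to bootstrap this correlation (an assumed convention here, not a universal one) is to reference the backlog item ID in each commit message; correlation then becomes a join from pipeline events to commits to backlog items. The ID format and data shapes below are illustrative:

```python
import re

# Hypothetical convention: commit messages reference backlog item IDs,
# e.g. "S-####" for stories and "D-####" for defects.
STORY_ID = re.compile(r"\b([SD]-\d+)\b")

commits = [
    {"sha": "a1f9", "message": "S-1042 add claims export endpoint"},
    {"sha": "b2c0", "message": "D-881 fix null pointer in export"},
    {"sha": "c3d1", "message": "S-1042 wire export into nav"},
]

def correlate(commits):
    """Map each backlog item ID to the commits that implement it."""
    by_item = {}
    for c in commits:
        for item_id in STORY_ID.findall(c["message"]):
            by_item.setdefault(item_id, []).append(c["sha"])
    return by_item

deployed_value = correlate(commits)
```

With this mapping in place, every downstream event (a build, a test run, a deployment) that references a commit can be traced back to the user story or defect it advances, which is the correlation step the paragraph above calls critical.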
Until now, the DevOps machines we’ve been building have been profoundly “value agnostic.” This lack of value awareness has short-circuited our ability to effectively measure business value flow. To date, DevOps measurements have mostly focused on operational metrics like build frequency and deployment speed.
Not surprisingly, these kinds of metrics have little importance to executives and stakeholders beyond those directly responsible for software delivery functions. Tracking the flow of business value has quickly become essential to increasing the strategic importance of DevOps and raising the overall level of enterprise agility.
Increasing Enterprise Agility with End-to-End Visibility
With the wide popularity and adoption of agile, enterprises have gotten much better at tracking value through strategic planning and development. Strategic initiatives and themes are prioritized and then decomposed into smaller chunks: epics, features, and finally stories and defects. Teams can track the status of user stories, and we now have real insight into how value is progressing through the first half of our value streams.
However, once those stories and defects get converted into binary code, the value stream goes dark, creating a black hole into which the organization can’t see. By tracking the user story as it is recomposed into artifacts, packages, releases, and deployments, you extend the understanding of your value streams through delivery, thus providing true end-to-end visibility across value creation and value delivery.
End-to-end visibility is the ability to track, measure, and visualize the flow of business value from the initial idea until it is delivered into your customers’ hands.
View Your Delivery Value Stream with VersionOne Delivery at a Glance
The progress we’ve made in tracking business value through strategy and development is significant, but it is not enough. Until you fully integrate delivery into your software lifecycle management, you are flying blind. If you are only as strong as your weakest link and you have zero visibility into delivery, then you have no idea how strong you are or how you can improve.
Until you fully incorporate delivery into your value stream and measure the flow of value from strategy through development and delivery, you can never be sure where your biggest bottleneck is or the best way to increase enterprise agility.
I hope this has encouraged you to take a deeper look at how much visibility your organization has into the flow of business value through delivery. To truly increase your enterprise’s agility, you must have full end-to-end visibility and that requires building a strategy around DevOps Performance Measurement (DPM).