How Unified Software Development and Delivery Makes the Vision of DevOps a Reality

Are you finding that DevOps is more vision than reality? Here’s how you can unify the systems that DevOps workflows depend upon to help make your DevOps vision a reality.

DevOps Can Be More Vision Than Reality

The DevOps movement has provided organizations building software with a vision of increased deployment frequency, improved product quality and shorter mean time to recovery, gained from improved collaboration and automation.

While propagating that vision has been a success, executing against it often remains challenging – especially in the enterprise. Ultimately the DevOps movement seeks to tightly unify Dev and Ops workflows, but so far two systemic barriers have kept these functions from becoming truly unified.

2 Barriers to Unifying Dev and Ops

I believe successfully unifying plan, develop, validate, deploy and run workflows is still challenging for two fundamental reasons:

  1. Plan and develop work items (features, fixes, stories, etc.) are not directly linked to operational outputs (builds, artifacts, environments, etc.)
  2. Fragmented automation tools make orchestration difficult and create many pockets of siloed data.

1. Development Workitems Are Not Directly Linked to Operational Outputs

In any software delivery process, there is an inherent disconnect between development workitems and delivery outputs. The image above highlights a common pattern that organizations adopting DevOps face regardless of their level of DevOps maturity. This platform disconnect between functional workitems and delivery outputs makes it very difficult to truly unify development and operations.

Starting with the green box on the left, you have a simple representation of the agile development process. The main units of flow moving through the development organization’s storyboards have traditionally been workitems such as features, fixes, stories, epics, etc… However, once these development initiatives get converted into builds or artifacts and deployed into environments, the linkage gets muddy. At that point, “release” or “deployment” units of flow are only loosely affiliated with their corresponding workitems back in the agile storyboard on the left.

Feature attributes such as cycle time and current status can be tracked accurately while moving within the context of the development storyboard, but manual updates to that data are required during downstream delivery. This creates a very weak understanding of the real-time flow of value once you get beyond the planning tool and into the downstream and more “operational” software delivery process.

According to a recent DevOps Survey conducted by VersionOne, more than 87 percent of respondents indicated that multiple systems are required to manually cross-reference features and fixes with their corresponding builds, artifacts and environments. This problem then gets magnified as functional changes “queue up” in later stage environments between release events. This lack of automated manifest reporting makes it increasingly difficult to express with certainty which workitems are included within specific artifacts and deployed into specific environments at any given point in time.
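To make the cross-referencing problem concrete, here is a minimal sketch (hypothetical data and names throughout) of the kind of automated manifest lookup that is missing: if builds and deployments were tagged with their workitem IDs as they were created, answering “which workitems are deployed in this environment?” would be a simple query rather than a manual reconciliation exercise.

```python
# Hypothetical manifest records; in practice these would be emitted by the
# build and deployment automation rather than hand-maintained.
builds = {
    "app-1.4.2+build.88": {"workitems": ["S-3041", "D-1177"]},   # story, defect
    "app-1.4.3+build.91": {"workitems": ["S-3055", "S-3060"]},
}

deployments = [
    {"environment": "qa",   "build": "app-1.4.3+build.91"},
    {"environment": "prod", "build": "app-1.4.2+build.88"},
]

def workitems_in_environment(env: str) -> set[str]:
    """Answer 'which features and fixes are deployed in this environment?'"""
    return {
        item
        for d in deployments if d["environment"] == env
        for item in builds[d["build"]]["workitems"]
    }

print(workitems_in_environment("prod"))   # {'S-3041', 'D-1177'} (order may vary)
```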

Here are a few questions that are typically difficult to answer with absolute certainty:

  • Which features and fixes are included in this particular build or artifact?
  • Which workitems are currently deployed in each environment?
  • What is the real-time status and location of a given feature as it moves through the delivery pipeline?

It will continue to be difficult for all stakeholders across the end-to-end delivery pipeline to collaborate at the highest level if Dev and Ops platforms are not truly unified. Building DevOps maturity mandates a tight linkage between functional workitems and corresponding delivery outputs to streamline the flow of value and simplify cross-functional collaboration.

2. Automation Processes and Tools Are Fragmented

A clear and positive outcome of the DevOps movement is the emergence of a plethora of point process automation tools. These tools have been important enablers of DevOps practices and have dramatically reduced the amount of time required to validate, deliver and manage new software. However, the primary data models of these DevOps automation tools are wholly unaware of concepts such as features and fixes. Since these workitems represent the actual “content” flowing through the automation, visibility and traceability at the feature/fix level is critical to driving efficiency in a DevOps setting.

The image above depicts the fragmented delivery environment that frustrates our ability to link delivery outputs with functional workitems. This graphic was shared with me recently by an organization trying to enhance its ability to track the flow of value, in real time, through its delivery pipelines. If DevOps is a priority at your organization, this example is probably similar to what you have now or what you will have in the not-too-distant future.

As this very busy diagram indicates, the DevOps automation tools we depend upon to move value from the initial commit all the way out to production are continuously generating important audit, test and deployment data at every stop across the delivery pipelines. However, this data is often under-leveraged and buried deep inside tools completely unaware of the features and fixes flowing through them.

Because of this fragmentation and lack of context, it is very difficult to provide critical status and audit data back to DevOps stakeholders. Without a unified development and delivery platform, correlating data generated through delivery pipelines back to specific features and fixes will continue to be a largely manual, error-prone and time-consuming process.

4 Costs of Dev and Ops Not Sharing a Unified Platform

The cost of development and delivery not being unified is a missed opportunity. While small and incremental gains toward end-to-end unification have yielded progress, the reality is that most enterprise software development organizations are still struggling to improve:

1. Value Stream Efficiency

Because of the units-of-flow problem, stakeholders don’t have automated visibility into the status and/or deployed location of the features and fixes flowing through a delivery pipeline. As a result, manual effort is required to perform continuous business-to-operational cross-reference reporting and analysis, which introduces material and unnecessary overhead into the software delivery value stream.

2. Opportunities for Continuous Improvement

The plethora of fragmented point automation generates siloed data that is difficult to access and correlate back to a discrete set of features and fixes without significant human intervention. This fragmentation makes it difficult to collect meaningful statistics that can identify bottlenecks across the entire software delivery chain. This data is the crucial fuel required to drive the kind of continuous process improvements needed to materially increase delivery frequency and shorten time to market.

3. Software Quality & Failure Rate of New Releases

The lack of end-to-end visibility into the entire value stream makes it difficult to know with absolute precision which functional changes have been included in any given build or artifact. This reconciliation process is almost always manual and is susceptible to errors that increase the odds of deploying unstable or incomplete “work-in-progress” into critical environments.

4. Mean Time to Recovery & Slower Analysis

The lack of detailed end-to-end delivery accounting and audit history at the business level makes it difficult to find root cause and issue repairs for defects once they are uncovered. Additionally, this uncorrelated data makes it difficult to perform the detailed analysis needed to identify the system or process failures that allowed critical production defects to be introduced in the first place.

What Is a Unified Software Delivery Platform?

In order to make the vision of DevOps a reality, a truly unified platform that supports the end-to-end delivery stream – from idea to production – is a primary requirement. A crucial capability for achieving platform unification is the ability to link together all of the data generated throughout the delivery process. If data can be gathered and correlated at the time of creation, a comprehensive dashboard can be created that supports real-time collaboration across stakeholders.

Most organizations that have multiple agile teams are already using some sort of agile lifecycle management platform to manage priorities and coordinate development activities. By reimagining our storyboards as development, validation, and deployment orchestration hubs, we can unify the planning and development platforms with the infrastructure required to support downstream automation – without ripping out or replacing any of the tools and technology you’ve already implemented.

By leveraging centralized pipeline orchestration, you can better track work items as they move from one stage to the next in your storyboard. Because the orchestration layer understands automation in the context of the features and fixes flowing through it, stories can now be directly associated with their artifacts, builds, config files or deployments, linking these two traditionally decoupled platforms.

When your storyboard is linked with all the DevOps automation tools that move changes from the first commit all the way out to production, you can begin to capture and associate the important audit, test and deployment data generated at each and every point within your delivery pipelines. This is the type of unified software delivery platform that can help make the vision of DevOps a reality.
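As an illustration only (a hypothetical event shape and ID format, not a description of any particular product’s API), the sketch below shows how an orchestration layer might capture that association at the moment a build completes, by pulling workitem IDs out of commit messages and attaching them to the artifact record.

```python
import re

# Hypothetical pipeline event produced by CI automation.
event = {
    "type": "build_completed",
    "artifact": "app-1.4.3+build.91",
    "commits": [
        "S-3055 add saved-search filters",
        "S-3060 fix timeout in export job",
        "chore: bump dependencies",
    ],
}

WORKITEM_ID = re.compile(r"\b([SD]-\d+)\b")   # assumed ID format, e.g. S-3055

def workitems_from_commits(commit_messages: list[str]) -> list[str]:
    """Collect the workitem IDs referenced by the commits in a build."""
    found: list[str] = []
    for message in commit_messages:
        found.extend(WORKITEM_ID.findall(message))
    return sorted(set(found))

# The orchestration layer records the association as the data is created,
# so later audit and manifest queries need no manual cross-referencing.
manifest_entry = {
    "artifact": event["artifact"],
    "workitems": workitems_from_commits(event["commits"]),
}
print(manifest_entry)
# {'artifact': 'app-1.4.3+build.91', 'workitems': ['S-3055', 'S-3060']}
```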

Here are a few characteristics of a Unified Software Development and Delivery Platform:

  • Unified DevOps repository that can support robust cross-referencing between business value (features/fixes) and operational objects (builds, artifacts, deployments).
  • Ability to visualize, measure and optimize the journey of features and fixes from idea all the way to production deployment.
  • Robust pipeline orchestration that leverages existing DevOps automation and eliminates or minimizes the need for manual handoffs.

The 5 Benefits of Unified Software Development and Delivery

1. Increased Collaboration Across All Disciplines

Product Owners, Project Managers, Developers, Testers, and Ops team members can more easily collaborate because business work items are linked to delivery outputs, providing visibility, traceability and clarity across the entire value stream.

2. Increased Automation and Streamlined Value Streams

The plethora of fragmented point DevOps automation tools is now orchestrated by the unified DevOps orchestration engine, reducing the need for human intervention.

3. Increased Deployment Frequency & Shorter Time to Market

With clear visibility of the entire value stream, it is much easier to make the continuous process improvements that increase delivery frequency and shorten time to market.

4. Improved Software Quality & Reduced Failure Rate for New Releases

The ability to automatically cross-reference any build or binary to the features and fixes included within it – with absolute precision – greatly reduces the chances of testing in the wrong environment or accidentally promoting work in progress or items with unmet dependencies. This capability results in higher release quality with less wasted manual effort.

5. Shorter Mean Time to Recovery & Faster Analysis

Unified audit and traceability throughout the entire software delivery process – from idea to production – will make it much easier to uncover issues prior to deployment. When defects do reach end users, post-mortem root cause analysis can occur in minutes instead of weeks, helping prevent issues from recurring.


The independent evolution of planning platforms, build automation, testing and release management tools has created a profound and systematic data division between Dev planning platforms and Ops automation. As long as these disconnects persist, achieving the key DevOps ideals of cross-functional collaboration and streamlined process flow will remain a challenge.

Unified Software Development and Delivery is the process of merging these two universes to provide a comprehensive, end-to-end value stream that documents the flow of business value from idea to production. The VersionOne® Continuum™ for DevOps solution is one example of this type of platform. For more information, visit


“When will it be done?”

Sometime last year, I started working with a Fortune 100 company on a large, distributed product development effort. There were many “refactoring opportunities” – a term a friend once used to describe my code. Like many large efforts spread across locations, this one had many constraints.

One day, towards the beginning of the engagement, we were pragmatically introducing agile practices and principles when one of the executives decided to pay us a visit. After a few friendly greetings, he walked up to me and said “So you’re the agile guy,” using a tone which sort of left me feeling like a suspect who has just been targeted and painted with lasers. He then asked the question in the forefront of his mind “When will it be done?”

For the first time, I suddenly realized the power of “It.” Without much thought, I quickly replied with the facts: “From what I know, no one has done a good job answering that to date.” Not knowing much about the project, but wanting to provide context, I followed up by saying, “Agile methods will help us be able to tell you what is done, which is the strongest evidence we might have as to when it will be done.”

From Project to Product

For years, I have been helping leaders understand how to use agile methods by reframing the discussion. In this case, I might defend this executive by guessing that “When will it be done?” was the only question he felt he could ask. It could be that he had no other questions in mind, or it could be that past progress data had been so weak or non-existent that all-or-nothing investigation was the path of least resistance with the best results to date. It could also be that the question comes from years of conditioning around asking “When will it be done?”

All-or-nothing thinking is deep in the ethos of many companies. It may be that this is merely an organizational, or industry, norm that is well established. If, like me, you’ve been in the game for a bit, there is an interesting and unnamed progression that contains the agile movement and provides a challenge for its future.

If the 1990s were the decade of the project (on time, on budget and within scope), then the 2000s could be viewed as the decade of process (or progress). The rebels who spawned the various methodologies later branded as “agile” were frustrated by a lack of real progress. You could think of this progress as moving from 60% of 100% in the 1990s to 100% of 20% in the 2000s.

This change is so much more important than “Who is agile and who is waterfall?” It allows investors to use the whole product (100% of x%) as a way to validate or invalidate their investment, and possibly to change their overall portfolio investments. Or put another way, it allows for a shift from “on budget” to “is valuable.”

What’s the next best investment?

Often overlooked and under-discussed, agile practices have provided a way to shift toward questioning investments based on incremental evidence of completion. Teams that earnestly embrace and practice agile methods often move toward progress as more of a constant. With less worry about “What will we get done?” the new, and more ambiguous, question becomes “What should we get done?”

Scan the figure from left to right again, and you’ll see a progression of certainty. As cycle times for learning decrease, in the form of iterative product increments, we are able to more quickly assess how wrong we were. Using an analogy, if your car is (or was) an unreliable piece of junk, you head out on a journey wondering if you will make it to your destination. If your car is a trusted delivery vehicle, you are freer to wonder about other questions like “Do we still want to go there?” or “How are the passengers doing?” or other more valuable, non-progress-based questions.

Refactoring Our Rhetoric

Changing the dialog from “When will it be done?” to “What is done?” provides an alternative question and new perspective. It challenges both investors (the executive) and producers (the teams) to shift towards validating products and user experiences that are “good enough.”

Concretely, let’s explore a few refactorings that surface when you make the switch to 100% of x%:

  • Planning for complete user experiences supports customer empathy as a guiding force
  • Validation over completion introduces a sort of test-driven product, which roots out waste
  • 100% of x% injects the idea of evaluating value returned for product increment investment

While there are many others, let’s explore these three, starting with customer empathy. Thinking in chunks of product, like user experiences, and the validation of each chunk tends to more quickly surface “the who” aspect of product development.

Customer Empathy: The product community in this example was building a game. Games provide a nice basis for validation because play is part of the product. Being overly certain about who might like what is a great way to build the wrong game. Simple tools like pragmatic personas now become powerful validators that can stop the building of the wrong thing simply by challenging the experience a player might have.

Incremental Validation: There are more companies than I’d like to admit who are working hard to build the wrong thing faster. Or put another way, they are so overly certain that they need to “get it done” that they fail to validate it until there is a ton of it in play. Moving away from “it” and toward incremental validation of a meaningful user experience helps learning happen sooner. It’s not mutually exclusive with agile practices, but learning from meaningful user experiences does not happen simply because you are working in sprints.

Iterative Evaluation: The best way to measure (evaluate) is to test in the market. This is easier for some products than others. For example, it’s easier to deploy and validate a mobile-ready web app than it is to do the same for a pacemaker. As these are obvious extremes, your product most likely sits on a continuum between the two. Asking yourself what you could do to slide toward faster market validation, sooner, is a strong, simple takeaway that you can reflect on immediately.

More pragmatically, when you shift toward 100% of 8% (as an example), you can then ask, “If the first 8% was a poor return, should we still do the other 92%?” Or, you might find that by simply asking how you are going to evaluate that first 8%, you step into a deeper level of early product validation thinking that is often missed when people over focus on “How much can we get done in this sprint?” or as was the case with the executive in my experience, staying stuck in the land of “When will it be done?”

But so many people are all about “It”?

After reading this, you’ll find that your awareness of “It” as a singular measure is more prevalent than you knew. Most common “Its” live in larger planning efforts where investors are not aware of the power of incremental validation, or in ecosystems where all the investors hear is agile speak instead of product speak or validation language.

If you are an executive, an influencer, or a big boss type, I challenge you to refactor the “its” you hear towards smaller chunks of meaningful investments. If the word smaller vexes you, then shift to an investment mindset: assuming that some investments pay more return than others, what is the right place to invest just enough to learn where you should invest next?

If deep in your brain you are still thinking about building software like buying bonds, you need to refactor that metaphor toward hedge fund trading, where a series of small failures are wildly overwhelmed by the large returns around them. If you knew what stocks to buy, you would. Since you don’t, you are forced to engage investments with a measure of certainty or an awareness of uncertainty and an eye toward measuring and adjusting based on the evidence and your experience.

If none of that works, buy a copy of Antifragile by Nassim Taleb. He seems to know more about agility than most coaches I know. I mean, look at the title: it contains both fragile and agile in one word.




Executive Visibility in Successful Agile Enterprises

A colleague recently asked me, “What if developers only got paid when features ship?”

“They’d only do the easy ones,” I replied, only half-joking.

But as I thought about it more, I asked myself, “What if nobody in the entire value stream – executives included – got paid until features ship?”

Now, that might sound like a far-fetched idea, until you realize that that is exactly the position your business is in – especially if you work in a product company.   Who pays for things they haven’t received?

So, what if that was the case in your company?

How much more collaborative would your organization be?

How many meetings would you NOT have?

And what would your executives care about?

My bet is that your executives wouldn’t worry about tracking anything that doesn’t help them know whether or not features are getting finished and delivered quickly enough.

This bet isn’t based on just an internal hunch.  Over the last couple of years, I’ve asked every executive I’ve spoken with, as well as individuals who have to regularly report to executives, just what it is that they really care about.

The consistency of the answers I’ve received is remarkable:

“Time to market”

“Speed to cash”

“System lead time”

They might be using slightly different terms, but they’re all saying the same thing.  They’re saying that the most important thing they can know is how long it takes from the time they realize the need for some capability to the time that capability gets into the hands of their customers.
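Put simply, system lead time is the elapsed time from when the need is recognized to when the capability reaches customers. A minimal sketch, using hypothetical workitem timestamps:

```python
from datetime import datetime
from statistics import median

# Hypothetical workitem history: when each need was captured vs. when it shipped.
workitems = [
    {"id": "S-3041", "requested": datetime(2015, 3, 2),  "delivered": datetime(2015, 5, 18)},
    {"id": "S-3055", "requested": datetime(2015, 4, 6),  "delivered": datetime(2015, 6, 1)},
    {"id": "D-1177", "requested": datetime(2015, 5, 11), "delivered": datetime(2015, 5, 29)},
]

# System lead time per workitem, in days, and the single summary number
# an executive view might lead with.
lead_times_in_days = [(w["delivered"] - w["requested"]).days for w in workitems]
print(lead_times_in_days)
print(median(lead_times_in_days))
```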

Why is this so, even though the “you get a paycheck only when features ship” policy isn’t in effect?

Well, we could point to the abundance of “lean business” awareness out there today.   Or we could talk about how, in today’s business climate, you’re either disrupting your competitors or they are disrupting you.  And we’d be right.

We could reason that small batch sizes and short lead times actually reduce costs and increase opportunities for revenue, along with providing the continual feedback that helps us make sure that we’re building the right things.  Again, we would have a strong case.

But if we step back and just think about metrics, we can see that the things we could measure, like internal cycle times, WIP defect trends, escaped defects, and low-level burnup and burndown rates, have one thing in common:  they all affect the lead time of your software development and delivery system.

That’s why, in my opinion, if you could only measure one thing, system lead time would be it.  And that’s why I believe I’m not hearing executives of successful agile enterprises asking for low-level metrics – at least not for their primary decision-making information.

Start High, Drill Down Only When Necessary

Executives need visibility into the measures that are relevant to their responsibility for business outcomes.  This visibility also needs to be easily accessible.

If, as more executives are telling us, the measures that best indicate organizational performance are those related to the speed with which they can deliver, then those measures are the ones that need to be at their fingertips.  If all is well with those, there probably won’t be a need to drill down further.

This doesn’t mean that lower-level measures aren’t valuable.  It’s in those measures that we often find clues to what needs to be tweaked to continually improve lead time.

But starting low requires triangulation and analysis, which is time-consuming and subject to misinterpretation.  Said another way, it’s expensive and confusing, and who wants that?

By focusing reporting at a high, outcome-oriented level, executives can concentrate on what’s immediately important to them.  If those higher-level measures start to trend negatively, then they can explore the underlying data in more detail.

Configuring your executive views with this high-to-low progression in mind will allow you to help your executives make better decisions more quickly.  And better, quicker decisions can make the difference between an organization that thrives and one that doesn’t.

Which kind of organization do you want to belong to?

Learn more about Scorecards and other executive visibility options in VersionOne.


Measuring Agile Success?!?#?

About six months ago, I wrote a blog post called Top 10 Tips for Measuring Agile Success, and the reality is that it wasn’t so much a set of tips as a blog about the top ten ways people responded to the VersionOne State of Agile survey and some related metrics that support them. Way before that blog was ever published, the question of how to measure agile success was a common one that I and many other agile coaches would receive when working with organizations and executives. Since the blog was published, I’ve had more questions and in some cases some rather odd reactions to the concept of measuring agile success. Some questions are very direct — “Which metrics really work?” Or, “Which metrics should be used at the various levels of the organization?” Then there are the reactions or questions like, “Aren’t you aware of the impact of metrics?” Or, the statement, “Suggesting the one way is ridiculous.” Or, the best reaction, “Dude, I hate metrics.”

Okay, I can accept all this and I get the confusion and general concern, and trust me — I share some of these sentiments. Instead of looking at the question from the standpoint of which metrics are best, let’s explore how we measure agile success and why it is important.

Let’s start with the “why”, and I think the primary “why” is obvious — the cost of change can be significant. There’s not only a tangible investment in training, coaching, reorganization, staff changes, and even re-engineering the physical environment, but there’s also the significant intangible cost associated with productivity loss due to teams reforming, working through the chaos, and emerging through the change usually with something that looks much different than what you started with. I don’t think I’ve been around a team or organization going through the change associated with adopting agile that hasn’t had staff turnover, fits-and-starts, and a brief time of general struggle both for the people and the software output as everyone comes up to speed. So, trying to understand the return or the offsetting value gained is an important reason to measure agile success. To that end, it’s not really measuring agile success; it is better stated as measuring the success of the process change investment that the organization is embarking upon or has recently spent six months enduring.

Another “why” for measuring agile success is to enable the PDCA loop. The PDCA loop (a.k.a. the Deming Circle or Plan-Do-Check-Act [Adjust]) is a core business and leadership practice and it is called out in all lean and agile approaches. The concept is simple — establish a goal, decide what you are going to do, get it done, inspect the results, make adjustments based on observations, and then do it all over again as you march to the goal — the essence of iterative development and continuous improvement. Measuring the organization’s progress and performance allows the inspection to occur; thus, you adapt and get better the next time around.

So, we need to ensure that the organizational change we’ve embarked on is making the positive impact we expect and a key part of ensuring this is measuring to enable continuous improvement.

How we measure our agile success is a bit more complex — mostly because there are two things to measure. First, we need to measure the adoption of agile principles, processes, and practices. Second, we need to measure how our organization is performing to assess the impact of changing to agile.

The approach to measuring agile process success is generally around leveraging agile assessments, which aim to identify where your organization is on an “agile maturity” spectrum. There are several long-established approaches that internal and external coaches use. The concept of measuring maturity is simple: conduct a self-assessment based on both quantitative and qualitative measures in several areas including team dynamics, team practices, requirements management, planning activities, and technical practices (just to name a few). For these measures to mean anything, you need to start with a baseline (how mature are you today?) and then select a reasonable cadence to re-assess on your road to … more maturity? There are some very useful existing maturity assessments out there, including Agility Health, the classic Nokia Test, and about 20+ others listed on Ben Linders’ blog.

Agile assessments do have some aspects of measuring impact; however, the focus is generally isolated to certain areas and/or used to reflect the success back onto the process. Measuring agile success from the standpoint of impact on the organization should be more focused on the Moneyball metrics of the business. Measuring impact is sometimes much more difficult because it can be hard to draw a direct correlation between the agile delivery metrics and the traditional business metrics. It is also difficult because of the lack of understanding of the agile delivery metrics. Making matters worse is how people sometimes focus on the wrong ones, which takes me back to the Moneyball reference. It’s important for organizations to select the right metrics to focus on and the right ones to tie together. As mentioned by Michael Mauboussin in his HBR article The True Measures of Success, leadership needs to understand the cause and effect of metrics. What this means is that metrics, if not selected correctly, can provide misdirection and result in misbehaviors — basically, people will make bad decisions and game the metric.

To give you an example of a [not so solid] agile success impact metric, let’s look at a common metric that people often argue about – sales revenue tied to the delivery organization’s velocity based on story points (e.g. revenue / velocity). The first challenge is the use of the terms story points [and velocity]; you tend to lose or confuse people not familiar with the concepts and, if they are familiar, an argument about estimation generally ensues and people often change their point measuring stick. To avoid this challenge, go with safer, lean metrics – simply put, the count of stories or things (great advice from Jeff Morgan – @chzy). The next challenge with this metric is that it may be too generalized and not really lead to better results. There may be better goal-focused measures, such as publication mentions after a release that lead to an increase in the number of product trials, or possibly a goal of reduced support tickets, which leads to improvements in customer retention or renewals. All of these are good, but alone they don’t necessarily provide an ability to measure agile success. To help assess your agile success, correlate the impact metrics with the lean, agile metric — the number of stories delivered during the same period. For example, use the number of stories delivered to normalize product revenue, number of web visitors, number of trials, and the number of support calls. Watch and assess these trends over six months and see the impacts.
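As a rough sketch of that normalization idea (hypothetical numbers and one possible layout, not a prescribed formula), you could divide each business measure by the stories delivered in the same period and watch the trend:

```python
# Hypothetical quarterly figures; the point is the trend, not the absolute values.
quarters = [
    {"name": "Q1", "stories_delivered": 40, "trials": 320, "support_tickets": 210},
    {"name": "Q2", "stories_delivered": 55, "trials": 495, "support_tickets": 220},
    {"name": "Q3", "stories_delivered": 60, "trials": 660, "support_tickets": 180},
]

for q in quarters:
    per_story = {
        "trials_per_story": round(q["trials"] / q["stories_delivered"], 1),
        "tickets_per_story": round(q["support_tickets"] / q["stories_delivered"], 1),
    }
    print(q["name"], per_story)

# Rising trials-per-story and falling tickets-per-story over several quarters
# suggest the process change is having the impact you hoped for.
```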

I recently read a book called RESOLVED: 13 Resolutions for LIFE by Orrin Woodward. Although the book is aimed at leadership development, one of the resolutions talks about establishing and maintaining a scoreboard. The idea is that we should have a set of metrics that we constantly visit that help to power our PDCA loop. This is a long-running practice in business, and if you don’t have one already, I suggest you establish a scoreboard that helps you measure your agile success. It should include metrics from your process adoption assessment as well as your organization’s agile-adapted Moneyball metrics. In agile we often talk about big, visible charts; your agile success scorecard should be one. Share the results of your agile journey and the impact it is having on your organization, and help people understand what the metrics mean and what decisions or actions should be made based on what the metrics indicate. There will be times when things don’t look good, but done right, your agile success scorecard should help spur and inspire an environment of continuous improvement that embraces the agile principles and practices you’ve embarked on implementing.

Although I don’t call out any specific examples of agile success scorecards, it would be great if you would share your examples, metrics you like, or resources that can help others.

There are many worthy reads on this topic, but a couple more that I like are Agile Fluency, established by Diana Larsen and James Shore, as well as this article by Sean McHugh, How To Not Destroy your Agile Teams with Metrics.


Frameworks for large agile projects?

Things are getting more and more interesting with the use of agile in larger and larger projects. We now have a number of frameworks that we can use, such as LeSS, Scaled Agile Framework® (SAFe®), DAD and Scrum at Scale. These frameworks can all be investigated with a few clicks of your mouse. And in true internet style, a number of people are telling us that these frameworks are bad – that they are prescriptive, or that they lack flexibility. If you look, you can find the flame wars with messages such as:

  • Frameworks are bad and you should simply make up your own approach
  • My framework is better than your framework
  • Frameworks are not agile
  • And indeed many others

I have a different view. These frameworks contain many years of experience from people who have been working in the software industry and have rich experience to draw on. They have recorded their ideas and given us information about things that work. They are uncovering better ways of developing software by doing it and helping others do it. Where have I heard that before? Of course, if you think that you have more experience than all these people combined, then you should go your own way. But if that is true, please tell us what your experiences are!

So which framework matches your needs best? Now that really is something that only you can answer, although you can take advice. All of the frameworks have something to recommend them, and while they are all built on what turns out to be very similar foundations, they do sometimes assume a different starting point. Some are for people who are more experienced, while some offer more structure to help you get started.

All of the frameworks include the principle of continuous improvement, meaning that they should all be seen as a starting point. As you learn, you will apply your lessons through inspect and adapt, or the familiar Deming cycle of PDCA. You own the framework that you adopt!

The warning is that frameworks are not a software development silver bullet. They will need investment and effort to establish and grow. How to design your framework, how to build it, and how to get the people ready are really key questions. Are you at a starting point for a framework or do you need to spend more time establishing your basic agile teams, educating the people or exploring your lean process?

Experience shows that framework implementations which are nurtured and supported exceed expectations, while those that are established in the hopes of a quick and easy miracle deliver as expected.

Good luck!

Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.


How ADLM Gobbles Up DevOps

In the 1980’s and 90’s, the business software landscape was dominated by a diverse list of cutting-edge companies such as Best Software, i2, Brock Control Systems, Mapics, Ross Systems, Infinium, FBO Systems, Manugistics and MSA (of course I could go on and on). Now long gone, these and hundreds more really great companies have been gobbled up or rendered obsolete by the rising class of ERP giants. In this article I’ll explain why history will repeat itself leading to the extinction of most DevOps tools as leading ADLM platforms continue to assert their dominance across the diverse software development and delivery automation ecosystem.

History Informs Our Future and The Evolution of ERP

You already know that for the past twenty years, most companies have leveraged some form of ERP system to manage virtually every core business process. One benefit of this tightly integrated solution is a powerful inter-functional data flow that enables corporate agility and provides the highest level of visibility. This super-integrated architectural model has become the standard adopted by virtually every enterprise around the globe. What you may not know is that today’s “ERP model” is the result of four distinct evolutionary generations that I believe help predict the next major evolution of automated software delivery.

Phase 1 – Automation
Enterprise Information Systems (EIS) – In the 1960’s, early automation systems were developed to support important individual business functions such as general ledger, inventory management, billing, payroll, etc. These systems were architected completely independently of each other and added little value to the enterprise beyond their narrow scope.

Phase 2 – Core Data Model
Manufacturing Resource Planning (MRP) – Then in the 1970’s, the idea of a master production schedule was devised so that a few of these isolated systems could gain greater visibility into future inventory and product requirements of the organization. The big idea behind the master production schedule was the creation of an open data model that could be leveraged by other systems impacted by the manufacturing production schedule.

Phase 3 – Expansion
Manufacturing Resource Planning II (MRP II) – In the 1980’s, software vendors began to build and sell off-the-shelf packages that promised “best of breed” process design. These solutions provided tightly integrated versions of the key manufacturing processes required to produce products.

Phase 4 – Business Process Domination
Enterprise Resource Planning (ERP) – Finally, in the 1990’s, business software vendors expanded well beyond the manufacturing scope by creating highly-integrated solutions that now cover just about every standard business function imaginable. These systems leverage a unified data model to dramatically improve visibility, consistency, accuracy and planning capabilities enterprise-wide.

What Does ERP Have To Do With Automated Software Delivery?

The evolution of ERP has taught us how natural pressures forced the creation of a unified and comprehensive “business data model” spanning the entire enterprise. Those software vendors with enough influence to dictate that data model were the ultimate winners in the ERP space.

In the very first generation of ERP (EIS), software was leveraged to deliver a high degree of automation to many business processes that were previously manual. Over time, it became clear that these newly automated processes were interconnected and the evolution towards a tightly integrated and unified data model was underway. The business objective that fueled each successive generation outlined above was the need to design more efficient business processes that increased organizational visibility and agility.

As Marc Andreessen famously said in 2011, “…more and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense.” (Why Software Is Eating The World). That statement resonates even stronger four years later. Today, virtually every corporate organization is seeing the familiar pressure to deliver software more efficiently and reliably. If history is indeed our guide, any highly fragmented and/or isolated process required to deliver incremental software change will face mounting pressure to be merged into an integrated end-to-end enterprise-grade platform that can deliver improved cross-functional visibility with even greater efficiency.

Why Application Delivery Lifecycle Management Will Win

In my view, there are really only two broad solution categories in the realm of automated software design and delivery – Application Development Lifecycle Management (ADLM) and the catch-all term DevOps (which I’ve hijacked here to describe any other type of process automation that assists the software delivery process).

DevOps tools are often narrow point solutions that have sprouted from open source projects, in-house development or commercial vendors. Organizations rely heavily upon a diverse collection of these DevOps tools to help document, validate and automate a steady flow of software change along its path to end-users.

Here’s the problem DevOps tools are beginning to face: Like the EIS systems of the 60’s, fragmented DevOps tools have little or no visibility into the overall end-to-end process; however, they do generate lots of important data that is often locked away and isolated. This isolation creates a clear barrier to efficiency, visibility and agility across the software delivery process. Because of the limited function each individual DevOps tool performs, none have the gravitational pull required to define the larger data model. As comprehensive enterprise software delivery platforms emerge elsewhere, standard DevOps tools will face ever-increasing pressure to fold inside them.

Currently, Application Development Lifecycle Management (ADLM) solutions provide a platform to manage development projects, team resources and all manner of development activities. ADLM platforms also contain the “master production schedule” for every development initiative – past, present and future. The data contained within ADLM is now at the core of a quickly emerging software delivery data model and leading ADLM vendors are expanding their footprint well beyond traditional use cases. When it comes to ownership of this software delivery data model, I see no other solution category across the entire ecosystem with enough enterprise clout to pose a serious challenge to leading ADLM vendors.

Unified Software Delivery Platforms Are Already Emerging

Each of the five current ADLM leaders (according to Gartner’s most recent Magic Quadrant) is now racing to bring to market an enterprise software delivery platform that integrates many key DevOps capabilities.

Here’s my two cents on each…

The Giants in the space – IBM and Microsoft both have plenty of muscle and IP today. Clearly both are moving down the path toward a comprehensive software delivery platform. IBM acquired DevOps vendor urban{code} several years ago and is hard at work building its developerWorks platform. Seemingly every day, Microsoft is adding some kind of DevOps capability into its Visual Studio product suite. Still, I don’t see either vendor gaining much traction outside of their traditional (albeit very large) customer bases. Perhaps more importantly, neither seems to have bona fide credentials within the super-influential agile development community, and I believe this kind of street cred (at least for now) is a must-have to dominate this space.

Atlassian does enjoy wide support among the agile community and no doubt has the broadest adoption footprint of any of the current ADLM leaders. Atlassian is in a strong position to mount a serious threat. However, Atlassian’s core product (JIRA) is widely believed to lack heavyweight depth in the ADLM feature spectrum, and it is often implemented as a departmental or “team tool”. They’ll have to develop deeper strategic planning and multi-team project capabilities to beat the rivals.

This May, software giant Computer Associates announced a definitive agreement to purchase ADLM heavyweight Rally and its agile development platform. In its announcement, CA said it intends to leverage Rally’s capabilities to “complement and expand CA’s strengths in the areas of DevOps and cloud management”. With the crucial addition of Rally, CA is now in a strong position to assemble its diverse capabilities into a single unified and enterprise-caliber software delivery platform. Now… can they seamlessly integrate all of the pieces-parts into a cohesive solution with a unified data model? If so, how long will it take?

Finally, I believe VersionOne may have a slight edge over the other ADLM vendors in the race toward a unified software delivery platform. I may be a bit biased because of my direct involvement in a joint project currently underway – nonetheless, here are four reasons why they will absolutely be a dominant force to reckon with:

Vision: Robert Holler, VersionOne CEO, is clearly buying into the “enterprise software delivery platform” vision. He and his team have a well thought out strategy and they are actively executing against that strategy.

DevOps Automation: VersionOne has partnered with ClearCode Labs and both teams have been hard at work integrating ClearCode’s Continuous Delivery Automation framework into the VersionOne core product. This integration provides VersionOne the ability to orchestrate virtually any DevOps tool or platform and (just as importantly) incorporate all related data across VersionOne’s product suite to feed its quickly expanding data model.

JIRA Integration: VersionOne has just announced a tight integration into the JIRA platform. This integration will give them the ability to fold fragmented JIRA installations across the enterprise into the unified VersionOne platform providing a more strategic and enterprise-grade solution.

Availability: VersionOne’s automated delivery platform is available now and they are demonstrating their comprehensive solution to the eager agile community this week at the sold-out Agile2015 conference in Washington, DC.


The top 5 ADLM vendors are already well on their way toward developing enterprise-grade software delivery platforms that will consume many of the current “DevOps” automation solutions. Soon, development organizations will benefit from a comprehensive platform that can deliver increased efficiency, visibility and agility when compared to the heterogeneous solutions that have been cobbled together today.

About the Author

Dennis Ehle is a pioneer and thought leader in continuous delivery automation and agile delivery methodologies. Dennis is passionate about helping agile teams dramatically reduce the transaction cost associated with delivering incremental change. His company, ClearCode Labs, does just that by helping organizations continuously deliver high-quality software releases more frequently, merging proven methodologies with empowering tools and technology. Twitter: @DennisEhle

Article originally posted on


Agile 2015 Conference Highlights: Saluting Enterprise Agility

I am just returning from a fantastic week at the 2015 Agile Alliance Agile conference held from August 3-7 just outside Washington D.C. and wanted to share some highlights with those who were unable to attend. This conference attracts international interest and was attended by over 2,300 participants, including both experienced practitioners looking to refine their game and novices seeking to join in and reap the powerful benefits of this mainstream set of values and principles that we call “agile”.

As a title sponsor, VersionOne featured the latest innovations in its Enterprise Agile Platform to help enterprises succeed with scaling agile, our support for the Scaled Agile Framework® (SAFe®), and new capabilities such as TeamSync™ for JIRA.

The industry focus on DevOps continues, as do discussions on navigating barriers to change and scaling successfully.  VersionOne featured a unified DevOps solution showcasing demonstrations of the new ClearCode integration that enables an automated visual flow of change throughout the software cycle from discovery through final delivery.

The VersionOne theme, “Enterprise Agility: Revolutionizing How Teams at All Levels Work Together,” echoed well with the conference sessions and discussions focusing on scaling agile across enterprises. At VersionOne, we know that revolutionary change, change that really matters, can only be achieved by people working together at all levels. Conference sessions and experience reports discussed keys to successful transformations, including the importance of executive support and addressing the underlying culture and the soft skills needed to succeed. Conversations at the VersionOne booth included Dean Leffingwell, the creator of the Scaled Agile Framework® (SAFe®), sharing insights around scaling agile. Jeff Sutherland, Scrum co-originator, was also spotted sharing insights at the booth as well.


VersionOne toasted 10 years of the State of Agile™ survey, the industry’s longest running survey, by serving champagne during the Wednesday evening vendor show.  A very popular tribute, needless to say!  And if you have not done so yet, please take a few minutes to participate in the State of Agile survey for this year (and you might win an Apple watch).  Go to

VersionOne consultant, Susan Evans, gave an inspirational experience talk about following your beliefs to ensure your happiness and motivation at work.  Write your own career user story with job satisfaction acceptance criteria.  Are you in the right job?  Do you love your job?   Read her 3 part blog on this topic:

Steve Ropa, VersionOne consultant, presented “Agile Craftsmanship and Technical Excellence:  How to Get There”.  To change your organization, set an example of “this is what we do here”.  Seek to become a mentor to others and to engage in continuous learning.  Read his blog related to this topic:

Also, Satish Thatte, another VersionOne Consultant, gave a light-hearted talk on “Scaling Agile Your Way,” based on his blog:

Then, of course, there were evening festivities.  The best party to be invited to was hosted by VersionOne at Bobby McKey’s Dueling Piano Bar featuring very talented musicians and songs we all knew and loved.  A great time and lots of fun were had by all.  No walking out early here!

The conference party theme on Thursday evening was Super Heroes.  Of course, the real heroes attending that night were those industry leaders who had the vision and the courage to guide their organizations and teams to a winning strategy focused on a culture of agility and lean principles.  One of the sessions presented by Michael Hamman described agile (transformational) leadership as the ability to grow adaptive capability across all aspects of the organization. In another session, Doc Norton encouraged adopting an experimentation-oriented mindset by challenging assumptions, compliance, and fear of failure.  In the closing keynote, James Tamm encouraged us to examine our own personal defensiveness as a way to overcome conflict and unhealthy cultural dynamics so we can move into an open, trusting, and collaborative culture.

Not surprising given the venue, a number of session topics focused on agile in government, dispelling once and for all the myths that agile cannot be successfully applied in the government sector. The government sector often faces more ingrained cultural challenges to agile adoption than its commercial counterparts, including:

  • Federal policies that agencies are audited against and contractor relationships dictated by contractual requirements which address traditional and “waterfallish” approaches
  • Earned value reporting and accounting driven by artifacts and activity versus outcome
  • Contract competitions which stifle collaboration
  • Command-and-control hierarchies that restrict the flow of information and innovation

However, this is changing, and the fact is that many government agencies are overcoming these barriers and realizing the benefits of agility. Coming from the government sector myself and knowing agile works, this success is near to my heart. And frankly, who should want to see this success more than taxpayers: a government delivering a continuous stream of value efficiently.

To summarize key takeaways:

  • Scrum is more than a set of processes and activities; it is about the continuous delivery of value and getting things done.
  • Large organizations across all industries are scaling agile across the entire enterprise and discussing how to optimize results.
  • To streamline delivery cycle time and improve time to market, you must tackle DevOps and this is the new focus of improvement in many organizations.
  • True agile transformation must address individuals and interactions, establishing a culture of trust and collaboration and alignment to vision and goals. This requires executive level commitment and action.

The closing keynote cited a 755% difference in net income between collaborative and adversarial work environments. We need to see more leadership willing to tackle these challenges and deliver. It is never too late.

Whether you want to initiate an enterprise-level agile transformation or just revitalize your practices, visit our website for information on getting started with our solution programs.


Finally, mark your calendars for next year’s conference. It will take place from July 25 – July 29, 2016 in Atlanta, Georgia, home base for the VersionOne family.  It promises to be even bigger and better!  Hope to see you there next year for another great learning opportunity and chance to reconnect with old friends and meet and connect with new associates who share your passion for enterprise agility.


The Trouble with NFRs


The use of the Scaled Agile Framework® (SAFe®) as a way of scaling agile brings the need to identify Non-Functional Requirements (NFRs) throughout the value stream. This need to deal with NFRs is also seen in DAD and LeSS; we have always known that NFRs are key to software success.

In some cases the NFRs will be implied: you have a legal requirement to deliver software that is fit for purpose, and you also need to comply with any relevant legislation such as the UK Data Protection Act. In other cases NFRs are explicit and will be called out, such as requirements for website colour schemes that assist visually impaired people.

Other NFRs could be identified by the teams and integrated into the definition of done. These NFRs are based on experience and craftsmanship, such as the need to ensure reviews or configuration management.

When looking at the SAFe big picture we see that NFRs are effectively everywhere. They exist on each of the backlogs shown and constrain those backlogs. There are also other NFRs, as mentioned above.


We see that SAFe identifies on the Big Picture NFRs that are applied from portfolio to story level. These constraints are general and can apply to all Epics, Features and Stories in any particular backlog. This means that an NFR must be satisfied by a number of stories, unlike acceptance criteria, which are defined on a per-feature or per-story basis. The following is a class diagram that shows how the non-functional requirements relate to the epics, features and stories. Note how the Epic does have non-functional requirements while all of the functional requirements for it live at the feature level.
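To make those relationships concrete, here is a minimal, purely illustrative sketch (hypothetical names throughout): an NFR can constrain many items, an Epic carries only non-functional requirements, and the functional requirements live on the Features and Stories beneath it.

```python
from dataclasses import dataclass, field

@dataclass
class NFR:
    name: str                      # e.g. "95th percentile response under 2 seconds"

@dataclass
class Story:
    title: str
    nfrs: list[NFR] = field(default_factory=list)    # NFRs constraining this story

@dataclass
class Feature:
    title: str                     # the functional requirement lives here
    stories: list[Story] = field(default_factory=list)
    nfrs: list[NFR] = field(default_factory=list)

@dataclass
class Epic:
    title: str
    features: list[Feature] = field(default_factory=list)
    nfrs: list[NFR] = field(default_factory=list)     # non-functional only

# The same NFR object can be attached to many items, unlike acceptance
# criteria, which belong to a single feature or story.
performance = NFR("95th percentile response under 2 seconds")
checkout = Feature("One-click checkout", nfrs=[performance])
payments = Epic("Payments revamp", features=[checkout], nfrs=[performance])
```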


NFR Species

Non-functional requirements come in many guises. They can be known up front or they can be discovered as time moves forward. Like anything else in agile, they can be elaborated at any time outside of a sprint. For the purpose of this discussion we will recognise two main NFR types: team NFRs and product (technical) NFRs.

Team NFRs

Team NFRs are norms of behaviour agreed within a team and between teams. The VersionOne Communities feature is an ideal place to establish and elaborate these NFRs. For instance, the definition of done would be a Topic in a community called Agile Teams, and that Topic could include a number of sub-topics. For example, the definition of done could include:

  • Use of TDD and the way the tests are maintained
  • Coding Standard and how it is enforced by the teams
  • Configuration Management
  • Continuous Everything usage
  • Documentation Standards

This approach could also be extended to the definition of ready, and indeed to any other team norms that need to be in place and continuously improved by the teams. These NFRs are then available for everyone to understand and improve as needed.

Technical NFRs

Other NFRs operate at a more technical level. Examples include application performance, security and maintainability; in fact, many words that end in “-ility” suggest NFRs. These NFRs can be tested, in many cases using test automation.

Using the VersionOne Tests and Test Sets features to record NFRs, we can imagine these being developed as we progress. Here we would model the test sets, writing their tests in Given/When/Then form where we can (a sketch follows below) and in whatever language fits best otherwise. The problem with this approach is that the VersionOne tool only allows a test set to belong to one backlog item at a time, so we would have to apply them only to the portfolio epic, or whatever the highest-level abstraction is. Tests are likewise constrained. However, test sets and tests can belong to more than one regression suite at a time, something we will look into further on.
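
As an illustration of the Given/When/Then style mentioned above, a technical NFR such as a response-time constraint can be captured as an automated test. This is a minimal sketch in Python; the URL, the two-second threshold and the timing approach are assumptions invented for the example, not part of the VersionOne tool.

    import time
    import urllib.request

    def test_transaction_performance_nfr():
        # Given a deployed build of the application (placeholder URL)
        url = "https://example.com/app/checkout"
        # When a complete user transaction is executed
        start = time.monotonic()
        urllib.request.urlopen(url, timeout=10)
        elapsed = time.monotonic() - start
        # Then the transaction completes within the agreed NFR threshold
        assert elapsed < 2.0, f"NFR violated: transaction took {elapsed:.2f}s"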

We should consider that part of the definition of done for a story or feature includes the successful demonstration of the NFRs specified to constrain that item. An epic’s NFRs are met when all of the features (or sub-epics) in the epic have their NFRs met and all other epic-level NFRs are satisfied.
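
Reusing the illustrative model from the earlier sketch, that roll-up rule could be expressed roughly as follows. The is_satisfied_for callback stands in for whatever test results actually demonstrate an NFR against an item; it is an assumption for the sake of the example.

    def nfrs_met(item, is_satisfied_for) -> bool:
        # The item's own NFRs must all be demonstrated against it...
        if not all(is_satisfied_for(nfr, item) for nfr in item.nfrs):
            return False
        # ...and every child (feature or story) must also have its NFRs met.
        children = getattr(item, "features", []) or getattr(item, "stories", [])
        return all(nfrs_met(child, is_satisfied_for) for child in children)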

For example, a performance requirement may apply to an entire user transaction, which may span a number of stories. The individual stories need to be integrated at the feature level before the performance requirement can be tested; it can then be re-tested for every story that becomes part of the feature. This aligns with the way regression testing works.

We see that NFRs apply at the epic and feature levels as well as at the story level. There is a nesting in place that lines up with the way NFRs constrain the epics, features and stories in a SAFe model.

Defining NFRs, an Example

One way to define NFRs in VersionOne is to use the regression testing features. With this mechanism we can create a number of regression tests, group them into test sets within a regression test suite, and make that suite part of a regression plan, which can then be used to bring NFRs into the testing regime. The advantage of this approach is that a regression plan can have tests and test suites assigned to it in a non-exclusive relationship.

Let’s take security as an example. Imagine two different security test sets: penetration testing and SQL injection testing. These work at different levels and can be applied to different scopes. Imagine further that each of these test sets is made up of a number of tests, as shown in the table below: the pen tests cover different attacks, and the injection tests cover different types of SQL injection attack. In reality both test sets would be much bigger, but this is enough to frame the discussion.

Now, a test set cannot be related to more than one epic or story at a time; it can, however, be assigned to any number of regression suites. It is initially confusing that tests can be assigned directly to a regression suite and can also belong to a test set assigned to the same suite, but this makes sense in practice.

Test Set                                               Test
Penetration Tests (can be bought as a package)         Scan all ports for status
                                                       Password attack
                                                       URL malformed attack
Injection Tests                                        Delete information attacks
                                                       Reveal information attacks
                                                       Alter information attacks
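
A rough way to picture the non-exclusive relationship described above, using the contents of the table (plain Python, purely illustrative; this is not the VersionOne data model):

    # Each test set is a named group of tests.
    penetration_tests = {
        "Scan all ports for status",
        "Password attack",
        "URL malformed attack",
    }
    injection_tests = {
        "Delete information attacks",
        "Reveal information attacks",
        "Alter information attacks",
    }

    # A test set may belong to only one backlog item, but the same test set
    # can be assigned to any number of regression suites and plans.
    security_regression_plan = {
        "Penetration suite (release level)": penetration_tests,
        "Injection suite (story level)": injection_tests,
    }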

Scope and Use of NFRs

Now we have two test sets, which can be part of a security regression plan and which could be configured as two regression suites. These suites would work like this: the penetration suite is best applied at the system or release level, since penetration tests are usually targeted at a system under test, which is a full build; these tests are about accessing the system through unauthorised routes. Injection tests are applied to individual fields in the completed application, so they apply at the story level. In this case an injection test set would apply to nearly every story that contains a UI component or otherwise exposes database functionality, and these stories would need to pass this NFR before they were “done”.

So in the case of the injection NFRs it may be best to place a test into each story indicating that the test set must be performed. We can then track the status of the test set through the status of that test. Note that the test can be associated with the test set through its relationships. The downside is that this has to be done for each and every story that has UI features.
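
A rough sketch of what that per-story bookkeeping amounts to, in plain Python rather than through the VersionOne tool (the story names and the has_ui flag are invented for illustration):

    stories = [
        {"name": "Login form", "has_ui": True, "tests": []},
        {"name": "Nightly batch import", "has_ui": False, "tests": []},
        {"name": "Search page", "has_ui": True, "tests": []},
    ]

    # Every story that exposes a UI (or other database functionality) gets a
    # test that points back at the shared injection test set.
    for story in stories:
        if story["has_ui"]:
            story["tests"].append({"name": "Run injection test set", "test_set": "Injection tests"})

    # The status of the NFR can then be tracked per story via that test.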

Another approach would be to create a downstream dependency from the test set to each story. This way it is possible to see all of the stories that depend on the injection test NFR. However, this approach does not track the dependencies as they are satisfied.

The pen test NFR is related to a release; in fact it would be related to all releases, so we would want to associate the pen testing test set with the agile release train. To do this we can add the actual tests as regression tests and generate test sets for the project from there. These test sets can then be configuration managed, as each agile release train could have a different test set while all remain part of the same regression test suite.

The planning level for the regression test suite should therefore be set to the value stream, while the planning level for the test set should be the agile release train. This is a unique relationship, as only one planning level can be defined at a time.

In fact, if we have a number of programme increments (PIs), the pen test set will grow through these PIs. In this case, hold off generating the test set until you are ready to run it, but keep adding new regression tests to the regression test suite in preparation for that moment. The tests in the test sets can be configured later as needed.

In Summary

1: For a new NFR at the portfolio or value stream level, add it to the regression test suite. It goes in as a regression test and is used to either generate or be added to a test set. These test sets can be designed to run at the release train level (pen tests) or at the story level (injection tests).

2: If they are to be run at the release train level, then the test set has its planning level set to the agile release train to show this.

3: If they are to be run at the story level, as part of the story’s definition of done, then a test will need to be created in the story and linked to the test set manually, or a downstream dependency placed on the test set from each story that needs to demonstrate it has met the NFR constraint.

4: As a new standard of work is encountered, raise it as a topic in the collaboration room and progress it in this manner. The definition of done or ready may well be impacted.

The above allows us to create and track NFRs at the various levels where they are needed.

Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.

Posted in Agile Adoption, Agile Coaching, Agile Development, Agile Methodologies, Agile Tools, SAFe, scaled agile framework | Leave a comment

The 7 Best DevOps Books

DevOps books

With the relative newness of DevOps, there are not yet a ton of DevOps books. That’s why we’ve assembled a list of the 7 best DevOps books based on four criteria: the number of Amazon ratings, the average Amazon rating, the number of GoodReads ratings and the average GoodReads rating. Both Amazon and GoodReads use a scale of 1 to 5 stars, with 5 stars being the best.

We did all the legwork of digging through Amazon and GoodReads to determine how many reviews each book has, as well as its average rating on each site, so that you can quickly find the DevOps book that is just the right fit for your needs!

DevOps Books List

1. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

  • By Gene Kim, Kevin Behr, George Spafford
  • 4.6 Average Amazon rating (1,012 ratings)
  • 4.17 Average GoodReads rating (3,350 ratings)

Book Description:

Bill is an IT manager at Parts Unlimited. It’s Tuesday morning and on his drive into the office, Bill gets a call from the CEO.

The company’s new IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the project is massively over budget and very late. The CEO wants Bill to report directly to him and fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize workflow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.

2. What is DevOps?

  • By Mike Loukides
  • 3.7 Average Amazon rating (57 ratings)
  • 3.46 Average GoodReads rating (167 ratings)

Book Description:

Have we entered the age of NoOps infrastructures? Hardly. Old-style system administrators may be disappearing in the face of automation and cloud computing, but operations have become more significant than ever. As this O’Reilly Radar Report explains, we’re moving into a more complex arrangement known as “DevOps.”

Mike Loukides, O’Reilly’s VP of Content Strategy, provides an incisive look into this new world of operations, where IT specialists are becoming part of the development team. In an environment with thousands of servers, these specialists now write the code that maintains the infrastructure. Even applications that run in the cloud have to be resilient and fault tolerant, need to be monitored, and must adjust to huge swings in load. That was underscored by Amazon’s EBS outage last year.

From the discussions at O’Reilly’s Velocity Conference, it’s evident that many operations specialists are quickly adapting to the DevOps reality. But as a whole, the industry has just scratched the surface. This report tells you why.

3. Building a DevOps Culture

  • By Mandi Walls
  • 4.2 Average Amazon rating (20 ratings)
  • 3.20 Average GoodReads rating (108 ratings)

Book Description:

DevOps is as much about culture as it is about tools. When people talk about DevOps, they often emphasize configuration management systems, source code repositories, and other tools. But, as Mandi Walls explains in this Velocity report, DevOps is really about changing company culture—replacing traditional development and operations silos with collaborative teams of people from both camps. The DevOps movement has produced some efficient teams turning out better products faster. The tough part is initiating the change. This report outlines strategies for managers looking to go beyond tools to build a DevOps culture among their technical staff.

Topics include:

  • Documenting reasons for changing to DevOps before you commit
  • Defining meaningful and achievable goals
  • Finding a technical leader to be an evangelist, tools and process expert, and shepherd
  • Starting with a non-critical but substantial pilot project
  • Facilitating open communication among developers, QA engineers, marketers, and other professionals
  • Realigning your team’s responsibilities and incentives
  • Learning when to mediate disagreements and conflicts

Download this free report and learn how the DevOps approach can help you create a supportive team environment built on communication, respect, and trust.

4. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

  • By Jez Humble, David Farley
  • 4.4 Average Amazon rating (66 ratings)
  • Winner of the 2011 Jolt Excellence Award

Book Description:

Getting software released to users is often a painful, risky, and time-consuming process. This groundbreaking new book sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter what the size of a project or the complexity of its code base.

Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance.

5. Next Gen DevOps: Creating the DevOps Organisation

  • By Grant Smith
  • 4.5 Average Amazon rating (2 Amazon ratings)
  • 4.00 Average GoodReads rating (3 GoodReads ratings)

Book Description:

A coherent and actionable DevOps framework is now available to businesses through a revolutionary new book, Next Gen DevOps: Creating the DevOps Organisation. Utilising nearly two decades’ experience at firms including AOL, Electronic Arts (EA) and British Gas’ Connected Homes, the book’s author and pioneer of the DevOps movement, Grant Smith, has distilled the essence of DevOps into an easily-implementable framework. Next Gen DevOps merges behaviour-driven development, infrastructure-as-code, automated testing, monitoring and continuous integration into a single coherent process. The book presents an implementation strategy that allows firms large or small, start-up or enterprise to embrace the move to DevOps.

By presenting a new way to look at the operations discipline, Next Gen DevOps challenges the old idea of a team languishing at the end of the software development lifecycle, forever context-switching between support tasks, security, data management, infrastructure and software deployment. Armed with the lessons learned from history and the Agile software development movement, combined with the latest in Software-as-a-Service (SaaS) solutions, cloud computing and automated testing, Next Gen DevOps sets out Grant’s vision for IT in business’ biggest evolution yet. “Every company is now an internet firm – and that means changes in the way we work,” Grant Smith says. “It’s time to drop the silos between our IT teams and work as organisations to improve and develop our products. Using simple theories and practices, Next Gen DevOps: Creating the DevOps Organisation offers a framework that can transform any internet company.”

6. The IT Manager’s Guide to Continuous Delivery: Delivering Software in Days

  • By Andrew Phillips, Michiel Sens
  • 4.2 Average Amazon rating (2 Amazon ratings)

Book Description:

Turning good ideas into marketable software quickly is now a business imperative for every enterprise. Delivering software features faster and with high quality is the first critical step. The subsequent step is to rapidly collect feedback from users to guide the next set of ideas for further improvements. Critical software development objectives such as these set the stage for The IT Manager’s Guide to Continuous Delivery: Delivering Software in Days, Instead of Months.

The book champions the concept of Continuous Delivery in enabling organizations to build automated software delivery platforms for releasing high-quality applications faster. The book also presents how Continuous Delivery is a set of processes and practices that radically removes waste from the software production process and creates a rapid and effective feedback loop with end users.

7. Leading the Transformation: Applying Agile and DevOps Principles at Scale

  • By Gary Gruver, Tommy Mouser

Book Description:

Software is becoming more and more important across a broad range of industries, yet most technology executives struggle to deliver software improvements their businesses require.

Leading-edge companies like Amazon and Google are applying DevOps and Agile principles to deliver large software projects faster than anyone thought possible. But most executives don’t understand how to transform their current legacy systems and processes to scale these principles across their organizations.

Leading the Transformation is an executive guide, providing a clear framework for improving development and delivery. Instead of the traditional Agile and DevOps approaches that focus on improving the effectiveness of teams, this book targets the coordination of work across teams in large organizations—an improvement that executives are uniquely positioned to lead.


DevOps is an emerging methodology that is growing and changing quickly. This relative newness and rapid change make it difficult to find great DevOps books. I hope our list has made your search a little easier and that you have found some DevOps books you are interested in reading!

What are some other DevOps books you would add to the list?


Posted in DevOps | Leave a comment

How to Become a Software Craftsman

How to become a software craftsman has become a huge subtext in the software community and the development conversation. One of the things I’ve been exploring is how we get there: how do we go from where we are today to becoming true software craftsmen?

It’s not this magical “Oh, we’re agile, we put posters of the manifesto everywhere, so now we’re agile and we’re software craftsmen.” It takes work, and it takes activities. I’ve been doing a lot of exploration into this and believe I have discovered three paths to the summit of software craftsmanship.

Software Craftsman Defined

Let’s first define what I mean by a software craftsman. Everybody has their own views, but I think of a software craftsman as someone who has practiced the techniques of XP, agile and DevOps until those techniques have worked themselves into the person’s subconscious. This software craftsman creates software using these techniques almost through muscle memory. They no longer have to think about what they need to do to create beautiful code; they just execute in their relentless pursuit of creating amazing software products.

How You Get to be a Software Craftsman

So now that we have defined what being a software craftsman means to me, let’s explore how we get there. I’ve found that there are three paths to becoming a software craftsman, and each path comprises software development skills that need to be developed. The first path develops people skills, the second develops technical skills, and the third explores the principles derived from the other two.

Let’s survey what each of these paths covers.

The People Path

One of the aspects I’ve been exploring a lot is that DevOps and software craftsmanship have both people problems and technical problems. Of course the technical skills are critical, but no more so than the people side. Just as it does a person no good to strengthen only their right arm while their left atrophies, it does us no good to strengthen only our technical skills while our people skills go to waste.

We must learn and apply technical tools from a people perspective. Craftsmanship means doing things by hand and knowing how to execute with an artist’s touch. It’s not enough to say, “I expect you to be doing test-driven development.” You have to be able to help people understand what test-driven development is.

Those of us in the software development community must help foster craftsmanship because it certainly isn’t being taught in school. The typical computer science college graduate does not understand how to do test-driven development or agile. We’re making some progress, but it’s still not good enough.

The Technical Path

There is also the technical side to keep in mind. Practices include test-driven development, refactoring and continuous integration. It is also important to know when and how to write acceptance tests, how to apply them and how to automate them. These are the nuts and bolts of a solid software craftsman. The idea of refactoring becomes part of daily life; it is not just using the buzzword but intuitively building it into everything you do.
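
As a tiny, hypothetical illustration of the test-first habit mentioned here, the test is written before the code it exercises (Python; the shipping_cost function and its rules are invented purely for the example):

    # Written first: the failing test that pins down the behaviour we want.
    def test_free_shipping_over_threshold():
        assert shipping_cost(order_total=120.00) == 0.0
        assert shipping_cost(order_total=40.00) == 5.99

    # Written second: the simplest code that makes the test pass,
    # then refactored with the test acting as a safety net.
    def shipping_cost(order_total: float) -> float:
        return 0.0 if order_total >= 100.00 else 5.99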

These are the steps you’re going to continue with, and at some point that will lead you to DevOps and continuous delivery. These technical practices and methodologies are more organizational than individualized, but DevOps and continuous delivery do require a discipline that only a software craftsman can really, truly supply.

The Principles Path

I’ve found that you can break this down into certain areas of foundational skills that need to be developed. The first of those foundational skills is coding. It sounds kind of silly to say it, but it’s worth saying: a programmer needs to be excellent at coding.


By coding I don’t mean one language. You must be astute in multiple languages; no true craftsman knows only one way of doing anything. It doesn’t matter which languages, but there need to be at least two, preferably more.


Designing is an important aspect, but it is tricky in the agile world because we say, “don’t get caught up in big, up-front design.” I believe that, but you do need to understand design. Whether it is unit design, large architectural pieces or systems design, you need to understand and be able to apply good design.

Applying Agile Principles

Learning the 12 principles of the Agile Manifesto isn’t very difficult; applying them is much harder. You have to understand when they apply, when simplicity really is essential, what simplicity is and how to apply simplicity to a particular problem.


We have many tools available to us. Mastering these tools as a true craftsman is not about simply using them; it is about knowing how to use them wisely. As the saying goes, “If you need a hammer, whatever tool is handy is a hammer.” That’s not necessarily the best approach. A craftsman seeks the right tool for the job and uses it masterfully.

Work Habits

You need to establish strong work habits, and you need to be able to practice them not just by yourself but within a team. Test-driven development and continuous integration are tools that help us practice our work habits. Having those work habits, and the discipline around them to be able to say yes or no and to say, “This is what I need to do, and it’s what I will do,” is crucial.


Professionalism is something you notice more often when it is absent than when it is present. There is a quiet confidence and understanding in professionals; they know where they are going and what they are doing without conscious thought. You have confidence in professionals because you know they will do great work. That’s what I mean by professionalism: it is very difficult to define, but it is absolutely vital to a strong development shop, especially as we aspire to craftsmanship.

Traditional Craftsman Education

To learn how to teach software craftsmanship, we need look no further than the trades where craftsmanship originated. Traditionally, craftsmen were created through apprenticeships. By going through an apprenticeship program, young craftsmen, no matter what their background or education, learn not just what they should do but how to do it. They learn the tricks and techniques that don’t necessarily come from reading a book, taking a class or passing a test. It’s about learning by doing, and learning by doing things together.

The next component of how craftsmanship has been historically taught is recognizing progress. To recognize progress, you need a path to follow. It’s no secret that there’s no really well-defined career path for software developers.

The typical path of a developer is to start as a junior software engineer, progress to a senior software engineer and, if you’re really good, you become a tech lead. As a tech lead, you have to now tell other people how to do it. Then, if you’re really good at that, they take you completely out of the thing you love, which is programming, and make you a manager. Then, you get to try to figure out how to make other people do what you love to do most. That path has never really worked.

Micro-certifications are becoming very popular in the world of education and development. Think of micro-certifications as similar to Boy Scout badges: you could have a badge in test-driven development, concrete data systems or web design. By earning these badges, you can both recognize progress and be recognized for that progress. When taking this kind of approach, you should start associating some of your compensation and development programs with earning these badges.

When you are done with your apprenticeship, you are, of course, not done learning. At this stage you grow into a journeyman, a very time-honored tradition. The idea of the journeyman is that you are now good enough to go out on your own. In the traditional craftsmanship model, a journeyman would wander from village to village practicing their craft.

In the software world, this might mean you work on a team for a year or maybe two and then move to another team. The wandering doesn’t have to be quite as frequent as it was for the traditional journeyman, but the idea is that you need to continue to develop your skills, and to develop them not in a single place but by exploring other areas.

If you’ve been doing nothing but data mining for six months, then maybe for the next six months you should be focused on webpages so that you are building a broad base of skills. That’s what a journeyman’s life is. We should be spending the majority of our time as journeymen.


These are the steps. It’s not the easiest path in the world, but it’s absolutely worth it as you go along. We should all aspire to be great at our craft and be true craftsmen in our discipline. I hope this has inspired you to take a look at what areas you can develop to become a stronger software craftsman.

What other skills do you think are important for a software craftsman to develop?

About the Author

Steve Ropa
CSM, CSPO, Innovation Games Facilitator, SA
Agile Coach and Product Consultant, VersionOne

Steve has more than 25 years of experience in software development and 15 years of experience working with agile methods. Steve is passionate about bridging the gap between the business and technology and nurturing the change in the nature of development. As an agile coach and VersionOne product trainer, Steve has supported clients across multiple industry verticals including: telecommunications, network security, entertainment and education. A frequent presenter at agile events, he is also a member of Agile Alliance and Scrum Alliance.

Posted in Software Craftsmanship | Leave a comment