How ADLM Gobbles Up DevOps

In the 1980s and '90s, the business software landscape was dominated by a diverse list of cutting-edge companies such as Best Software, i2, Brock Control Systems, Mapics, Ross Systems, Infinium, FBO Systems, Manugistics and MSA (of course I could go on and on). Now long gone, these and hundreds more really great companies were gobbled up or rendered obsolete by the rising class of ERP giants. In this article I'll explain why history will repeat itself, leading to the extinction of most DevOps tools as leading ADLM platforms continue to assert their dominance across the diverse software development and delivery automation ecosystem.

History Informs Our Future and The Evolution of ERP

You already know that for the past twenty years, most companies have leveraged some form of ERP system to manage virtually every core business process. One benefit of this tightly integrated solution is a powerful inter-functional data flow that enables corporate agility and provides the highest level of visibility. This super-integrated architectural model has become the standard adopted by virtually every enterprise around the globe. What you may not know is that today's "ERP model" is the result of four distinct evolutionary generations, which I believe help predict the next major evolution of automated software delivery.

Phase 1 – Automation
Enterprise Information Systems (EIS) – In the 1960’s, early automation systems were developed to support important individual business functions such as general ledger, inventory management, billing, payroll, etc. These systems were architected completely independently of each other and added little value to the enterprise beyond their narrow scope.

Phase 2 – Core Data Model
Manufacturing Resource Planning (MRP) – Then in the 1970’s, the idea of a master production schedule was devised so that a few of these isolated systems could gain greater visibility into future inventory and product requirements of the organization. The big idea behind the master production schedule was the creation of an open data model that could be leveraged by other systems impacted by the manufacturing production schedule.

Phase 3 – Expansion
Manufacturing Resource Planning II (MRP II) – In the 1980’s, software vendors began to build and sell off-the-shelf packages that promised “best of breed” process design. These solutions provided tightly integrated versions of the key manufacturing processes required to produce products.

Phase 4 – Business Process Domination
Enterprise Resource Planning (ERP) – Finally, in the 1990’s, business software vendors expanded well beyond the manufacturing scope by creating highly-integrated solutions that now cover just about every standard business function imaginable. These systems leverage a unified data model to dramatically improve visibility, consistency, accuracy and planning capabilities enterprise-wide.

What Does ERP Have To Do With Automated Software Delivery?

The evolution of ERP has taught us how natural pressures forced the creation of a unified and comprehensive “business data model” spanning the entire enterprise. Those software vendors with enough influence to dictate that data model were the ultimate winners in the ERP space.

In the very first generation of ERP (EIS), software was leveraged to deliver a high degree of automation to many business processes that were previously manual. Over time, it became clear that these newly automated processes were interconnected and the evolution towards a tightly integrated and unified data model was underway. The business objective that fueled each successive generation outlined above was the need to design more efficient business processes that increased organizational visibility and agility.

As Marc Andreessen famously said in 2011, “…more and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense.” (Why Software Is Eating The World). That statement resonates even stronger four years later. Today, virtually every corporate organization is seeing the familiar pressure to deliver software more efficiently and reliably. If history is indeed our guide, any highly fragmented and/or isolated process required to deliver incremental software change will face mounting pressure to be merged into an integrated end-to-end enterprise-grade platform that can deliver improved cross-functional visibility with even greater efficiency.

Why Application Development Lifecycle Management Will Win

In my view, there are really only two broad solution categories in the realm of automated software design and delivery – Application Development Lifecycle Management (ADLM) and the catch-all term DevOps (which I’ve hijacked here to describe any other type of process automation that assists the software delivery process).

DevOps tools are often narrow point solutions that have sprouted from open source projects, in-house development or commercial vendors. Organizations rely heavily upon a diverse collection of these DevOps tools to help document, validate and automate a steady flow of software change along its path to end-users.

Here’s the problem DevOps tools are beginning to face: Like the EIS systems of the 60’s, fragmented DevOps tools have little or no visibility into the overall end-to-end process; however, they do generate lots of important data that is often locked away and isolated. This isolation creates a clear barrier to efficiency, visibility and agility across the software delivery process. Because of the limited function each individual DevOps tool performs, none have the gravitational pull required to define the larger data model. As comprehensive enterprise software delivery platforms emerge elsewhere, standard DevOps tools will face ever-increasing pressure to fold inside them.

Currently, Application Development Lifecycle Management (ADLM) solutions provide a platform to manage development projects, team resources and all manner of development activities. ADLM platforms also contain the "master production schedule" for every development initiative – past, present and future. The data contained within ADLM is now at the core of a quickly emerging software delivery data model, and leading ADLM vendors are expanding their footprint well beyond traditional use cases. When it comes to ownership of this software delivery data model, I see no other solution category across the entire ecosystem with enough enterprise clout to pose a serious challenge to leading ADLM vendors.

Unified Software Delivery Platforms Are Already Emerging

Each of the five current ADLM leaders (according to Gartner's most recent Magic Quadrant) is now racing to bring to market an enterprise software delivery platform that integrates many key DevOps capabilities.

Here’s my two cents on each…

The giants in the space – IBM and Microsoft both have plenty of muscle and IP today. Clearly both are moving down the path toward a comprehensive software delivery platform. IBM acquired DevOps vendor UrbanCode several years ago and is hard at work building its developerWorks platform. Seemingly every day, Microsoft is adding some kind of DevOps capability to its Visual Studio product suite. Still, I don't see either vendor gaining much traction outside of their traditional (albeit very large) customer bases. Perhaps more importantly, neither seems to have bona fide credentials within the super-influential agile development community, and I believe this kind of street cred (at least for now) is a must-have to dominate this space.

Atlassian does enjoy wide support among the agile community and no doubt has the broadest adoption footprint of any of the current ADLM leaders. Atlassian is in a strong position to mount a serious threat. However, Atlassian's core product, JIRA, is widely believed to lack heavyweight depth across the ADLM feature spectrum, and it is often implemented as a departmental or "team" tool. They'll have to develop deeper strategic planning and multi-team project capabilities to beat the rivals.

This May, software giant Computer Associates announced a definitive agreement to purchase ADLM heavyweight Rally and its agile development platform. In its announcement, CA said it intends to leverage Rally’s capabilities to “complement and expand CA’s strengths in the areas of DevOps and cloud management”. With the crucial addition of Rally, CA is now in a strong position to assemble its diverse capabilities into a single unified and enterprise-caliber software delivery platform. Now… can they seamlessly integrate all of the pieces-parts into a cohesive solution with a unified data model? If so, how long will it take?

Finally, I believe VersionOne may have a slight edge over the other ADLM vendors in the race toward a unified software delivery platform. I may be a bit biased because of my direct involvement in a joint project currently underway; nonetheless, here are four reasons why they will absolutely be a dominant force to reckon with:

Vision: Robert Holler, VersionOne CEO, is clearly buying into the "enterprise software delivery platform" vision. He and his team have a well-thought-out strategy and they are actively executing against it.

DevOps Automation: VersionOne has partnered with ClearCode Labs, and both teams have been hard at work integrating ClearCode's Continuous Delivery Automation framework into the VersionOne core product. This integration gives VersionOne the ability to orchestrate virtually any DevOps tool or platform and (just as importantly) incorporate all related data across VersionOne's product suite to feed its quickly expanding data model.

JIRA Integration: VersionOne has just announced a tight integration with the JIRA platform. This integration will give them the ability to fold fragmented JIRA installations across the enterprise into the unified VersionOne platform, providing a more strategic and enterprise-grade solution.

Availability: VersionOne’s automated delivery platform is available now and they are demonstrating their comprehensive solution to the eager agile community this week at the sold-out Agile2015 conference in Washington, DC.

Summary

The top 5 ADLM vendors are already well on their way toward developing enterprise-grade software delivery platforms that will consume many of the current “DevOps” automation solutions. Soon, development organizations will benefit from a comprehensive platform that can deliver increased efficiency, visibility and agility when compared to the heterogeneous solutions that have been cobbled together today.

About the Author

Dennis Ehle is a pioneer and thought leader in continuous delivery automation and agile delivery methodologies. Dennis is passionate about helping agile teams dramatically reduce the transaction cost associated with delivering incremental change. His company, ClearCode Labs, does just that by helping organizations continuously deliver high-quality software releases more frequently, merging proven methodologies with empowering tools and technology. Twitter: @DennisEhle

Article originally posted on DevOps.com


Agile 2015 Conference Highlights: Saluting Enterprise Agility

I am just returning from a fantastic week at the 2015 Agile Alliance Agile conference held from August 3-7 just outside Washington D.C. and wanted to share some highlights with those who were unable to attend. This conference attracts international interest and was attended by over 2,300 participants, both experienced practitioners looking to refine their game and novices seeking to join in and reap the powerful benefits of the mainstream set of values and principles that we call "agile".

As a title sponsor, VersionOne featured the latest innovations in its Enterprise Agile Platform to help enterprises succeed with scaling agile, our support for the Scaled Agile Framework® (SAFe®), and new capabilities such as TeamSync™ for JIRA.

The industry focus on DevOps continues, as do discussions on navigating barriers to change and scaling successfully. VersionOne featured a unified DevOps solution showcasing demonstrations of the new ClearCode integration that enables an automated visual flow of change throughout the software cycle, from discovery through final delivery.

The VersionOne theme, "Enterprise Agility: Revolutionizing How Teams at All Levels Work Together," echoed well with the conference sessions and discussions focusing on scaling agile across enterprises. At VersionOne, we know that revolutionary change, change that really matters, can only be achieved by people working together at all levels. Conference sessions and experience reports discussed keys to successful transformations, including the importance of executive support and of addressing the underlying culture and the soft skills needed to succeed. Conversations at the VersionOne booth included Dean Leffingwell, the creator of the Scaled Agile Framework® (SAFe®), sharing insights around scaling agile. Jeff Sutherland, Scrum co-originator, was also spotted sharing insights at the booth.


VersionOne toasted 10 years of the State of Agile™ survey, the industry's longest-running survey, by serving champagne during the Wednesday evening vendor show. A very popular tribute, needless to say! And if you have not done so yet, please take a few minutes to participate in this year's State of Agile survey (you might win an Apple Watch). Go to www.stateofagile.com.

VersionOne consultant Susan Evans gave an inspirational experience talk about following your beliefs to ensure your happiness and motivation at work. Write your own career user story with job satisfaction acceptance criteria. Are you in the right job? Do you love your job? Read her three-part blog on this topic: http://blogs.versionone.com/agile_management/2015/01/05/99-problems-but-a-coach-aint-one-part-1-of-3/

Steve Ropa, VersionOne consultant, presented “Agile Craftsmanship and Technical Excellence:  How to Get There”.  To change your organization, set an example of “this is what we do here”.  Seek to become a mentor to others and to engage in continuous learning.  Read his blog related to this topic:  http://blogs.versionone.com/agile_management/2015/08/06/how-to-become-a-software-craftsman/

Also, Satish Thatte, another VersionOne Consultant, gave a light-hearted talk on “Scaling Agile Your Way,” based on his blog:  http://blogs.versionone.com/agile_management/2014/10/14/scaling-agile-your-way-how-to-develop-and-implement-your-custom-approach-part-4-of-4/

Then, of course, there were evening festivities.  The best party to be invited to was hosted by VersionOne at Bobby McKey’s Dueling Piano Bar featuring very talented musicians and songs we all knew and loved.  A great time and lots of fun were had by all.  No walking out early here!

The conference party theme on Thursday evening was Super Heroes. Of course, the real heroes attending that night were those industry leaders who had the vision and the courage to guide their organizations and teams to a winning strategy focused on a culture of agility and lean principles. One of the sessions, presented by Michael Hamman, described agile (transformational) leadership as the ability to grow adaptive capability across all aspects of the organization. In another session, Doc Norton encouraged adopting an experimentation-oriented mindset by challenging assumptions, compliance, and fear of failure. In the closing keynote, James Tamm encouraged us to examine our own personal defensiveness as a way to overcome conflict and unhealthy cultural dynamics so we can move into an open, trusting, and collaborative culture.

Not surprisingly given the venue, a number of session topics focused on agile in government, dispelling once and for all the myth that agile cannot be successfully applied in the government sector. Government agencies often face more ingrained cultural challenges to agile adoption than their commercial counterparts, including:

  • Federal policies that agencies are audited against, and contractor relationships dictated by contractual requirements that address traditional, "waterfallish" approaches
  • Earned value reporting and accounting driven by artifacts and activity versus outcome
  • Contract competitions which stifle collaboration
  • Command-and-control hierarchies that restrict the flow of information and innovation

However, this is changing: many government agencies are overcoming these barriers and realizing the benefits of agility. Having come from the government sector myself and knowing agile works, this success is near to my heart. And frankly, who should want to see this success more than taxpayers: a government delivering a continuous stream of value efficiently.

To summarize key takeaways:

  • Scrum is more than a set of processes and activities; it is about the continuous delivery of value and getting things done.
  • Large organizations across all industries are scaling agile across the entire enterprise and discussing how to optimize results.
  • To streamline delivery cycle time and improve time to market, you must tackle DevOps; this is the new focus of improvement in many organizations.
  • True agile transformation must address individuals and interactions, establishing a culture of trust and collaboration and alignment to vision and goals. This requires executive level commitment and action.

The closing keynote cited a 755% difference in net income between collaborative and adversarial work environments. We need to see more leaders willing to tackle these challenges and deliver. It is never too late.

Whether you want to initiate an enterprise-level agile transformation or just revitalize your practices, visit http://www.versionone.com/customer-success/ for information on getting started with our solution programs.


Finally, mark your calendars for next year’s conference. It will take place from July 25 – July 29, 2016 in Atlanta, Georgia, home base for the VersionOne family.  It promises to be even bigger and better!  Hope to see you there next year for another great learning opportunity and chance to reconnect with old friends and meet and connect with new associates who share your passion for enterprise agility.


The Trouble with NFRs

Introduction

The use of the Scaled Agile Framework® (SAFe®) as a way of scaling agile brings the need to identify Non-Functional Requirements (NFRs) throughout the value stream. This need to deal with NFRs is also seen in DAD and LeSS; indeed, we have always known that NFRs are key to software success.

In some cases NFRs are implied: you have a legal requirement to deliver software that is fit for purpose, and you also need to comply with any relevant legislation such as the UK Data Protection Act. In other cases NFRs are explicit and will be called out, such as requirements for website colour schemes that assist visually impaired people.

Other NFRs may be identified by the teams themselves and integrated into the definition of done. These NFRs are based on experience and craftsmanship, such as the need to ensure reviews or configuration management.

Looking at the SAFe Big Picture, we see that NFRs are effectively everywhere: they exist on each of the backlogs shown and constrain those backlogs. There are also other NFRs, as mentioned above.

[Figure: the SAFe Big Picture]

On the Big Picture, SAFe identifies NFRs that apply from the portfolio level down to the story level. These constraints are general and can apply to all Epics, Features and Stories in any particular backlog. This means that a single NFR must be satisfied by a number of stories, unlike acceptance criteria, which are defined on a per-feature or per-story basis. The class diagram below shows how the non-functional requirements relate to the epics, features and stories. Note how the Epic does have non-functional requirements, while all of its functional requirements live at the feature level.

[Class diagram: how NFRs relate to epics, features and stories]
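
To make those relationships concrete, here is a minimal sketch in Python (all class and field names are invented for illustration; neither SAFe nor VersionOne prescribes this implementation). The point is the shape of the relationships: a single NFR constrains many backlog items, while acceptance criteria stay local to one story.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NFR:
    """A non-functional requirement; one NFR constrains many backlog items."""
    name: str

@dataclass
class Story:
    name: str
    acceptance_criteria: List[str] = field(default_factory=list)  # local to this story
    nfrs: List[NFR] = field(default_factory=list)                 # shared constraints

@dataclass
class Feature:
    name: str
    stories: List[Story] = field(default_factory=list)
    nfrs: List[NFR] = field(default_factory=list)

@dataclass
class Epic:
    """An Epic carries NFRs, but its functional requirements live in its Features."""
    name: str
    features: List[Feature] = field(default_factory=list)
    nfrs: List[NFR] = field(default_factory=list)

# The same NFR object constrains several stories at once,
# whereas each story's acceptance criteria belong to it alone.
no_injection = NFR("No SQL injection on any input field")
login = Story("Login form",
              acceptance_criteria=["Error message shown on bad password"],
              nfrs=[no_injection])
search = Story("Product search", nfrs=[no_injection])
```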

NFR Species

Non-functional requirements come in many guises. They can be known up front or discovered as time moves forward, and like anything else agile they can be elaborated at any time outside of a sprint. For the purpose of this discussion we will recognise two main NFR types: team NFRs and technical (product) NFRs.

Team NFRs

Team NFRs are norms of behaviour that are agreed within a team and between teams. The VersionOne product feature of communities is an ideal place to establish and elaborate these NFRs. For instance the Definition of Done would be a Topic in a community called Agile Teams. This Topic would include a number of other topics. For example the definition of done could include:

  • Use of TDD and the way the tests are maintained
  • Coding Standard and how it is enforced by the teams
  • Configuration Management
  • Continuous Everything usage
  • Documentation Standards

This approach could also be extended to the definition of ready and indeed for any other team norms that need to be in place and continuously improved by the teams. These NFRs are then available for all to understand and improve as need be.

Technical NFRs

Other NFRs sit at a more technical level. Examples include application performance, security and maintainability; in fact, many words that end in "ility" suggest NFRs. These NFRs can be tested, in many cases using test automation.

Using the VersionOne features of Tests and Test Sets to record NFRs, we can imagine these being developed as we progress. Here we would model the test sets with their tests using Given/When/Then where we can, and otherwise using whatever language fits best. The problem with this approach is that the V1 tool only allows a test set to belong to one backlog item at a time. Therefore we would have to apply them only to the Portfolio Epic, or whatever the highest-level abstraction is. Tests are likewise constrained. However, test sets and tests can belong to more than one regression suite at a time, something we will look into further on.
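
As an illustration of how a technical NFR of this kind might be automated, here is a minimal pytest-style sketch with the Given/When/Then steps mapped to comments. The endpoint, threshold and marker name are assumptions made for the example, not part of any product.

```python
import time

import pytest
import requests  # assumes the system under test is reachable over HTTP

SEARCH_URL = "https://sut.example.com/search"  # hypothetical system under test

@pytest.mark.nfr  # custom marker, so NFR tests can be collected and run as a suite
def test_search_responds_within_two_seconds():
    # Given a running system under normal load
    params = {"q": "widget"}
    # When a user performs a search
    start = time.monotonic()
    response = requests.get(SEARCH_URL, params=params, timeout=5)
    elapsed = time.monotonic() - start
    # Then the search succeeds in under two seconds
    assert response.status_code == 200
    assert elapsed < 2.0
```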

We should consider that part of a definition of done for a story or feature would include the successful demonstration of the NFRs that are specified to constrain the item. An Epic’s NFRs are met when all of the features (or Sub-Epics) in the Epic have their NFRs met and all other Epic level NFRs are satisfied.

For example, a performance requirement may cover an entire user transaction, which may span a number of stories. The individual story needs to be integrated at the feature level before the performance requirement can be tested. It can then be tested for every story that is part of the feature. This aligns with the way regression testing works.

We see that NFRs apply at the epic and feature level as well as at the story level. There is a nesting in place that lines up with the way NFRs constrain the Epics, Features and Stories in a SAFe model.
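
Continuing the earlier sketch (and reusing its classes and example stories), this nesting can be expressed as a simple recursive roll-up. Here `passed` is an assumed set of (NFR name, item name) pairs standing in for real test results in the tool.

```python
def story_nfrs_met(story: "Story", passed: set) -> bool:
    """A story's NFRs are met when every constraining NFR has passed for it."""
    return all((nfr.name, story.name) in passed for nfr in story.nfrs)

def feature_nfrs_met(feature: "Feature", passed: set) -> bool:
    return (all((nfr.name, feature.name) in passed for nfr in feature.nfrs)
            and all(story_nfrs_met(s, passed) for s in feature.stories))

def epic_nfrs_met(epic: "Epic", passed: set) -> bool:
    # The nesting rule: epic-level NFRs plus every feature's and story's NFRs.
    return (all((nfr.name, epic.name) in passed for nfr in epic.nfrs)
            and all(feature_nfrs_met(f, passed) for f in epic.features))

# Example: only the login story has demonstrated the injection NFR so far.
results = {("No SQL injection on any input field", "Login form")}
print(story_nfrs_met(login, results))   # True
print(story_nfrs_met(search, results))  # False: search has not passed yet
```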

Defining NFRs, an Example

One way to define NFRs in V1 is to use the regression testing features. With this mechanism we can gather a number of regression tests and test sets into a regression test suite, which can then be part of a regression plan; this plan can be used to bring NFRs into the testing regime. The advantage of this approach is that a regression plan can have tests and test suites assigned to it in a non-exclusive relationship.

Let's take an example: security. Imagine two different security test suites, penetration testing and SQL injection testing. These work at different levels and can be applied to different scopes. Imagine further that each of these test sets is made up of a number of tests, as shown in the table below. The penetration tests cover different attacks, and the injection tests cover different types of SQL injection attack. In reality both test sets would be much bigger, but this is enough to frame the discussion.

Now, a test set cannot be related to more than one epic/story at a time. It can, however, be assigned to a number of regression suites at once. It is confusing that tests can be assigned directly to a regression suite and can also be assigned to a test set that is itself assigned to the suite; in practice, however, this makes sense.

Test Set: Penetration Tests (can be bought as a package sometimes)

  • Scan all ports for status
  • Password attack
  • Malformed URL attack

Test Set: Injection Tests

  • Delete information attacks
  • Reveal information attacks
  • Alter information attacks
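
The sketch below (again with invented objects, not the actual VersionOne API) captures the two rules just described: a test set may relate to at most one backlog item, yet the same set can sit in any number of regression suites.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestSet:
    name: str
    tests: List[str]
    backlog_item: Optional[str] = None  # exclusive: at most one epic/story at a time

@dataclass
class RegressionSuite:
    name: str
    test_sets: List[TestSet] = field(default_factory=list)
    tests: List[str] = field(default_factory=list)  # tests may also be added directly

pen = TestSet("Penetration Tests",
              ["Scan all ports for status", "Password attack", "Malformed URL attack"])
inj = TestSet("Injection Tests",
              ["Delete information attacks", "Reveal information attacks",
               "Alter information attacks"])

# Non-exclusive: one test set can sit in several regression suites...
release_security = RegressionSuite("Release-level security", test_sets=[pen])
story_security = RegressionSuite("Story-level security", test_sets=[inj, pen])

# ...but exclusive toward backlog items: it can point at only one at a time.
inj.backlog_item = "Portfolio Epic: Security"
```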

Scope and Use of NFRs

Now we have two test sets which can be part of a security regression plan, and which could be configured as two regression suites. These suites would work like this. The penetration suite would be best applied at the system or release level: penetration tests are usually targeted at a system under test that is a full build, and these tests are about accessing the system through unauthorised approaches. Injection tests are applied to individual fields in the completed application, so they apply at the story level. In this case an injection test set would apply to nearly every story that contains a UI component or otherwise exposes database functionality, and these UI components would need to pass this NFR before they were "done".

So in the case of injection NFRs it may be best to place a test into each story calling for the test set to be performed. Then we could track the status of the test set through the status of the test. Note that the test can be associated with the test set through its relationships. The downside is that such a test would have to be created for each and every story that has UI features.

Another approach would be to create a downstream dependency from the test set to the story. This way it is possible to see all of the stories that depend upon the injection test NFR. However, this approach does not track the dependencies as they are satisfied.

The Pen Test NFR is related to a release – in fact, it would be related to all releases. So we would want to associate the pen testing test set for NFRs with the Agile Release Train. To do this we can add the actual tests as regression tests and generate test sets for the project from there. These test sets can then be configuration-managed, as each agile release train could have a different test set, but all would be part of the same regression test suite.

The planning level for the regression test suite should therefore be set to the value stream, while the planning level for the test set should be the agile release train. This is a unique relationship, as only one planning level can be defined at a time.

In fact, if we have a number of programme increments (PIs), the pen test set will grow through these PIs. In this case, hold off generating the test set until you are ready to run it, but add new regression tests to the regression test suite in preparation for that moment. The tests in the test sets can be configured later as need be.

In Summary

1: For a new NFR at the portfolio or value stream level, add it to the regression test suite. It goes in as a regression test and is used to either generate or be added to a test set. These test sets can be designed so that they are run at the release train level (pen tests) or at the story level (injection tests).

2: If they are to be run at the release train level, then the test set has its planning level set to the agile release train to show this.

3: If they are to be run at the story level, as part of the story's definition of done, then a test will need to be created in the story and linked to the test set manually, or a downstream dependency placed on the test set from each story that needs to demonstrate it has met the NFR constraint.

4: As a new standard of work is encountered, it is raised as a topic in the collaboration room and progressed in this manner. The definition of Done or Ready may well be impacted.

The above allows us to create and track NFRs at the various levels where they are needed.

Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.


The 7 Best DevOps Books

With the relative newness of DevOps, there are not yet a ton of DevOps books; some would argue that there are even fewer that are worth reading. That's why we've assembled a list of the 7 best DevOps books based on four criteria: the number of ratings on Amazon, the average Amazon rating, the number of ratings on GoodReads, and the average GoodReads rating. Both Amazon and GoodReads use a scale of 1 to 5 stars, with 5 stars being the best.

We did all the legwork digging through Amazon and GoodReads to determine how many reviews each book has as well as the average rating on each site so that you can quickly find the DevOps book that is just the right fit for your needs!

DevOps Books List

1. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

  • By Gene Kim, Kevin Behr, George Spafford
  • 4.6 Average Amazon rating (1,012 ratings)
  • 4.17 Average GoodReads rating (3,350 ratings)

Book Description:

Bill is an IT manager at Parts Unlimited. It’s Tuesday morning and on his drive into the office, Bill gets a call from the CEO.

The company’s new IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the project is massively over budget and very late. The CEO wants Bill to report directly to him and fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize workflow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.

2. What is DevOps?

  • By Mike Loukides
  • 3.7 Average Amazon rating (57 ratings)
  • 3.46 Average GoodReads rating (167 ratings)

Book Description:

Have we entered the age of NoOps infrastructures? Hardly. Old-style system administrators may be disappearing in the face of automation and cloud computing, but operations have become more significant than ever. As this O’Reilly Radar Report explains, we’re moving into a more complex arrangement known as “DevOps.”

Mike Loukides, O’Reilly’s VP of Content Strategy, provides an incisive look into this new world of operations, where IT specialists are becoming part of the development team. In an environment with thousands of servers, these specialists now write the code that maintains the infrastructure. Even applications that run in the cloud have to be resilient and fault tolerant, need to be monitored, and must adjust to huge swings in load. That was underscored by Amazon’s EBS outage last year.

From the discussions at O’Reilly’s Velocity Conference, it’s evident that many operations specialists are quickly adapting to the DevOps reality. But as a whole, the industry has just scratched the surface. This report tells you why.

3. Building a DevOps Culture

  • By Mandi Walls
  • 4.2 Average Amazon rating (20 ratings)
  • 3.20 Average GoodReads rating (108 ratings)

Book Description:

DevOps is as much about culture as it is about tools. When people talk about DevOps, they often emphasize configuration management systems, source code repositories, and other tools. But, as Mandi Walls explains in this Velocity report, DevOps is really about changing company culture—replacing traditional development and operations silos with collaborative teams of people from both camps. The DevOps movement has produced some efficient teams turning out better products faster. The tough part is initiating the change. This report outlines strategies for managers looking to go beyond tools to build a DevOps culture among their technical staff.

Topics include:

  • Documenting reasons for changing to DevOps before you commit
  • Defining meaningful and achievable goals
  • Finding a technical leader to be an evangelist, tools and process expert, and shepherd
  • Starting with a non-critical but substantial pilot project
  • Facilitating open communication among developers, QA engineers, marketers, and other professionals
  • Realigning your team’s responsibilities and incentives
  • Learning when to mediate disagreements and conflicts

Download this free report and learn how the DevOps approach can help you create a supportive team environment built on communication, respect, and trust.

4. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

  • By Jez Humble, David Farley
  • 4.4 Average Amazon rating (66 ratings)
  • Winner of the 2011 Jolt Excellence Award

Book Description:

Getting software released to users is often a painful, risky, and time-consuming process. This groundbreaking new book sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter the size of a project or the complexity of its code base.

Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance.

5. Next Gen DevOps: Creating the DevOps Organisation

  • By Grant Smith
  • 4.5 Average Amazon rating (2 Amazon ratings)
  • 4.00 Average GoodReads rating (3 GoodReads ratings)

Book Description:

A coherent and actionable DevOps framework is now available to businesses through a revolutionary new book, Next Gen DevOps: Creating the DevOps Organisation. Utilising nearly two decades’ experience at firms including AOL, Electronic Arts (EA) and British Gas’ Connected Homes, the book’s author and pioneer of the DevOps movement, Grant Smith, has distilled the essence of DevOps into an easily-implementable framework. Next Gen DevOps merges behaviour-driven development, infrastructure-as-code, automated testing, monitoring and continuous integration into a single coherent process. The book presents an implementation strategy that allows firms large or small, start-up or enterprise to embrace the move to DevOps.

By presenting a new way to look at the operations discipline, Next Gen DevOps challenges the old idea of a team languishing at the end of the software development lifecycle, forever context-switching between support tasks, security, data management, infrastructure and software deployment. Armed with the lessons learned from history and the Agile software development movement, combined with the latest in Software-as-a-Service (SaaS) solutions, cloud computing and automated testing, Next Gen DevOps sets out Grant's vision for IT in business' biggest evolution yet. "Every company is now an internet firm – and that means changes in the way we work," Grant Smith says. "It's time to drop the silos between our IT teams and work as organisations to improve and develop our products. Using simple theories and practices, Next Gen DevOps: Creating the DevOps Organisation offers a framework that can transform any internet company."

6. The IT Manager’s Guide to Continuous Delivery: Delivering Software in Days

  • By Andrew Phillips, Michiel Sens
  • 4.2 Average Amazon rating (2 Amazon ratings)

Book Description:

Turning good ideas into marketable software quickly is now a business imperative for every enterprise. Delivering software features faster and with high quality is the first critical step. The subsequent step is to rapidly collect feedback from users to guide the next set of ideas for further improvements. Critical software development objectives such as these set the stage for The IT Manager’s Guide to Continuous Delivery: Delivering Software in Days, Instead of Months.

The book champions the concept of Continuous Delivery in enabling organizations to build automated software delivery platforms for releasing high-quality applications faster. The book also presents how Continuous Delivery is a set of processes and practices that radically removes waste from the software production process and creates a rapid and effective feedback loop with end users.

7. Leading the Transformation: Applying Agile and DevOps Principles at Scale

  • By Gary Gruver, Tommy Mouser

Book Description:

Software is becoming more and more important across a broad range of industries, yet most technology executives struggle to deliver software improvements their businesses require.

Leading-edge companies like Amazon and Google are applying DevOps and Agile principles to deliver large software projects faster than anyone thought possible. But most executives don’t understand how to transform their current legacy systems and processes to scale these principles across their organizations.

Leading the Transformation is an executive guide, providing a clear framework for improving development and delivery. Instead of the traditional Agile and DevOps approaches that focus on improving the effectiveness of teams, this book targets the coordination of work across teams in large organizations—an improvement that executives are uniquely positioned to lead.

Conclusion

DevOps is an emerging methodology that is growing and changing quickly. This relative newness and rapid change make it difficult to find great DevOps books. I hope our list has made your search a little easier and that you have found some DevOps books you are interested in reading!

What are some other DevOps books you would add to the list?

 


How to Become a Software Craftsman

Software craftsmanship has become a huge subtext in the software community and the development conversation. One of the things that I've been exploring is how we get there. How do we go from where we are to becoming true software craftsmen?

It’s not this magical “Oh, we’re agile, we put posters of the manifesto everywhere, so now we’re agile and we’re software craftsmen.” It takes work, and it takes activities. I’ve been doing a lot of exploration into this and believe I have discovered three paths to the summit of software craftsmanship.

Software Craftsman Defined

Let's first define what I mean by a software craftsman. Everybody has their own views, but I think of a software craftsman as someone who has practiced the techniques of XP, agile, and DevOps until those techniques have worked themselves into the person's subconscious. This software craftsman creates software using these techniques almost through muscle memory. They no longer have to think about what they need to do to create beautiful code; they just execute in their relentless pursuit of creating amazing software products.

How You Get to be a Software Craftsman

So now that we have defined what being a software craftsman means to me, let's explore how we get there. I've found that there are three paths to becoming a software craftsman, each made up of software development skills that need to be developed. The first path develops people skills, the second develops technical skills, and the third explores the principles derived from the other two.

Let’s survey what each of these paths covers.

The People Path

One of the aspects that I've been exploring a lot is that there are both people problems and technical problems in DevOps and software craftsmanship. Of course, the technical skills are critical, but no more so than the people side. Just as it does a person no good to strengthen only their right arm while their left atrophies, it does us no good to strengthen only our technical skills while our people skills go to waste.

We must learn and apply technical tools from a people perspective. Craftsmanship means doing things by hand and knowing how to execute with an artist’s touch. It’s not enough to say, “I expect you to be doing test-driven development.” You have to be able to help people understand what test-driven development is.

Those of us in the software development community must help foster craftsmanship because it certainly isn’t being taught in school. The typical computer science college graduate does not understand how to do test-driven development or agile. We’re making some progress, but it’s still not good enough.

The Technical Path

There is also the technical side to keep in mind. Practices include test-driven development, refactoring and continuous integration. It is also important to know when and how to write acceptance tests, and how to apply and automate them. These are the nuts and bolts of a solid software craftsman. The idea of refactoring becomes part of daily life, not just using the buzzword but intuitively building it into everything you do.

These are the steps you’re going to continue with, and at some point that will lead you to DevOps and continuous delivery. These technical practices and methodologies are more organizational than individualized, but DevOps and continuous delivery do require a discipline that only a software craftsman can really, truly supply.

The Principles Path

I've found that you can break this down into certain areas of foundational skills that need to be developed. The first of those foundational skills is coding. It sounds kind of silly to say, but it's worth saying: a programmer needs to be excellent at coding.

Coding

By coding I don’t mean one language. You must be astute in multiple languages. No true craftsman only knows one way of doing anything. It doesn’t matter what languages, but there needs to be at least two, preferably more.

Design

Designing is an important aspect, but is tricky in the agile world because we say, “don’t get caught up in big, up front designing.” I believe that, but you do need to understand design. Whether it be unit design, large architectural pieces or systems design, you do need to understand and be able to apply good design.

Applying Agile Principles

Learning the 12 principles of the Agile Manifesto isn’t very difficult; applying them is much harder. You have to understand when they apply, when simplicity really is essential, what simplicity is and how to apply simplicity to a particular problem.

Tooling

We have many tools available to us. Mastering these tools as a true craftsman is not about simply using them, it’s about knowing how to use them wisely. Like the saying goes, “If you need a hammer, whatever tool is handy is a hammer.” That’s not necessarily the best approach. A craftsman seeks the right tool for the right job and uses that tool masterfully.

Work Habits

You need to establish strong work habits, and practice them not just by yourself but in a team. Test-driven development and continuous integration are tools to help us practice our work habits. Having the work habits, and the discipline around those work habits, to be able to say yes or no and to be able to say, "This is what I need to do, and it's what I will do" is crucial.

Professionalism

Professionalism is something you notice more often when it's not there than when it is. There is a quiet confidence and understanding in professionals, an ability to know where you're going and what you're doing without conscious thought. You have confidence in professionals because you know that they will do great work. That's what I mean by professionalism: it's very difficult to define, but it's absolutely vital to a strong development shop, especially as we aspire to craftsmanship.

Traditional Craftsman Education

To learn how to teach software craftsmanship, we have to look no further than the trades where craftsmanship originated. Traditionally, craftsmen were created through apprenticeships. By going through an apprenticeship program, young craftsmen, no matter what their background or education, learned not just what they should do but how to do it. They learned the tricks and techniques that don't necessarily come from reading a book, taking a class or passing a test. It's about learning from doing, and learning by doing things together.

The next component of how craftsmanship has been historically taught is recognizing progress. To recognize progress, you need a path to follow. It’s no secret that there’s no really well-defined career path for software developers.

The typical path of a developer is to start as a junior software engineer, progress to a senior software engineer and, if you’re really good, you become a tech lead. As a tech lead, you have to now tell other people how to do it. Then, if you’re really good at that, they take you completely out of the thing you love, which is programming, and make you a manager. Then, you get to try to figure out how to make other people do what you love to do most. That path has never really worked.

Micro-certifications are becoming very popular in the world of education and development. Think of micro-certifications as similar to Boy Scout badges. You could have a badge in test-driven development, concrete data systems or web design. By obtaining these badges, you can make progress visible and simultaneously be recognized for that progress. When taking such an approach, you should start associating some of your compensation and development programs with the earning of these badges.

When you are done with your apprenticeship, you are, of course, not done learning. At this stage, you grow into a journeyman, a very time-honored tradition. The idea of the journeyman is that you are now good enough to go out on your own. In the traditional craftsmanship model, a journeyman would wander from village to village practicing their craft.

In the software world, this might mean you work on a team for a year or maybe two. Then, you go to another team. The wandering part doesn’t have to be quite as frequent as the traditional journeyman, but the idea is that you need to continue to develop your skills and to develop them not in a single place but to explore other areas.

If you’ve been doing nothing but data mining for six months, then maybe for the next six months you should be focused on webpages so that you are building a broad base of skills. That’s what a journeyman’s life is. We should be spending the majority of our time as journeymen.

Conclusion

These are the steps. It’s not the easiest path in the world, but it’s absolutely worth it as you go along. We should all aspire to be great at our craft and be true craftsmen in our discipline. I hope this has inspired you to take a look at what areas you can develop to become a stronger software craftsman.

What other skills do you think are important for a software craftsman to develop?

About the Author

Steve Ropa
CSM, CSPO, Innovation Games Facilitator, SA
Agile Coach and Product Consultant, VersionOne

Steve has more than 25 years of experience in software development and 15 years of experience working with agile methods. Steve is passionate about bridging the gap between business and technology and nurturing change in the nature of development. As an agile coach and VersionOne product trainer, Steve has supported clients across multiple industry verticals, including telecommunications, network security, entertainment and education. A frequent presenter at agile events, he is also a member of the Agile Alliance and the Scrum Alliance.


10th Annual State of Agile Survey is Open!

It's hard to imagine that we're celebrating the 10th year of the State of Agile™ survey! The annual survey has become the largest, longest-running, and most comprehensive survey serving the agile community.

Last year nearly 4,000 of your peers shared what they’ve learned through their agile experiences. The survey gives agile software professionals around the world the opportunity to provide insights on a wide range of agile topics including the benefits of agile, top tips for scaling agile, how to measure agile success, and the most popular agile project management tools. This year we will be conducting an even deeper analysis of the trends over the past 10 years.

To help celebrate the 10th anniversary, VersionOne is giving away 10 Apple Watches in a random drawing in each of the 10 weeks that the survey is open. The survey, which takes about 10 minutes to complete, is open until Oct. 2. The full report will be available in early 2016.

For the past nine years, the results from the State of Agile survey have been helping organizations realize the benefits of agile faster, easier and smarter. Help the agile community make the 10th annual State of Agile survey the most valuable report yet!

State of Agile is a registered trademark of VersionOne Inc.


Three Leadership “Musts” for DevOps

 

A middle-of-the-night phone call is never a good thing – especially when the director of technology operations is on the other end. It was 2:00 a.m. in the summer of 2003 when I was abruptly awakened by my phone's vibration.

My nightmare started as the director of technology operations reported that the system was down with no resolution in sight.

A company system outage is comparable to cutting off blood flow to the brain. When the system is down, there’s no cash and the business starts to die. No matter the size or stature of a company, technology leaders constantly carry the fear that even the smallest system outage could seriously damage their work. While this fear is hidden deep inside the psyche, it’s a reality that all tech leaders learn to live with.

My system outage was no different from eBay’s in the summer of 1999. The eBay auction site suffered from a series of system outages – the longest outage lasting 22 hours.

That outage cost eBay $5 million of transaction revenue. This $5 million may sound like a lot, but, in reality, it was nothing compared to the $4 billion drop in the company’s market value as the result of the outage.

WHAT ABOUT NOW?

Almost two decades later, technology experts are still experiencing major complications in their systems.

In July 2015 alone, outages and system failures affected the New York Stock Exchange, United Airlines, the Department of State's visa system, Apple's iStore, and – most notably – the Royal Bank of Scotland's IT systems, where a half-million financial transactions vanished from the system due to an unknown error.

They say “time heals all wounds,” but system outages may be the exception to this rule, as effects can be severe.

“For the Fortune 1000, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion,” says Stephen Elliot, IDC Analyst. “The average cost of a critical application failure per hour is $500,000 to $1 million.”

We are still not immune to these outages and we must take great care in avoiding these issues, or risk losing time, money and business.

WHAT IS HAPPENING?

Today's systems are growing exponentially more complicated. Rising demand, volumes of aging data, patchworks of software, and network infrastructure each add to a system's complexity and complicate its deployments. IDC estimates that the average number of monthly deployments will double in two years.

To combat system failures, technology leaders must adjust to an era of instant consumption – the "have to have it now" era. The world is no longer satisfied with single massive system updates every six or 12 months. Rather, we need to "deploy on demand," where software can be updated several times per day with 100% resilience.

In other words, we need DevOps.

THE WAVE OF CHANGE IS HERE

Simply described, DevOps is the collaboration and communication between software developers and technology professionals in the IT value chain to deploy software to customers. Gene Kim, author of The Phoenix Project, refers to DevOps as "the outcome of applying Lean Principles to the IT value stream."

To achieve greatness, DevOps demands leadership vision and involvement. This requires sponsorship, so operational and cultural norms can change. It’s likely that your company will need to incorporate all of these changes to ensure long-term success.

DevOps is successful because it dramatically reduces a company’s operational risk by creating conditions that advance company culture, interactions, and tools.

Imagine a world where product, development, QA, infosec, and operations are orchestrating together to deliver business value at the fastest pace possible in an “IT value stream.” And fast execution isn’t the only benefit here – the process also has high predictability and low risk. This symphony of establishing a reliable flow across the organization – along with cultivating the right culture – is the foundation on which change can be made.

In May 2011, LinkedIn's valuation doubled to $9 billion on its second day after IPO. With the stock soaring and a flood of new users flocking to the professional social networking site, LinkedIn was Wall Street's golden child. Kevin Scott, LinkedIn's top engineer, didn't feel as confident. Scott knew that the system and its engineers were being crushed by the company's own technology infrastructure, which was inhibiting growth.

In a bold move, Scott launched Project InVersion, an initiative where all new feature development for LinkedIn stopped so that every engineer focused on rebuilding its core technology infrastructure. “You go public, have all the world looking at you, and then we tell management we’re not going to deliver anything new while all of engineering works on this project for the next two months,” Scott says. “It was a scary thing.” This work centered on LinkedIn’s ability to build out DevOps so that it could scale and accelerate while eliminating technical risks.

This resulted in extending the company's deployment capabilities so that it could deploy changes at a moment's notice, at any time of day. Further, it helped support the growth of LinkedIn's user base to over 364 million members and a market cap of $28 billion.

THE THREE LEADERSHIP MUST-HAVES

Gene Kim describes DevOps as a “philosophical movement.” And he’s right. As DevOps garners more attention, experts are deliberating its “best practices” and developing tools to support those practices.

To enable success, I have found there are three "musts" for leadership when launching a DevOps movement. These "musts" are based on the premise that DevOps requires disruptive leadership.

1. Executive Involvement

Leaders, including the CTO and the CEO, must work together to make DevOps a strategic priority. Just as soldiers, airplanes, satellites, and technology are strategic assets for the military, technology leaders need to utilize DevOps assets to achieve their goals. Leaders should engage with business counterparts when harnessing the strategic value of DevOps.

Successful DevOps transformations require executive participation and understanding. With DevOps' unification of the technology value stream, it becomes a unique strategic capability that enables faster innovation and faster time to market.

2. Organizational Design Focused on Agile Value Delivery

DevOps transformations are not simple. They are difficult and require creativity, which makes for a journey that not all people in your company are prepared to take.

Value Driven Organizations

The best way to confront this challenge is to develop a healthy organizational design. Separate organizational silos split by domain may be traditional; however, they are no longer effective. Many organizations, particularly those using agile, are experiencing success by building cross-functional teams. Each team creates work in segments of time, or "sprints," and each sprint results in the team delivering potentially shippable increments of work product. Moreover, place more emphasis on grouping teams to swarm on delivering shared objectives. This structure will have a powerful effect on your company's ability to collaborate and build business value.

This approach places more emphasis on teamwork. Teams design, build, and test together. And, throughout the development process, they actively coordinate with Technology Operations, InfoSec, and others to ensure that their work can be deployed.

Craftsmanship & Automation

Great DevOps companies make thoughtful and deliberate decisions to encourage great engineering craftsmanship. This craftsmanship ensures software is built with practices that produce a high-quality product. The practices we follow should focus on fast feedback about whether the code really works.

Today, practices like Test-Driven Development (TDD) are used to write tests before the code itself is written. By writing the code only after the tests are in place, developers produce code that, by definition, is already tested before it’s finished, thus reducing errors and increasing quality.
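
To make that rhythm concrete, here is a minimal sketch in Python; the file name, function, and discount rule are hypothetical, invented purely for illustration. The tests come first and fail on the first run; only then is just enough code added to make them pass.

# test_discount.py -- a TDD sketch (hypothetical example, runnable with pytest).
# In TDD these tests are written first; until the implementation below is
# added, the suite fails, which is the expected starting point.

def test_bulk_orders_get_ten_percent_discount():
    assert price_with_discount(unit_price=10.0, quantity=100) == 900.0

def test_small_orders_pay_full_price():
    assert price_with_discount(unit_price=10.0, quantity=5) == 50.0

# Written second: the simplest implementation that makes both tests pass.
def price_with_discount(unit_price: float, quantity: int) -> float:
    """Orders of 100 units or more receive a 10% discount."""
    total = unit_price * quantity
    return total * 0.9 if quantity >= 100 else total

Because every behavior here was demanded by a test that existed first, the code is covered by design rather than as an afterthought.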

Automation is another key element of the product development flow. Once testing is automated, a developer can exercise the code with a single click, and the system can test changes across thousands of developers’ contributions in a fraction of the time manual testing would take.
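
As a minimal sketch of what that gate might look like, the Python script below simply shells out to pytest and blocks on failure; the script name is an assumption, and a continuous integration server would run something equivalent on every commit.

# run_checks.py -- hypothetical CI gate: run the full test suite and
# report pass/fail, the way a build server would on each commit.
import subprocess
import sys

def run_test_suite() -> int:
    """Run the whole pytest suite; exit code 0 means every test passed."""
    result = subprocess.run([sys.executable, "-m", "pytest", "--quiet"])
    return result.returncode

if __name__ == "__main__":
    code = run_test_suite()
    print("All tests passed." if code == 0 else "Tests failed; change blocked.")
    sys.exit(code)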

3. Synchronized Product Planning and DevOps Planning

Several successful DevOps groups are also accelerating their delivery capabilities with support teams. Technology operations, infosec, architecture, and risk/compliance teams are often involved in product planning.

This results in a higher degree of coordination in the product development cycle. Aspects of security, scalability, and reliability are baked into the solution from the earliest stages of planning. Moreover, by tying release management practices together at the beginning, the organization’s ability to coordinate product delivery matures faster.

DevOps may seem like a lot of work, but technology leaders should consider it a smart business investment. Companies unwilling or unable to adapt will be left behind and trapped under the weight of their own antiquated practices. Those slow to react will not be able to compete due to limitations of deployment speed and resiliency. However, it’s the companies employing DevOps that will outmaneuver and outpace their competition, leaving others in the dust.

Stacey Louie is the CEO of Bratton & Company, a leading Agile transformation consultancy based in Silicon Valley. As an enterprise Agile coach, he was instrumental in PayPal’s 400-team global agile transformation, and he has supported other Fortune 500 companies including Cisco, Hewlett Packard, and eBay. He has also held division CTO/CIO positions at public companies including Verisk Analytics and Stewart Information Systems.

How to Collaborate in DevOps Software Development

DevOps Software Development is new to many organizations and figuring out how to best collaborate can be challenging. One of the recurring roadblocks experienced by the organizations we serve revolves around collaboration. What are some of the difficulties they face and how can DevOps address these to help deliver great software and build systems that scale and last?

At Blue Agility, we have been leading large-scale agile transformations to help our clients align business and IT, achieve faster time-to-market, and remain competitive in the current marketplace.

DevOps Software Development

Software development is an intensely collaborative process in which success depends on the ability to create, share, and integrate information at a very rapid pace. With globalization comes a growing need to foster highly productive software development teams that can operate successfully in a global market. Distance adds a further challenge, as fewer opportunities exist for rich interaction and direct communication occurs less frequently.

Virtual team collaboration is collaboration among teams that are not located in the same physical place. These teams may be on-site, near-shore, offshore, or a combination of the three.

Whether dealing with teams collaborating in the same location or virtual teams across multiple locations, collaboration is key to a successful DevOps transformation.

DevOps focuses on improving the principles of collaboration, including:

Voice of the Customer
Just in Time Requirements
Refinement
Social Interaction
Transparency
Demonstration
Fast Feedback

How to Collaborate

So how is collaboration best optimized within DevOps?

The key is to enable effective collaboration at the following three layers:

Team Collaboration: DevOps builds on the concept of small teams working together to achieve “great things.”

Team of Teams Collaboration: A group of teams working in cadence and synchronizing often.

Intent/Idea Collaboration: Alignment to ideas and concepts that have been identified, analyzed, and approved for delivery.

Given these challenges, tooling to support the development teams becomes critical. Whichever tool is selected, it must deliver transparent and effective collaboration across all three layers to be truly successful across the entire delivery life cycle.

Last Word

Ultimately, the improved collaboration afforded by DevOps Software Development leads to better reliability, more time to focus on the core business, faster time to market, and of course, happier clients.


DevOps Culture and the Informed Workspace

While the DevOps culture has been heavily focused on what tools to use, little thought has been given to what kind of workspace is needed. Ever since the early days of agile, the importance of an informative workspace has been understood. Many of the practices around working together, such as pair programming and the Onsite Customer from Extreme Programming, were meant to enable the rapid flow and visualization of how the team is doing. Other aspects, such as the Big Visual Chart, were included to keep the information flowing. We have made great progress in this category, but we still have more to do.

Now, fast forward to the DevOps culture. Much like the original agile movement, DevOps is a distinctive change in the world of work that involves both cultural and technical shifts. It’s not enough to have new tools like Puppet and Chef, or any of the other tools that make continuous delivery “a thing.” We need to think about how we plan our stories, and we need to include acceptance criteria that go beyond “is it done?” all the way to “is it staying done?”

We often run into this as agile consultants. I have often gone to work with a client and their number one concern as we are going through the engagement is “ok, and what do we do on day one of the sprint when we don’t have you here coaching us?” Now, they are fine on their own, but part of the plan is to know what to do after the training wheels are off. Let’s look at that same idea in terms of DevOps Culture. Stories have a very limited life. Once the product owner has accepted the story, we tear it up and throw it away, metaphorically speaking. But that isn’t the end of the story. The software now needs to live and breathe in the big wide world. How do we do that? DevOps is of course the answer, but what exactly does that mean?

Workspaces in the DevOps Culture

As mentioned earlier in this article, the idea of an informed workspace is a valuable tool for moving deeper and wider into the DevOps culture. Think back to one of the biggest cultural changes called for in the early adoption of agile: bringing QA into the room. We no longer treat QA as a separate team, but as part of the team. All of a sudden we are paying close attention to the number of tests that are passing, so part of our Big Visual Chart focuses on the pass/fail rate of our tests, not just the status of the story itself. This shift took a lot of effort, and if we are honest with ourselves, it’s not done yet. But that is an article for another time. We want to take a deeper look at the keys to successfully effecting a further transformation to the DevOps culture. What aspects will really help us do more than just have lots of automated builds that we call done, with no thought to what happens after?

The first step is to think about the cultural changes required. What will we need to change in our thinking to make DevOps more than just another buzzword at our shop? The first, and hardest, change is to stop thinking in terms of the “DevOps team.” The whole team is part of Dev and Ops; there is no wall to throw “finished” product over anymore. It’s all about creating great and long-lasting software. There are many steps we have already taken to get there, but this is one of the biggest. So let’s take a look at the different activities that really make a DevOps culture thrive.

DevOps Activities

Of course, the first thing one thinks about when discussing DevOps is the activities that support continuous delivery. This means an even higher need for all tests to be automated. Unit tests are merely the start, followed closely by automating your acceptance tests. Having these tests running continuously throughout the day is basically the cost of entry into the DevOps culture. Without a strong continuous integration server, running tests all day and every day, we just can’t be sure that what we are releasing is of a high enough quality to stay healthy in the real world. After that, the art of continuous deployment becomes an additional challenge. Orchestration tools are vital to make sure the right bits get bundled with the other right bits, and then get put where they belong. And then, since we are all part of “keeping the lights on,” we need monitoring tools to help us visualize whether our software really is behaving properly. So yes, there is a definite technical aspect to DevOps.

That’s a lot of moving parts! We need to keep track of where we are, and this leads to one of the cool parts of a true DevOps implementation. All those monitors the Ops folks get, with the fancy graphs and uptime charts? They come into the room with the Ops people, and we need to add to them. We are going to track story progress from idea to implementation, and then into the wild. My acceptance criteria are going to include things like “must have Nagios hooks” and “will use less than x% of CPU.” And now we have to live up to it. This means it is more important than ever to be able to visualize the entire flow. Our Big Visual Charts need to show us not just how the current iteration is going, but also the state of the build server, the state of the various builds, and where they are in any of the extended processes, such as UAT. And in the event of a failure anywhere along the line, or in post-production, we can follow a clear chain of events back to find the problem quickly.
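
To ground the “Nagios hooks” idea, here is a minimal sketch of what such a check might look like in Python; the thresholds are invented for illustration, psutil is assumed to be installed, and the exit codes follow the standard Nagios plugin convention (0 = OK, 1 = WARNING, 2 = CRITICAL).

# check_cpu.py -- illustrative Nagios-style check; thresholds are
# hypothetical and would come from the story's acceptance criteria.
import sys
import psutil

WARN_THRESHOLD = 75.0  # percent CPU that triggers a WARNING
CRIT_THRESHOLD = 90.0  # percent CPU that triggers a CRITICAL

def main() -> int:
    usage = psutil.cpu_percent(interval=1)  # sample CPU over one second
    if usage >= CRIT_THRESHOLD:
        print("CRITICAL - CPU at %.1f%%" % usage)
        return 2  # Nagios convention: CRITICAL
    if usage >= WARN_THRESHOLD:
        print("WARNING - CPU at %.1f%%" % usage)
        return 1  # WARNING
    print("OK - CPU at %.1f%%" % usage)
    return 0  # OK

if __name__ == "__main__":
    sys.exit(main())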

Conclusion

So now we see that, while DevOps is primarily a people problem, there are a lot of technical aspects that enable a strong DevOps culture. The key to success is the union of the people and technical aspects, which in a way makes DevOps a cyborg. To balance these two aspects, and to keep ourselves from burning countless hours and brain cells chasing down all of the moving parts, we need to focus on information. The more information we have at our fingertips, the more effective we will be. Each team will identify which information is most meaningful to them, and how best to interpret it. You can bet that this will be in live charts rather than stale reports. Orchestrating the entire flow of a story’s life, from inception to realization to retirement, will be much easier if we can visualize each step of the way. If this means our team room might start looking like the command center in WarGames, what’s so bad about that?

About the Author

Steve Ropa
CSM, CSPO, Innovation Games Facilitator, SA
Agile Coach and Product Consultant, VersionOne

Steve has more than 25 years of experience in software development and 15 years of experience working with agile methods. He is passionate about bridging the gap between business and technology and nurturing the change in the nature of development. As an agile coach and VersionOne product trainer, Steve has supported clients across multiple industry verticals, including telecommunications, network security, entertainment, and education. A frequent presenter at agile events, he is also a member of the Agile Alliance and the Scrum Alliance.


Five Tips for Improving Communication

Communication is the key to solving problems and successfully collaborating, but many of us still have difficulty communicating with particular team members. Why?

Because the words we use mean different things to different people in different contexts.

Matt Badgley, an agile product consultant at VersionOne, recently gave a presentation at Agile Day Atlanta about communication techniques you can use to solve problems and improve team meetings.

VersionOne: Why is it important to focus on the words we use?

Matt: We all know that collaboration is the key to success. Ultimately, solving a problem is generally done by people talking to each other and working things out. Solving problems often happens inadvertently, through conversations.

So that’s why communication is key, and communication is made up, of course, of verbal and nonverbal cues. This matters especially for the roles of product owner and ScrumMaster: if you’re not good at facilitating communication in those roles, you are not going to be successful.

When you actually talk about what words mean, you will find that certain words in certain organizations trigger emotions. They are bad words. They are basically four-letter words that are emotional for people. So you have to be aware of that. You will also find that there are some terms that mean one thing in one context and something totally different in another context. For example, epic is a word we use all the time in agile. And even the word project means different things, and it actually evokes different feelings in people.

VersionOne: In your presentation you shared some fun facts about communication – can you share those with us?

Matt: One of the most interesting statistics is that women speak roughly twenty thousand words per day on average, while men speak on average seven thousand words per day, and we all have around twenty-five thousand words in our active vocabulary.

Generally, we speak between one hundred and one hundred seventy-five words per minute, although we can listen to up to eight hundred. That is why we can often eavesdrop on other people’s conversations and gain insight. Our conscious minds can only process about forty bits of information per second, which includes colors and things like that. However, our subconscious mind, which handles things like our motor skills, processes around eleven million.

One last little fun fact: the word that has been shown through studies of the brain to be the most dangerous in the world is the word no – probably because we learn that word at a very early age and get our hands slapped. So if you say no in a conversation, that instantly turns the context of the conversation around, or changes the tone. This just goes to show that the actual words we use are often undervalued and can mean so much more.

VersionOne: What are some of the ways you suggest for people to solve that problem?

Matt: In my presentation I make five suggestions.

1) Don’t redefine the obvious.

For example, when talking about requirements, we often use the word feature or capability. The Scaled Agile Framework now refers to requirements as business epics or feature epics. You’ll hear different terms that people throw out simply to change the term. So be very deliberate about whether or not you actually need to change a word.

2) Be deliberate and intentional.

If you make the decision to change a term, be deliberate and intentional about using it. For example, the Spotify model uses the word squad rather than team. Squad makes you think of the military, or of a small group that is a subset of a sports team. A team is a bigger composition, but a squad is a smaller and more intentional group of people. By deliberately steering people to use that term, you attach an underlying meaning that goes beyond the word team.

3) Be aware of biases around a word.

Bias is a preconceived feeling around certain words. A funny one to use is the word ScrumMaster. The term master has some bias behind it, some predefined bias that people bring into the room with them. It’s not always perceived how it is meant to be, although ScrumMaster does actually mean the master of the scrum process, the sensei. At the end of the day, that bias can be dangerous. So be aware of the bias.

4) Use domain language.

Use the words the business already uses. This suggestion goes with number one, don’t redefine the obvious, but it also means you shouldn’t go out of your way to avoid a word that is native to your industry. Accept and embrace some of the acronyms associated with the industry. For example, in the agile industry we use the terms product owner and sprint, so embrace those kinds of words.

5) Use visual elements when defining a glossary.

It may sound strange to create a visual glossary, but the idea comes from how we learned words as kids. You learned the word apple because you saw a picture of an apple. Defining ways in which people can not only read a word but also visualize it helps things stick.

Check out these posts to learn more about how you can improve your communication by focusing on what words mean.
