Improving Flow: Fix the Handoffs to Remove Your Worst Bottlenecks

Minimizing time to market and getting faster feedback from customers are primary concerns for businesses that want to stay competitive. You need to be able to go from a business idea to a customer-facing running service as quickly, reliably, and effortlessly as possible. Think of this as a flow of work that crosses many organizational silos.

Where does this flow often bog down? Handoffs. Whether the handoffs are within a team (e.g. Dev to Dev) or between teams (e.g. Dev to Ops), there is always the need to pass work from one stage of the lifecycle to the next.


At DTO Solutions, our clients are often already aware that they have flow problems when they ask us for help. When we use techniques like Value-Stream Mapping to learn how the work flows, handoff problems are prominent forms of waste that jump off the page. The diagram below uses pie charts to highlight the relative time lost due to difficult handoffs during the product life cycle.

What are common reasons for difficult handoffs?

  • Conversations, email, multitudinous wikis, spreadsheets, and trouble ticket systems are used to describe, in human language, how to process work. Words are open to interpretation, and documents often lag behind current operating procedure. Imagine being the person planning or performing the work who has to chase that information across all of these tools.
  • Software product artifacts differ between stages of the process. Sometimes software resides in a directory on a file share and other times it’s a TAR file. The software handoff may contain the same bits, but must be handled or converted by the downstream stage of the software delivery process.
  • Work can be considered “done” yet be unfinished or in a non-working state. The lack of a test or other means to verify that the work was done correctly often leads to products that are not ready for the next person down the line. This can leave the person in the downstream stage with what is essentially scrap that has to be rejected or redone.
  • Ad hoc procedures or loose scripting often lead to different approaches and implementations for what should be standard operating procedure. This can lead to silo-specific utilities with different levels of quality and testing.

Handoff problems affect organizations both big and small. Obviously, one answer is to minimize the number of handoffs. But if you are in an organization larger than just a handful of people, that just isn’t a realistic option. To decrease time to market and enable fast feedback, you are going to have to roll up your sleeves and solve the handoff problems.

Where are good places to start making handoffs smoother?

Here are a few of the top fixes that we find important for solving handoff problems at their source:

  • Consistent packaging
    • The most direct way to simplify software handoffs between Dev and Ops is to use a common system package format like RPM or Debian packages. Using a system package format also aligns application deployment and system provisioning practices.
  • Encapsulated procedures
    • Rather than loose scripts or team-specific utilities, choose a framework that enables modular automation. A modular approach results in a shared toolbox of utilities and captured process.
  • Converting information flows into artifact flows
    • Rather than relying on human-readable text as the product for the downstream process to handle, formalize it as an automation artifact and build on the idea of encapsulated procedures.
  • Procedure verification tests
    • Verification testing should not be dominated by manual checks described in text documents. Building on the idea of converting information into artifacts, implement verification using a test automation framework. Most apps have some level of testing to verify functionality; build on that to verify that an operations procedure (e.g., a software deployment) was successful by executing an automated test, as sketched below.
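
To make that last fix concrete, here is a minimal sketch of an automated post-deployment verification test. The service URL, health endpoint, and response shape are hypothetical stand-ins for whatever your deployment procedure actually produces; the point is that the deploy job ends by running a check, not by a human reading a checklist.

```python
#!/usr/bin/env python
"""Hypothetical post-deployment verification test.

Assumes the deployed service exposes a /health endpoint returning
JSON like {"status": "ok", "version": "1.4.2"}; adapt to your stack.
"""
import json
import sys
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical endpoint
EXPECTED_VERSION = "1.4.2"                    # would come from the deploy job


def verify_deployment(url, expected_version):
    """Return True if the service is up and running the expected build."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            health = json.load(resp)
    except OSError as err:
        print("FAIL: could not reach service: %s" % err)
        return False

    if health.get("status") != "ok":
        print("FAIL: service reports status %r" % health.get("status"))
        return False
    if health.get("version") != expected_version:
        print("FAIL: expected version %s, found %s"
              % (expected_version, health.get("version")))
        return False

    print("PASS: deployment verified")
    return True


if __name__ == "__main__":
    sys.exit(0 if verify_deployment(SERVICE_URL, EXPECTED_VERSION) else 1)
```

Wired in as the last step of a deployment job, a test like this turns “done” from an opinion into a checked property of the handoff.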

In subsequent posts, we’ll address each one of these fixes.

 


Integrating DevOps tools into a Service Delivery Platform (VIDEO)

The ecosystem of open source DevOps-friendly tools has experienced explosive growth in the past few years. There are so many great tools out there that finding the right one for a particular use case has become quite easy.

As the old problem of a lack of tooling fades into the distance, the new problem of tool integration is becoming more apparent. Deployment tools, configuration management tools, build tools, repository tools, monitoring tools: by design, most of the popular modern tools in our space are point solutions.


But DevOps problems are, by definition, fundamentally lifecycle problems. Getting from business idea to running features in a customer-facing environment requires coordinating actions, artifacts, and knowledge across a variety of those point solutions. If you are going to break down the problematic silos and get through that lifecycle as rapidly and reliably as possible, you will need a way to integrate those point-solution tools.

 

The classic solution approach was for a single vendor to sell you a pre-integrated suite of tools. Today, these monolithic solutions have been largely rejected by the DevOps community in favor of a collection of open source tools that can be swapped out as requirements change. Unfortunately, this also means that the burden of integration has fallen to the individual users. Even with the scriptable and API-driven nature of these modern open source tools, this isn’t a trivial task. Try as the industry might to standardize, every organization has varying requirements and makes varying technology decisions, making a one-size-fits-all implementation a practical impossibility (which is also why the classic monolithic tool approach achieved, on average, mixed results at best).
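
To give a feel for that integration burden, here is a minimal sketch of the kind of glue code that chains two point tools together through their HTTP APIs. Every URL, job name, and payload field below is an invented stand-in; real tools such as Jenkins, Rundeck, or Chef each have their own APIs and authentication to deal with.

```python
"""Hypothetical glue between two point tools: trigger a build, then
hand the resulting artifact to a deployment tool. All endpoints and
payload shapes are invented for illustration."""
import json
import urllib.request

BUILD_TOOL = "http://build.example.com/api"    # hypothetical CI server
DEPLOY_TOOL = "http://deploy.example.com/api"  # hypothetical deploy tool


def post_json(url, payload):
    """POST a JSON payload and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)


# 1. Ask the build tool for a fresh build of the application.
build = post_json(BUILD_TOOL + "/jobs/myapp/build", {"branch": "main"})

# 2. Hand the resulting artifact to the deployment tool.
deploy = post_json(DEPLOY_TOOL + "/deployments", {
    "artifact": build["artifact_url"],  # assumes the build API reports this
    "environment": "staging",
})
print("deployment id:", deploy["id"])
```

Multiply that by every tool pair, environment, and failure mode in the lifecycle and the appeal of a deliberate platform design, rather than a pile of ad hoc glue, becomes clear.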

DTO Solutions has made a name for itself by helping its clients sort out requirements and build toolchains that integrate open source (and closed source) tools to automate the full Development-to-Operations lifecycle. Through that work, a series of design patterns and best practices have proven to be useful and repeatable across a variety of sizes and types of companies and environments. Over time, these design patterns and best practices have been formalized into what DTO calls a “Service Delivery Platform”.

I recently sat down with my colleague at DTO Solutions, Anthony Shortland, to have him walk me through the Service Delivery Platform concept.

In this video, Anthony covers:

  • The “quadrant” approach to thinking about the problem
  • The elements of the service delivery platform
  • The roles of various tools in the service delivery platform (with examples)
  • The importance of integrating both infrastructure provisioning and application deployment (especially in Cloud environments)
  • The standardized lifecycle for both infrastructure and applications

Below the video is a larger version of the generic diagram Anthony is explaining. Below that is an example of a recent implementation of the design (along with the tool and process choices for that specific project).

 

 


Using Rundeck and Chef to Build DevOps Toolchains at #ChefConf 2012 (VIDEO)

I presented at #ChefConf 2012 in Burlingame last Thursday on using Rundeck and Chef to Build DevOps toolchains.

The heart of the presentation was a demonstration of continuous build and deployment, showing Adam Jacob's chef-rundeck plugin working as a Rundeck resource model source (node provider) and jobs using knife and the Chef server API to manage data bag-based application configuration.
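
As a rough illustration of the data bag-driven configuration shown in the demo, here is a minimal sketch of a job step that pushes an updated application data bag item to the Chef server via the knife CLI. The bag name, item fields, and artifact URL are hypothetical; the actual jobs and data bags are shown in the presentation.

```python
"""Hypothetical job step: upload an application's Chef data bag item
with the knife CLI. Bag name and item contents are invented."""
import json
import subprocess
import tempfile


def push_app_config(bag, item):
    """Write the item to a temp JSON file and upload it with knife."""
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(item, f)
        path = f.name
    # "knife data bag from file BAG FILE" uploads an item to the Chef server.
    subprocess.run(["knife", "data", "bag", "from", "file", bag, path],
                   check=True)


push_app_config("myapp", {
    "id": "deploy",  # data bag items require an "id" field
    "version": "1.4.2",  # hypothetical release being promoted
    "artifact_url": "http://repo.example.com/myapp-1.4.2.war",
})
```

A Rundeck job wrapping a step like this gives operators a pushbutton, audited way to change application configuration without touching the Chef server directly.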

At the process level, the presentation connects the dots between service delivery platform design and the loosely-coupled toolchains.

Despite the hotel-wide power outage in the middle of the presentation, the video crew recovered nicely! Below you will find the video and the slides.

High Velocity Release Management with Alex Honor and Betsy Hearnsberger (VIDEO)

This one is for the managers out there who straddle the Dev and Ops divide.

Alex Honor and Betsy Hearnsberger have seen the importance of release management change dramatically over the past decade. Through their collective experiences working inside organizations like E*TRADE, Ask.com, NASA Ames, and Zynga (as well as Alex's subsequent consulting work at DTO Solutions), they've each amassed a wealth of experience and insight into dealing with high-velocity release engineering in large-scale, complex organizations.

Since their professional paths have crossed multiple times, I figured I'd get Alex and Betsy together in front of a whiteboard for a chat. In these videos they talk about the common challenges they see, advice for managers addressing these issues, solution approaches that work, and criteria for tool selection.

Please note that these videos were originally shot on July 29, 2011. Due to a technical problem, it was thought that these videos were lost. Lucky for us, they have been fully recovered. I'll have to get them both on camera again soon to discuss how their thinking has evolved since then.

 

Part 1: Common problems

 

Part 2: Management's approach to the problems

 

Part 3: Solution patterns and tool selection

 


Kanban and DevOps Roundtable (Video)

Ok so it's more of a semi-circle than a roundtable... I was at the first ever Kanban for DevOps class this past week in Sunnyvale, CA and after looking around the room I couldn't let these folks go without getting them on video:

  • Luke Kanies (Puppet Labs)
  • John Willis (Enstratus)
  • Gene Kim (Author)
  • Dominica DeGrandis (David J. Anderson & Associates)

Lucky for our readers, they didn't disappoint. We talk about why we think Kanban is an excellent tool for solving DevOps flow problems and our Kanban experiences thus far. 

Here is the video:

 

Update: If you are in the Atlanta area, John Willis has started the Atlanta Limited WIP Society!



DevOps Lessons from Lean: Small Batches Improve Flow

Update: This and other related topics will be in the upcoming DevOps Cookbook.


DevOps problems are fundamentally flow problems. Work doesn’t flow properly from one end of the lifecycle (Dev) to the other end of the lifecycle (Ops).

While spirited discussions on tools are a regular occurrence in DevOps circles, there are other simple, yet profound, techniques that have nothing to do with technology but have proven to have a huge impact on improving flow.

Top of that list? Work in small batches.

It seems so simple that it couldn’t possibly make that big of a difference, but it does. And there is historical precedent for it as well. The principle of working in small batches has proved its merit in Agile software development and, on an even larger stage, during the manufacturing revolutions of the 1970s and 1980s.

The reasons why working in small batches has such a strong net positive impact on flow might seem a bit counterintuitive at first. In the absence of relying on “because I told you so”, below are the best explanations I could find as to why this works.

 

What is a “batch size”?

A batch is the unit of work that passes from one stage to the next stage in a process. The batch size is the scale of that work product.

 

What are the benefits of reducing batch sizes?

Reduces cycle time and gets you quicker feedback - With a small batch size, each batch makes it through the full lifecycle quicker. Since work on a feature isn’t complete until it is successfully running in production and getting feedback from users, large batch sizes simply delay that feedback. This means the larger the batch the longer you wait to find out if you did it right. It’s easier to make business and technical decisions and easier to recover from a mistake if you are working on shorter time horizons.

Reduces risk of an error or outage - With a small batch size, you are reducing the amount of complexity that has to be dealt with at any one time by the people working on the batch. The reduction in complexity comes not only from the number and size of the moving parts that are touched while working on the batch, but also in the amount of person-to-person communication that needs to happen (due to smaller teams). This is just acknowledging the natural limitations of human beings. The more complexity people have to deal with, the more mistakes there will be. Smaller batch size also leads to quicker feedback, so if there is an error in the batch it will be caught sooner. A small batch size lends itself well to quicker problem detection and resolution (the field of focus in addressing the problem can be contained to the footprint of that small batch and the work that is still fresh in everyone’s mind).

Reduces product risk - This builds on the idea of faster feedback. The sooner you can put an individual feature in front of your target audience, the sooner you will know if you’ve achieved the right product and market fit. The larger the batch size, the greater the product risk when you finally release that batch. Probability shows us that it’s beneficial to decompose a large risk into a series of small risks. For example, bet all of your money on a single coin flip and you have a 50% chance of losing all of your money. Break that bet into 4 smaller sequential bets and it would take 4 consecutive losses to result in financial ruin (a 1 in 16, or 6.25%, chance of losing all of your money).

Large batch sizes also often lead to compounding schedule delays and cost overruns. The larger the batch, the more likely it is that a mistake was made in estimating or during the work itself. The chance and potential impact of these mistakes compound as the batch size grows… increasing the delay in getting that all-important feedback from the users and increasing your product risk.
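
A quick sketch generalizes the coin-flip arithmetic above: splitting one all-or-nothing bet into n independent smaller bets shrinks the odds of total ruin geometrically.

```python
# Probability of total ruin when one all-or-nothing bet is split into
# n independent smaller bets, each lost with probability p: all n must
# be lost, so the probability is p ** n.
def ruin_probability(p, n):
    return p ** n

print(ruin_probability(0.5, 1))  # 0.5    -> the single coin flip
print(ruin_probability(0.5, 4))  # 0.0625 -> 1 in 16, as above
```

Releases work the same way: many small bets on small batches beat one large bet on a big-bang release.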

Improves efficiency and lowers overhead - Conventional wisdom holds that large batches allow greater productivity (i.e., you get more done in large uninterrupted periods of work) and lower overhead (fewer batches = lower transaction costs). As has been proven in the manufacturing world (Lean) and now in software development (Agile), this simply isn’t the case. The larger the scope of the batch, the more complexity the individual has to deal with. The complexity of a debug task grows as 2ⁿ when n things are changed in one batch. In knowledge work, a larger uninterrupted period of work leads to greater change complexity, a greater volume of debug work, and more handoff complexity. That is all added overhead. But even assuming the individual was still being more efficient by working in a large batch, you would still be creating greater inefficiency for the end-to-end process.

For a large batch of changes, especially those made to an even larger system, the handoff to the next step in the process is going to be highly inefficient for the receiving party to deal with (think: Development to Operations “toss it over the wall” handoff of a major release). And if something goes wrong, the time between when the error was introduced and when it is discovered is so long that it is no longer fresh in the mind of the person who introduced the error. Small batches have also been proven to actually reduce transaction costs because of a curious fact of human nature… people get better at, and find ways to increasingly improve, the things they are forced to do more often.
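
The 2ⁿ debugging claim above is just subset counting: if n changes went into a batch and something breaks, any non-empty combination of those changes could be the culprit, so the space of candidate explanations roughly doubles with every change added. A toy illustration:

```python
from itertools import chain, combinations


def candidate_culprits(changes):
    """Every non-empty subset of a batch's changes could explain a failure."""
    n = len(changes)
    return list(chain.from_iterable(
        combinations(changes, k) for k in range(1, n + 1)))


print(len(candidate_culprits(["change-1"])))                         # 1
print(len(candidate_culprits(["change-%d" % i for i in range(5)])))  # 31 (2**5 - 1)
```

A one-change batch has exactly one suspect; a five-change batch has thirty-one.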

Improves management visibility and control - Reducing batch sizes gives you a greater number of instrumentation points by which you can visualize and measure the flow of work through your organization. It’s notoriously difficult to accurately determine the progress of in-flight work. You are largely limited to the subjective analysis of project managers and the biased opinion of the person doing the work. The only points where you can have certainty are when the work has just started and when the work has just been completed (and accepted by the next step in the process). With large batch sizes you have to wait long periods of time between those start and completion points, making it difficult to see how things are flowing, providing little guarantee that you will have adequate warning if things are going wrong, and allowing for few opportunities to make adjustments to optimize or triage. With small batch sizes you can see work move through the lifecycle with certainty, spot problems early, and make ongoing adjustments to optimize the flow of delivery.

Encourages decoupled architectures with fewer dependency issues - Smaller batch sizes can also have a positive impact on architecture. Most IT systems are built within the context of large projects. Large projects create them, and then large projects are undertaken to change them. The result is a built-in tolerance for monolithic architectures with complex dependencies. As you move to small batch sizes you naturally limit the work in progress on a particular segment of your code and infrastructure. While initially this might seem like it will slow the organization down, the principles of flow show that it will actually give you greater throughput over time. And to speed things up even further, you will end up looking for ways to increasingly decouple and isolate (including making fault tolerant) your architecture to allow for greater parallelization of work.

 

What are the economic benefits of reducing batch size?
In manufacturing and in software development, reducing batch sizes has been shown to have a significant impact on the economics of the production process. The diagram below (scanned from Donald G. Reinertsen’s “The Principles of Product Development Flow”, p. 121) lays out the direct links between smaller batch sizes and improved economics. I think the logic speaks for itself.

 

 

What are your control points for reducing batch sizes?
Reducing batch sizes is a policy decision that needs to be implemented at multiple levels:

Project Initiation and Funding - How projects are formed and funded tends to have a strong correlation to batch size. The definition of requirements and success criteria, in addition to the allocation of budget, is usually done in a large batch that corresponds to a specific business goal, or set of goals, created at the quarterly or yearly scale. The inertia of this large batch is often carried throughout the rest of the lifecycle, becoming a pacemaker of sorts that encourages large batch sizes. Work done to break down these large initial batches into smaller batches can turn that inertia into a net positive effect for the company. Reducing the time horizon for the expected results of a project is usually a good way to force the issue (e.g., try scoping and budgeting projects to single-month size rather than quarter or multi-quarter size).

Project management - When creating projects, consider the smallest amount of change that can be undertaken in the shortest amount of time while still achieving a measurable result. This will naturally lead to smaller teams working on smaller batches of work that can flow independently through the lifecycle with faster feedback and lower risk to the overall system.

Testing - Demand that individual pieces of work are tested as soon as those pieces of work are completed (don’t wait for the entire project/release to be code complete). Continuous integration, with its built-in unit and smoke tests, is a crude example of this principle. Carry that further. Ensure that full deployment and testing efforts are ongoing during any project. This will automatically force engineers to think about their work in small units that can be completed and handed off for testing at regular intervals (naturally creating the urge to reduce batch sizes).

Release management - Break down large releases into small units of deployment that employ standardized packaging and configuration management mechanisms. These units of deployment should be aligned to the things that are changed (i.e., application services) rather than to large project releases that change many things. In addition to reducing deployment and configuration woes, this also has the effect of standardizing batch sizing across the lifecycle by determining the appropriate unit of change for your infrastructure. A rough sketch of what such per-service units might look like follows.
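
Here is a minimal, hypothetical illustration of per-service units of deployment: each application service gets its own versioned package and can move through the pipeline as its own small batch. The service names and package versions are invented.

```python
# Hypothetical per-service deployment units: each service is packaged and
# versioned independently rather than bundled into one big project release.
deployment_units = [
    {"service": "web-frontend", "package": "web-frontend-2.3.1.rpm"},
    {"service": "auth-api", "package": "auth-api-1.9.0.rpm"},
    {"service": "billing-api", "package": "billing-api-4.0.2.rpm"},
]

# A change to one service yields one small batch: one package, one
# deployment, one verification run, independent of the other services.
for unit in deployment_units:
    print("deploy %(package)s -> %(service)s" % unit)
```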

 

I’m standing on the shoulders of people a lot smarter than me in this post. If you are interested in these ideas, please check out:
http://www.amazon.com/Principles-Product-Development-Flow-Generation/dp/1935401009/ref=cm_cr_pr_product_top
http://www.startuplessonslearned.com/2009/02/work-in-small-batches.html
http://www.dbrmfg.co.nz/Production%20Batch%20Issues.htm
http://www.informit.com/articles/article.aspx?p=1833567&seqNum=3

 
