
How Adobe turned operations into a service and built a service delivery platform in the cloud

This video is from Velocity Santa Clara in June 2013. Adobe’s Srinivas Peri and SimplifyOps’ Alex Honor discuss how a packaged software tools group turned itself into an internal provider of operations services. Featured in the presentation is CDOT, the platform they built out of open source tools like Rundeck, Chef, Jenkins, and Zabbix (and non-open source technologies like AWS, Splunk, and PagerDuty). But beyond the tools, this is an interesting story of learning how to shift a group’s mindset and figuring out what your “customers” want along the way. You can view the slides from this presentation here.

The post How Adobe turned operations into a service and built a service delivery platform in the cloud appeared first on dev2ops.

Will Sterling’s presentation on Rundeck at the April CLUE Meeting (Video)

Rundeck community member Will Sterling (from Datalogix) did a great presentation introducing Rundeck to the Colorado Linux Users and Enthusiasts meeting in Denver.

Alex Honor posted a helpful writeup on rundeck.org:

If you are new to Rundeck, watch Will Sterling give an introduction to what Rundeck can do and how he uses it to automate work at Datalogix.

Here are some notable quotes:

  • “Multi-tenant command orchestration and process automation with WebGUI, CLI, and RESTful API.”
  • “Target nodes with rich metadata. Never use hostnames again.”
  • “Process automation via multi-step jobs…Options allow users to pick one or more values.”
  • “Rundeck makes everything in the GUI available through the API.”

Besides showing off the basics, Will opened up Eclipse to step through Ruby code that talks to Puppet to feed node information to Rundeck. His code only includes nodes that he can ping and that have a certain class.
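
For flavor, here is a minimal sketch of the same idea (in Python rather than Will's Ruby, and with a made-up inventory format): build a node list for Rundeck from an inventory source, keeping only nodes that answer a ping and carry a required class. The file names, class name, and attribute layout are all assumptions, not Will's actual code.

```python
import json
import subprocess

def pingable(host):
    """Return True if the host answers a single ping (Linux ping flags)."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def rundeck_nodes(inventory_file, required_class):
    """Filter an inventory down to the nodes Rundeck should see.

    inventory_file is assumed to be JSON like:
      [{"name": "web01", "hostname": "web01.example.com",
        "classes": ["webserver"], "tags": ["prod"]}, ...]
    """
    with open(inventory_file) as f:
        inventory = json.load(f)
    return {
        node["name"]: {
            "hostname": node["hostname"],
            "tags": ",".join(node.get("tags", [])),
            "username": "rundeck",
        }
        for node in inventory
        if required_class in node.get("classes", [])
        and pingable(node["hostname"])
    }

if __name__ == "__main__":
    # Emit the node list; Rundeck can consume a resource document like this
    # via a resource model source (the exact format depends on your setup).
    print(json.dumps(rundeck_nodes("nodes.json", "webserver"), indent=2))
```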

Will also showed how he uses resty as a nice shell-based way to access the Rundeck API.

The post Will Sterling’s presentation on Rundeck at the April CLUE Meeting (Video) appeared first on dev2ops.

Integrating DevOps tools into a Service Delivery Platform (VIDEO)

The ecosystem of open source DevOps-friendly tools has experienced explosive growth in the past few years. There are so many great tools out there that finding the right one for a particular use case has become quite easy.

As the old problem of a lack of tooling fades into the distance, the new problem of tool integration is becoming more apparent. Deployment tools, configuration management tools, build tools, repository tools, monitoring tools: by design, most of the popular modern tools in our space are point solutions.

But DevOps problems are, by definition, fundamentally lifecycle problems. Getting from business idea to running features in a customer-facing environment requires coordinating actions, artifacts, and knowledge across a variety of those point-solution tools. If you are going to break down the problematic silos and get through that lifecycle as rapidly and reliably as possible, you will need a way to integrate those point-solution tools.
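
To make the integration burden concrete, here is a small hypothetical example of the kind of glue code this implies: a post-build step in a build tool such as Jenkins that hands off deployment to an orchestration tool such as Rundeck over its HTTP API. The server URL, API version, job ID, and option names are illustrative placeholders, not a prescription.

```python
"""Hypothetical Jenkins post-build step: kick off a Rundeck deployment job."""
import json
import os
import urllib.request

RUNDECK_URL = "https://rundeck.example.com"      # placeholder server
JOB_ID = "c07518ef-b697-4b3a-a30d-1ffe9d6fbf2e"  # placeholder job UUID

def trigger_deploy(artifact_version):
    """Ask Rundeck to run the deploy job for a freshly built artifact."""
    body = json.dumps(
        {"options": {"version": artifact_version}}).encode("utf-8")
    req = urllib.request.Request(
        "%s/api/14/job/%s/run" % (RUNDECK_URL, JOB_ID),
        data=body,
        headers={
            # Rundeck accepts token authentication via this header.
            "X-Rundeck-Auth-Token": os.environ["RUNDECK_TOKEN"],
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        execution = json.load(resp)
    print("started execution", execution.get("id"))

if __name__ == "__main__":
    # Jenkins exposes build metadata as environment variables.
    trigger_deploy(os.environ.get("BUILD_NUMBER", "dev"))
```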

The classic solution approach was for a single vendor to sell you a pre-integrated suite of tools. Today, these monolithic solutions have been largely rejected by the DevOps community in favor of a collection of open source tools that can be swapped out as requirements change. Unfortunately, this also means that the burden of integration has fallen to the individual users. Even with the scriptable and API-driven nature of these modern open source tools, this isn’t a trivial task. Try as the industry might to standardize, every organization has varying requirements and makes varying technology decisions, making a one-size-fits-all implementation a practical impossibility (which is also why the classic monolithic tool approach achieved, on average, mixed results at best).

DTO Solutions has made a name for itself by helping its clients sort out requirements and build toolchains that integrate open source (and closed source) tools to automate the full Development to Operations lifecycle. Through that work, a series of design patterns and best practices have proven themselves useful and repeatable across companies and environments of varying sizes and types. Over time, these design patterns and best practices have been formalized into what DTO calls a “Service Delivery Platform”.

I recently sat down with my colleague at DTO Solutions, Anthony Shortland, to have him walk me through the Service Delivery Platform concept.

In this video, Anthony covers:

  • The “quadrant” approach to thinking about the problem
  • The elements of the service delivery platform
  • The roles of various tools in the service delivery platform (with examples)
  • The importance of integrating both infrastructure provisioning and application deployment (especially in Cloud environments)
  • The standardized lifecycle for both infrastructure and applications

Below the video is a larger version of the generic diagram Anthony is explaining. Below that is an example of a recent implementation of the design (along with the tool and process choices for that specific project).

The post Integrating DevOps tools into a Service Delivery Platform (VIDEO) appeared first on dev2ops.

Using Rundeck and Chef to Build DevOps Toolchains at #ChefConf 2012 (VIDEO)

I presented at #ChefConf 2012 in Burlingame last Thursday on using Rundeck and Chef to Build DevOps toolchains.

The heart of the presentation was a demonstration of continuous build and deployment showing Adam Jacob's chef-rundeck plugin working as a Rundeck resource model source (node provider) and jobs using knife and the Chef server API to manage databag-based application configuration.
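
As a rough sketch of what that pattern looks like (not the actual demo code), a Rundeck job step could shell out to knife to rewrite the data bag item that holds an application's configuration. The data bag name, item layout, and version attribute below are assumptions; it presumes knife is already installed and configured against your Chef server.

```python
"""Sketch of a Rundeck job step that updates app config in a Chef data bag."""
import json
import subprocess
import sys
import tempfile

def set_app_version(app, version):
    # Fetch the current data bag item as JSON via the Chef server API.
    raw = subprocess.check_output(
        ["knife", "data", "bag", "show", "apps", app, "-F", "json"])
    item = json.loads(raw)

    # Update the attribute the deploy recipes read (hypothetical layout).
    item["version"] = version

    # Write the item back; 'knife data bag from file' uploads it.
    with tempfile.NamedTemporaryFile(
            mode="w", suffix=".json", delete=False) as f:
        json.dump(item, f)
        path = f.name
    subprocess.check_call(
        ["knife", "data", "bag", "from", "file", "apps", path])

if __name__ == "__main__":
    # e.g. invoked from a Rundeck job with app and version as job options
    set_app_version(sys.argv[1], sys.argv[2])
```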

At the process level, the presentation connects the dots between service delivery platform design and loosely coupled toolchains.

Despite the hotel-wide power outage in the middle of the presentation, the video crew recovered nicely! Below you will find the video and the slides.

The post Using Rundeck and Chef to Build DevOps Toolchains at #ChefConf 2012 (VIDEO) appeared first on dev2ops.

DevOps Days Mountain View 2011: Orchestration at Scale (Video)

Panel on orchestration and distributed management at DevOps Days Mountain View 2011.

John Vincent (Noah)
Alex Honor (DTO Solutions - RunDeck)
Michael Hale (Heroku)
Yan Pujante (LinkedIn)
James Turnbull (PuppetLabs - mcollective)

Moderator: John Willis (DTO Solutions)

See all videos from DevOps Days Mountain View 2011

DevOps Days Mountain View:
http://devopsdays.org/events/2011-mountainview/

Special thanks to LinkedIn for hosting DevOps Days Mountain View 2011.

Also, thank you to the sponsors:
AppDynamics  DTO Solutions  Google  MaestroDev  New Relic  Nolio
O'Reilly Media  PagerDuty  Puppet Labs  Reactor8  Splunk  StreamStep
ThoughtWorks  Usenix

 

DevOps Days Mountain View 2011: To Package or Not to Package (Video)

Lively and sometimes contentious panel at DevOps Days Mountain View 2011: "To Package or Not to Package - Cutting Edge Software Distribution".

Jordan Sissel (Loggly)
Joshua Timberman (Opscode)
Phil Hollenback (Yahoo Mail)
Noah Campbell (DTO Solutions)

Moderator: Kris Buytaert (Inuits)

See all videos from DevOps Days Mountain View 2011

DevOps Days Mountain View:
http://devopsdays.org/events/2011-mountainview/

Special thanks to LinkedIn for hosting DevOps Days Mountain View 2011.

Also, thank you to the sponsors:
AppDynamics  DTO Solutions  Google  MaestroDev  New Relic  Nolio
O'Reilly Media  PagerDuty  Puppet Labs  Reactor8  Splunk  StreamStep
ThoughtWorks  Usenix

 

Kohsuke Kawaguchi presents Jenkins to Silicon Valley DevOps Meetup (Video)

Kohsuke Kawaguchi stopped by the Silicon Valley DevOps Meetup on June 7, 2011 to give an in-depth tour of Jenkins. Kohsuke is the founder of both the Hudson and the Jenkins open source projects and now works for CloudBees.

Kohsuke's presentation covered not only the Jenkins basics but also more advanced topics like distributed builds in the cloud, the matrix project, and build promotion. Video and slides are below.

Once again, thanks to Box.net for hosting the event!

Criteria for Fully Automated Provisioning

"Done" is one of those interesting words. Everyone knows what it means in the abstract sense. However, look at how much effort has to go into getting developers to agree that done really does mean 100% done (no testing, docs, formatting, acceptance, etc. left to do).

"Fully" is similarly an interesting word. I can't tell you how many times I've encountered a a situation where someone says that they've "fully automated" their deployments. Then when they walk me through the steps involved with a typical deployment it's full of just-in-time hand-editing of scripts, copying and pasting, fetching of artifacts, manual "finishing" or "verification" steps, and things of that nature. Even worse, if you ask two different people to walk you through the same process you might get two completely different versions of "fully" definitely not meaning "fully".

Just as Agile developers use the mantra "done means done", operations needs the mantra "fully automated means fully automated". Without a clear definition of what "fully automated" means, it's going to be difficult to come up with any kind of consensus around solutions.

As part of the original "Web Ops 2.0: Achieving Fully Automated Provisioning" whitepaper, we listed criteria for "Fully Automated Provisioning". I've taken that content and posted it to the new DevOps Toolchain project. Hopefully it will spur some discussion on what "fully automated" actually means.

Here's the initial list of criteria:

1. Be able to automatically provision an entire environment -- from "bare-metal" to running business services -- completely from specification

Starting with bare metal (or stock virtual machine images), can you provide a specification to your provisioning tools and the tools will in turn automatically deploy, configure, and startup your entire system and application stack? This means not leaving runtime decisions or "hand-tweaking" for the operator. The specification may vary from release to release or be broken down into individual parts provided to specific tools, but the calls to the tools and the automation itself should not vary from release to release (barring a significant architectural change).
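
A toy illustration of what "completely from specification" can look like, with entirely hypothetical spec fields and tool commands standing in for whatever your provisioning toolchain actually provides:

```python
"""Toy illustration of specification-driven provisioning.

A minimal sketch, not a definitive implementation: the spec format and
the provision-os/apply-config/start-service/smoke-test commands are
hypothetical stand-ins for your own toolchain (kickstart, Chef, Puppet, etc.).
"""
import json
import subprocess

def provision(spec_file):
    """Drive a full provisioning run from a versioned specification.

    The operator supplies only the spec; no runtime decisions or
    hand-tweaking happen here, so the same call works release after release.
    """
    with open(spec_file) as f:
        spec = json.load(f)

    for node in spec["nodes"]:
        # 1. Lay down the base OS / stock image.
        subprocess.check_call(["provision-os", node["name"], node["image"]])
        # 2. Converge system and application configuration from the spec.
        subprocess.check_call(["apply-config", node["name"], node["role"],
                               "--release", spec["release"]])

    # 3. Start business services and verify, again purely from the spec.
    for service in spec["services"]:
        subprocess.check_call(["start-service", service["name"]])
        subprocess.check_call(["smoke-test", service["health_url"]])

if __name__ == "__main__":
    provision("release-2.4.json")  # the only input that varies per release
```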

2. No direct management of individual boxes

This is as much a cultural change as it is a question of tooling. Access to individual machines for purposes other than diagnostics or performance analysis should be highly frowned upon and strictly controlled. All deployments, updates, and fixes must be deployed only through specification-driven provisioning tools that in turn manage each individual server to achieve the desired result.

3. Be able to revert to a "previously known good" state at any time

Many web operations teams lack the capability to roll back to a "previously known good" state. Once an upgrade process has begun, they are forced to push forward and firefight until they reach a functionally acceptable state. With fully automated provisioning you should be able to supply your provisioning system with a previously known good specification that will automatically return your applications to a functionally acceptable state. The most successful rollback strategy is what can be described as "rolling forward to a previous version". Database issues are generally the primary complication with any rollback strategy, but it is rare to find a situation where a workable strategy can't be achieved.
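
Here is a minimal sketch of the "rolling forward to a previous version" idea, assuming specifications are versioned in git (one tag per release) and a hypothetical provision command like the one sketched under criterion 1:

```python
"""Roll "forward" to a previously known good release.

A sketch under stated assumptions: specs live in a git repository,
release.json is the spec file, and 'provision' is a hypothetical
entry point to the fully automated provisioning cycle.
"""
import subprocess

def roll_forward_to(known_good_tag):
    # Check out the spec exactly as it was for the known good release.
    subprocess.check_call(
        ["git", "checkout", known_good_tag, "--", "release.json"])
    # Re-run the normal provisioning cycle against the old spec; rollback
    # is just another fully automated deployment, not an in-place undo.
    subprocess.check_call(["provision", "release.json"])

if __name__ == "__main__":
    roll_forward_to("release-2.3")  # hypothetical known good tag
```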

4. It’s easier to re-provision than it is to repair

This is a litmus test. If your automation is implemented correctly, you will find it is easier to re-provision your applications than it is to attempt to repair them in place. “Re-provisioning” could simply mean an automated cycle of validating and regenerating application and system configurations or it could mean a full provisioning cycle from the base OS up to running business applications.

5. Anyone on your team with minimal domain specific knowledge can deploy or update an environment

You don't always want your most junior staff to be handling provisioning, but with a fully automated provisioning system they should be able to do just that. Once your domain-specific experts collaborate on the specification for that release, anyone familiar with a few basic commands (and having the correct security permissions) should be able to deploy that release to any integrated development, test, or production environment.

 

DevOps Toolchain project announced at O’Reilly’s Velocity online conference

If you are the type who gets distracted at work while trying to stay plugged into the industry, yesterday was a big big problem.  In Austin, you had SXSW going on; in San Francisco, you had OSBC; in San Jose you had Cloud Connect; and on the internet you had the O'Reilly Velocity Online Conference.  Wow!

The dev2ops guys were busy.  Damon and Alex were presenting at Cloud Connect while I was presenting at Velocity OLC.  I'm an Austin resident, but SXSW really isn't the DevOps hang-out, at least yet! (heh). 

At Velocity, it was my privilege to announce the next generation of the provisioning toolchain project. Some of the feedback we received on the original toolchain paper came from the front lines of DevOps: "yeah, that's pretty interesting, but there is a lot more to a datacenter than just provisioning". Good point.

So we scope-creeped the hell out of the automated provisioning paper and started the devops-toolchain project, dedicated to defining best practices in DevOps and the open source tools available to accomplish those practices.

 

So this time, the devops-toolchain project is an open source, community-driven project, which by its nature will need to be revved frequently given the constantly shifting nature of "best practices". We've kick-started some of the content at http://code.google.com/p/devops-toolchain/ and formed a Google Group for discussion at http://groups.google.com/group/devops-toolchain. Come join the conversation!

Here are the slides from my presentation:

 

 

The Velocity team did a great job hosting the conference! An example of the great content presented comes from Ward Spangenberg of Zynga, who updated us on the latest in security for cloud deployments. Getting security worked out gets more compute into the cloud:

 

I'm an OSBC alumnus. If you're into vintage conferences or need a way to get over insomnia, check this out from 2007...