
Category: chef

Delivering Delight At ChefConf 2014

Chef CEO Barry Crist stirs the pot to open ChefConf 2014

The theme of ChefConf 2014 is stirring delight, and if you listened to Barry Crist during today’s opening keynote, you would know it’s more appropriately “F****ing Delight”. Amidst high energy, loud music and a smattering of exciting expletives, Barry Crist called for attendees to stir delight in their enterprises using DevOps methodologies and, of course, Chef. Crist defined delight as the moment when the customer’s experience exceeds expectations. Using Uber as an example, he joked that when traveling in New York the influx of Uber cars was so high that he waited only 30 seconds for a ride. And what’s more, he didn’t have to deal with payment when exiting the car at the airport. In short, the service not only provided the expected functionality of something as simple as a car ride, but provided a delightful experience. The new economy we are part of, he said, is the “delight economy”. And the call to action for...

The post Delivering Delight At ChefConf 2014 appeared first on DevOps.com.

Microsoft bridges the gap between Azure and DevOps

DevOps can be easy for a startup. Many of the concepts and principles of DevOps come quite naturally to a fresh company just getting started. It’s a different story, however, for large, established enterprises trying to wrap their arms around this DevOps thing. For IT admins working in Microsoft’s Azure cloud platform, though, things just got a lot easier thanks to the integration of Chef and Puppet Labs. Many organizations are heavily invested in the Microsoft ecosystem. IT teams and administrators are familiar with Windows domains, Active Directory, Hyper-V, and Azure. They know how to write scripts and leverage PowerShell to get things done, but they’re much less familiar with the core principles of DevOps, or common DevOps automation tools like Chef and Puppet Labs. One of the driving forces of DevOps is the ability—or perhaps necessity—to automate those tasks that can be automated so IT resources are freed up to focus on more important tasks. Deploying and configuring virtual servers and applications is necessary, but it can also be...

The post Microsoft bridges the gap between Azure and DevOps appeared first on DevOps.com.

Microsoft’s DevOps Gambit and DevOps.com’s Business Directory

Last week was a big one for Microsoft in DevOps. At the Build conference they unveiled several new features and functionality in both Azure and Visual Studio Online to give them more DevOps chops and make them more DevOps friendly. At just about the same time they announced partnerships and integrations with both Chef and Puppet Labs to bring even more DevOps to Azure. That is an awful lot of DevOps functionality in a short period of time. Frankly, though, it is to be expected. In doing my research for DevOps.com prior to launch, I had a chance to speak to some folks at Microsoft about DevOps. Frankly, they were frustrated. They thought they had a great story around DevOps but had not been able to attract the DevOps community to what they had available. Microsoft, much like IBM, at one time owned the developer market. In fact, the sheer number of developers who still develop in Visual Studio and code for .Net and other Microsoft platforms is pretty amazing. Now you...

The post Microsoft’s DevOps Gambit and DevOps.com’s Business Directory appeared first on DevOps.com.

Config Management & F***ing Shell Scripts

Joke tech projects pop up all the time. Some, like the lamentable titstare, should never have seen the light of day. Others, like Mark Zuckerberg’s Facemash, start out as a joke and grow into something that is anything but. Most come and go without anyone really noticing. Whilst their lives are typically short, we often learn more from people’s reactions to these projects than we do from their use of them. An excellent example of this popped up recently when a couple of frustrated Chef and Ansible users came up with a simple configuration management alternative, the regrettably named F***ing Shell Scripts (FSS). As they explain it: “Why can’t we just use f***ing shell scripts?”. Whilst usability issues with configuration management tools – particularly in the Enterprise – are a topic close to my heart, I would not normally have had much more than a passing interest in FSS. What grabbed my attention was the reaction it elicited when posted to Hacker News. With over 150 comments FSS...

Is DevOps a Title?

So, I’ve wanted to write on this topic for a while as I think it deserves a little attention! I’ve heard numerous times that you shouldn’t have DevOps in your title, or that job reqs shouldn’t be for “DevOps Engineers”. This came up again at the DevOps State of the Union event that we hosted in Boston recently. There were definitely some very vocal folks saying that it just didn’t make sense to look for a DevOps engineer or hire a DevOps engineer. These folks make the following points:

- DevOps is a methodology, not a job description – we don’t call our developers Agile engineers; they are just developers. They happen to follow the Agile methodology, but their job description isn’t to do Agile, it’s to create awesome products.

- DevOps should permeate an organization – having a DevOps group sort of misses the point of DevOps. DevOps should be embedded into the fabric of the company, not an adjunct to the development process. If it isn’t embedded how...

Docker as a framework for your DevOps culture

If you’re like me and spend a lot of time evaluating new technologies with the goal of “doing more faster” with your engineering organization, then you’re certainly aware of the many choices of DevOps tools like Puppet, Chef, Ansible, Salt, etc. In my opinion each of these tools is amazing at one piece of the DevOps puzzle – configuration management, and some are OK at other DevOps tasks. Each of the tools has a slightly different approach to configuration management but after comparing them I find that you’re usually picking a tool based on stylistic reasons like the DSL, existing recipes/playbooks, or existing familiarity. While I have my opinion on which of the above tools is “the best”, each tool only solves the configuration management piece of the puzzle. If DevOps is truly a culture change and not just the codification of operational tasks then we need tools that help foster that culture and I think Docker goes a long way towards doing that. Coming from the perspective of a...

Bootstrapping Chef (or Whatever) for Autoscaled EC2 Instances

I realize it is traditional to start a new blog with some background and a deep introspection as to the author’s personal motivation for writing said blog, but I’ve never been one for tradition. Thus, for my first official DevOps post, I think I’ll jump right in with a technical tutorial on a problem I had to solve last summer that I haven’t seen well documented anywhere else. You can read my bio for more, but here’s what you can expect on HackOps: I’m an industry analyst (in security; I’m the CEO of Securosis), but one with a bad habit of giving technical talks at DEFCON. In other words, a mix of research and analysis at both a technical and executive level.

The Problem

With that, let’s start with the technical: Last summer I was putting together a demonstration for the Black Hat conference when I ran into a little roadblock. I wanted to launch instances and have them automatically connect to a Chef server and pull down a default...

Ops is dead, long live DevOps

Yes, that’s a bold statement, but if the vibe at the recent AWS re:Invent conference was any indication, there are a number of methodologies, technologies, and companies working to make that a reality. This article isn’t focused on whether this is a good or bad thing (my personal belief is that ops will always be a critical part of the IT delivery process). Regardless, it is critical for IT operations personnel to understand the implications of this tectonic shift. As Marc Andreessen, founder of the venture capital firm Andreessen Horowitz, has said, “software is eating the world.” As a result, ops professionals must have the ability to automate tasks that used to be managed manually. Whether or not you think that is the death of ops as we know it, I will leave to you, but it does mean we will live in a world where DevOps will offer distinct advantages. Before we dive into what this change means to ops personnel, let’s review the underlying assumptions. The fundamental driver...

Command-line cookbook dependency solving with knife exec

Note: This article was originally published in 2011. In response to demand, I've updated it for 2014! Enjoy! SNS

Imagine you have a fairly complicated infrastructure with a large number of nodes and roles. Suppose you have a requirement to take one of the nodes and rebuild it in an entirely new network, perhaps even for a completely different organization. This should be easy, right? We have our infrastructure in the form of code. However, our current infrastructure has hundreds of uploaded cookbooks - how do we know the minimum ones to download and move over? We need to find out from a node exactly what cookbooks are needed for that node to be built.

The obvious place to start is with the node itself:

$ knife node show controller
Node Name:   controller
Environment: _default
FQDN:        controller
IP:          182.13.194.41
Run List:    role[base], recipe[apt::cacher], role[pxe_server]
Roles:       pxe_server, base
Recipes:     apt::cacher, pxe_dust::server, dhcp, dhcp::config
Platform:    ubuntu 10.04

OK, this tells us we need the apt, pxe_dust and dhcp cookbooks. But what about them - do they have any dependencies? How could we find out? Well, dependencies are specified in two places - in the cookbook metadata, and in the individual recipes. Here's a primitive way to illustrate this:

bash-3.2$ for c in apt pxe_dust dhcp
> do
> grep -iER 'include_recipe|^depends' $c/* | cut -d '"' -f 2 | sort | uniq
> done
apt::cacher-client
apache2
pxe_dust::server
tftp
tftp::server
utils
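
Incidentally, here's what the two kinds of declaration being grepped for look like. This is a sketch of my own - representative lines rather than excerpts from the real cookbooks:

# metadata.rb - a dependency declared in cookbook metadata
depends 'apt'

# recipes/server.rb - a dependency created by including another recipe
include_recipe 'apt::cacher-client'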

As I said, that grep approach is primitive. However, the problem doesn't end here. In order to be sure, we now need to repeat this for each dependency, recursively. And of course it would be nice to present them more attractively. Thinking about it, it would be rather useful to know what cookbook versions are in use too. This is definitely not a job for a shell one-liner - is there a better way?

As it happens, there is. Think about it - the Chef server already needs to solve these dependencies to know what cookbooks to push to API clients. Can we access this logic? Of course we can - clients carry out all their interactions with the Chef server via the API. This means we can let the server solve the dependencies and query it via the API ourselves.

Chef provides two powerful ways to access the API without having to write a RESTful client. The first, Shef, is an interactive REPL based on IRB, which when launched gives access to the Chef server. This isn't trivial to use. The second, much simpler way is the knife exec subcommand. This allows you to write Ruby scripts or simple one-liners that are executed in the context of a fully configured Chef API Client using the knife configuration file.
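
For a taste of how lightweight this is, here's a one-liner of my own (not from the original article) that prints the name of every node registered with the Chef server; nodes is one of the helpers available in the knife exec context:

knife exec -E 'nodes.all { |n| puts n.name }'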

Now, since I wrote this article, back in summer 2011, the API has changed, which means that my original method no longer works. Additionally, we are now served by at least two local dependency solvers, in the form of Berkshelf (whose dependency solver, 'solve', is now available as an individual Gem) and Librarian-chef. In this updated version, I'll show how to use the new Chef server API to perform the same function. Berkshelf and Librarian solve a slightly different problem, in that in this instance we're trying to solve dependencies for a node, so for the purposes of this article I'll consider them out of scope.

For historical purposes, here's the original solution:

knife exec -E '(api.get "nodes/controller/cookbooks").each { |cb| pp cb[0] => cb[1].version }'

The /nodes/NODE_NAME/cookbooks endpoint returns the cookbook attributes, definitions, libraries and recipes that are required for this node. The response is a hash of cookbook name and Chef::CookbookVersion object. We simply iterate over each one, and pretty print the cookbook name and the version.

Let's give it a try:

$ knife exec -E '(api.get "nodes/controller/cookbooks").each { |cb| pp cb[0] => cb[1].version }'
{"apt"=>"1.1.1"}
{"tftp"=>"0.1.0"}
{"apache2"=>"0.99.3"}
{"dhcp"=>"0.1.0"}
{"utils"=>"0.9.5"}
{"pxe_dust"=>"1.1.0"}

The current way to solve dependencies using the Chef server API resides under the environments endpoint. This makes sense, if you think of environments as a way to define and constrain version numbers for a given set of nodes. This means that constructing the API call and handling the results is slightly more than can easily be comprehended in a one-liner, which gives us the opportunity to demonstrate the use of knife exec with a script on the filesystem.

First let's create the script:

USAGE = "knife exec script.rb NODE_NAME"

def usage_and_exit
  STDERR.puts USAGE
  exit 1
end

node_name = ARGV[2]

usage_and_exit unless node_name

node = api.get("nodes/#{node_name}")
run_list_expansion = node.expand!("server")

cookbook_solution = api.post("environments/#{node.chef_environment}/cookbook_versions",
                            :run_list => run_list_expansion.recipes)

cookbook_solution.each do |name, cb|
  puts name + " => " + cb.version
end

exit

The way knife exec scripts work is to pass the arguments following knife to Ruby as the ARGV special variable, which is an array of each space-separated argument. This allows us to produce a slightly more general solution, to which we can pass the name of the node we want to solve for. The usage handling is obvious - we print the usage to stderr if the command is called without a node name.

The meat of the script is the API call. First we get the node object (from ARGV[2], i.e. the node name we passed to the script) from the Chef server. Next we expand the run list - that is, we check for and expand any run lists contained in roles. Then we call the API to provide us with cookbook versions for the specified node in the environment in which the node currently resides, passing in the recipes from the expanded run list. Finally we iterate over the cookbooks we get back, and print the name and version of each.

Note that this script could easily be modified to solve for a different environment, which would be handy if we wanted to confirm what versions we'd get were we to move the node to a different environment; a sketch of that modification follows the sample run below. Let's give it a whirl:

$ knife exec src/knife-cookbook-solve/solve.rb asl-dev-1
chef_handler => 1.1.4
minitest-handler => 0.1.3
base => 0.0.2
hosts => 0.0.1
yum => 2.3.0
tmux => 1.1.1
ssh => 0.0.6
fail2ban => 1.2.2
users => 2.0.6
security => 0.1.0
sudo => 2.0.4
atalanta-users => 0.0.2
community_users => 1.5.1
sudoersd => 0.0.2
build-essential => 1.4.2
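
As promised, here's a sketch of the variant that solves against an arbitrary environment. Treat it as illustrative - I haven't run it - and note that it assumes the target environment already exists on the server. The optional second argument falls back to the node's own environment:

USAGE = "knife exec solve.rb NODE_NAME [ENVIRONMENT]"

def usage_and_exit
  STDERR.puts USAGE
  exit 1
end

node_name   = ARGV[2]
environment = ARGV[3] # optional: solve as if the node lived in this environment

usage_and_exit unless node_name

node = api.get("nodes/#{node_name}")
run_list_expansion = node.expand!("server")

# Fall back to the environment the node currently resides in
environment ||= node.chef_environment

cookbook_solution = api.post("environments/#{environment}/cookbook_versions",
                             :run_list => run_list_expansion.recipes)

cookbook_solution.each do |name, cb|
  puts name + " => " + cb.version
end

exit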

To conclude as the original article did... Nifty! :)

Using Test Doubles in ChefSpec

One of the improvements to ChefSpec in the 3.0 release is the ability to extend test coverage to execute blocks. Doing so requires the infrastructure developer to stub out shell commands run as part of the idempotence check. This is pretty simple, as ChefSpec provides a macro to stub shell commands. However, doing so where the idempotence check uses a Ruby block is slightly more involved. In this article I explain how to do both.

Quick overview of ChefSpec

ChefSpec is a powerful and flexible unit testing utility for Chef recipes. Extending the popular Ruby testing tool, Rspec, it allows the developer to make assertions about the Chef resources declared and used in recipe code. The key concept here is that of Chef resources. When we write Chef code to build infrastructure, we're using Chef's domain-specific language to declare resources - abstractions representing the components we need to build and configure. I'm not going to provide a from-the-basics tutorial here, but dive in with a simple example. Here's a test that asserts that our default recipe will use the package resource to install OpenJDK.

require 'chefspec'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end

  end
end

If we run this first, we'll see something like this:

$ rspec -fd spec/default_spec.rb 

stubs-and-doubles

================================================================================
Recipe Compile Error
================================================================================

Chef::Exceptions::RecipeNotFound
--------------------------------
could not find recipe default for cookbook stubs-and-doubles

  installs OpenJDK (FAILED - 1)

Failures:

  1) stubs-and-doubles installs OpenJDK
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     Chef::Exceptions::RecipeNotFound:
       could not find recipe default for cookbook stubs-and-doubles
     # ./spec/default_spec.rb:10:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:13:in `block (3 levels) in <top (required)>'

Finished in 0.01163 seconds
1 example, 1 failure

Failed examples:

rspec ./spec/default_spec.rb:12 # stubs-and-doubles installs OpenJDK

This is reasonable - I've not even written the recipe yet. Once I add the default recipe, such as:

package 'java-1.7.0-openjdk'

Now the test passes:

$ rspec -fd spec/default_spec.rb 


stubs-and-doubles
  installs OpenJDK

Finished in 0.01215 seconds
1 example, 0 failures

Test Doubles

ChefSpec works by running a fake Chef run, and checking that the resources were called with the correct parameters. Behind the scenes, your cookbooks are loaded, but instead of performing real actions on the system, the Chef Resource class is modified such that messages are sent to ChefSpec instead. One of the key principles of Chef is that resources should be idempotent - the action should only be taken if required, and it's safe to rerun the resource. In most cases, the Chef provider knows how to guarantee this - it knows how to check that a package was installed, or that a directory was created. However, if we use an execute resource - a resource where we're calling directly to the underlying operating system - Chef has no way to tell if the command we called did the right thing. Unless we explicitly tell Chef how to check, it will just run the command again and again. This causes a headache for ChefSpec, because it doesn't have a built-in mechanism for faking operating system calls - so when it comes across a guard, it requires us to help it out by stubbing the command.
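
As a minimal illustration of such a guard (the command and file here are hypothetical, not taken from any real cookbook):

# Without the not_if guard, Chef would re-run this command on every converge.
execute 'extract-application-archive' do
  command 'tar xzf /tmp/app.tar.gz -C /opt/app'
  not_if 'test -e /opt/app/bin/start.sh' # guard: skip if already extracted
end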

This introduces some testing vocabulary - vocabulary that is worth stating explicitly, for the avoidance of confusion. I'm a fan of the approach described in Gerard Meszaros' 2007 book xUnit Test Patterns: Refactoring Test Code, and this is the terminology used by Rspec. Let's itemise a quick glossary:

  • System Under Test (SUT) - this is the thing we're actually testing. In our case, we're testing the resources in a Chef recipe. Note we're explicitly not testing the operating system.
  • Depended-on Component (DOC) - usually our SUT has some external dependency - a database, a third-party API, or in our case, the operating system.
  • Test Double - when unit testing, we don't want to make real calls to the DOC. It's slow, it can introduce unwanted variables into our systems, and if the DOC becomes unavailable our tests won't run, or will fail. Instead we want to interact with something that represents the DOC. The family of approaches used to implement this abstraction is commonly referred to as Test Doubles.
  • Stubbing - when our SUT depends on some input from the DOC, we need to be able to control that input. A typical approach is to stub the method that makes a call to the DOC, typically returning some canned data (a minimal Rspec illustration follows this list).
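
Since the vocabulary is easier to absorb with code in front of you, here's a minimal, ChefSpec-free Rspec sketch; the weather service is entirely hypothetical:

require 'rspec'

describe 'a test stub in plain Rspec' do
  it 'returns canned data from a test double' do
    # The double stands in for the DOC (a hypothetical weather service).
    weather = double('weather-service')
    # Stub the method our SUT would call, returning canned data.
    allow(weather).to receive(:today).and_return('sunny')
    expect(weather.today).to eq('sunny')
  end
end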

Let's look at a real example. The community runit cookbook, when run on a RHEL derivative, will, by default, build an RPM and install it. The code to accomplish this looks like this:

bash 'rhel_build_install' do
  user 'root'
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
tar xzf runit-2.1.1.tar.gz
cd runit-2.1.1
./build.sh
rpm_root_dir=`rpm --eval '%{_rpmdir}'`
rpm -ivh '/root/rpmbuild/RPMS/runit-2.1.1.rpm'
EOH
  action :run
  not_if rpm_installed
end

Observe the guard - not_if rpm_installed. Earlier in the recipe, that command string is defined as:

rpm_installed = "rpm -qa | grep -q '^runit'"

ChefSpec can't handle direct OS calls, and so if we include the runit cookbook in our recipe, we'll get an error. Let's start by writing a simple test that asserts that we include the runit recipe. I'm going to use Berkshelf as my dependency solver, which means I need to add a dependency to my cookbook metadata, and supply a Berksfile that tells Berkshelf to check the metadata for dependencies. I also need to add Berkshelf support to my test.
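
For reference, a minimal sketch of those two files might look like this (the source URL below is an assumption - the exact location varies by Berkshelf version):

# metadata.rb
name    'stubs-and-doubles'
depends 'runit'

# Berksfile
source 'https://api.berkshelf.com'
metadata

My test now looks like this: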

require 'chefspec'
require 'chefspec/berkshelf'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }

    it 'includes the runit recipe' do
      expect(chef_run).to include_recipe 'runit'
    end

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end

  end
end

And my recipe like this:

include_recipe 'runit'
package 'java-1.7.0-openjdk'

Now, when I run the test, ChefSpec complains:

1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     ChefSpec::Error::CommandNotStubbed:
       Executing a real command is disabled. Unregistered command: `command("rpm -qa | grep -q '^runit'")`

       You can stub this command with:

         stub_command("rpm -qa | grep -q '^runit'").and_return(...)
     # ./spec/default_spec.rb:11:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:14:in `block (3 levels) in <top (required)>'

ChefSpec tells us exactly what we need to do, but let's unpack it a little, using the vocabulary from above. The SUT, our stubs-and-doubles cookbook, has a dependency on the operating system - the DOC. This means we need to be able to insert a test double of the operating system, specifically a test stub, which will provide a canned answer to our rpm command. ChefSpec makes it very easy for us by providing a macro that does exactly this. We need to run this before every example, so we can put it in a before block. The new test now looks like this:

require 'chefspec'
require 'chefspec/berkshelf'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    before(:each) do
      stub_command("rpm -qa | grep -q '^runit'").and_return(true)
    end

    let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }

    it 'includes the runit recipe' do
      expect(chef_run).to include_recipe 'runit'
    end

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end

  end
end

Now when we run the test, it passes:

$ rspec -fd spec/default_spec.rb 

stubs-and-doubles
  includes the runit recipe
  installs OpenJDK

Finished in 0.57793 seconds
2 examples, 0 failures

That's all fine and dandy, but suppose we execute some Ruby for our guard instead of a shell command. Here's an example from one of my cookbooks, in which I set the correct SELinux policy to allow Apache to proxy to a locally running Netty server:

unless (node['platform'] == 'Amazon' or node['web_proxy']['selinux'] == 'Disabled')
  execute 'Allow Apache Network Connection in SELinux' do
    command '/usr/sbin/setsebool -P httpd_can_network_connect 1'
    not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }
    notifies :restart, 'service[httpd]'
  end
end

Now, OK, I could have used grep, but I prefer this approach, and it's a good enough example to illustrate how we handle this kind of case in ChefSpec. First, let's write a test:

it 'sets the Selinux policy to allow proxying to localhost' do
  expect(chef_run).to run_execute('Allow Apache Network Connection in SELinux')
  resource = chef_run.execute('Allow Apache Network Connection in SELinux')
  expect(resource).to notify('service[httpd]').to(:restart)
end

If we were to run this, ChefSpec would complain that we didn't have an execute resource with a run action on our run list. So we then add the execute block from above to the default recipe. I'm going to omit the platform check for simplicity, and just include the execute resource. We're also going to need to define an httpd service. Of course we're never going to actually run this code, so I'm not fussed that the service exists despite us never installing Apache. My concern in this article is to teach you about the testing, not write a trivial and pointless cookbook.

Now our recipe looks like this:

include_recipe 'runit'
package 'java-1.7.0-openjdk'

service 'httpd'

execute 'Allow Apache Network Connection in SELinux' do
  command '/usr/sbin/setsebool -P httpd_can_network_connect 1'
  not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }
  notifies :restart, 'service[httpd]'
end

When we run the test, we'd expect all to be fine. We're asserting that there's an execute resource, that runs, and that it notifies the httpd service to restart. However, this is what we see:

Failures:

  1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     Errno::ENOENT:
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:23:in `block (3 levels) in <top (required)>'

  2) stubs-and-doubles installs OpenJDK
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     Errno::ENOENT:
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:27:in `block (3 levels) in <top (required)>'

  3) stubs-and-doubles sets the Selinux policy to allow proxying to localhost
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     Errno::ENOENT:
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:31:in `block (3 levels) in <top (required)>'

Finished in 1.11 seconds
3 examples, 3 failures

Boom! What's wrong? Well, ChefSpec isn't smart enough to warn us about the guard we tried to run, and actually tries to run the Ruby block. I'm (deliberately) running this on a machine without the ability to run the getsebool command to trigger this response, but on my usual workstation running Fedora, this will silently pass. This is what prompted me to write this article, because my colleague who runs these tests on his Mac kept getting this No such file or directory - getsebool httpd_can_network_connect error, despite the Jenkins box (running CentOS) and my workstation working just fine.

So - what's the solution? Well, we need to do something similar to what ChefSpec did for us earlier. We need to create a test double, only this time it's Mixlib::ShellOut that we need to stub. There are three steps we need to follow: we need to capture the :new method that is called on Mixlib::ShellOut; instead of returning canned data, as we did when we called stub_command, we want to return the test double, standing in for the real instance of Mixlib::ShellOut; and finally we want to control the behaviour of the test double, making it return the output we want for our test. So, first we need to create the test double. We do that with the double method in Rspec:

shellout = double

This just gives us a blank test double - we can do anything we like with it. Now we need to stub the constructor, and return the double:

Mixlib::ShellOut.stub(:new).and_return(shellout)
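
As an aside, the .stub form is the older Rspec stubbing syntax. On Rspec 3 and later the same stub can be written with allow; this is my own equivalent sketch, not the article's original code:

allow(Mixlib::ShellOut).to receive(:new).and_return(shellout)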

Finally, we specify how the shellout double should respond when it receives the :run_command method.

allow(shellout).to receive(:run_command).and_return('--> off')

We want the double to return a string that won't cause the guard to be triggered, because we want to assert that the execute resource runs. We can add these three lines to the before block:

before(:each) do
  stub_command("rpm -qa | grep -q '^runit'").and_return(true)
  shellout = double
  Mixlib::ShellOut.stub(:new).and_return(shellout)
  allow(shellout).to receive(:run_command).and_return('--> off')
end

Now when we run the test, we'd expect the Mixlib guard to be stubbed, the test double to be returned, and the double to respond to the :run_command method with a string that doesn't match the guard, so the execute should run! Let's give it a try:

Failures:

  1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
     NoMethodError:
       undefined method `stdout' for "--> off":String
     # /tmp/d20140208-30741-eynz5u/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:23:in `block (3 levels) in <top (required)>'

Alas! What have we done wrong? Look closely at the error. Ruby tried to call :stdout on a String. Why did it do that? Look at the guard again:

not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }

Aha... we need another double. When run_command is called on the first double, we need to return something that can accept a stdout call, which in turn will return the string. Let's add that in:

before(:each) do
  stub_command("rpm -qa | grep -q '^runit'").and_return(true)
  shellout = double
  getsebool = double
  Mixlib::ShellOut.stub(:new).and_return(shellout)
  allow(shellout).to receive(:run_command).and_return(getsebool)
  allow(getsebool).to receive(:stdout).and_return('--> off')
end

Once more with feeling:

$ bundle exec rspec -fd spec/default_spec.rb 

stubs-and-doubles
  includes the runit recipe
  installs OpenJDK
  sets the Selinux policy to allow proxying to localhost

Finished in 0.7313 seconds
3 examples, 0 failures

Just to illustrate how the double interacts with the test, let's quickly change what getsebool returns:

allow(getsebool).to receive(:stdout).and_return('--> on')

Now when we rerun the test, it fails:

Failures:

  1) stubs-and-doubles sets the Selinux policy to allow proxying to localhost
     Failure/Error: expect(chef_run).to run_execute('Allow Apache Network Connection in SELinux')
       expected "execute[Allow Apache Network Connection in SELinux]" actions [] to include :run
     # ./spec/default_spec.rb:31:in `block (3 levels) in <top (required)>'

This time the guard prevented the execute from running: its action list was empty, as the failure message shows, and so the test failed.

Conclusion

One of the great beauties of ChefSpec (and of course Chef) is that at its heart it's just Ruby. This means that at almost any point you can reach into the standard Ruby development toolkit for your testing or infrastructure development needs. Hopefully this little example will be helpful to you. If it inspires you to read more about mocking, I can recommend the following resources: