A Look At MCollective 2.0.0 and Beyond

It’s been a long time since I wrote any blog posts about MCollective; I’ll rectify that with a big series of posts over the coming weeks and months.

MCollective 2.0 was recently released and it represents a massive internal restructure and improvement cycle. In 2.0 not a lot of the new functionality is immediately visible on the command line, but the infrastructure now exists to innovate quite rapidly in the areas of resource discovery and management. The API has gained many new capabilities that allow MCollective to be used in many new use cases, as well as improving on some older ones.

Networking and addressing have been completely rewritten and reinvented to be both more powerful and more generic. You can now use MCollective in ways that were previously not possible or unsuitable for certain use cases, and it is more performant and more pluggable. Other parts of the ecosystem like ActiveMQ and the STOMP protocol have had major improvements, and MCollective utilises these improvements to further its capabilities.

The process of exposing new features based on this infrastructure rewrite to the end user has now started. Puppet Labs recently released version 2.1.0, the first in a new development cycle, and this release hugely improves the capabilities of the discovery system – you can now discover against virtually any conceivable source of data, on the client side, out on your network, or a mix of both. You can choose when current network conditions should be your source of truth, or supply the source of truth from any data source you might have. In addition an entirely new style of addressing and message delivery has been introduced that creates many new usage opportunities.

The development pace of MCollective has taken a big leap forward: I am now employed full time by Puppet Labs to work on MCollective. Future development is secure and the team behind it is growing as we look at expanding its feature set.

I’ll start with a bit of a refresher about MCollective for those new to it, or those who looked at it in the past but might want to come back for another look. In the coming weeks I’ll follow up with a deeper look into some of the aspects highlighted below, as well as the new features introduced since 2.0.0 came out.

Version 2.0 represents a revolutionary change to MCollective, so there is lots of ground to cover; each blog post in the series will focus on one aspect of the new features and capabilities.

The Problem

Modern systems management has moved on from managing machines with a reasonably limited set of software on them to the much larger challenge of integrating many different systems together. More and more, the applications we are required to support are made up of many internal components spread across hundreds of machines in ever increasing complexity. We are now Integration Experts above all – we integrate legacy systems with cloud ones, manage hybrid public and private clouds, and integrate external APIs with in-house developed software, often using cutting edge technologies that tend to be very volatile. Today we might be building our infrastructure on some new technology that does not exist tomorrow.

Worse still, the days of a carefully crafted network that’s a known entity, with individually tuned BIOS settings and hand compiled kernels, are now in the distant past. Instead we have machines being created on demand and shut down when the demand for their resources has passed. Yet we still need to be able to manage, monitor and configure them. The reality of a platform that can be 200 nodes big at one point in the day and 50 nodes later that same day has invalidated many if not most of our trusted technologies like monitoring, management, DNS and even asset tracking.

Developers have tools that let them cope with this ever changing landscape by abstracting the communications between two entities behind a well defined interface. Using an interface to define a communications contract between component A and component B means that if we later wish to swap out B for C, we can contain the fallout from a major technology change by writing a wrapper around C that complies with the published interface. Developers also have dynamic service registries that are much more capable of coping with change or failure than the older, rigid approach to IT management afforded.

Systems Administrators have some of this in that most of our protocols are defined in RFCs, and we can generally assume it would be feasible to swap one SMTP server for another. But what about the management of the actual mail server software in question? You would have dashboards, monitoring, queue management, alerting on message rates, and trend analysis to assist in capacity planning. You would have APIs to create new domains, users or mail boxes in the mail system, often accessed directly by frontend web dashboards accessible to end users. You would expose all or some of these to various parts of your business, such as your NOC, Systems Operators and other technical people who have a stake in the mail systems.

The cost of changing your SMTP server is in fact huge; the fact that the old and new server both speak SMTP is just a small part of the equation, as all your monitoring, management capability and integration with other systems will need to be redeveloped. This often results in changes to how you manage those systems, leading to retraining of staff and a cycle of higher than expected rates of human error. The situation is much worse if you have to run a heterogeneous environment made up of SMTP software from multiple vendors.

In very complex environments where many subsystems and teams interact with the mail system you might find yourself with a large mixture of authentication methods, protocols, user identities, auditing and authorization – if you’re lucky enough to have them at all. You might end up with a plethora of systems, from front-end web applications to NOCs or even a mobile workforce, all having some form of privileged access to the systems under management – often point to point, requiring careful configuration management. Managing this big collection of AAA methods and network ACL controls is very complex, often leading to environments with weak AAA management that are almost impossible to make compliant with regimes like PCI or SOX.

A Possible Solution

One part of a solution to these problems is a strong integration framework. One that provides always present yet pluggable AAA. One that lets you easily add new capabilities to the network in a way that is done via a contract between the various systems enabling networks made up of heterogeneous software stacks. One where interacting with these capabilities can be done with ease from the CLI, Web or other mediums and that remains consistent in UX as your needs change or expand.

You need novel ways to address your machines that are dynamic, yet rigid when appropriate. You need a platform that’s reactive to change, stable, yet capable of operating sanely in degraded networking conditions. You need a framework that can do anything from the simplest task on a remote node – such as running a single command – to being the platform you might use to build a cloud controller for your own PaaS.

MCollective is such a framework, aimed at the Systems Integrator. Its users range from people just interacting with it via a web UI to do defined tasks, to commercial PaaS vendors using it as the basis of their cloud management. There are private clouds built using MCollective and libvirt that manage hundreds of dom0s controlling many more virtual machines. It’s used in many industries solving a wide range of integration problems.

The demos you might have seen have usually focused on CLI based command and control, but it’s more than that – CLIs are easy to demo, while long running background orchestration of communication between software subsystems is much harder to demo. As a command and control channel for the CLI MCollective shines and is a pleasure to use, but MCollective is an integration framework that has all the components you might find in larger enterprise integration systems, including:

  • Resource discovery and addressing
  • Flexible registration system capable of building CMDBs, Asset Systems or any kind of resource tracker
  • Contract based interfaces between client and servers
  • Strong introspective abilities to facilitate generic user interfaces
  • Strong input validation on both clients and servers for maximum protection
  • Pluggable AAA that allows you to change or upgrade your security independently of your code
  • Overlay networking based on Message Orientated Middleware where no 2 components require direct point to point communications
  • Industry standard security via standard SSL as delivered by OpenSSL based on published protocols like STOMP and TLS
  • Auto generating documentation
  • Auto generating packaging for system components that’s versioned and managed using your OS capabilities without reinventing packaging into yet another package format
  • Auto generating code using generators to promote a single consistent approach to designing network components
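
The contract based interfaces mentioned above are expressed in DDL files that describe each agent’s actions, inputs and outputs – these are what make the introspection, validation and auto generated documentation possible. A minimal sketch along the lines of the service agent that ships with MCollective (the names and validation pattern here are illustrative, not a definitive copy of any shipped DDL):

```ruby
metadata :name        => "service",
         :description => "Agent to manage system services",
         :author      => "You",
         :license     => "ASL 2.0",
         :version     => "1.0",
         :url         => "http://example.com/",
         :timeout     => 60

action "status", :description => "Gets the status of a service" do
  input :service,
        :prompt      => "Service Name",
        :description => "The service to get the status for",
        :type        => :string,
        :validation  => '^[a-zA-Z\-_\d]+$',
        :optional    => false,
        :maxlength   => 30

  output :status,
         :description => "The status of the service",
         :display_as  => "Service Status"
end
```

Because clients can fetch and introspect this contract, generic user interfaces can render prompts, validate input and display results without knowing anything about the agent in advance.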

MCollective is built as a distributed system utilising Message Orientated Middleware. It presents a Remote Procedure Call based interface between your code and the network. Unlike other RPC systems it’s a parallel RPC system where a single RPC call can affect one or many nodes at nearly the same time, affording great scale, performance and flexibility – while still supporting a more traditional rolling request cycle approach.

Distributed systems are hard; designing software to be hosted across a large number of hosts is difficult. MCollective provides a series of standards, conventions and enforced relationships that, when embraced, allow you to rapidly write code and run it across your network. Your code does not need to be aware of the complexities of AAA, addressing, network protocols or where clients are connecting from – these are handled by layers around your code.

MCollective is specifically designed for your productivity and joy – these are the ever present benchmarks every feature is put against before merging. It uses the Ruby language, which is very expressive and easy to pick up. It has built in error handling that tends to do just the right thing, and when used correctly you will almost never need to write a user interface – but when you do need custom user interfaces it provides an easy to use approach for doing so, full of helpers and conventions that make it easy to create a consistent experience for your software.

How to design interaction between loosely coupled systems is a question people often struggle with. MCollective provides a single way to design the components and a generic way to interact with them. This means as a Systems Integrator you can focus on your task at hand and not be sucked into the complexities of designing message passing, serialization and other esoteric components of distributed systems. But it does not restrict you to the choices we made as framework developers: almost every component of MCollective is pluggable – network transport, encryption systems, AAA and serializers – and even the entire RPC system can be replaced or complemented by a different one that meets your needs.

The code base is meticulously written to be friendly, obvious and welcoming to newcomers to this style of programming or even the Ruby language. The style is consistent throughout, the code is laid out in a very obvious manner and commented where needed. You should have no problem just reading the code base to learn how it works under the hood. Where possible we avoid metaprogramming and techniques that distract from the readability of the code. This coding style is a specific goal, required for this kind of software, and an aspect we get complimented on weekly.

You can now find it pre-packaged in various distributions such as Ubuntu, Fedora and RHEL via EPEL. It’s known to run on many platforms and different versions of Ruby, and has even been embedded into Java systems and run on iPhones.

Posts in this series

This series is still being written, posts will be added here as they get written:

Common Messaging Patterns Using Stomp – Part 5

This is a post in a series about Middleware for Stomp users, please read the preceding parts starting at 1 before continuing below.

Today I’m changing things around a bit: not so much talking about using Stomp from Ruby, but rather about how to monitor ActiveMQ. The ActiveMQ broker has a statistics plugin that you can interact with over Stomp, which is particularly nice – you can interrogate the broker over the same protocol you use to work with it.

I’ll run through some basic approaches to monitor:

  • The size of queues
  • The memory usage of persisted messages on a queue
  • The rate of messages through a topic or a queue
  • Various memory usage statistics for the broker itself
  • Message counts and rates for the broker as a whole

These are your standard kinds of things you need to know about a running broker in addition to various things like monitoring the length of garbage collections and such which is standard when dealing with Java applications.

Keeping an eye on your queue sizes is very important. I’ve focused a lot on how Queues help you scale by letting you add consumers horizontally. Monitoring feeds the decision making process for how many consumers you need – when to remove some and when to add some.

First you’re going to want to enable the plugin for ActiveMQ, open up your activemq.xml and add the plugin as below and restart when you are done:
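
The plugin element itself is tiny; based on the ActiveMQ statistics plugin documentation it sits inside your broker definition like this (your real broker element will carry more attributes – only the plugins section is new):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <plugins>
    <statisticsBrokerPlugin/>
  </plugins>
</broker>
```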


A quick word about the output format of the messages you’ll see below: they are a serialized JSON (or XML) representation of a data structure. Unfortunately it isn’t immediately usable without some pre-parsing into a real data structure. The Nagios and Cacti plugins you will see below have a method in them for converting this structure into a normal Ruby hash.
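
As a sketch of that pre-parsing step: I am assuming here that the jms-map-json body is a map of entries where each entry carries the key name under "string" and the value under a type key (or both key and value in an array, for string values) – the exact layout is the transformation’s, not mine:

```ruby
require 'json'

# Convert an ActiveMQ "jms-map-json" statistics body into a flat Ruby hash.
# Assumed structure: {"map" => {"entry" => [...]}} where each entry is either
# {"string" => "size", "long" => 0} for numeric values or
# {"string" => ["destinationName", "queue://foo"]} for string values.
def parse_map_json(body)
  stats = {}

  JSON.parse(body)["map"]["entry"].each do |entry|
    if entry["string"].is_a?(Array)
      key, value = entry["string"]
    else
      key = entry["string"]
      value = entry.reject { |k, _| k == "string" }.values.first
    end

    stats[key] = value
  end

  stats
end
```

With a helper like this the raw message body turns into something you can index by stat name, which is all the Nagios and Cacti plugins below need.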

The basic process for requesting stats is a Request Response pattern as per part 3.

stomp.subscribe("/temp-topic/stats", {"transformation" => "jms-map-json"})
# request stats for the random generator queue from part 2
stomp.publish("/queue/ActiveMQ.Statistics.Destination.random_generator", "", {"reply-to" => "/temp-topic/stats"})
puts stomp.receive.body

First we subscribe to a temporary topic, which you first saw in Part 2, and we specify that while ActiveMQ will output a JMS Map it should please convert it for us into a JSON document rather than Java structures.

We then request Destination stats for the random_generator queue, and finally wait for the response and print it; what you get back can be seen below:


Queue Statistics
Queue sizes are basically as you saw above: hit the Stats Plugin at /queue/ActiveMQ.Statistics.Destination.<queue name> and you get stats back for the queue in question.

The table below lists the meaning of these values as I understand them – it’s quite conceivable I am wrong about the specifics of ones like enqueueTime, so I’m happy to be corrected in the comments:

destinationName – The name of the queue in JMS URL format
enqueueCount – Number of messages that were sent to the queue and committed to it
inflightCount – Messages sent to consumers but not yet consumed – they might be sitting in prefetch buffers
dequeueCount – The opposite of enqueueCount – messages sent from the queue to consumers
dispatchCount – Like dequeueCount, but includes messages that might have been rolled back
expiredCount – Messages can have a maximum life; these are the ones that expired
maxEnqueueTime – The maximum amount of time a message sat on the queue before being consumed
minEnqueueTime – The minimum amount of time a message sat on the queue before being consumed
averageEnqueueTime – The average amount of time a message sat on the queue before being consumed
memoryUsage – Memory used by messages stored in the queue
memoryPercentUsage – Percentage of available queue memory used
memoryLimit – Total amount of memory this queue can use
size – How many messages are currently in the queue
consumerCount – Consumers currently subscribed to this queue
producerCount – Producers currently producing messages

I have written a nagios plugin that can check the queue sizes:

$ check_activemq_queue.rb --host localhost --user nagios --password passw0rd --queue random_generator --queue-warn 10 --queue-crit 20
OK: ActiveMQ random_generator has 1 messages
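
The core of such a check is just Nagios threshold plumbing around the size value parsed out of the stats message; a minimal sketch (the function name and messages here are mine, not the actual plugin’s):

```ruby
# Map a queue size against warning/critical thresholds to a Nagios
# status line and exit code (0 = OK, 1 = WARNING, 2 = CRITICAL).
def check_queue_size(queue, size, warn, crit)
  if size >= crit
    [2, "CRITICAL: ActiveMQ #{queue} has #{size} messages"]
  elsif size >= warn
    [1, "WARNING: ActiveMQ #{queue} has #{size} messages"]
  else
    [0, "OK: ActiveMQ #{queue} has #{size} messages"]
  end
end

code, message = check_queue_size("random_generator", 1, 10, 20)
puts message
# a real plugin would finish with: exit(code)
```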

You can see there’s enough information about the specific queue to be able to graph message rates, consumer counts and all sorts of useful information. I also have a quick script that returns all this data in a format suitable for use by Cacti:

$ activemq-cacti-plugin.rb --host localhost --user nagios --password passw0rd --report exim.stats
size:0 dispatchCount:168951 memoryUsage:0 averageEnqueueTime:1629.42897052992 enqueueCount:168951 minEnqueueTime:0.0 consumerCount:1 producerCount:0 memoryPercentUsage:0 destinationName:queue://exim.stats messagesCached:0 memoryLimit:20971520 inflightCount:0 dequeueCount:168951 expiredCount:0 maxEnqueueTime:328585.0
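
Counters like enqueueCount only ever grow, so to graph a message rate Cacti (or any graphing tool) derives it from two successive samples; the arithmetic is simply:

```ruby
# Derive a per-second message rate from two successive samples of a
# growing counter such as enqueueCount, taken interval seconds apart.
def message_rate(previous_count, current_count, interval)
  (current_count - previous_count) / interval.to_f
end

# e.g. two samples of enqueueCount taken 300 seconds apart
puts message_rate(168651, 168951, 300)   # 1.0 messages/second
```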

Broker Statistics
Getting stats for the broker is more of the same: just send a message to /queue/ActiveMQ.Statistics.Broker and tell it where to reply to, and you’ll get a message back with these properties. I am only listing ones not seen above; the meanings are the same, except in the broker stats they are totals for all queues and topics.

storePercentUsage – Total percentage of storage used for all queues
storeLimit – Total storage space available
storeUsage – Storage space currently used
tempLimit – Total temporary space available
brokerId – Unique ID for this broker that you will see in Advisory messages
dataDirectory – Where the broker is configured to store its data for queue persistence etc.
brokerName – The name this broker was given in its configuration file

Additionally there will be a value for each of your connectors listing its URL, including protocol and port.

Again I have a Cacti plugin to get these values out in a format usable in Cacti data sources:

$ activemq-cacti-plugin.rb --host localhost --user nagios --password passw0rd --report broker
stomp+ssl:stomp+ssl storePercentUsage:81 size:5597 ssl:ssl vm:vm://web3 dataDirectory:/var/log/activemq/activemq-data dispatchCount:169533 brokerName:web3 openwire:tcp://web3:6166 storeUsage:869933776 memoryUsage:1564 tempUsage:0 averageEnqueueTime:1623.90502285799 enqueueCount:174080 minEnqueueTime:0.0 producerCount:0 memoryPercentUsage:0 tempLimit:104857600 messagesCached:0 consumerCount:2 memoryLimit:20971520 storeLimit:1073741824 inflightCount:9 dequeueCount:169525 brokerId:ID:web3-44651-1280002111036-0:0 tempPercentUsage:0 stomp:stomp://web3:6163 maxEnqueueTime:328585.0 expiredCount:0

You can find the plugins mentioned above in my GitHub account.

In the same location is a generic checker that publishes a message and waits for its return within a specified number of seconds – a good turnaround test for your broker.

I don’t really have good templates to share but you can see a Cacti graph I built below with the above plugins.

Common Messaging Patterns Using Stomp – Part 4

This is an ongoing post in a series about Middleware for Stomp users, please read parts 1, 2 and 3 of this series first before continuing below.

Back in Part 2 we wrote a little system to ship metrics from nodes into Graphite via Stomp. This solved the goals of the problem at the time, but now let’s see what to do when our needs change.

Graphite is like RRD in that it summarizes data over time and eventually discards old data. Contrast that with OpenTSDB, which never summarizes or deletes data and can store billions of data points. Imagine we want to use Graphite as a short term reporting service for our data, but we also need to store the data long term without losing anything. So we really want to store the data in 2 locations.

We have a few options open to us:

  • Send the metric twice from every node, once to Graphite and once to OpenTSDB.
  • Write a software router that receives metrics on one queue and then route the metric to 2 other queues in the middleware.
  • Use facilities internal to the middleware to do the routing for us

The first option is an obvious bad idea and should just be avoided – this would be the worst case scenario for data collection at scale. The 3rd seems like the natural choice here, but first we need to understand the facilities the middleware provides. Today’s article will explore what ActiveMQ can do for you in this regard.

The 2nd seems an odd fit, but as you’ll see below the capabilities for internal routing at the middleware layer aren’t all that exciting – useful in some cases, but I think most projects will reach for some kind of message router in code sooner or later.

Virtual Destinations
If you think back to part 2 you’ll remember we had a publisher that publishes data into a queue and any number of consumers that consume the queue. The queue will load balance the messages for us, thus helping us scale.

In order to also create OpenTSDB data we essentially need to double up the consumer side into 2 groups. Ideally each set of consumers should be scalable horizontally, and both sets should get a copy of all the data – in other words we need 2 queues with all the data in them, one for Graphite and one for OpenTSDB.

You will also remember that Topics have the behavior of duplicating data they receive to all consumers of the topic. So really what we want is to attach 2 queues to a single topic. This way the topic will duplicate the data and the queues will be used for scalable consumption of the data.

ActiveMQ provides a feature called Virtual Topics that solves this exact problem by convention. You publish messages to a predictably named topic and then you can create any number of queues that will all get a copy of that message.

The convention works like this:

  • Publish to /topic/VirtualTopic.metrics
  • Create consumers for /queue/Consumer.Graphite.VirtualTopic.metrics

Create as many of these consumer queues as you want, changing Graphite for some unique name, and each of the resulting queues will behave like a normal queue with all the load balancing, storage and other queue-like behaviors – but all the queues will get a copy of all the data.

You can customize the name pattern of these queues by changing the ActiveMQ configuration files. I really like this approach to solving the problem vs approaches found in other brokers, since this is all done by convention and you do not need to change your code to set up a bunch of internal structures that describe the routing topology. I consider routing topology living in the code of the consumers to be a form of hard coding. Using this approach all I need to do is make sure the names of the destinations to publish to and consume from are configurable strings.
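
Since the whole scheme is driven by names, keeping those names configurable can be as simple as building them from two strings; a small sketch of the convention (assuming the default VirtualTopic naming pattern):

```ruby
# Build destination names for ActiveMQ's Virtual Topic convention:
# producers publish to the topic, each consumer group reads its own queue.
def virtual_topic(name)
  "/topic/VirtualTopic.#{name}"
end

def virtual_topic_consumer(group, name)
  "/queue/Consumer.#{group}.VirtualTopic.#{name}"
end

puts virtual_topic("metrics")                        # /topic/VirtualTopic.metrics
puts virtual_topic_consumer("Graphite", "metrics")   # /queue/Consumer.Graphite.VirtualTopic.metrics
```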

Our Graphite consumer would not need to change other than the name of the queue it should read and ditto for the producer.

If we found that we simply could not change the code for the consumers/producer, or if the destination just was not a configurable setting, you could still achieve this behavior by using something called Composite Destinations in ActiveMQ, which can describe this behavior purely in the config file with arbitrarily named queues and topics.
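
A sketch of what such a Composite Destination might look like in activemq.xml, based on the ActiveMQ virtual destinations documentation (the queue names here are made up for this example and it sits inside the broker element):

```xml
<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <compositeQueue name="metrics">
        <forwardTo>
          <queue physicalName="metrics.graphite"/>
          <queue physicalName="metrics.opentsdb"/>
        </forwardTo>
      </compositeQueue>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>
```

With this in place anything published to the metrics queue is forwarded to both of the named queues without either the producer or the consumers knowing about the arrangement.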

Selective Consumers
Imagine we wish to give each one of our thousands of servers a unique destination on the middleware so that we can send machines a command directly. You could simply create queues like /queue/nodes.web1.example.com and keep creating queues per server.

The problem with this approach is that internally to ActiveMQ each queue is a thread. So you’d be creating thousands of threads – not ideal.

As we saw before in Part 3 messages can have headers – there we used the reply-to header. Below you’ll find some code that sets an arbitrary header:

stomp.publish("/queue/nodes", "service restart httpd", {"fqdn" => "web1.example.com"})

We are publishing a message with the text service restart httpd to a queue and we are setting a fqdn header.

Now, if every server in our estate subscribed to this one queue, then with what you know about Queues so far this would have the effect of sending the restart request to some random one of our servers – not ideal!

The JMS specification allows for something called selectors to be used while subscribing to a destination:

stomp.subscribe("/queue/nodes", {"selector" => "fqdn = 'web1.example.com'"})

The selector header sets the logic applied to every message to decide whether your subscription receives it or not. The selector language is defined in SQL 92 syntax and you can generally apply logic to any header in the message.

This way we can serve all our servers from a single queue without the overhead of thousands of threads.

The choice between a selector-based queue like this and traditional per-server queues comes down to weighing up the overhead of evaluating all the SQL statements vs creating all the threads. There are also some side effects if you have a cluster of brokers – the queue traffic gets duplicated to all cluster brokers, whereas with traditional queues the traffic only gets sent to a broker if that broker actually has subscribers interested in the data.

So you need to carefully consider the implications and do some tests with your work load, message sizes, message frequencies, amount of consumers etc.

There is a 3rd option that combines these 2 techniques: you’d create queues sourcing from a topic, with JMS Selectors deciding what data hits what queue. You would set up this arrangement in the ActiveMQ config file.

This, as far as I am aware, covers all the major areas internal to ActiveMQ that you can use to apply some routing and duplication of messages.

These methods are useful and solve some problems, but as I pointed out they’re not really that flexible. In a later part of this series I will look into software routers such as Apache Camel and how to write your own.

From a technology choice point of view, future self is now thanking past self for building the initial metrics system on MOM: rather than go back to the drawing board when our needs changed, we were able to solve our problems by virtue of having built on a flexible foundation using well known patterns – without changing much, if any, actual code.

This series continues in part 5.

Common Messaging Patterns Using Stomp – Part 3

Yesterday I showed a detailed example of an Asynchronous system using MOM. Please read part 1 and part 2 of this series first before continuing below.

The system shown yesterday was Asynchronous: there is no coupling, no conversation and no time constraints. The Producer does not know or care what happens to the messages once published, or when. This is a kind of nirvana for distributed systems, but sadly it’s just not possible to solve every problem using this pattern.

Today I’ll show how to use MOM technologies to solve a different kind of problem. Specifically I will show how large retailers scale their web properties using these technologies to create web sites that are more resilient to failure, easier to scale and easier to manage.

Imagine you have just hit buy on some online retailer’s web page. Perhaps they have implemented a 1-click buying system where the very next page is a Thank You page showing some details of your order and also recommendations of other things you might like. It has personalized menus and in some cases even a personalized look and feel.

By the time you see this page your purchase is not complete – it is still going on in the background – but you got a fast acknowledgement back and immediately you are being enticed to spend more money on relevant products.

To achieve this in a PHP or Rails world you would typically have a page that runs top down and does things like generate your CSS, generate your personalized menu, write a record into a database – perhaps for a system like delayed_job to process the purchase later on – and finally run a bunch of SQL queries to find the related items.

This approach is very difficult to scale: all the hard work happens in your front controller, it has to be able to communicate with all the technology you chose in the backend, and you end up with a huge monolithic chunk of code that can rapidly become a nightmare. If you need more capacity to render the recommendations you have no choice but to scale up the entire stack.

The better approach is to decouple all of the bits needed to generate a web page. If you take the narrative above you would have small single purpose services that do the following:

  • Take a 1-click order request, save it and provide an order number back. Start an Asynchronous process to fulfill the order.
  • Generate CSS for the custom look and feel for user X
  • Generate Menu for logged in user X
  • Generate recommendation for product Y based on browsing history for user X

Here we have 4 possible services that could exist on the network and that do not really relate to each other in any way. They are decoupled, do not share state with each other and can do their work in parallel independently from each other.

Your front controller now simply publishes to the MOM a request for each of the 4 services, providing just the information each service needs, and then waits for the responses. Once all 4 responses are received the page is assembled and rendered. If some response does not arrive in time a graceful failure can be performed – like rendering a generic menu, or not showing the recommendations, only the Thank You text.
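
The graceful failure step can be as simple as merging whatever responses did arrive over a set of safe defaults; a sketch with made-up fragment names (a timed-out service is represented here as a nil value):

```ruby
# Fill in safe defaults for any service that did not respond in time.
DEFAULTS = {"css"             => "/* generic stylesheet */",
            "menu"            => "<!-- generic menu -->",
            "recommendations" => ""}

def assemble_page(responses)
  arrived = responses.reject { |_, body| body.nil? }
  DEFAULTS.merge(arrived)
end

page = assemble_page("menu" => nil, "recommendations" => "<ul>..</ul>")
puts page["menu"]              # falls back to the generic menu
puts page["recommendations"]   # uses the service response
```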

There are many benefits to this approach, I’ll highlight some I find compelling below:

  • You can scale each Service independently based on its performance patterns – more order processors where slower ACID writes into databases are required, etc.
  • You can use different technology where appropriate. Your payment systems might be .Net while your CSS generation is in Node.JS and recommendations are in Java
  • Each system can be thought of as a simple standalone island with its own tech stack, monitoring etc, thinking about and scaling small components is much easier than a monolithic system
  • You can separate your payment processing from the rest of your network for PCI compliance by only allowing the middleware to connect into a private subnet where all actual credit information lives
  • Individual services can be upgraded or replaced with new ones much easier than in a monolithic system thus making the lives of Operations much better and lowering risk in ongoing maintenance of the system.
  • Individual services can be reused – the recommendation engine isn’t just an engine that gets called at the end of a sale but also while browsing through the store; the same service can serve both types of request

This pattern is often known as Request Response or similar terms. You should only use it when absolutely needed, as it increases coupling and effectively turns your services into a Synchronous system, but it has its uses and advantages as seen above.

Sample Code
I’ll show 2 quick samples of how this conversation flow works in code and expand a bit into the details with regard to the ActiveMQ JMS broker. The examples will just have the main working part of the code, not the bits that set up connections to the brokers etc. – look in part 2 for some of that.

My example will create a service that generates random numbers using OpenSSL; maybe you have some reason to create a very large number of these and you need to distribute the work across many machines so you do not run out of entropy.

As this is basically a Client / Server relationship I will use these terms, first the client part – the part that requests a random number from the server:

stomp.subscribe("/temp-queue/random_replies")
stomp.publish("/queue/random_generator", "random", {"reply-to" => "/temp-queue/random_replies"})

Timeout::timeout(2) do
   msg = stomp.receive
   puts "Got random number: #{msg.body}"
end

This is pretty simple; the only new thing here is that we first subscribe to a Temporary Queue that we will receive the responses on, and we send the request including this queue name. There is some more detail on temp queues and temp topics below. The timeout part is important: you need it to handle the case where all of the number generators have died or where the service is simply too overloaded to service the request.

Here is the server part: it gets a request, generates the number and replies to the reply-to destination.

require 'openssl'
stomp.subscribe("/queue/random_generator")

loop do
   begin
      msg = stomp.receive
      number = OpenSSL::Random.random_bytes(8).unpack("Q").first
      stomp.publish(msg.headers["reply-to"], number.to_s)
   rescue
      puts "Failed to generate random number: #{$!}"
   end
end

You could start instances of this code on 10 servers and the MOM will load share the requests across the workers thus spreading out the available entropy across 10 machines.

When run this will give you nice big random numbers like 11519368947872272894. The web page in our example would follow a very similar process only it would post the requests to each of the services mentioned and then just wait for all the responses to come in and render them.

Temporary Destination
The big thing here is that we're using a Temporary Queue for the replies. The behavior of temporary destinations differs from broker to broker, and how the Stomp library needs to be used also changes. For ActiveMQ the behavior and special headers etc can be seen in their docs.

When you subscribe to a temporary destination like the client code above does, internally ActiveMQ sets up a queue that has your connection as an exclusive subscriber. Internally the name would be something else entirely from what you gave it – on ActiveMQ a unique name roughly of the form /remote-temp-queue/<connection id> – and it would be exclusive to you.

If you were to puts the contents of msg.headers["reply-to"] in the server code you would see this translated queue name. The broker does this transparently for you.

Other processes can write to this unique destination but your connection would be the only one able to consume messages from it. As soon as your connection closes or you unsubscribe from it the broker will free the queue, delete any messages on it, and anyone else trying to write to it will get an exception.

Temporary queues and this magical translation work even across a cluster of brokers, so you can spread this out geographically and it would still work.

Setting up a temporary queue and informing a network of brokers about it is a costly process, so you should always try to set up a temporary queue early in the process lifetime and reuse it for all the work you have to do.

If you need to correlate responses to their requests then you should use the correlation-id header for that - set it on the request and when constructing the reply read it from the request and set it again on the new reply.
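As a sketch of that correlation flow – the helper names and destinations here are illustrative, not part of the stomp gem, and the publish call is shown only in a comment:

```ruby
require 'securerandom'

# Client side: tag each request with a unique correlation-id so replies
# can later be matched to their requests.
def request_headers(reply_to)
  {"reply-to" => reply_to, "correlation-id" => SecureRandom.hex(16)}
end

# Server side: copy the correlation-id from the incoming request headers
# onto the reply headers so the client can pair them up.
def reply_headers(incoming)
  {"correlation-id" => incoming["correlation-id"]}
end

# The client would then publish with these headers, roughly:
#   stomp.publish("/queue/random_generator", "random",
#                 request_headers("/temp-queue/random_replies"))
# and on each reply compare msg.headers["correlation-id"] against the
# id it sent with the request.
```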

This series continues in part 4.

Common Messaging Patterns Using Stomp – Part 2

Yesterday I gave a quick intro to the basics of Message Orientated Middleware, today we’ll build something kewl and useful.

Graphite is a fantastic statistics-as-a-service package for your network. It can store, graph, slice and dice your time series data in ways that were only imaginable in the dark days of just having RRD files. The typical way to get data into it is to just talk to its socket and send some metric. This is mostly great but has some issues:

  • You have a huge network and so you might be able to overwhelm its input channel
  • You have strict policies about network connections and are not allowed to have all servers open a connection to it directly
  • Your network is spread over the globe and sometimes the connections are just not reliable, but you do not wish to lose metrics during this time

Graphite solves this already by having an AMQP input channel, but for the sake of seeing how we might solve these problems I’ll show how to build your own Stomp based system to do this.

We will allow all our machines to Produce messages into the Queue and we will have a small pool of Consumers that read the queue and speak to Graphite using the normal TCP protocol. We’d run Graphite and the Consumers on the same machine to give best possible availability to the TCP connections but the Middleware can be anywhere. The TCP connections to Graphite will be persistent and be reused to publish many metrics – a connection pool in other words.

So first the Producer side of things, a simple CLI tool that takes a metric and value on the CLI and publishes it.

require 'rubygems'
require 'stomp'
require 'socket'
require 'timeout'

raise "Please provide a metric and value on the command line" unless ARGV.size == 2
raise "The metric value must be numeric" unless ARGV[1] =~ /^[\d\.]+$/

msg = "%s.%s %s %d" % [Socket.gethostname, ARGV[0], ARGV[1], Time.now.utc.to_i]

begin
  Timeout::timeout(2) do
    stomp = Stomp::Client.new("", "", "stomp.example.com", 61613)
    stomp.publish("/queue/graphite", msg)
  end
rescue Timeout::Error
  STDERR.puts "Failed to send metric within the 2 second timeout"
  exit 1
end

This is all there really is to sending a message to the middleware, you’d just run this like

producer.rb load1 `cat /proc/loadavg|cut -f1 -d' '`

Which would result in a message being sent with the body

devco.net.load1 0.1 1323597139

The consumer part of this conversation is not a whole lot more complex, you can see it below:

require 'rubygems'
require 'stomp'
require 'socket'

def graphite
  @graphite ||= TCPSocket.open("localhost", 2003)
end

client = Stomp::Connection.new("", "", "stomp.example.com", 61613, true)
client.subscribe("/queue/graphite")

loop do
  begin
    msg = client.receive
    graphite.puts msg.body
  rescue
    STDERR.puts "Failed to receive from queue: #{$!}"
    sleep 1
  end
end

This subscribes to the queue and loops forever reading messages, which then get sent to Graphite using a normal TCP socket. To be more robust this should use the transaction properties I mentioned, since a crash here will lose a single message.
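One way to make that receive loop safer is the stomp gem’s client acknowledgement mode: subscribe with the "ack" => "client" header and only acknowledge a message once Graphite has it, so an unacknowledged message is redelivered if the process dies. A minimal sketch – the wiring in the comment is illustrative:

```ruby
# Relay one message from the broker to Graphite, acknowledging it only
# after the write succeeded. If we crash between receive and ack the
# broker redelivers the message to another consumer.
def relay(conn, graphite)
  msg = conn.receive
  graphite.puts msg.body
  conn.ack msg.headers["message-id"]
end

# Wiring it up would look roughly like:
#   conn = Stomp::Connection.new("", "", "stomp.example.com", 61613, true)
#   conn.subscribe("/queue/graphite", "ack" => "client")
#   graphite = TCPSocket.open("localhost", 2003)
#   loop { relay(conn, graphite) }
```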

So that is really all there is to it! You’d totally want to make the receiving end a bit more robust – make it a daemon, perhaps using the Daemons or Dante Gems, and add some logging. You’d agree though this is extremely simple code that anyone could write and maintain.

This code has a lot of non obvious side effects though simply because we use the Middleware for communication:

  • It’s completely decoupled, the Producers don’t know anything about the Consumers other than the message format.
  • It’s reliable because the Consumer can die but the Producers would not even be aware or need to care about this
  • It’s scalable – by simply starting more Consumers you can consume messages from the queue quicker and in a load balanced way. Contrast this with perhaps writing a single multi threaded server with all that entails.
  • It’s trivial to understand how it works and the code is completely readable
  • It protects my Graphite from the Thundering Herd Problem by using the middleware as a buffer and only creating a manageable pool of writers to Graphite
  • It’s language agnostic, you can produce messages from Perl, Ruby, Java etc
  • The network layer can be made resilient without any code changes

You wouldn’t think these 44 lines of code could have all these properties, but they do, and this is why I think this style of coding is particularly well suited to Systems Administrators. We are busy people; we do not have time to implement from scratch our own connection pooling, buffers, spools and everything else you would need to duplicate these points. We have 20 minutes and we just want to solve our problem. Languages like Ruby and technologies like Message Orientated Middleware let you do this.

I’d like to expand on one aspect a bit – I mentioned that the network topology can change without the code being aware of it and that we might have restrictive firewalls preventing everyone from communicating with Graphite. Our 44 lines of code solve these problems with the help of the MOM.

By using the facilities the middleware provides to create complex networks we can distribute our connectivity layer globally as below:

Here we have producers all over the world and our central consumer sitting in the EU somewhere. The queuing and storage characteristics of the middleware are present in every region. The producers in each region only need the ability to communicate with their regional Broker.

The middleware layer is reliably connected in a Mesh topology but in the event that transatlantic communications are interrupted the US broker will store the metrics till the connection problem is resolved. At that point it will forward the messages on to the EU broker and finally to the Consumer.

We can deploy brokers in a HA configuration regionally to protect against failure there. This is very well suited for multi DC deployments, deployments in the cloud where you have machines in different Regions and Availability Zones etc.

This is also an approach you could use to allow your DMZ machines to publish metrics without needing the ability to connect directly to the Graphite service. The middleware layer is very flexible in how it’s clustered and who makes the connections, so it’s ideal for that.

So in the end with just a bit of work once we’ve invested in the underlying MOM technology and deployed that we have solved a bunch of very complex problems using very simple techniques.

While this was done with reliability and scalability in mind for me possibly the bigger win is that we now have a simple network wide service for creating metrics. You can write to the queue from almost any language and you can easily allow your developers to just emit metrics from their Java code and you can emit metrics from the system side perhaps by reusing Munin.

Using code that is not a lot more complex than this I have been able to gather tens of thousands of Munin metrics into Graphite in a very short period of time. I was able to up my collection frequency to once every minute instead of the traditional 5 minutes, and to do that with a load average below 1 versus below 30 for Munin. This is probably more to do with Graphite being superior than anything else, but the other properties outlined above make this very appealing. Nodes push their statistics as soon as they are built and I never need to edit a Munin config file anymore to tell it where my servers are.

This enabling of all parties in the organization to quickly and easily create metrics without having an operations bottleneck is a huge win and at the heart of what it means to be a DevOps Practitioner.

Part 3 has been written, please read that next.

Common Messaging Patterns Using Stomp – Part 1

As most people who follow this blog know I’m quite a fan of messaging based architectures. I am also a big fan of Ruby and I like the simplicity of the Stomp Gem to create messaging applications rather than some of the more complex options like those based on Event Machine (which I am hard pressed to even consider Ruby) or the various AMQP ones.

So I wanted to do a few blog posts on basic messaging principles and patterns and how to use those with the Stomp gem in Ruby. I think Ruby is a great choice for systems development – it’s not perfect by any stretch – but it’s a great replacement for all the things systems people tend to reach to Perl for.

Message Orientated Middleware represents a new way of inter-process communication, different from previous approaches that were in-process or reliant on file system sockets or TCP or UDP sockets. While consumers and producers connect to the middleware using TCP, you simply cannot explain how messaging works in relation to TCP. It’s a new transport that brings with it its own concepts, addressing and reliability.

There are some parallels to TCP/IP with regard to reliability as per TCP and unreliability as per UDP, but that’s really where it ends – messaging based IPC is very different and it’s best to learn its semantics. TCP is to middleware as Ethernet frames are to TCP: it’s just one of the possible ways middleware brokers can communicate, at a much lower level, and it works very differently.

Why use MOM
There are many reasons but it comes down to promoting a style of applications that scales well. Mostly it does this by a few means:

  • Promotes application design that breaks complex applications into simple single-function building blocks that are easy to develop, test and scale.
  • Application building blocks aren’t tightly coupled, don’t maintain state and can scale independently of other building blocks
  • The middleware layer implementation is transparent to the application – network topologies, routing, ACLs etc can change without application code change
  • The brokers provide a lot of the patterns you need for scaling – load balancing, queuing, persistence, eventual consistency, etc
  • Mature brokers are designed to be scalable and highly available – very complex problems that you really do not want to attempt to solve on your own

There are many other reasons but for me these are the big ticket items – especially the 2nd one.

A note of warning though: while mature brokers are fast, scalable and reliable they are not some magical silver bullet. You might be able to handle hundreds of thousands of messages a second on commodity hardware, but there are limits and trade-offs. Enable persistence or reliable messaging and that number drops drastically.

Even without enabling reliability or persistence you can easily do dumb things that overwhelm your broker – they do not scale infinitely and each broker has design trade offs. Some dedicate single threads to topics/queues that can become a bottleneck, others do not replicate queues across a cluster and so you end up with SPOFs that you might not have expected.

Message passing might appear to be instantaneous but it does not defeat the speed of light; it’s only fast relative to the network distance, network latencies and latencies in your hardware or OS kernel.

If you wish to design a complex application that relies heavily on your middleware for HA and scaling you should expect to spend as much time learning, tuning, monitoring, trending and recovering from crashes as you might with your DBMS, Web Server or any other big complex component of your system.

Types of User
There are basically 2 terms used to describe actors in a message orientated system. Software that produces messages is called a Producer and software that consumes them a Consumer. Your application might be both a producer and a consumer, but these terms are what I’ll use to describe the roles of various actors in a system.

Types of Message
In the Stomp world there really are only two types of message destination. Destinations have names like /topic/my.input or /queue/my.input – the first is a Topic and the second is a Queue. The format of these names might even change between broker implementations.

There are some riffs on these 2 types of message source – you get short lived private destinations, queues that vanish as soon as all subscribers are gone, topics that behave like queues and so forth. The basic principles you need to know are just Topics and Queues; detail on these can be seen below and the rest builds on them.

A topic is basically a named broadcast zone. If I produce a single message into a topic called /topic/my.input and there are 10 consumers of that topic then all 10 will get a copy of the message. Messages are not stored when you aren’t around to read them – it’s a stream of messages that you can just tap into as needed.

There might be some buffers involved, which means if you’re a bit slow to consume messages you will have a few hundred or thousand waiting depending on your settings, but this is just a buffer, it’s not really a store and shouldn’t be relied on. If your process crashes the buffer is lost. If the buffer overflows, messages are lost.

The use of topics is often described as having Publish and Subscribe semantics since consumers Subscribe and every subscriber gets the messages that are published.

Topics are often used in cases where you do not need reliable handling of your data. A stock symbol or a high frequency status message from your monitoring system might go over topics. If you miss the current stock price the next update will soon supersede it, so why queue them – a perfect use for a broadcast based system.

Instead of broadcasting messages, queues will store messages if no-one is around to consume them, and a queue will load balance the work load across consumers.

This style of message is often used to create async workers that do some kind of long running task.

Imagine you need to convert many documents from MS Word to PDF – maybe after someone uploaded it to your site. You would create each job request in a queue and your converter process consumes the queue. If the converter is too slow or you need more capacity you simply need to add more consumers – perhaps even on different servers – and the middleware will ensure the traffic is load shared across the consumers.

You can therefore focus on a single function process – convert document to PDF – and the horizontal scalability comes at a very small cost on the part of the code since the middleware handles most of that for you. Messages are stored in the broker reliably and if you choose can even survive broker crashes and server reboots.
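A minimal sketch of the conversion queue described above – the queue name, paths and converter are made up for illustration, and the publish/subscribe calls assume the stomp gem used elsewhere in this series:

```ruby
# Producer side: enqueue one conversion job per uploaded document.
def submit_job(stomp, doc_path)
  stomp.publish("/queue/pdf_jobs", doc_path)
end

# Worker side: process a single job. The broker load balances jobs
# across however many workers subscribe, on however many servers.
def handle_job(msg, converter)
  converter.call(msg.body)
end

# A worker loop would look roughly like:
#   conn = Stomp::Connection.new("", "", "stomp.example.com", 61613, true)
#   conn.subscribe("/queue/pdf_jobs")
#   loop { handle_job(conn.receive, lambda { |doc| convert_to_pdf(doc) }) }
```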

Additionally queues generally have a transaction metaphor: you start a transaction when you begin to process the document, and if you crash mid-processing the message will be requeued for later processing. To avoid an infinite loop from a bad message that crashes all Consumers, brokers also have a Dead Letter Queue where messages that have been retried too many times will go to sit in Limbo for an administrator to investigate.
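The transaction metaphor can be sketched with the stomp gem’s begin/commit/abort calls; the convert_to_pdf step is a hypothetical placeholder for whatever work the consumer does:

```ruby
# Process one message inside a broker transaction: the ack only takes
# effect on commit, and on failure the abort returns the message to the
# queue for redelivery (or eventually the dead letter queue).
def process_with_transaction(conn, msg)
  conn.begin("work")
  begin
    convert_to_pdf(msg.body)   # hypothetical long-running work step
    conn.ack(msg.headers["message-id"], :transaction => "work")
    conn.commit("work")
  rescue
    conn.abort("work")
  end
end
```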

These few basic features enable you to create software that is resilient to failure, scalable and not susceptible to thundering herd problems. You can easily monitor the size of queues and know if your workers are not keeping up, so you can provision more worker capacity – or retire unneeded capacity.

Playing with these concepts is very easy, you need a middleware broker and the Stomp library for Ruby, follow the steps below to install both in a simple sandbox that you can delete when you’re done. I’ll assume you installed Ruby, Rubygems and Python with your OS package management.

Note I am using CoilMQ here instead of the Ruby Stompserver since Stompserver has some bug with queues – they just don’t work right at all.

$ export GEM_HOME=/tmp/gems
$ export PYTHONPATH=/tmp/python
$ gem install stomp
$ mkdir /tmp/python; easy_install -d /tmp/python CoilMQ
$ /tmp/python/coilmq

At this point you have a working Stomp server that is listening on port 61613; you can just ^C it when you are done. If you want to do the stuff below using more than one machine then add -b 0.0.0.0 to the coilmq command line and make sure port 61613 is open on your machine. The exercises below will work fine on one machine or twenty.

To test topics we first create multiple consumers, I suggest you do this in Screen and open multiple terms, for each terminal set the GEM_HOME as above.

Start 2 or 3 of these; they are consumers on the topic:

$ STOMP_HOST=localhost STOMP_PORT=61613 /tmp/gems/bin/stompcat /topic/my.input
Connecting to stomp://localhost:61613 as 
Getting output from /topic/my.input

Now we’ll create 1 producer and send a few messages – just type into the console; I typed 1, 2, 3 (ignore the warnings about deprecation):

$ STOMP_HOST=localhost STOMP_PORT=61613 /tmp/gems/bin/catstomp /topic/my.input
Connecting to stomp://localhost:61613 as
Sending input to /topic/my.input

You should see these messages showing up on each of your consumers at roughly the same time and all your consumers should have received each message.

Now try the same with /queue/my.input instead of the topic and you should see that the messages are distributed evenly across your consumers.

You should also try to create messages with no consumers present and then subscribe consumers to the queue or topic; you’ll notice the difference in persistence behavior between topics and queues right away.

When you’re done you can ^C everything and just rm /tmp/python and /tmp/gems.

That’s it for today, I’ll post several follow up posts soon.

UPDATE: part 2 has been published.

GDash – Graphite Dashboard

I love graphite, I think it’s amazing, I specifically love that it’s essentially Stats as a Service for your network since you can get hold of the raw data to integrate into other tools.

I’ve started pushing more and more things to it on my network like all my Munin data as per my previous blog post.

What’s missing though is a very simple to manage dashboard. Work is ongoing by the Graphite team on this and there’s been a new release this week that refines their own dashboard even more.

I wanted a specific kind of dashboard though:

  • The graph descriptions should be files that you can version control
  • Graphs should have meta data that’s visible to people looking at the graphs for context. The image below shows a popup that is activated by hovering over a graph.
  • Easy bookmarkable URLs
  • Works in common browsers and resolutions
  • Allow graphs to be added/removed/edited on the fly without any heavy restarts required using something like Puppet/Chef – graphs are just text files in a directory
  • Dashboards and graphs should be separate files that can be shared and reused

I wrote such a dashboard with the very boring name – GDash – that you can find in my GitHub. It only needs Sinatra and uses the excellent Twitter bootstrap framework for the visual side of things.

click for full size

The project is setup to be hosted in any Rack server like Passenger but it will also just work in Heroku, if you hosted it on Heroku it would create URLs to your private graphite install. To get it going on Heroku just follow their QuickStart Guide. Their free tier should be enough for a decent sized dashboard. Deploying the app into Heroku once you are signed up and setup locally is just 2 commands.

You should only need to edit the config.ru file to optionally enable authentication and to point it at your Graphite and give it a name. After that you can add graphs, the example one that creates the above image is in the sample directory.

More detail about the graph DSL used to describe graphs can be found at GitHub; I know the docs for the DSL need to be improved and will do so soon.

I have a few plans for the future:

  • As I am looking to replace Munin I will add a host view that will show common data per host. It will show all the data there and you can give it display hints using the same DSL
  • Add a display mode suitable for big monitors – wider layout, no menu bar
  • Some more configuration options for example to set defaults that apply to all graphs
  • Add a way to use dygraphs to display Graphite data

Ideas, feedback and contributions welcome!

Interact with munin-node from Ruby

I’ve blogged a lot about a new kind of monitoring but what I didn’t point out is that I do actually like the existing toolset.

I quite like Nagios. Its configuration is horrible, yes, the web UI is near useless and it throws away useful information like perfdata. It is though a good poller: it’s solid, never crashes, doesn’t use too many resources and has created a fairly decent plugin protocol (except for its perfdata representation).

I am in two minds about munin. I like munin-node and the plugin model, I love that there are hundreds of plugins available already, and I love the introspection that lets machines discover their own capabilities. But I hate everything about the central munin poller that’s supposed to be able to scale and query all your servers and pre-create graphs. It simply doesn’t work; even on a few hundred machines it’s a completely broken model.

So I am trying to find ways to keep these older tools – and their collective thousands of plugins – around but improve things to bring them into the fold of my ideas about monitoring.

For munin I want to get rid of the central poller, I’d rather have each node produce its data and push it somewhere. In my case I want to put the data into a middleware queue and process the data later into an archive or graphite or some other system like OpenTSDB. I had a look around for some Ruby / Munin integrations and came across a few, I only investigated 2.

Adam Jacob has a nice little munin 2 graphite script that simply talks straight to graphite, this might be enough for some of you so check it out. I also found munin-ruby from Dan Sosedoff which is what I ended up using.

Using the munin-ruby code is really simple:

require 'rubygems'
require 'munin-ruby'

# connect to munin on localhost
munin = Munin::Node.new("localhost", :port => 4949)

# get each service and print its metrics
munin.services.each do |service|
   puts "Metrics for service: #{service}"

   munin.service(service).params.each_pair do |k, v|
      puts "   #{k} => #{v}"
   end
end

This creates output like this:

Metrics for service: entropy
   entropy => 174
Metrics for service: forks
   forks => 7114853

So from here it’s not far to go to get these events onto my middleware – I turn each of them into a small JSON blob, along with a stat about the collector itself.


The code that creates and sends this JSON can be seen here, it’s probably useful just to learn from and create your own as that’s a bit specific to me.

Of course my event system already has the infrastructure to turn these JSON events into graphite data that you can see in the image attached to this post so this was a really quick win.

The remaining question is about presentation; I want to create some kind of quick node view system like Munin has. I loved the introspection that you can do against a munin node to discover graph properties – there might be something there I can use, otherwise I’ll end up making a simple viewer for this.

I imagine for each branch of the munin data, like cpu, I can either just show all the data by default or take hints from a small DSL on how to present the data. You’d need to know that some data needs to be derived or used as gauges etc. More on that when I’ve had some time to play.

Social networks for servers

A while ago Techcrunch profiled a company called Nodeable who closed 2 mil funding. They bill themselves as a social network for servers and have some cartoon and a beta invite box on their site but no actual usable information. I signed up but never heard from them. So I’ve not seen what they’re doing at all.
Either way I thought the idea sucked.

Since then I kept coming back to it thinking maybe it’s not bad at all, I’ve seen many companies try to include the rest of the business into the status of their networks with big graph boards and complex alerting that is perhaps not suited to the audience.

These experiments often fail and cause more confusion than clarity as the underlying systems are not designed to be friendly to business people. I had a quick twitter convo with @patrickdebois too and a few people on ##infra-talk were keen on the idea. It’s not really a surprise that a lot of us want to make the events stream of our systems more accessible to the business and other interested parties.

So I setup a copy of status.net – actually I used the excellent appliance from Turnkey Linux and it took 10 minutes. I gave each of my machines an account with the username being their MAC address and hooked into my existing event stream, it was all less than 150 lines of code and the result is quite pleasing.
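The glue code itself is simple: status.net exposes a Twitter-compatible API, so posting an event is one authenticated HTTP request. The sketch below is illustrative – the host, credentials and message are placeholders, and the exact endpoint path is an assumption based on that Twitter compatibility:

```ruby
require 'net/http'
require 'uri'

# Build the Twitter-compatible status update endpoint for a status.net
# instance (hypothetical host).
def status_update_uri(host)
  URI.parse("http://#{host}/api/statuses/update.json")
end

# Post one status on behalf of a machine, authenticating with the
# machine's own account (here: its MAC address as the username).
def post_status(host, user, password, text)
  uri = status_update_uri(host)
  req = Net::HTTP::Post.new(uri.path)
  req.basic_auth(user, password)
  req.set_form_data("status" => text)
  Net::HTTP.new(uri.host, uri.port).request(req)
end

# post_status("status.example.com", "00:16:3e:aa:bb:cc", "secret",
#             "Experiencing high load #warning")
```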

What makes this compelling is specifically that it is void of technical details – no mention of /dev/sda1 and byte counts and percentages that make text hard to scan or understand by non tech people. Just simple things like “Experiencing high load #warning”. This is something normal people can easily digest. It’s small enough to scan really quickly and for many users this is all they need to know.

At the moment I have Puppet changes, IDS events and Nagios events showing up on a twitter-like timeline for all my machines. I hash tag the tweets using things like #security, #puppet and #fail for failing puppet resources, and #critical, #warning and #ok for Nagios etc. I plan on also adding hash tags matching machine roles as captured in my CM. Click on the image to the right for a bigger example.

Status.net is unfortunately not the tool to build this on – it’s simply too buggy and too limited. You can make groups and add machines to groups, but this isn’t something like Twitter’s user-managed lists; I can see a case where a webmaster would just add the machines he knows his apps run on to a list and follow that. You can’t easily do this with status.net. My machines have their FQDN as real names – why on earth status.net doesn’t show real names in the timeline I don’t get, I hope it’s a setting I missed. I might look towards something like Yammer for this, or if Nodeable eventually ships something that might do.

I think the idea has a lot of merit. If I think about the 500 people I follow on twitter, it’s hard work but not at all unmanageable, and you would hope those 500 people are more chatty than a well managed set of servers. The tools we already use like lists, selective following, hashtags and clients for mobiles, desktop, email notifications and RSS all apply to this use case.

Imagine your server’s profile information contained a short description of its function. The contact email address is the team responsible for it. The geo information is datacenter coordinates. You could identify ‘hot spots’ in your infrastructure by just looking at tweets on a map, just like we do with tweets from people.

I think the idea has legs, status.net is a disappointment. I am quite keen to see what Nodeable comes out with and I will keep playing with this idea.

Rich data on the CLI

I’ve often wondered how things will change in a world where everything is a REST API and how relevant our Unix CLI tool chain will be in the long run. I’ve known we needed CLI ways to interact with data – like JSON data – and have given this a lot of thought.

MS Powershell does some pretty impressive object parsing on their CLI but I was never really sure how close we could get to that in Unix. I’ve wanted to start my journey with the grep utility as that seemed a natural starting point and my most used CLI tool.

I have no idea how to write parsers and matchers but luckily I have a very talented programmer working for me who was able to take my ideas and realize them awesomely. Pieter wrote a JSON grep and I want to show off a few bits of what it can do.

I’ll work with the document below:

[
  {"name":"R.I.Pienaar",
   "contacts": [
                 {"protocol":"twitter", "address":"ripienaar"},
                 {"protocol":"email", "address":"rip@devco.net"},
                 {"protocol":"msisdn", "address":"1234567890"}
               ]
  },
  {"name":"Pieter Loubser",
   "contacts": [
                 {"protocol":"twitter", "address":"pieterloubser"},
                 {"protocol":"email", "address":"foo@example.com"},
                 {"protocol":"msisdn", "address":"1234567890"}
               ]
  }
]

There are a few interesting things to note about this data:

  • The document is an array of hashes, this maps well to the stream of data paradigm we know from lines of text in a file. This is the basic structure jgrep works on.
  • Each document has another nested set of documents in an array – the contacts array.


The examples below show a few possible grep use cases:

A simple grep for a single key in the document:

$ cat example.json | jgrep "name='R.I.Pienaar'"
[
  {"name":"R.I.Pienaar",
   "contacts": [
                 {"protocol":"twitter", "address":"ripienaar"},
                 {"protocol":"email", "address":"rip@devco.net"},
                 {"protocol":"msisdn", "address":"1234567890"}
               ]
  }
]

We can extract a single key from the result:

$ cat example.json | jgrep "name='R.I.Pienaar'" -s name
R.I.Pienaar

A simple grep for 2 keys in the document:

% cat example.json | 
    jgrep "name='R.I.Pienaar' and contacts.protocol=twitter" -s name

The nested documents pose a problem though: if we were to search for contacts.protocol=twitter and contacts.address=1234567890 we would get both documents back rather than none. That is because, to search the sub-documents effectively, we need to require that both values exist in the same sub-document.

$ cat example.json | 
     jgrep "[contacts.protocol=twitter and contacts.address=1234567890]"

Placing [] around the two terms works like () but restricts the search to a single sub-document. In this case there is no sub-document in the contacts array that has both twitter and 1234567890, so nothing matches.
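The scoping behaviour of [] can be sketched in a few lines of Python. This is an illustration of the semantics only, not jgrep’s implementation; the data is the example document from above:

```python
# Example data: the two documents used throughout this post.
docs = [
    {"name": "R.I.Pienaar",
     "contacts": [
         {"protocol": "twitter", "address": "ripienaar"},
         {"protocol": "email", "address": "rip@devco.net"},
         {"protocol": "msisdn", "address": "1234567890"}]},
    {"name": "Pieter Loubser",
     "contacts": [
         {"protocol": "twitter", "address": "pieterloubser"},
         {"protocol": "email", "address": "foo@example.com"},
         {"protocol": "msisdn", "address": "1234567890"}]},
]

def doc_level(doc):
    # "contacts.protocol=twitter and contacts.address=1234567890":
    # each condition may be satisfied by a *different* sub-document.
    return (any(c["protocol"] == "twitter" for c in doc["contacts"]) and
            any(c["address"] == "1234567890" for c in doc["contacts"]))

def scoped(doc):
    # "[contacts.protocol=twitter and contacts.address=1234567890]":
    # both conditions must hold in the *same* sub-document.
    return any(c["protocol"] == "twitter" and c["address"] == "1234567890"
               for c in doc["contacts"])

print([d["name"] for d in docs if doc_level(d)])  # both documents match
print([d["name"] for d in docs if scoped(d)])     # no document matches
```

Without the [] scoping both documents match, because each has a twitter contact and a 1234567890 contact somewhere; with scoping, neither does.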

Of course you can have many search terms:

% cat example.json | 
     jgrep "[contacts.protocol=twitter and contacts.address=1234567890] or name='R.I.Pienaar'" -s name

We can also construct entirely new documents:

% cat example.json | jgrep "name='R.I.Pienaar'" -s "name contacts.address"
[
  {
    "name": "R.I.Pienaar",
    "contacts.address": [
      "ripienaar",
      "rip@devco.net",
      "1234567890"
    ]
  }
]

Real World

So I am adding JSON output support to MCollective. Today I was rolling out a new Nagios check script to my nodes and wanted to be sure they all had it, so I used the File Manager agent to fetch the stats for my file from all the machines, then printed the ones that didn’t match my expected MD5.

$ mco rpc filemgr status file=/.../check_puppet.rb -j | 
   jgrep 'data.md5!=a4fdf7a8cc756d0455357b37501c24b5' -s sender

Eventually you will be able to pipe this output back into mco and call another agent. Here I take all the machines that didn’t yet have the right file and cause a Puppet run to happen on them; this is very PowerShell-like and the eventual use case I am building this for:

$ mco rpc filemgr status file=/.../check_puppet.rb -j | 
   jgrep 'data.md5!=a4fdf7a8cc756d0455357b37501c24b5' |
   mco rpc puppetd runonce

I also wanted to know the total size of a logfile across my web servers to be sure I would have enough space to copy them all:

$ mco rpc filemgr status file=/var/log/httpd/access_log -W /apache/ -j |
    jgrep -s "data.size"|
    awk '{ SUM += $1} END { print SUM/1024/1024 " MB"}'
2757.9093 MB

Now how about interacting with a webservice like the GitHub API:

$ curl -s http://github.com/api/v2/json/commits/list/puppetlabs/marionette-collective/master|
   jgrep --start commits "author.name='Pieter Loubser'" -s id

Here I fetched the most recent commits in the marionette-collective GitHub repository, searched for the ones by Pieter and returned the IDs of those commits. The --start argument is needed because the top of the returned JSON is not the array we care about; --start tells jgrep to take the commits key and grep that.
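What --start does can be sketched simply: parse the JSON, descend into the named key, then treat that value as the document stream to grep. A minimal Python illustration (the response below is made-up stand-in data, not real GitHub output):

```python
import json

# Stand-in for an API response: the array we want is nested under the
# "commits" key, so grepping the top level directly would find nothing.
response = json.loads("""
{"commits": [
   {"id": "abc123", "author": {"name": "Pieter Loubser"}},
   {"id": "def456", "author": {"name": "R.I.Pienaar"}}
]}""")

docs = response["commits"]   # the effect of --start commits
ids = [c["id"] for c in docs if c["author"]["name"] == "Pieter Loubser"]
print(ids)  # ['abc123']
```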

Or since it’s Sysadmin Appreciation Day how about tweets about it:

% curl -s "http://search.twitter.com/search.json?q=sysadminday"|
   jgrep --start results -s "text"
RT @RedHat_Training: Did you know that today is Systems Admin Day?  A big THANK YOU to all our system admins!  Here's to you!  http://t.co/ZQk8ifl
RT @SinnerBOFH: #BOFHers RT @linuxfoundation: Happy #SysAdmin Day! You know who you are, rock stars. http://t.co/kR0dhhc #linux
RT @google: Hey, sysadmins - thanks for all you do. May your pagers be silent and your users be clueful today! http://t.co/N2XzFgw
RT @google: Hey, sysadmins - thanks for all you do. May your pagers be silent and your users be clueful today! http://t.co/y9TbCqb #sysadminday
RT @mfujiwara: http://www.sysadminday.com/
RT @mitchjoel: It's SysAdmin Day! Have you hugged your SysAdmin today? Make sure all employees follow the rules: http://bit.ly/17m98z #humor
? @mfujiwara: http://www.sysadminday.com/

Here, as before, we have to grep the array contained inside the results key of the returned JSON.

I can also find all the restaurants near my village via SimpleGEO:

$ curl -x localhost:8001 -s "http://api.simplegeo.com/1.0/places/51.476959,0.006759.json?category=Restaurant"|
   jgrep --start features "properties.distance<2.0" -s "properties.address \
                                      properties.name \
                                      properties.postcode \
                                      properties.phone \
                                      properties.distance"
[
  {
    "properties.address": "15 Stratheden Road",
    "properties.distance": 0.773576114771768,
    "properties.phone": "+44 20 8858 8008",
    "properties.name": "The Lamplight",
    "properties.postcode": "SE3 7TH"
  },
  {
    "properties.address": "9 Stratheden Parade",
    "properties.distance": 0.870622234751732,
    "properties.phone": "+44 20 8858 0728",
    "properties.name": "Sun Ya",
    "properties.postcode": "SE3 7SX"
  }
]

There’s a lot more I didn’t show: it supports all the usual operators like <=, >= and != as well as a fair few other bits.

You can get this utility by installing the jgrep Ruby Gem or grab the code from GitHub. The Gem is a library so you can use these abilities in your ruby programs but also includes the CLI tool shown here.

It’s pretty new code and we’d totally love feedback, bugs and ideas! Follow the author on Twitter at @pieterloubser and send him some appreciation too.