A couple of years ago, I showed you how to Visualize Small Batch Sizes with Git by plotting, day by day, the number of changed source code lines that hadn't yet been released to production. While this graph gave immediate feedback about the "drift" of development from operations (live), it's not something easily digested by upper management. What do these folks really care about? It's all about the releases, silly!
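For reference, here's one way such an unreleased-line count could be measured: a minimal sketch assuming releases are marked with git tags (the `v*` tag pattern is an assumption; the original post may have used a different mechanism). The demo builds a throwaway repository so the commands run anywhere; in your own repo only the last two commands matter.

```shell
# Demo in a throwaway repository; pretend v1.0 is live in production.
cd "$(mktemp -d)"
git init -q .
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "released work"
git tag v1.0
echo 'new feature code' > feature.txt
git add feature.txt
git -c user.email=you@example.com -c user.name=you commit -q -m "unreleased work"

# The "inventory": lines changed since the last production release tag
last_release=$(git describe --tags --abbrev=0 --match 'v*')
git diff --shortstat "$last_release"..HEAD
```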
Commits are “Units of Work”
Easily understood and very simple to calculate.
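Counting those units really is a one-liner with `git rev-list`, assuming releases are tagged (the tag names below are made up for the demo; the throwaway repository just makes the sketch runnable anywhere):

```shell
# Throwaway demo repository; in your own repo only the last command matters.
cd "$(mktemp -d)"
git init -q .
c() { git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "$1"; }
c "initial work"; git tag v1.0
c "feature a"
c "bugfix b";     git tag v1.1

# Units of work (commits) that went into release v1.1:
git rev-list --count v1.0..v1.1
```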
What was in the release?
Clicking on the release shows the Pivotal Tracker stories worked on and the actual commit revision ID in GitHub.
The PT story types also give a good indication of what the main work type was – user features, chores, or bugfixes.
Cause and Effect
Now you're able to show the organization quite clearly how often you do releases (and even what you're releasing). If tagging releases is 'crawling', then you are now at the 'walking' phase. What would it take to really stretch your legs and get 'running'?
Showing the impact of a release is the next logical step. Did you manage to grow traffic? Has the response time of the site improved? See any more bugs after the release? These are all fairly easy to track, and plotting them as a backdrop to the releases demonstrates the real impact your team is having on the bottom line.
What other metrics make sense here? What would you expect to see? And what would be really kick-ass? Share your thoughts on Twitter using the #codeinventory hashtag!
A big thanks to Atlassian for allowing me to post this series!!
There is NO reason not to use a version control system while developing Puppet manifests/modules. That should be stating the obvious. It allows you to go back in time, share things more easily, and track your changes. There is a lot of information out there on how to work with git or any other system, but here are a few tips that might help you develop modules:
Tip 1: give each module its own repo and use a superproject to join them
In a lot of blog posts, and even in the excellent Pro Puppet book, I see people checking in their entire environment directory into version control.
I'm all for version control, but if you manage your modules dir as one flat repository, you lose the ability to easily update and share modules from the Forge. In essence you are making a copy that starts living its own life.
The idea goes like this:
/etc/puppet/environments/development/modules   (super-project repo)
|-- puppet-apache   (sub-project repo)
|-- puppet-mysql    (sub-project repo)
`-- ...
The super-project repo will contain links to the submodules it uses. This allows the sub-project repos to be reused in different super-projects. For instance, puppet-modules-team1 and puppet-modules-team2 could be superprojects using different sub-modules.
Git has the concept of submodules that allows you to link a parent repository with subprojects. Further detailed documentation can be found at http://book.git-scm.com/5_submodules.html
Using this approach with puppet is nicely described at https://we.riseup.net/riseup+tech/puppet-git-submodules, with some tips at https://labs.riseup.net/code/documents/7
I was always scared away from using submodules, because things like checking in the superproject first and forgetting the submodule make part of it unusable. Let alone that adding the submodule directory with a trailing slash has bitten me a few times.
It's a good approach but it requires being awake :)
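For comparison with the Mercurial recipe below, this is roughly what the plain git submodule setup looks like. The sketch uses a throwaway local repo as a stand-in for the remote so it runs anywhere; in practice you would pass the GitHub URL (e.g. git://github.com/puppet-modules/puppet-apache.git) to git submodule add:

```shell
work=$(mktemp -d)

# Stand-in for the remote puppet-apache repository
git init -q "$work/puppet-apache"
git -C "$work/puppet-apache" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial module commit"

# The super-project that links the module in as a submodule
git init -q "$work/development"
cd "$work/development"
mkdir modules
git -c protocol.file.allow=always submodule --quiet add "$work/puppet-apache" modules/apache
git -c user.email=you@example.com -c user.name=you commit -q -m "add apache submodule"

cat .gitmodules   # records the path -> URL mapping for each submodule
```

(The protocol.file.allow setting is only needed because the demo clones a submodule from a local path; with a real remote URL it can be dropped.)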
To create a development repo with a specific apache submodule:
$ cd /etc/puppet/environments/
$ hg init development
$ cd development/
$ mkdir modules
$ echo 'modules/apache = [git]git://github.com/puppet-modules/puppet-apache.git' > .hgsub
$ hg add .hgsub
$ git clone git://github.com/puppet-modules/puppet-apache.git modules/apache
$ hg ci -mapache
Now if we check this out into our test environment, we'll see that it automatically checks out the submodules.
$ cd /etc/puppet/environments
$ hg clone development/ test
updating to branch default
resolving manifests
getting .hgsub
getting .hgsubstate
cloning subrepo modules/apache from git://github.com/puppet-modules/puppet-apache.git
remote: Counting objects: 177, done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 177 (delta 59), reused 168 (delta 52)
Receiving objects: 100% (177/177), 22.97 KiB, done.
Resolving deltas: 100% (59/59), done.
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
I found this workflow:
- less scary: no way to wrongly add directories
- it's handy that you can add svn, git, and hg subrepositories alike
- submodules are checked out by default (no git submodule init/update needed)
- by default hg wants to commit to all subrepos when committing to the super-project repo. These recursive commits can be disabled with the ui.commitsubrepos configuration setting introduced in Mercurial 1.8.
- More information on Mercurial subrepositories can be found at: http://mercurial.selenic.com/wiki/Subrepository?action=show&redirect=subrepos#Synchronizing_in_subrepositories
- Or for a complete run through : http://www.accidentalhacker.com/using-mercurial-subrepositories/
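To disable those recursive commits, the ui.commitsubrepos setting mentioned above goes in your hgrc (a config sketch; requires Mercurial 1.8 or later):

```ini
# ~/.hgrc (global) or the repository's .hg/hgrc (local)
[ui]
commitsubrepos = False
```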
Tip 2: Think about your git workflow
In many of the examples you'll see people just commit to the 'master' branch. This works, of course, but if you are working on different modules/features, it's best to think about your git workflow. There are a lot of blog posts describing how to work in feature branches. Still, I found it hard to remember the commands for branching, deleting old branches, and so on.
Enter git-flow: it's basically a set of git helpers that take away a lot of the pain of working on hotfixes, features, and releases. The usage is pretty easy:
# Create the master/develop/release/hotfix branches
$ git flow init
# Start working on a feature (branched from develop)
$ git flow feature start feature1
... do some work
$ git add ...somework...
$ git commit -m "somework feature1"
# This will merge feature1 back to develop
$ git flow feature finish feature1
# Now let's start a release
$ git flow release start release1
... do some work
$ git add ...somework...
$ git commit -m "release1"
# This will merge release1 into master
$ git flow release finish release1
The whole idea is described in detail at http://nvie.com/posts/a-successful-git-branching-model/ . And the following video will show you how it works.
Tip 3: Use pre/post-commit hooks
Even with the awesome editor support we previously described, it's still easy to miss a semicolon or write invalid Puppet syntax.
It's good practice to verify the syntax of your Puppet manifests before committing them to version control. This is well described on the Puppet Labs version control page.
In essence, before you commit it will execute puppet --parseonly to check if the syntax is correct.
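As a concrete sketch, a pre-commit hook along those lines could look like the following. The PUPPET_CHECK override is my own addition (so the validator can be swapped, e.g. for puppet parser validate on newer Puppet versions); everything else follows the puppet --parseonly approach described above:

```shell
# Sketch of .git/hooks/pre-commit: refuse the commit if any staged
# Puppet manifest fails the syntax check.
check_staged_manifests() {
    for file in $(git diff --cached --name-only --diff-filter=ACM | grep '\.pp$'); do
        if ! ${PUPPET_CHECK:-puppet --parseonly} "$file"; then
            echo "Puppet syntax error in $file -- aborting commit" >&2
            return 1
        fi
    done
    return 0
}

# In the actual hook file the last line would be:
#   check_staged_manifests || exit 1
```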
A lesser used technique is to run the same check on post-commit:
Instead of running your master repository out of /etc/puppet/environment [PROD], you check in to an intermediate repository [CHECK].
developer -> LOCAL REPO (pre-commit) -> push -> CHECK REPO -> (post-commit) -> PROD REPO
In the post commit you can:
- also verify the syntax, in case someone didn't check before committing.
- if all checks are successful, push the repo to the PROD directories
This helps overcome the problem of your puppetmaster doing a run while the repo is in an inconsistent state.
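A sketch of what the CHECK-side hook could do; the function name, the paths, and the PUPPET_CHECK override are assumptions for illustration, not the exact setup from the linked article:

```shell
# Validate every manifest in a checkout of the CHECK repo, and push to
# the PROD repo only if the whole tree parses.
validate_and_promote() {
    workdir=$1       # working checkout of the CHECK repo
    prod_repo=$2     # e.g. a bare repo the PROD environment pulls from
    for file in $(find "$workdir" -name '*.pp'); do
        if ! ${PUPPET_CHECK:-puppet --parseonly} "$file"; then
            echo "invalid manifest $file -- not promoting to PROD" >&2
            return 1
        fi
    done
    git -C "$workdir" push -q "$prod_repo" HEAD
}
```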
More details are described at Git workflow and Puppet Environments
More to come
The next post will be about different ways to test your Puppet manifests.
I was looking at Jesse Newland’s excellent knife-github-cookbooks tool and thought that this would be an excellent tool to write for Puppet too. With the release of Puppet Faces this is now incredibly easy to do. I created a tool called Puppet-GitHub-Face that allows you to install and compare modules from GitHub to modules in your Puppet module path.
Currently Puppet Faces are only supported in Puppet 2.7.0 and later (2.7.0 is currently an RC but will hopefully be out soon!) and distributing them requires placing them into Puppet’s load path. They’ll shortly be distributable as Puppet modules using pluginsync.
So to install the GitHub face for now clone the git repository:
$ git clone git://github.com/jamtur01/puppet-github-face.git
Copy the relevant files into your Puppet master's path (the example below is for a source-based install on Debian/Ubuntu).
$ cp lib/puppet/application/github.rb /usr/local/lib/site_ruby/1.8/puppet/application/github.rb
$ cp lib/puppet/face/github.rb /usr/local/lib/site_ruby/1.8/puppet/face/github.rb
You can then see if the GitHub Face is installed like so:
$ puppet help
You should see the GitHub face listed in the available Puppet commands, and you can now use it to install and compare installed modules. Let's start by installing a module from GitHub, for example my puppet-httpauth module.
$ puppet github install --user jamtur01 --repo puppet-httpauth
This command will connect to GitHub, download the puppet-httpauth module, and install it into your module path as the httpauth module (the Face automatically strips prefixes and suffixes from a module name, for example removing puppet-, puppet-module-, and -module from a repository name).
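The stripping rules can be illustrated with a small shell sketch (the Face itself does this internally in Ruby; the function below is purely for illustration):

```shell
# Strip common prefixes/suffixes from a repository name to get the module name.
module_name() {
    echo "$1" | sed -e 's/^puppet-module-//' -e 's/^puppet-//' -e 's/-module$//'
}

module_name puppet-httpauth        # httpauth
module_name puppet-module-apache   # apache
module_name mysql-module           # mysql
```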
Warning! If a module of the same name is already in your path, it'll get deleted and replaced with the new module.
You can also compare the state of a currently installed module with its GitHub parent, like so:
$ puppet github compare --user jamtur01 --repo puppet-httpauth
This will return unified diff output showing any differences between the currently installed module and the module upstream on GitHub.
Hope someone finds this useful!