Years ago, when I first started using GTD, the work/life management system pioneered by David Allen, my first cut at a GTD implementation was the Hipster PDA. The HPDA (called hipster more because it fits in one's hip pocket than because it's beloved of node.js developers in Shoreditch) is the ultimate in lo-fi technology. It's nothing more than a bunch of 5x3 index cards, a binder clip, a biro, and optionally a highlighter.
I stopped using the hipster when I bought my first MacBook. Beguiled as I was by shiny new technology, tags, search, and multi-device synchronisation, I abandoned my index cards and went digital! I've been digital more or less ever since. My main tools have been Things and Omnifocus, although, as an avid Emacs user, I also had a brief flirtation with using org-mode for GTD. Here's the thing: on reflection, I'm no better organised, no less stressed, and no more productive with these digital tools than I was with my trusty hipster. If anything, I would say I've been less productive, on the whole. This isn't a criticism of GTD itself. I'm much, much less stressed, better organised, and more productive than I was before I put GTD into practice. It's just that these days I'm not so sure that going digital was really such a smart move, for me.
Let's back up a few steps and review the core required components of a GTD system. What do we need?
- A way to reassure our brains that each incomplete item has been captured in a trusted system somewhere where we will see it again
- Somewhere to write lists
- Somewhere to capture appointments (what GTD calls the 'hard landscape')
- Erm... that's it
So hang on: we don't need tags, reminders, multi-device synchronisation, search, pretty formatting, a beautiful UI, or any of the frippery that electronic tools offer us. Note – I'm not saying that these additional features don't offer value. What I am saying is that they most definitely aren't required. That is, we can be perfectly, or even optimally, productive with nothing more than a pen, paper, and diary. What's more, I have reason to believe there may be significant advantages to the lo-fi approach.
One of the great temptations with a digital tool is to fill it up with data. Unchecked, what really needs to be no more than a one line definition of next action, or project title, soon becomes a dumping ground for ideas, actions, notes, and other gubbins. Now, one might argue that this is a great place for what GTD would call project support material. I'm unconvinced. Most projects, under GTD terms, are pretty simple. If they need more planning or project support, there's no reason not to create either a physical folder for them, or an electronic one, with supporting documents, spreadsheets, pictures etc. To my mind this is a separate thing from the core infrastructure of simple lists. It's all too easy to dump stuff in the project or action and never see it again.
This brings me to my second disadvantage of a digital tool. I haven't settled on an appropriate word for this phenomenon, so I'll just try to describe its characteristics. When using an analogue, lo-fi, manual process, for example pen and paper, or indeed a wall packed with sticky notes (it's noteworthy that this phenomenon applies equally to the world of kanban/scrum), we are gifted with the very tactile benefit of being forced to both see and feel the extent of our commitments, internal or otherwise. I have a friend with more than 1000 open items in Things. I think I can fit about 15 items on one side of a 5x3 index card, if I keep my handwriting very, very small. So that's 30 if I use both sides. I think by the time I had 33 cards in my pocket, I'd be doing some pretty brutal pruning and someday/maybe populating. The thing is, digital backlogs just don't feel that burdensome. You don't get the pressure-valve effect, the feedback that tells you you need to stop and rethink. Besides, there's just something joyously present about a pocket full of cards: I tend to review them, shuffle them, and become intimately familiar with their contents. And there's something about the weekly review ritual of spreading a bunch of cards out on a desk, or the floor, that's just undeniably both rewarding and tremendously fun!
OK, so I'm romanticising - of course I am. There are disadvantages to a manual approach. Disadvantages that are often the very reason that computers became so popular, and which account for the attractiveness and pervasive use of tools like Things or Omnifocus. The most pressing one is 'backup'. One's GTD system is vitally important. It's the epicentre of one's ability to be productive. If one were to lose it, one would be, so to speak, 'screwed'. Now, with a modern, digital tool, of course one could lose one's phone, iPad, or even the bag containing one's computer, but the data is often automatically synced between devices or to a 'cloud' service. Basically, one's data is pretty safe.
Now, in days of yore, my backup procedure was a photocopier. Once a week, at review time, I'd photocopy all my cards. It was primitive, clumsy even, but it worked. Furthermore, because I knew that if I lost my HPDA I'd be screwed, I was very, very assiduous in my weekly backup, and thus weekly review. Additionally, I made absolutely sure I never, ever, ever lost the HPDA. For all my careful backups, I never needed to use them. By contrast I've lost my phone more than once and have friends who have been unlucky enough to be the victims of theft. An iPhone 5 is somewhat more desirable and therefore a rather more stealable item than a wadge of index cards and a binder clip! Furthermore, there's no rule that states that a lo-fi, analogue hipster-pda user can't use modern digital technology as well. With applications such as Evernote, and hardware such as a Doxie, making regular backups of index cards is now remarkably easy. So while there does seem to be at least a superficial data security issue, I'm not convinced it's actually a deal-breaker.
Another obvious advantage of a digital GTD system, or if you prefer, disadvantage of the HPDA, is the speed of capture. With a single keypress, I can be typing and entering an open loop into my system. Obviously this is an order of magnitude faster than getting out my pile of cards, finding the right ones, finding my pen, and writing down the thought I had. The speed of capture is even greater if the trigger was something I was reading online, or an email I received. A simple highlight of relevant text, and a copy and paste is enough to capture your incomplete. But... this speed of capture is deceptive, in a couple of ways. First, the capture is so quick and easy, it disables any cognitive filtering. If I have to get out a pen and card, and write something down, that 1 to 2 second delay (and let's be realistic, we're not really talking about minutes versus milliseconds) is often enough for me to think: do I really have or want a commitment to this? Am I really going to action this? Can I just let this thought go? Should it go into a someday/maybe with no immediate action? Or can I just wait and see if this thought returns with more intention?
Relatedly, ease of capture with an electronic system can lead to some bad habits. I don't know whether it's because my formative GTD habits were built on lo-fi technology, but if I capture an incomplete using my hipster, I'm very, very likely, nearly certain, to think about and capture the next action at the same time. I tend to find that with the electronic system, the combination of the removed cognitive filter, lack of context switch, and the sheer ease of capture results in me creating several dozen open loops, without an associated action, and suddenly, by the end of the day, I have a list of 30 ill-defined partial thoughts on a list called inbox. This is starting to look a lot like David Allen's amorphous blob of undoability. I thought this was exactly what we were trying to escape!
Another interesting side effect of the relatively slow speed of capture is that when I have to go through the tactile, manual step of finding the two cards (open loops and context) and capturing the project and next action, I am far, far more likely to exercise the two-minute rule. There just seems to be something about the process which kicks my brain into thinking: you're going to write this down in two places, and spend a few seconds working out what the next action is, when the next action is probably not much more of an effort than you'll spend tracking this action. How about you just do it now?
So, from my perspective, as a seasoned GTD practitioner, I truly think there is a significant amount of experiential evidence, and theoretical justification, to support the return of the HPDA. Please understand – this is in no way an anti-digital tirade. My experiences will not match yours, my mind doesn't work like yours, and quite likely you have your own hacks, workflows, processes, and disciplines to help you make your digital systems work perfectly for you. As for me, I'm going to try to switch back to lo-fi and analogue, and I'll report back in about a month and tell you how things went. Bye for now!
A big thanks to Atlassian for allowing me to post this series!!
There is NO reason not to use a version control system while developing Puppet manifests/modules. That should be stating the obvious. It allows you to go back in time, share things more easily, and track your changes. There is a lot of information out there on how to work with git or any other system. But here are a few tips that might help you when developing modules:
Tip 1: give each module its own repo and use a superproject to join them
In a lot of blog posts, and even in the excellent Pro Puppet book, I see people checking their entire environment directory into version control.
I'm all for version control, but if you manage your modules dir as one flat repository, you lose the ability to easily update and share modules from the forge. In essence you are making a copy that starts living its own life.
The idea goes like this:
/etc/puppet/environments/development/modules (super-project repo)
  -- puppet-apache (sub-project repo)
  -- puppet-mysql (sub-project repo)
  ...
The super-project repo will contain links to the submodules it uses. This allows the reuse of the sub-project repos in different super-projects. For instance, puppet-modules-team1 and puppet-modules-team2 could be superprojects and use different sub-modules.
Git has the concept of submodules that allows you to link a parent repository with subprojects. Further detailed documentation can be found at http://book.git-scm.com/5_submodules.html
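To make the super-/sub-project idea concrete, here is a hedged sketch of linking a module into a super-project with git submodules. The helper function name and the repo locations are invented for illustration; point them at your own repositories (the `-c protocol.file.allow=always` is only needed when the submodule URL is a local path on recent git versions):

```shell
# Illustrative sketch: record puppet-apache as a submodule of a super-project.
add_apache_submodule() {
    superdir=$1
    apache_repo=$2
    cd "$superdir"
    # Records the URL in .gitmodules and clones the module under modules/
    git -c protocol.file.allow=always submodule add "$apache_repo" modules/apache
    git commit -m "add puppet-apache as a submodule"
}

# A fresh clone of the super-project needs an explicit init/update
# to populate the submodules:
#   git clone <superproject-url> development
#   cd development && git submodule init && git submodule update
```

Note that cloning the super-project does not fetch the submodules by itself; the `submodule init`/`submodule update` step is easy to forget, which is one of the gotchas discussed below.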
Using this approach with puppet is nicely described at https://we.riseup.net/riseup+tech/puppet-git-submodules, with some tips at https://labs.riseup.net/code/documents/7
I was always scared away from using submodules, because things like checking in the superproject first and forgetting the submodule makes part of it unusable. And adding the submodule directory with a trailing slash has bitten me a few times.
It's a good approach but it requires being awake :)
To create a development repo with a specific apache submodule:
$ cd /etc/puppet/environments/
$ hg init development
$ cd development/
$ mkdir modules
$ echo 'modules/apache = [git]git://github.com/puppet-modules/puppet-apache.git' > .hgsub
$ hg add .hgsub
$ git clone git://github.com/puppet-modules/puppet-apache.git modules/apache
$ hg ci -mapache
Now if we check this out to our test environment, we'll see that the submodules are automatically checked out.
$ cd /etc/puppet/environments
$ hg clone development/ test
updating to branch default
resolving manifests
getting .hgsub
getting .hgsubstate
cloning subrepo modules/apache from git://github.com/puppet-modules/puppet-apache.git
remote: Counting objects: 177, done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 177 (delta 59), reused 168 (delta 52)
Receiving objects: 100% (177/177), 22.97 KiB, done.
Resolving deltas: 100% (59/59), done.
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
I found this workflow:
- less scary: no way to wrongly add directories
- handy: you can add svn, git, and hg subrepositories
- submodules are checked out by default (no git submodules init, update)
- by default hg wants to commit to all subrepos when committing to the super-project repo. These recursive commits can be disabled with the local ui.commitsubrepos configuration setting, introduced in Mercurial 1.8.
- More information on Mercurial subrepositories can be found at: http://mercurial.selenic.com/wiki/Subrepository?action=show&redirect=subrepos#Synchronizing_in_subrepositories
- Or for a complete run through : http://www.accidentalhacker.com/using-mercurial-subrepositories/
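For reference, the ui.commitsubrepos setting mentioned above lives in an hgrc file. A minimal sketch (Mercurial 1.8 or later; the exact file you put it in, per-repo or per-user, is up to you):

```ini
# .hg/hgrc of the super-project repo (or ~/.hgrc)
[ui]
# don't recursively commit dirty subrepos when committing the super-project
commitsubrepos = False
```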
Tip 2: Think about your git workflow
In many of the examples you'll see people just commit to the 'master' branch. This works, of course, but if you are working on different modules/features, it's best to think about your git workflow. There are a lot of blog posts describing how to work in feature branches. Still, I found it hard to remember the commands for branching, deleting old branches, and so on.
One tool that helps here is git-flow: basically a set of git helpers that take away a lot of the pain of working on hotfixes, features, and releases. The usage is pretty easy:
# Create the master/developer/release/hotfix branches
$ git flow init

# Start working on a feature (branched from develop)
$ git flow feature start feature1
... do some work
$ git add ...somework...
$ git commit -m "somework feature1"

# This will merge feature1 back to develop
$ git flow feature finish feature1

# Now lets start a release
$ git flow release start release1
... do some work
$ git add ...somework...
$ git commit -m "release1"

# This will merge release1 into master
$ git flow release finish release1
The whole idea is described in detail at http://nvie.com/posts/a-successful-git-branching-model/ . And the following video will show you how it works.
More information can also be found at:
Tip 3: Use pre/post-commit hooks
Even with the awesome editor support we previously described, it's still easy to miss a semicolon, or get the Puppet syntax wrong.
It's good practice to verify the syntax of your puppet manifests before committing them to version control. This is well described on the puppetlabs version control page.
In essence, before you commit, it will execute puppet --parseonly to check if the syntax is correct.
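A minimal pre-commit hook along these lines might look like the sketch below. This is hypothetical, not the puppetlabs version: it assumes puppet is on the PATH (on current Puppet versions the equivalent command is puppet parser validate), and the helper function name is made up:

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): refuse the commit when any staged
# .pp manifest fails Puppet's parse-only check.
check_staged_manifests() {
    status=0
    # Only look at added/copied/modified staged files ending in .pp
    for manifest in $(git diff --cached --name-only --diff-filter=ACM | grep '\.pp$'); do
        if ! puppet --parseonly "$manifest"; then
            echo "Parse error in $manifest, aborting commit" >&2
            status=1
        fi
    done
    return $status
}
check_staged_manifests
```

Drop it into .git/hooks/pre-commit, make it executable, and a commit containing a broken manifest gets rejected before it ever reaches the repository.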
A lesser used technique is to run the same check on post-commit:
Instead of running your master repository out of /etc/puppet/environment [PROD], you check code in to an intermediate repository [CHECK].
developer -> LOCAL REPO (pre-commit) -> push -> CHECK REPO -> (post-commit) -> PROD REPO
In the post commit you can:
- also verify the syntax, in case someone didn't check before committing.
- if all is successful, push the repo to the PROD directories
This helps overcome the problem of your puppetmaster doing a run while the repo is in an incorrect state.
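The post-commit side could be sketched as below. Everything here is illustrative: the PROD path, the helper name, and the assumption that the hook runs inside a non-bare CHECK repo; the work-tree checkout trick is a common deploy pattern, not something prescribed by the original article:

```shell
#!/bin/sh
# hooks/post-commit on the CHECK repo (sketch); the PROD path is illustrative.
PROD_DIR=${PROD_DIR:-/etc/puppet/environments/production}

publish_if_valid() {
    # Re-verify every tracked manifest, in case someone bypassed pre-commit
    for manifest in $(git ls-files | grep '\.pp$'); do
        if ! puppet --parseonly "$manifest"; then
            echo "Parse error in $manifest, not updating PROD" >&2
            return 1
        fi
    done
    # Everything parsed cleanly: refresh the PROD working copy from HEAD
    git --work-tree="$PROD_DIR" checkout -f
}

# In the real hook, finish with: publish_if_valid
```

This way the PROD directory only ever moves forward to a state whose manifests all parse.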
More details are described at Git workflow and Puppet Environments
More to come
The next post will be about different ways to test your puppet manifests.
I've spent some time recently on setting up my environment to work more productively on writing puppet manifests. This blog post highlights some of the findings that got me more productive at editing puppet files and modules. Some older information can be found at Editor Tips on the puppetlabs website.
Tip 1: Syntax highlighting, snippet completion
Puppet syntax is very specific; it's important to get clues about missing curly braces, semicolons, etc. as fast as possible. There is support for this in the most common editors:
@Masterzen has created a textmate bundle for use with puppet. You can find it at https://github.com/masterzen/puppet-textmate-bundle.
Michael Halligan describes how to install it from the commandline
mkdir -p /Library/Application\ Support/TextMate/Bundles
cd /Library/Application\ Support/TextMate/Bundles
git clone git://gitorious.org/git-tmbundle/mainline.git Git.tmbundle
git clone https://github.com/masterzen/puppet-textmate-bundle.git Puppet.tmbundle
git clone https://github.com/drnic/Chef.tmbundle.git Chef.tmbundle
osascript -e 'tell app "TextMate" to reload bundles'
If textmate is not your thing, here is how you can pimp up your vim:
When you look around for puppet/vim integration there seem to have been some reincarnations:
- The first option is just setting the syntax of any .pp file to ruby syntax
- The second option, as Garett Honeycutt describes, is a more elaborate version of highlighting .pp files (originally written by Luke Kanies); this file is distributed with puppet itself.
- Stick gave us even more advanced tips at:
- R.I. Pienaar showed us how to use Snipmate with vim and puppet :
- His snippets can be found at http://www.devco.net/code/puppet.snippets
To use the vim-puppet plugin, your best bet is to use pathogen, written by Tim Pope. I've followed the instructions at http://tammersaleh.com/posts/the-modern-vim-config-with-pathogen.
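With pathogen in place, the ~/.vimrc side is tiny. A sketch, assuming pathogen.vim sits in ~/.vim/autoload and plugins are cloned under ~/.vim/bundle (on older pathogen versions the call was pathogen#runtime_append_all_bundles() rather than pathogen#infect()):

```vim
" Load every plugin found under ~/.vim/bundle
call pathogen#infect()
syntax on
filetype plugin indent on
```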
I've enabled the following plugins in my update_bundles script
Most notable plugins:
- Tabular gives you automatic => alignment
- Syntastic gives you syntax feedback while you edit files
- Snipmate gives you the snippets on tab expansion
- Specky gives you functionality for rspec files
- vim-ruby gives you extra functionality for ruby files
- vim-cucumber gives you functionality for cucumber files
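As a tiny illustration of what the Tabular plugin buys you: visually select a resource body and run :Tabularize /=> to line the arrows up. The resource below is invented purely for the example:

```puppet
# Before: ragged arrows
file { '/etc/motd':
  ensure => file,
  owner => 'root',
  mode => '0644',
}

# After :Tabularize /=> on the resource body
file { '/etc/motd':
  ensure => file,
  owner  => 'root',
  mode   => '0644',
}
```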
For more information on the vim-puppet project go to:
The snippets that are expanded in the vim-puppet plugin can be found at:
Tip 2: don't create the module structure by hand
I keep forgetting the correct structure, files, etc. when I create a new module. Luckily there is an easy way to generate a puppet module structure using the puppet-module gem:
$ gem install puppet-module
$ puppet-module
Tasks:
  puppet-module build [PATH_TO_MODULE]                 # Build a module for release
  puppet-module changelog                              # Display the changelog for this tool
  puppet-module changes [PATH_TO_MODULE]               # Show modified files in an installed module
  puppet-module clean                                  # Clears module cache for all repositories
  puppet-module generate USERNAME-MODNAME              # Generate boilerplate for a new module
  puppet-module help [TASK]                            # Describe available tasks or one specific task
  puppet-module install MODULE_NAME_OR_FILE [OPTIONS]  # Install a module (eg, 'user-modname') from a repositor...
  puppet-module repository                             # Show currently configured repository
  puppet-module search TERM                            # Search the module repository for a module matching TERM
  puppet-module usage                                  # Display detailed usage documentation for this tool
  puppet-module version                                # Show the version information for this tool

Options:
  -c, [--config=CONFIG]  # Configuration file

$ puppet-module generate puppetmodule-apache
=========================================================================================
Generating module at /Users/patrick/demo-puppet/modules/puppetmodule-apache
-----------------------------------------------------------------------------------------
puppetmodule-apache
puppetmodule-apache/tests
puppetmodule-apache/tests/init.pp
puppetmodule-apache/spec
puppetmodule-apache/spec/spec_helper.rb
puppetmodule-apache/spec/spec.opts
puppetmodule-apache/README
puppetmodule-apache/Modulefile
puppetmodule-apache/metadata.json
puppetmodule-apache/manifests
puppetmodule-apache/manifests/init.pp
Tip 3: Geppetto, a Puppet IDE
Note: this is NOT related to the Gepetto (one P) project by Alban Peignier
James Turnbull was so kind as to make a quick screencast on how it works:
But remember it's Java based, so it might take a while to fire it up :)