***
Update: new version pushed to github. If you have questions, you may shoot me an email at jeremy at etherized dot com (instead of being forced to use my sad little comment system).
***

It's no surprise that puppet can be incredibly useful for managing your nagios config. One common approach is to use puppet's DSL to configure nagios checks using the included nagios_* types, which is a very powerful technique when combined with collected/exported resources; this allows you to export checks from each host you want to monitor and then have those collected on the nagios server. In this manner, hosts automatically show up in nagios once their collection resource has been processed.

I like this approach, but I've been sold on using check_mk along with nagios for a while now, so I need to come up with something a bit different.

For those who are unaware, check_mk consists of an agent (which runs on systems to be monitored by nagios) and a poller (which runs on the nagios server). The idea is that you never need to touch the nagios config files directly; the poller autodetects services and creates a valid nagios config all on its own. In addition, you get a fancy replacement web UI that doesn't look like a time warp from 1995.

In order to work, though, check_mk needs a list of nodes to be monitored and (optionally) a list of tags that describe those nodes. The tags are used by check_mk to apply configurations to resources; for example, you might tag a "production" server with the "production" tag, and configure check_mk to enable 24x7 paging on all services that are so tagged.

So, you can do all this manually, but puppet has all the information you need already. Here's the plan: create a puppet module that all to-be-monitored clients will include, have that module export a config snippet that describes each node, and then have puppet collect those snippets on the nagios server.

I've created such a module, and you can find it at my github; an explanation of the module follows below, for the curious.

The checkmk class


The first bit is just boilerplate I use on all my modules, which allows them to be disabled with a variable. This is mainly to work around limitations of the dashboard node classifier; it's easy to apply a resource to a group of nodes, but it's not so easy to then exclude that resource from a particular node. For this reason I use a "wrapper" class instead of calling the checkmk::agent class directly, and I rely on that ugly little magic variable to disable it as needed.
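For illustration, a minimal sketch of what such a wrapper might look like; the $checkmk_disable variable name here is hypothetical, not necessarily what the module on github uses:

class checkmk {
  # the node classifier can set this "magic" variable on a node or group
  # to switch the whole module off for that node
  if $checkmk_disable == 'true' {
    notice("check_mk explicitly disabled on $fqdn")
  } else {
    include checkmk::agent
  }
}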

The checkmk::agent class


This is textbook basic puppet stuff, so I won't step through it all. The interesting bits are the crazy exported resources:
   @@file { "$mk_confdir/$fqdn.mk":

content => template( "checkmk/collection.mk.erb"),
notify => Exec["checkmk_inventory_$fqdn"],
tag => "checkmk_conf",
}
@@exec { "checkmk_inventory_$fqdn":
command => "/usr/bin/check_mk -I $fqdn",
notify => Exec["checkmk_refresh"],
refreshonly => true,
tag => "checkmk_inventory",
}

The important thing to understand is that the exported resources are created in the scope of the client but not realized on the client; they are then available to the server, which actually realizes them. In this case, each client calling checkmk::agent will have these resources defined in puppet's stored config backend, and the nagios server will later scoop them up and process them. Exported resources are cousins of virtual resources, and the syntax in the DSL is similar; you simply precede the resource type with "@@".

You will notice that I'm both creating a file resource and an exec resource. In my initial version of this module, I did not have per-node exec resources, and whenever a node changed I triggered a re-inventory of all nodes. This proved to be a bit excessive; using per-node exec statements allows you to inventory only the nodes that change.

The tricks in the template require a little explaining too. I like to be able to add check_mk tags from variables assigned in the node classifier, and this template takes those variables and creates valid check_mk configuration strings. The scope.to_hash.keys stuff allows me to use reflection to identify any variables whose names contain the string "check_mk_tags," and I append their corresponding values to the list of tags. This is, again, a workaround for limitations of the dashboard classifier: we want some tags to come from a higher scope while still being able to append to the list, which forces us to use multiple variables.

So, for example, I might have check_mk_tags_webserver = webserver attached to my "webserver" group in the classifier, but also check_mk_tags_paging = "paging|critical" in my "critical nodes" group; I can then place a node in both groups, and this template will smush all of the tags in both variables together (note that you should delimit your tags with a pipe if you assign multiple tags to the same variable).

The other trick is to do a DNS lookup on the fqdn of the host, and if the DNS lookup fails, hard code facter's "ipaddress" as the IP of the host. This prevents check_mk from choking on hosts with broken DNS. In addition, it tags such hosts with "brokendns," and I like to assign a corresponding hostgroup in nagios so I may easily shame whoever created such badness into fixing his problem.
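To make that concrete, here is a rough sketch of the template logic, not the exact collection.mk.erb from the module; the 'mktags' array matches what the post describes, while the broken_dns helper name is just for illustration:

<%
  # reflection: gather tags from any in-scope variables whose names
  # contain "check_mk_tags" (each value may hold several tags, "|" delimited)
  mktags = []
  scope.to_hash.keys.each do |name|
    if name.to_s.include?('check_mk_tags')
      mktags += scope.lookupvar(name).to_s.split('|')
    end
  end

  # if the fqdn doesn't resolve, tag the host "brokendns" and fall back
  # to facter's ipaddress so check_mk doesn't choke on it
  require 'resolv'
  fqdn      = scope.lookupvar('fqdn')
  ipaddress = scope.lookupvar('ipaddress')
  begin
    Resolv.getaddress(fqdn)
    broken_dns = false
  rescue Resolv::ResolvError
    mktags << 'brokendns'
    broken_dns = true
  end
-%>
all_hosts += [ "<%= fqdn %><% mktags.each do |t| %>|<%= t %><% end %>" ]
<% if broken_dns -%>
ipaddresses["<%= fqdn %>"] = "<%= ipaddress %>"
<% end -%>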

Oh, one last thing; there's nothing stopping you from using facts or any other puppet variable as a tag. Simply append your fact or variable to the 'mktags' array in the ERB template, and you're good to go!

The checkmk::server class


My server class includes an exec resource (which does a check_mk inventory) as well as the crucial collection of the exported resources above. Note the syntax here to collect resources based on tags, which allows you to be selective when realizing resources.
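In sketch form it looks roughly like this (hedged: the refresh command shown is my guess at what the module runs, and the tags match the agent class above):

class checkmk::server {
  # recompile the check_mk/nagios config and reload nagios when notified
  exec { "checkmk_refresh":
    command     => "/usr/bin/check_mk -O",
    refreshonly => true,
  }

  # realize every exported per-node config snippet and inventory exec
  File <<| tag == 'checkmk_conf' |>>
  Exec <<| tag == 'checkmk_inventory' |>>
}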

So, that's it! Happy check_mk'ing!

The ARC


ZFS uses what's known as "adaptive replacement cache" (almost always just called "the arc") to hold both metadata and filesystem data in fast storage, which can dramatically speed up read operations for cached objects. When you start using zfs, this all happens behind the scenes in memory; Solaris or OpenIndiana dedicates a chunk of RAM for the ARC, and it reduces the size of the ARC when memory pressure demands it. Ben Rockwood wrote a good introduction to the ARC and a tool you can use to examine its state, so if you're interested in more details be sure to check that out.

Now, in addition to the ARC that sits in RAM, ZFS also has a facility to use level two adaptive replacement cache ("l2arc") on other "fast" storage. You can, for example, attach a fast SSD to a pool as l2arc, and ZFS will start using it as secondary cache. It won't be as fast as RAM, of course, but it's still potentially much faster than spinning rust, especially for random I/O.

In most cases you can just ignore the ARC, and happily reap the benefits of faster reads from cache. Adding additional RAM or assigning additional l2arc drives enables the ARC to cache more; a nice bonus for sure, but it's not the end of the world if you run out of cache.

There is, however, a critical exception to this: ARC becomes absolutely vital when dedupe is enabled. Before you even think about turning dedupe on, you'd better start thinking about the size of your ARC.

Dedupe and the ARC


When you turn on dedupe, you add a massive chunk of metadata known as the dedupe table ("DDT") into the equation. The dedupe table is where the magic happens; ZFS uses the table to identify duplicated blocks. Any writes with dedupe enabled will require lookups to this table first.

The reason you need to be thinking about ARC when the DDT is in play is this: the DDT is stored in the ARC. If your ARC can't fit the entire DDT, then every time you write (or free) deduplicated data, zfs will have to retrieve DDT entries from spinning rust. The nature of the table makes this even more of a disaster, since it's a whole lot of small, random I/O - which is something normal hard drives are very bad at.

So heed this warning: if you turn on dedupe without enough RAM to cache the DDT in ARC, your write speeds can decrease by an order of magnitude (or more).

So, how much memory do you need in order to effectively use dedupe? The common answer on mailing lists or IRC is "as much as you can afford," and in practice that's probably the best advice you'll get. There are calculations you can make based on data retrieved from undocumented commands, but as a starting point you should count on at least 1 GB of ARC per 1 TB of data.
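If you'd rather estimate than guess, zdb can show you the projected size of the table. Assuming a pool named "tank", something like the following works; each DDT entry costs very roughly 320 bytes of ARC, so multiply that by the total number of allocated blocks reported:

# simulate dedup on an existing pool and print the projected DDT histogram
zdb -S tank

# on a pool that already has dedupe enabled, show the real DDT statistics
zdb -DD tank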

One frustrating aspect of this scenario is that it's very difficult to see what your DDT is really doing. ARC data in general is not exposed to the user by tools that come with OI or Solaris; indeed, one must use third party tools such as arc_summary, arcstat, or sysstat to see what's going on at all.

One thing you can do to potentially save yourself a world of pain is to ensure you have SSDs for l2arc. We really want the DDT in RAM, but having it on SSD will prevent the system from becoming completely useless if memory is exhausted, so it's a great idea to have SSD l2arc devices assigned to any pool that you want to dedupe. Unlike the in-memory ARC, we do have some direct visibility into the l2arc, provided by the zpool iostat utility:

zpool iostat -v [pool]

Caching strategy


When I added l2arc to my system and turned on dedupe, I paid very close attention to my cache usage. There are two tunables at the zfs dataset level which determine what ends up in the ARC: 'primarycache' and 'secondarycache'.

The possible values for these options are 'all', 'off', and 'metadata'. You can use this to decide selectively whether you want caching on your different layers of ARC; 'primarycache' is RAM, and 'secondarycache' is l2arc. The DDT is metadata, so the most paranoid approach is to set both of these to 'metadata', which will ensure that the DDT always has room to exist.
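These are ordinary dataset properties, so they're managed with zfs set/get; for example, for a hypothetical dataset named tank/data:

# cache only metadata (and therefore the DDT) in RAM and in the l2arc
zfs set primarycache=metadata tank/data
zfs set secondarycache=metadata tank/data

# check the current settings
zfs get primarycache,secondarycache tank/data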

The problem with this conservative approach is that you lose all the benefits of caching filesystem data. I attempted to cache only 'metadata' in the primarycache and 'all' in the secondarycache, but that doesn't do what you might expect; it turns out that, as currently implemented, you cannot cache something in secondarycache that is not cached in primarycache first. That means that if you want any filesystem caching anywhere, you must use 'primarycache = all'. You can then reserve the l2arc for metadata cache if you desire. I settled on all/all, since I noticed that my l2arc was barely used at all when reserved for metadata.

Tuning the ARC for streaming workloads


Even with both caches set to "all," I noticed that my l2arc wasn't filling very quickly. The reason for this is that, by default, the l2arc will only cache random I/O; a sane strategy, since it speeds up the most costly operations. But in my case, with lots of streaming workloads and lots of l2arc, I was missing out on some potential performance gains.

You can set a tunable in /etc/system that changes this behavior:

set zfs:l2arc_noprefetch = 0

After a reboot, streaming workloads will be cached.

Priming the cache


With the streaming caching enabled, virtually every file you read will now be cached for future use. The effect is that, for reads of cached data, your performance will be close to that of running directly off of SSD.

I use this primarily for video games to improve load times. I export my zfs storage via NFS, and on my Linux workstation I install games onto that NFS filesystem and run them in wine.

A couple of things about the l2arc: it doesn't retain the cache through reboots, and less frequently accessed data will "fall off" the cache if it becomes full. This means that, if you want to consistently have some subset of your data cached in the l2arc, you need to read it all in regularly. I do this with a daily cron job:

36 6 * * * sh -c 'tar -cvf - /vault/games/RIFT > /dev/null 2> /dev/null'
Doing so dramatically improves read performance for the cached directories, and it ensures that they will always be in the cache (even when I haven't played RIFT for several days).

Git is a powerful tool, and one that I feel Ops folks could use more extensively. Unfortunately, although git has good documentation and excellent tutorials, it mostly assumes that you're working on a software project; the needs of operations are subtly (but vitally) different.

I suppose this is what "devops" is about, but I'm reluctant to use that buzzword. What I can tell you is this: if you take the time to integrate git into your processes, you will be rewarded for your patience.

There is tons of information about how git works and how to accomplish specific tasks in git. This isn't about that; this is a real-world example with every single git command included. I only barely hint at what git is capable of, but I hope that I give you enough information to get started with using git in your puppet environment. If you have any specific questions, don't worry, because help is out there.

A trivial workflow with git for accountability and rollbacks

I've long used git for puppet (and, indeed, any other text files) in a very basic way. My workflow has been, essentially:

1) navigate to a directory that contains text files that might change

2) turn it into a git repo with:

git init; git add *; git commit -a -m "initial import"
3) whenever I make changes, I do:
git add *; git commit -a -m "insert change reason here"

This simple procedure manages to solve several problems: you have accountability for changes, the ability to roll back changes, and the ability to review differences between versions.

One very attractive feature of git (as compared to svn or cvs) is that one can create a git repo anywhere, and one needs no backing remote repository or infrastructure to do so; the overhead of running git is so minimal that there's no reason not to use it.

This workflow, though, does not scale. It's fine and dandy when you have one guy working on puppet at a time, but multiple people can easily step on each other's changes. Furthermore, although you have the ability to roll back, you're left with a very dangerous way to test changes: that is, you have to make them on the live puppet instance. You can roll your config back, but by the time you do so it might already be too late.

A simple workflow with git branches and puppet environments


It's time to move beyond the "yes, I use version control" stage into the "yes, I use version control, and I actually test changes before pushing them to production" stage.

Enter the puppet "environment" facility - and a git workflow that utilizes it. Puppet environments allow you to specify an alternate configuration location for a subset of your nodes, which provides an ideal way for us to verify our changes; instead of just tossing stuff into /etc/puppet and praying, we can create an independent directory structure for testing and couple that with a dedicated git branch. Once satisfied with the behavior in our test environment, we can then apply those changes to the production environment.

The general workflow


This workflow utilizes an authoritative git repository for the puppet config, with clones used for staging, production, and ad-hoc development. This git repository will contain multiple branches; of particular import will be a "production" branch (which will contain your honest-to-goodness production puppet configuration) and a "staging" branch (which is where changes are verified before they go live). Puppet will be configured to use two or more locations on the filesystem (say, /etc/puppet and /etc/puppet-staging) which will be clones of the central repository and will correspond to branches therein. All changes to puppet should be verified by testing a subset of nodes against the configuration in /etc/puppet-staging (on the "staging" branch); once satisfied with the results, they are merged into the "production" branch and ultimately pulled into /etc/puppet.

Here's what it looks like:

/opt/puppet-git: authoritative git repository. I will refer to it by this location but in your deployment it could be anywhere (remote https, ssh, whatever). Contains at a minimum a "production" branch and a "staging" branch, but optionally may contain many additional feature branches. Filesystem permissions must be read/write by anybody who will contribute to your puppet configuration

/etc/puppet-staging: git repository that is cloned from /opt/puppet-git that always has the "staging" branch checked out. Filesystem permissions must be read/write by anybody who will push changes to staging (consider limiting this to team leads or senior SAs)

/etc/puppet: git repository that is cloned from /opt/puppet-git that always has the "production" branch checked out. Filesystem permissions must be read/write by anybody who will push changes from staging to production (again, consider limiting this to team leads or senior SAs)

The key element here is the authoritative repository (/opt/puppet-git). Changes to the production repository (in /etc/puppet) should never be made directly; rather, you will 'git pull' the production branch from the authoritative repository. The staging repository (and the "staging" branch) is where changes must occur first; when QA is satisfied, the changes from the staging branch will be merged into the production branch, the production branch will be pushed to the authoritative repository, and the production repository will pull those changes into it.

Why do I have three git repositories?


You might be saying to yourself: self, why do I have all of these repositories? Can't I just use the repository in /etc/puppet or /etc/puppet-staging as my authoritative repository? Why have the intermediary step?

There are a couple of reasons for this:

One, you can use filesystem permissions to prevent accidental (or intentional) modification directly to the /etc/puppet or /etc/puppet-staging directories. For example, the /etc/puppet repository may be writeable only by root, but the /etc/puppet-staging repository may be writeable by anybody in the puppet group. With this configuration anybody in the puppet group can mess with staging, but only somebody with root can promote those changes to production.

Two, some git operations (e.g. merging and rebasing) require that you do some branch switcheroo voodoo, and (at least in production) we can't have our repository in an inconsistent state while we're doing so. Furthermore, git documentation recommends in general that you never 'push' changes to branches over an active checkout of the same branch; by using a central repository, we don't have to deal with this issue.

Of course, one advantage of git is its sheer flexibility. You might decide that the staging repository would make a good authoritative source for your configuration, and that's totally fine. I only present my workflow as an option that you can use; it's up to you to determine which workflow fits best in your environment.

Initial prep work


Step 0: determine your filesystem layout and configure puppet for multiple environments

RPM versions of puppet default to using /etc/puppet/puppet.conf as their configuration file. If you've already been using puppet, you likely use /etc/puppet/manifests/site.pp and /etc/puppet/modules/ as the locations of your configuration. You may continue to use this as the default location if you wish.

In addition to the "production" configuration, we must specify additional puppet environments. Modify puppet.conf to include sections by the names of each "environment" you wish to use. For example, my puppet.conf is as follows:

[main]

... snip ...
manifest = /etc/puppet/manifests/site.pp
modulepath = /etc/puppet/modules

[staging]
manifest = /etc/puppet-staging/manifests/site.pp
modulepath = /etc/puppet-staging/modules

You may configure multiple, arbitrary environments in this manner. For example, you may have per-user environments in home directories:
[jeremy]
manifest = /home/jeremy/puppet/manifests/site.pp
modulepath = /home/jeremy/puppet/modules

It is also possible to use the $environment variable itself to allow for arbitrary puppet environments. If you have a great many puppet administrators, that may be preferable to specifying a repository for each administrator individually.
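A sketch of that approach in puppet.conf (the environments directory is just an example path; note that with this layout every environment, including production and staging, lives under it rather than in the /etc/puppet and /etc/puppet-staging layout used in the rest of this post):

[main]
manifest = /etc/puppet/environments/$environment/manifests/site.pp
modulepath = /etc/puppet/environments/$environment/modules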
Step 1: create your authoritative repository

If you're coming from the trivial usage of git that I outlined at the start of this post, you already have a repository in /etc/puppet.

If you're already using git for your puppet directory, just do the following:

cp -rp /etc/puppet /opt/puppet-git
If you aren't already using git, that's no problem; do the cp just the same, and then:
cd /opt/puppet-git; git init .; git add *; git commit -a -m "initial import"
Step 2: set up branches in your authoritative repository

Now we have our new central repository, but we need to add the branches we need:
cd /opt/puppet-git; git branch production; git branch staging
Do note that I didn't "check out" either of those branches here; I'm just leaving puppet-git on "master" (which in truth we'll never use). NB: you might consider making this a "bare" repository, as it's never meant to be modified directly.
Step 3: set up your "staging" git repository

As configured in step 0, we have an environment where we can test changes in puppet, but right now there's no configuration there (in such a case, nodes in the "staging" environment will use the default puppet configuration). We need to populate this environment with our existing configuration; let's create a clone of our git repo:
git clone /opt/puppet-git /etc/puppet-staging
We now have our copy of the repository, including both of its branches. Let's switch to the "staging" branch:
cd /etc/puppet-staging; git checkout staging

Step 4: set up your "production" git repository

This is essentially the same as step 3, with one twist - we already have something in /etc/puppet. While it's possible to turn /etc/puppet into a git repository with the proper remote relationship to our authoritative repository, I find it's easiest to just mv it out of the way and do a new checkout. Be sure to stop your puppet master while you do this!
service puppetmaster stop; mv /etc/puppet /etc/puppet.orig; git clone /opt/puppet-git /etc/puppet;

cd /etc/puppet; git checkout production; service puppetmaster start

Workflow walkthrough


In this configuration, it is assumed that all changes must pass through the "staging" branch and be tested in the "staging" puppet environment. People must never directly edit files in /etc/puppet, or they will cause merge headaches. They should also never do any complex git operations from within /etc/puppet; instead, these things must be done either through per-user clones or through the staging clone, and then pushed up to the authoritative repository once complete.

This may sound confusing, but hopefully the step-by-step will make it clear.

Step 0: set up your own (user) git repository and branch

While optional for trivial changes, this step is highly recommended for complex changes, and almost required if you have multiple puppet administrators working at the same time. This gives you a clone of the puppet repository on which you are free to work without impacting anybody else.

First, create your own clone of the /opt/puppet-git repository:

git clone /opt/puppet-git /home/jeremy/puppet
Next, create and switch to your own branch:
cd ~/puppet/; git checkout -b jeremy
In the sample puppet.conf lines above, I've already enabled an environment that refers to this directory, so we can start testing nodes against our changes by setting their environments to "jeremy".
Step 1: update your local repository

This is not needed after a fresh clone, but it's a good idea to frequently track changes on your local puppet configuration to ensure a clean merge later on. To apply changes from the staging branch to your own branch, simply do the following:
cd ~/puppet/; git checkout staging; git pull; git checkout jeremy; git rebase staging
This will ensure that all changes made to the "staging" branch in the authoritative repository are reflected in your local repository.
NB: I like to use "rebase" on local-only branches, but you may prefer "merge"; in any case where I mention "rebase", "merge" should work too.
EDIT: I originally had a 'git pull --all' in this example; use 'git fetch --all' instead.
Step 2: make changes in your working copy and test

Now you can make changes locally to your heart's content, being sure to use 'git add' and 'git commit' frequently (since this is a local branch, none of these changes will impact anybody else). Once you have made changes and you're ready to test, you can selectively point some of your development systems at your own puppet configuration; the easiest way to do so is to simply set the "environment" variable to (e.g.) "jeremy" in the node classifier. If you aren't using a node classifier, first, shame on you! Second, you can also make this change in puppet.conf.
Step 3: ensure (again) that you're up to date, and commit your changes

If you're happy with the changes you've made, commit them to your own local branch:

cd ~/puppet; git add *; git commit -a -m "my local branch is ready"

Then, just like before, we want to make sure that we're current before we try to merge our changes to staging:

cd ~/puppet; git fetch --all; git checkout staging; git pull; git checkout jeremy; git rebase staging

BIG FRIGGIN WARNING: This is where stuff might go wrong if other people were changing the same things you changed, and committed them to staging before you. You need to be very certain that this all succeeds at this point.

Step 4: merge your branch into "staging"

Now that you have a valid puppet configuration in your local repository, you must apply this configuration to staging. In an environment with well defined processes, this step may require authorization by a project manager or team lead who will be the "gatekeeper" to the staging environment. At the very least, let your team members know that staging is frozen while you test your changes. Changes to staging (and by extension production) must be done serially. The last thing you want is multiple people making multiple changes at the same time.

When you're ready, first merge your branch to the staging branch in your local repository:

cd ~/puppet; git checkout staging; git merge jeremy
Assuming that the merge is clean, push it back up to the central repository:
git push

Step 5: update puppet's staging configuration
Now the central repository has been updated with our latest changes and we're ready to test on all "staging" nodes. On the puppet server, go to your staging directory and update from the authoritative repo:
cd /etc/puppet-staging; git pull

Step 6: test changes on staging puppet clients

If you live in the world of cheap virtual machines, free clones, and a fully staffed IT department, you'll have a beautiful staging environment where your puppet configuration can be fully validated by your QA team. In the real world, you'll probably add one or two nodes to the "staging" environment and, if they don't explode within an hour or two, it's time to saddle up.

If you have to make a minor change at this point, you may directly edit the files in /etc/puppet-staging, commit them with 'git commit -a', and then perform a 'git push' to put them in the authoritative repo; if you have a rigid change control procedure in place, you may need to roll back staging and go all the way back to step 1.

As a general rule: try to keep staging as close to production as possible and if possible only test one change in staging at a time. Don't let multiple changes pile up in staging; push to staging only when you're really ready. If a lot of people are waiting to make changes, they should confine them to their own branches until such a time as staging has "caught back up" to production.

Step 7: apply changes to production

Once staging has been verified, you need to merge into production. Again, this step may require authorization from a project manager or team lead who will sign off on the final changes.

Although it is possible to do this directly from the git repo in /etc/puppet-staging, I recommend that you use your own clone so as to leave staging in a consistent state throughout the process.

Start by again updating from the authoritative repo:

cd ~/puppet; git fetch --all; git checkout staging; git pull; git checkout production; git pull
At this point you should manually verify that there aren't any surprises lurking in staging that you don't expect to see:
git diff staging
If everything looks good, apply your changes and send them up to the repo:
git merge staging && git push

Step 8: final pull to production

If you're dead certain that nobody will ever monkey around directly in /etc/puppet, you can just pull down the changes and you're done:
cd /etc/puppet; git pull
That's fine and dandy if everybody follows process, but it may cause trouble if anybody has mucked around directly in /etc/puppet. To be sure that nothing unexpected is going on, you may want to use a different procedure to verify the diff first:
cd /etc/puppet; git fetch; git diff origin/production
Once satisfied, apply the changes to the live configuration:
git merge origin/production

That's great, but what's the catch?


Oh, there's always a catch.

I think the process outlined here is sound, but when you add the human element there will be problems. Somebody will eventually edit something in /etc/puppet and un-sync the repos. Somebody will check something crazy into staging when you're getting ready to push your change to production. Somebody will rebase when they should've merged and merge conflicts will blow up in your face. A million other tiny problems lurk around the corner.

The number one thing to remember with git: stay calm, because you can always go back in time. Even if something goes to hell, every step of the way has a checkpoint.

If you follow the procedure (and I don't mean "this" procedure; I just mean "a" procedure) your chance of pain is greatly reduced. Git is version control without training wheels; it will let you shoot yourself in the foot. It's up to you to follow the procedure, because git will not save you from yourself.

Recent versions of MS Exchange finally support the full version of Outlook Web App ("OWA") for non-MS browsers and operating systems. You can now use, e.g., Chrome on Windows or Firefox on Linux.

Now as a Linux+Chrome user, I was surprised to find that OWA still only worked in "lite" mode for me. Turns out, OWA bases this decision on your user agent string; if you happen to be running Chrome on Linux, you get dumped to the non-AJAX UI.

This seemed arbitrary to me, so I decided to try switching the user agent to that of the Windows version of Chrome - and bam, it just worked.

I'd like to find a chrome extension that handles user agent switching well, but I haven't seen one yet. For now, I just run chrome with the following:

/opt/google/chrome/google-chrome --user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US) AppleWebKit/534.4 (KHTML, like Gecko) Chrome/6.0.481.0 Safari/534.4"

Optionally, you can append a flag to use Chrome's spiffy "app" mode as well (or just create an "app shortcut" in chrome and add the user-agent string):

 --app="https://mail.mydomain.com/owa/"

UPDATE!

In the time since I originally wrote this post, Chrome has implemented per-tab UA overrides. Yay! And subsequently there are now good UA switcher extensions.

I now use User-Agent Switcher for Chrome for this purpose. This plugin is especially nice since you may add a domain in its "Permanent Spoof List" settings page, and it will automatically spoof for that domain and that domain only. In my case I add the domain:

mail.mydomain.com

And the UA:

Mozilla/5.0 (Windows NT 6.2; rv:10.0.1) Gecko/20100101 Firefox/10.0.1

After doing that I never have to think about it again.

Some background: I've deployed a central syslog-ng server at work, and I'm looking to use the nifty frontend, Logzilla (formerly known as php-syslog-ng). Unfortunately, Logzilla now requires MySQL 5.1, which isn't provided by RHEL/CentOS 5. This means I have to go rogue and run a non-standard MySQL build.

There are several ways to approach this problem (install from source, install the upstream RPMs), but from my perspective the "rightest" way is to track the current Fedora MySQL RPM and rebuild it for RHEL. This provides a proper RHEL-style name and dependency tree, and it packages all the fixes that Red Hat is going to put into their version.

So, here's how to do it.

Building MySQL 5.1

1) Ensure you can build RPMs as a normal user, since building as root is bad mmk. Dag explains how to go about this.

2) Download the latest SRPM from the latest Fedora source repo. As of the time of this writing, that means mysql-5.1.45-2.fc13.src.rpm from Fedora 13.

3) Grab the build requirements. This is going to heavily depend on what you've already got installed, but I needed to do:

yum install rpm-build autoconf automake gperf imake libtermcap-devel libtool ncurses-devel readline-devel

4) Install the source RPM (note - the md5 signature mechanism seems to have changed; you must use the --nomd5 flag here, and you cannot directly rpmbuild the f13 srpm):

rpm --nomd5 -ivh mysql-5.1.45-2.fc13.src.rpm

5) Build the RPMs:

rpmbuild -ba [yourhomedir]/redhat/SPECS/mysql.spec

Hopefully, it'll build successfully. The RPMs will be in your [yourhomedir]/redhat/RPMS/ directory. But there's one little problem...

What about libmysqlclient.so.15?

If you tried to yum up to the RPM you just built, depending on what you've got installed you might get some nasty errors about a missing libmysqlclient.so.15. Looks like that library version has been deprecated in 5.1, oh noes!

You've got a couple of options here. You could just copy that library from your old install and do some ugly --force hacky stuff to install the new RPMs. It'll work, but it's nasty. You could also rip out those RPMs that require that library and rebuild *them*, but that's time consuming and means you've got even more stuff that's diverging from upstream. Of course, if you don't actually *use* any of those packages, you could ditch them entirely.

The approach I took was to build a shim RPM that provides *just* this library. I can't take credit for the idea, it turns out somebody had already come before me.

1) Download the spec file for this library from Remi Collet, the original creator.

2) Patch it for RHEL using my patch file, or just download my spec file directly.

3) Get the CentOS or RHEL mysql source rpm from the current repository (which, at the time of this writing, is CentOS 5.5).

4) Install the srpm (no need to --nomd5 this time!)

rpm -ivh mysql-5.0.77-4.el5_5.3.src.rpm

5) Build the new RPM:

rpmbuild -ba mysqlclient15-remi.spec

This should spit out a sweet new RPM.

Yum it up


Now you just have to install:

yum --nogpg install mysqlclient15-5.0.77-1.x86_64.rpm mysql-libs-5.1.48-2.x86_64.rpm mysql-5.1.48-2.x86_64.rpm mysql-devel-5.1.48-2.x86_64.rpm mysql-server-5.1.48-2.x86_64.rpm

And, voila! MySQL 5.1 with no dependency issues! You could even write a little shell script to pull updates and build the RPMs automatically.
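A sketch of such a script, under the assumption that you adjust the mirror URL and SRPM version to whatever is current (both are placeholders here):

#!/bin/sh
# rebuild the current Fedora mysql SRPM for RHEL/CentOS 5
set -e
SRPM_URL="http://example-fedora-mirror.org/releases/13/Everything/source/SRPMS"
SRPM="mysql-5.1.45-2.fc13.src.rpm"

# fetch the SRPM, unpack it into ~/redhat, and build the binary RPMs
wget -N "$SRPM_URL/$SRPM"
rpm --nomd5 -ivh "$SRPM"
rpmbuild -ba "$HOME/redhat/SPECS/mysql.spec"
ls -l "$HOME"/redhat/RPMS/*/mysql-*.rpm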

(Side note - be sure to back up your database before upgrading! My upgrade went smoothly, but that's no guarantee that yours will!)

I've been a long time fan of Solaris the technology, and I've been running OpenSolaris for personal use for several years now. I've tried hard to explain to people the merits of what I view as an exceptional but lesser known OS, with minimal success. The OpenSolaris community never really took off the way I (or Sun, probably) had hoped; Sun never really treated community members as first class citizens, and the "real" work on OpenSolaris all came from inside of Sun.

Well, I guess Oracle had enough of all this mess, and they've shot the OpenSolaris project right in the head. OpenSolaris as a product is completely dead, and the nightly builds and code repositories are closing up shop. The CDDL will be used (to some extent) for bits of Solaris code, but that code won't be released until *after* Solaris releases ship, which is a stark contrast to the more open approach Sun took with OpenSolaris.

As a pre-emptive strike against this potentiality, key community members have already spooled up the Illumos project, a fork of OpenSolaris with the proprietary bits replaced by open source software.

Oracle's decision to keep the repositories closed until after release forces Illumos's hand a bit; they no longer have the luxury of simply rebuilding a free Solaris derivative and keeping it up to date with the latest efforts from inside Oracle. Now, they have to choose: do they accept a huge lag and spool up a post-release open source derivative, or do they fork proper and kiss Oracle goodbye forever?

I have to say, the latter option is much more attractive, and it's not entirely out of the realm of possibility either; the Illumos main page now shows a brief response which seems to imply they're moving in this direction. Illumos is a real community project; if OpenSolaris's woes were really, as many of us felt, due mainly to Sun's handling of the community, this represents a tremendous opportunity to address those issues and build a healthier and more vibrant community than OpenSolaris ever had. Oracle is even helping out by scattering former Sun engineers to the wind; how many of those engineers will want to keep working on the OpenSolaris code base as a labor of love? Are there enough of them interested in continuing the work that they started at Sun as a community project? Will they ultimately make Illumos even *better* than Solaris?

Solaris proper is rapidly dying to me. I'm not the customer Oracle wants, and they've made it abundantly clear that they have no interest in having me. They want big enterprise customers with big Oracle database deployments, and screw you open source and startup hippies, your pockets aren't deep enough to even think of playing that game. Well, so be it. But Illumos is providing an opportunity for all of us hippies and startups to rally around a new project, with a new community, that we may finally be a real part of. I think the possibility really is there, and Illumos could become the open source success that OpenSolaris only dreamed of becoming.

Am I sad to see things go this way? A bit, yes, but it's also a huge burden lifted: now we know the real score. There's no more begging Oracle for table scraps that may never come, it's either do or die by the efforts of the community alone. Whichever way that goes, at least there's nobody left to blame.

I won't lie, I've mostly lurked in the shadows of the OpenSolaris community, justifying my lack of participation with an observation that participation was pointless anyway. Why would I even bother when the community was really just an afterthought to Sun? Well, I (and people like me) are being called on that bluff; if we don't step up and contribute now, it's all over for real.

I asked Alix of arixystix.com to make a Plants vs Zombies Snow Pea for me (as a gift to my wife). He came in today, and he's totally rad; check him out!

I don't usually go to many live concerts (maybe 3 or 4 a year), so it's a bit of an oddity that two of my favorite bands have been in town within the past two weeks, and I've managed to see both of them.

First up: My Morning Jacket.

[photo: My Morning Jacket]

This is a band I've been watching for the past couple of years; their first performance on Austin City Limits piqued my interest with songs mostly from 2005's "Z," and their later performance featured songs from 2008's "Evil Urges." I found the latter to be especially impressive, and afterward I went back through their earlier albums to find solid but... well, rather less interesting music. This is a band that has evolved and improved with each album, so in a way I found their back catalog a bit of a disappointment.

Anyway, they're on tour, and their RDU stop was in the Koka Booth Amphitheatre in Cary. I've now seen three concerts in that venue, and my feelings about it are... mixed.

MMJ's performance itself was stellar, as I expected it would be having seen them execute two nearly flawless performances on ACL. They started off a bit slow with their "classic" material, ramping up to a pretty beefy set that included probably half of the songs from Z or Evil Urges. Jim James is quite a showman, flaunting a cape worn in several odd manners throughout the show; if his voice wasn't so completely out of this world, you might imagine the cape trick to be a bit bizarre, but somehow it all seems to work.

Unfortunately, a couple of things about the concert bothered me. For one, this place was filled with drunks. I mean, EVERYWHERE. There were 20ish kids all over the place; smoking pot, drinking, talking loudly in that I'm-too-drunk-to-modulate-my-voice manner, tripping over each other, and paying absolutely no attention to the band. Why are you guys here? Is MMJ so popular that people who don't care about the music show up just for street cred?

I think part of the issue may be Koka Booth itself. Our "seats" were rather lousy, being stuck back in the lawn. It's not as bad as the cheap seats at Walnut Creek (which are so far away you can't even *see* the performers without looking at the giant TV), but they're a long way away. And while Koka Booth is an attractive stage, their audio was very, very quiet to my ears; a fact that effectively amplified the noise of the crowd in comparison to the music. Some venues crank the volume till your ears bleed, and I'm not asking for that, but... maybe if it was just a bit louder, these kids wouldn't be so distracting.

I got the distinct impression that the real fans were close to the stage, where they got both a good view and, presumably, better companionship.

The net result was a bittersweet experience; as much as I loved seeing James and the crew live, and as awesome as the band was, I couldn't help but be disappointed by the environment. I'm pretty much resolved to never sit in the cheap section at Koka Booth again.

Technology is a marvelous thing. At its best, it enables people to express themselves, to do things that had once been impossible or impractical; but as it does so, the wizards of the old domain find that their arcane knowledge loses value dramatically.

Consider, if you will, the photographer.

Initially, photography was a purely technical exercise, which required not only technical expertise but the possession of costly and cumbersome machinery. There were very few wizards, and everything they did was magic.

As the 20th century progressed, things began to change. 35mm cameras were available in a (relatively) affordable form, and the value proposition shifted. Operating and owning the machine was no longer an impenetrable barrier to entry; people could actually do so in their own homes. They could capture photographs of their own lives.

Of course, there was still magic to it: even though a home user could afford to take photos, the costs were still significant, and the technical skill required to operate consumer-grade devices was still decidedly non-trivial. Technology had facilitated the notion of an "amateur photographer," but an "amateur photographer" was, himself, still something of a wizard.

For the utter non-wizard, there arose "point and shoot" and instant cameras. Anybody who wanted to take a picture was eventually able to do so; but, even so, these devices were still cumbersome, and for any larger prints one still needed at least an amateur wizard.

In parallel to the proliferation of photographic equipment developed a new notion: that of photography as art. It's undeniable that some people have a gift in this respect, some special capacity to capture a specific moment, framed a certain way, optimally composed to elicit a certain response. Entire schools of study were devoted to this art form, and the photographer became more than a technical wizard, he also became viewed as an artist.

It's astounding, then, to watch the extent to which technology has changed the equation. It is true that every advance in the film era (and there were many) opened the gates a little wider, but the real revolution has come from the digital age.

I own a Sony A700, which is a fully digital SLR. This is a device that would have been completely unfathomable 15 years ago, and the notion of such equipment as a mass market consumer product would have been equally unfathomable as recently as 7 years ago. Think about how significant this is: within a mere 15 years, we've gone from something that was almost unimaginable at any cost, to something that almost any middle class enthusiast can find a way to afford.

The readily available Digital SLR has effectively killed photography as technical wizardry. Gone are the physical machinations inherent to film processing. Gone is the wait between capture and development. Gone is the limitation of sharing an image only through a physical object. Even gone is the required expertise and ridiculously specialized equipment required to create images which can be printed at poster quality.

Compared to what came before, this machine is so magical that anybody who touches it becomes a wizard.

I feel a bit for the purely technical career photographer, who strikes me as the equivalent of a gas station attendant as self-serve fuel pumps are developed. His specific form of wizardry is devalued, and his craft has become a commodity. The truly exceptional photographers (who have a knack for consistently finding a powerful image) will always remain in demand, but just being a guy with a camera who knows how to use it is no longer enough to make a living.

One can lament the plight of the technician photographer, but society as a whole clearly wins in this bargain. The notion of "photography as art" is now not only nearly universally recognized, but is also nearly universally accessible, and we currently witness the creation of photographic images the likes and volume of which would have been just as unimaginable as my camera 20 years ago. Artists no longer must emerge as a subset of the small pool of technical wizards, but from the massive pool of... well, virtually everybody who can post a picture online.

The notion of photographer as a technician is nearly dead, but the photographer as an artist? He's more alive than ever; and he is everybody.

I'm using Google Chrome more and more. In addition to my earlier gripe about password saving, there are various other perplexing design decisions. To me, none is more odd than Chrome's menu icons.

For Linux or Windows builds, Chrome/Chromium has no traditional "File/Edit/Tools/Whatever" menu headers, and instead uses a couple of icons on the toolbar:

Google apparently hates the old style menu bar, and rightfully so, since it steals valuable screen real estate from the in-browser apps that it thinks are the future of computing.* Google decides they want nothing to do with it in Chrome, but instead of creating something new (like MS has done with its "Ribbon"), they take those same old menu items and bury them into two "toolbar menus" represented by icons: a "rectangle with a triangle in the upper right corner" icon, and a "wrench" icon.

Right there, some alarm bells are going off. What is a rectangle icon? Is that the page? A document? What the hell is a wrench for? I've never used a wrench on a computer (well, there was that one time...). Based on its usage in other applications I can guess it means advanced settings or... something... right?

(Incidentally, Sun servers have a "wrench" light on them, which indicates they need... an oil change, I guess?)

So, say you're a user staring at a Rectangle menu and a Wrench menu. Under which menu would you expect to find "Developer" menu options? Under which menu would you expect to open a new tab? (I'm aware that you can cheat by looking at my screenshot. Feel free, if that makes you feel better).

Answer: New tab is under wrench, developer is under rectangle. Clear as mud, right? Never mind that "wrench" normally means something like "tinkering" or "settings" or "change oil," and that the "rectangle" menu kinda looks like an empty document. We're in the google world now, it makes perfect sense!**

Of course, all those nasty menu items must go somewhere, and it's not exactly obvious where "somewhere" should be. Google probably figures they can just give the user a couple of obscure icons and let them work it out by the process of elimination. This is probably a valid assumption, but it doesn't make the icons themselves any less perplexing.

Bottom line: apparently, Google has solved the problem of incoherent menu bars - with incoherent toolbars. So... yay?

* To be clear, I'm no File/Edit/Tools/Whatever menu apologist. That's a dated UI paradigm that doesn't map well to many modern real world scenarios; for example, Firefox's "File" menu contains such gems as "Work Offline." What does that have to do with a file? And why is "Print" in the file menu of a web browser? I'm printing a web page, not a file, right? Why is "Find" in an "Edit" menu? I'm not editing anything!

** In case you're curious, these icons actually do represent logical groupings, but good luck guessing what they are by icon alone. The "rectangle" menu contains actions limited in scope to the current document (well this is, actually, a lie - for example the "zoom" options impact all chrome processes - however this is the way the menu is conceived). The "wrench" option is a sort of "Meta" menu, responsible for managing chrome as a whole.