I seem to be pushing up against the limitations of puppet lately. In this particular case, I wanted newly added nodes to automatically receive corresponding entries in /etc/exports on my file server.

Sounds pretty simple, right? Well, it's not.

Puppet doesn't give you much of an opportunity to collect data from other nodes directly. There's no way, for example, to express "give me a list of all nodes" in the DSL; your scope is intentionally restricted to the node on which you're running. The metadata the puppet server has about other nodes simply isn't exposed to you.

In some cases, there's an easy workaround: collect/export. I have used this pattern successfully in the past, and I even posted a blog entry on how it helped me with check_mk. There is one rather large, unfortunate limitation of this pattern, though: it only works when each entry can be fully defined by the node exporting the resource.
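
For a sense of where the pattern does work, here's its classic shape (a minimal sketch with a made-up tag, not code from this setup): each node exports a resource it can describe entirely on its own, and some other node collects all of them.

    # Each node exports its own /etc/hosts entry...
    @@host { $::fqdn:
      ip  => $::ipaddress,
      tag => 'internal_hosts',
    }

    # ...and one node (a bastion, say) collects every exported entry.
    Host <<| tag == 'internal_hosts' |>>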

Where this breaks down may not be entirely clear at first, so consider the case of /etc/exports. This is a file with no native type to manage it, so the common approach would be to write a template. Now, although I can create per-node exported resources and collect them on the file server, what would they look like? I need all of those entries to end up in /etc/exports - and there's no way to do that with per-node templates, since the file each node exports must be unique (there is no exports.d, for example, the way there was a conf.d for check_mk).

The closest thing you can currently do (short of writing your own exports provider - a frustrating problem in and of itself given the state of fileparser) is to use collect/export with the Augeas type. But this is limiting, too; you cannot automatically purge stale or removed hosts from the file when using Augeas. There are other, even more hacky solutions that smush multiple files together on the client, but those are fragile and annoying, so I have no interest in them.
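
For reference, that Augeas approach would look roughly like this. This is only a sketch: the paths assume the stock Exports lens lays entries out as dir/client/option nodes, and the export path and options are placeholders - check with augtool before trusting it.

    # On each client: export an augeas resource adding this host to the share.
    @@augeas { "exports_${::fqdn}":
      context => '/files/etc/exports',
      changes => [
        "set dir[. = '/srv/export'] /srv/export",
        "set dir[. = '/srv/export']/client[. = '${::fqdn}'] ${::fqdn}",
        "set dir[. = '/srv/export']/client[. = '${::fqdn}']/option[1] rw",
      ],
      tag     => 'nfs_export',
    }

    # On the file server: collect them.
    Augeas <<| tag == 'nfs_export' |>>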

What I want - which seems simple - is a list of nodes in a variable. That's it.

And that got me thinking - what about... facts?

Converting collect/export data into facts

So, what if we create a collect/export generated conf.d style directory, populated with per-host files... and then have a custom fact that collates them into something on the file server?

Why, you know what? Then we would have a string that we could parse to get the data we need.

Here's the trick. First, on our nodes that we're exporting:

    @@file { "/var/puppet/nfs_hosts/${::fqdn}":
      content => "${::fqdn}\n",
      tag     => 'nfs_host',
    }

This is the simplest scenario; all I care about is the FQDN. You can do other tricks here and make that content whatever you want. You could even put something like key/value pairs in there and split them out in the final ERB, but that's not what I needed in this case.
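
If you did want that, the exported file might look something like this (hypothetical - the options string is just an example of extra data you could carry along and split apart in the template later):

    @@file { "/var/puppet/nfs_hosts/${::fqdn}":
      content => "${::fqdn}=rw,sync\n",
      tag     => 'nfs_host',
    }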

Now, the collection on the file server:

    File <<| tag == 'nfs_host' |>>

Easy enough - so what do we have now? We have a directory full of files named after nodes, each of which contains the node name, on our file server.
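
If you also want entries for decommissioned nodes to go away eventually, one option (a sketch, assuming the directory isn't managed elsewhere) is to manage the collection directory itself with purging, so files that stop being exported get removed:

    file { '/var/puppet/nfs_hosts':
      ensure  => directory,
      recurse => true,
      purge   => true,
    }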

How do we make it a fact? We write a fact and push it out to the file server. Hint: writing facts is easy:

    Facter.add('nfs_hosts') do
      setcode do
        path = '/var/puppet/nfs_hosts'
        if File.directory?(path) && !Dir[path + '/*'].empty?
          # Split on a real newline ("\n"); '\n' in single quotes is a literal
          # backslash-n and won't split anything.
          Facter::Util::Resolution.exec("/bin/cat #{path}/*").split("\n").join(' ')
        end
      end
    end

Magic! Now we should get our fact $::nfs_hosts on the file server, which is a space-delimited list of all the nodes that exported the resource.
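
From there, building the file I wanted is just template work. Something along these lines would do it (a sketch, not my actual manifest - the export path and mount options are placeholders, and it assumes the nfs_hosts fact is present on the file server):

    file { '/etc/exports':
      content => inline_template("/srv/export <%= (@nfs_hosts || '').split(' ').map { |h| h + '(rw,sync)' }.join(' ') %>\n"),
    }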

So, this is obviously a hack, but in my estimation it's the least hacky of the options out there, given how little code is involved and that it stays mostly within the DSL. There's one particular limitation of this hack that you need to know about:

It will take the file server an extra puppet run before the fact is updated.

Due to the... fact... that facts are generated prior to the collection of the exported resources, the $::nfs_hosts fact that the file server reports will not reflect changes made during that puppet run.

The workaround? Run puppet twice as frequently on this node, or be content to wait an extra run cycle, or run puppet manually twice to make sure the change happens more quickly.

Hey, I don't like this any more than you do. But at least you know that it's an option. And knowing is half the battle.
