Puppet Labs's use of the term puppetmaster is rather clever (in contrast to other unnecessarily offensive uses of "master" in the software world).

While I appreciate the clever name, I'm less impressed with the concept.

At May First/People Link we've spent the last several years (including the last couple months in earnest) working to transition management of our 90-some servers from a collection of hand-written bash scripts to puppet.

Over the years, we've worked hard to keep our servers as secure as possible. We have a team of about a half dozen people who all have root access on all servers. It's all key-based access. To help mitigate a disaster if one person's keys were compromised, we've implemented monkeysphere on all servers, allowing us to easily revoke access.

After spending so much time thinking through our root-access strategy and fully implementing the monkeysphere to reduce our exposure to a single point of vulnerability, I was disappointed by puppet's use of a puppet master. For those less familiar with puppet, it goes something like this:

One server (or, god forbid, multiple servers) runs an externally accessible daemon. Each and every server on your network runs a daemon as root that periodically communicates with the puppet master, receives new instructions, and then (again, as root) executes those instructions.

In other words, if your puppet master is compromised, I'm not sure exactly what you would need to do, short of rebuilding every server in your network.

To make matters worse, it seems as though some users generate and store all server ssh keys (private and public) on the puppet master and then push the private keys to their respective nodes. That means an intruder doesn't need to write to the puppet master, just reading these keys would be enough to compromise all servers in your network.

There must be a better way.

Puppet without masters

After some web-searching, I found a promising thread on the puppet list asking what's lost without a puppet master. This thread led to a couple of other blogs by people who have worked out a system for using puppet without a master.

It turns out that there are two distinct points of centralization with puppet. One is the puppet master (as described above). In addition, there is a concept called storeconfigs - which allows each node in the network to store information in a central database. For example, one server can store a request for an account to be set up on a backup server. The next time puppet runs on the backup server, it checks the storeconfigs, finds the request, and creates the user.

It's possible to run puppet with storeconfigs but without running a puppet master (that avoids the hassle of running the puppet daemons, while providing the convenience of centralization). For our purposes, however, we decided to forego both the puppet master and storeconfigs. We did not want any form of centralization that would provide an additional point of vulnerability.

As is common with puppet, we are storing our puppet recipes in a git repository. And, we are publishing to a single, canonical git repository on the Internet. On each node, we have two git repositories - one is a bare repo (that we can push to) and the other is a checked out repo (in /etc/puppet) that is read by puppet. The bare repo has a post-update hook that changes into the /etc/puppet directory, pulls in the changes from the bare repository, and runs puppet against the newly checked out files. Therefore, we can apply new puppet recipes to any server on the network with

git push <server>

No daemons: neither a master daemon nor a puppet daemon running on the node, using up memory or providing a potential security hole. The git push happens over an ssh connection, and since all system administrators already have root-level ssh access on every server, there is no need to grant any additional access beyond what we already have.
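For the curious, the post-update hook doesn't need to be anything fancy. Here is a minimal sketch of the idea (the paths and the exact puppet invocation are assumptions, not our script verbatim):

#!/bin/sh
# post-update hook in the bare repository on each node
set -e
unset GIT_DIR              # the hook runs with GIT_DIR pointing at the bare repo
cd /etc/puppet
git pull origin master     # origin points at the local bare repository
puppet apply manifests/site.pp   # older puppet versions use: puppet manifests/site.pp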

Pushing works great - but with 90 nodes we don't want to have to push to 90 servers every time we want a change made. That's where the canonical git repository comes in. A cron job on each node runs a script once an hour that runs git remote update from /etc/puppet. The script then checks the timestamp on the most recent gpg-signed tag and compares it with the timestamp of the current commit. If the most recent gpg-signed tag is newer, it verifies that the tag was signed by one of a list of authorized gpg keys (the very same gpg keys used by the monkeysphere to grant root-level ssh access). If the gpg signature of the tag can be properly verified, then the changes are merged and puppet is run on the new recipes.
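That hourly script boils down to something like the following sketch (paths, the keyring location, and some of the git plumbing are assumptions - the real script differs in the details, but the logic is the same):

#!/bin/sh
set -e
cd /etc/puppet
git remote update
# most recent tag reachable from the canonical branch
tag=$(git describe --tags --abbrev=0 origin/master)
# only proceed if that tag is newer than the commit we currently have checked out
tag_time=$(git log -1 --format=%ct "$tag")
head_time=$(git log -1 --format=%ct HEAD)
[ "$tag_time" -gt "$head_time" ] || exit 0
# verify the tag's signature against a keyring containing only the
# authorized admins' gpg keys; bail out if verification fails
GNUPGHOME=/etc/puppet/.gnupg git verify-tag "$tag"
git merge "$tag"
puppet apply manifests/site.pp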

What about privacy?

One of the benefits of a puppet master setup is that nodes get configuration details on a need-to-know basis. The puppet master doesn't share the entire puppet repo - only the compiled manifest for the node with which it's communicating.

Our solution to this problem was to go screaming in the other direction. As you might notice from our support wiki and ticket system, we generally favor transparency. Since we are publishing our entire puppet git repo publicly, there seems little point in trying to hide one node's configuration details from another node.

That also means each node carries around about 4Mb of extra weight in the form of disk space for the git repo. That seems like a small price to pay for the resource savings of not running a puppetd process all the time.

More differences

As I've read the puppet lists, FAQs and documentation, I've found yet more ways our use of puppet diverges from the norm.

The first is a little thing really - most people seem to store all their node configurations in a single nodes.pp file. I'm not sure why. Fortunately, puppet's import syntax allows globbing, so we've created a directory and given each server its own .pp file. This arrangement makes it much easier to parse the configuration with tools other than puppet (like, Q. How many servers do we have? A. ls | wc -l).
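For example (assuming a layout like /etc/puppet/manifests/nodes/ - the exact path here is illustrative):

# how many servers do we have?
ls /etc/puppet/manifests/nodes/*.pp | wc -l
# which nodes include a given class? (the class name is just an example)
grep -l "include apache" /etc/puppet/manifests/nodes/*.pp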

Backup and Nagios monitoring without storeconfigs

More significantly - there are some things we can't do since we are not using storeconfigs. While many puppet users add a variable like $nagios = true before including their sshd class (which then causes the sshd class to store a configuration for the nagios server to monitor ssh on the node in question), we had to come up with alternatives.

My first solution was to simply list all the servers that needed to be monitored in the nagios server's node configuration file. Ditto for the backup servers. This approach, however, proved cumbersome and error-prone. When adding a new node, I had to edit three files instead of one. And, how could I easily tell if all nodes had their nagios and/or backup configurations set? The solution was rather simple - there's more than one way to store a config for another node. Our nagios server is called jojobe.mayfirst.org and our backup server is luisa.mayfirst.org. A typical node.pp file looks like this:

node "pietri.mayfirst.org" {
  # node config goes here
}
if ( $fqdn == "jojobe.mayfirst.org" ) {
  nagios_monitor { "pietri": }
}
if ( $fqdn == "luisa.mayfirst.org" ) {
  backup_access { "pietri": }
}

This way all configuration related to pietri stays in a single file.

Host keys and granting access between servers

storeconfigs is commonly used to distribute ssh host keys. Every node that is added to puppet has its ssh host key stored centrally and then re-distributed to every other node. That way, you can ssh from node to node without ever getting the ssh fingerprint verification prompt. Avoiding that prompt is particularly important when backing up from one server to another via automated scripts. storeconfigs can additionally be used to copy users' public ssh keys - thus granting user access between servers.

Our solution to this problem: monkeysphere. Rather than maintaining our own private data store of keys, we publish (and sign) our ssh keys via the web of trust. In addition to host keys, the root user on each of our servers has an ssh-enabled gpg key (also publicly signed by us). By configuring each server to trust our system administrators' gpg keys for verifying other keys, we can both avoid the manual ssh fingerprint verification step and grant a root user on one server access to another server by simply dropping root@$server.mayfirst.org into an authorized_user_ids file on the target server.
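As a concrete illustration (the file location and command below follow monkeysphere's defaults and may not match our setup exactly), granting pietri's root user access to the backup server amounts to running this on the backup server:

# add the gpg user ID of pietri's root key to root's authorized_user_ids
echo "root@pietri.mayfirst.org" >> /root/.monkeysphere/authorized_user_ids
# rebuild root's authorized_keys from the authorized_user_ids,
# verifying each key through the web of trust
monkeysphere-authentication update-users root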

There's no question - the setup was rather tedious (we're using runit to maintain an ssh-agent for each root user); however, now that it's in place (and configured via puppet), it's a breeze to add new servers. The only extra step we have to take is to confirm and sign each new server's keys. This "extra" step not only allows our servers to verify each other, but also allows our users to verify the servers, so it's hardly an extra step at all.
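For the record, the runit piece is just a small run script per root user, along these lines (the socket path is an assumption, and ssh-agent needs to be new enough to support running in the foreground):

#!/bin/sh
# runit run script: keep a long-lived ssh-agent on a well-known socket
exec 2>&1
rm -f /var/run/ssh-agent-root.sock
exec /usr/bin/ssh-agent -D -a /var/run/ssh-agent-root.sock

monkeysphere's subkey-to-ssh-agent can then load root's gpg authentication subkey into that agent, and anything that needs it (the backup jobs, for example) just points SSH_AUTH_SOCK at the socket.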

Shared modules

There's a vibrant community of third party module developers for puppet. Rather than figure out the intricacies of having puppet configure sshd, for example, you can install a contributed sshd module and then you simply add:

include sshd

And you get a default sshd setup. Many of these modules are fairly well developed, offering the ability to easily customize your setup in a number of different ways.

Unfortunately, most of the modules assume you are using storeconfigs, and if you are not, they will either fail to work right or will produce noisy errors. At first, this seemed like a problem. However, as I built our puppet recipes, I found myself increasingly frustrated with the third party modules we were using.

Configuring servers is hard - it requires constant debugging and troubleshooting. puppet already provides a layer of abstraction between you and the server you are setting up. Given the benefits of puppet, I'm willing to spend the time learning the puppet syntax and asking the rest of our system administrators to do the same. This layer of abstraction is further compounded by our use of git to store the configurations (not a problem if you are a git hero - but most of us are still struggling to get a handle on using git). Again, it all seems worth it for the payoff.

Now enter the puppet module. In addition to learning puppet syntax (and struggling with git) you now need to understand how the third party module works. With software programming, I typically don't need or want to learn how a library or class does what it does - that's the beauty of object-oriented programming: it hides the complexity. But when it comes to configuring the servers that I will be responsible for debugging and maintaining, I really need to know exactly what is happening.

To further compound the problem, I found myself wading through third party module code designed to work on Debian, Ubuntu, CentOS, Redhat, gentoo... and more. We run entirely on Debian - we don't need any of this extra code. And, once I got rid of all the other operating systems, I was still left with a complex module that allows you to configure software in ways we'll never need.

In the end, we tore out most of these third party modules and replaced them with file and exec puppet resources that did exactly what we needed them to do. Our code base is now much smaller and simpler.

Not just a whiner

I have a lot more to whine about (like: why have native resources for things like nagios, which are so easily handled with the file resource?).

However - the remarkable thing about puppet is that it's flexible. Despite some fairly substantial problems with the "typical" use of puppet, the program provides enough flexibility for us to use it in a way that fully meets our needs. After having built my own bash-based set of configuration scripts and deeply exploring puppet, I have a great appreciation for the difficulty of building system configuration software (we considered and rejected cf-engine and chef as not being any better).

And, if you are still not convinced that puppet will work for you ... you might consider a package I learned about after going down the puppet route: slack.

Very interesting read. Bookmarked for future reference. Thank you.
Comment by Anonymous Tue 31 May 2011 10:58:06 AM EDT
You might also want to take a look at Bcfg2 (http://docs.bcfg2.org/). It uses a different mindset than many of its contemporaries so you may find that it suits you better. The IRC channel (and the community in general) is extremely helpful with any questions you might have.
Comment by Anonymous Tue 31 May 2011 11:17:02 AM EDT

Thanks for the great write-up about masterless setups! I was eager to hear how this was going, and its good to see that it is going well.

There are a few things in your post that I wanted to comment on. To be clear, I'm not making an argument for a master setup, I actually think that a masterless setup is the way to go in a lot of ways, however I dont agree with you about some points.

To make matters worse, it seems as though a common practice is to generate and store all server ssh keys (private and public) on the puppet master and then push the private keys to their respective nodes. That means an intruder doesn't need to write to the puppet master, just reading these keys would be enough to compromise all servers in your network.

I'm not sure that this is a common practice, I'm curious where you got that impression from?

(I also learned more reasons for going without a puppet master, like not needing a server with 16GB of RAM!)

I dont know where you got 16GB from (perhaps from someone who is running a very old version of puppet, which did have some memory issues?). But even with the memory issues, 16GB is more than I've ever needed! I know people who are running puppetmaster with only 256megs of RAM... that said, scaling puppetmaster is a known issue in the community, but I dont think it is as drastic as you portray.

It's possible to run puppet with storeconfigs but without running a puppet master (that gets around the bugginess and resource consumption of the puppetmaster and puppet daemons, while providing the convenience of centralization). For our purposes, however, we decided we did not want any form of centralization that would provide an additional point of vulnerability.

I am also not really convinced that storeconfigs presents a point of vulnerability, or that going masterless eliminates one. My storeconfigs database holds pretty trivial, non-compromising information that just links a hostname to a resource, such as nagios. It is a centralized resource, so by definition there is a general vulnerability there, but that isn't specific to storedconfigs. In fact with a masterless setup there are other vulnerabilities that you are getting, that you wouldn't otherwise have with a puppetmaster setup. For example, every masterless node has write access to the storedconfigs, which allows any compromised node to inject files on any other node that is doing file collection.

There's no question - the setup was rather tedious (we're using runit to maintain an ssh-agent for each root user)

I have the impression from reading this that the only tedious thing that you ran into was using runit to maintain a ssh-agent for each root user, but I'm guessing there are other tediums involved, and I'm interested to know what those are. I suspect that your post takes the approach of highlighting the disadvantages of doing a puppetmaster setup, and downplaying the pain in running a masterless setup. I think that there is a lot more pain than you have detailed, which is the part I am interested in.

I'm also not really sure I understand what purpose the ssh-agent serves in this setup?

Shared modules

Your discussion of shared modules is confusing to me in a number of respects. First of all, I know it is possible to use shared modules on masterless nodes, so I dont see this as an argument either for a masterless puppet setup, or against shared modules.

Now enter the puppet module. In addition to learning puppet syntax (and struggling with git) you now need to understand how the third party module works. With software programming, I typically don't need or want to learn how a library or class does what it does - that's the beauty of object-oriented programming: it hides the complexity. But when it comes to configuring the servers that I will be responsible for debugging and maintaining, I really need to know exactly what is happening.

I guess I fundamentally disagree with you here. I think learning how a shared module works is actually quite a useful process. Its a great educational opportunity to learn things about puppet from other people, and I think it pays off in the long run, enormously. I guess I dont think of a shared module as an abstraction like a library, which may be why I think differently about them, I dont just take a module and throw it down without understanding how it works, or what it does, in fact until I am comfortable with the module doing what I need, I dont use it. I find them super useful, the network effects gained from collaborative efforts vastly outweigh the time it takes to understand what the module is doing.

That said, I fully understand the learning curve involved in puppet, and can understand the argument that learning a module is another thing that must be overcome. However, I dont think that means discounting shared modules, rather it just means you aren't ready to take on that additional burden yet, but at some point your familiarity with puppet will make the shared module learning curve flatten out and instead of it being a burden, the benefits will be clear.

To further compound the problem, I found myself wading through third party module code designed to work on Debian, Ubuntu, CentOS, Redhat, gentoo... and more. We run entirely on Debian - we don't need any of this extra code. And, once I got rid of all the other operating systems, I was still left with a complex module that allows you to configure software in ways we'll never need.

I dont find this as problematic as you do, its actually quite easy to ignore the other operating systems, and the modules aren't as complicated as I feel you are making them out to be. Finally, not taking advantage of all the possible ways to configure software is not a bad thing in my opinion. Especially when later I find the need for those things that I didn't need before. In fact, most software I use has functionality that I never need (eg. aptitude moo).

In the end, we tore out most of these third party modules and replaced them with file and exec puppet resources that did exactly what we needed them to do. Our code base is now much smaller and simpler.

My understanding was you switched to puppet to get away from writing bash scripts, this sounds like you are just using puppet to write bash scripts. This is where your comment about libraries belongs, puppet provides you with abstracted types, to hide complexity, its better to use those! I will certainly admit that its not always easy to find a way to do that, and I often recommend that people who are getting going with puppet start simply by just shipping the configuration file and some execs, but it is often said in the puppet communities that overuse of file and exec resources is an indication that something is not right. I think its a little more nuanced than this, but essentially true.

I don't really see the argument about smaller being something that is a benefit, compared to what you lose. Even the most complicated module that I've seen that has tests, and configuration files is only a few hundred K, which is nothing.

Finally, the shared module discussion doesn't seem to be related to a masterless setup at all, it seems more of a rant about your frustration with shared modules (ie. unreadable, and multi-distro). There are plenty of modules that do not use storedconfigs, and work fine with a masterless setup, and personally, I would love to see any issues you ran into with shared modules and a masterless setup be fed back to the shared-module community, so others can benefit from the frustrating efforts you have been going through. I'd love to be able to switch to a masterless setup some day, and having that capability built into modules would make that all the easier.

Comment by micah [id.mayfirst.org] Wed 01 Jun 2011 10:23:21 AM EDT

Thanks for the great write-up about masterless setups! I was eager to hear how this was going, and its good to see that it is going well.

Thank you for the thoughtful responses :).

To make matters worse, it seems as though a common practice is to generate and store all server ssh keys (private and public) on the puppet master and then push the private keys to their respective nodes. That means an intruder doesn't need to write to the puppet master, just reading these keys would be enough to compromise all servers in your network.

I'm not sure that this is a common practice, I'm curious where you got that impression from?

I got the impression from the ssh_keygen function in the sshd shared puppet module. I was trying to use it without success (I thought the function was run on the node). I asked in irc.indymedia.org#puppet and someone (sorry, don't remember who) explained that functions are always run on the puppetmaster. They went on to explain that they push all keys from the puppetmaster to the nodes after creation, keeping a backup on the puppetmaster.

I made a pretty big and unsubstantiated logical leap in my blog (now corrected) about this being a common practice. I really don't have enough experience to say one way or another.

(I also learned more reasons for going without a puppet master, like not needing a server with 16GB of RAM!)

I dont know where you got 16GB from (perhaps from someone who is running a very old version of puppet, which did have some memory issues?). But even with the memory issues, 16GB is more than I've ever needed! I know people who are running puppetmaster with only 256megs of RAM... that said, scaling puppetmaster is a known issue in the community, but I dont think it is as drastic as you portray.

Yes - you are right again. I know I found someone reporting the 16GB of number, but can't seem to find it. Sloppy of me. I've just removed that reference as well.

It's possible to run puppet with storeconfigs but without running a puppet master (that gets around the bugginess and resource consumption of the puppetmaster and puppet daemons, while providing the convenience of centralization). For our purposes, however, we decided we did not want any form of centralization that would provide an additional point of vulnerability.

I am also not really convinced that storeconfigs presents a point of vulnerability, or that going masterless eliminates one. My storeconfigs database holds pretty trivial, non-compromising information that just links a hostname to a resource, such as nagios. It is a centralized resource, so by definition there is a general vulnerability there, but that isn't specific to storedconfigs. In fact with a masterless setup there are other vulnerabilities that you are getting, that you wouldn't otherwise have with a puppetmaster setup. For example, every masterless node has write access to the storedconfigs, which allows any compromised node to inject files on any other node that is doing file collection.

I think we're on the same page here - if your goal is to reduce single points of vulnerabilities, then it doesn't make sense to run puppet without a master but with storeconfigs. My mention of that option is only in passing. I just did a minimal update to try to clarify that point - we are running neither the master or storeconfigs.

There's no question - the setup was rather tedious (we're using runit to maintain an ssh-agent for each root user)

I have the impression from reading this that the only tedious thing that you ran into was using runit to maintain a ssh-agent for each root user, but I'm guessing there are other tediums involved, and I'm interested to know what those are. I suspect that your post takes the approach of highlighting the disadvantages of doing a puppetmaster setup, and downplaying the pain in running a masterless setup. I think that there is a lot more pain than you have detailed, which is the part I am interested in.

Ha. Well, maybe it's all in the phrasing. This blog covers in detail all the problems we had - just phrased in terms of the solutions. Every point of difference described in this post led to hours and even days of research, extra finagling and heartache to get it working the way we wanted it to work.

To the credit of puppet (and monkeysphere), I don't think we had to make any compromises (in terms of the functionality we wanted) to run puppet without a master. Others, who rely on reporting mechanisms (or any of the many features of puppet that I still don't know about) that are provided by storeconfigs or puppetmaster may not have the same experience.

The main compromise was in the time it took to set things up. I probably spent 3 - 4 times the number of hours to get us up and running that it would have taken if we used a more traditional model.

I'm also not really sure I understand what purpose the ssh-agent serves in this setup?

Again - more tangential than anything else. Since each root user has to ssh, using the monkeysphere, to the backup servers, it needs to have ssh-agent running to handle the monkeysphere authentication.

Shared modules

Your discussion of shared modules is confusing to me in a number of respects. First of all, I know it is possible to use shared modules on masterless nodes, so I dont see this as an argument either for a masterless puppet setup, or against shared modules.

Yes - that is correct - shared modules work great in a master-less setup. Perhaps the blog is mis-titled. Masterless puppet is the main focus, but while I was at it, I thought I would explore more of the differences in our approach from the standard approach.

Now enter the puppet module. In addition to learning puppet syntax (and struggling with git) you now need to understand how the third party module works. With software programming, I typically don't need or want to learn how a library or class does what it does - that's the beauty of object-oriented programming: it hides the complexity. But when it comes to configuring the servers that I will be responsible for debugging and maintaining, I really need to know exactly what is happening.

I guess I fundamentally disagree with you here. I think learning how a shared module works is actually quite a useful process. Its a great educational opportunity to learn things about puppet from other people, and I think it pays off in the long run, enormously. I guess I dont think of a shared module as an abstraction like a library, which may be why I think differently about them, I dont just take a module and throw it down without understanding how it works, or what it does, in fact until I am comfortable with the module doing what I need, I dont use it. I find them super useful, the network effects gained from collaborative efforts vastly outweigh the time it takes to understand what the module is doing.

I have also learned enormously about puppet from reading other people's shared modules. It's probably the single best way to learn how to write your own puppet code.

That said, I fully understand the learning curve involved in puppet, and can understand the argument that learning a module is another thing that must be overcome. However, I dont think that means discounting shared modules, rather it just means you aren't ready to take on that additional burden yet, but at some point your familiarity with puppet will make the shared module learning curve flatten out and instead of it being a burden, the benefits will be clear.

This blog represents version .01 of MFPL's puppet repository :). We'll see how it changes. I'm certainly reacting in part to the MFPL support meeting where I presented version .001 (with dozens of shared modules and the need to use git-submodules). I almost got thrown out the window.

To further compound the problem, I found myself wading through third party module code designed to work on Debian, Ubuntu, CentOS, Redhat, gentoo... and more. We run entirely on Debian - we don't need any of this extra code. And, once I got rid of all the other operating systems, I was still left with a complex module that allows you to configure software in ways we'll never need.

I dont find this as problematic as you do, its actually quite easy to ignore the other operating systems, and the modules aren't as complicated as I feel you are making them out to be. Finally, not taking advantage of all the possible ways to configure software is not a bad thing in my opinion. Especially when later I find the need for those things that I didn't need before. In fact, most software I use has functionality that I never need (eg. aptitude moo).

In the end, we tore out most of these third party modules and replaced them with file and exec puppet resources that did exactly what we needed them to do. Our code base is now much smaller and simpler.

My understanding was you switched to puppet to get away from writing bash scripts, this sounds like you are just using puppet to write bash scripts. This is where your comment about libraries belongs, puppet provides you with abstracted types, to hide complexity, its better to use those! I will certainly admit that its not always easy to find a way to do that, and I often recommend that people who are getting going with puppet start simply by just shipping the configuration file and some execs, but it is often said in the puppet communities that overuse of file and exec resources is an indication that something is not right. I think its a little more nuanced than this, but essentially true.

I don't really see the argument about smaller being something that is a benefit, compared to what you lose. Even the most complicated module that I've seen that has tests, and configuration files is only a few hundred K, which is nothing.

Finally, the shared module discussion doesn't seem to be related to a masterless setup at all, it seems more of a rant about your frustration with shared modules (ie. unreadable, and multi-distro). There are plenty of modules that do not use storedconfigs, and work fine with a masterless setup, and personally, I would love to see any issues you ran into with shared modules and a masterless setup be fed back to the shared-module community, so others can benefit from the frustrating efforts you have been going through. I'd love to be able to switch to a masterless setup some day, and having that capability built into modules would make that all the easier.

Our puppet setup is definitely an early work in progress - and your comments reflect the perspective of someone with many more years of puppet experience under their belt.

You have prompted me to finally submit the one puppet bug that inhibits the use of shared modules between sites using storeconfigs and those not using storeconfigs. Let's hope it gets fixed!

Comment by jamie [id.mayfirst.org] Thu 02 Jun 2011 10:51:29 AM EDT

Hi,

Interesting read. However, if you are not too "religious" about using Puppet, I would suggest checking out Cfengine 3 (www.cfengine.org). It has no requirement of "masters" (or even network connectivity) whatsoever - it is totally flexible on how you architect it. Puppet is based on ideas of Cfengine 2, but Cfengine 3 came in 2008 and is my favorite at the moment because of its unparalleled power and flexibility.

John

Comment by Anonymous Sun 05 Jun 2011 12:54:20 AM EDT

So you created additional complexity of your system by replacing the centralization of a puppetmaster that you own and control with the centralization of a github repository that you do not own, nor control and have to pay a subscription for?

Is this because you don't like the word master?

Not a win in my book.

Comment by Anonymous Fri 07 Dec 2012 02:06:18 AM EST

My experience suggests that most people who run a puppetmaster also keep their puppetmaster files in git. So, we're actually reducing complexity by removing the puppetmaster from the equation, not replacing it with something new. And I couldn't agree more about github - why would anyone do that? We run our own git repository (all it takes is ssh and some disk space).

If you re-read my blog you'll see that I like the use of the term puppetmaster - it's the technology's design that I don't like.

Comment by jamie Fri 07 Dec 2012 04:17:18 AM EST