Interview with Russell Miller on Automated Deployments

Submitted by guest interviewer Matthew Sacks, of TheBitSource.

Russell Miller has been a systems administrator and engineer for 12 years, and has worked in every kind of IT environment, from mom-and-pop companies up to multi-datacenter, high-traffic Internet companies with up to 4,000 servers.

Russell will be delivering a talk on using open source software for automated deployments, including how to provision and configure thousands of servers quickly and automatically. Forrester estimates that 75% of the typical IT budget is spent on maintaining existing IT operations[1-2], and cites automation as a key practice for saving money and managing complexity in IT organizations with a smaller workforce.

Automation is a hot topic, and performing it with open source software increases the savings. Miller will be presenting how organizations can leverage these automation tools, and he spoke with me a bit about automating deployments.

Q: How much of automating deployments is fixing existing processes? 

Russell Miller: It depends on what the existing processes are. If you are starting from scratch, where every server gets built by hand from CD, you have a lot of work ahead of you, because you haven't yet gotten into the proper mindset - you're still thinking in terms of one-off deployments. If you already have a centralized asset database that contains accurate information, much of your work is already done. All you have to do at that point is create the glue that makes everything work well together.

The hardest part about creating a deployment system is getting everything into your centralized asset datastore accurately, and the second hardest part is keeping it that way. Everything else is a simple matter of programming.
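The "glue" Miller describes might look something like the following sketch, which pulls one host's record out of a centralized asset datastore and turns it into the parameters an automated build (kickstart, PXE, or similar) would consume. The schema, field names, and profile naming convention here are illustrative assumptions, not details from the talk; any SQL-backed inventory would work similarly.

```python
# Hypothetical sketch: driving automated builds from a centralized
# asset datastore. Schema and naming are assumptions for illustration.
import sqlite3


def build_params(conn, hostname):
    """Look up one host's record and return the values the build
    'glue' (kickstart/PXE templates, etc.) would need to build it."""
    row = conn.execute(
        "SELECT hostname, role, ip FROM assets WHERE hostname = ?",
        (hostname,),
    ).fetchone()
    if row is None:
        raise KeyError(f"{hostname} not in asset datastore")
    host, role, ip = row
    return {
        "host": host,
        "role": role,
        "ip": ip,
        # Hypothetical convention: one build profile per server class.
        "profile": f"{role}-profile.cfg",
    }


if __name__ == "__main__":
    # Stand-in for the real asset database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE assets (hostname TEXT, role TEXT, ip TEXT)")
    conn.execute("INSERT INTO assets VALUES ('web01', 'webserver', '10.0.0.11')")
    print(build_params(conn, "web01"))
```

The point of the sketch is Miller's ordering of the problem: once the datastore is accurate, generating build inputs from it is "a simple matter of programming."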

Something I'm touching on in my presentation is the idea of server spec sheets, which exemplify the mentality that has to change before you can build a successful automated deployment system. Management comes up with a spec sheet where the developers are asked what needs to be installed on each class of systems. This never works, because the developers usually don't know themselves, and what they come up with is completely useless to those building the systems. When the developer specifications are distilled into a Puppet[3] manifest, for example, the spec sheet becomes code, the code is repeatable, and there is no chance of the spec sheets going out of sync with the actual builds - because the builds actually are done directly off the specs.
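As a rough illustration of "the spec sheet becomes code," a Puppet manifest for one class of machines might look like this. The class name, package, and file paths are hypothetical examples, not from Miller's presentation:

```puppet
# Hypothetical spec-sheet-as-code for a "webserver" class of systems.
class webserver {
  package { 'httpd':
    ensure => installed,
  }

  file { '/etc/httpd/conf/httpd.conf':
    ensure  => file,
    source  => 'puppet:///modules/webserver/httpd.conf',
    require => Package['httpd'],
  }

  service { 'httpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/httpd/conf/httpd.conf'],
  }
}
```

Because every host in the class is built from this manifest, the "spec sheet" and the actual build cannot drift apart.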

This is scary for management when they have not seen it work before, but when management can push aside their fears and let it happen in a controlled and well-planned way, they are usually astonished by the results. Package deployments to every server in 15 minutes? Application deployments to farms of a hundred servers in 15 minutes, repeatably? 20 servers built from bare metal in an hour? Once they see the results, it's game over. Going back isn't an option.
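A farm-wide push like the "hundred servers in 15 minutes" scenario boils down to fanning an idempotent command out to every host in parallel. This sketch is an assumption about how such a fan-out could be structured, not Miller's implementation; the runner abstraction, host list, and command are all illustrative, and a real system would pull its host list from the asset datastore.

```python
# Hypothetical sketch: parallel fan-out of one idempotent deploy
# command to a farm of hosts. Not from Miller's talk.
import subprocess
from concurrent.futures import ThreadPoolExecutor


def ssh_run(host, command):
    """Run `command` on `host` over SSH; returns the exit status.
    Assumes key-based auth is already in place."""
    return subprocess.call(["ssh", "-o", "BatchMode=yes", host, command])


def deploy(hosts, command, runner=ssh_run, workers=20):
    """Run `command` on every host concurrently; returns a
    {host: exit_status} map so failures can be retried."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {h: pool.submit(runner, h, command) for h in hosts}
        return {h: f.result() for h, f in futures.items()}
```

Usage would be something like `deploy(hosts_from_datastore(), "sudo yum -y update mypackage")`; because the command is idempotent, the same push is safely repeatable, which is the property Miller emphasizes.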

In other words, you fix the processes that don't work and keep the ones that do. How many processes need to be fixed really depends on how dysfunctional they were in the first place. Good processes are almost an emergent property of a good deployment system.

Q: Can open source tools do everything that a commercial automation and provisioning system can do? Have you compared the two? 

Russell Miller: I have not used a commercial provisioning system, primarily because the companies I have worked for are very cost conscious. However, if the option were put on the table, I would probably have second or third thoughts, because the one thing you get from a well-integrated open source deployment system is flexibility. Everything is designed and built exactly to your specifications and no more.

My experience with the commercial systems I have used (such as BMC Control-M and Remedy) is that they will turn down your bed, make you coffee, and leave a mint on your pillow - but when you ask them to do real work, you'll spend more time tweaking, and on the phone with support teams who don't know much more about their product than you do, than getting actual work done. In the end you'll probably have a system you have to live with rather than a system you actually like, and for a price tag that would keep an open source developer fed for a long time. This may work for large, established companies that can afford the cost and are willing to trade flexibility for the support (and legal CYA aspects) of a large software company. For smaller or more nimble companies, it's really not a very good trade.

Russell Miller's SCALE Presentation: http://www.socallinuxexpo.org/scale8x/presentations/using-open-source-au...

[1] Andrew Bartels, "Defining The MOOSE In The IT Room", Forrester Report, (October 2005)
[2] Glenn O'Donnell, "IT Operations 2009: An Automation Odyssey", Forrester Report, (July 2009)
[3] Puppet Data Center Automation http://reductivelabs.com/products/puppet/

Copyright 2002-2010 Linux Expo of Southern California. All Rights Reserved.