Monday, May 31, 2010

Improving Satan and Solaris SMF

One of the features of Solaris that we heavily rely on in our production environment at work is the Service Management Facility, or SMF for short. SMF can start/stop/restart services, track dependencies between services and use them to optimize the boot process, and much more. Particularly handy in a production environment is that SMF keeps track of the processes that a particular service started, and if a process dies, SMF restarts its service.

One gripe I have with SMF is that its process-monitoring capabilities are rather simple. A process associated with a contract (service) must die in order for SMF to get the idea that something is wrong and that the service should be restarted. In practice, more often than not a process gets into a weird state that prevents it from working properly, yet it doesn't die. Failures might include excessive CPU or memory usage, or even application-level failures that can be detected only by interacting with the application (e.g. an HTTP health check). SMF in its current implementation is incapable of detecting these failures. And this is where Satan comes into play.

Satan is a small Ruby script that monitors a process and, following the Crash-only Software philosophy, kills it when a problem is detected. It then relies on SMF to detect the process death(s) and restart the given service. I fell in love with the simplicity of Satan (which was inspired by God) and started exploring the feasibility of using it to improve the reliability of SMF on our production servers.

Upon a code review of the script, I noticed several things that I wished were implemented differently. Here are some:
  • Satan watches processes rather than services as defined via SMF
  • One Satan instance is designed to watch many different processes for different services, which adds unnecessary complexity and lacks isolation
  • Satan is merciless (what a surprise! :-) ) and uses kill -9 without a warning
  • Satan has no test suite!!! :-( (i.e. I must presume that it doesn't work)


Thankfully the source code was out there on GitHub and licensed under the BSD license, so it was just a matter of a few keystrokes to fork it (open source FTW!). By the time I was done with my changes, there wasn't much of the original source code left, but oh well :-)

I'm happy to present to you http://github.com/IgorMinar/satan for review and comments. The main changes I made are the following:
  • One Satan instance watches a single SMF service and its one or more processes
  • The single-service-to-monitor design allows monitoring to be automatically suspended via SMF dependencies while the monitored service is being started, restarted or disabled
  • Several bug fixes around how rule failures and recoveries are counted before a service is deemed unhealthy
  • Satan first tries to invoke svcadm restart, and only if the restart doesn't happen within a specified grace period does it use kill -9 to kill all processes for the given contract (service)
  • Satan now has a decent RSpec test suite (more on that in my previous post)
  • An improved HTTP condition with a timeout setting
  • A new JVM free-heap-space condition to monitor those pesky JVM memory leaks
  • An extensible design that allows new monitoring conditions (rules) to be defined outside of the main Satan source code
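To give a flavor of what such a monitoring condition can look like, here is a minimal, self-contained sketch of an HTTP health check with a timeout, similar in spirit to the improved HTTP condition listed above (the method name and parameters are illustrative, not Satan's actual API):

```ruby
require 'net/http'
require 'timeout'
require 'uri'

# Illustrative health check: the service is considered healthy only if
# the URL answers with a 2xx status within timeout_sec seconds.
def http_healthy?(url, timeout_sec = 5)
  Timeout.timeout(timeout_sec) do
    response = Net::HTTP.get_response(URI.parse(url))
    response.is_a?(Net::HTTPSuccess)
  end
rescue Timeout::Error, SystemCallError, SocketError
  false  # a hung, refused or unresolvable endpoint counts as unhealthy
end
```

A rule along these lines would register a failure whenever it returns false, and after enough consecutive failures the watchdog would restart the service.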
As always there are more things to improve and extend, but I'm hoping that my Satan fork will be a decent version that will allow us to keep our services running more reliably. If you have suggestions or comments, feel free to leave feedback.

Testing matters, even with shell scripts

A few months ago, we migrated our production environment at work from Solaris 10 to OpenSolaris. We loved the change because it allowed us to take advantage of the latest innovations in Solaris land. All was good and dandy until one day one of our servers ran out of disk space and died. WTH? We have monitoring scripts that alert us long before we even get close to running out of space, yet no alert was issued this time. While investigating the cause of this incident, we found out that our monitoring scripts, which work well on Solaris 10, didn't monitor disk space correctly on OpenSolaris. When I asked our sysadmins whether they had any tests for their scripts that could validate their functionality, they laughed at me.

Fast forward a few months. A few days ago I started looking at Satan, to augment the self-healing capabilities of Solaris SMF (think init.d or launchd on steroids). At first sight I loved the simplicity of the solution, but one thing that startled me during the code review was that there were no tests for the code, except for some helper scripts that made manual testing a bit less painful. At the same time, I spotted several bugs that would have resulted in unwanted behavior.

Satan relies on invoking Solaris commands from Ruby, parsing their output and acting upon it. Thanks to its no-BS nature, Ruby makes for an excellent choice when it comes to writing programs that interact with the OS by executing commands. There are several ways to do this, but the most popular looks like this:
ps_output = `ps -o pid,pcpu,rss,args -p #{pid}`

All you need to do is stick the command into backticks and optionally use #{variable} for variable expansion. To get hold of the output, just assign the return value to a variable.
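As a small, self-contained illustration: backticks return the command's stdout, and Ruby additionally exposes the exit status of the last spawned command via the $? global, which is handy for detecting failed commands (echo is used here only as a portable stand-in for a real Solaris command):

```ruby
# Backticks run the command in a subshell and return its stdout as a string.
output = `echo pid 123`.strip

# $? holds the Process::Status of the last spawned command.
puts output          # prints "pid 123"
puts $?.success?     # prints "true"
```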

Now if you stick a piece of code like this in the middle of a Ruby script, you get something next to untestable:
module PsParser
  def ps(pid)
    out_raw = `ps -o pid,pcpu,rss,args -p #{pid}`
    out = out_raw.split(/\n/)[1].split(/ /).delete_if { |arg| arg == "" or arg.nil? }
    { :pid => out[0].to_i,
      :cpu => out[1].to_i,
      :rss => out[2].to_i * 1024,
      :command => out[3..out.size].join(' ') }
  end
end

With the code structured (or rather unstructured) like this, you'll never be able to test whether the code can parse the output correctly. However, if you extract the command execution into a separate method:
module PsParser
  def ps(pid)
    out = ps_for_pid(pid).split(/\n/)[1].split(/ /).delete_if { |arg| arg == "" or arg.nil? }
    { :pid => out[0].to_i,
      :cpu => out[1].to_i,
      :rss => out[2].to_i * 1024,
      :command => out[3..out.size].join(' ') }
  end

  private

  def ps_for_pid(pid)
    `ps -o pid,pcpu,rss,args -p #{pid}`
  end
end

You can now open the module and redefine ps_for_pid in your tests like this:
require 'ps_parser'

PS_OUT = {
 1 => "  PID %CPU    RSS ARGS
12790   2.7 707020 java",
 2 => "  PID %CPU    RSS ARGS
12791  92.7 107020 httpd"
}

module PsParser
  def ps_for_pid(pid)
    PS_OUT[pid]
  end
end

And now you can simply call the ps method and check whether the fake output stored in PS_OUT is being parsed correctly. The concept is the same as when mocking web services or other complex classes, but applied to running system commands and programs.
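To make the idea concrete, here is the whole technique in one self-contained snippet: the module and the stubbed ps_for_pid are inlined so it runs on its own, with a few plain assertions standing in for the RSpec expectations (the FakeWatcher class is a throwaway name for this demonstration only):

```ruby
module PsParser
  def ps(pid)
    out = ps_for_pid(pid).split(/\n/)[1].split(/ /).delete_if { |arg| arg == "" or arg.nil? }
    { :pid => out[0].to_i,
      :cpu => out[1].to_i,
      :rss => out[2].to_i * 1024,
      :command => out[3..out.size].join(' ') }
  end

  # Stubbed out: in production this method shells out to ps(1).
  def ps_for_pid(pid)
    "  PID %CPU    RSS ARGS\n12790   2.7 707020 java"
  end
end

# A throwaway class to mix the module into, just for this demonstration.
class FakeWatcher
  include PsParser
end

info = FakeWatcher.new.ps(12790)
raise "pid mismatch"     unless info[:pid] == 12790
raise "rss mismatch"     unless info[:rss] == 707020 * 1024
raise "command mismatch" unless info[:command] == "java"
```

No process was spawned, yet the parsing logic got exercised end to end against realistic ps output.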

To conclude, what makes you more confident about software you want to rely on: an empty test folder, or all-green results from a test/spec suite?