Sunday, August 08, 2010

Change of Status

$ sqlplus -s
SQL> connect
SQL> UPDATE employees SET current = false WHERE email = '';
SQL> disconnect
SQL> exit
$ curl -X POST -H "Content-Type: application/json" \
   -d '{ "firstName":"Igor", "lastName":"Minar"}' \

Tuesday, August 03, 2010

Thanks for All the Fish

Hi guys,

Most of you don't know, but today is the wikis' 3rd birthday. In 2007 a bunch of us decided that it was worth it to boldly go where no man has gone before, and on August 3, 2007 we launched the site.

At that time very few corporations were actively using some kind of wiki internally, and there was no known significant public wiki deployment run by a corporation. We were astonished to see the uptake and user interest, and watched the project grow from a few power users and a few dozen wiki pages to tens of thousands of users and tens of thousands of wiki pages.

Thank you all for contributing, providing feedback and helping us to make the project successful.

Despite being a small team (did you know that officially there wasn't a single person working on wikis full-time?), we managed to get a lot done and I'm very proud of our accomplishments.

I do believe that there is still room for improvement, but I made a decision that these improvements will have to be implemented by someone else. I have found a new challenge that I'm going to pursue, and unfortunately it's time for me to hand the wikis project over to a new group that will oversee the operations and development of the site. I did my best to make this transition as smooth as possible and I'm hopeful that the site will be in good hands.

Just by chance, my last day at Oracle coincides with the wikis' 3rd birthday; I'll take that as a good sign. I wish you all the best and I hope to see you around. The Internet is a small place.

Good luck to you all.


PS: If you want to stay in touch, you can find me on LinkedIn at:

DGC VI: Wiki Organization and Working with the Community

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at wiki organization and working with the user community. This post is going to be more subjective than the others, because the recommendations I'm going to make apply to wiki sites with goals and purposes similar to ours. I'm just going to share our experience, and hopefully some of it will be useful for others.

The Purpose

The first thing that should be clear to you when building a wiki site is the purpose it's going to serve. Confluence has been successfully used for many purposes, ranging from team collaboration and documentation writing to serving as a website CMS, just to mention a few. When our team set out to build a wiki site, the goal was to create a wiki platform that could be used by anyone in our company to publicly collaborate with external parties without having to deploy and maintain their own wiki.

It was a pleasant surprise when one of the first groups of users to join our pilot three years ago were technical writers, eager to drop their heavy-weight tools with lots of fancy features in exchange for a lightweight and, more importantly, inclusive collaboration tool. The main issue they were facing was that their processes and tools were very exclusive, and next to impossible for a non-writer to quickly join in order to make small edits. This resulted in lots of proxying of engineering feedback, and inevitable delays. With a wiki, the barrier to entry is very low for almost everyone. There is nothing to install or configure; a browser is all one needs. A wiki allowed a relatively small and overloaded team of technical writers to more efficiently gather and, more importantly, incorporate feedback from subject matter experts into the documentation. Of course there were trade-offs, mainly in the area of post-processing the content for printable documentation (i.e. generating PDFs), but I'm hopeful that as the wiki system matures, more attention will be paid to making this area stronger (Atlassian: hint hint).

Anyway, with the tech writers on board, the purpose, goals, and evolution of our site were heavily influenced by their feedback. In exchange, we received a lot of high-quality content that attracted new users to the wiki. Bootstrapping the site this way greatly helped speed up viral adoption across our thirty-thousand-employee company.

Wiki Organization

When we launched our site three years ago, there were no other big corporations with a public-facing wiki site (many corporations didn't even have an internal wiki yet; boy, has that all changed since then). This put us in a position where we had to be the first explorers, in search of best practices as well as of things that didn't work at all.

Fortunately, since our team successfully pioneered the area of corporate blogging before the wikis launch, we had some experience with building communities that we could leverage.

Some of the main principles that we reused from our blogs site were:

  • Make the rules and policies as simple as possible
  • It is a goal shared by all employees to create a good image of the company and make the company successful. We should trust their judgement and empower them to do the right thing.
  • The team running the site is small, so the employees should be able to do as much as possible on their own (self-provisioning FTW!)
  • Since we trust our employees, we should delegate as much decision making and as many responsibilities as possible, and let them delegate some to others, otherwise we won't be able to scale.
  • There should be very little (close to none) policing or content organization done by the core team. We don't have the manpower for that. Besides, the Internet is not policed by anyone and things tend to just work out. The popular, well organized and valuable content bubbles up, in one way or another.

Implemented Actions

With our principles laid out, we took these actions:

  • We integrated Confluence with our single sign-on and user provisioning system, which made it super easy for employees and external users to log in using their existing accounts.
  • Based on the information in our identity systems, we enrolled accounts of all of our employees into an employee-specific Confluence group, which we utilized when setting up global permissions.
  • The global permissions were set up so that employees (and only employees) could create new wiki spaces on their own, whenever they had the need, for whatever purpose.
  • We also opened up all wiki spaces to be viewable by anyone on the Internet, but we left it up to the space admins to restrict permissions if they felt like it was necessary.
  • In order to mitigate spam issues, we made it impossible for anonymous users to obtain any write permissions at either the global or space level.
  • We created a single Confluence theme that was applied to all wiki spaces by default and disabled all the other themes. This was done mainly for technical reasons — the Confluence UI has been changing dramatically over the last few years, and these changes often resulted in a need to modify a custom theme to keep it compatible. If we allowed anyone to create their own theme, we'd never be able to upgrade for fear of breaking someone's theme, or alternatively we would have to coordinate our upgrades with all the maintainers of custom themes.
  • We created an internal mailing list where space admins and other wiki users could share their experience, ask questions, and report issues.

Things We Need to Work on

Nobody's perfect and neither are we, so let's look at what we could improve.

I know I just said that popular content always bubbles up, but considering how hideous the default Confluence front page is, I'd much prefer to utilize that real estate better and highlight popular or interesting content there.

I also think that we could do a better job at highlighting hardworking community members. There were some elaborate attempts to do this, but in my opinion a more lightweight approach could be more suitable for most of the sites.

Lastly, I think that staying in touch with our community is very important, and we could have done a better job at it, for example by holding quarterly internal mini-conferences on various topics, during which we could better gather feedback. Better organized training sessions for our novice users could also help boost our growth even further.


The recommendations and practices that worked for us might not be suitable for all Confluence deployments, but in our case things have worked out. There are still many areas where we could have done a better job, but I guess it's good to always have some room for improvement.

In the next chapter of my guide, we'll discuss issues and solutions that are specific to Internet-facing Confluence deployments.

Monday, August 02, 2010

DGC V: Customizing and Patching Confluence

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at how to customize and patch Confluence.

Customizing Confluence

Before we talk about any customization at all, I need to warn you. Any kind of customization of Confluence (or any other software) comes with a maintenance and support cost. The problems usually arise during or after a Confluence upgrade, and if they catch you unprepared, you might get yourself in a lot of trouble. Keep this in mind and before you customize anything, justify your intent.

There are several ways to customize Confluence. For some, the maintenance and support cost is low; others give you lots of flexibility at a higher cost. Depending on your needs and requirements, you can pick one of the following.

Confluence User Macros

I already mentioned these in the Confluence Configuration chapter — they are easy to create and usually don't break during upgrades, but they are a nightmare to maintain. Avoid them.

Confluence Themes, HTML Headers and Footers

You can easily inject HTML code into the header and footer by editing the appropriate sections of the Admin UI (described in the config chapter). If this HTML code contains visual elements, it's possible that your code will break during upgrades. In general, I would avoid editing headers and footers this way unless I was doing something very simple.

Confluence themes are the way to go. You can either pick a theme that was already built and published by someone else, or you can build your own. Building your own theme will give you the most flexibility, but the cost of maintaining and supporting it will be the highest. You can do some things to cut corners, but be prepared to do some Confluence plugin development (a Confluence theme is really just a type of Confluence plugin).

What worked well for me and our heavily customized theme is to create our theme as a patch against the Confluence default theme. I simply symlink all the relevant files from the Confluence source code into a directory structure that can be built as a Confluence theme/plugin, add my atlassian-plugin.xml, and patch the files with the changes I need, no matter how complex they are. The advantage of this approach is that my theme is always compatible with my Confluence version (after a rebase) and I get all the new features introduced in the new version. The downside is that I often need to rebase my patches during Confluence upgrades, but with a good patch management solution (see below) this headache can be greatly minimized.
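Here is a heavily simplified, self-contained sketch of that approach; every path, file name, and CSS rule below is made up for illustration (and the real setup symlinks files out of the Confluence source tree rather than copying them):

```shell
# Toy version of the "theme as a patch" workflow: start from a stock file,
# keep your change as a unified diff, and apply it at build time.
# All names here are hypothetical.
src=$(mktemp -d)    # stands in for the Confluence source checkout
theme=$(mktemp -d)  # stands in for the theme plugin's build tree

printf 'color: blue;\n' > "$src/main.css"   # pretend stock theme file
cp "$src/main.css" "$theme/main.css"        # (the real setup symlinks instead)

# Our branding change, kept as a patch so rebasing on upgrades stays easy:
cat > "$theme/branding.patch" <<'EOF'
--- main.css
+++ main.css
@@ -1 +1 @@
-color: blue;
+color: orange;
EOF

patch "$theme/main.css" "$theme/branding.patch"
cat "$theme/main.css"
```

When a new Confluence version ships, you refresh the stock file and re-apply the same patch; only the patches that no longer apply cleanly need attention.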

Lastly, there is Theme Builder from Adaptavist. I haven't personally used this Confluence plugin, because it was not popular when we initially created our theme and we didn't want to depend on yet another (at that time unknown) vendor during our Confluence upgrades. If I were starting a theme from scratch, I would compare it with my patching method and see which gives me the most benefits. My main concern with Theme Builder is my ability to version control the theme; if that's not easily possible, it might be a deal breaker for me and many others.

Confluence Plugins

I mentioned Confluence Plugins already in the previous chapter, so I'm not going to repeat myself here.

What I'm going to add is that you really can extend and customize Confluence in crazy ways via plugins. You can either discover existing plugins at the Atlassian Plugin Exchange, or you can build your own with Maven (or the Plugin SDK), Java (or another JVM language), and the Atlassian Plugin Framework.

The nice thing about plugins is that they are encapsulated pieces of code that interact with the rest of Confluence via a public API, and additionally they are hot-pluggable. This means that in theory they should keep working after a Confluence upgrade, and that you can install and uninstall them on the fly without a restart. While the latter is true in practice, the former is not always the case. Confluence's public APIs sometimes change, plugins rely on behavior that was never considered part of the public API, and the UI changes all the time, so any CSS/JavaScript code that relies on absolute or relative positioning or a fixed DOM structure will need occasional fixes during upgrades.

Patching Confluence (source) files

Lastly, I'm going to mention that one can modify Confluence's behavior by modifying the Confluence core files. This is a large topic and deserves its own section. ;-)

Patching Confluence

Patching Confluence is definitely the most advanced way to customize Confluence, especially if you start changing the Java source code, recompiling and creating your own war files. On the other hand, this way you get the most flexibility and will be able to change anything you want, even those things that plugins can't, all at your own risk.

Issues to Be Aware of

There are several potential issues that you should be aware of before you head down this route:

  • you might break something unintentionally — you can mitigate this with testing
  • you might have a hard time preserving your changes during an upgrade — you can minimize the problems by using good patch management strategy (discussed below)
  • you might have a problem getting support — this was never a problem for me mainly because most of my changes have been very isolated, so I could quickly tell if an issue is caused by my patch or if there is a bug in Confluence.

Reasons for Patching

The reasons for which you might want to patch Confluence fall generally into these four categories:

  • config change - as I mentioned in the Confluence Configuration chapter, some of the configuration is done by modifying files that are part of the Confluence standalone or war distribution (usually those in WEB-INF/classes directory). This is quite unfortunate because it adds a significant overhead to upgrades. I would much prefer if this configuration could be done via files in Confluence Home directory, but until that happens, the best way to manage these changes is by treating them as patches.
  • security fix - Atlassian often releases patches that fix security vulnerabilities in older versions of Confluence. What they actually release is a binary or textual file that represents the fixed version of the affected code. This file can then be just dropped into the appropriate location in (typically) WEB-INF/classes directory and the issue is fixed. This is a nice quick hack, but if your site is bigger or you plan to be on an older version for an extended period, your situation will be a lot more maintainable if you transform the fix into a patch against your version of Confluence.
  • temporary bug fix - occasionally Atlassian releases a temporary fix for an issue in a form similar to a security fix that will later on be properly fixed. In the meantime, the temporary fix can be used to work around the problem. Again, for a bigger site things will be a lot more maintainable if you manage this change as a patch.
  • a ui/behavior change - and lastly, if you run a bigger site with lots of requirements coming from different groups of users, you might need to add a feature, disable an existing feature, add or remove a UI element, or change some behavior of Confluence in a way that is not possible via a plugin or a theme. In this case you definitely want to maintain every single such change as an isolated patch. If you don't, you'll be in big trouble when the time comes to upgrade Confluence.
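As a concrete illustration of the security-fix case above: instead of just dropping Atlassian's fixed file over the old one, you can diff the two trees and keep the result in your patch queue. The file name and contents below are entirely hypothetical:

```shell
# Turn a vendor-supplied fixed file into a patch you can track and rebase.
# Paths and contents are made up for illustration.
work=$(mktemp -d)
mkdir -p "$work/orig/WEB-INF/classes" "$work/fixed/WEB-INF/classes"

printf 'escapeHtml = false\n' > "$work/orig/WEB-INF/classes/example.properties"
printf 'escapeHtml = true\n'  > "$work/fixed/WEB-INF/classes/example.properties"

# diff exits non-zero when the trees differ, hence the "|| true"
diff -ru "$work/orig" "$work/fixed" > "$work/security-fix.patch" || true
cat "$work/security-fix.patch"
```

The resulting unified diff is small, reviewable, and survives version bumps far better than an opaque replacement file.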

Patching Methods

Now that we know why we would be interested in patching Confluence, let's look at how to do it. Again, there are several ways, depending on what you need to patch.

  • patch the non-compilable files - these typically include config files as well as JavaScript, CSS, and template files. If you are patching only these types of files, you can just create patches against the Confluence standalone or war distributions (since no compilation is needed). More likely than not, though, once you head down the patching path you'll want or need to do a lot more.
  • patch files that require re-compilation - the Java source code. Soon after I realized that I needed to patch Confluence, I ended up modifying the Java source code in order to fix bugs or modify behavior. Atlassian makes this relatively easy to do, because along with the binary releases they also offer source code releases, which can be used to build Confluence from sources on your own. This is a huge benefit for their customers, especially those who are willing to get their hands dirty to get the most out of Confluence. Once you have access to buildable source code, you can patch it and create your own builds with relatively small effort. The benefit of creating patches against the source release is that you can patch anything and everything (though I'm not saying that you should), from config files, JS, CSS, and template files all the way to core Java class files; and all of that in a consistent and reliable way.

Patch Management

As I mentioned already, whenever you modify the source code you want to create an isolated patch that is a logical grouping of changes needed for one bug fix, config change or feature. Once you have many smaller patches like these you can apply or omit them in a build or update them one at a time when needed.

If you were to use standard command line tools like diff and patch to work with these patches, you would probably go nuts quickly. There are far better, higher-level solutions. Distributed source code management tools, which are becoming an unstoppable force in the SCM arena, offer features that make patch management a piece of cake.

Git, for example, offers git-stash for temporarily shelving uncommitted changes, and tools such as Stacked Git (StGit) build on Git to maintain a stack of patches against a repository. I don't have personal experience with either, but from the docs StGit looks like it should do what we want.

The solution that I've been using and loving for the past 3 years is Mercurial and its bundled extension Mercurial Queues (MQ). Working with this extension is also well documented here and here.

Here are some main points for how I do my patching:

  • I store Confluence sources in my main Mercurial repository. I simply grab the source zip from Atlassian's website, unzip it, rename the root directory to "confluence", and put it into my repository.
  • When a new version of Confluence is released, I delete the confluence directory in the working copy of my repo and replace it with the files from the new zip file and commit the files with --addremove flag, which will automatically add all the new files and remove all the deleted files to/from the repository. This allows me to track diffs between Confluence versions, which is very handy when I'm debugging a new issue and want to find out in which release it was introduced.
  • In addition to this main repository I have a versioned (stored as a real hg repo) Mercurial Queue associated with it. The patch repo is very easy to create, just by issuing hg qinit -c command.
  • Now every time I want to change something, I create a new patch with hg qnew mypatchname.patch, modify the Confluence source, and then just do hg qrefresh to move my changes into mypatchname.patch. This doesn't commit the changes in the patch repo; you have to do that explicitly via hg qcommit, or by changing your current directory to .hg/patches and issuing a regular hg commit there.
  • Once you have patches in your queue, you can now easily apply and unapply patches with commands like hg qpush, hg qpop and hg qgoto.
  • Additionally you can set "guards" on patches, so you can create collections of patches that should be applied only for certain builds. For example, if you have some patches that should be applied only in development environment, you can set guards on them via hg qguard and then switch between these collections via hg qselect followed by hg qpop -a and hg qpush -a.
  • If you need to modify an existing patch, just hg qgoto to it, modify the Confluence source code, run hg qrefresh, and finally hg qcommit.
  • In order to store binary files in your patches (e.g. images), you'll need to tell MQ to use Git's extended diff format. This is done by adding the following lines to the [defaults] section of your ~/.hgrc:
    [defaults]
    diff = --git
    qrefresh = --git
    email = --git
  • When upgrading Confluence, you first need to hg qpop all the patches, replace and commit the new Confluence sources as discussed above, and then reapply your patches with hg qpush -a. If you are lucky, all changes will apply smoothly; however, sooner or later, especially once your patch collection grows to a decent size, you'll need to rebase your patches. Conflicts are resolved via a 3-way merge, which can be done either by hand or with a fancy 3-way merge tool. Once all the patches are applied, be sure to test that everything still works as originally intended.
  • To make conflict resolution less frequent, I strive to create patches that do as little as possible to get things done. I avoid major refactorings and API changes, and I "forget" about some best practices, especially in cases when I know that Atlassian won't be interested in accepting my patch upstream. For these patches the main focus should be on getting things done, robustness, and maintainability.
  • Lastly, if there is a patch that is generally useful for all Confluence users, I usually attach it to the relevant RFE/bug report on Confluence's bug tracker. The fewer patches I have to maintain, the better. When a patch is accepted upstream, I simply remove it with hg qrm.

I could go on and on about what a life-saver Mercurial Queues are, but the best way to get to know them is to experiment on your own. I strongly encourage you to do that; it's a good tool to have in your toolbox.

Just to give you some inspiration, here is a list of some of the patches that I created for our build:

  • bundle jdbc driver with the war (by specifying new maven dependency in confluence's pom file)
  • configure Seraph login/security framework
  • replace Confluence's favicon with ours
  • modify the default log4j config
  • customize error pages
  • turn Australian English language pack into US English
  • remove all lower() function calls from Hibernate mapping files and Java classes to get a major boost in db performance (see CONF-10030 and this doc)
  • disable mail archiving UI
  • enforce that our custom theme is the default and only available theme
  • remote api security enhancements
  • allow only members of our employee group to become space admins
  • as I already mentioned, our whole theme plugin is implemented as a bunch of patches against the default Confluence theme
  • and the list goes on and on...


In this chapter we went through several possible ways to customize Confluence. Plugins and themes are definitely the safest and most manageable way to go; however, patching, if done right, will give you the most flexibility. If you use the right tools for patch management (like Mercurial Queues), you'll be able to manage big collections of patches with very little maintenance overhead.

Next time we'll have a look at a non-technical aspect of running a large Confluence wiki site.

Friday, July 30, 2010

DGC IV: Confluence Upgrades

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at Confluence upgrades.

Confluence Release History and Track Record

I started using Confluence at around version 2.4.4 (released March 2007). A lot has changed since then, mostly for the better. In my early days, Atlassian was spitting out one release after another — typically 3 weeks or less apart — followed by a major release every 3 months. You can check out the full release history on their wiki.

This changed later on, and recently there have been fewer minor releases and bigger major releases delivered every 3.5-4 months. Depending on your point of view, this is either good or bad. It now takes longer to get long-awaited features and fixes, but on the other hand the releases are more solid and better tested.

For major releases, Atlassian now usually offers an Early Access Program, which gives you access to milestone builds so that you can see and mold the new stuff before it ships.

Contrary to the past, the minor versions have been very stable lately and have contained only bugfixes, so it is generally safe to upgrade without a lot of hesitation.

The same can't be said about major releases. Even though the stability of x.y.0 releases has been dramatically improving lately, I still consider it risky for a big site to upgrade soon after a major release is announced. Wait for the first bugfix release (x.y.1), monitor the bug tracker, knowledge base and forums, and then consider the upgrade.

Having gone through many upgrades myself, I think that it is a good practice to stay up to date with your Confluence site. We have usually been at most one major version behind and frequently on the latest version, but as I mentioned avoiding the x.y.0 releases. This has been working well for us.

Staying in Touch and Getting Support

In order to know what's going on with Confluence releases, it is a good idea to subscribe to the Confluence Announcements mailing list. This is a very low traffic mailing list used for release and security announcements only.

Atlassian's tech writers usually do a good job at creating informative release notes, upgrade notes and security advisories, so be sure to read those for each release (even if you are skipping some).

There are several other channels through which people working on Confluence (plugin) development can communicate and support each other; these include:

Despite Atlassian's claims about their legendary support, I found the official support channel rarely useful. Being a DIY guy with reasonable knowledge of Confluence internals, I usually found myself in need of more qualified support than what the support channel was created for. For this reason, my occasional support tickets usually ended up being escalated to the development team instead of being handled by the support team.

On the other hand, the public issue tracker has been an invaluable source of information and a great communication tool. I wish that more of my bug reports had been addressed, but for the most part I have been receiving a reasonable amount of attention, even though sometimes I had to request escalation to have someone look at and fix issues that were critical for us.

The biggest hurdle I've experienced with bug fixes and support is that sites of our size are not the main focus for Atlassian, and they are not hesitant to be open about it. I often shake my head when I see features of little value (for us, that is, because they target small deployments and have little to do with core wiki functionality) being implemented and promoted, while major architectural issues, bugs, and highly anticipated features go without attention for years. Just browse the issue tracker and you'll get the idea.

Confluence Upgrades

The core of the upgrade procedure will depend on the build distribution type you use (standalone, war, building from source), but fundamentally in all cases, you need to shut down your Confluence, replace your app (standalone or war) with the new version and then start it again. An automated upgrade process will take care of updating the database schema, rebuilding the search index and other tasks required for a successful upgrade.

That was the good news. The bad news is that there is a lot more work to be done in order to successfully upgrade a site with as little downtime as possible.

Dev and Test Deployments and Testing

Before you upgrade the real thing, you should first get familiar with the release by upgrading your dev and test environments.

It's often handy to invite your users to do a brief UAT (user acceptance testing) on your test instance as they might catch something that you or your automated tests haven't.

Picking the Outage Window

Based on your users' usage patterns (as easily identified by web analytics solutions like Google Analytics), you should pick a time when the usage is low. For our global site this has been early mornings at around 4:30 or 5am PT.

When it comes to picking a day, we usually stuck with Tuesdays, Wednesdays, or Thursdays. Nobody wants to be dealing with an issue during a weekend, when internal (infrastructure) or external (Atlassian) support is harder to get hold of.

You also want to communicate the planned outage to your users, so that they are not caught by surprise when you announce an outage on a day when they are releasing important documents on the wiki.

As far as outage duration goes, we usually plan for a 30min outage during a 1 hour window and most of the time have been able to bring the site back online within 30min or less.

Ready, Set, Go!

The actual deployment consists of several steps, which in our case are:

  • disabling load balancing for both nodes (which automatically triggers redirection of all requests to a maintenance page hosted elsewhere)
  • shutting down both nodes
  • disabling MySQL replication between the master and slave db
  • taking ZFS snapshot of the Confluence Home directory
  • taking ZFS snapshot of the MySQL db filesystem on the master
  • deploying the new war file
  • starting one node (while the load balancer still ignores it)
  • watching container and Confluence logs for any signs of problems
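The pre-upgrade steps above can be sketched as a small runbook script. The one below only prints the commands instead of executing them, and the service name, ZFS dataset names, and paths are all hypothetical, so treat it as an outline rather than something to run as-is:

```shell
#!/bin/sh
# Dry-run outline of the pre-upgrade steps; all names are made up.
set -eu
stamp=$(date +%Y%m%d-%H%M)
home_ds="tank/confluence-home"   # hypothetical ZFS dataset for Confluence Home
db_ds="tank/mysql-master"        # hypothetical ZFS dataset for the MySQL datadir

run() { echo "+ $*"; }           # print instead of executing

run svcadm disable confluence                  # shut down both nodes
run mysql -e "STOP SLAVE;"                     # pause master/slave replication
run zfs snapshot "${home_ds}@pre-upgrade-${stamp}"
run zfs snapshot "${db_ds}@pre-upgrade-${stamp}"
run cp confluence-new.war /opt/confluence/confluence.war
run svcadm enable confluence                   # start one node, watch the logs
```

The snapshots are what make the rollback path described below cheap: reverting is a matter of rolling the filesystems back to the pre-upgrade snapshot and redeploying the old war.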

At this point, we have one of our nodes up and running (hopefully :-)). We can log in with an admin account and check if everything works as expected. The next tasks include:

  • upgrading installed plugins
  • upgrading custom theme (if there is one)
  • running a bunch of automated or manual tests, just to verify that everything is ok

If things are looking good, we can allow the load balancer to start sending requests to our upgraded node. We continue watching the logs, eventually deploy the war on the second node, and re-enable MySQL replication.

If any issues occur during the deployment, we can simply:

  • shut down the upgraded node
  • revert to the latest Confluence Home snapshot
  • revert to the latest MySQL db snapshot
  • redeploy the older version of war file
  • either retry the deployment, or re-enable the load balancer and work on resolving the issues outside of the production environment

In my experience from all the dev, test and prod deployments, we've had to roll back and redo an upgrade from scratch only once or twice. It's very unlikely that you'll have to do it, but it's better to be ready than sorry.

If you are building Confluence from patched sources and deploy your own builds frequently, then you might want to consider automating your deployments with tools like Capistrano. This will save you a lot of time and make the deployments more reliable and consistent.


If you do your homework, Confluence is quite easy to upgrade. It's unfortunate that the entire cluster must be shut down for an upgrade even between minor releases, but if you plan your deployment well, you will be able to minimize the downtime to just a few minutes outside of peak hours.

In the next chapter of this guide, we'll take a look at customizing and patching Confluence.

DGC III: Confluence Configuration and Tuning

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at Confluence configuration and tuning.

There are four ways to modify Confluence's runtime behavior:

  • Config Files in Confluence Home directory
  • Config Files in WEB-INF/classes
  • JVM Options
  • Admin UI

Config Files in Confluence Home directory

The Confluence Home directory contains one or more config files that control the runtime behavior of Confluence. The most important file is confluence.cfg.xml, which must be present in order for Confluence to start. This file can be modified by hand while Confluence is shut down, but it also gets modified by Confluence occasionally (mostly during upgrades). Your changes will be preserved as long as you made them while Confluence was offline.

Another relevant file is tangosol-coherence-override.xml, which unfortunately must be used to override Confluence's lame multicast defaults for cluster setup (see below).

Lastly there is config/confluence-coherence-cache-config-clustered.xml which contains configuration of the Confluence cache. Generally you don't want to modify this file by hand. I’ll come back to talk about cache configuration later in the Admin UI section of this chapter.

In general it is advisable to be very consistent about your environment, so that you can then just have a single version of these files that you can distribute on all servers when needed. This includes the directory layout, network interface names, and so on.

A combination of the first two files allows you to configure the cluster's multicast settings: the multicast IP address, port, network interface, and TTL.


As I mentioned, this configuration is split between two config files. confluence.cfg.xml contains confluence.cluster.* properties, which allow you to set multicast IP, interface and TTL, but not the port. Only tangosol-coherence-override.xml can do that.

The cluster IP is by default derived from a "cluster name" specified via the Admin UI or installation wizard. For some reason Atlassian believes that in an enterprise environment one can just let software pick a random IP and port to run multicast on. I don't know of any serious datacenter where things work this way. You'll likely want to explicitly set the IP, port, interface name and TTL, and the only way to do that is by modifying these files by hand and ignoring the "cluster name" setting in the UI. Make sure that the settings are consistent in both files.
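As a sketch, the override in tangosol-coherence-override.xml would look roughly like this (the address, port and TTL values are placeholders; the element names follow the Coherence operational config format):

```xml
<coherence>
  <cluster-config>
    <multicast-listener>
      <!-- placeholder values; keep them in sync with confluence.cfg.xml -->
      <address>231.1.1.1</address>
      <port>34001</port>
      <time-to-live>1</time-to-live>
    </multicast-listener>
  </cluster-config>
</coherence>
```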

DB Connection Pool

Confluence comes with an embedded connection pool. I believe that you can use your own too (if it comes with your servlet container), but I'd suggest sticking with the embedded one, since it is widely used and Atlassian runs their tests with it. The pool is configured via confluence.cfg.xml and its hibernate.c3p0.* properties. The most important property is max_size, which prevents the pool from opening more than the defined number of connections at a time. You want this number to be higher than your typical peak concurrent request count (are you monitoring that?), but not higher than what your db can handle. We have ours set to 300, which is double our occasional peaks. Don't forget that in order to take advantage of these connections, you'll likely need to also increase the worker thread count in your servlet container.
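As a sketch, the pool size lives in confluence.cfg.xml like this (the max_size value matches the 300 mentioned above; the min_size line is an illustrative addition, not our exact setting):

```xml
<!-- fragment of confluence.cfg.xml -->
<property name="hibernate.c3p0.max_size">300</property>
<!-- illustrative: keep a small number of connections always open -->
<property name="hibernate.c3p0.min_size">20</property>
```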

DB Connection

The connection is configured via hibernate.connection.* properties in confluence.cfg.xml. Depending on your db, you might need to specify several settings for the connection to work well and grok UTF-8. For our MySQL db, we need to set the connection url to something like
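The relevant property in confluence.cfg.xml would look roughly like this (host, port and database name are placeholders):

```xml
<!-- fragment of confluence.cfg.xml; note the XML-escaped ampersand -->
<property name="hibernate.connection.url">jdbc:mysql://dbhost:3306/confluence?useUnicode=true&amp;characterEncoding=utf8</property>
```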

Note that if you are editing this file by hand, you must escape illegal xml characters. More info about db connection can be found in the Confluence documentation.

Config Files in WEB-INF/classes

Just a side note: if you are building confluence from source then these files can be found at confluence/confluence-project/conf-webapp/src/main/resources/.

These files are the most cumbersome to work with, because you need to reapply your changes to them after each upgrade. I'll describe how we use our automated patching machinery to do this in a future chapter of this guide. For now let's just go over the available config files and what you can change here.

atlassian-user.xml - used to configure user provisioning, e.g. LDAP. For more info read the docs.

confluence-init.properties - this file allows you to specify the path to the Confluence Home directory. There is a better way to set this; see the JVM Options section below.

log4j.properties - modify logging preferences; this can also be done via the UI, but AFAIK the changes are not preserved after a restart or upgrade.

seraph-config.xml - controls authentication framework. You'll likely need to modify this file if you have a custom authenticator and login page.

I should note that there are many other (usually xml) configuration files bundled with individual jars in WEB-INF/lib, but those rarely need to be modified.

JVM Options

Another way to configure certain settings is via JVM options. From the complete list of recognized options these are the ones we use:

-Dcom.atlassian.user.experimentalMapping=true - this is a critically important setting for us with 180k users. Without it, our cluster panics due to data overload (CONF-12319). Unfortunately, despite Atlassian's claims that this experimental feature is production ready, it got broken soon after release, and then again recently, so you'll have to patch the atlassian-user module to get it to work.

-Dconfluence.disable.peopledirectory.anonymous=true - for big public deployments the people directory is a privacy risk and generally useless to anonymous users, so we have it disabled for them.

-Dconfluence.disable.mailpolling=true - early on we decided that we don’t want people to build up mail archives on our site. While the feature is useful for small internal wikis, it’s too much of a risk with little reward to provide it on a public wiki. Unfortunately, this option only disables mail fetching. The UI for setting up mail archives will still be present in the wiki; you'll have to patch Confluence to remove it.

I didn't learn about -Dconfluence.home until recently. I would much prefer to use it than to mess with the confluence-init.properties file in WEB-INF/classes.

Admin UI

Most of the Confluence settings can be configured via the Confluence admin interface. The downside is that this configuration is not versioned, and there is no easy way to see diffs or roll back, unless you want to hack the db and replace data from backups. With that in mind, let's look at the most important settings.

General Configuration

Server Base Url - make sure this is set up correctly, otherwise confluence and its plugins won’t work properly.

Users see Rich Text Editor by default - we have this set to off. In the past, many RTE bugs caused headaches for our writers, especially those who did lots of editing. In Confluence 3.2 and 3.3 the editor has improved a lot, and it might be time for us to reconsider this decision.

CamelCase Links - this used to be one of THE wiki features a few years ago, but as wikis have matured and people started creating more and more content, the automatic linking started to cause more problems than it helped. We have it off.

Threaded Comments - very useful; make sure it’s on.

Remote API (XML-RPC & SOAP) - we have ours on, but I patched the remote api code to restrict access to it.

Compress HTTP Responses - OMG please turn this on if it isn't already. It's a major performance booster. Alternatively you might want to do the compression in your webserver, as Tim pointed out in the comments below.

JavaScript served in header - we have this on, but for better performance it should be off. Unfortunately that breaks many plugins and legacy code that uses obtrusive javascript. Since this option has been around for a while, it might be worth it to just set it to off and deal with the remaining broken things as they are identified.

User email visibility - we have this set to visible to admins only, but our power users found it to be a collaboration barrier, so I patched the code and made emails visible to our global employees group in addition to the admin group. It would be nice if Confluence allowed such a configuration out of the box.

Anonymous Access to Remote API - no sane person will leave this on. If I were in charge, I would go as far as removing it from the Confluence product.

Anti XSS Mode - This is a very handy feature. Not 100% bulletproof, but it helped to significantly decrease the number of XSS exploits in Confluence since its introduction.

Attachment Maximum Size (B) - I mentioned this one already in the first chapter when discussing the db configuration. If you are running a cluster (or think that you will eventually run it), set this to some low value. Ours is 5MB.

Connection Timeouts - these options are pretty handy when you have lots of feed macros, gadgets and other plugins that pull content from remote sites. In order to prevent worker thread pileup in your servlet container, don't go beyond the default 10sec (which is already pretty high).

Daily Backup Administration

As I previously mentioned, this backup feature is useless for anything but tiny sites. Disable it.

Manage Referrers

Collecting referrers is ok, but don’t display them publicly if you run a site on the Internet. Otherwise you run a risk of exposing some internal only URIs that might contain confidential information.


Languages

Most of our documentation and content is written in American English, but unfortunately Atlassian doesn't provide such a language pack. I just patch the default Australian English pack to get a US English pack. It works great and is almost no hassle to maintain.

User macros

I discourage their use in an enterprise environment. The lack of versioning, automated testing and documentation makes them a nightmare to maintain. Just create Confluence plugins for everything you need.

PDF Export Language Support

This is a tricky one. It took us quite a while to find a single font that could be used to generate PDFs in almost all languages. Finally we found soui_zhs.ttf, which is distributed with OpenOffice. It's a huge file, but it works like a charm for all kinds of non-Western languages.


Themes

For reasons I'll discuss later, we disabled all the themes except for our custom one, which is the global and default space theme. To disable a theme you have to go to the plugins view and disable the appropriate theme plugins.

Cache Statistics

The name of this section in the UI is misleading, because not only can you view cache statistics here, but more importantly you can fully control the cache sizes via the UI. And in this case, I'm really glad that there is a UI to manage the cache config xml file, which due to its size is really hard to work with by hand. The changes you make via the UI are persisted in the Confluence Home directory and propagated throughout the cluster.

Out of all the things you can tune via the admin UI, the cache tuning will have the biggest impact on your site’s performance. Confluence ships with cache settings optimized for smaller sites, so increasing the cache size is unavoidable for larger deployments.

Tuning the cache settings is a time-consuming process, because you need to balance memory consumption against performance improvements. Usually I revisit the cache stats once a month and look for caches that are performing badly because the number of objects allowed in that particular cache is too low. The Confluence caching system is composed of many caches, each controlled via this UI.

The best indicator of an overflowing cache is when the "Effectiveness" value is low (under 70-80%), the "Percent Used" value is high (over 80%), and the "Expired" value is relatively high compared to the "Hit" value in the same cell. This means that Confluence needs to go to the DB too often, even though it could cache the data in memory if the cache were bigger.

If you don’t understand what all the cache names and numbers mean, don’t worry about that too much. As long as you don’t make any dramatic changes too quickly and you monitor your JVM heap usage, you can’t break anything.

As you increase the cache sizes, you'll eventually start running out of heap space. That's why you need to monitor the JVM and increase the -Xmx value as needed. If the number of concurrent users increases, you might also need to slightly increase the -Xmn value (see the JVM Tuning chapter for more info).

I wish Atlassian would provide better descriptions for all the available caches, because unless you know Confluence internals well, you won’t know what you are doing and that doesn’t feel good. Additionally, I’d like to see a way to limit memory usage, not the number of objects, because their size varies. Ideally, I'd really like to be able to just say "Use 3GB of memory for cache and distribute it in the most efficient way. Oh and let me know if you need more or less memory to work effectively". It would be better if Atlassian moved away from an in-process cache which in my opinion is not a good fit for Confluence. Maybe we'll get there one day.


Plugins

This section of the Admin UI is where you can install, uninstall, enable and disable plugins and their modules. There is also a Plugin Repository, which additionally allows you to install plugins from Atlassian's remote servers or user-specified URIs. The recently released Atlassian Universal Plugin Manager will eventually replace the latter (or both?); I'm glad to see that happening.

I suggest that you disable plugins that you don't use or don't want your users to use as soon as possible. We disabled all the bundled themes because we wanted to provide users with only one custom theme developed and maintained by us (I'll explain the reasoning in a future chapter). For security reasons, the html and html-include macros should in my opinion be disabled on all but family Confluence deployments. And for performance reasons, the Confluence Usage Stats plugin is not suitable for any bigger deployment.

Plugin installation is very easy to do. That’s both good and bad. The plugin framework provided by Confluence is a very sophisticated piece of software which allows you to install and uninstall plugins on the fly without any need to restart the server. Need to quickly install a fixed version of a buggy plugin without disturbing hundreds or thousands of users that are currently using your site? Done. That’s how easy it is.

On the other hand, it is tempting to install plugins just because they have cool names or promise great features. You can do that in your dev or test environment, but in production you should only install plugins that you picked after some serious consideration.

This is what I look for when deciding whether to install a plugin or not:

  • was the functionality provided by the plugin requested by a larger group of users, or is the plugin needed for site administration purposes?
  • was the plugin developed and tested in-house? If not, is it supported by Atlassian? If not, can we or some respectable Atlassian partner support it should there be problems?
  • is the plugin compatible with our confluence version? does it have a track record of being compatible or was it made compatible with new Confluence versions as they were released?
  • are there no major unresolved bugs in the areas of performance, scalability, data integrity and security?
  • does the plugin have an automated test suite with good test coverage?

If you answer "yes" to all of these questions, then you may go ahead and do a trial before installing the plugin in production. Otherwise, you might provide your feedback to the plugin authors and wait to see if the pending issues get resolved before proceeding.

I don't want to be harsh, but especially 2-3 years ago most of the plugins created for Confluence were crap. As the platform matures and Atlassian partners get more involved, the quality of available plugins has been slowly increasing. The main issue that I see is that the existing plugins are not developed and tested with large-scale deployments in mind. Hopefully things will change as more and more deployments grow beyond small and medium sites. It's unfortunate that even some commercial plugins suffer from the very same issues that plague plugins created by a bunch of volunteers and enthusiasts. So pick your plugins carefully, do a trial, check for unresolved bugs and existing user complaints, and then decide.

I've been reasonably active in the Atlassian development community and from these interactions, I'd like to highlight the work done by Dan Hardiker (Adaptavist) and Roberto Dominguez (Comalatech). And though I haven't worked with guys from CustomWare, they are also considered to be pretty sharp.

Be especially careful with plugins that provide new macros for the wiki content. Once you install such a plugin you won't be able to uninstall it without breaking wiki pages until all the references to that macro are removed (with tens of thousands of pages and no ability to track the references this might be a big challenge).

In general however, try to keep the number of plugins low. It’s better for performance and you won’t get in trouble as often when you need to upgrade Confluence but some of the plugins you use are not compatible with the new Confluence version.


You should now have a good idea about how to configure Confluence and where this configuration is done. In the next chapters we'll look at upgrading Confluence, patching and more.

Tuesday, July 27, 2010

DGC II: The JVM Tuning

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, I’ll be focusing on JVM tuning with the aim to make our Confluence perform well and operate reliably.

JDK Version

First things first: use a recent JDK. Java 5 (1.5) was EOLed 1.5 years ago; there is absolutely no reason for you to use it with Confluence. As George pointed out in his presentation, there are some significant performance gains to be made just by switching to Java 6, and you can get another performance boost if you upgrade from an older JDK 6 release to a recent one. JDK 6u21 is currently the latest release, and that's what I would pick if I were to set up a production Confluence server today.

If you are wondering about which Java VM to use, I suggest that you stick with Sun’s HotSpot (also known as Sun JDK). It’s the only VM supported by Atlassian and I really don’t see any point in using anything else at the moment.

Lastly it goes without saying that you should use -server JVM option to enable the server VM. This usually happens automatically on server grade hardware, but it's safer to set it explicitly.

VM Observability

For me using JDK 6 is not just about performance, but also about observability of the VM. Java 6 contains many enhancements in the monitoring, debugging and probing arena that make JDK 5 and its VM look like an obsolete black box.

Just to mention some enhancements: the amount of interesting VM telemetry data exposed via JMX is amazing; just point VisualVM at a local Java VM to see for yourself (no restart or configuration needed). Be sure to install the VisualGC plugin for VisualVM. In order to allow remote connections you'll need to start the JVM with these flags:
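Something like the following (the port number is an example; see the warning below about authentication and SSL):

```shell
# enable remote JMX on an example port
-Dcom.sun.management.jmxremote.port=9010
# suitable for a quick test only; lock this down as described below
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```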

Unless you make the port available only on some special admin-only network, you should password protect the JMX endpoint as well as use SSL. The JMX interface is very powerful and in the wrong hands could result in security issues or outages caused by inappropriate actions.

For more info about all the options available read this document.

In addition to JMX, on some platforms there is also good DTrace integration which helped me troubleshoot some Confluence issues in production without disrupting our users.
And lastly there is BTrace, which allowed me to troubleshoot a nasty hibernate issue once. It's a very handy tool that, as opposed to DTrace, works on all OSes.

I can't stress enough how important continuous monitoring of your Confluence JVMs is. Only if you know how your JVMs and app are doing can you tell whether your tuning has any effect. George Barnett also has a set of automated performance tests, which are handy to load test your test instance and compare results before and after you make some tweaks.

Heap and Garbage Collection Must Haves

After upgrading the JDK version, the next best thing you can do is to give Confluence lots of memory. In the infrastructure chapter of the guide, I mentioned that you should prepare your HW for this, so let’s put this memory to use.

Before we set the heap size, we should decide between a 32-bit and a 64-bit JVM. A 64-bit VM is theoretically a bit slower, but allows you to create huge heaps. A 32-bit JVM has its heap size limited by the available 32-bit address space and other factors: 32-bit OSes will allow you to create heaps of only 1.6-2.0 GB, while 64-bit Solaris will allow you to create 32-bit JVMs with up to a 4GB heap (more info). For anything bigger than that you have to go 64-bit, which is not a big deal if your OS is 64-bit already. The option to start the VM in 64-bit mode is -d64; on almost all platforms the default is -d32.

Before I go into any detail, I should explain the main objectives of heap and garbage collection tuning for Confluence. The objectives are:
  • heap size - we need to tell the JVM how much memory to use
  • garbage collector latency - garbage collection often requires that the JVM stop your application; these GC pauses are often invisible, but with large heaps and under certain conditions they might become very significant (30-60+ seconds)

Additionally we should also know a thing or two about how Confluence uses the heap. The main points are:
  • Objects created by Confluence and stored on the heap generally fall into three categories:
    • short-lived objects - life-cycle of these is bound to a http request
    • medium-lived objects - usually represent cache entries with shorter TTL
    • long-lived objects - represent cache entries with a big TTL, settings, and infrastructure objects (plugin framework, rendering engine, etc), with cache entries taking up most of the space
  • Confluence creates lots of short-lived objects per request
  • Half or more of the heap will be used by long-lived cache objects

By combining our objectives with our knowledge of Confluence's heap profile, our tuning should focus on providing enough heap space for the cache and the short-lived objects, plus some extra buffer. Given that long-lived objects will (eventually) reside in the old generation of the heap, we want to avoid promoting short-lived objects there, because otherwise we'd need to do massive garbage collections of the old generation unnecessarily. Instead we should try to limit promotion from the young generation only to those objects that will likely belong to the long-lived category.

We'll also need to figure out how much heap we need. Unfortunately there isn't an easy way to find this out, except for some educated guessing and trial & error. You can also read the HW Requirements document from Atlassian, which can give you an idea about some starting points. I believe we started at 1GB, but over time went through 2GB, 3GB, 3.5GB, 4GB and 5GB, all the way to 6GB.

The Confluence heap size depends on the number of concurrent users and the amount of content you have. This is mainly because Confluence uses a massive (well, in our case it is) in-process cache that is stored on the heap. We’ll get to Confluence and cache tuning in a later chapter of this guide.

So let's set the max heap size. This is done via the -Xmx JVM option. The additional -Xms parameter says that the JVM should reserve all 6GB at startup; this is to avoid heap resizing, which can be slow, especially when dealing with large heaps.
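For our 6GB heap, the two options look like this:

```shell
# reserve the full 6GB heap at startup to avoid slow resizing
-Xmx6g -Xms6g
```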

The rest of the heap settings in this post are based on 6GB heap size, you might need to make appropriate changes to adjust for your total heap size.

The next JVM option is -Xmn, which specifies how much of the heap should be dedicated to the young generation (read up on generational gc if you don't know what I'm talking about). The default is something like 25% or 33%; I set the young generation to ~45% of the entire heap:
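With a 6GB heap, ~45% works out to roughly the following (the exact value here is an illustration, not our verbatim setting):

```shell
# young generation of ~2.7GB, about 45% of the 6GB heap
-Xmn2700m
```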

Increasing the permanent generation size is also usually required given the number of classes that Confluence loads. This is done via -XX:MaxPermSize option:
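For example (the 512m value is an illustrative guess, not our exact setting):

```shell
# grow PermGen beyond the default to fit all the classes Confluence loads
-XX:MaxPermSize=512m
```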

Given that determining the right heap size for your environment is a non-trivial task for larger instances, especially if occasional memory leaks start consuming the precious memory, you always want to have as much data as possible to debug memory exhaustion issues. Aside from good monitoring (which I mentioned in the previous chapter), you should also configure your JVM to dump the heap when an OutOfMemoryError occurs. You can then analyze this heap dump for potential memory leaks.

Since we are dealing with relatively big heaps, make sure you have enough space on the disk (heap dumps for a 6GB heap usually take 2-4GB). I've had a very good experience using Eclipse Memory Analyzer to analyze these large heaps (VisualVM or jhat are not up to analyzing heaps of this size). The relevant JVM options are:
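These are standard HotSpot flags (the dump path is a placeholder; pick a disk with a few GB free):

```shell
# write a heap dump whenever the JVM runs out of heap space
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/tmp/confluence-dumps
```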

While trying to minimize gc latency in order to avoid situations when users have to wait several seconds for the stop-the-world (STW) gc to finish before their pages render is a commendable thing to do, the main reason why you want to do this is to avoid Confluence cluster panics.

Confluence has this “wonderful” cluster safety mechanism that is sensitive to any latency bigger than a few tens of seconds. In case a major STW gc occurs, the cluster safety code might announce cluster panic and shut down all the nodes (that’s right, all the nodes, not just the one that is misbehaving).

In order to be informed of any latencies caused by gc, you need to turn on gc logging. This is the magic combination of switches that works well for me:
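Something along these lines for HotSpot JDK 6 (the log path is an example, and treat the exact switch set as a typical combination rather than my verbatim config):

```shell
# basic gc logging with per-collection detail and timestamps
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Xloggc:/var/log/confluence/gc.log
```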

Unfortunately the file specified via -Xloggc will get overwritten during a JVM restart, so make sure you preserve it, either manually before a restart or automatically via some restart script. Additionally, reading the gc log is a tough job that requires some practice, and since the format varies a lot depending on your JDK version and garbage collector, I'm not going to describe it here.

Performance tweaks

The first performance-boosting JVM option I'd like to mention is -XX:+AggressiveOpts, which turns on performance enhancements that are expected to be on by default in future JVM versions (more info).

If you are using a 64-bit JVM, then -XX:+UseCompressedOops will make a big difference and will virtually eliminate the performance penalty you pay for switching from a 32-bit to a 64-bit JVM.

And lastly there is -XX:+DoEscapeAnalysis, which will boost performance by another few percent.

Optional Heap and GC tweaks

To slow down object promotion into the old generation, you might want to tune the size of the survivor space (a heap generation within the young generation). To achieve this, we want the survivor space to be slightly bigger than the default. Additionally, I also want to keep the promotion rate down (objects that survive a specific number of collections in the survivor space will be promoted to the old generation), so I use these options:
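The relevant HotSpot flags look like this (the values are illustrative, not our exact settings):

```shell
# smaller ratio = bigger survivor spaces relative to eden
-XX:SurvivorRatio=6
# let objects age through more young collections before being promoted
-XX:MaxTenuringThreshold=15
# allow survivor spaces to fill up further before promotion kicks in
-XX:TargetSurvivorRatio=90
```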

I also found that by using the parallel gc for the young generation and the concurrent mark and sweep gc for the old generation, I can practically eliminate any significant STW gc pauses. Your mileage might vary on this one, so do some testing before you use it in production. These are the settings I use:
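The standard HotSpot flags for this combination are:

```shell
# parallel collector for the young generation, CMS for the old generation
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
```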


The information above was gathered from years of experience as well as from various sources.

Running Multiple Web Apps in one VM

Don't do that. Really. Don't. Bad things will happen if you do (OOME, classloading issues etc).


Your JVM should now be in a good shape to host Confluence and serve your clients. In the next chapter of this guide I'll write about Confluence configuration, tuning, upgrades and more.

Sunday, July 25, 2010

DGC I: The Infrastructure

In the introductory post, I mentioned that a Confluence cluster is the way to go big. Let's go through some of the main things to consider when you start preparing your infrastructure.

Confluence cluster

To build a Confluence site, you need Confluence :-). Well, make it two... as in a two-node cluster license. I recommend this for any bigger site with relatively high uptime expectations, even if you know that your amount of traffic won't require load balancing between two nodes. I often find myself in need of a restart (e.g. during a patch deployment), and with a cluster you can restart one node at a time and your users won't even know about it.


My team operates other big sites, and for all of them we expect some level of redundancy. Typically we split everything between "odd" (composed of hosts with hostnames ending with an odd number) and "even" strings, and this applies to Confluence nodes as well (that's why you need the two-node license). Each string is composed of a border firewall, load balancer, switches and the actual servers (web/application/database/whathaveyou), and both strings can either share the load or work as primary & standby, depending on your application needs and network configuration.

This kind of splitting allows us to take half of our datacenter offline for maintenance when needed, or to absorb a potential failure of any hardware or software within one string without any perceivable interruption of service.

Sure, you can make things even more redundant by adding a third or fourth string, but none of our apps requires that level of redundancy, and the cost and complexity of getting there is therefore hard to justify.

There are two important things that matter when it comes to setting up the network, and both can make or break your Confluence clustering.
  1. The latency between the two nodes should be minimal. Ideally they should be just one hop apart and on a fast network (1GBit). There will be a lot of communication going on between your Confluence nodes, and you want it to happen as quickly as possible, otherwise the cluster synchronization will drag down your overall cluster performance. Don't even think about putting the two nodes into different datacenters, let alone on different continents. Confluence clustering was not built for that type of scenario.
  2. Make absolutely sure that your network (mainly switches, OS, firewall) supports multicast.

The best way to check that multicast works reliably is to use the multicast test tool bundled with Coherence (a library that is bundled with Confluence). To run it, execute the following command on all nodes and check that all packets are being delivered and no duplicates are present:

java -cp $CONFLUENCE_PATH/WEB-INF/lib/coherence-x.y.jar:$CONFLUENCE_PATH/WEB-INF/lib/tangosol-x.y.jar \
  com.tangosol.net.MulticastTest \
  -ttl 1 \
  -local $NODE_IP

In our environment, it took us months of waiting for the right patch from our network gear vendor and some OS patching to make things totally stable. Fortunately, our ops guys eventually found the magic combination of patches and settings, and then we were good to go.

Our site uses both http and https protocols for content delivery, and since we already had an SSL accelerator available in our datacenter, we utilized it for Confluence. That said, with current hardware, I don't think hw SSL acceleration is very important these days.

Another noteworthy suggestion I have for your network concerns the load balancer configuration. We started off with session-affinity-based load balancing, but at one point people started to notice that sometimes they saw different content than their colleagues. This was due to a delay in the propagation of changes throughout the cluster. Usually the delay is unnoticeable, but for some reason that's not always the case. I haven't investigated this issue further and just switched to primary & standby load balancing, which has been working great for us since. This of course will work only if each of your nodes can handle all the traffic on its own, but you can trust me that it solves all the issues with users who don't believe in eventual consistency :-).

Hopefully your load balancer will perform healthchecks against your nodes. The /errors.jsp path is the ideal target for these healthchecks, because it returns HTTP 200 only if everything is OK with the node.
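If you want to see from the command line what the load balancer will see, a minimal sketch of such a check might look like this (the NODE variable and the check_node helper are my own illustration, not anything Confluence ships):

```shell
#!/bin/sh
# Hypothetical healthcheck: succeeds only when /errors.jsp answers HTTP 200.
# NODE is an assumption -- point it at one of your Confluence nodes.
NODE="${NODE:-localhost:8080}"

check_node() {
  # -s silent, -o discard body, -w print only the status code
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$1/errors.jsp" 2>/dev/null) || code="000"
  [ "$code" = "200" ]
}

if check_node "$NODE"; then
  echo "node $NODE healthy"
else
  echo "node $NODE unhealthy"
fi
```

Most load balancers let you configure exactly this kind of probe (expected path plus expected status code) natively, so the script is mainly useful for debugging from a shell.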

When it comes to firewall rules (you have a firewall, right?), you shouldn't allow incoming connections from public networks directly to your servers; all public traffic should go through the load balancer only. As for outbound connections, you should allow your servers to connect to any public server on ports 80 (HTTP) and 443 (HTTPS); these connections are needed for feed retrieval, OpenSocial gadgets and plugin installation.

Hardware (cpu, memory, disk)

Update: I came across this HW requirements document from Atlassian, which is helpful especially for smaller instances.

When you are making your hardware choices, I suggest you stick with a server that is relatively recent and has decent single-threaded performance, yet offers multicore parallelism. Confluence does a fair amount of number crunching per http request, so both single-threaded and multi-threaded horsepower is needed to get good results. Additionally, Confluence's boot process is not the best one, so with poor single-threaded throughput you'll end up waiting minutes for the app to start (at one point I did!).

Confluence loves memory! So don't be stingy. RAM is cheap these days, so get a few gigs that will be dedicated just to Confluence. My instance uses a 6GB JVM heap; with the additional non-heap memory consumption, OS overhead and an extra buffer, I allocated 10GB of RAM for each Confluence node. You will likely start with much lower memory requirements, but as your instance grows, so will the memory requirements - keep that in mind.
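For illustration, the sizing above translates into JVM flags along these lines (the exact numbers reflect my setup, not a general recommendation, and the GC log path is an assumption):

```shell
# Hypothetical JVM sizing flags for one node -- tune against your own measurements.
# Setting -Xms equal to -Xmx avoids heap-resizing pauses; the GC log feeds
# the monitoring discussed later in this post.
JAVA_OPTS="-Xms6g -Xmx6g \
  -verbose:gc -Xloggc:/var/log/confluence/gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
```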

When it comes to disk and disk space, you have to realize three things.
  1. Confluence stores all of its persistent data in a (hopefully remote) database.
  2. Confluence relies on fetching data from its Lucene index, stored on the local file system (each node has its own copy). This index is built from the db contents and can be rebuilt at any time.
  3. Attachments, which can represent a huge chunk of your persistent data, will be stored in the database. Confluence won't let you use e.g. a shared filesystem when you are running a cluster.

All of this means that you will need a few (dozen) gigabytes of local disk space that can be accessed reasonably quickly. An SSD will likely not buy you much here - save it for your DB! Server-grade hard drives configured in redundant software or hardware RAID should be sufficient for your web/application server (you can skip the RAID if you can rebuild the server really quickly after a disk failure).

OS & Filesystem

The choice of OS is often a religious one, but I think it's more important that you are comfortable administering your OS than anything else. We use Solaris 10, or more recently OpenSolaris, everywhere. OpenSolaris especially is superior to most (all?) of the OSes out there (heh, now I'm being religious), but it will be worth cat's pee to you if you have no clue how to work with it and don't have the time or willingness to learn a lot of cool stuff about the OS. In general, I'd say that any 64bit *nix OS should be suitable as long as you know how to use it. You'll want a 64bit OS so that you can load the box with loads of RAM and create a big JVM heap once you need it.

One nice thing that comes with Solaris and OpenSolaris (and BSD) is ZFS file system. If you don't know much about it, I suggest that you read a bit about it. ZFS can make your backup strategy a lot simpler and allows you to revert from a failed upgrade in a matter of seconds. I'm not exaggerating, it happened to me several times. Hopefully Btrfs will soon be production ready for Linux distros and will offer comparable conveniences. If you can't use either of these, you'll have to suck it up and deal with it. I don't envy you...
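To make the convenience concrete, here is a sketch of the snapshot workflow (the dataset name tank/confluence is made up; run as a suitably privileged user on a ZFS-capable OS):

```shell
# Take a named snapshot of the dataset before an upgrade...
zfs snapshot tank/confluence@pre-upgrade

# ...inspect what you have...
zfs list -t snapshot

# ...and if the upgrade goes sideways, roll back in seconds.
zfs rollback tank/confluence@pre-upgrade
```

Snapshots are copy-on-write, so taking one is nearly free in both time and space until the data starts to diverge.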


Virtualization

During the last 3 years, we tried several combinations of deployment configurations for our Confluence site. These include Solaris 10 servers shared by several apps, Solaris 10 Zones (one zone per app) and OpenSolaris with Xen virtualization. Xen and OpenSolaris is what we currently use. It works well, but if I were to make a decision today, I would probably go with OpenSolaris and Zones. This combination gives you the best stability, performance, resource virtualization and application isolation.

In any case, many people ask what is the performance penalty for going virtualized. My answer is that it depends on your application, but for a webapp, more likely than not, it isn't going be the main reason of your performance problems. Decent hardware is going to make the virtualization penalty almost invisible and at the same time will give you flexibility when allocating resources in your data center. Just to give you a rough idea, the overhead for Xen is 10-30%; for Solaris Zones it's a lot less.

Web Container

Atlassian recommends using Tomcat as the web container for Confluence. We could again spend a lot of time fighting a religious battle here, but I'm going to avoid that. If Tomcat works for you and you don't find it lacking features that make enterprise deployments and operation easier, then good for you. You will most likely want to front it with the Apache webserver or something similar though.

I've been using Sun Web Server 7 in my production environment and have been quite happy with it. Another excellent choice is GlassFish v2.1 or v3, which I've been using for Confluence on my Mac. Unfortunately, Confluence doesn't adhere to the Servlet spec in some places, so you'll have to patch it to get it to run with GF v3. GlassFish v2.1 is not affected, but suffers from Xalan class clashes; to fix that, put Confluence's xalan-x.y.z.jar into $GLASSFISH_HOME/domains/$YOURDOMAIN/lib/. Otherwise everything works as expected.

For a bigger site, you'll likely need to increase the worker thread count in your servlet container. Check your container's documentation to see what the default is and how to increase it. You should also know your peak concurrent request rate (monitor it!) and, in combination with your infrastructure's capabilities (load test it!), choose the right value for you. Ours is 256, which is higher than our usual peak traffic, but lower than what we could handle if we had to.
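For Tomcat, Atlassian's recommended container, this is the maxThreads attribute on the HTTP connector in server.xml. A hedged sketch (your connector's port, protocol and other attributes will differ):

```xml
<!-- server.xml: bump the worker pool; 256 mirrors the value we settled on -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="256"
           connectionTimeout="20000" />
```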

Logs & Monitoring

Paraphrasing my friend's daughter: "More data, more better!". Log as much as you can and archive the logs. You never know when you'll need to search for an exception and confirm that it started to appear 7 months ago, right after that particular Confluence upgrade.

What helped me on several occasions was having detailed access logs. I use the Apache combined log format with an extra attribute: the request duration in microseconds. This format will not only give you a good idea about your app's performance, but will also help you track various issues by logging the HTTP Referer and User-Agent headers. This can often be invaluable info!
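In Apache httpd terms, that's the stock combined format plus the %D directive (request duration in microseconds). A sketch of the configuration, with the format name and log path being my own choices:

```
# httpd.conf: combined log format extended with request duration (%D, in microseconds)
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_with_time
CustomLog /var/log/httpd/access_log combined_with_time
```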

Here is a list of the different types of logs you should be gathering: the Confluence log, the web container log, the JVM GC log and the HTTP access log.

In order to get an in-depth information about your visitors, usage patterns and content, I suggest that you integrate your Confluence with web analytics services like Google Analytics or Omniture. From their reports you can learn more about how, when and from where your users use your site.

When it comes to monitoring, your strategy should most certainly include JVM and JMX monitoring. Confluence, as well as the JVM itself, exposes quite a few interesting metrics via JMX. You should know what these values look like throughout the day or week; only then will you be able to efficiently troubleshoot issues when they occur (and they will occur!). The bare minimum includes: heap space usage, CPU usage, requests per 10 seconds, errors per 10 seconds and average request duration.
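To expose those MBeans to an external collector, you enable remote JMX on the JVM. A minimal sketch (the port number is arbitrary, and you'll want authentication and SSL turned on anywhere but a locked-down management network):

```shell
# Hypothetical flags to expose JMX for remote monitoring -- harden before real use!
JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```

With this in place you can point jconsole (or your collector of choice) at host:9010 and watch heap, threads and Confluence's own MBeans.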

We have a custom monitoring app that allows us to gather, archive and analyze these JVM/JMX metrics, but there are also open source tools of varying quality available (e.g. Munin looks promising).

The second part of our monitoring strategy is implemented as a local agent (we use Satan), that closely monitors the JVM process and the app itself by checking if it's not running out of heap space, as well as by performing http health checks. In the case that multiple failures are registered, the agent restarts the app and emails out an alert with the description of the failure. This allows us to sleep through the night without worrying that a pesky memory leak is going to take down our site at night. Fortunately, we haven’t seen any stability issues for a while now, but things were different in the past.

The last part of our monitoring strategy is implemented as remote http agents. These periodically perform http health checks from various locations on the Internet and send out alerts when an issue is detected. This gives us good visibility into potential networking issues that wouldn't be caught by a local agent. There are several third-party solutions that you could use, or you can build your own (and host it across the globe on EC2).


Database

The choice is up to you. Pick something supported by Atlassian, or else you'll likely regret it. We use MySQL 5 and for the most part we've been quite happy with it. Our db currently takes ~26GB, so be sure to account for gigabytes of db files and several times that for db backups. The biggest space sucker is attachments. Since a Confluence cluster can currently store attachments only in the database, you have to limit the attachment size, or else you'll likely end up with performance problems due to an overloaded db.

We limit attachment size to 5MB. There are several users that are not happy about that, but on the other hand, it helps people to realize that often a simple wiki page is a much better distribution medium than an OpenOffice document attached to a blank wiki page. I'd bet that our users would stick huge ISO images into our db if we allowed them to. My suggestion is to start with a low limit and increase it if there is a business justification for it. Maybe one day Confluence will support S3 or Google storage as the backend for attachments, until then, keep the size limit low.

The db should be hosted on a dedicated server with lots of RAM. I'm fortunate enough to have DBAs who take care of running the DB for me, so I don't have to worry about that part. A good DBA, many fast disks (possibly SSDs) and lots of RAM are the key ingredients of a well-performing db. Of course, make sure the latency between the Confluence nodes and the db server is minimal. Don't consider anything slower than a 1GBit network, and locate the db within the same datacenter.

I mentioned ZFS before and I'll mention it again. If you put the db files that contain your Confluence database on a dedicated ZFS dataset (think volume), you'll be able to take snapshots of your db during upgrades or on the fly (you'll have to momentarily lock the db to do that) and then revert from these snapshots instantly when you need it. This is just awesome. :-)

If you are using MySQL5, your minimal my.cnf should look like this:
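The original config listing didn't survive in this copy of the post, so here is a hedged reconstruction based on the surrounding text (the 32MB upload limit implies max_allowed_packet) and the settings commonly recommended for Confluence on MySQL; treat everything except the last line as an assumption:

```
# my.cnf -- hedged sketch; only max_allowed_packet is implied by the post itself
[mysqld]
character-set-server=utf8
default-storage-engine=INNODB
transaction-isolation=READ-COMMITTED
max_allowed_packet=32M
```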
The last setting will allow you to upload files up to 32MB in size (attachments, plugins, etc.) into the db.


Backups

Your users will hate you if you lose any of their precious data, so don't do it! The best way to avoid data loss is to have a backup strategy in place. Ours is composed of several parts.

Config files are stored in our version control system, which is, surprise surprise, being backed up.

The Confluence home directory on our Confluence nodes is backed up only just before an upgrade, via a ZFS snapshot. All the files in there (except for the config files) can be rebuilt from the database, so I don't worry about them.

The database is backed up nightly via a SQL dump, which is then backed up to tape. Additionally, just before an upgrade, we take a ZFS snapshot of the filesystem the db files reside on. This allows us to do instant rollbacks in case the upgrade fails. I experienced a situation where it took us hours to roll back from a SQL dump. It's slooow. Since then we switched to ZFS snapshots.
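A nightly dump along these lines is what I mean (the database name, user, password and paths are placeholders; --single-transaction gives a consistent InnoDB view without locking writers):

```shell
# Hypothetical nightly backup job -- adjust db name, credentials and paths.
mysqldump --single-transaction --quick \
  -u backup -p'secret' confluence \
  | gzip > /backup/confluence-$(date +%Y%m%d).sql.gz
```

Remember that a backup you've never restored is not a backup; practice the restore in your test environment.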

The database is really the master storage of all the Confluence data, so in addition to all the backups, we also run a redundant (remember "odd" and "even"?) db server that the master database is replicated to on the fly via MySQL master/slave replication. During an upgrade we now also stop the replication, so that we can use the slave right away if something happens to the master during the upgrade and we can't use ZFS to roll back.

As if that were not enough, there is one more layer that allows users to recover from user errors in a fine-grained manner: Confluence wiki page versioning and the wiki space trash. The combination of these two features enables users to undo most editing mistakes on their own, without bothering the site administrators (I'll talk more about delegation in chapter IV of the guide).

There is also a Confluence built-in backup mechanism, but it works well only for small instances. The backup process is resource intensive, generates lots of data and, if I remember correctly, breaks once you reach a certain size. Don't use it. You'll have to explicitly disable it via the Confluence Admin UI.

Prod, Test, Dev Environments

The ability to experiment in the production environment decreases as the number of users on the site grows. For this reason, you'll need to build a test environment that closely matches your production environment. Here you can practice your Confluence upgrades, or run automated tests just before a release. If you are doing Confluence core or plugin development, you'll also need a dev environment. This one can be a simplified and scaled-down version of production (e.g. you can forgo clustering) and should be conveniently located on your dev machine or server.


If you follow my advice, you should now have an infrastructure that will help you run your Confluence site in a performant, scalable and reliable way. If you found something important missing, feel free to post your suggestions as comments.

In the next chapter of this guide, we'll look at JVM tuning.

DevOps Guide to Confluence (DGC)

After working with Atlassian Confluence for 3 years, running one of the bigger public Confluence installations, I realized that there is a major lack of information about how to run Confluence on a larger scale and outside of the intranet firewalls. I'm hoping that I can improve this situation with a blog series that will describe some of the (best?) practices that I implemented while running, tweaking, patching and supporting our Confluence-based site.

Just to throw out some numbers to give context of what I mean by "relatively large":
  • # registered users: 180k+
  • # contributing users: 7k+
  • # wiki spaces: 1.5k+
  • # wiki pages: 65k+
  • # page revisions: 570k+
  • # comments: 10k+
  • # visits per month: ~300k
  • # page views per month: ~800k
  • # http requests per day: ~1m+ (includes crawlers and users with disabled JavaScript)
So I'm not talking about a huge site like Amazon or Twitter, but it's still bigger than most of the public-facing Confluence instances out there.

Some of the practices described in this guide might be an overkill for smaller deployments, so I’ll leave it up to you to pick the right ones for you and your environment.

There are many aspects that need careful consideration if you want to go relatively big, and there are even more of them when you run your site on the Internet as opposed to internally within an organization. In my blog series I'm going to focus on the areas that I consider important.

I'm not going to go into details about why to pick Confluence or why not to pick it. I really just want to focus on how to make it run smoothly and reliably while serving a relatively large audience of users (and robots).

Given that we want to run a site on the Internet, we aren't lucky enough to have well-defined maintenance windows to work with. That means any downtime will be perceived by at least a portion of your users as your failure, and the only way to avoid looking like an idiot is to keep downtime to the absolute minimum.

You are now probably thinking that a Confluence cluster will solve all your problems with scalability and reliability.

Right, that's what the marketing people tell you. Anyone who knows a thing or two about software engineering knows that there is no such thing as "unlimited scalability", and ironically a Confluence cluster can hit several bottlenecks quite quickly in certain situations. That said, a Confluence cluster, with all its pros and cons, is really the way to go big with Confluence, but you should have realistic expectations about its scalability and reliability.

What makes things even more difficult is that if you do things right, your wiki is going to take off. More users, more content, more traffic, more spam, more crawlers, more users unhappy about any kind of downtime... Growth is what you need to take into account from day one. I'm not saying that you have to start big, you just shouldn't paint yourself into a corner, and I'm going to mention some tips on how to avoid just that.

I was inspired to write up this guide after watching George Barnett’s presentation from this year’s Atlassian Summit. George made some really good points and I encourage you to watch his talk. My guide will not focus just on performance and scalability, but also on reliability, smooth day-to-day operation and more.

Continue reading: DGC I: The Infrastructure

Monday, May 31, 2010

Improving Satan and Solaris SMF

One of the features of Solaris that we heavily rely on in our production environment at work is the Service Management Facility, or SMF for short. SMF can start/stop/restart services, track dependencies between services, use that knowledge to optimize the boot process, and lots more. Often handy in a production environment is that SMF keeps track of the processes that a particular service started, and if a process dies, SMF restarts the affected service.

One gripe I have with SMF is that its process monitoring capabilities are rather simple. A process associated with a contract (service) must die in order for SMF to get the idea that something is wrong and that the service should be restarted. In practice, more often than not a process gets into a weird state that prevents it from working properly, yet it doesn't die. Failures might include excessive cpu or memory usage or even application level failures that can be detected only by interacting with the application (e.g. http health check). SMF in its current implementation is incapable of detecting these failures. And this is where Satan comes into the play.

Satan is a small ruby script that monitors a process and, following the Crash-only Software philosophy, kills it when a problem is detected. It then relies on SMF to detect the process death(s) and restart the given service. I fell in love with the simplicity of Satan (which was inspired by God) and started exploring the feasibility of using it to improve the reliability of SMF on our production servers.

Upon a code review of the script, I noticed several things that I wished were implemented differently. Here are some:
  • Satan watches processes rather than services as defined via SMF
  • One Satan instance is designed to watch many different processes for different services, which adds unnecessary complexity and lacks isolation
  • Satan is merciless (what a surprise! :-) ) and uses kill -9 without a warning
  • Satan has no test suite!!! :-( (i.e. I must presume that it doesn't work)

Thankfully the source code was out there on GitHub and licensed under BSD license so it was just a matter of a few keystrokes to fork it (open source FTW!). By the time I was done with my changes, there wasn't much of the original source code left, but oh well :-)

I'm happy to present my Satan fork to you for review and comments. The main changes I made are the following:
  • One Satan instance watches single SMF service and its one or more processes
  • The single service to monitor design allows for automatic monitoring suspension via SMF dependencies while the monitored service is being started, restarted or disabled
  • Several bugfixes around how rule failures and recoveries are counted before a service is deemed unhealthy
  • At first Satan tries to invoke svcadm restart and only if that doesn't occur within a specified grace period, it uses kill -9 to kill all processes for the given contract (service)
  • Satan now has decent RSpec test suite (more on that in my previous post)
  • Improved HTTP condition with a timeout setting
  • New JVM free heap space condition to monitor those pesky JVM memory leaks
  • Extensible design now allows for new monitoring conditions (rules) to be defined outside of the main Satan source code
As always, there are more things to improve and extend, but I'm hoping that my Satan fork will be a decent version that will allow us to keep our services running more reliably. If you have suggestions or comments, feel free to leave feedback.