Friday, July 30, 2010

DGC IV: Confluence Upgrades

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at Confluence upgrades.

Confluence Release History and Track Record

I started using Confluence at around version 2.4.4 (released March 2007). A lot has changed since then, mostly for the better. In my early days, Atlassian was spitting out one release after another (typically 3 weeks or less apart), followed by a major release every 3 months. You can check out the full release history on their wiki.

This changed later on; recently there have been fewer minor releases and bigger major releases delivered every 3.5-4 months. Depending on your point of view, this is either good or bad. It now takes longer to get awaited features and fixes, but on the other hand the releases are more solid and better tested.

For major releases, Atlassian now usually offers an Early Access Program, which gives you access to milestone builds so that you can see and mold the new stuff before it ships.

Contrary to the past, the minor versions have been very stable lately and have contained only bugfixes, so it is generally safe to upgrade without a lot of hesitation.

The same can't be said about major releases. Even though the stability of x.y.0 releases has been dramatically improving lately, I still consider it risky for a big site to upgrade soon after a major release is announced. Wait for the first bugfix release (x.y.1), monitor the bug tracker, knowledge base and forums, and then consider the upgrade.

Having gone through many upgrades myself, I think that it is a good practice to stay up to date with your Confluence site. We have usually been at most one major version behind and frequently on the latest version, but as I mentioned avoiding the x.y.0 releases. This has been working well for us.

Staying in Touch and Getting Support

In order to know what's going on with Confluence releases, it is a good idea to subscribe to the Confluence Announcements mailing list. This is a very low traffic mailing list used for release and security announcements only.

Atlassian's tech writers usually do a good job at creating informative release notes, upgrade notes and security advisories, so be sure to read those for each release (even if you are skipping some).

There are several other channels through which people working on Confluence (plugin) development can communicate and support each other.

Despite Atlassian's claims about their legendary support, I found the official support channel rarely useful. Being a DIY guy with reasonable knowledge of Confluence internals, I usually found myself in need of more qualified support than what the support channel was created for. For this reason my occasional support tickets usually ended up being escalated to the development team instead of being handled by the support team.

On the other hand, the public issue tracker has been an invaluable source of information and a great communication tool. I wish that more of my bug reports had been addressed, but for the most part I have been receiving a reasonable amount of attention, even though sometimes I had to request escalation to have someone look at and fix issues that were critical for us.

The biggest hurdle I've experienced with bug fixes and support is that sites of our size are not the main focus for Atlassian, and they are not hesitant to be open about it. I often shake my head when I see features of little value (for us, that is, because they target small deployments and have little to do with core wiki functionality) being implemented and promoted, while major architectural issues, bugs and highly anticipated features go without attention for years. Just browse the issue tracker and you'll get the idea.

Confluence Upgrades

The core of the upgrade procedure depends on the distribution type you use (standalone, war, building from source), but fundamentally, in all cases, you need to shut down Confluence, replace the app (standalone or war) with the new version and then start it again. An automated upgrade process will take care of updating the database schema, rebuilding the search index and other tasks required for a successful upgrade.

That was the good news; the bad news is that there is a lot more work to be done in order to successfully upgrade a site with as little downtime as possible.

Dev and Test Deployments and Testing

Before you upgrade the real thing, you should first get familiar with the release by upgrading your dev and test environments.

It's often handy to invite your users to do a brief UAT (user acceptance testing) on your test instance as they might catch something that you or your automated tests haven't.

Picking the Outage Window

Based on your users' usage patterns (as easily identified by web analytics solutions like Google Analytics), you should pick a time when the usage is low. For our global site this has been early mornings at around 4:30 or 5am PT.

When it comes to picking a day, we usually stick with Tuesdays, Wednesdays or Thursdays. Nobody wants to be dealing with an issue during a weekend, when internal (infrastructure) or external (Atlassian) support is harder to get hold of.

You also want to communicate the planned outage to your users in advance, so that they are not caught by surprise by an outage announced on a day when they are releasing important documents on the wiki.

As far as outage duration goes, we usually plan for a 30min outage during a 1 hour window and most of the time have been able to bring the site back online within 30min or less.

Ready, Set, Go!

The actual deployment consists of several steps, which in our case are:

  • disabling load balancing for both nodes (which automatically redirects all requests to a maintenance page hosted elsewhere)
  • shutting down both nodes
  • disabling MySQL replication between the master and slave db
  • taking ZFS snapshot of the Confluence Home directory
  • taking ZFS snapshot of the MySQL db filesystem on the master
  • deploying the new war file
  • starting one node (while the loadbalancer still ignores it)
  • watching container and Confluence logs for any signs of problems
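The steps above can be sketched in shell. This is a pseudocode-level sketch only: the host names, init scripts, paths and ZFS dataset names are all made up for illustration.

```shell
#!/bin/sh
# Sketch of the first deployment phase; node1/node2, dbmaster/dbslave,
# the init scripts and the ZFS dataset names are hypothetical.
set -e

ssh node1 '/etc/init.d/confluence stop'        # shut down both nodes
ssh node2 '/etc/init.d/confluence stop'
ssh dbslave 'mysql -e "STOP SLAVE;"'           # pause master-slave replication
ssh node1 'zfs snapshot pool/confluence-home@pre-upgrade'
ssh dbmaster 'zfs snapshot pool/mysql@pre-upgrade'
scp confluence-new.war node1:/opt/tomcat/webapps/confluence.war
ssh node1 '/etc/init.d/confluence start'       # LB still ignores this node
ssh node1 'tail -F /opt/tomcat/logs/catalina.out'   # watch for problems
```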

At this point, we have one of our nodes up and running (hopefully :-)). We can log in with an admin account and check if everything works as expected. The next tasks include:

  • upgrading installed plugins
  • upgrading custom theme (if there is one)
  • running a bunch of automated or manual tests, just to verify that everything is ok

If things are looking good, we can allow the load balancer to start sending requests to our upgraded node. Continue watching logs and eventually deploy the war on the second node and re-enable the MySQL replication.

If any issues occur during the deployment, we can simply:

  • shut down the upgraded node
  • revert to the latest Confluence Home snapshot
  • revert to the latest MySQL db snapshot
  • redeploy the older version of war file
  • either retry the deployment, or re-enable the load balancer and work on resolving the issues outside of the production environment
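The rollback path can likewise be sketched in shell; as before, the host names, init scripts and ZFS dataset names are hypothetical.

```shell
#!/bin/sh
# Pseudocode-level rollback sketch; names and paths are illustrative only.
set -e

ssh node1 '/etc/init.d/confluence stop'
ssh node1 'zfs rollback pool/confluence-home@pre-upgrade'
ssh dbmaster '/etc/init.d/mysql stop'
ssh dbmaster 'zfs rollback pool/mysql@pre-upgrade'
ssh dbmaster '/etc/init.d/mysql start'
scp confluence-old.war node1:/opt/tomcat/webapps/confluence.war
ssh node1 '/etc/init.d/confluence start'
```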

In my experience from all the dev, test and prod deployments, we've had to roll back and redo an upgrade from scratch only once or twice. It's very unlikely that you'll have to do it, but it's better to be ready than sorry.

If you are building Confluence from patched sources and deploy your own builds frequently, then you might want to consider automating your deployments with tools like Capistrano. This will save you a lot of time and make the deployments more reliable and consistent.


If you do your homework, Confluence is quite easy to upgrade. It's unfortunate that the entire cluster must be shut down for an upgrade even between minor releases, but if you plan your deployment well, you will be able to minimize the downtime to just a few minutes outside of peak hours.

In the next chapter of this guide, we'll take a look at customizing and patching Confluence.

DGC III: Confluence Configuration and Tuning

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, we’ll have a look at Confluence configuration and tuning.

There are four ways to modify Confluence's runtime behavior:

  • Config Files in Confluence Home directory
  • Config Files in WEB-INF/classes
  • JVM Options
  • Admin UI

Config Files in Confluence Home directory

The Confluence Home directory contains one or more config files that control the runtime behavior of Confluence. The most important file is confluence.cfg.xml, which must be present in order for Confluence to start. This file can be modified by hand while Confluence is shut down, but it also gets modified by Confluence occasionally (mostly during upgrades). Your changes will be preserved as long as you made them while Confluence was offline.

Another relevant file is tangosol-coherence-override.xml which must unfortunately be used to override Confluence’s lame multicast configuration needed for cluster configuration (see below).

Lastly there is config/confluence-coherence-cache-config-clustered.xml which contains configuration of the Confluence cache. Generally you don't want to modify this file by hand. I’ll come back to talk about cache configuration later in the Admin UI section of this chapter.

In general it is advisable to be very consistent about your environment, so that you can then just have a single version of these files that you can distribute on all servers when needed. This includes the directory layout, network interface names, and so on.

A combination of the first two files will allow you to configure the cluster multicast settings (IP, port, interface and TTL).


As I mentioned, this configuration is split between two config files. confluence.cfg.xml contains confluence.cluster.* properties, which allow you to set multicast IP, interface and TTL, but not the port. Only tangosol-coherence-override.xml can do that.

The cluster IP is by default derived from a "cluster name" specified via the Admin UI or installation wizard. For some reason Atlassian believes that in an enterprise environment one can just let software pick a random IP and port to run multicast on. I don’t know of any serious datacenter where things work this way. You’ll likely want to explicitly set the IP, port, interface name and TTL, and the only way to do that is by modifying these files by hand and ignoring the "cluster name" setting in the UI. Make sure that the settings are consistent in both files.
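As a sketch (the property names below match what I've seen in Confluence 3.x; double-check them against your own confluence.cfg.xml, and the addresses and interface name are placeholders), the confluence.cfg.xml side looks something like:

```xml
<!-- confluence.cfg.xml: multicast address, interface and TTL (no port here) -->
<property name="confluence.cluster.address">231.1.2.3</property>
<property name="confluence.cluster.interface">bge0</property>
<property name="confluence.cluster.ttl">1</property>
```

while the port can only be pinned down in tangosol-coherence-override.xml:

```xml
<!-- tangosol-coherence-override.xml: hypothetical override fixing the port -->
<coherence>
  <cluster-config>
    <multicast-listener>
      <address>231.1.2.3</address>
      <port>41100</port>
      <time-to-live>1</time-to-live>
    </multicast-listener>
  </cluster-config>
</coherence>
```

Keep the address and TTL identical in both files.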

DB Connection Pool

Confluence comes with an embedded connection pool. I believe that you can use your own too (if it comes with your servlet container), but I’d suggest sticking with the embedded one since it is widely used and Atlassian runs their tests with it. The pool is configured via confluence.cfg.xml and its hibernate.c3p0.* properties. The most important property is hibernate.c3p0.max_size, which prevents the pool from opening more than a defined number of connections at a time. You want this number to be higher than your typical peak concurrent request count (are you monitoring that?), but not higher than what your db can handle. We have ours set to 300, which is double our occasional peaks. Don’t forget that in order to take advantage of these connections, you’ll likely need to also increase the worker thread count in your servlet container.
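In confluence.cfg.xml this looks something like the following (max_size matches the 300 mentioned above; the other values are illustrative, not recommendations):

```xml
<!-- confluence.cfg.xml excerpt: c3p0 pool sizing -->
<property name="hibernate.c3p0.max_size">300</property>
<property name="hibernate.c3p0.min_size">20</property>
<property name="hibernate.c3p0.timeout">30</property>
```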

DB Connection

The connection is configured via hibernate.connection.* properties in confluence.cfg.xml. Depending on your db, you might need to specify several settings for the connection to work well and grok UTF-8. For our MySQL db, we need to set the connection url to something like
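A hedged example (host, port and database name are placeholders):

```xml
<!-- the & in the query string must be written as &amp; inside the xml -->
<property name="hibernate.connection.url">jdbc:mysql://dbhost:3306/confluence?useUnicode=true&amp;characterEncoding=utf8</property>
```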

Note that if you are editing this file by hand, you must escape illegal xml characters. More info about db connection can be found in the Confluence documentation.

Config Files in WEB-INF/classes

Just a side note: if you are building Confluence from source, these files can be found at confluence/confluence-project/conf-webapp/src/main/resources/.

These files are the most cumbersome to work with, because you need to re-apply your changes after each upgrade. I'll describe how we use our automated patching machinery to do this in a future chapter of this guide. For now, let's just go over the available config files and what you can change in them.

atlassian-user.xml - used to configure user provisioning, e.g. LDAP. For more info read the docs.

confluence-init.properties - this file allows you to specify the path to the Confluence Home directory. There is a better way to set this; see the JVM Options section below.

log4j.properties - modify logging preferences. This can also be done via the UI, but AFAIK the changes are not preserved after a restart or upgrade.

seraph-config.xml - controls authentication framework. You'll likely need to modify this file if you have a custom authenticator and login page.

I should note that there are many other (usually xml) configuration files bundled with individual jars in WEB-INF/lib, but those rarely need to be modified.

JVM Options

Another way to configure certain settings is via JVM options. From the complete list of recognized options these are the ones we use:

-Dcom.atlassian.user.experimentalMapping=true - this is a critically important setting for us with 180k users. Without it, our cluster panics due to data overload (CONF-12319). Unfortunately, despite Atlassian’s claims that this experimental feature is production ready, it got broken soon after release, and then again recently, so you’ll have to patch the atlassian-user module to get it to work.

-Dconfluence.disable.peopledirectory.anonymous=true - for big public deployments the people directory is a privacy risk and generally useless for anonymous users, so we have it disabled for anonymous users.

-Dconfluence.disable.mailpolling=true - early on we decided that we don’t want people to build up mail archives on our site. While the feature is useful for small internal wikis, it’s too much of a risk with little reward to provide it on a public wiki. Unfortunately, this option only disables mail fetching. The UI for setting up mail archives will still be present in the wiki; you'll have to patch Confluence to remove it.

I didn't learn about -Dconfluence.home until recently. I would much prefer to use it than to mess with confluence-init.properties in WEB-INF/classes.
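Put together, these options end up in the container startup script. A hypothetical Tomcat setenv.sh excerpt (the confluence.home path is illustrative):

```shell
# Hypothetical setenv.sh excerpt collecting the JVM options discussed above.
CATALINA_OPTS="${CATALINA_OPTS} -Dconfluence.home=/opt/confluence-home"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.atlassian.user.experimentalMapping=true"
CATALINA_OPTS="${CATALINA_OPTS} -Dconfluence.disable.peopledirectory.anonymous=true"
CATALINA_OPTS="${CATALINA_OPTS} -Dconfluence.disable.mailpolling=true"
export CATALINA_OPTS
```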

Admin UI

Most Confluence settings can be configured via the Confluence admin interface. The downside is that this configuration is not versioned, and there is no easy way to see diffs or roll back, unless you want to hack the db and replace data from backups. With that in mind, let's look at the most important settings.

General Configuration

Server Base Url - make sure this is set up correctly, otherwise Confluence and its plugins won’t work properly.

Users see Rich Text Editor by default - we have this set to off. In the past, many RTE bugs caused headaches for our writers, especially those who did lots of editing. In Confluence 3.2 and 3.3 the editor has improved a lot, and it might be time for us to reconsider this decision.

CamelCase Links - this used to be one of THE wiki features in general a few years ago, but as wikis have matured and people started creating more and more content, the automatic linking started to cause more problems than help. We have it off.

Threaded Comments - very useful; make sure it’s on.

Remote API (XML-RPC & SOAP) - we have ours on, but I patched the remote api code to restrict access to it.

Compress HTTP Responses - OMG, please turn this on if it isn't already. It’s a major performance booster. Alternatively you might want to do the compression in your webserver, as Tim pointed out in the comments below.

JavaScript served in header - we have this on, but for better performance it should be off. Unfortunately that breaks many plugins and legacy code that uses obtrusive javascript. Since this option has been around for a while, it might be worth it to just set it to off and deal with the remaining broken things as they are identified.

User email visibility - we have this set to visible to admins only, but our power users found it to be a collaboration barrier, so I patched the code and made emails visible to our global employees group in addition to the admin group. It would be nice if Confluence allowed such a configuration out of the box.

Anonymous Access to Remote API - no sane person will leave this on. If I were in charge, I would go as far as removing it from the Confluence product.

Anti XSS Mode - This is a very handy feature. Not 100% bulletproof, but it helped to significantly decrease the number of XSS exploits in Confluence since its introduction.

Attachment Maximum Size (B) - I mentioned this one already in the first chapter when discussing the db configuration. If you are running a cluster (or think that you will eventually run it), set this to some low value. Ours is 5MB.

Connection Timeouts - these options are pretty handy when you have lots of feed macros, gadgets and other plugins that pull content from remote sites. In order to prevent worker thread pileup in your servlet container, don’t go beyond the default 10sec (which is already pretty high).

Daily Backup Administration

As I previously mentioned, this backup feature is useless for anything but tiny sites. Disable it.

Manage Referrers

Collecting referrers is ok, but don’t display them publicly if you run a site on the Internet. Otherwise you run a risk of exposing some internal only URIs that might contain confidential information.


Languages

Most of our documentation and content is written in American English, but unfortunately Atlassian doesn’t provide such a language pack. I just patch the default Australian English pack to create a US English pack. It works great and is almost no hassle to maintain.

User macros

I discourage their use in enterprise environments. The lack of versioning, automated testing and documentation makes them a nightmare to maintain. Just create Confluence plugins for everything you need.

PDF Export Language Support

This is a tricky one. It took us quite a while to find the right single font that could be used to generate PDFs in almost all languages. Finally we found soui_zhs.ttf, which is distributed with OpenOffice. It’s a huge file, but it works like a charm for all kinds of non-western languages.


Themes

For reasons I’ll discuss later, we disabled all the themes except for our custom one, which is the global and default space theme. To disable a theme you have to go to the plugins view and disable the appropriate theme plugins.

Cache Statistics

The name of this section in the UI is misleading, because not only can you view cache statistics here, but more importantly you can fully control the cache sizes via the UI. And in this case, I’m really glad that there is a UI to manage the cache config xml file, which due to its size is really hard to work with by hand. The changes you make via the UI are persisted in the Confluence Home directory and propagated throughout the cluster.

Out of all the things you can tune via the admin UI, the cache tuning will have the biggest impact on your site’s performance. Confluence ships with cache settings optimized for smaller sites, so increasing the cache size is unavoidable for larger deployments.

Tuning the cache settings is a time-consuming process, because you need to balance memory consumption against performance improvements. Usually I revisit the cache stats once a month and look for caches that are performing badly because the number of objects allowed in that particular cache is too low. The Confluence caching system is composed of many caches, all controlled via this UI.

The best indicator of an overflowing cache is when the "Effectiveness" value is low (under 70-80%) AND “Percent Used” value is high (over 80%) AND usually the “Expired” value will be relatively high compared to “Hit” value in the same cell. This means that Confluence needs to go to the DB too often, even though it could cache the data in memory if the cache was bigger.
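To make that rule of thumb concrete, here is a quick sketch with made-up numbers for a single cache (the stat names and thresholds mirror the ones above; the values themselves are invented):

```shell
# Made-up stats for one cache, mirroring what the Cache Statistics UI shows.
hits=3000; misses=2000      # effectiveness will come out at 60%: too low
used=950; max=1000          # 95% used: the cache is nearly full

effectiveness=$(( hits * 100 / (hits + misses) ))
percent_used=$(( used * 100 / max ))

echo "effectiveness: ${effectiveness}%"
echo "percent used:  ${percent_used}%"

# Low effectiveness AND high utilization: this cache is likely
# overflowing and its object limit should be increased.
if [ "$effectiveness" -lt 70 ] && [ "$percent_used" -gt 80 ]; then
  echo "candidate for a size increase"
fi
```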

If you don’t understand what all the cache names and numbers mean, don’t worry about that too much. As long as you don’t make any dramatic changes too quickly and you monitor your JVM heap usage, you can’t break anything.

As you increase the cache sizes, you’ll eventually start running out of heap space. That’s why you need to monitor the JVM and increase the -Xmx value as needed. If the number of concurrent users increases, you might also need to slightly increase the -Xmn value (see the JVM Tuning chapter for more info).

I wish Atlassian would provide better descriptions for all the available caches, because unless you know Confluence internals well, you won’t know what you are doing and that doesn’t feel good. Additionally, I’d like to see a way to limit memory usage, not the number of objects, because their size varies. Ideally, I'd really like to be able to just say "Use 3GB of memory for cache and distribute it in the most efficient way. Oh and let me know if you need more or less memory to work effectively". It would be better if Atlassian moved away from an in-process cache which in my opinion is not a good fit for Confluence. Maybe we'll get there one day.


Plugins

This section of the Admin UI is where you can install, uninstall, enable and disable plugins and their modules. There is also a Plugin Repository, which additionally allows you to install plugins from Atlassian’s remote servers or user-specified URIs. The recently released Atlassian Universal Plugin Manager will eventually replace the latter (or both?); I’m glad to see that happening.

I suggest that you disable plugins that you don’t use or don’t want your users to use as soon as possible. We disabled all the bundled themes because we wanted to provide users with only one custom theme developed and maintained by us (I’ll explain the reasoning in a future chapter). For security reasons, the html and html-include macros should in my opinion be disabled on all but family Confluence deployments. And for performance reasons, the Confluence Usage Stats plugin is not suitable for any bigger deployment.

Plugin installation is very easy to do. That’s both good and bad. The plugin framework provided by Confluence is a very sophisticated piece of software which allows you to install and uninstall plugins on the fly without any need to restart the server. Need to quickly install a fixed version of a buggy plugin without disturbing hundreds or thousands of users that are currently using your site? Done. That’s how easy it is.

On the other hand, it is tempting to install plugins just because they have cool names or promise great features. You can do that in your dev or test environment, but in production you should only install plugins that you picked after some serious consideration.

This is what I look for when deciding whether to install a plugin or not:

  • was the functionality provided by the plugin requested by a larger group of users, or is the plugin needed for site administration purposes?
  • was the plugin developed and tested in-house? If not, is it supported by Atlassian? If not, can we or some respectable Atlassian partner support it should there be problems?
  • is the plugin compatible with our Confluence version? Does it have a track record of being compatible, or of being made compatible with new Confluence versions as they were released?
  • are there no major unresolved bugs in the areas of performance, scalability, data integrity and security?
  • does the plugin have an automated test suite with good test coverage?

If you answer “yes” to all of these questions, then you may go ahead and do a trial before installing the plugin in production. Otherwise, you might provide your feedback to the plugin authors and wait to see if the pending issues get resolved before proceeding.

I don’t want to be harsh, but especially 2-3 years ago most of the plugins created for Confluence were crap. But as the platform matures and Atlassian partners get more involved, the quality of available plugins has been slowly increasing. The main issue that I see is that the existing plugins are not developed and tested with large-scale deployments in mind. Hopefully things will change as more and more deployments grow beyond small and medium sites. It’s unfortunate that even some commercial plugins suffer from the very same issues that plague plugins created by a bunch of volunteers and enthusiasts. So pick your plugins carefully, do a trial, check for unresolved bugs and existing user complaints, and then decide.

I've been reasonably active in the Atlassian development community and from these interactions, I'd like to highlight the work done by Dan Hardiker (Adaptavist) and Roberto Dominguez (Comalatech). And though I haven't worked with guys from CustomWare, they are also considered to be pretty sharp.

Be especially careful with plugins that provide new macros for the wiki content. Once you install such a plugin you won't be able to uninstall it without breaking wiki pages until all the references to that macro are removed (with tens of thousands of pages and no ability to track the references this might be a big challenge).

In general, however, try to keep the number of plugins low. It’s better for performance, and you won’t get in trouble as often when you need to upgrade Confluence while some of the plugins you use are not yet compatible with the new Confluence version.


You should now have a good idea about how to configure Confluence and where this configuration is done. In the next chapters we'll look at upgrading Confluence, patching and more.

Tuesday, July 27, 2010

DGC II: The JVM Tuning

This blog post is part of the DevOps Guide to Confluence series. In this chapter of the guide, I’ll be focusing on JVM tuning with the aim to make our Confluence perform well and operate reliably.

JDK Version

First things first: use a recent JDK. Java 5 (1.5) was EOLed 1.5 years ago; there is absolutely no reason for you to use it with Confluence. As George pointed out in his presentation, there are some significant performance gains to be made just by switching to Java 6, and you can get another performance boost if you upgrade from an older JDK 6 release to a recent one. JDK 6u21 is currently the latest release and that’s what I would pick if I were to set up a production Confluence server today.

If you are wondering about which Java VM to use, I suggest that you stick with Sun’s HotSpot (also known as Sun JDK). It’s the only VM supported by Atlassian and I really don’t see any point in using anything else at the moment.

Lastly, it goes without saying that you should use the -server JVM option to enable the server VM. This usually happens automatically on server-grade hardware, but it's safer to set it explicitly.

VM Observability

For me using JDK 6 is not just about performance, but also about observability of the VM. Java 6 contains many enhancements in the monitoring, debugging and probing arena that make JDK 5 and its VM look like an obsolete black box.

Just to mention some enhancements: the amount of interesting VM telemetry data exposed via JMX is amazing; just point VisualVM at a local Java VM to see for yourself (no restart or configuration needed). Be sure to install the VisualGC plugin for VisualVM. In order to allow remote connections, you’ll need to start the JVM with a few extra flags.
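A sketch of the relevant flags (the property names are the standard com.sun.management ones; the port number and password file path are illustrative):

```shell
# Enable remote JMX; port and password file location are illustrative.
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.port=7199"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.authenticate=true"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.ssl=true"
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote.password.file=/opt/confluence/jmxremote.password"
export CATALINA_OPTS
```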

Unless you make the port available only on some special admin-only network, you should password protect the JMX endpoint as well as use SSL. The JMX interface is very powerful and in the wrong hands could result in security issues or outages caused by inappropriate actions.

For more info about all the options available read this document.

In addition to JMX, on some platforms there is also good DTrace integration which helped me troubleshoot some Confluence issues in production without disrupting our users.
And lastly there is BTrace, which allowed me to troubleshoot a nasty hibernate issue once. It's a very handy tool that, as opposed to DTrace, works on all OSes.

I can’t stress enough how important continuous monitoring of your Confluence JVMs is. Only if you know how your JVMs and app are doing can you tell whether your tuning has any effect. George Barnett also has a set of automated performance tests which are handy for load testing your test instance and comparing results before and after you make tweaks.

Heap and Garbage Collection Must Haves

After upgrading the JDK version, the next best thing you can do is to give Confluence lots of memory. In the infrastructure chapter of the guide, I mentioned that you should prepare your HW for this, so let’s put this memory to use.

Before we set the heap size, we should decide between a 32-bit and a 64-bit JVM. A 64-bit VM is theoretically a bit slower, but allows you to create huge heaps. A 32-bit JVM has its heap size limited by the available 32-bit address space and other factors: 32bit OSes will allow you to create heaps of only up to 1.6-2.0 GB, while 64bit Solaris will allow you to create 32bit JVMs with up to a 4GB heap (more info). For anything bigger than that you have to go 64bit, which is not a big deal if your OS is 64bit already. The option to start the VM in 64bit mode is -d64; on almost all platforms the default is -d32.

Before I go into any detail, I should explain what are the main objectives of heap and garbage collection tuning for Confluence. The objectives are:
  • heap size - we need to tell the JVM how much memory to use
  • garbage collector latency - garbage collection often requires that the JVM stop your application; these stop-the-world GC pauses are often invisible, but with large heaps and under certain conditions they can become very significant (30-60+ seconds)

Additionally we should also know a thing or two about how Confluence uses the heap. The main points are:
  • Objects created by Confluence and stored on the heap generally fall into three categories:
    • short-lived objects - life-cycle of these is bound to a http request
    • medium-lived objects - usually represent cache entries with shorter TTL
    • long-lived objects - represent cache entries with a big TTL, settings and infrastructure objects (plugin framework, rendering engine, etc), with cache entries taking most of the space
  • Confluence creates lots of short-lived objects per request
  • Half or more of the heap will be used by long-lived cache objects

By combining our objectives with our knowledge of Confluence's heap profile, our tuning should focus on providing enough heap space for the cache and short-lived objects, plus some extra buffer. Given that long-lived objects will (eventually) reside in the old generation of the heap, we want to avoid promoting short-lived objects there, because otherwise we’d need to do massive garbage collections of the old generation unnecessarily. Instead, we should try to limit promotion from the young generation to only those objects that will likely belong to the long-lived category.

You’ll also need to figure out how much heap you need. Unfortunately there isn’t an easy way to find this out, apart from educated guessing and trial & error. You can also read the HW Requirements document from Atlassian, which can give you an idea about some starting points. I believe we started at 1GB, but over time went through 2GB, 3GB, 3.5GB, 4GB and 5GB, all the way to 6GB.

The Confluence heap size depends on the number of concurrent users and the amount of content you have. This is mainly because Confluence uses a massive (well, in our case it is) in-process cache that is stored on the heap. We’ll get to Confluence and cache tuning in a later chapter of this guide.

So let’s set the max heap size. This is done via -Xmx JVM option:
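In our case that looks something like this (a hypothetical setenv.sh excerpt for our 6GB heap):

```shell
# Matching -Xms and -Xmx values for a 6GB heap.
CATALINA_OPTS="${CATALINA_OPTS} -Xms6g -Xmx6g"
export CATALINA_OPTS
```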
The additional -Xms parameter says that the JVM should reserve all 6GB at startup — this is to avoid heap resizing which can be slow, especially when dealing with large heaps.

The rest of the heap settings in this post are based on a 6GB heap size; you might need to make appropriate adjustments for your total heap size.

The next JVM option is -Xmn, which specifies how much of the heap should be dedicated to the young generation (you should read up on generational gc if you don't know what I'm talking about). The default is something like 25% or 33%; I set the young generation to ~45% of the entire heap:
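For example, ~45% of a 6GB heap works out to roughly this (an illustrative value, not necessarily the author's exact figure):

```shell
# ~45% of a 6GB heap; scale this to your own heap size
-Xmn2700m
```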

Increasing the permanent generation size is also usually required given the number of classes that Confluence loads. This is done via -XX:MaxPermSize option:
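A plausible value looks like this (the size is an assumption; watch your permanent generation usage and adjust):

```shell
# example size; Confluence loads a lot of classes
-XX:MaxPermSize=512m
```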

Given that determining the right heap size for your environment is a non-trivial task for larger instances, especially if occasional memory leaks start consuming the precious memory, you always want to have as much data as possible to debug memory exhaustion issues. Aside from good monitoring (which I mentioned in the previous chapter), you should also configure your JVM to dump the heap when an OutOfMemoryError occurs. You can then analyze this heap dump for potential memory leaks.

Since we are dealing with relatively big heaps, make sure you have enough space on the disk (heap dumps for a 6GB heap usually take 2-4GB). I've had a very good experience using Eclipse Memory Analyzer to analyze these large heaps (VisualVM or jhat are not up for analyzing heaps of this size). The relevant JVM options are:
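A sketch of those options (the dump path is an assumption; point it at a disk with enough free space):

```shell
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/tmp/confluence-dumps
```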

Minimizing gc latency, so that users don't have to wait several seconds for a stop-the-world (STW) gc to finish before their pages render, is a commendable goal in itself, but the main reason you want to do this with Confluence is to avoid cluster panics.

Confluence has this “wonderful” cluster safety mechanism that is sensitive to any latency bigger than a few tens of seconds. In case a major STW gc occurs, the cluster safety code might announce cluster panic and shut down all the nodes (that’s right, all the nodes, not just the one that is misbehaving).

In order to be informed of any latencies caused by gc, you need to turn on gc logging. This is the magic combination of switches that works well for me:
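One commonly used combination on HotSpot JVMs of this era looks like the following (the log path is an assumption, and the author's exact switch list may differ):

```shell
-verbose:gc
-Xloggc:/var/log/confluence/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
```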

Unfortunately the file specified via -Xloggc will get overwritten during a JVM restart, so make sure you preserve it, either manually before a restart or automatically via some restart script. Additionally, reading the gc log is a tough job that requires some practice, and since the format varies a lot depending on your JDK version and garbage collector, I'm not going to describe it here.
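Even so, a small script can at least pull the pause durations out of the log so you can spot the outliers. This is a minimal sketch assuming ParNew/CMS-style "N secs]" log lines; the regex will need adjusting to your JDK version and collector:

```python
import re

# Sketch: extract stop-the-world pause durations (in seconds) from a
# HotSpot gc log. The "<n> secs]" line format is an assumption and
# varies by JDK version and collector -- adjust the regex to yours.
PAUSE_RE = re.compile(r"(\d+\.\d+) secs\]")

def pause_times(lines):
    """Return all pause durations (seconds) found in gc log lines."""
    pauses = []
    for line in lines:
        m = PAUSE_RE.search(line)
        if m:
            pauses.append(float(m.group(1)))
    return pauses

sample = [
    "2010-07-30T10:00:01.123+0000: [GC [ParNew: 104K->10K(256K), 0.0456780 secs]",
    "2010-07-30T10:05:02.456+0000: [Full GC [CMS: 1024K->900K(2048K), 12.3456789 secs]",
]
# pauses longer than ~10s are the ones that can trigger cluster panics
long_pauses = [p for p in pause_times(sample) if p > 10.0]
print(long_pauses)  # prints [12.3456789]
```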

Performance tweaks

The first performance-boosting JVM option I'd like to mention is -XX:+AggressiveOpts, which will turn on performance enhancements that are expected to be on by default in future JVM versions (more info).

If you are using 64bit JVM then -XX:+UseCompressedOops will make a big difference and will virtually eliminate the performance penalty you pay for switching from 32bit to 64bit JVM.

And lastly there is -XX:+DoEscapeAnalysis, which will boost performance by another few percent.

Optional Heap and GC tweaks

To slow down object promotion into the old generation, you might want to tune the size of the survivor space (a region within the young generation). We want the survivor space to be slightly bigger than the default. Additionally, I also want to keep the promotion rate down (objects that survive a given number of collections in the survivor space will be promoted to the old generation), so I use these options:
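Options along these lines are typically used for this purpose (the values are illustrative assumptions, not tested recommendations):

```shell
-XX:SurvivorRatio=6            # survivor spaces somewhat larger than the default
-XX:TargetSurvivorRatio=90     # let survivor spaces fill up more before promoting
-XX:MaxTenuringThreshold=15    # survive more young collections before promotion
```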

I also found that by using a parallel gc for the young generation and a concurrent mark-and-sweep gc for the old generation, I can practically eliminate any significant STW gc pauses. Your mileage might vary on this one, so do some testing before you use it in production. These are the settings I use:
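A typical flag combination for this collector pairing looks like the following (treat it as a sketch to test in your environment, not the author's verified settings):

```shell
-XX:+UseParNewGC               # parallel collector for the young generation
-XX:+UseConcMarkSweepGC        # concurrent mark-and-sweep for the old generation
-XX:+CMSParallelRemarkEnabled  # shortens the remark (STW) phase
```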


The information above was gathered from years of experience as well as from various sources.

Running Multiple Web Apps in one VM

Don't do that. Really. Don't. Bad things will happen if you do (OOMEs, classloading issues, etc.).


Your JVM should now be in a good shape to host Confluence and serve your clients. In the next chapter of this guide I'll write about Confluence configuration, tuning, upgrades and more.

Sunday, July 25, 2010

DGC I: The Infrastructure

In the introductory post, I mentioned that a Confluence cluster is the way to go big. Let's go through some of the main things to consider when you start preparing your infrastructure.

Confluence cluster

To build a Confluence site, you need Confluence :-). Well, make it two... as in a two-node cluster license. I recommend this for any bigger site with relatively high uptime expectations, even if you know that your amount of traffic won't require load balancing between two nodes. I often find myself in need of a restart (e.g. during a patch deployment), and with a cluster you can restart one node at a time and your users won't even know about it.


My team operates other big sites, and from all of them we expect some level of redundancy. Typically we split everything between an "odd" string (composed of hosts with hostnames ending in an odd number) and an "even" string, and this applies to Confluence nodes as well (that's why you need a two-node license). Each string is composed of a border firewall, load balancer, switches and the actual servers (web/application/database/what have you), and the two strings can either share the load or work as primary&standby, depending on your application needs and network configuration.

This kind of splitting allows us to take half of our datacenter offline for maintenance when needed, and allows us to absorb the potential failure of any hardware or software within one string without any perceivable interruption of service.

Sure, you can make things even more redundant by adding a third or fourth string, but none of our apps requires that level of redundancy, and the cost and complexity of getting there is therefore hard to justify.

There are two important things that matter when it comes to setting up the network, and both can make or break your Confluence clustering.
  1. The latency between the two nodes should be minimal. Ideally they should be just one hop apart and on a fast network (1GBit). There will be a lot of communication going on between your Confluence nodes, and you want it to happen as quickly as possible, otherwise the cluster synchronization will drag down your overall cluster performance. Don't even think about putting the two nodes into different datacenters, let alone on different continents. Confluence clustering was not built for that type of scenario.
  2. Make absolutely sure that your network (mainly switches, OS, firewall) supports multicast.

The best way to check that multicast works reliably is to use the multicast test tool that is bundled with Coherence (a library that is bundled with Confluence). To run it, just run the following command on all nodes and check that all packets are being delivered and no duplicates are present:
java -cp $CONFLUENCE_PATH/WEB-INF/lib/coherence-x.y.jar:$CONFLUENCE_PATH/WEB-INF/lib/tangosol-x.y.jar \
  com.tangosol.net.MulticastTest \
  -ttl 1 \
  -local $NODE_IP

In our environment, it took us months of waiting for the right patch from our network gear vendor and some OS patching to make things totally stable. Fortunately, our ops guys eventually found the magic combination of patches and settings, and then we were good to go.

Our site uses both http and https protocols for content delivery, and since we already had an SSL accelerator available in our datacenter, we utilized it for Confluence. With current hardware, though, I don't think hardware SSL acceleration is very important these days.

Another noteworthy suggestion I have for your network is the load balancer configuration. We started off with session-affinity-based load balancing, but at one point people started to notice that they sometimes see different content than their colleagues. This was due to a delay in the propagation of changes throughout the cluster. Usually the delay is unnoticeable, but for some reason that's not always the case. I haven't investigated this issue further and just switched to primary&standby load balancing, which has been working great for us ever since. This of course will work only if each of your nodes can handle all the traffic on its own, but you can trust me that it solves all the issues with users who don't believe in eventual consistency :-).

Hopefully your load balancer will perform healthchecks against your nodes. The /errors.jsp path is the ideal target for these healthchecks, because it returns HTTP 200 only if everything is ok with the node.

When it comes to firewall rules (you have a firewall, right?), you shouldn't allow incoming connections from public networks directly to your servers; all the public traffic should go through the load balancer only. As for outbound connections, you should allow your servers to connect to any public server on ports 80 (HTTP) and 443 (HTTPS); these connections are needed for feed retrieval, OpenSocial gadgets and plugin installation.

Hardware (cpu, memory, disk)

Update: I came across this HW requirements document from Atlassian, which is helpful especially for smaller instances.

When you are making your hardware choices, I suggest you stick with a server that is relatively recent and has decent single-threaded performance, yet offers multicore parallelism. Confluence does a relatively large amount of number crunching per http request, so both single-threaded and multi-threaded horsepower are needed to get good results. Additionally, Confluence's boot process is not the best one, so with poor single-threaded throughput you'll end up waiting minutes for the app to start (at one point I did!).

Confluence loves memory! So don't be stingy. RAM is cheap these days, so get a few gigs that will be dedicated just to Confluence. My instance uses a 6GB JVM heap, and with the additional non-heap memory consumption, OS overhead and an extra buffer, I allocated 10GB of RAM for each Confluence node. You will likely start with much lower memory requirements, but as your instance grows, so will the memory requirements - keep that in mind.

When it comes to disk and disk space, you have to realize three things.
  1. Confluence stores all of its persistent data in a (hopefully remote) database.
  2. Confluence relies on fetching data from its Lucene index, stored on the local file system (each node has its own copy). This index is built from the db contents and can be rebuilt at any time.
  3. Attachments, which can represent a huge chunk of your persistent data, will be stored in the database too. Confluence won't let you use e.g. a shared filesystem when you are running a cluster.

All of this means that you will need a few (dozen) gigabytes of local disk space that can be accessed reasonably quickly. An SSD will likely not buy you much here; use it for your DB instead! Server-grade hard drives configured in redundant software or hardware RAID should be sufficient for your web/application server (you can skip the RAID if you can rebuild the server really quickly after a disk failure).

OS & Filesystem

The choice of OS is often a religious one, but I think it's more important that you are comfortable administering your OS than anything else. We use Solaris 10, or more recently OpenSolaris, everywhere. OpenSolaris especially is superior to most (all?) of the OSes out there (heh, now I'm being religious), but it will be worth cat's pee to you if you have no clue about how to work with it and don't have the time or willingness to learn a lot of cool stuff about the OS. In general, I'd say that any 64bit *nix OS should be suitable as long as you know how to use it. You'll want a 64bit OS so that you can load the box with loads of RAM and create a big JVM heap once you need it.

One nice thing that comes with Solaris and OpenSolaris (and BSD) is ZFS file system. If you don't know much about it, I suggest that you read a bit about it. ZFS can make your backup strategy a lot simpler and allows you to revert from a failed upgrade in a matter of seconds. I'm not exaggerating, it happened to me several times. Hopefully Btrfs will soon be production ready for Linux distros and will offer comparable conveniences. If you can't use either of these, you'll have to suck it up and deal with it. I don't envy you...


During the last 3 years, we tried several combinations of deployment configurations for our Confluence site. These include Solaris 10 servers shared by several apps, Solaris 10 Zones (one zone per app) and OpenSolaris with Xen virtualization. Xen and OpenSolaris is what we currently use. It works well, but if I were to make a decision today, I would probably go with OpenSolaris and Zones. This combination gives you the best stability, performance, resource virtualization and application isolation.

In any case, many people ask what the performance penalty is for going virtualized. My answer is that it depends on your application, but for a webapp, more likely than not, it isn't going to be the main cause of your performance problems. Decent hardware is going to make the virtualization penalty almost invisible and at the same time will give you flexibility when allocating resources in your data center. Just to give you a rough idea, the overhead for Xen is 10-30%; for Solaris Zones it's a lot less.

Web Container

Atlassian recommends using Tomcat as the web container for Confluence. We could again spend a lot of time fighting a religious battle here, but I'm going to avoid that. If Tomcat works for you and you don't find it lacking features that make enterprise deployments and operation easier, then good for you. You will most likely want to front it with the Apache webserver or something similar though.

I've been using Sun Web Server 7 in my production environment and have been quite happy with it. Another excellent choice is GlassFish v2.1 or v3, which I've been using for Confluence on my Mac. Unfortunately, Confluence doesn't adhere to the Servlet spec in some places, so you'll have to patch it to get it to run with GF v3. GlassFish v2.1 is not affected, but suffers from Xalan class clashes; to fix that, you need to put Confluence's xalan-x.y.z.jar into $GLASSFISH_HOME/domains/$YOURDOMAIN/lib/. Otherwise everything works as expected.

For a bigger site, you'll likely need to increase the worker thread count in your servlet container. Check your container's documentation to see what the default is and how to increase it. You should also know what your peak concurrent request rate is (monitor it!), and in combination with your infrastructure's capabilities (load test it!) choose the right value for you. Ours is 256, which is higher than our usual peak traffic but lower than what we could handle if we had to.
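If your container is Tomcat, the knob is the maxThreads attribute on the HTTP connector in server.xml. A sketch using the 256 value mentioned above (all other attributes are illustrative defaults):

```xml
<!-- server.xml sketch; only maxThreads="256" comes from the text above -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="256"
           connectionTimeout="20000" />
```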

Logs & Monitoring

Paraphrasing my friend's daughter: "More data, more better!". Log as much as you can and archive the logs. You never know when you'll need to search for an exception and confirm that it started to appear 7 months ago, right after that particular Confluence upgrade.

What has helped me on several occasions is having detailed access logs. I use the Apache combined log format with an extra attribute: the request duration in microseconds. This format will not only give you a good idea about your app's performance, but will also help you track various issues by logging the HTTP Referer and User-Agent headers. This can often be invaluable info!
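In Apache httpd terms, such a format can be declared with %D, which logs the request duration in microseconds (the format name and log path here are assumptions):

```apache
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_usec
CustomLog logs/access_log combined_usec
```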

Here is a list of different types of logs you should be gathering: confluence log, web container log, jvm gc log and http access log.

In order to get in-depth information about your visitors, usage patterns and content, I suggest that you integrate your Confluence with a web analytics service like Google Analytics or Omniture. From their reports you can learn more about how, when and from where your users use your site.

When it comes to monitoring, your strategy should most certainly include JVM and JMX monitoring. Both Confluence and the JVM expose quite a few interesting metrics via JMX. You should know what these values look like throughout the day or week; only then will you be able to efficiently troubleshoot issues when they occur (and they will occur!). The bare minimum includes: heap space usage, cpu usage, requests per 10 seconds, errors per 10 seconds, and average request duration.
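To expose those metrics to a remote monitoring tool, you can enable remote JMX on the Confluence JVM with system properties like these (the port and security settings are assumptions; lock them down in production):

```shell
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=false
```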

We have a custom monitoring app that allows us to gather, archive and analyze these JVM/JMX metrics, but there are also some open source tools of various quality available (e.g. Munin looks promising).

The second part of our monitoring strategy is implemented as a local agent (we use Satan) that closely monitors the JVM process and the app itself, checking that it's not running out of heap space as well as performing http health checks. In case multiple failures are registered, the agent restarts the app and emails out an alert with a description of the failure. This allows us to sleep through the night without worrying that a pesky memory leak is going to take down our site. Fortunately, we haven't seen any stability issues for a while now, but things were different in the past.

The last part of our monitoring strategy is implemented as remote http agents. These periodically perform http health checks from various locations on the Internet and send out alerts when an issue is detected. This gives us good visibility into potential networking issues that wouldn't be caught by a local agent. There are several third-party solutions that you could use, or you can build your own (and host it across the globe on EC2).


Database

The choice is up to you. Pick something supported by Atlassian, or else you'll likely regret it. We use MySQL 5, and for the most part we've been quite happy with it. Our db currently takes ~26GB, so be sure to account for gigabytes of db files and several times that for db backups. The biggest space sucker is attachments. Since a Confluence cluster can currently store attachments only in the database, you have to limit the attachment size, or else you'll likely end up with performance problems due to an overloaded db.

We limit the attachment size to 5MB. There are several users that are not happy about that, but on the other hand, it helps people realize that often a simple wiki page is a much better distribution medium than an OpenOffice document attached to a blank wiki page. I'd bet that our users would stick huge ISO images into our db if we allowed them to. My suggestion is to start with a low limit and increase it only if there is a business justification for it. Maybe one day Confluence will support S3 or Google Storage as the backend for attachments; until then, keep the size limit low.

The db should be hosted on a dedicated server with lots of RAM. I'm fortunate enough to have DBAs that take care of running the DB for me, so I don't have to worry about that part. A good DBA, many fast disks (possibly SSD) and lots of RAM are the key ingredients of a well performing db. Of course, make sure the latency between the Confluence nodes and the db server is minimal. You shouldn't consider anything slower than a 1GBit network, and you should locate the db within the same datacenter.

I mentioned ZFS before and I'll mention it again. If you put the db files that contain your Confluence database on a dedicated ZFS dataset (think volume), you'll be able to take snapshots of your db during upgrades or on the fly (you'll have to momentarily lock the db to do that) and then revert from these snapshots instantly when you need it. This is just awesome. :-)
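A sketch of that flow, assuming the db files live on a dataset named tank/confluence-db (the dataset name and snapshot label are made up for illustration):

```shell
# with the db momentarily locked, snapshot the dataset holding the db files
zfs snapshot tank/confluence-db@pre-upgrade
# ...run the upgrade; if it fails, revert instantly:
zfs rollback tank/confluence-db@pre-upgrade
```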

If you are using MySQL5, your minimal my.cnf should look like this:
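The exact contents depend on your setup; a minimal sketch consistent with a 32MB upload limit might look like this (everything except max_allowed_packet is an illustrative assumption):

```ini
[mysqld]
default-storage-engine  = InnoDB
character-set-server    = utf8
innodb_buffer_pool_size = 2G
max_allowed_packet      = 32M
```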
The last setting will allow you to upload up to 32MB large files (attachments, plugins, etc) into the db.


Backups

Your users will hate you if you lose any of their precious data, so don't do it! The best way to avoid any data loss is to have a backup strategy in place. Ours is composed of several parts.

Config files are stored in our version control system, which is, surprise surprise, being backed up.

Confluence home directory on our Confluence nodes is being backed up only just before the upgrade via a ZFS snapshot. All the files in there (except for the config files) can be rebuilt from the database, so I don’t worry about them.

The database is being backed up nightly via a SQL dump, which is then backed up on a tape. Additionally, just before an upgrade, we take a ZFS snapshot of the filesystem the db files reside on. This allows us to do instant rollbacks in case the upgrade fails. I experienced a situation where it took us hours to roll back from a SQL dump. It’s slooow. Since then we switched to ZFS snapshots.

The database is really the master storage of all the Confluence data, so in addition to all the backups, we also run a redundant (remember "odd" and "even"?) db server, to which the master database is replicated on the fly via MySQL master/slave replication. During an upgrade we now also stop the replication, so that we can use the slave right away if something happens to the master during the upgrade and we can't use ZFS to roll back.

As if that was not enough, there is one more layer that allows users to recover from user errors in a fine-grained manner. It’s Confluence wiki page versioning and wiki space trash. The combination of these two features, enables users to undo most of the editing mistakes on their own, without bothering site administrators (I’ll talk more about delegation in chapter IV of the guide).

There is also a Confluence built-in backup mechanism, but it works well only for small instances. This backup process is resource intensive, generates lots of data and, if I remember correctly, breaks once you reach a certain size. Don't use it. You'll have to explicitly disable it via the Confluence Admin UI.

Prod, Test, Dev Environments

The ability to experiment in the production environment will decrease with the increase of users using the site. For this reason, you'll need to build a Test environment that closely matches your production environment. Here you can practice your Confluence upgrade, or run automated tests just before a release. If you are doing Confluence core or plugin development, you'll also need a dev environment. This one can be a simplified and scaled down version of production (e.g. you can forgo clustering) and should be conveniently located on your dev machine or server.


If you follow my advice, you should now have an infrastructure that will help you run your Confluence site in a performant, scalable and reliable way. If you find something important missing, feel free to post your suggestions as comments.

In the next chapter of this guide we'll look at the JVM tuning.

DevOps Guide to Confluence (DGC)

After working with Atlassian Confluence for 3 years, running one of the bigger public Confluence installations, I realized that there is a major lack of information about how to run Confluence on a larger scale and outside of the intranet firewalls. I'm hoping that I can improve this situation with a blog series that will describe some of the (best?) practices that I implemented while running, tweaking, patching and supporting our Confluence-based site.

Just to throw out some numbers to give context of what I mean by "relatively large":
  • # registered users: 180k+
  • # contributing users: 7k+
  • # wiki spaces: 1.5k+
  • # wiki pages: 65k+
  • # page revisions: 570k+
  • # comments: 10k+
  • # visits per month: ~300k
  • # page views per month: ~800k
  • # http requests per day: ~1m+ (includes crawlers and users with disabled javascript)
So I'm not talking about a huge site like Amazon or Twitter, but still bigger than most of the public-facing Confluence instances out there.

Some of the practices described in this guide might be an overkill for smaller deployments, so I’ll leave it up to you to pick the right ones for you and your environment.

There are many aspects that need careful consideration if you want to go relatively big, and there are even more of them when you run your site on the Internet as opposed to running it internally within an organization. In my blog series I'm going to focus on the areas that I consider important.

I'm not going to go into details about why to pick Confluence or why not to pick it. I really just want to focus on how to make it run smoothly and reliably while serving a relatively large audience of users (and robots).

Given that we want to run a site on the Internet, we aren't lucky enough to have well defined maintenance windows that we can work with. Any downtime will be perceived by at least a portion of your users as your failure, and the only way you can avoid looking like an idiot is to keep the downtime to the absolute minimum.

You are now probably thinking that a Confluence cluster will solve all your problems with scalability and reliability.

Right, that's what the marketing people tell you. Anyone who knows a thing or two about software engineering knows that there is no such thing as "unlimited scalability", and ironically a Confluence cluster can hit several bottlenecks quite quickly in certain situations. That said, a Confluence cluster, with all its pros and cons, is really the way to go big with Confluence, but you should have realistic expectations about its scalability and reliability.

The fact that makes things even more difficult is that if you do things right, your wiki is going to take off. More users, more content, more traffic, more spam, more crawlers, more users unhappy about any kind of downtime... Growth is what you need to take into account from day one. I'm not saying that you have to start big, you just shouldn't paint yourself into a corner, and I'm going to mention some tips on how to avoid just that.

I was inspired to write up this guide after watching George Barnett’s presentation from this year’s Atlassian Summit. George made some really good points and I encourage you to watch his talk. My guide will not focus just on performance and scalability, but also on reliability, smooth day-to-day operation and more.

Continue reading: DGC I: The Infrastructure