SolrCloud in production - the memory

#java Aug 17, 2017 5 min Mike Kowalski

I want to share with you some of my thoughts about using Apache Solr (especially SolrCloud) in production. In one of the projects I’m participating in, we use it to provide Near-Real-Time (NRT) searching along with frequent updates to the indexes. Here are some tips that helped our project hold up in a production environment against hundreds of simultaneous requests per second.

This post focuses on the memory-related aspects of the Solr configuration. Most of the advice presented below applies to other large-heap Java applications as well.

Tip #0 - measure, don’t guess!

When performance matters, it’s important to understand how things work under the hood and how to configure them. Default settings may be a good starting point, but going to a production environment should encourage you to revisit them. Please remember: there is no one-size-fits-all configuration! Many factors can affect overall performance - your data set characteristics, the available hardware resources, and so on. Every change you make should be tested and compared with previous results to measure its real performance impact. Without testing, it’s not profiling or optimization but just hopeful guessing.

In terms of memory, an adequate tool may be a profiler (e.g. Java Mission Control, Java VisualVM) combined with some performance testing suite. Other memory-related tools like jstat may also be helpful.
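For example, a quick way to keep an eye on heap usage and GC activity of a running Solr node is jstat; the way of finding the process ID below is just an illustration and may differ in your setup:

$ # Solr process ID - the pgrep pattern assumes the standard Jetty-based start script
$ SOLR_PID=$(pgrep -f start.jar | head -n 1)
$ # heap occupancy (%) and GC counts/times, refreshed every 5 seconds
$ jstat -gcutil $SOLR_PID 5000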

Tip #1 - Increase the memory

Solr, like many other data stores, uses memory to speed up processing. In general, the more memory you have, the better. This applies both to the memory dedicated to Solr (the JVM heap) and to the memory available at the OS level. However, in both cases there are some rules that you should consider.

Lucene’s needs

Apache Lucene (which runs under the hood of Solr) has been designed to extensively cache its internal data structures to optimize performance. On the filesystem level, Lucene’s index consists of segment files which are easy for the OS to cache. For example, Linux-based systems use free memory regions for storing (caching) already read and written data. The rule of thumb is to leave the OS as much memory as you give to the Solr heap, so the machine should have at least twice as much RAM as the heap. A system running a Solr instance with an 8GB heap should therefore have at least 16GB of RAM. Don’t worry - Lucene along with your OS will take care of the free memory for you :)
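As a sketch of that rule: on a machine with 16GB of RAM, the heap passed to Solr via the standard SOLR_HEAP variable in solr.in.sh could look like this (the value is illustrative, not a recommendation):

# /etc/default/solr.in.sh
# 8GB heap on a 16GB machine - the rest stays free for the OS page cache
# holding Lucene segment files
SOLR_HEAP="8g"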

Heap size boundary

Back in the old days, when 32bit architectures were widespread, OSes (and also the JVM) were limited in the amount of memory they could address. A standard 32bit reference, corresponding to the CPU word length, can address at most 4GB of memory. Adoption of 64bit architectures increased that limit to an amazing 18.5 exabytes (1EB = 1000 PB). The downside of 64bit addressing is that every reference becomes twice as big as before, so a heap of a given size effectively holds considerably fewer objects than with 32bit addresses. To address this issue, Java introduced a trick called Compressed OOPs which, in a nutshell, enables using 32bit object references for heaps smaller than 32GB. Beyond that magical boundary, you will be using plain 64bit addresses, which has a significant impact on performance (higher CPU load) and on the amount of memory needed by the JVM. There is an excellent codecentric blog post explaining this phenomenon.
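You can check whether a given heap size still lets the JVM use compressed references with the -XX:+PrintFlagsFinal diagnostic flag; the expected results below are shown as comments and may be formatted slightly differently depending on the JVM version:

$ java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
# below the ~32GB boundary you should see: bool UseCompressedOops := true

$ java -Xmx33g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
# above the boundary the JVM silently falls back: bool UseCompressedOops := false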

Luckily, SolrCloud has some built-in mechanisms for lowering the memory requirements. A collection’s data (stored in cores) can be split across nodes using sharding. More shards mean smaller parts of the index for each node to process and a smaller JVM heap. However, operations using heavy aggregations over the full dataset may still require more than 32GB of heap and push you into the problems mentioned above.
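For illustration, this is roughly how a sharded collection could be created through the Collections API - the collection name, config name, and shard/replica counts below are made up for the example:

$ curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=products&numShards=4&replicationFactor=2&collection.configName=products_config"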

Tip #2 - Give G1 GC a try!

Working with large JVM heaps is a challenging task, especially because of the garbage collection mechanism. The standard Java garbage collectors (GCs) are tracing collectors: they find unused objects by following references from a set of root objects across the heap. Generally speaking, objects that are no longer reachable can be safely removed from the heap. When memory needs to be allocated and there is not enough of it on the heap, the GC looks for unused objects and removes them. This phase may require stopping the world (STW), which means halting all application threads for some time. The bigger the heap, the longer a full GC takes. There are some commercial pauseless GCs like Azul Zing, but let’s focus on the standard HotSpot JVM features.

The JVM offers a few GC implementations that can be selected by passing an appropriate flag on startup. My advice is: give the G1 GC a try! G1 has been designed as a low-pause GC, especially for working with large heaps (larger than 10GB). It will become the default GC starting with Java 9 (hopefully to be released this year), but it has been available since Java 7 via the -XX:+UseG1GC flag. My experience shows that G1 performs great under constant heavy load, when objects appear frequently, live very short lives, and the available memory is limited. Using G1 may also rescue you from GC overhead limit exceeded errors when STW pauses take too long on a large heap. In general, G1 cleans up unnecessary objects earlier, resulting in fewer STW phases during the JVM’s lifecycle.

Solr’s GC configuration can be changed via the /etc/default/solr.in.sh file.
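Switching to G1 could look more or less like this, assuming the standard GC_TUNE variable from solr.in.sh; the pause target and extra flags are just a starting point, not a universal recipe:

# /etc/default/solr.in.sh
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+ParallelRefProcEnabled"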

Tip #3 - Don’t swap (if you can)

Solr benefits from using RAM (directly or via OS caches) because of its fast access times compared to traditional disk storage. However, even if you have enough memory, the OS can slow you down because of swapping. How is that possible?

Let’s examine the default configuration of Ubuntu 16.04 LTS:

$ cat /proc/sys/vm/swappiness
60

According to Wikipedia:

Swappiness is a Linux kernel parameter that controls the relative weight given to swapping out of runtime memory, as opposed to dropping pages from the system page cache.

In short - the higher its value (closer to the maximum of 100), the more aggressively the OS swaps. In a production environment, where the hardware resources should always be sufficient and carefully selected, you should consider lowering the swappiness value to swap less frequently. Solr is able to manage its memory by itself in a very effective way, so swapping can only make things worse.

How do you choose the right value for this parameter? Consider the following options:

  • vm.swappiness = 10 (safe) - reduces swapping frequency
  • vm.swappiness = 1 (recommended) - allows minimal swapping
  • vm.swappiness = 0 (risky) - swaps only when approaching an out-of-memory (OOM) critical condition

In the case of my project, swappiness had a strong impact on performance test repeatability. With the default vm.swappiness=60 we observed significant fluctuations of response times (up to about 10%) between test runs. Lowering the vm.swappiness value reduced the differences between test runs almost completely.
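A minimal sketch of lowering the value on a Linux box (the file name under /etc/sysctl.d/ is just an example):

$ # apply immediately (resets on reboot)
$ sudo sysctl -w vm.swappiness=1
$ # persist across reboots
$ echo "vm.swappiness = 1" | sudo tee /etc/sysctl.d/90-swappiness.conf
$ sudo sysctl -p /etc/sysctl.d/90-swappiness.conf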

Recap

  1. Test all your configuration changes.
  2. Give more memory to the Solr JVM, but don’t exceed a 32GB heap.
  3. Give the machine at least twice as much RAM as the Solr heap.
  4. Try using the G1 GC instead of Solr’s default garbage collector.
  5. Limit or disable swapping at the OS level if you have enough memory.

Mike Kowalski

Software engineer believing in craftsmanship and the power of fresh espresso. Writing in & about Java, distributed systems, and beyond. Mikes his own opinions and bytes.