Troubleshooting Memory Issues in Java Applications


Key Takeaways

  • Troubleshooting memory problems can be tricky, but the right approach and a proper set of tools can simplify the process substantially.
  • Several kinds of OutOfMemoryError messages can be reported by the Java HotSpot JVM. It is important to understand these error messages clearly, and to have a wide range of diagnostic and troubleshooting tools in our toolkit to diagnose and root out these problems.
  • In this article we cover a broad range of diagnostic tools that can be very useful in troubleshooting memory issues, including:
    • HeapDumpOnOutOfMemoryError and PrintClassHistogram JVM Options
    • Eclipse MAT
    • Java VisualVM
    • JConsole
    • jhat
    • YourKit
    • jmap
    • jcmd
    • Java Flight Recorder and Java Mission Control
    • GC Logs
    • NMT
    • Native Memory Leak Detection Tools such as dbx, libumem, valgrind, purify etc.

For a Java process, there are several memory pools or spaces - Java heap, Metaspace, PermGen (in versions prior to Java 8) and native heap.

Each of these memory pools might encounter its own set of memory problems, for example abnormal memory growth, application slowness, or memory leaks, any of which can eventually manifest as an OutOfMemoryError for that space.

In this article we will try to understand what these OutOfMemoryError messages mean, which diagnostic data we should collect to diagnose and troubleshoot these issues, and what tooling we can use to collect and analyze that data in order to resolve these memory problems. This article focuses on how these memory issues can be handled and prevented in production environments.

The OutOfMemoryError message reported by the Java HotSpot VM gives a clear indication as to which memory space is depleting. Let’s take a look at these various OutOfMemoryError messages in detail, understand them and explore what their likely causes might be, and how we can troubleshoot and resolve them.

OutOfMemoryError: Java Heap Space

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at com.abc.ABCParser.dump(ABCParser.java:23)
at com.abc.ABCParser.main(ABCParser.java:59)

This message means that the JVM does not have any free space left in the Java heap, and it cannot continue with the program execution. The most common cause of such errors is that the specified maximum Java heap size is not sufficient to accommodate the full set of live objects. One simple way to check if the Java heap is large enough to contain all of the live objects in the JVM is to inspect the GC logs.

688995.775: [Full GC [PSYoungGen: 46400K->0K(471552K)] [ParOldGen: 1002121K->304673K(1036288K)] 1048521K->304673K(1507840K) [PSPermGen: 253230K->253230K(1048576K)], 0.3402350 secs] [Times: user=1.48 sys=0.00, real=0.34 secs]

We can see from the above log entry that after the Full GC, the heap occupancy drops from 1GB (1048521K) to 305MB (304673K), which means that 1.5GB (1507840K) allocated to the heap is large enough to contain the live data set.

Now, let's take a look at the following GC activity:

20.343: [Full GC (Ergonomics) [PSYoungGen: 12799K->12799K(14848K)] [ParOldGen: 33905K->33905K(34304K)] 46705K->46705K(49152K), [Metaspace: 2921K->2921K(1056768K)], 0.4595734 secs] [Times: user=1.17 sys=0.00, real=0.46 secs]
...... <snip> several Full GCs </snip> ......
22.640: [Full GC (Ergonomics) [PSYoungGen: 12799K->12799K(14848K)] [ParOldGen: 33911K->33911K(34304K)] 46711K->46711K(49152K), [Metaspace: 2921K->2921K(1056768K)], 0.4648764 secs] [Times: user=1.11 sys=0.00, real=0.46 secs]
23.108: [Full GC (Ergonomics) [PSYoungGen: 12799K->12799K(14848K)] [ParOldGen: 33913K->33913K(34304K)] 46713K->46713K(49152K), [Metaspace: 2921K->2921K(1056768K)], 0.4380009 secs] [Times: user=1.05 sys=0.00, real=0.44 secs]
23.550: [Full GC (Ergonomics) [PSYoungGen: 12799K->12799K(14848K)] [ParOldGen: 33914K->33914K(34304K)] 46714K->46714K(49152K), [Metaspace: 2921K->2921K(1056768K)], 0.4767477 secs] [Times: user=1.15 sys=0.00, real=0.48 secs]
24.029: [Full GC (Ergonomics) [PSYoungGen: 12799K->12799K(14848K)] [ParOldGen: 33915K->33915K(34304K)] 46715K->46715K(49152K), [Metaspace: 2921K->2921K(1056768K)], 0.4191135 secs] [Times: user=1.12 sys=0.00, real=0.42 secs]
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at oom.main(oom.java:15)

From the frequency of the “Full GC” messages in this log, we can see that there are several back-to-back Full GCs attempting to reclaim space in the Java heap, but the heap is completely full and the GCs are not able to free up any space. These frequent Full GCs negatively impact application performance, slowing it to a crawl. This example suggests that the heap requirement of the application is greater than the specified Java heap size. Increasing the heap size will help avoid these Full GCs and circumvent the OutOfMemoryError. The Java heap size can be increased using the -Xmx JVM option:

java -Xmx1024m -Xms1024m Test

The OutOfMemoryError can also be an indication of a memory leak in the application. Memory leaks are often very hard to detect, especially slow ones. A memory leak occurs when an application unintentionally holds references to objects in the heap, preventing them from being garbage collected. These unintentionally held objects can accumulate in the heap over time, eventually filling up the entire Java heap space, causing frequent garbage collections and ultimately terminating the program with an OutOfMemoryError.
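As an illustration, here is a minimal sketch of such an unintentional retention (the class and method names are hypothetical); the static list keeps every buffer reachable, so the live-set grows until the heap is exhausted:

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Objects added here stay strongly reachable for the lifetime of the class.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024];  // 1 MB of per-request work data
        // ... use the buffer ...
        CACHE.add(buffer);                      // unintentionally retained forever
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest();                    // heap usage grows until OutOfMemoryError
        }
    }
}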

Please note that it is always a good idea to enable GC logging, even in production environments, to facilitate detection and troubleshooting of memory issues as they occur. The following options can be used to turn on the GC logging:

-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps
-Xloggc:<gc log file>

The first step in detecting memory leaks is to monitor the live-set of the application, that is, the amount of Java heap being used after a full GC. If the live-set is increasing over time even after the application has reached a stable state and is under a stable load, that could indicate a memory leak. The heap usage can be monitored with tools including Java VisualVM, Java Mission Control, and JConsole, and can also be extracted from the GC logs.

Java heap: Collection of Diagnostic Data

In this section, we will explore which diagnostic data should be collected to troubleshoot OutOfMemoryErrors in the Java heap, and the tools that can help us collect the required diagnostic data.

Heap Dumps

Heap dumps are the most important data that we can collect when troubleshooting memory leaks. Heap dumps can be collected using jcmd, jmap, JConsole, and the HeapDumpOnOutOfMemoryError JVM option, as shown below.

  • jcmd <process id/main class> GC.heap_dump filename=heapdump.dmp
  • jmap -dump:format=b,file=snapshot.jmap pid
  • JConsole utility, using the HotSpotDiagnostic MBean
  • -XX:+HeapDumpOnOutOfMemoryError
java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xmx20m -XX:+HeapDumpOnOutOfMemoryError oom 
0.402: [GC (Allocation Failure) [PSYoungGen: 5564K->489K(6144K)] 5564K->3944K(19968K), 0.0196154 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] 
0.435: [GC (Allocation Failure) [PSYoungGen: 6000K->496K(6144K)] 9456K->8729K(19968K), 0.0257773 secs] [Times: user=0.05 sys=0.00, real=0.03 secs] 
0.469: [GC (Allocation Failure) [PSYoungGen: 5760K->512K(6144K)] 13994K->13965K(19968K), 0.0282133 secs] [Times: user=0.05 sys=0.00, real=0.03 secs] 
0.499: [Full GC (Ergonomics) [PSYoungGen: 512K->0K(6144K)] [ParOldGen: 13453K->12173K(13824K)] 13965K->12173K(19968K), [Metaspace: 2922K->2922K(1056768K)], 0.6941054 secs] [Times: user=1.45 sys=0.00, real=0.69 secs] 
1.205: [Full GC (Ergonomics) [PSYoungGen: 5632K->2559K(6144K)] [ParOldGen: 12173K->13369K(13824K)] 17805K->15929K(19968K), [Metaspace: 2922K->2922K(1056768K)], 0.3933345 secs] [Times: user=0.69 sys=0.00, real=0.39 secs] 
1.606: [Full GC (Ergonomics) [PSYoungGen: 4773K->4743K(6144K)] [ParOldGen: 13369K->13369K(13824K)] 18143K->18113K(19968K), [Metaspace: 2922K->2922K(1056768K)], 0.3009828 secs] [Times: user=0.72 sys=0.00, real=0.30 secs] 
1.911: [Full GC (Allocation Failure) [PSYoungGen: 4743K->4743K(6144K)] [ParOldGen: 13369K->13357K(13824K)] 18113K->18101K(19968K), [Metaspace: 2922K->2922K(1056768K)], 0.6486744 secs] [Times: user=1.43 sys=0.00, real=0.65 secs] 
java.lang.OutOfMemoryError: Java heap space 
Dumping heap to java_pid26504.hprof ... 
Heap dump file created [30451751 bytes in 0.510 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

 at java.util.Arrays.copyOf(Arrays.java:3210)
 at java.util.Arrays.copyOf(Arrays.java:3181)
 at java.util.ArrayList.grow(ArrayList.java:261)
 at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:235)
 at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:227)
 at java.util.ArrayList.add(ArrayList.java:458)
 at oom.main(oom.java:14)

Please note that the parallel garbage collector can continuously attempt to free up room on the heap by invoking frequent back-to-back Full GCs, even when the gains of that effort are small and the heap is almost full. This impacts the performance of the application and may delay the recovery of the system. This situation can be avoided by tuning the values of -XX:GCTimeLimit and -XX:GCHeapFreeLimit.

GCTimeLimit sets an upper limit on the amount of time that GCs can spend, as a percentage of the total time. Its default value is 98%; decreasing this value reduces the amount of time that is allowed to be spent in garbage collections. GCHeapFreeLimit sets a lower limit on the amount of space that should be free after the garbage collections, as a percentage of the maximum heap. Its default value is 2%; increasing this value means that more heap space should get reclaimed by the GCs. An OutOfMemoryError is thrown after a Full GC if the previous 5 consecutive GCs (whether minor or full) were not able to keep the GC cost below GCTimeLimit and were not able to free up GCHeapFreeLimit of space.

For example, setting GCHeapFreeLimit to 8 percent can help the garbage collector not get stuck in a loop of invoking back-to-back Full GCs when it is not able to reclaim at least 8% of the heap and is exceeding GCTimeLimit for 5 consecutive GCs.
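For example, a hedged illustration of tightening these limits on the command line (the class name Test is just a placeholder):

java -Xmx1024m -XX:GCTimeLimit=95 -XX:GCHeapFreeLimit=8 Test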

Heap Histograms

Sometimes we need to get a quick glimpse of what is growing in our heap, bypassing the long route of collecting and analyzing heap dumps using memory analysis tools. Heap histograms can give us a quick view of the objects present in our heap, and comparing these histograms can help us find the top growers in our Java heap.

  • -XX:+PrintClassHistogram and Control+Break
  • jcmd <process id/main class> GC.class_histogram filename=Myheaphistogram
  • jmap -histo pid
  • jmap -histo <java> core_file

A heap histogram might show, for example, that String, Double, Integer and Object[] instances are occupying the most space in the Java heap and are growing in number over time, indicating that these could potentially be causing a memory leak.
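Assuming jcmd is available on the machine, one simple way to find the top growers is to capture two histograms some time apart and compare them (the file names here are just examples):

jcmd <pid> GC.class_histogram > histo_before.txt
# ... let the application run while the suspected growth happens ...
jcmd <pid> GC.class_histogram > histo_after.txt
diff histo_before.txt histo_after.txt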

Java Flight Recordings

Flight Recordings with heap statistics enabled can be really helpful in troubleshooting a memory leak by showing us the heap objects and the top growers in the heap over time. To enable heap statistics, you can use Java Mission Control and enable ‘Heap Statistics’ by going to ‘Window->Flight Recording Template Manager’.

Alternatively, edit the .jfc files manually and set heap-statistics-enabled to true:

<event path="vm/gc/detailed/object_count">
    <setting name="enabled" control="heap-statistics-enabled">true</setting>
    <setting name="period">everyChunk</setting>
</event>

The flight recordings can then be created using any of the following ways:

  • JVM Flight Recorder options, e.g.

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
-XX:StartFlightRecording=delay=20s,duration=60s,name=MyRecording,filename=C:\TEMP\myrecording.jfr,settings=profile

  • Java Diagnostic Command: jcmd

jcmd 7060 JFR.start name=MyRecording settings=profile delay=20s duration=2m filename=c:\TEMP\myrecording.jfr

  • Java Mission Control

The Flight Recordings can take us as far as determining the type of objects that are leaking, but to find out what is causing those objects to leak, we need heap dumps.

Java Heap: Analysis of Diagnostic Data

Heap Dump Analysis

Heap dumps can be analyzed using the following tools:

  • Eclipse MAT (Memory Analyzer Tool) - a community-developed tool for analyzing heap dumps. Some of the amazing features that it offers are:
    • Leak Suspects: it can inspect the heap dump for leak suspects, reporting the objects that are suspiciously retaining a large amount of heap.
    • Histograms: lists the number of objects per class, and the shallow as well as retained heap held by those objects. The objects in the histogram can be easily sorted or filtered using regular expressions, which helps in zooming in and concentrating on the objects that we suspect could be leaking. It also has the capability to compare histograms from two heap dumps, and can show the difference in the number of instances for each class. This helps in finding the top growers in the Java heap, which can then be inspected further to determine the roots holding on to those objects in the heap.
    • Unreachable objects: an amazing capability of MAT is that it allows us to include or exclude the unreachable/dead objects in its working set of objects. If we don't want to look at the objects that are unreachable and eligible for collection in the next GC cycle, and are interested only in the reachable objects, then this feature comes in very handy.
    • Duplicate Classes: shows duplicate classes loaded by multiple classloaders.
    • Path to GC roots: can show the reference chains to the GC roots (objects kept alive by the JVM itself) responsible for keeping the objects in the heap
    • OQL: we can use Object Query Language to explore the objects in the heap dumps. Its enriched OQL facilitates writing sophisticated queries that help dig deep into the heap dumps.
  • Java VisualVM - an all-in-one tool for monitoring, profiling and troubleshooting Java applications. It is available as a JDK tool and can also be downloaded from GitHub. One of the features it offers is heap dump analysis. It can create heap dumps of the application being monitored, and can also load and parse them. From the heap dumps, it can show class histograms and instances of a class, and can also help find the GC roots of particular instances.
  • jhat - a command line tool (in the <jdk>/bin folder) that provides heap dump analysis by letting us browse the objects in the heap dump using any web browser. By default the web server is started at port 7000. jhat supports a wide range of pre-designed queries and the Object Query Language (OQL) to explore the objects in the heap dumps.
  • JOverflow plugin for Java Mission Control - an experimental plugin that enables Java Mission Control to perform simple heap dump analysis and report where memory might be getting wasted.
  • YourKit - a commercial Java profiler with a heap dump analyzer offering almost all of the features of the other tools. In addition, YourKit offers:
    • Reachability Scope: it can not only list the reachable and unreachable objects, but can also distribute objects according to their reachability scope, i.e. strongly reachable, weakly/softly reachable, or unreachable.
    • Memory Inspections: Instead of ad-hoc query capabilities, YourKit offers a comprehensive set of built-in queries that can inspect the memory looking for anti-patterns and provide causes and solutions for the usual memory problems.

I use Eclipse MAT quite a lot and have found it to be very helpful in analyzing heap dumps.

MAT is enriched with advanced features, including Histograms and the ability to compare them with other histograms. This gives a clear picture as to what is growing in memory and what is retaining the most space in the Java heap. One of the features I like a lot is ‘Merge Shortest Paths to GC Roots’, which helps in finding the trail of objects responsible for retaining unintentionally held objects. For example, in the following reference chain, a ThreadLocalDateFormat object is held in the heap by the ‘value’ field of a ThreadLocalMap$Entry object. Until that ThreadLocalMap$Entry is removed from the ThreadLocalMap, the ThreadLocalDateFormat won’t get collected.

weblogic.work.ExecuteThread @ 0x6996963a8 [ACTIVE] ExecuteThread: '203' for queue: 'weblogic.kernel.Default (self-tuning)' Busy Monitor, Thread| 1 | 176 | 40 | 10,536

'- threadLocals java.lang.ThreadLocal$ThreadLocalMap @ 0x69c2b5fe0 | 1 | 24 | 40 | 7,560

'- table java.lang.ThreadLocal$ThreadLocalMap$Entry[256] @ 0x6a0de2e40 | 1 | 1,040 | 40 | 7,536

'- [116] java.lang.ThreadLocal$ThreadLocalMap$Entry @ 0x69c2ba050 | 1 | 32 | 40 | 1,088

'- value weblogic.utils.string.ThreadLocalDateFormat @ 0x69c23c418 | 1 | 40 | 40 | 1,056

With this approach we can find the roots of the top growers in our heap and get to what is leaking in the memory.
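As a hedged illustration, the kind of code that produces a reference chain like the one above looks roughly like this (the class name is made up); on a long-lived pooled thread the formatter stays reachable through the thread's ThreadLocalMap until remove() is called or the thread dies:

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // Each worker thread gets its own SimpleDateFormat instance, stored as a
    // ThreadLocalMap$Entry in the thread's ThreadLocalMap.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static void cleanup() {
        FORMAT.remove();  // releases the entry for the current thread
    }
}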

Java Mission Control

Java Mission Control is available in the <jdk>/bin folder of the JDK. Flight Recordings collected with Heap Statistics enabled can greatly help in troubleshooting memory leaks. We can look at the object statistics under Memory->Object Statistics. This view shows the object histogram, including the percentage of the heap that each object type occupies. It also shows us the top growers in the heap, which most of the time correlate directly with the leaking objects.

OutOfMemoryError due to Finalization

The OutOfMemoryError can also be caused by excessive use of finalizers. Objects with a finalizer (i.e. a finalize() method) may have the reclamation of the space they occupy delayed: the finalizer thread needs to invoke the finalize() method of the instances before those instances can be reclaimed and their heap space freed. If the finalizer thread does not keep up with the rate at which objects become eligible for finalization (i.e. are added to the finalizer’s queue so that their finalize() method can be invoked), the JVM might fail with an OutOfMemoryError even though the objects piled up in the finalizer’s queue were eligible for collection. Therefore it is really important to make sure that we are not running out of memory due to a large number of objects pending finalization.
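As a minimal, hypothetical illustration: every instance of the class below must pass through the finalizer thread before its space can be reclaimed, so allocating instances faster than finalize() can run lets the finalizer queue, and with it the heap, grow:

public class FinalizableResource {
    private final byte[] payload = new byte[1024 * 1024];  // 1 MB per instance

    @Override
    protected void finalize() throws Throwable {
        Thread.sleep(100);  // slow finalizer: the finalizer thread cannot keep up
        super.finalize();
    }

    public static void main(String[] args) {
        while (true) {
            new FinalizableResource();  // instances pile up on the finalizer queue
        }
    }
}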

We can use the following tools to monitor the number of objects that are pending finalization:

  • JConsole

We can connect JConsole to a running process and monitor the number of objects pending finalization on the VM Summary page.

  • jmap -finalizerinfo
D:\tests\GC_WeakReferences>jmap -finalizerinfo 29456 
Attaching to process ID 29456, please wait...
Debugger attached successfully. Server compiler detected.
JVM version is 25.122-b08
Number of objects pending for finalization: 10
  • Heap Dumps

Almost all of the heap dump analysis tools show details on the objects that are due for finalization.

Output from the Java VisualVM

Date taken: Fri Jan 06 14:48:54 PST 2017
	File: D:\tests\java_pid19908.hprof
	File size: 11.3 MB
 
	Total bytes: 10,359,516
	Total classes: 466
	Total instances: 105,182
	Classloaders: 2
	GC roots: 419
	Number of objects pending for finalization: 2

OutOfMemoryError: PermGen Space

java.lang.OutOfMemoryError: PermGen space

As we know, PermGen has been removed as of Java 8, so if you are running on Java 8 or later, feel free to skip this section.

Up until Java 7, PermGen (short for “permanent generation”) was used to store class definitions and their metadata. Unexpected growth of the PermGen, or an OutOfMemoryError in this memory space, meant that either the classes were not getting unloaded as expected, or the specified PermGen size was too small to fit all the loaded classes and their metadata.

To ensure that the PermGen is sized appropriately per the application requirements, we should monitor its usage and configure it accordingly using the following JVM options:

-XX:PermSize=n -XX:MaxPermSize=m

OutOfMemoryError: Metaspace

Example of an OutOfMemoryError for the Metaspace:

java.lang.OutOfMemoryError: Metaspace

Since Java 8, class metadata is stored in the Metaspace. The Metaspace is not part of the Java heap and is allocated out of native memory, so by default it is unbounded, limited only by the amount of native memory available on the machine. However, the Metaspace size can be capped using the MaxMetaspaceSize option.

We can encounter an OutOfMemoryError for the Metaspace if and when its usage reaches the maximum limit specified with MaxMetaspaceSize. As with the other spaces, this could be due to inadequate sizing of the Metaspace, or to a classloader/class leak. In a later section, we will explore the diagnostic tools that can be used to troubleshoot memory leaks in the Metaspace.

OutOfMemoryError: Compressed class space

java.lang.OutOfMemoryError: Compressed class space

If UseCompressedClassPointers is enabled (which it is by default if UseCompressedOops is turned on), then two separate areas of native memory are used for the classes and their metadata. With UseCompressedClassPointers, 64-bit class pointers are represented with 32-bit values, and these compressed class pointers are stored in the compressed class space. By default, this compressed class space is sized at 1GB and can be configured using CompressedClassSpaceSize.
MaxMetaspaceSize sets an upper limit on the total committed size of both of these regions: the committed space of the compressed class space, and the class metadata.
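For example, both limits can be set explicitly on the command line (the values and the class name MyApp are illustrative only):

java -XX:MaxMetaspaceSize=512m -XX:CompressedClassSpaceSize=256m MyApp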

Here is some sample output from the GC logs with UseCompressedClassPointers enabled. The committed and reserved spaces reported for the Metaspace include the committed and reserved space for the compressed class space:

Metaspace     used 2921K, capacity 4486K, committed 4864K, reserved 1056768K
  class space used 288K, capacity 386K, committed 512K, reserved 1048576K

PermGen and Metaspace: Data Collection and Analysis Tools

PermGen and Metaspace occupancy can be monitored using Java Mission Control, Java VisualVM and JConsole. GC logs can help us understand the PermGen/Metaspace usage before and after Full GCs, and see if there are any Full GCs being invoked due to PermGen/Metaspace being full.

It is also very important to make sure that the classes are getting unloaded when they are expected to. The loading and unloading of the classes can be traced using:

-XX:+TraceClassUnloading -XX:+TraceClassLoading

It is important to be aware of a syndrome where applications inadvertently carry some JVM options from dev to prod, with detrimental consequences. One such option is -Xnoclassgc, which instructs the JVM not to unload classes during garbage collections. If an application needs to load a large number of classes, or at runtime some set of classes becomes unreachable and another set of new classes gets loaded, and the application happens to be running with -Xnoclassgc, then it runs the risk of hitting the maximum capacity of the PermGen/Metaspace and failing with an OutOfMemoryError. So, if you are not sure why this option was specified, it is a good idea to remove it and let the garbage collector unload classes whenever they are eligible for collection.

The number of loaded classes and the memory used by them can be tracked using the Native Memory Tracker (NMT). We will discuss this tool in detail below in the “OutOfMemoryError: Native Memory” section.

Please note that with the Concurrent Mark Sweep collector (CMS), the following option should be enabled to ensure that classes get unloaded during the CMS concurrent collection cycles: -XX:+CMSClassUnloadingEnabled

In Java 7, the default value of this flag is false, whereas in Java 8 it is enabled by default.
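For example (an illustrative command line, with MyApp as a placeholder class name):

java -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled MyApp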

jmap

‘jmap -permstat’ presents classloader statistics: the classloaders, the number of classes loaded by each of them, and whether those classloaders are dead or alive. It also tells us the total number of interned Strings present in the PermGen, and the number of bytes occupied by the loaded classes and their metadata. All of this information is very useful in determining what might be filling up the PermGen. Here is a sample printout displaying all of these statistics; note the summary on the last line of the listing.

$ jmap -permstat 29620
Attaching to process ID 29620, please wait...
Debugger attached successfully. Client compiler detected.
JVM version is 24.85-b06
12674 intern Strings occupying 1082616 bytes. finding class loader instances ..
 done. computing per loader stat ..done. please wait.. computing liveness.........................................done.
class_loader	classes bytes parent_loader   alive?  type
<bootstrap> 1846 5321080  null  live   <internal>
0xd0bf3828  0   0  	null   live    sun/misc/Launcher$ExtClassLoader@0xd8c98c78
0xd0d2f370  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99280  1   1440  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b71d90  0   0   0xd0b5b9c0	live 	  java/util/ResourceBundle$RBClassLoader@0xd8d042e8
0xd0d2f4c0  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5bf98  1   920   0xd0b5bf38 dead   sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99248  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f488  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5bf38  6   11832  0xd0b5b9c0 dead  sun/reflect/misc/MethodUtil@0xd8e8e560
0xd0d2f338  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f418  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f3a8  1   904 	null   dead	   sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5b9c0  317 1397448 0xd0bf3828 live sun/misc/Launcher$AppClassLoader@0xd8cb83d8
0xd0d2f300  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f3e0  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0ec3968  1   1440  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0e0a248  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99210  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f450  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f4f8  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0e0a280  1   904  	null   dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
 
total = 22  	2186    6746816   N/A   alive=4, dead=18   	N/A

Since Java 8, ‘jmap -clstats <pid>’ prints similar information about the classloaders and their liveness, but displays the number and size of the classes loaded into the Metaspace instead of the PermGen.

jmap -clstats 26240
Attaching to process ID 26240, please wait...
Debugger attached successfully. Server compiler detected. JVM version is 25.66-b00 finding class loader instances ..done. computing per loader stat ..done. please wait.. computing liveness.liveness analysis may be inaccurate ...
class_loader	 classes bytes parent_loader alive? type
<bootstrap>        513 950353 null live <internal>
0x0000000084e066d0 8 24416  0x0000000084e06740 live sun/misc/Launcher$AppClassLoader@0x0000000016bef6a0
0x0000000084e06740 0 0      null live sun/misc/Launcher$ExtClassLoader@0x0000000016befa48
0x0000000084ea18f0 0 0 0x0000000084e066d0 dead java/util/ResourceBundle$RBClassLoader@0x0000000016c33930
 
total = 4   	521 	974769      N/A     	alive=3, dead=1 	N/A

Heap Dumps

As we mentioned in the previous section, Eclipse MAT, jhat, Java VisualVM, the JOverflow JMC plugin and YourKit are some of the tools that can help analyze heap dumps for OutOfMemoryErrors, and heap dumps are just as useful for troubleshooting PermGen and Metaspace memory problems. Eclipse MAT offers a very nice feature called ‘Duplicate Classes’, which displays any classes that were loaded multiple times by different classloader instances. Some finite number of duplicate classes loaded by different classloaders may be part of the application design, but if they keep growing over time, that’s a red flag and should be investigated. This is most common in application-server hosted applications that run on the same underlying JVM instance and are un-deployed and re-deployed several times. If the un-deployment of the application does not release all of the references to the classloaders it created, the JVM is not able to unload the classes loaded by those classloaders, and the new deployment of the application loads a new set of those classes with a new classloader instance.
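A hedged sketch of this kind of retention (all class names here are made up): a registry loaded by a longer-lived, higher-level classloader holds on to an object whose class was loaded by the web application's classloader, which keeps that classloader and all of its classes alive after undeploy:

// Hypothetical class loaded by the server/system classloader, so it outlives deployments.
public class SharedRegistry {
    private static final java.util.Map<String, Object> LISTENERS = new java.util.HashMap<>();

    public static void register(String name, Object listener) {
        LISTENERS.put(name, listener);  // never removed on undeploy
    }
}

// Hypothetical class packaged inside the web application and loaded by the webapp
// classloader. Registering an instance in SharedRegistry keeps the instance, its class,
// and therefore the whole webapp classloader reachable, so none of the webapp's classes
// can be unloaded, and a redeploy loads duplicate copies of them.
class AppShutdownListener {
    void install() {
        SharedRegistry.register("app-listener", this);
    }
}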

In one such snapshot, there were duplicate copies of classes loaded by the JaxbClassLoader; this was happening because the application was inappropriately creating a new instance of JAXBContext for every XML-to-Java class binding.

jcmd

jcmd <pid/classname> GC.class_stats provides much more detailed information about the size of the loaded classes, enabling us to see the space occupied by each class in the Metaspace, as shown in the following sample output.

jcmd 2752 GC.class_stats
2752:
Index  Super  InstBytes  KlassBytes  annotations  CpAll  MethodCount  Bytecodes  MethodAll  ROAll   RWAll   Total  ClassName
1  	357 	821632 	536       	0      	352 	2       	13     	616    	184 	1448	1632 java.lang.ref.WeakReference
2  	-1  	295272 	480       	0      	0   	0       	0      	0      	24  	584 	608 [Ljava.lang.Object;
3  	-1  	214552 	480       	0      	0   	0       	0      	0      	24  	584 	608 [C
4  	-1  	120400 	480       	0      	0   	0       	0      	0      	24  	584 	608 [B
5  	35  	78912  	624       	0      	8712	94      	4623   	26032  	12136   24312   36448 java.lang.String
6  	35  	67112  	648       	0      	19384   130     	4973   	25536  	16552   30792   47344 java.lang.Class
7  	9   	24680  	560       	0      	384 	1       	10     	496    	232 	1432	1664 java.util.LinkedHashMap$Entry
8  	-1  	13216  	480       	0      	0   	0       	0      	0      	48  	584 	632 [Ljava.lang.String;
9  	35  	12032  	560       	0      	1296	7       	149    	1520   	880 	2808	3688 java.util.HashMap$Node
10 	-1  	8416   	480       	0      	0   	0       	0      	0      	32  	584 	616 [Ljava.util.HashMap$Node;
11 	-1  	6512   	480       	0      	0   	0       	0      	0      	24  	584 	608 [I
12 	358 	5688   	720       	0      	5816	44      	1696   	8808   	5920	10136   16056 java.lang.reflect.Field
13 	319 	4096   	568       	0      	4464	55      	3260   	11496  	7696	9664	17360 java.lang.Integer
14 	357 	3840   	536       	0      	584 	3       	56     	496    	344 	1448	1792 java.lang.ref.SoftReference
15 	35  	3840   	584       	0      	1424	8       	240    	1432   	1048	2712	3760 java.util.Hashtable$Entry
16 	35  	2632   	736       	368    	8512	74      	2015   	13512  	8784	15592   24376 java.lang.Thread
17 	35  	2496   	504       	0      	9016	42      	2766   	9392   	6984	12736   19720 java.net.URL
18 	35  	2368   	568       	0      	1344	8       	223    	1408   	1024	2616	3640 java.util.concurrent.ConcurrentHashMap$Node
…<snip>…
577	35  	0      	544       	0      	1736	3       	136    	616    	640 	2504	3144 sun.util.locale.provider.SPILocaleProviderAdapter$1
578	35  	0      	496       	0      	2736	8       	482    	1688   	1328	3848	5176 sun.util.locale.provider.TimeZoneNameUtility
579	35  	0      	528       	0      	776 	3       	35     	472    	424 	1608	2032 sun.util.resources.LocaleData$1
580	442 	0      	608       	0      	1704	10      	290    	1808   	1176	3176	4352 sun.util.resources.OpenListResourceBundle
581	580 	0      	608       	0      	760 	5       	70     	792    	464 	1848	2312 sun.util.resources.TimeZoneNamesBundle
          	1724488 	357208    	1536   	1117792 7754    	311937 	1527952	1014880 2181776 3196656 Total
            	53.9%  	11.2%    	0.0%   	35.0%	-      	9.8%   	47.8%  	31.7%   68.3%   100.0%
Index  Super  InstBytes  KlassBytes  annotations  CpAll  MethodCount  Bytecodes  MethodAll  ROAll   RWAll   Total  ClassName

From this output, we can see the names of the loaded classes (ClassName), the bytes occupied by each class (KlassBytes), the bytes occupied by the instances of each class (InstBytes), the number of methods in each class (MethodCount), the space taken up by the bytecodes (Bytecodes), and much more.

Please note that in Java 8, this diagnostic command requires the Java process to be started with the -XX:+UnlockDiagnosticVMOptions option.

jcmd 33984 GC.class_stats
33984:
GC.class_stats command requires -XX:+UnlockDiagnosticVMOptions

-XX:+UnlockDiagnosticVMOptions is not required for this diagnostic command in Java 9.

OutOfMemoryError: Native Memory

Some examples of the OutOfMemoryError for the native memory are:
OutOfMemoryError due to insufficient swap space:

# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
#  Internal Error (allocation.cpp:166), pid=2290, tid=27
#  Error: ChunkPool::allocate

OutOfMemoryError due to insufficient process memory:

# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: unable to create new native thread

These errors clearly tell us that the JVM has failed to allocate from the native memory. This could be because the process itself is consuming all of the native memory, or because other processes on the system are eating it up. The usage of the native heap can be monitored using ‘pmap’ (or other native memory mapping tools). If, after appropriately configuring the Java heap, the number of threads and the stack sizes, and taking care to leave enough room for the native heap, we still find our native heap usage growing over time and ultimately face an OutOfMemoryError, then that could be an indication of a native memory leak.

Native Heap OutOfMemoryError with 64-bit JVM

Running with a 32-bit JVM puts a maximum limit of 4GB on the process size, so it’s more likely you’ll run out of native memory with 32-bit Java processes. A 64-bit JVM gives us access to a practically unlimited address space, so technically we would expect never to run out of native heap. In fact that is not the case, and it is not uncommon to observe native heap OutOfMemoryErrors in a 64-bit JVM too. This is because the 64-bit JVM by default has a feature called CompressedOops enabled, and the implementation of this feature determines where the Java heap is placed in the address space. The position of the Java heap can put a cap on the maximum capacity of the native heap. The following memory map shows that the Java heap is allocated at the 8GB address boundary, leaving around 4GB of space for the native heap. If this application allocates intensively out of the native memory and requires more than 4GB, it will throw a native heap OutOfMemoryError even though there is plenty of memory available on the system.

0000000100000000 8K r-x-- /sw/.es-base/sparc/pkg/jdk-1.7.0_60/bin/sparcv9/java
0000000100100000 8K rwx-- /sw/.es-base/sparc/pkg/jdk-1.7.0_60/bin/sparcv9/java
0000000100102000 56K rwx--	    [ heap ]
0000000100110000 2624K rwx--	[ heap ]   <--- native Heap
00000001FB000000 24576K rw---	[ anon ]   <--- Java Heap starts here
0000000200000000 1396736K rw---	[ anon ]
0000000600000000 700416K rw---	[ anon ]

This problem can be resolved by using the -XX:HeapBaseMinAddress=n option to specify the address at which the Java heap should start. Setting it to a higher address leaves more room for the native heap.
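For example (the values and the class name MyApp are illustrative only), placing the heap base higher up in the address space leaves more of the lower address range available to the native heap:

java -XX:HeapBaseMinAddress=16g -Xmx4g MyApp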

Please see more details on how to diagnose, troubleshoot and work around this issue here.

Native Heap: Diagnostic Tools

Let’s take a look at the memory leak detection tools that can help us get to the cause of the native memory leaks.

Native Memory Tracking

The JVM has a powerful feature called Native Memory Tracking (NMT) that can be used to track the native memory used internally by the JVM. Please note that it cannot track memory allocated outside the JVM or by native libraries. With the following two simple steps, we can monitor the JVM's native memory usage:

  • Start the process with NMT enabled. The output level can be set to a ‘summary’ or ‘detail’ level:
    • -XX:NativeMemoryTracking=summary
    • -XX:NativeMemoryTracking=detail
  • Use jcmd to get the native memory usage details:
    • jcmd <pid> VM.native_memory  

Example of NMT output:

d:\tests>jcmd 90172 VM.native_memory
90172:
Native Memory Tracking:
Total: reserved=3431296KB, committed=2132244KB
-                 Java Heap (reserved=2017280KB, committed=2017280KB)
            (mmap: reserved=2017280KB, committed=2017280KB)
-                 Class (reserved=1062088KB, committed=10184KB)
            (classes #411)
            (malloc=5320KB #190)
            (mmap: reserved=1056768KB, committed=4864KB)
-                  Thread (reserved=15423KB, committed=15423KB)
            (thread #16)
            (stack: reserved=15360KB, committed=15360KB)
            (malloc=45KB #81)
            (arena=18KB #30)
-                 Code (reserved=249658KB, committed=2594KB)
            (malloc=58KB #348)
            (mmap: reserved=249600KB, committed=2536KB)
-                 GC (reserved=79628KB, committed=79544KB)
            (malloc=5772KB #118)
            (mmap: reserved=73856KB, committed=73772KB)
-                 Compiler (reserved=138KB, committed=138KB)
            (malloc=8KB #41)
            (arena=131KB #3)
-                 Internal (reserved=5380KB, committed=5380KB)
            (malloc=5316KB #1357)
            (mmap: reserved=64KB, committed=64KB)
-                 Symbol (reserved=1367KB, committed=1367KB)
            (malloc=911KB #112)
            (arena=456KB #1)
-                 Native Memory Tracking (reserved=118KB, committed=118KB)
            (malloc=66KB #1040)
            (tracking overhead=52KB)
-                 Arena Chunk (reserved=217KB, committed=217KB)
            (malloc=217KB)

More detailed information on all the jcmd commands to access the NMT data and how to read its output can be found here.
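When hunting a growth over time, it is usually more useful to compare NMT snapshots than to look at a single one. Assuming NMT is already enabled on the process, a baseline can be taken and later diffed against:

jcmd <pid> VM.native_memory baseline
# ... let the process run while the suspected native memory growth happens ...
jcmd <pid> VM.native_memory summary.diff
# detail.diff is also available when NMT was started with -XX:NativeMemoryTracking=detail
jcmd <pid> VM.native_memory detail.diff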

Native Memory Leak Detection Tools

For native memory leaks stemming from outside the JVM, we need to rely on native memory leak tools for detection and troubleshooting. Native tools such as dbx, libumem, valgrind, purify, etc. can come to the rescue in dealing with these leaks.

Summary

Troubleshooting memory problems can be very hard and tricky, but the right approach and the right set of tools can make it much simpler to tackle them. As we have seen, the Java HotSpot JVM reports several different kinds of OutOfMemoryError messages, and it is very important to understand these error messages clearly, and to have a wide range of diagnostic and troubleshooting tools in our toolkit to diagnose and root out these problems.

About the Author

Poonam Parhar is currently a JVM Sustaining Engineer at Oracle, where her primary responsibility is to resolve customer-escalated problems against the JRockit and HotSpot JVMs. She loves debugging and troubleshooting problems, and is always focused on improving the serviceability and supportability of the JVM. She has nailed down many complex garbage collection issues in the HotSpot JVM, and is passionate about improving the debugging tools and the serviceability of the product so as to make it easier to troubleshoot and fix garbage collector related issues in the JVM. In an attempt to help customers and the Java community, she shares her work experiences and knowledge through the blog she maintains here.
