
Tips for Tuning the Garbage First Garbage Collector


This is the second article in a two-part series on the G1 garbage collector. You can find part one, published on InfoQ on July 15, 2013: G1: One Garbage Collector To Rule Them All.

Before we can understand how to tune the Garbage First garbage collector (G1 GC), we must first understand the key concepts that define G1. In this article, I will first introduce the concept and then talk about how to tune G1 (where appropriate) with respect to that concept.

Remembered Sets

Recall from the previous article: Remembered Sets (RSets for short) are per-region entries that help G1 GC track outside references that point into a heap region. So, instead of scanning the entire heap for references into a region, G1 just needs to scan the region's RSet.


Figure 1: Remembered Sets

Let us look at an example. In figure 1 above, we show three regions (in gray): Region 1, Region 2 and Region 3, and their associated RSets (in pink), which represent sets of cards. Both Region 1 and Region 3 happen to reference objects in Region 2. Therefore, the RSet for Region 2, the "owning region", tracks those two incoming references.

There are two concepts that help maintain RSets:

  1. Post-write barriers
  2. Concurrent refinement threads

The barrier code steps in after a write (hence the name "post-write barrier") and helps track cross-region updates. Update log buffers are responsible for logging the cards that contain the updated reference field. Once these buffers are full, they are retired. Concurrent refinement threads process these full buffers.

Note that the concurrent refinement threads help maintain RSets by updating them concurrently (while the application is also running). The deployment of the concurrent refinement threads is tiered: initially only a small number of threads are deployed, and more are added depending on the number of filled update buffers waiting to be processed. The maximum number of concurrent refinement threads can be controlled by -XX:G1ConcRefinementThreads or even -XX:ParallelGCThreads. If the concurrent refinement threads cannot keep up with the number of filled buffers, then the mutator threads take over and process the buffers themselves - something you should strive to avoid.
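
As an illustration, you could cap the refinement threads explicitly on the command line; the thread count, heap sizes and application name below are placeholders, not recommendations:

java -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1ConcRefinementThreads=8 -jar myapp.jar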

OK, so coming back to RSets - there is one RSet per region. There are three levels of granularity for RSets: sparse, fine and coarse. A Per-Region-Table (PRT) is the abstraction that houses these granularity levels. A sparse PRT is a hash table of card indices; G1 GC internally maintains these cards, and a card may contain references from the region spanning the address associated with that card into the owning region. A fine-grain PRT is an open hash table where each entry represents a region with references into the owning region; the card indices within that region are held in a bitmap. When the fine-grain PRT reaches its maximum capacity, a corresponding coarse-grained bit is set in the coarse-grain bitmap and the corresponding entry is deleted from the fine-grain PRT. The coarse-grain bitmap has one bit per region; a set bit means that the associated region may contain references to the owning region.
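
To make the three granularity levels more concrete, here is a deliberately simplified Java sketch of a per-region-table. It only illustrates the sparse-to-fine-to-coarse transitions described above; the class, the capacity constants and the method are all made up for illustration and this is not HotSpot's actual data structure:

import java.util.BitSet;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Conceptual model of one region's remembered set (NOT HotSpot's implementation).
// The capacities are made-up placeholders.
class PerRegionTable {
    static final int SPARSE_CARD_CAPACITY = 4;  // max cards per referencing region in the sparse table
    static final int FINE_REGION_CAPACITY = 8;  // max referencing regions tracked with fine-grain bitmaps

    final Map<Integer, Set<Integer>> sparse = new HashMap<>(); // region -> small set of card indices
    final Map<Integer, BitSet> fine = new HashMap<>();         // region -> bitmap of card indices
    final BitSet coarse = new BitSet();                        // one bit per referencing region

    void addReference(int fromRegion, int cardIndex) {
        if (coarse.get(fromRegion)) {
            return; // already coarsened: card-level detail is no longer kept
        }
        BitSet fineCards = fine.get(fromRegion);
        if (fineCards != null) {
            fineCards.set(cardIndex);
            return;
        }
        Set<Integer> cards = sparse.computeIfAbsent(fromRegion, r -> new HashSet<>());
        cards.add(cardIndex);
        if (cards.size() > SPARSE_CARD_CAPACITY) { // sparse entry overflows: promote it to fine
            sparse.remove(fromRegion);
            if (fine.size() >= FINE_REGION_CAPACITY) {
                // Fine-grain PRT is full: coarsen one entry (set its coarse bit, drop its bitmap).
                int evicted = fine.keySet().iterator().next();
                fine.remove(evicted);
                coarse.set(evicted);
            }
            BitSet promoted = new BitSet();
            cards.forEach(promoted::set);
            fine.put(fromRegion, promoted);
        }
    }
}

Each coarsening trades away card-level precision: once a region's coarse bit is set, the whole region has to be scanned for references into the owning region, which is why heavy coarsening shows up later as higher "Scan RS" times.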

A Collection Set (CSet) is a set of regions to be collected during a garbage collection. For a young collection, the CSet only includes young regions, for a mixed collection, the CSet includes young and old regions.

If the CSet includes many regions with coarsened RSets (note that "coarsening of RSets" is defined as the transitioning of RSets through the different levels of granularity), then you will see an increase in the RSet scan time, reported in the GC pause output as "Scan RS (ms)". If the Scan RS times seem high relative to the overall GC pause time, or simply high for your application, then look for the text string "Did xyz coarsenings" in your GC log output when using the diagnostic option -XX:+G1SummarizeRSetStats. You can also specify the reporting frequency (in number of GCs) by setting -XX:G1SummarizeRSetStatsPeriod=period.
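
For example (the heap sizes and application name are placeholders), since -XX:+G1SummarizeRSetStats is a diagnostic option, it has to be unlocked first:

java -Xms4g -Xmx4g -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=1 -jar myapp.jar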

If you recall from the previous article, "Update RS (ms)" in the GC pause output shows the time spent updating RSets, and "Processed Buffers" shows the count of update buffers processed during the GC pause. If you spot issues with those in your GC logs, then use the above-mentioned options to "deep-dive" into the issue even further.

Those options can also help identify potential issues with the update log buffers and the concurrent refinement threads.

A sample output of -XX:+G1SummarizeRSetStats with the period set to one (-XX:G1SummarizeRSetStatsPeriod=1):
Concurrent RS processed 784125 cards

Of 4870 completed buffers:

4870 (100.0%) by concurrent RS threads.

0 ( 0.0%) by mutator threads.

Concurrent RS threads times (s)

0.64 0.30 0.26 0.18 0.17 0.16 0.17 0.15 0.15 0.12 0.13 0.08 0.13 0.13 0.12 0.13 0.12 0.11 0.12 0.11 0.12 0.13 0.11

Concurrent sampling threads times (s)

0.00

Total heap region rem set sizes = 199140K. Max = 661K.

Static structures = 660K, free_lists = 15052K.

1009422114 occupied cards represented.

Max size region =
313:(O)[0x000000054e400000,0x000000054e800000,0x000000054e800000], size = 662K, occupied = 1214K.


Did 2759 coarsenings.

The above output shows the count of processed cards and completed buffers. It also shows that the concurrent refinement threads did 100% of the work and mutator threads did none (which as we said is a good sign!). It then lists the concurrent refinement thread times for each thread involved in the work.

The second part of the output (starting with the total heap region rem set sizes) shows the cumulative stats since the start of the HotSpot VM. The cumulative stats include the total RSet sizes and max RSet size, the total number of occupied cards, and the max size region. It also shows the total number of coarsenings done since the start of the VM.

At this point, I think it is safe to introduce another option flag -XX:G1RSetUpdatingPauseTimePercent=10. This flag sets a percent target amount (defaults to 10 percent of the pause time goal) that G1 GC should spend in updating RSets during a GC evacuation pause. You can increase or decrease the percent value, so as to spend more or less (respectively) time in updating the RSets during the stop-the-world (STW) GC pause and let the concurrent refinement threads deal with the update buffers accordingly.

Keep in mind that by decreasing the percent value, you are pushing off the work to the concurrent refinement threads; hence, you will see an increase in concurrent work.
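
For instance, to spend less of the stop-the-world pause on RSet updates and push more of that work to the concurrent refinement threads, you might lower the target; the value below is purely illustrative:

java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1RSetUpdatingPauseTimePercent=5 -jar myapp.jar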

Reference Processing

G1 GC processes references during an evacuation pause and during a remark pause (a part of the multi-phased concurrent marking).

During an evacuation pause, the reference objects are discovered during the object scanning and copying phase and are processed after that. In the GC log, you can see the reference processing (Ref Proc) time clubbed under the sequential work group called "Other" -

 [Other: 0.2 ms]
   [Choose CSet: 0.0 ms]
   [Ref Proc: 0.2 ms]
   [Ref Enq: 0.0 ms]
   [Free CSet: 0.0 ms]

Note: References with dead referents are added to the pending list, and that time is shown in the GC log as the reference enqueueing time (Ref Enq).

During a remark pause, the discovery happens during the earlier phases of concurrent marking. (Note: both are a part of the multi-phase concurrent marking cycle. Please refer to the previous article for more information.) The remark phase deals with the processing of the discovered references. In the GC log, you can see the reference processing (GC ref-proc) time shown in the GC remark section -

0.094: [GC remark 0.094: [GC ref-proc, 0.0000033 secs], 0.0004374 secs]
       [Times: user=0.00 sys=0.00, real=0.00 secs] 

If you see high times during reference processing, then turn on parallel reference processing by enabling the following option on the command line: -XX:+ParallelRefProcEnabled.
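
For example (the application name is a placeholder):

java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+ParallelRefProcEnabled -jar myapp.jar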

Evacuation Failure

You might have come across some of the terms "evacuation failure", "to-space exhausted", "to-space overflow" or maybe something like "promotion failure" during your GC travels. These terms all refer to the same thing, and the concept is similar in G1 GC as well. When there are no more free regions to promote to the old generation or to copy to the survivor space, and the heap cannot expand since it is already at its maximum, an evacuation failure occurs.

For G1 GC, an evacuation failure is very expensive -

  1. For successfully copied objects, G1 needs to update the references and the regions have to be tenured.
  2. For unsuccessfully copied objects, G1 will self-forward them and tenure the regions in place.

So what should you do when you encounter an evacuation failure in your G1 GC logs? -

  1. Find out if the failures are a side effect of over-tuning. Get a simple baseline with just a min and max heap and a realistic pause time goal: remove any additional heap sizing such as -Xmn, -XX:NewSize, -XX:MaxNewSize, -XX:SurvivorRatio, etc., and use only -Xms, -Xmx and a pause time goal -XX:MaxGCPauseMillis (a baseline command line is sketched after this list).
  2. If the problem persists even with the baseline run, and if humongous allocations (see the next section below) are not the issue, then the corrective action is to increase your Java heap size - if you can, of course.
  3. If increasing the heap size is not an option, and if you notice that the marking cycle is not starting early enough for G1 GC to be able to reclaim the old generation, then drop your -XX:InitiatingHeapOccupancyPercent. The default is 45% of your total Java heap. Dropping the value will help start the marking cycle earlier. Conversely, if the marking cycle is starting early and not reclaiming much, you should increase the threshold above the default value to make sure that you are accommodating the live data set of your application.
  4. If the concurrent marking cycles are starting on time but are taking a long time to finish, and hence are delaying the mixed garbage collection cycles (which will eventually lead to an evacuation failure, since the old generation is not reclaimed in time), increase the number of concurrent marking threads using the command line option -XX:ConcGCThreads.
  5. If "to-space" survivor is the issue, then increase -XX:G1ReservePercent. The default is 10% of the Java heap. G1 GC creates a false ceiling and reserves that memory in case there is a need for more "to-space". Of course, G1 GC caps it at 50%, since we do not want the end-user to set it to a very large value.

To help explain the cause of evacuation failure, I want to introduce a very useful option: -XX:+PrintAdaptiveSizePolicy. This option will provide many ergonomic details that are purposefully kept out of the -XX:+PrintGCDetails option.

Let us look at a log snippet with -XX:+PrintAdaptiveSizePolicy enabled:

6062.121: [GC pause (G1 Evacuation Pause) (mixed) 6062.121: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 129059, predicted base time: 52.34 ms, remaining time: 147.66 ms, target pause time: 200.00 ms]

6062.121: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 912 regions, survivors: 112 regions, predicted young region time: 256.16 ms]

6062.122: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: old CSet region num reached min, old: 149 regions, min: 149 regions]6062.122: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 912 regions, survivors: 112 regions, old: 149 regions, predicted pause time: 344.87 ms, target pause time: 200.00 ms]

6062.281: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: region allocation request failed, allocation request: 2097152 bytes]

6062.281: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 2097152 bytes, attempted expansion amount: 4194304 bytes]

6062.281: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed]

6062.902: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 20.30 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]

6062.902: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 9596567552 bytes, allocation request: 0 bytes, threshold: 5798205810 bytes (45.00 %), source: end of GC]

6062.902: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 1038 regions, reclaimable: 2612574984 bytes (20.28 %), threshold: 10.00 %]

(to-space exhausted), 0.7805160 secs]

 

The above snippet carries a lot of information - first off let me highlight a few things from the set of command line options that were used for the above GC log: -server -Xms12g -Xmx12g -XX:+UseG1GC -XX:NewSize=4g -XX:MaxNewSize=5g

The -XX:NewSize=4g and -XX:MaxNewSize=5g switches show that the user has restricted the nursery to the 4 GB to 5 GB range, which restricts the adaptability of G1 GC. If G1 needs to shrink the nursery to a smaller value, it cannot; if G1 needs to grow the nursery beyond that range, it cannot either!

This is evident from the heap utilization information printed at the end of this evacuation pause:

[Eden: 3648.0M(3648.0M)->0.0B(3696.0M) Survivors: 448.0M->400.0M Heap: 11.3G(12.0G)->9537.9M(12.0G)]

After the cycle, G1 has to adhere to 4096M as the minimum nursery (-XX:NewSize=4g), out of which, based on G1's calculations, 3696M should be for Eden space and 400M for the survivor space. However, post-collection the data in the Java heap is already at 9537.9M. So, G1 ran out of "to-space"! The next two evacuation pauses also result in evacuation failures with the following heap information:

Next mixed evacuation pause 1:

   [Eden: 2736.0M(3696.0M)->0.0B(4096.0M) Survivors: 400.0M->0.0B Heap: 12.0G(12.0G)->12.0G(12.0G)]

Next mixed evacuation pause 2:

   [Eden: 0.0B(4096.0M)->0.0B(4096.0M) Survivors: 0.0B->0.0B Heap: 12.0G(12.0G)->12.0G(12.0G)]

Finally leading to a full GC -

6086.564: [Full GC (Allocation Failure) 11G->3795M(12G), 15.0980440 secs]

   [Eden: 0.0B(4096.0M)->0.0B(4096.0M) Survivors: 0.0B->0.0B Heap: 12.0G(12.0G)->3795.2M(12.0G)] 

The full GC could have been avoided by letting the nursery / young gen shrink to the default minimum (5% of the total Java heap). As you may be able to tell, the old generation was big enough to accommodate the live data set (LDS) of 3795M. However, the LDS coupled with the explicitly set minimum young generation of 4 GB pushed the occupancy above 7891M. Since the marking threshold was at its default value of 45% of the heap (i.e. around 5529M), the marking cycle started earlier and reclaimed very little during the mixed collections. The heap occupancy kept increasing and another marking cycle started, but by the time that marking cycle was done and the mixed GCs kicked in, the occupancy was already at 11.3G (as seen in the first heap utilization information above). That collection also encountered evacuation failures. Therefore, this issue falls into both the over-tuning and the "starting the marking cycle too early" categories.
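
Putting the numbers from this log together (with 1 GB = 1024 MB):

marking threshold = 45% of 12288M ≈ 5529M
occupancy floor   = live data set + explicitly set minimum young gen
                  = 3795M + 4096M = 7891M

Since the occupancy could never drop below roughly 7891M, which sits well above the ~5529M marking threshold, marking cycles kept being triggered even though there was little in the old generation to reclaim.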

Humongous Allocations

One last thing that I would like to cover in this article is a concept that may be new to many end-users - humongous objects (H-objs) and G1 GC's handling of H-objs.

So, why do we need a different allocation path for H-objs? -

For G1 GC, objects are considered humongous if they span 50% or more of G1's region size. A humongous allocation needs a contiguous set of regions. As you can imagine, if G1 allocated H-objs in the young generation and they stayed alive long enough, there would be a lot of unnecessary and expensive copying (remember, H-objs need contiguous regions) of these H-objs to the survivor space, and eventually these H-objs would get promoted to the old generation. Hence, to avoid this overhead, H-objs are allocated directly out of the old generation and then categorized, or mapped, as humongous regions.
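
As a minimal sketch of that 50% threshold (the class, the region size and the array length below are made-up examples, not recommendations):

class HumongousCheck {
    // An object is humongous when it spans 50% or more of the G1 region size.
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes >= regionBytes / 2;
    }

    public static void main(String[] args) {
        long regionBytes = 2L * 1024 * 1024;                 // e.g. -XX:G1HeapRegionSize=2m
        byte[] buffer = new byte[(int) (regionBytes / 2)];   // 1 MB payload, plus the array header
        // The array spans at least half the region, so G1 allocates it directly
        // in the old generation as a humongous object.
        System.out.println(isHumongous(buffer.length, regionBytes));   // true
    }
}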

By allocating H-objs directly in the old generation, G1 avoids including them in any of the evacuation pauses, and thus they are never moved. During a full garbage collection cycle, G1 GC compacts around the live H-objs. Outside of a full GC, dead H-objs are reclaimed during the cleanup phase of the multi-phased concurrent marking cycle. In other words, H-objs are collected either during the cleanup phase, or they are collected during a full GC.

Before allocating an H-obj, G1 GC will check if the initiating heap occupancy percent (the marking threshold) will be crossed because of the allocation. If it will be, then G1 GC will initiate a concurrent marking cycle. This is done because we want to avoid evacuation failures and full garbage collection cycles as much as possible; checking as early as possible gives the concurrent cycle as much time as possible to complete before there are no more regions available for live object evacuations.
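
Conceptually, the check looks something like the sketch below (a simplification, not HotSpot's code); it matches the "occupancy", "allocation request" and "threshold" fields in the ergonomics log line shown a little further down:

// Simplified sketch of the pre-allocation check, not HotSpot's implementation.
static boolean shouldRequestConcurrentCycle(long occupiedBytes, long requestBytes,
                                            long heapBytes, double ihopPercent) {
    long threshold = (long) (heapBytes * (ihopPercent / 100.0));   // e.g. 45% of the heap
    return occupiedBytes + requestBytes >= threshold;              // crossing IHOP -> start marking
}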

With G1 GC, the basic premise with H-objs is that there are not too many of them and that they are long-lived. However, since G1 GC's region size depends on your minimum heap size, it can happen that, based on the assigned region size, a "normal" allocation in your application looks humongous to G1 GC. This leads to lots of H-obj allocations taking regions from the old generation, which eventually leads to an evacuation failure since G1 cannot keep up with those humongous allocations.

Now you may be wondering how to find out whether humongous allocations are leading to evacuation failures. Here, once again, -XX:+PrintAdaptiveSizePolicy will come to your rescue.

In your GC log, you will see something like this:

1361.680: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: occupancy higher than threshold, occupancy: 1459617792 bytes, allocation request: 4194320 bytes, threshold: 1449551430 bytes (45.00 %), source: concurrent humongous allocation]

Therefore, now you can tell that a concurrent cycle was requested since there was a humongous allocation request for 4194320 bytes.

This information is helpful, since you can not only tell how many humongous allocations were made by your application (and whether they were excessive or not), but also the size of each allocation. Moreover, if you deem that there were excessive humongous allocations, all you have to do is increase the G1 region size so that those H-objs fit as regular allocations. In this example, the allocation size is just over 4 MB; to make it a regular allocation, we need a region size of 16 MB. So, the recommendation here was to set that explicitly on the command line: -XX:G1HeapRegionSize=16M

Note: Recall from my last article, G1 regions can span from 1MB to 32MB in powers of two, and the allocation request is slightly over 4 MB. Hence, an 8MB region size is not quite large enough to avoid the humongous allocations. We need to go to the next power of 2, 16 MB.
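
The sizing decision can be sketched as a small helper (purely illustrative; the class and method names are made up):

class RegionSizeHelper {
    // The smallest G1 region size (a power of two between 1 MB and 32 MB) for which
    // the given allocation is no longer humongous, i.e. strictly smaller than half a region.
    static int regionSizeMB(long allocationBytes) {
        for (int mb = 1; mb <= 32; mb *= 2) {
            long regionBytes = (long) mb * 1024 * 1024;
            if (allocationBytes < regionBytes / 2) {
                return mb;
            }
        }
        return 32;   // allocations of 16 MB and above stay humongous even at the 32 MB maximum
    }

    public static void main(String[] args) {
        // The allocation request from the log snippet above:
        System.out.println(regionSizeMB(4_194_320));   // prints 16 -> -XX:G1HeapRegionSize=16M
    }
}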

OK. I think I have covered most of the important issues and G1 GC concepts here. Once again, thanks for tuning in!

About the Author

Monica Beckwith is a Java Performance Consultant. Her past experience includes working with Oracle/Sun and AMD, optimizing the JVM for server-class systems. Monica was voted a Rock Star speaker @JavaOne 2013 and was the performance lead for the Garbage First Garbage Collector (G1 GC). You can follow Monica on Twitter @mon_beck.


Community comments

  • Strange behaviour with G1 GC

    by Jose Otavio Carlomagno Filho,

    Hi Monica.

    Great work on both of your articles about G1 here at InfoQ!

    In our company, we have been experimenting with G1 for a couple of months, but we are seeing some strange behaviour. Do you think you can shed some light on the subject? Here's a question I posted at StackOverflow: stackoverflow.com/questions/20274317/g1-garbage...

    I searched pretty much everywhere and could only find one person with a very similar problem, but unfortunately nobody answered his questions either. Do you think you can give us a little help?

    Thanks in advance!

    Jose

  • Re: Strange behaviour with G1 GC

    by Monica Beckwith,

    Hi Jose -

    Sorry that I didn't get your message earlier. I checked the thread on stackoverflow to understand your problem and also checked out the graphs. From the graphs it's clear to me that class unloading is the problem here. With G1 (and ParallelOldGC) classes are unloaded only during a full collection. You probably didn't notice this issue with ParallelOldGC since the full collections were (probably) plenty and they are parallel (unlike G1 where a full collection is single threaded). Unfortunately, G1 doesn't (yet) perform class unloading at the end of marking (but G1 devs are aware of this issue and there already is a JEP in-place for it: openjdk.java.net/jeps/156). Please feel free to send in more updates on this issue.

  • Re: Strange behaviour with G1 GC

    by Jose Otavio Carlomagno Filho,

    Hi Monica,

    Thanks for your reply. Indeed we have a problem in our application regarding the number of classes being loaded/unloaded, we are already tackling this (we use a lot of reflection and JBoss is generating many proxies because we have a lot of remote EJBs which don't need to be remote).

    You're right about ParallelOldGC: it performed more full collections because the old gen was filling up and that prevented the Perm Gen from filling up. So G1 was not the cause of the problem, it just helped expose it.

    We have also been experimenting with CMS and it is giving us good results in some cases, but I think G1 would be ideal for us because we usually have large heaps (always over 5 GB, normally around 10 GB depending on hardware availability), and we have seen some very large pauses with ParallelOldGC. I hope they improve class unloading in G1 since it's our major issue with this collector right now. It is working a lot better for us after we made improvements to our application, but it might still cause some trouble, just a lot less often than before.

  • Re: Strange behaviour with G1 GC

    by Monica Beckwith,

    Yes. I think since HS25, CMS has the CMSClassUnloadingEnabled set by default. So, CMS will collect perm space by default and doesn't have to wait till a full GC.
    If you have any other issues with G1 (or CMS) or if you want to check on the priority of the class unloading JEP, I encourage you to use the hotspot-gc-use alias. That will help you have a long term and a short term plan based on the availability of the fix.

  • Re: Strange behaviour with G1 GC

    by Jose Otavio Carlomagno Filho,

    Yes, but fragmentation with CMS might be a big problem for us, which is why I think G1 should be ideal once these wrinkles are ironed out.

    I'll subscribe to hotspot-gc-use. Thanks for your help.

  • Uncaptured time in G1 gc log

    by Kirti Teja Rao,

    Hi,

    I am trying to use the G1 garbage collector for a low latency application and want to keep the GC pauses within a few tens of milliseconds. I notice some pauses as big as 600 ms, but when I add up the timings in the parallel time or other phases, it does not add up. How do I uncover the cause of these delays in GC?

    I also posted a question on stackoverflow with the GC log.

    stackoverflow.com/questions/21416774/g1-garbage...

    Can you please suggest if there are any STW phases not captured in GC log?

  • Problems with the G1 collector

    by Jesper Kragh,

    Hi Monica

    We just changed to the G1 collector in our production environment, and are having serious problems. We are experiencing several full stop-the-world GCs each day, without any indication in the GC logs as to why.

    I have extensive knowledge of the CMS collector, but I have been unable to understand why the full stop-the-world GCs are happening with the G1 collector.

    We have plenty of Permgen to spare when the problem occurs, and the problem does not seem to be caused by h-objects. The stop the world GC happens most often when we update one of our largest caches, but this cache contains no h-objects.

    Is there any place we can get some help analysing the GC logs? Or do we need Oracle support for this?

  • Re: Problems with the G1 collector

    by Kirti Teja Rao,

    If an explicit GC is invoked, on G1 it will be a serial full GC. So I suggest adding the -XX:+DisableExplicitGC parameter. Also, heap inspection tools like Mission Control or jmap tend to invoke explicit GCs.

    You could also try enabling the concurrent explicit GC parameter, but

  • Re: Problems with the G1 collector

    by Kirti Teja Rao,

    You can enable -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1
    to see what type of GC it is in console output.

  • Unexplained long stop the world pauses during concurrent marking step in G1 Collector

    by Kannan Krishnamurthy,

    Greetings,

    We are experiencing unexplained/unknown long pauses (8 seconds) during concurrent marking step of G1 collector.

    Extract from GC-log showing problem.

    2014-08-07T13:42:30.552-0400: 92183.303: [GC concurrent-root-region-scan-start] 2014-08-07T13:42:30.555-0400: 92183.305: [GC concurrent-root-region-scan-end, 0.0025230 secs] 2014-08-07T13:42:30.555-0400: 92183.305: [GC concurrent-mark-start] 2014-08-07T13:42:39.598-0400: 92192.348: Application time: 9.0448670 seconds 2014-08-07T13:42:39.601-0400: 92192.351: Total time for which application threads were stopped: 0.0029740 seconds 2014-08-07T13:42:39.603-0400: 92192.353: [GC pause (G1 Evacuation Pause) (young) 92192.354: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 7980, predicted base time: 28.19 ms, remaining time: 71.81 ms, target pause time: 100.00 ms

    2014-08-07T13:42:30.555-0400: 92183.305 is when the concurrent mark starts; approximately 2 seconds into this step, the application starts to pause until the next GC pause. However, the GC log claims the application was not paused during this window
    (Application time: 9.0448670 seconds).

    Any help in understanding the root cause of this issue is appreciated.

    Our target JVMs (we have tried both Java 7 and Java 8):

    java version "1.7.0_04" Java(TM) SE Runtime Environment (build 1.7.0_04-b20) Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)

    java version "1.8.0_11" Java(TM) SE Runtime Environment (build 1.8.0_11-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.11-b03, mixed mode)

    Our JVM options :

    -Xms20G -Xmx20G -Xss10M -XX:PermSize=128M -XX:MaxPermSize=128M -XX:MarkStackSize=16M -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo -XX:+TraceGCTaskThread -XX:+G1SummarizeConcMark -XX:+G1SummarizeRSetStats -XX:+G1TraceConcRefinement -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:InitiatingHeapOccupancyPercent=65 -XX:ParallelGCThreads=24 -verbose:gc -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/common/logs/ocean-partition-gc.log

    Thanks,
    Kannan

  • Long young generation pauses

    by mishra07amit .,

    Hello Monica,

    I am seeing an unexpectedly long young generation pause, as big as 138 seconds, whereas the previous pause took only a few milliseconds.
    Could you look into the logs and confirm what tuning can be done and the possible root cause?

    4065.258: [GC pause (young), 0.0394645 secs]
    [Parallel Time: 30.2 ms, GC Workers: 13]
    [GC Worker Start (ms): Min: 4065258.6, Avg: 4065258.7, Max: 4065258.9, Diff: 0.3]
    [Ext Root Scanning (ms): Min: 14.1, Avg: 16.2, Max: 19.7, Diff: 5.6, Sum: 210.5]
    [Update RS (ms): Min: 0.0, Avg: 1.6, Max: 3.2, Diff: 3.2, Sum: 20.2]
    [Processed Buffers: Min: 0, Avg: 23.8, Max: 44, Diff: 44, Sum: 309]
    [Scan RS (ms): Min: 3.1, Avg: 4.7, Max: 5.2, Diff: 2.2, Sum: 61.1]
    [Object Copy (ms): Min: 6.6, Avg: 7.2, Max: 7.8, Diff: 1.2, Sum: 93.5]
    [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
    [GC Worker Other (ms): Min: 0.0, Avg: 0.2, Max: 0.3, Diff: 0.2, Sum: 2.3]
    [GC Worker Total (ms): Min: 29.6, Avg: 29.8, Max: 30.1, Diff: 0.4, Sum: 387.7]
    [GC Worker End (ms): Min: 4065288.4, Avg: 4065288.6, Max: 4065288.7, Diff: 0.2]
    [Code Root Fixup: 1.7 ms]
    [Clear CT: 2.7 ms]
    [Other: 4.9 ms]
    [Choose CSet: 0.0 ms]
    [Ref Proc: 0.8 ms]
    [Ref Enq: 0.0 ms]
    [Free CSet: 2.4 ms]
    [Eden: 1416.0M(1416.0M)->0.0B(13.2G) Survivors: 16.0M->16.0M Heap: 12.9G(28.0G)->11.5G(28.0G)]
    [Times: user=0.42 sys=0.00, real=0.04 secs]
    5020.853: [GC pause (young), 148.0343389 secs]
    [Parallel Time: 142474.3 ms, GC Workers: 13]
    [GC Worker Start (ms): Min: 5022695.7, Avg: 5022705.6, Max: 5022714.4, Diff: 18.7]
    [Ext Root Scanning (ms): Min: 143.0, Avg: 2550.5, Max: 6442.5, Diff: 6299.4, Sum: 33157.1]
    [Update RS (ms): Min: 5704.6, Avg: 19017.1, Max: 45977.9, Diff: 40273.3, Sum: 247222.7]
    [Processed Buffers: Min: 0, Avg: 26.9, Max: 291, Diff: 291, Sum: 350]
    [Scan RS (ms): Min: 70538.2, Avg: 92539.7, Max: 114688.0, Diff: 44149.8, Sum: 1203016.2]
    [Object Copy (ms): Min: 894.7, Avg: 9265.9, Max: 20674.1, Diff: 19779.4, Sum: 120456.9]
    [Termination (ms): Min: 10169.3, Avg: 19073.0, Max: 26684.1, Diff: 16514.7, Sum: 247949.3]
    [GC Worker Other (ms): Min: 0.1, Avg: 4.5, Max: 18.2, Diff: 18.0, Sum: 58.4]
    [GC Worker Total (ms): Min: 142441.2, Avg: 142450.8, Max: 142461.1, Diff: 19.8, Sum: 1851860.7]
    [GC Worker End (ms): Min: 5165155.6, Avg: 5165156.4, Max: 5165162.0, Diff: 6.4]
    [Code Root Fixup: 2.2 ms]
    [Clear CT: 11.2 ms]
    [Other: 5546.7 ms]
    [Choose CSet: 0.0 ms]
    [Ref Proc: 47.4 ms]
    [Ref Enq: 0.9 ms]
    [Free CSet: 3642.1 ms]
    [Eden: 13.2G(13.2G)->0.0B(1416.0M) Survivors: 16.0M->16.0M Heap: 24.7G(28.0G)->11.5G(28.0G)]
    [Times: user=184.95 sys=34.49, real=148.07 secs]

    Thanks,
    Amit
