Call Count Profiling
Re: Call Count Profiling
The article, published in March 2008, is no longer available, but here are some excerpts.
Micro-Tuning the Code: This approach involves scanning the code and applying standard Java micro-performance optimizations (reducing object allocation, field access, method calls, and recursion), eliminating duplicated computation across call paths (temporal caching), and using heuristics to shorten call paths. This approach can waste a lot of development time if you do not understand how the software executes and what its data characteristics are, so you must already have a good idea of the hotspots within the software. It should ideally be combined with one or more of the other approaches described here.
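One of the micro-tunings mentioned above, temporal caching, can be sketched as a simple memoization of a pure computation so that duplicate work across call paths is eliminated. This is a minimal illustrative sketch, not the article's original code; the class and method names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of temporal caching: memoize a pure computation so
// that repeated calls across different call paths reuse the result.
public class TemporalCache {
    private final Map<Integer, Long> cache = new HashMap<>();

    // An "expensive" pure function (illustrative): sum of squares up to n.
    private long compute(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += (long) i * i;
        return sum;
    }

    // Cached lookup: the computation runs at most once per distinct argument.
    public long sumOfSquares(int n) {
        return cache.computeIfAbsent(n, this::compute);
    }
}
```

The trade-off, as the excerpt warns, is that a cache only pays off if you already know the call path is hot; otherwise it just adds allocation and lookup overhead.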
Micro-Benchmarking the Code: This second approach involves creating a benchmark and iteratively changing a small part of the code, then measuring the change across test runs. A problem with this approach is that you can easily be led astray by your desire to beat the clock and tune specifically for the benchmark. So you must be careful to make trade-offs within the software that improve its efficiency under normal operating conditions while not creating extreme worst cases for deviations in execution.
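A minimal harness for this kind of iterate-and-measure loop might look like the following. This is an assumed sketch (the article's actual tooling is not shown in the excerpt): it warms up so the JIT compiles the code under test, then reports an average time per call, and uses the accumulated result to keep the JIT from eliminating the work as dead code.

```java
// Minimal micro-benchmark harness sketch (illustrative, not the author's
// original tooling): warm up first, then time a fixed number of iterations.
public class MicroBench {
    // The operation under test (illustrative placeholder workload).
    static int workload() {
        int acc = 0;
        for (int i = 0; i < 1000; i++) acc += Integer.bitCount(i);
        return acc;
    }

    // Returns average nanoseconds per call over `runs` timed iterations.
    static double measure(int warmup, int runs) {
        int sink = 0;
        for (int i = 0; i < warmup; i++) sink += workload(); // JIT warm-up
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) sink += workload();
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(sink); // defeat dead-code elimination
        return (double) elapsed / runs;
    }
}
```

For serious work a dedicated harness such as JMH handles warm-up, forking, and dead-code elimination far more rigorously; the point of the sketch is only the measure-change-remeasure loop the excerpt describes.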
Counting the Code: With the third approach I use what I learnt early on in my algorithm studies: simply counting and weighting the instructions (not necessarily bytecode) executed during processing, and determining how this count changes depending on the branches taken, which can depend on object state. This is a much harder approach, especially as I previously did most of this by visualizing the execution in my mind and maintaining accumulators based on costs assigned to language constructs and operations. After two weeks of doing this intensely you’ll begin to think you are Neo, so I created a solution....
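The mental accounting described above can be made concrete with an accumulator of weighted operation costs. This is a hypothetical sketch with invented weights (the excerpt does not give the author's actual cost model): each construct charges the accumulator, so the total reveals how the instruction count diverges depending on which branch the object state drives the code down.

```java
// Hypothetical sketch of the counting approach: an accumulator of weighted
// operation costs. The weights below are illustrative assumptions, not the
// author's actual cost model.
public class CostCounter {
    long cost = 0;

    static final int CMP = 1;    // comparison
    static final int ARITH = 1;  // arithmetic operation
    static final int ALLOC = 10; // allocation, weighted much heavier

    // Example routine instrumented with cost accounting: the branch taken
    // (driven by input/object state) changes the accumulated count.
    int step(int x) {
        cost += CMP;
        if (x % 2 == 0) {
            cost += ARITH;
            return x / 2;        // cheap path
        } else {
            cost += ARITH + ALLOC;
            return 3 * x + 1;    // heavy path (stands in for an allocating one)
        }
    }
}
```

Running the same routine over different inputs and comparing the accumulated cost is the branch-sensitive counting the excerpt describes, just externalized from the author's head into code.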