
Interview with William Louth on JXInsight 5.0

JInspired has released version 5 of JXInsight, its performance monitoring tool. InfoQ sat down with William Louth, JXInsight Product Architect and CTO of JInspired, to talk about the release, performance monitoring, and optimization.

JXInsight is an enterprise Java performance monitoring, problem diagnostics, transaction analysis and application management solution. Some of the new features of version 5 include:

  • Automated Performance Analysis and Problem Detection
  • Runtime Diagnostics for Improved Problem Resolution
  • Extensible Resource Metering Framework for Improved Monitoring
  • Powerful Execution Behavior Pattern Identification
  • Enhanced Performance Metric Collection and Monitoring
  • Enhanced Application Runtime Object State Inspection
  • SOA Performance Test Management Tool Integration
  • Automated Change Detection of Deployed Artifacts
  • New and Enhanced Terminal Commands

We asked Louth to describe how JXInsight differs from other tools on the market.

There are a number of aspects to the product that make JXInsight different and unique. The first, and the most important in terms of production deployment, is the large volume of contextual runtime execution behavior and resource consumption data we collect while maintaining extremely low overhead. The second difference, which has existed since our initial offering, is the distributed resource transactional analysis we provide - this has yet to be implemented in any other product today. In terms of deployment architecture we are unique in that our agents have a high degree of autonomy, with no central server, and support distributed tracing across different devices including handhelds. From a performance analysis and problem resolution perspective it is clear that the information visualizations within our management console have raised the bar in terms of graphical excellence, with an extremely high data-to-ink (pixel) ratio. The last item has resulted in a product manager at Quest referring to JXInsight as the “product of love”.

One of the enhancements in JXInsight 5.0 is the extension of its open API with the Diagnostics and Probes frameworks. Louth described the open API and how it benefits users:

Extensibility has been a cornerstone of our product design from the beginning. Nearly all of the 50 trace, metric, and insight extensions we deliver today, which integrate into many diverse application frameworks and component technologies, use the same open API that we provide to customers. To date our Trace API is probably the most heavily utilized of the APIs, both within the extensions and within customer applications. The main reason for this is that it allows developers to augment the Java call stack with a contextual trace stack that can include an HTTP request URL, a JMS property, the JNDI name for a component, the code source for a class, the trace stack from a Java client application, a SQL DML statement, etc. Unlike newcomers to this market we have not limited how the injection of instrumentation is performed. Developers can look at introducing trace calls via existing framework event listener mechanisms, AOP, or by manually coding the calls within their code.
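
To make the idea of a contextual trace stack concrete, here is a minimal, self-contained sketch of manually coded trace calls around component boundaries. The ContextualTrace class and its begin/end methods are hypothetical stand-ins invented for illustration; they are not JXInsight's actual Trace API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for a contextual trace API such as JXInsight's Trace API.
// The real product API differs; this only illustrates the idea of augmenting the
// Java call stack with named, contextual frames.
public final class ContextualTrace {
    private static final ThreadLocal<Deque<String>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    private ContextualTrace() {}

    // Push a contextual frame, e.g. an HTTP request URL or a SQL statement.
    public static void begin(String frame) {
        STACK.get().push(frame);
    }

    // Pop the current frame; call from a finally block so stacks stay balanced.
    public static void end() {
        STACK.get().pop();
    }

    // Snapshot of the contextual stack, innermost frame first, outermost last.
    public static Deque<String> current() {
        return new ArrayDeque<>(STACK.get());
    }

    public static void main(String[] args) {
        begin("http:/orders/submit");            // e.g. from a servlet filter
        try {
            begin("sql:INSERT INTO orders ..."); // e.g. from a JDBC interceptor
            try {
                System.out.println("trace stack: " + current());
            } finally {
                end();
            }
        } finally {
            end();
        }
    }
}
```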

With JXInsight 5.0 we have added two important frameworks to the open API - Diagnostics and Probes. With the Diagnostics framework a developer can make important object state accessible from within JXInsight’s application management console, which can then be stored offline in a snapshot and shared across various teams and team members. The feature is very similar to a debugger but different in that frames are not tied to the execution of a method code block - diagnostics frames can represent high-level concepts such as a business process. Also the naming of variables and frames is much more expressive, supporting grouping and allowing naming to be dynamic and contextual. Did I say contextual again? The other API we have introduced, JXInsight Probes, started out as a solution to our own performance management needs. We found that none of the commercial low-level code profilers provided sufficient and accurate information to help us tune our own Java code base, so we created an extremely lightweight, fast and extensible resource metering runtime that allowed us to aggregate resource consumption into logical systems within our code base.
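
As a rough illustration of what a lightweight metering runtime of this kind involves, the sketch below aggregates elapsed time under logical probe names. The Probes class here is a hypothetical stand-in, not the actual JXInsight Probes API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of a lightweight metering runtime in the spirit of
// JXInsight Probes: begin/end calls around code blocks, with elapsed time
// aggregated under a logical probe name representing a system in the code base.
public final class Probes {
    private static final Map<String, LongAdder> TOTAL_NANOS = new ConcurrentHashMap<>();

    public static long begin() {
        return System.nanoTime();
    }

    public static void end(String probe, long startNanos) {
        TOTAL_NANOS.computeIfAbsent(probe, k -> new LongAdder())
                   .add(System.nanoTime() - startNanos);
    }

    public static void dump() {
        TOTAL_NANOS.forEach((probe, nanos) ->
                System.out.printf("%-24s %,d ns%n", probe, nanos.sum()));
    }

    public static void main(String[] args) {
        long t = begin();
        try {
            Thread.sleep(25); // stand-in for the code being metered
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            end("orders.persistence", t);
        }
        dump();
    }
}
```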

Louth then described how JXInsight helps developers track down performance problems.

Our data collection is impressive, but more so because we collect all of this at important and costly component and system boundaries within an application, such as database and message interactions. We simplify the problem by visually highlighting the technology and component layers within a server application, how they interact with each other, and what resources are consumed during such interactions. We have a vast array of different visualizations in the product, each designed to highlight a particular aspect of the underlying performance model we have created.

The management console also has three different analysis modes that help focus the developer by limiting the information depending on the use case and the application stage. The Metrics mode provides high-level monitoring of key performance indicators over a sliding one-hour window. These metrics typically come from our Agent, the runtime, and registered MBeans within the process. The Profile mode provides aggregate performance statistics for trace and transactional execution paths within the application. Within the Timeline analysis mode a performance engineer can detect whether performance slowdowns occur under a certain level of heavy concurrency for particular transaction patterns or SQL statements. It is also possible to correlate high maximums for traced requests, transactions and SQL statement executions with other event processing within a JVM such as garbage collection, waiting and blocking. Each of these analysis modes can visualize at a process or user-defined cluster level.
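
As a point of reference for where such metrics can come from, the sketch below polls a few of the standard platform MBeans using only the java.lang.management API; how JXInsight's Agent actually samples and windows these values is not shown here.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Samples a few standard platform MBeans of the kind a monitoring agent
// might poll for a metrics window: heap usage, live thread count, and
// per-collector garbage collection counts and times.
public class MBeanSampler {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("heap used (bytes): "
                + memory.getHeapMemoryUsage().getUsed());
        System.out.println("live threads: " + threads.getThreadCount());

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %-20s count=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```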

The conversation then moved to how JXInsight helps isolate the biggest problems in an application.

With JXInsight 5.0 we have added performance analysis inspections. These are similar to the code inspections found in IntelliJ IDEA but are of course based on parameterized runtime resource consumption and execution patterns that we have associated with performance and capacity management issues. At present we have 150 inspections, but this number will grow with each minor release delivered this year. It is important to note that performance analysis is run against snapshots, so updates to the built-in problem-cause-symptom pattern library can be applied to previous application snapshots.
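
A rough sketch of the general idea of a parameterized inspection applied to snapshot data follows. The TraceStat record, the threshold parameter, and the rule itself are invented for illustration and do not reflect JXInsight's actual pattern library.

```java
import java.util.List;

// Hypothetical sketch of a parameterized performance inspection applied to
// snapshot data: flag any traced operation whose database time exceeds a
// configurable share of its total time.
public class DbTimeInspection {
    record TraceStat(String operation, long totalMillis, long dbMillis) {}

    private final double maxDbShare; // parameter of the inspection

    DbTimeInspection(double maxDbShare) { this.maxDbShare = maxDbShare; }

    List<String> run(List<TraceStat> snapshot) {
        return snapshot.stream()
                .filter(s -> s.totalMillis() > 0
                        && (double) s.dbMillis() / s.totalMillis() > maxDbShare)
                .map(s -> "database-bound operation: " + s.operation())
                .toList();
    }

    public static void main(String[] args) {
        List<TraceStat> snapshot = List.of(
                new TraceStat("placeOrder", 120, 95),
                new TraceStat("listOrders", 40, 10));
        new DbTimeInspection(0.5).run(snapshot).forEach(System.out::println);
    }
}
```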

Next we asked Louth how JXInsight can help with grid applications.

Performance management for grid-based applications is likely to be a big growth area for us over the coming year. We have already created extensions for Tangosol’s Coherence data grid product, along with its commonj.work.WorkManager implementation, that allow us to track the parallel execution of work across a cluster of nodes, recording clock, CPU, wait and GC times within each node for each work item, all tracked back to the point of origin. With data grid products like Coherence, scaling becomes less about the actual runtime and more about the deployment and management of the grid, resulting in a need for tools that provide resource metering and effective trace-back billing.
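
To illustrate per-work-item timing without a commonj container, the sketch below substitutes the standard ExecutorService for a commonj.work.WorkManager and records wall-clock and CPU time for each parallel unit of work, tagged with the identifier of its originating transaction. It is a generic sketch of the technique, not JXInsight's Coherence extension.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Each task records its own wall-clock and CPU time (assumes thread CPU
// timing is supported and enabled on this JVM), tagged with the identifier
// of the originating request so work can be tracked back to its origin.
public class WorkTiming {
    static Callable<Void> timedWork(String originId, Runnable body) {
        return () -> {
            ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
            long wallStart = System.nanoTime();
            long cpuStart = tmx.getCurrentThreadCpuTime();
            try {
                body.run();
            } finally {
                System.out.printf("origin=%s wall=%dus cpu=%dus%n", originId,
                        (System.nanoTime() - wallStart) / 1_000,
                        (tmx.getCurrentThreadCpuTime() - cpuStart) / 1_000);
            }
            return null;
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // One business transaction fanned out into parallel units of work.
        pool.invokeAll(List.of(
                timedWork("txn-42", () -> {}),
                timedWork("txn-42", () -> {})));
        pool.shutdown();
    }
}
```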

By the way, bringing this together with our powerful resource transaction analysis makes it possible to understand the transactional semantics of a business transaction chopped and sliced into multiple units of work and executed in parallel across multiple processes.

Finally, Louth talked about how to approach performance problems in general.

Always consider performance during the early design stages of an application’s architecture and when selecting the style of communication between components and resources. The biggest performance gains are in general obtained at the application layer, so it is important to capture as much knowledge of the application as possible as it moves from development through to production. Monitoring changes in execution behavior and resource consumption is a much more rewarding and cost-effective exercise, with benefits in other areas including quality assurance, capacity planning and problem management. To gain this knowledge I recommend first testing each major use case in an application, recording the interactions between clients and containers, between components, and between components and external resources. Metrics collected at this stage, such as remote procedure call counts, are extremely useful irrespective of the underlying hardware and will help immensely when one starts to model and monitor the application under high workload volumes.
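
One simple, generic way to start capturing such call counts is a JDK dynamic proxy around a component interface, as sketched below; the OrderService interface is hypothetical, and this is a general technique rather than a JXInsight feature.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Wraps a component interface in a JDK dynamic proxy and counts invocations
// per method, giving the kind of interaction counts recommended above.
public class CallCounter {
    interface OrderService {
        void placeOrder(String id);
    }

    static final Map<String, LongAdder> COUNTS = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    static <T> T counting(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            COUNTS.computeIfAbsent(method.getName(), k -> new LongAdder()).increment();
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, handler);
    }

    public static void main(String[] args) {
        OrderService service = counting(OrderService.class, id -> {});
        service.placeOrder("A-1");
        service.placeOrder("A-2");
        COUNTS.forEach((m, n) -> System.out.println(m + " called " + n.sum() + " times"));
    }
}
```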
