
Q&A on New Relic Software Analytics Improvement

New Relic has released a set of new features for its New Relic Software Analytics Platform. The company aims to reduce the complexity of modern software architectures, in which microservices, PaaS, in-memory databases, and containerization are the key trends.

Service Maps creates a visual, real-time map that gives development and operations teams personalized views of their infrastructure. Docker monitoring allows collecting container performance metrics in an application context. A Database Dashboard focused on NoSQL databases and a streamlined alerting system are the other two features New Relic has released recently.

InfoQ talked to Al Sargent, senior director of product marketing at New Relic, to learn more about the new features.

InfoQ: Of Service Maps, Docker monitoring, the Database Dashboard, and streamlined alerting, which do you consider most relevant to teams already running modern architectures in production?

Modern architectures are evolving across a number of dimensions -- and anyone that focuses on just one risks falling behind in today's software-driven business environment.

Those dimensions are:

1) Monolithic apps to microservices
2) Physical and virtual machines to containers
3) Disk-bound databases to in-memory databases
4) Application servers to PaaS

All of these improve developer productivity, application scalability, and resilience, but they also bring increased complexity. We have a customer with literally 4,000 microservices, for instance, which can become a rat's nest if not properly managed. To help tame that complexity, our new Service Maps and Alerts were reimagined to ensure ease of use and flexibility during this shift, and also to address future complexities in software architectures.

InfoQ: Service Maps seems to be an entry to the monitoring tool. How did you decide the importance of each component when designing what you call a “specific view of the world” for microservices?

We decided the importance and priority of components for Service Maps by working closely with customers to understand how they used visual tools in their day-to-day operations. What we found is that particularly in organizations that are adopting microservices, the teams who were developing and supporting these services wanted a focused view primarily of the services they were supporting, but also visibility to both upstream and downstream dependent services. 

What this led to was a strong priority focus on customization, so that different teams could create a visual representation of their world in a way that made sense to them. Using these maps as artifacts to have discussions with other teams has been well received.

InfoQ: Could you go deeper and detail the main features of Service Maps?

Service Maps help service developers, ops, site engineering teams, and DevOps better understand their complex software architectures by creating a visual, real-time map that represents their specific view of the world. Service Maps not only show the architecture as automatically discovered by New Relic APM; any changes to the architecture are also reflected in the map. Real-time health status for each component on the map is visible for troubleshooting purposes, including the health of incoming and outgoing connections for each specific app.
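The core idea behind a service map -- a live dependency graph with per-node health and a team-scoped view of direct upstream and downstream dependencies -- can be sketched roughly as follows. This is a minimal illustration in plain Python; the service names, health statuses, and data model are hypothetical, not New Relic's actual implementation.

```python
# Toy service map: a directed dependency graph with per-node health.
from collections import defaultdict

class ServiceMap:
    def __init__(self):
        self.edges = defaultdict(set)    # service -> downstream dependencies
        self.reverse = defaultdict(set)  # service -> upstream callers
        self.health = {}                 # service -> "ok" | "warning" | "critical"

    def add_call(self, caller, callee):
        self.edges[caller].add(callee)
        self.reverse[callee].add(caller)

    def set_health(self, service, status):
        self.health[service] = status

    def view(self, service):
        """A team's 'specific view of the world': the service itself
        plus its direct upstream and downstream dependencies."""
        return {
            "service": service,
            "health": self.health.get(service, "unknown"),
            "upstream": sorted(self.reverse[service]),
            "downstream": sorted(self.edges[service]),
        }

smap = ServiceMap()
smap.add_call("web", "checkout")
smap.add_call("checkout", "payments")
smap.add_call("checkout", "inventory")
smap.set_health("payments", "warning")
print(smap.view("checkout"))
```

A real system would build the graph automatically from observed traffic between instrumented services, which is what makes changes to the architecture show up without manual configuration.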

InfoQ: One of the features you developed is directly related to Docker. What is, in your opinion, the level of maturity of this container technology, and how do you see it evolving in the near future?

Docker is obviously still in its early days, but based on our customers' interest and our own use of Docker in our technology stack, we saw a blind spot when it came to visibility of containers as part of an application's performance. While we can't say whether Docker will become widely adopted in production or how its evolution might play out, our view is that modern software will continue to move toward more modular, elastic cloud application architectures built on containers, PaaS, and microservices.

InfoQ: As everyone is talking about Docker nowadays, you probably set the implementation of Docker monitoring as a high-priority task. What do you think are the most valuable features you are offering your users?

In a phrase, it's application context. We put Docker performance metrics into the context of an application -- and thus a business need. Which container images are supporting which apps? How heavily loaded are they? These are valuable insights that correlate data from both our application agent and our Linux server monitoring agent. This matters because a datacenter can have dozens of containers today, and hundreds or thousands in the near future. If you just collect Docker metrics without application context, that data is essentially worthless.
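The "application context" correlation described above amounts to joining raw per-container metrics with knowledge of which application each container serves, so load can be reported per app rather than per anonymous container. A minimal sketch, with entirely made-up container IDs, app names, and sample values:

```python
# Hypothetical join of container metrics with a container -> app mapping.
container_app = {               # which app each container serves (assumed data)
    "c1": "checkout", "c2": "checkout", "c3": "search",
}
cpu_samples = [                 # (container_id, cpu_percent) from a host agent
    ("c1", 40.0), ("c2", 55.0), ("c3", 20.0),
]

app_load = {}
for cid, cpu in cpu_samples:
    app = container_app.get(cid, "unknown")   # metrics without this mapping
    app_load.setdefault(app, []).append(cpu)  # lack business context

for app, loads in sorted(app_load.items()):
    avg = sum(loads) / len(loads)
    print(f"{app}: {len(loads)} containers, avg CPU {avg:.1f}%")
```

In New Relic's case this mapping reportedly comes from correlating data between the application agent and the Linux server monitoring agent; the dictionary above simply stands in for that correlation.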

We have more details in this blog post on our new Docker support.

InfoQ: When talking about NoSQL databases, what are the most common metrics customers ask you for? What are the most valuable ones offered in the new Database Dashboard?

Whether you use SQL to access data (as with an RDBMS) or an API to access data (as with a NoSQL database), the same core concerns apply:

1) How much data is moving between my app and the databases it connects to? (Throughput)
2) Are my databases responding slowly? (Response time)
3) Is my app making excessive calls to its databases? (Call count)
4) When databases run slowly, why? (Slow query details)

On the Database Dashboard, the primary metrics are standardized across all instrumented RDBMS and NoSQL databases, as this makes it easy to compare performance characteristics and behaviors between the different database types. We feel that this unified view of database performance, from the perspective of the application using the databases, is a powerful way to show how applications use them, particularly how database operations impact the performance of the application and ultimately the end user. To augment this view, database-specific plugins, which can be found on our Plugin Central page, provide database-specific metrics from the database itself; this combination of perspectives is important.
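The four core metrics listed above can be illustrated with a toy computation over a hypothetical log of query records; the statements, durations, and thresholds here are invented for illustration only:

```python
# Toy computation of the four core database metrics from query records.
queries = [
    # (statement, duration_ms, bytes_transferred)
    ("GET user:1", 2.0, 512),
    ("GET user:2", 150.0, 512),    # a slow query
    ("SET cart:7", 3.0, 1024),
]
WINDOW_S = 60.0   # measurement window, seconds
SLOW_MS = 100.0   # hypothetical slow-query threshold

call_count = len(queries)                                    # 3) call count
throughput = sum(b for _, _, b in queries) / WINDOW_S        # 1) data moved per second
avg_response = sum(d for q, d, _ in queries) / call_count    # 2) response time
slow = [(q, d) for q, d, _ in queries if d > SLOW_MS]        # 4) slow query details

print(f"calls={call_count}, throughput={throughput:.1f} B/s, "
      f"avg={avg_response:.1f} ms, slow={slow}")
```

Because these four numbers are computable for any database the application talks to, SQL or not, they make a natural standardized layer for comparing database types, which is the design choice the answer above describes.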

InfoQ: In the unified alerts platform, how do you decide which alerts should be clustered together when they come from different devices? Can you briefly explain whether there are different configuration options?

Because we monitor the full stack and have a very flexible alert policy model, we can combine the two in a simple and elegant rollup mechanism that uses the configuration itself to define the scope of the clustering. This way the user controls how and what gets grouped together.

New Relic Alerts can generate incidents in one of three ways, detailed here, depending on the user's needs:

1) One incident per policy. A policy is a set of conditions for a set of resources (also called targets), such as apps or servers. This generates the fewest alerts.

2) One incident per condition. This is useful for policies containing conditions that focus on targets (resources) that perform the same job; for example, servers that all serve the same application(s).

3) One incident per target and condition. This generates the most alerts, and is useful if you need to be notified of every violation or if you have an external system where you want to send alerts.
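The three rollup modes above can be sketched as different grouping keys applied to the same stream of violations. The field names and sample data here are illustrative, not New Relic's actual API:

```python
# Grouping violation events into incidents under the three rollup modes.
violations = [
    {"policy": "web", "condition": "high-cpu", "target": "server-1"},
    {"policy": "web", "condition": "high-cpu", "target": "server-2"},
    {"policy": "web", "condition": "slow-response", "target": "app-1"},
]

def incidents(violations, mode):
    key = {
        "per_policy": lambda v: (v["policy"],),
        "per_condition": lambda v: (v["policy"], v["condition"]),
        "per_target_and_condition":
            lambda v: (v["policy"], v["condition"], v["target"]),
    }[mode]
    # Each distinct key becomes one incident.
    return {key(v) for v in violations}

for mode in ("per_policy", "per_condition", "per_target_and_condition"):
    print(mode, len(incidents(violations, mode)))
```

With this sample data the three modes yield one, two, and three incidents respectively, matching the progression from "fewest alerts" to "most alerts" described in the answer.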
