Top 8 Application Performance Management (APM) Tools

Dynatrace · Datadog · AppDynamics · Aternity · New Relic APM · Azure Monitor · ITRS Geneos · Apica Synthetic
  1. Dynatrace: Technical support has always been quick to respond. We use the Dynatrace AI to assess impact. Because it links to real users, it is generally accurate about when it raises an incident. We determine the severity by how many users it is affecting, then we use that as business justification to put a priority on the alert.
  2. Datadog: Its integration is most valuable because you can integrate it with various service providers such as AWS, .NET, etc. The application performance monitoring is pretty good.
  3. AppDynamics: It's good for a larger-scale deployment such as what my company is working on. The release management capabilities are great.
  4. Aternity: The infrastructure data, especially the CPU and memory data, is per second, which makes it outstanding compared to other solutions. Its licensing cost is very low for us.
  5. New Relic APM: The simplicity of the dashboard is very good. Working with the solution is very easy; it's user-friendly.
  6. Azure Monitor: Azure Monitor is very stable. For us it is really just a source for Dynatrace: it collects data and monitors the environment and the infrastructure, and it is fairly good at that.
  7. ITRS Geneos: The solution is used across the entire investment banking division, covering environments such as electronic trading, algo-trading, fixed income, FX, etc. It monitors that environment and enables a bank to significantly reduce downtime. Although hard to measure, since implementation we have probably seen some increased stability because of it, and we have definitely seen teams become a lot more aware of their environment. Consequently, we can be more proactive in challenging and improving previously undetected weaknesses.
  8. Apica Synthetic: There are several features that are really good. The first one is the flexibility and the advanced configuration that Apica offers when it comes to configuring synthetic checks. It provides the ability to customize how the check should be performed, and it is very flexible in the number of synthetic locations that it can use. It allows us to run scripts from different locations all over the world, and they have a really good number of these locations.

Find out what your peers are saying about Dynatrace, Datadog, AppDynamics and others in Application Performance Management (APM). Updated: July 2021.
523,535 professionals have used our research since 2012.
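The kind of scripted synthetic check described in the Apica review can be sketched generically in Python (a minimal illustration, not Apica's actual API; the URL, timeout, and latency threshold are hypothetical):

```python
import time
import urllib.request

# Minimal synthetic HTTP check: fetch a URL, time it, and classify the
# result. Tools like Apica Synthetic run checks like this on a schedule
# from many geographic locations; the threshold here is illustrative.

def evaluate(status_code, latency_ms, max_latency_ms=2000):
    """Classify one check result as OK or FAIL."""
    if status_code != 200:
        return "FAIL: unexpected status %d" % status_code
    if latency_ms > max_latency_ms:
        return "FAIL: too slow (%.0f ms)" % latency_ms
    return "OK"

def run_check(url, timeout=10):
    """Perform one synthetic request against `url` and time it end to end."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so the timing includes transfer
        latency_ms = (time.monotonic() - start) * 1000
        return evaluate(resp.status, latency_ms)

# Usage (requires network access): print(run_check("https://example.com/"))
```

A real product adds the scheduling, the multi-location fan-out, and the alerting on top of this basic probe-and-classify loop.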

Advice From The Community

Read answers to top Application Performance Management (APM) questions. 523,535 professionals have gotten help from our community of experts.
Hi peers, with so many APM tools available, it can be hard for businesses to choose the right one for their needs. With this in mind, what is your favorite APM tool that you would happily recommend to others? What makes it your tool of choice?
author avatarHani Khalil
Real User

I have tested a lot of APM tools, and most of them do the same job with different techniques and different interfaces.

One of the best tools I tested is called eG Enterprise; it provided the required info and data to our technical team. We also found great support from the eG technical team during the implementation. One of the main factors was cost, and they can challenge a lot of vendors on that.


author avatarSilvija Herdrich

Hi, I recommend Dynatrace. Companies can focus on their business instead of wasting time and money on different tools and complex analysis. In today's world, companies would need more and more specialized employees just to do what Dynatrace can deliver in minutes via artificial intelligence. The IT world is changing, but companies can't quickly change their monitoring tools and educate people. Customers, suppliers, partners and employees expect perfect IT. The risk is too high of doing something that does not deliver full observability fast enough. No matter what you do - run applications in your company, develop apps for employees and customers, build an e-commerce channel - IT is important for success, and this can be guaranteed only if you know about challenges in IT before others tell you.

author avatarAbbasi Poonawala (Yahoo!)
Real User

My favourite APM tool is New Relic. The monitoring dashboard shows exact method calls with line numbers, including external dependencies, for apps of any size and complexity.

author avatarPradeep Saxena
Real User

My favourite APM tool is Azure Monitor; from it I can check Application Insights. I can also check when an application crashed.

author avatarGustavoTorres

My favorite APM tool is Dynatrace; its OneAgent handling enables fast and agile deployment.

author avatarRavi Suvvari
Real User

Agree, well explained.

author avatarreviewer1352679 (IT Technical Architect at a insurance company with 5,001-10,000 employees)
Real User

Our organization is large and has a long history. We had a lot of on-premise, monolithic applications and tons of business logic included in places it shouldn't be. This caused a lot of pain when implementing new architectures. Several years back we implemented new architectures using micro-service apps, client-side browser processing, and ephemeral systems based on Kubernetes. During the transition, DevOps teams were given free rein to use whatever tool they wanted, including open source. A handful of tools were used more pervasively, including New Relic, Prometheus, CloudWatch, OMS, Elasticsearch, Splunk, and Zabbix. As can be imagined, this caused a lot of issues coordinating work and responding to incidents. Three years back we did an evaluation of all tools and pulled Dynatrace into the mix.

Dynatrace was easily the most powerful solution to provide APM and simplify the user experience into a "single pane of glass". We are also working to integrate several other data sources (Zabbix, OMS, CloudWatch & Prometheus) to extend the data set and increase the leverage of the AI engine.

Why Dynatrace?

- Most comprehensive end-to-end tracing solution from browser to mainframe
- Entity (aka CI) mapping to relate RUM to applications to hosts. This includes mapping of entities such as Azure, AWS, Kubernetes, and VMware
- An AI engine that uses the transaction trace and entity mapping to consolidate alerts to accelerate impact and root cause analysis

There are several other features such as simplified/automated deployment, API exposure and analytics tools.

author avatarPradeep Saxena
Real User

Azure Monitor gives application insights by ingesting metrics and log data across many variants: OS, application, CPU, memory, etc. We can visualise and analyse what's going on in the application.

Hi community members, I have some questions for you: What is ITOM? How does it differ from ITSM? Which products would you recommend to make up a fully defined ITOM suite?
author avatarTjeerd Saijoen

ITOM is a range of products integrated together; it contains infrastructure management, network management, application management, firewall management, and configuration management. You have a choice of products from different vendors (BMC, IBM, Riverbed, ManageEngine, etc.).

ITSM is a set of policies and practices for implementing, delivering and managing IT services for end users.

author avatarSyed Abu Owais Bin Nasar
Real User

One difference is that ITSM is focused on how services are delivered by IT teams, while ITOM focuses more on event management, performance monitoring, and the processes IT teams use to manage themselves and their internal activities.

I recommend BMC TrueSight Operations Management (TSOM), an ITOM tool. TrueSight Operations Management delivers end-to-end performance monitoring and event management. It uses AIOps to dynamically learn behavior, correlate, analyze, and prioritize event data so IT operations teams can predict, find, and fix issues faster.

For more details:

author avatarNick Giampietro

Rony, ITOM and ITSM are guidelines (best practices) with a punch list of all the things you need to address in managing your network and the applications which ride on them. 

Often the range of things on the list is relatively broad, and while some software suites offered by companies will attempt to cover ALL the items on the list, the saying "jack of all trades, master of none!" typically comes to mind here.

In my experience, you can ask this question by doing a Google search and come up with multiple responses each covering a small range of the best practices. 

My suggestion is to meet with your business units and make sure you know what apps are critical to their success and then meet with your IT team to ask them how they manage those applications and make sure they are monitoring the performance of those applications. Hopefully, both teams have some history with the company and can provide their experiences (both good and bad) to help you prioritize what is important and key IT infrastructure that needs to be monitored.  

Like most things in life, there is more than one way to skin the cat. 

author avatarreviewer1195575 (Managing Director at a tech services company with 1-10 employees)
Real User

There are two letters which define a core "difference" in these definitions, and one which defines a common theme.
O for Operations is the first pointer to the IT function of using IT infrastructure to keep the business satisfied. That involves day-to-day tasks but also longer-term planning. Ideally, Operations teams DON'T firefight except in rare circumstances, but have information at hand on the health of all objects that could impact the business directly or indirectly. Monitoring collects data; then correlation and analysis help extract useful information to deduce situations and take corrective action. The functions available in toolsets may automate parts of that; rare are the cases where they become 100% automatic.

S points to service delivery to users, hence ITSM is mostly about serving users. So for many, ITSM is in fact the help desk or ticket management. Of course, within ITSM there's a lot more to it: a lot of analytics of operations data, as well as the history of past incidents, and the fixes to them, that impacted service delivery. ITSM may also include commitments; so-called SLAs/SLOs are contracts that describe the quality of service expected and committed to.

M for Management means more than tools are needed for both. People are needed even if automation is highly present, as all automation will require design and modification. Change is constant.
Management means processes for standardisation of data, tasks and their execution, etc. It also means data collection, cleansing, handling, analysis, protection, access, and many other aspects without which risks are taken and delivery of service becomes more hazardous.

ITIL and other formalised standards of conduct in the IT world have proven to be vital ways of driving standardisation, and shouldn't be ignored.

With the emergence of modern application landscapes and DevOps, there's a tendency to "imagine" doing away with ITOM and ITSM.
Like everything, they need to evolve, and they have over the last couple of decades, but getting some of the basics correct goes a long way to ensuring IT serves the business as a partner.
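To put a number on the SLA/SLO idea mentioned above, an availability target translates directly into an "error budget" of allowed downtime (a small sketch; the 99.9% target and 30-day window are illustrative):

```python
# Error-budget arithmetic for an availability SLO (illustrative figures).
def error_budget_minutes(slo_target, period_days=30):
    """Minutes of downtime allowed in the period under a given SLO target."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_target)

# A 99.9% target over 30 days leaves 0.1% of 43,200 minutes: about 43 minutes.
budget = error_budget_minutes(0.999)
```

Commitments of this kind give both Operations and Service Management teams the same objective yardstick to track against.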

author avatarHani Khalil
Real User


ITOM is IT Operations Management which is the process of managing the provisioning, capacity, cost, performance, security, and availability of infrastructure and services including on-premises data centers, private cloud deployments, and public cloud resources.

ITSM refers to all the activities involved in designing, creating, delivering, supporting and managing the lifecycle of IT services.

I tried Micro Focus OBM (formerly HP OMi) and it's good. You also have Applications Manager from ManageEngine.

Ariel Lindenfeld
author avatarit_user342780 (Senior Software Engineer Team Lead at THE ICONIC)

Speed to get data into the platform is one of our most important metrics. We NEED to know what is going on right now, not 3-4 minutes ago.

author avatarreviewer1528404 (Engineer at a comms service provider with 1,001-5,000 employees)
Real User

1. Ability to correlate

2. Machine learning/AI-based thresholds

3. Ease of configuration (in bulk)
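As a rough illustration of point 2 above, a "learned" threshold can be derived from recent history instead of being fixed by hand (a deliberately simple rolling mean/standard-deviation sketch; commercial AIOps engines use far more sophisticated models):

```python
import statistics
from collections import deque

def make_detector(window=30, k=3.0):
    """Flag a sample as anomalous when it deviates more than k standard
    deviations from the rolling baseline of recent samples."""
    history = deque(maxlen=window)

    def check(value):
        anomalous = False
        if len(history) >= 5:  # wait for a minimal baseline first
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            anomalous = abs(value - mean) > k * max(stdev, 1e-9)
        history.append(value)
        return anomalous

    return check

check = make_detector()
readings = [50, 52, 49, 51, 50, 48, 52, 95]  # e.g. CPU %; 95 is a spike
flags = [check(v) for v in readings]         # only the spike is flagged
```

The advantage over a static threshold is that the same detector adapts per metric and per host, which is exactly what makes bulk configuration (point 3) tractable.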

author avatarit_user364554 (COO with 51-200 employees)

Full disclosure: I am the COO at Correlsense.
Two years ago I wrote a post about exactly that - "Why APM Projects Fail" - and I think it can guide you through the most important aspects of APM tools.

Take a look - feel free to leave a comment:
Elad Katav

author avatarreviewer1608147 (CTO at Kaholo)

There are so many monitoring systems in every company that provide alerts. This is what's called the "alert fatigue" phenomenon. Moreover, most alerts are handled manually, which means it takes a long time and costs a lot of money to resolve an event. I think companies should evaluate the remediation part of monitoring systems: what is there to instantly and automatically resolve the problems that come up, instead of just alerting?

author avatarRavi Suvvari
Real User

Tracing ability down to the record level, latency, the capability to make good predictions in advance, history storage, pricing, support, etc.

author avatarDavid Fourie
Real User

Full stack end-to-end monitoring including frontend and backend server profiling, real user monitoring, synthetic monitoring and root cause deep dive analysis. Ease of use and intuitive UX. 

author avatarit_user229734 (IT Technical Testing Consultant at adhoc International)

In order to evaluate/benchmark APM solutions, we can base the comparison on the five dimensions provided by Gartner:
1. End-user experience monitoring: the capture of data about how end-to-end application availability, latency, execution correctness and quality appeared to the end user
2. Runtime application architecture discovery, modeling and display: the discovery of the various software and hardware components involved in application execution, and the array of possible paths across which those components could communicate that, together, enable that involvement
3. User-defined transaction profiling: the tracing of events as they occur among the components or objects as they move across the paths discovered in the second dimension, generated in response to a user's attempt to cause the application to execute what the user regards as a logical unit of work
4. Component deep-dive monitoring in an application context: the fine-grained monitoring of resources consumed by and events occurring within the components discovered in the second dimension
5. Analytics: the marshalling of a variety of techniques (including behavior learning engines, complex-event processing (CEP) platforms, log analysis and multidimensional database analysis) to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM

On our side, we tried to benchmark some APM solutions internally based on the following evaluation groups:
- Monitoring capabilities
- Technologies and framework support
- Central PMDB (Performance Management DataBase)
- Service modeling and monitoring
- Performance analysis and diagnostics
- Alerts/event management
- Dashboard and visualization
- Setup and configuration
- User experience
and we got interesting results.

author avatarit_user178302 (Senior Engineer at a financial services firm with 10,001+ employees)
Real User

Most vendors have similar transaction monitoring capabilities, so I look at the end-user experience monitoring features to differentiate: not only RUM (mobile and web) but also active monitoring through synthetics.

See more Application Performance Management (APM) questions »

Application Performance Management (APM) Articles

Tjeerd Saijoen
CEO at Rufusforyou
May 06 2021

How are security and performance related to each other?

Today a lot of monitoring vendors are on the market; most of the time they focus on a particular area, for example APM (Application Performance Monitoring) or infrastructure monitoring. Is this enough to detect and fix all problems?

How are performance and security related?

Our landscape is changing rapidly. In the past, we had to deal with one system. Today we are dealing with many systems in different locations: for example, your own data center (on-premise). Next we have on-premise plus, for example, AWS; then on-premise plus AWS plus Azure, and it doesn't stop. Now hackers have more locations and a better chance to find a weak spot in the chain, and if performance slows down, where is the problem?

Because of this you need many different monitoring tools, and they still don't monitor your application or OS parameter settings. For example, I have a web server with a parameter that sets the number of concurrent users to 30. A monitoring tool will probably tell you more memory is required; you add more expensive memory and you get the same result, while the real solution is to adjust that parameter setting.
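As a concrete example of the kind of setting meant here, the concurrency ceiling of the Apache httpd event MPM looks like this (the values are illustrative; the right numbers depend on available RAM per worker, and other servers such as nginx or IIS have equivalent knobs):

```apache
# Illustrative Apache httpd event-MPM tuning. If the concurrency ceiling
# is too low, clients queue or fail no matter how much memory you add;
# raising these limits, not adding RAM, fixes that symptom.
<IfModule mpm_event_module>
    ServerLimit              16
    ThreadsPerChild          25
    MaxRequestWorkers       400   # maximum concurrent requests served
</IfModule>
```

A monitoring tool that only watches memory and CPU will never point at a limit like this, which is the author's point about parameter-aware monitoring.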

We have had several applications running for years while the total number of end users grows rapidly, and most people don't adjust the parameters because they are not aware that they exist or what the right values are.

How are performance and security related to each other? If systems are compromised, you will also see unusual behavior in performance: for example, a performance drop while more CPU is allocated. For this you need monitors capable of looking holistically at the complete environment, checking parameter settings, and alerting on unusual behavior. Also look for one single dashboard to check your environment, including the cloud. Don't look for a sexy dashboard; a functional dashboard is more important. The key question is whether the tool is capable of giving advice on what to do: does it only tell you there is a problem in the database, or does it tell you that the buffer setting on DB xxx needs to be adjusted from 2400 MB to 4800 MB?

If we have the right settings, performance will increase and better performance is more transactions. More transactions mean more selling and more business.

Caleb Miller: Good article, but the spelling and grammatical errors are pretty blatant.
Tjeerd Saijoen
CEO at Rufusforyou
Mar 29 2021

End-users can connect through different options: via the cloud (AWS, Microsoft Azure or other cloud providers), via a SaaS solution, or from their own datacenter. The next option is multi-cloud and hybrid - this makes it difficult to find the reasons for a performance problem.

Now users have to deal with many options for their network. You have to take into account problems such as latency and congestion, and now a new layer has been added because of Covid-19. Normally you work in an office space as an end-user and your network team takes care of all the problems. Now everybody is working from home, and many IoT devices are connected to our home networks - are they protected? It is easy for a hacker to use these kinds of devices to enter your office network.

How can we prevent all of this? With a security tool like QRadar or Riverbed. The most important thing to know is that you don't need only an APM solution. Many times I hear people say, "We have a great APM solution." Well, this is great for application response times; however, an enterprise environment has many more components, like the network, load balancers, switches, and so on. Also, if you're running Power machines you have to deal with microcode and sometimes with HACMP - an APM solution will not monitor this.

Bottom line: you need a holistic solution.  
