Top 8 IT Infrastructure Monitoring Tools

Zabbix, Datadog, LogicMonitor, SolarWinds NPM, PRTG Network Monitor, Auvik, SevOne Network Data Platform, Centreon
  1. Zabbix
    The solution is stable. The pricing of the product is reasonable.
  2. Datadog
    Because of our client focus, it is easy for us to sell. This is because it is easy to use and easy to set up. I have found error reporting and log centralization the most valuable features. Overall, Datadog provides a full package solution.
  3. LogicMonitor
    LogicMonitor saves time in terms of its ability to proxy a connection through a device. For example, if you are troubleshooting a device, which you may want to connect to, you can proxy this connection through the platform. As a support resource, I don't need to use multiple platforms to connect to a device to further investigate the issue. It is all consolidated. From that perspective, it saves time because a resource now only needs to use one platform.
  4. SolarWinds NPM
    The solution is easy to install and it takes a couple of hours. It's a very good tool and a very stable tool.
  5. PRTG Network Monitor
    Ability to create a dashboard inside software monitoring. It is user-friendly. It is easy to set up health checks within PRTG and use scans. It is like a self-service tool, and anyone can configure their own settings within PRTG.
  6. Auvik
    The alerting feature has been a very key piece for us, especially in the data center because we manage it ourselves... Within the data center, we have an RDS farm that all the users from the facility connect to. Whenever something may be slow, we can look at the alerting and it helps us troubleshoot whether the issue is at the facility level or at an infrastructure level.
  7. SevOne Network Data Platform
    The comprehensiveness of this solution's collection of network performance and flow data is one of the basics in the field for what it does. It meets all of our needs. So for all those areas, for the most straightforward collection capabilities, right up to NetFlow and even telemetry, it meets all those demands. Not only just basic or fundamental SNMP collection capability, but the product also supports what we need for the future with telemetry streaming. So it's very comprehensive.
  8. Centreon
    The customizable reports and dashboards are really flexible. We started this partnership with Centreon, when we were looking for a solution, because of the flexibility of the reporting. That's what we found to be most attractive in the solution. You can display the data as you want.

Advice From The Community

Read answers to top IT Infrastructure Monitoring questions. 535,544 professionals have gotten help from our community of experts.
Nurit Sherman
Hi peers, Is it required for your company to conduct a security review before purchasing an infrastructure monitoring solution?  What are the common materials you use in the review?  Do you have any tips or advice for the community and any pitfalls to watch out for?
David Collier
Reseller

As with any software that is deployed within any organisation, security must be built in from the ground up. When it comes to infrastructure monitoring software, the problem has an additional dimension - that of the underlying protocols used in the core work of gathering data. These protocols are typically outside the control of the software developers themselves. So I would certainly include "How the software vendor responds to 3rd party vulnerabilities". And there are potentially many areas where such vulnerabilities can exist. For instance, SNMP is pretty standard for collecting metrics and intercepting SNMP "traps". But what if there is an issue with SNMP itself? (I won't go into SNMPv1, v2 and v3 here.) How does your vendor respond and mitigate against issues with underlying protocols? I've mentioned SNMP, but what about SSH (or the numerous implementations of SSH), WMI, sFlow, etc.? This is my first layer: security of the PROTOCOL.
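To make the protocol layer concrete, here is a minimal sketch of polling a device over SNMPv3 with authentication and privacy rather than the older community-string versions. It assumes the Python pysnmp library; the user, passphrases and device address are hypothetical placeholders:

```python
# Minimal SNMPv3 GET with authentication (SHA) and privacy (AES-128) using pysnmp.
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    UsmUserData(
        'monitor-user', 'auth-passphrase', 'priv-passphrase',  # hypothetical credentials
        authProtocol=usmHMACSHAAuthProtocol,
        privProtocol=usmAesCfb128Protocol,
    ),
    UdpTransportTarget(('192.0.2.10', 161)),                   # hypothetical device address
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

if error_indication:
    print(f"SNMP error: {error_indication}")   # e.g. wrong credentials, unreachable device
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```

The point is simply that the collection protocol itself can be run in its hardened form; how a given product exposes that choice varies by vendor.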


The next thing is the communication of the monitoring data. Each of the above protocols needs a TCP/IP port to be open. That means putting holes in your firewall. And for me that's the only downside of "agentless" monitoring tools. Don't get me wrong, agentless is great for ease of deployment and ease of management in a closed network. For anything that goes over a wider network or the Internet, it's agent-based management for me. Why? Typically because it should be more secure. The agent should communicate data to the management server over a single port in an encrypted form. The agent should also be configured to only respond to data requests from a VERY limited number of servers. So that's the second layer, the security of the AGENT.
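As a rough illustration of that agent model (not any particular vendor's agent), the sketch below exposes a single TLS-encrypted port and answers only an allowlisted set of management servers; the addresses, port and certificate paths are invented for the example:

```python
# Toy monitoring agent: one encrypted listening port, responses only to known servers.
import socket
import ssl

ALLOWED_SERVERS = {"10.0.0.10", "10.0.0.11"}   # hypothetical management servers
AGENT_PORT = 8443                               # the single port opened in the firewall

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="agent.crt", keyfile="agent.key")  # hypothetical cert/key

with socket.create_server(("0.0.0.0", AGENT_PORT)) as listener:
    while True:
        raw_conn, (peer_ip, _peer_port) = listener.accept()
        if peer_ip not in ALLOWED_SERVERS:
            raw_conn.close()                     # refuse anything that is not a known server
            continue
        with context.wrap_socket(raw_conn, server_side=True) as conn:
            conn.sendall(b'{"cpu_load": 0.42}')  # metrics travel only over the encrypted channel
```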


Moving up our IT monitoring ladder, we have the security of the MANAGEMENT ARCHITECTURE. Is all data encrypted in transit? Is it encrypted at rest (i.e. in the database)? Is access to the database limited to only the management software? Is all other access simply read-only (e.g. for 3rd party reporting tools)? There's also the security of the entire network within which the management software is operating - but that comes under the remit of wider network security. Most IT infrastructure monitoring software these days is web-based. Is the web server secure? i.e. is Apache, Nginx, IIS etc. fully patched and as secure as possible? The same goes for databases.


We then also need to consider the ability of users to "do bad things". As a previous respondent says, deny everything and allow by exception. This is typically achieved by using some form of RBAC (Role Based Access Control) mechanism in the software. Each user is given only the level of access to the monitoring software that is needed to deliver the service the business needs. For instance, a firewall guy (or gal) does not need the ability to run scripts on an Oracle database. Therefore I'd include in my review an assessment of the granularity of RBAC for users of the monitoring software. Let's call this the security of the APPLICATION.
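A deny-by-default RBAC check can be boiled down to something like the following sketch; the role names and actions are purely illustrative:

```python
# Minimal deny-by-default role check for a monitoring application (illustrative names).
ROLE_PERMISSIONS = {
    "firewall-admin": {"view-network-dashboards", "ack-network-alerts"},
    "dba":            {"view-database-dashboards", "run-database-scripts"},
    "noc-operator":   {"view-all-dashboards", "ack-all-alerts"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A firewall engineer cannot run scripts against an Oracle database:
assert not is_allowed("firewall-admin", "run-database-scripts")
assert is_allowed("dba", "run-database-scripts")
```

The review question is then how finely a given product lets you slice those roles and actions, not whether a role table exists at all.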


Now that's a long response, but never, ever lose sight of the simple truth - the human brain is more complex, intricate and flexible than any IT system. Or in other words, don't underestimate the ability of man to screw it all up.


DAVE COLLIER
Nobius IT.

Ravi Khanchandani
Reseller

It may not be mandatory to conduct a security review before procuring infrastructure monitoring tools, but understanding the security concerns and planning a security monitoring platform goes a long way. You can adopt a sectional approach and address each security section. The following sections may help address the security concerns:


1. Monitoring Platform exposure to the Internet - Ensure that the monitoring platform has restricted access to OEM sites and sessions are initiated by the monitoring platform only. If the platform is accessible from the internet, ensure access via secure mechanisms like VPN (IPSec/SSL).


2. Email security - accounts used by the platform to send out email alerts


3. Web server security - the console of most monitoring solutions will be a web-based interface - pay attention to enabling SSL with certificates and ensuring other aspects of web security. Ensure acceptable security practices for web server security, including the use of self-signed versus trusted certificates.


4. User access - Create a list of administrators of the platform and monitoring users. Also, identify if you need to have an integration with existing LDAP or Active Directory setups. Role-based access should be charted out maybe based on the LDAP/AD groups. Pay attention to the local users created on the platform. 


5. Credentials used for end devices - Many monitoring tools will require device credentials to be stored on the monitoring platform (for example, for network device configuration backup). Make sure to have a security strategy for these credentials (a small sketch follows this list).


6. Anti-X solutions deployed on the monitoring platform - Define clear guidelines for anti-virus, anti-malware solutions, etc for deployment on the monitoring platform itself. You may be asked by the monitoring solution OEM to exclude certain files, folders, services, executables from scans.


7. Monitoring protocol security - SNMP and WMI are the de facto monitoring protocols. Use secure methods like SNMPv3 with strong encryption.


8. Database security - the monitoring application would eventually store captured information into a database.


9. Integration with other solutions like ticketing - Secure the integration whether using API integration, connectors available out-of-the-box, credentials, etc.


Each section may have elaborate security measures that may need to be adopted in consultation with domain specialists. 
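On point 5 above, here is a minimal sketch of keeping stored device credentials encrypted at rest instead of in plain text, using the Python cryptography library's Fernet recipe; key handling is deliberately simplified and the credential value is hypothetical, so treat it as an illustration rather than a production pattern:

```python
# Encrypt a device credential before it is written to the monitoring platform's store.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, load this from a protected secret store or vault
fernet = Fernet(key)

# Encrypt the device password before persisting it.
token = fernet.encrypt(b"router-enable-password")   # hypothetical credential

# Decrypt only at the moment the poller needs to log in to the device.
plaintext = fernet.decrypt(token)
assert plaintext == b"router-enable-password"
```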

Tchidat Linda
Real User

Although our company didn't require a security review before choosing an infrastructure monitoring solution, we looked particularly at the authentication method: user accounts, groups and permissions.
One tip we have used was to look for a monitoring solution that can interface with an existing enterprise authentication server (LDAP server), so that users can log in to the purchased solution directly with their enterprise accounts.
That way there is no further need to invest in creating a new secure user database, and we can simply focus on assigning user permissions according to employee category.
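As a minimal sketch of that approach (independent of any particular monitoring product), the Python ldap3 library can validate a user's enterprise credentials with a simple bind; the server address and directory layout below are hypothetical:

```python
# Authenticate monitoring users against an existing enterprise LDAP directory.
from ldap3 import Server, Connection, ALL

def ldap_login(username: str, password: str) -> bool:
    """Try a simple bind with the user's own DN; success means the credentials are valid."""
    server = Server("ldaps://ldap.example.com", get_info=ALL)      # hypothetical server
    user_dn = f"uid={username},ou=people,dc=example,dc=com"        # hypothetical DIT layout
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()
    conn.unbind()
    return ok

if ldap_login("jdoe", "secret"):
    print("authenticated - apply group-based permissions next")
```

Group membership returned by the directory can then drive the permission categories mentioned above.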

MenojRoekalea
User

I would start by focusing on the accounts used and their privileges; other components aren't that interesting security-wise. But the accounts used are probably over-privileged, as my experience has shown before.

Carlos Daniel Casañas Bertolo
Real User

The documentation MUST indicate that the standard security configuration is DENY EVERYTHING and grant permissions based on multiple conditions (IP, user, schedule, ...).
The database it is compatible with must support encryption.
Compatible with ISO 27000.
The trial must pass several security tests before being included as an option to choose.

Matt Davis
Real User

My company does not require a security review per se, although we do incorporate security measures to protect our network. For example, if your monitoring system is public facing, you'd want to lock it down so that only the IP ranges and TCP/UDP port ranges necessary for you to monitor what you want to monitor are allowed in. If you are doing only active monitoring, then you don't really need to allow any establishment of connections from outside. If you are using SNMP traps, or an agent that pushes info to the monitoring services, the respective IPs and ports need to be allowed in. You can do this with a firewall like iptables.

Security by obscurity is also still a helpful thing. Default port numbers, etc. are low-hanging fruit for bots and things that scour the internet for easy victims. You can also use something like fail2ban, which creates a blacklist of IPs that repeat failed logins.

It is also helpful to ask the vendor which versions of software they use. It is possible they use an older version, which is not as secure as one that is regularly updated with security patches. For example, do they use MySQL? Which version? What about the OS? Is it a version still supported?
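To illustrate the fail2ban idea, here is a toy sketch (fail2ban itself watches log files and manipulates firewall rules) that counts failed logins per source IP and blacklists an address after too many failures within a window; the thresholds are invented:

```python
# Toy fail2ban-style blacklist: too many failed logins from one IP within a window.
import time
from collections import defaultdict

WINDOW_SECONDS = 600
MAX_FAILURES = 5

failures: dict[str, list[float]] = defaultdict(list)
blacklist: set[str] = set()

def record_failed_login(ip: str) -> None:
    now = time.time()
    recent = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    failures[ip] = recent
    if len(recent) >= MAX_FAILURES:
        blacklist.add(ip)   # here you would add a firewall drop rule for this address

for _ in range(6):
    record_failed_login("203.0.113.7")
print(blacklist)  # {'203.0.113.7'}
```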

Sofian Bayoudh
User

IT security is an ongoing exercise, with some sporadic penetration testing. SOC should be closely coupled to NOC, especially in terms of log management, traffic capture and analysis (for heuristics/forensics), connectivity/management, DNS security, WAF, etc.
So it's more than a security review before deploying a NOC; it's rather a complete integration with proper design and planning.

Tjeerd Saijoen
Vendor

Security is always important. The first thing to review when you start using monitoring is whether you need it on-premises or from the cloud.

With on-premises you follow your own security rules; however, the following questions are important:

-How is the monitoring data stored in the database?
-Is FIPS enabled on the database?
-How are agents sending data? Is the data encrypted?
-What kind of data is sent between customer systems and the monitoring server?
-Does the monitoring software use security policies or, for example, integrate with LDAP or Active Directory?

Today you have many tools for infrastructure monitoring. We deliver monitoring from the cloud and use a VPN/IPsec tunnel between the customer and the systems in our cloud.

Also, we have customers doing security checks on our servers, and we use pattern recognition to check that our systems have no security leaks. Second, we use local gateways at the customer site to collect the data we need, and only the local gateway has a connection with our servers. Using this technology we have only one connection between the data center and the gateway, this connection is monitored all the time, and only 2 ports are open in the firewall.

What matters is what you are using for infrastructure monitoring and how it is connected, and what kind of interface (web or client/server) runs from the client to the monitoring server.

Anonymous User
With the security issues associated with SolarWinds - are people switching to other vendors?   Which ones are you switching to and why?
chamepicart
Real User

We switched from SolarWinds to Centreon even before the issue occurred. It's way cheaper, is a good alternative, and is very flexible to your needs. You can play with it yourself.

RobertUllman
User

ThousandEyes, acquired by Cisco, has interesting synergies with AppDynamics APM.

Tjeerd Saijoen
Vendor

Riverbed is also a great solution: very easy to install, with a great dashboard.

IanMacfarlane
Real User

I have used both and have to say my experience with ConnectWise was very good. Designed for MSPs, and when used with IT Glue and MyITProcess it is nice, accurate and seamless.

Sonal Kashyap
Vendor

We at MapleLabs have developed a best-of-breed APM & log management product, SnappyFlow (https://www.snappyflow.io/#/). SnappyFlow is a unique APM & log monitoring application that brings together metrics, logs, tracing & synthetic monitoring in an elegant workflow and unified view. We work with SaaS companies having cloud-native applications that are microservices-based, dispersed across VMs, Kubernetes & cloud services, and straddle hybrid clouds. SnappyFlow is best suited to optimize your cloud-native apps and maximize the performance and availability of your cloud infrastructure. SnappyFlow is currently listed in the AWS and vCenter marketplaces and is a very cost-effective, scalable and flexible solution compared to its peers, including CloudWatch. We're offering extended free trials till the time you're comfortable with the product. Please give it a thought. Thanks!

Abhirup Sarkar (EverestIMS Technologies)
Real User

Please check out InfraonIMS from EverestIMS Technologies.


(https://www.everestims.com/inf...)

The major advantage is an integrated solution which not only monitors the complete IT Infra but also provides complete visibility into the ticket lifecycle for any issues detected via the PINK-Certified InfraonDesk ITSM engine.

From a security standpoint, the tool is OWASP Certified for higher levels of protection against malicious attacks.

Bernd Harzog
Real User

The hackers targeted SolarWinds because SolarWinds has many customers. To minimize the risk of being hacked through one of your vendors, this suggests choosing unpopular vendors with few customers. Which is completely irrational.


Summary - this is a really hard problem and switching vendors does nothing to reduce your risk of this type of hack.

Tjeerd Saijoen
Vendor

IBM Netcool is a great alternative, also available as a SaaS solution from https://rufusai.com

Ariel Lindenfeld
Let the community know what you think. Share your opinions now!
Stacy Leidwinger (Goliath Technologies)
Real User

1) Ease of deployment and maintenance. The ideal solution will auto-discover your environment and have intelligence built in to tell you what to monitor and how to monitor with built-in alerts that leverage industry best practice thresholds. This way users can anticipate issues and resolve them before users are impacted. 


2) Historical, real-time, and discrete data that will show all IT infrastructure elements used to deliver a single end-user experience. The only way to monitor and troubleshoot issues is to have full visibility into the true user experience.


3) Document all user activity, behavior, and system performance so that you can share, integrate, and enhance data to collaborate with management, other IT teams, application vendors, and even end-users.

Dmytro Kutetskyi
Real User

I think you need to look for:


1. Unification. All aspects of the monitoring should be done by one or multiple tools. As an option, integration between tools should be possible.


2. Plug-in based or open architecture. Open Source will be a huge plus. In this case, you will have community support, and hiring the expert for widely used technology should not be the issue.


3. Tools should have quick support - monitoring could go down when you really need this. Open Source tools allow you to have a big market of engineers with good expertise.


4. Agree with other comments - ROI is very important here.

Michael Delzer (Michael Delzer Consulting)
Real User

The ability for the solution to correlate data from across the enterprise to remove noise in alerts, and for the alerts to be able to trigger automation to remediate a known problem/incident.

reviewer1608147 (CTO at Kaholo)
User

Most monitoring solutions have similar capabilities. Each one has a bit different set-up and points that are stronger than others. According to the 2020 CIO report from Dynatrace, on average, companies get close to 3,000 alerts daily. That is my concern.... Over time, you'll get more than one monitoring system and need to make sure that you have resources to deal with the alerts... scripts with Jenkins will not work as there are so many types of alerts. Try looking at low-code automation that will help you set up instant remediation pipelines. Companies like Lacework created a library of instant remediation workflows that customers can use in minutes...

Michael DelSecolo
User

I would propose looking at infrastructure monitoring from a different perspective. The analogy I would use is to equate infrastructure monitoring to a big data problem with the need for automation. In today's world we have many infrastructure devices that transmit a large amount of data or telemetry, and the key to quick automated response is to look at adjacencies and quickly determine corrective action. I suggest injecting the telemetry into an infrastructure data lake and applying ML & AI applications to determine issues, plus automation to solve them quickly. The amount of data produced has become daunting, and I suggest taking a data-driven approach instead of siloed infrastructure monitoring tools.

reviewer1599867 (Senior Performance and Architecture Analyst at a manufacturing company with 10,001+ employees)
Real User

Most important: know in depth your environment and the future evolution, upcoming change - this includes systems, solution, operations model, SLO.


After, you can initiate the second step, identify tool(s), etc.  

reviewer1584621 (Cyber Security Consultant at a tech services company with 11-50 employees)
Real User

I would consider multiple metrics to evaluate an infrastructure, such as:


1- CAPEX, OPEX & ROI for financial.


2- Security, reliability, operational complexity, lifetime, etc. for technical.


3- Extra benefits that might be gained from different solutions, such as possible cross-solution integrations, which can often be a tie-breaker.


I hope this helps.

Rony_Sklar
Hi community members, I have some questions for you:  What is ITOM? How does it differ from ITSM?  Which products would you recommend to make up a fully defined ITOM suite?
Tjeerd Saijoen
Vendor

ITOM is a range of products integrated together; it covers infrastructure management, network management, application management, firewall management, and configuration management. You have a choice of products from different vendors (BMC, IBM, Riverbed, ManageEngine, etc.).


ITSM is a set of policies and practices for implementing, delivering and managing IT services for end users.


Syed Abu Owais Bin Nasar
Real User

One difference is that ITSM is focused on how services are delivered by IT teams, while ITOM focuses more on event management, performance monitoring, and the processes IT teams use to manage themselves and their internal activities.


I recommend BMC TrueSight Operations Management (TSOM), an ITOM tool. TrueSight Operations Management delivers end-to-end performance monitoring and event management. It uses AIOps to dynamically learn behavior, correlate, analyze, and prioritize event data so IT operations teams can predict, find and fix issues faster.


For more details:
https://www.bmc.com/it-solutio...

Nick Giampietro
User

Rony, ITOM and ITSM are guidelines (best practices) with a punch list of all the things you need to address in managing your network and the applications which ride on them. 


Often the range of things on the list is relatively broad and often while some software suites offered by companies will attempt to cover ALL the items on the list, typically, the saying "jack of all trades, master of none!" comes to mind here. 


In my experience, you can ask this question by doing a Google search and come up with multiple responses each covering a small range of the best practices. 


My suggestion is to meet with your business units and make sure you know what apps are critical to their success and then meet with your IT team to ask them how they manage those applications and make sure they are monitoring the performance of those applications. Hopefully, both teams have some history with the company and can provide their experiences (both good and bad) to help you prioritize what is important and key IT infrastructure that needs to be monitored.  

Like most things in life, there is more than one way to skin the cat. 

reviewer1195575 (Managing Director at a tech services company with 1-10 employees)
Real User

There are two letters which define a core "difference" in these definitions, and one which defines a common theme.
O for Operations is the first pointer to the IT function of using IT infrastructure to keep the business satisfied. That does involve day-to-day tasks but also longer-term planning. Ideally, Operations teams DON'T firefight except in rare circumstances, but have information at hand on the health of all objects that could impact the business directly or indirectly. Monitoring collects data; correlation and analysis then help extract useful information to deduce the situation and take corrective action. The functions available in toolsets may automate parts of that; rare are the cases where they become 100% automatic.


S points to service delivery to users, hence ITSM is mostly about serving users. So for many, ITSM is in fact the help desk or ticket management. Of course within ITSM there's a lot more to it, maybe a lot of analytics of operations data as well as the history of past incidents and the fixes to them that impacted service delivery in the past. ITSM may also include commitments; so-called SLAs/SLOs are contracts that describe the quality of service expected and committed to.


M for Management means that more than tools are needed for both. People are needed even if automation is highly present, as all automation will require design and modification. Change is constant.
Management means processes for standardisation of data, tasks and their execution, etc. It also means data collection, cleansing, handling, analysis, protection, access and many other aspects without which risks are taken and delivery of service becomes more hazardous.


ITIL and other formalised standards of conduct in the IT world have proven to be vital ways of driving standardisation, and shouldn't be ignored.


With the emergence of modern application landscapes and DevOps there's a tendency to "imagine" doing away with ITOM and ITSM.
Like everything, they need to evolve and have over the last couple of decades, but getting some of the basics correct goes a long way to ensuring IT serves business as a partner.


Hani Khalil
Real User

Hi,


ITOM is IT Operations Management which is the process of managing the provisioning, capacity, cost, performance, security, and availability of infrastructure and services including on-premises data centers, private cloud deployments, and public cloud resources.


ITSM refers to all the activities involved in designing, creating, delivering, supporting and managing the lifecycle of IT services.


I tried Micro Focus OBM (HP OMi) and it's good. You also have App Manager from ManageEngine.

Rony_Sklar
Hi. There are many open-source IT Infrastructure Monitoring solutions on the market. However, large companies often invest in a paid solution.  What do paid monitoring solutions offer that make them a better choice than open source?
Richard Treadway (NetApp)
Real User

When you’re considering infrastructure monitoring, you certainly have plenty of options to choose from. From free to expensive, from targeted to comprehensive, today’s monitoring software presents a confusing assortment of features and claims.


How do you make sense of it all and choose a solution that meets your needs and is within your budget? The question “Can you afford it?” might quickly become “Can you afford not to have it?,” especially when you consider the cost of poor performance and downtime.


We look at ways to frame the problem and decide what’s right for you. We start by looking at cloud infrastructure monitoring tools and why they need to be specifically designed to handle new cloud-based applications. Then we’ll consider how to measure the value of a monitoring tool and what it’s worth to your organization.  


Read the complete analysis in this eBook

Infrastructure Monitoring Software: Can You Afford It? How to choose a monitoring tool that meets your needs — and your budget


Thanks,


Richard Treadway

David Collier
Reseller

Frankly speaking, these days there is very little if any advantage. Allow me to build on that if you will.....


First, let's talk about cost. Open source software, such as Zabbix (and numerous others) does not suffer from complex licensing considerations. Most paid-for solutions license their products based on number of devices, number of users, number of locations or a combination of all the above. There are even some licensing schemes where a docker container counts as 1/10th of a device. This makes license compliance difficult and an unwanted overhead to users. It's no wonder that many commercial vendors have "License compliancy" teams who are incentivised to identify and pursue non-compliant customers. In effect, increasing licensing complexity is actually good for the vendor.


With open source licensing these issues are non-existent.


Next, support. One commentator said "Open source software isn't free, you have to pay for support". This is true - if you want official support from the vendor. Speaking from wide-ranging personal experience, I've found that the support from open source providers is, without exception, orders of magnitude better than the paid for software. I've found that there isn't the obfuscation and "script reading" from open source support compared to commercial vendors. Also, with commercial vendors there are usually 2 components to a software support contract - "problem resolution" and "product updates". For most vendors, an annual support and upgrade contract is upwards of 20% of the initial capital cost of the software, usually with an annual uplift. So every 5 years you're buying the software again.


This takes me on to feature bloat. Some of the major vendors seem to introduce features almost to justify the annual support contract. At a high level, the features are always valuable, but in my experience are rarely implemented by users correctly or worse, not even used at all. These features are developed at the insistence of the largest customers and in those cases, the very largest will indeed use the feature. However, every customer will pay for a feature that delivers benefit to only the biggest customers. Smaller customers are hit twice in that they pay via their support contract for these features which likely need increased IT resource just to run, let alone use.


I think I should pause here, as it might appear that I have a massive chip on my shoulder regarding commercial ITOM vendors. Nothing is further from the truth. I have implemented a considerable number of such systems and I've found that once running they are reliable. They do require just as much "care and feeding" as open source solutions but there's no doubt there is a psychological aspect to having paid for the solution. The phrase "reassuringly expensive" comes to mind.


Another commentator talked about "fast training". Well, training is another area where open source providers excel. Certifications, on-site training, remote training, community support and so forth are all part of the open-source ecosystem, just as with any commercial vendor.


Someone else mentioned better security from commercial software. I don't want to offend, but let's ask Solarwinds about that. Open-source software prides itself on the availability of the source code. There are many tools out there that scan source code to find vulnerabilities (including some big vendors). This isn't possible with commercial vendors.


It's also worth talking about "supplier stability" - commercial vendors are likely to be more trusted as they are "real" whereas there still seems to be a perception that open source is written by amateurs in their parents' basement. Nothing could be further from the truth. Take a look at Apache and RedHat as 2 examples.


Open source in the 21st Century is as professional and of a quality and security that exceeds many commercial vendors.


Try it, you'll like it.

Stacy Leidwinger (Goliath Technologies)
Real User

George Spiers, a certified Citrix CTP and technology evangelist, has a nice write-up about how, without enough experts to manage the infrastructure, you have to rely on technology to help:


https://www.jgspiers.com/troub...


In relying on software, I think the key difference commercial products can bring is the embedded intelligence and automation that can discover your overall infrastructure along with the what and how to monitor automatically. With open-source, you often have to decide what you want to monitor and what thresholds to set and even create scripts to manage the alert process. With 3rd party tools, it is all there and a core reason for needing help is you don't have enough experts already. 


Just make sure if you are paying for a product, it is doing a lot of the setup, configuration, and management for you! 


Faustine Chisasa
Real User

The main reason large companies invest in paid solutions is the advanced and timely support  provided when using paid solutions. And of course those who sell the solutions need to do a good support job to continue selling their products.


Using open source solutions means having very skilled personnel to do most, if not all, of the work of getting things working and keeping them running, which can be more expensive than paying for support through a paid solution. For example, a monitoring solution for a large enterprise may require experts or separate teams to handle database administration, network and systems administration, web applications, systems security, etc., but most of the work will be done by the vendor or supplying partner when it comes to paid solutions.


Also paid solutions seem to be more secure and when issues or bugs arise, the supplier/vendor is responsible to quickly get things on track. 


Paid solutions can easily provide customer-requested features as part of support, which means less work on the end user's side for feature requests.


And since paid solutions are often sold to many similar customers, many large companies feel more assured by the large portfolio of existing customers and hence see less risk, in their opinion.

reviewer904428 (IT Strategy, Management Consulting, Training and Development at a comms service provider with 51-200 employees)
Real User

The so-called free NMS is not free if you want support. All organizations whose critical business processes depend on a reliable IT network, and therefore on reliable monitoring, have to go for an NMS tool that is preferred by organisations whose business processes are IT-critical. It is simple: anything free is not guaranteed to work under all circumstances. The financial viability of the NMS tool vendor is essential to having a reliable tool, because it helps them continue to invest in future developments in line with the emerging network technology landscape.

Abhirup Sarkar (EverestIMS Technologies)
Real User

Generally, apart from IT companies, I feel it's difficult to manage an open source tool since it requires a lot of manual effort. IT managers / network admins / server admins would rather use a commercial tool so they spend less time on it from a daily operations/maintenance perspective. The major challenges of open source tools include:


1. Lack of professional support increases the tool deployment cycle


2. Better security and quick fixes to any security issues found in a commercial tool


3. Need for expensive resources to manage an open source tool (must have scripting knowledge in programming languages like Perl / Python / Java, etc.)


4. Steep learning curve leads to user dissatisfaction with open source tools


5. Limited API support for 3rd party integrations in open source tools


6. Better roadmap visibility, both from a product and company perspective, for any commercial tool as compared to a similar open source tool


7. Personalized / tailor-made solutions are a possibility with commercial tools (although this may involve additional costs)

Oan-Ali
Real User

Most modern open source infrastructure monitoring tools can meet the needs of any organisation, but the reason organisations opt for a commercial product is the timely support and upgrades from the vendor.


Having said that, with open source software, companies can go for enterprise support from any certified partner. In this way the company saves massive CAPEX on software licensing.

Mamadou-Diallo
Real User

Hello,


It's to have technical support and a guarantee of the security and integrity of the applications.


Also to have quick training for the technicians.


Thank you for your interest,


IT Infrastructure Monitoring Articles

Shibu Babuchandran
Regional Manager/ Service Delivery at ASPL Info Services
Sep 14 2021
The Essential Guide to AIOps

What Is AIOps?

AIOps is the practice of applying analytics and machine learning to big data to automate and improve IT operations. These new learning systems can analyze massive amounts of network and machine data to find patterns not always identified by human operators. These patterns can both identify the cause of existing problems and predict future impacts. The ultimate goal of AIOps is to automate routine practices in order to increase accuracy and speed of issue recognition, enabling IT staff to more effectively meet increasing demands.

History and Beginnings

The term AIOps was coined by Gartner in 2016. In the Market Guide for AIOps Platforms, Gartner describes AIOps platforms as “software systems that combine big data and artificial intelligence (AI) or machine learning functionality to enhance and partially replace a broad range of IT operations processes and tasks, including availability and performance monitoring, event correlation and analysis, IT service management and automation.”

AIOps Today

Ops teams are being asked to do more than ever before. In a common practice that can sometimes even feel laughable, old tools and systems never seem to die. Yet the same ops teams are under constant pressure to support more new projects and technologies, very often with flat or declining staffing. To top it off, increased change frequencies and higher throughput in systems often mean the data these monitoring tools produce is almost impossible to digest.

To combat these challenges, AIOps:

•Brings together data from multiple sources: Conventional IT operations methods, tools and solutions aggregate and average data in simplistic ways that compromise data fidelity (as an example, consider the aggregation technique known as “averages of averages”). They weren’t designed for the volume, variety and velocity of data generated by today’s complex and connected IT environments. A fundamental tenet of an AIOps platform is its ability to capture large data sets of any type while maintaining full data fidelity for comprehensive analysis. An analyst should always be able to drill down to the source data that feeds any aggregated conclusions.

•Simplifies data analysis: One of the big differentiators for AIOps platforms is their ability to correlate these massive, diverse data sets. The best analysis is only possible with all of the best data. The platform then applies automated analysis on that data to identify the cause(s) of existing issues and predict future issues by examining intersections between seemingly disparate streams from many sources.

•Automates response: Identifying and predicting issues is important, but AIOps platforms have the most impact when they also notify the correct personnel, automatically remediate the issue once identified or, ideally, execute commands to prevent the issue altogether. Common remedies such as restarting a component or cleaning up a full disk can be handled automatically so that the staff is only involved once typical solutions have been exhausted.

Key Business Benefits of AIOps

By automating IT operations functions to enhance and improve system performance, AIOps can provide significant business benefits to an organization. For example:

•Avoiding downtime improves both customer and employee satisfaction and confidence.

•Bringing together data sources that had previously been siloed allows more complete analysis and insight.

•Accelerating root-cause analysis and remediation saves time, money and resources.

•Increasing the speed and consistency of incident response improves service delivery.

•Finding and fixing complicated issues more quickly improves IT’s capacity to support growth.

•Proactively identifying and preventing errors empowers IT teams to focus on higher-value analysis and optimization.

•Proactive response improves forecasting for system and application growth to meet future demand.

•Adding “slack” to an overwhelmed system by handling mundane work, allowing humans to focus on higher-order problems, yielding higher productivity and better morale.

Data Is Vital for AIOps

Data is the foundation for any successful automated solution. You need both historical and real-time data to understand the past and predict what’s most likely to happen in the future. To achieve a broad picture of events, organizations must access a range of historical and streaming data types of both human- and machine-generated data.

Better data from more sources will yield analytics algorithms better able to find correlations too difficult for humans to isolate, allowing the resulting automation tasks to be better curated. For example, it’s not hard in most semi-modern monitoring systems to automate some sort of response. However, if response times slow down an application, AIOps would help ensure the correct automated response and not just the “knee-jerk” response that’s statically connected. Adding more capacity to a service may in fact make a slowdown worse if the bottleneck isn’t related to capacity. And it certainly can result in unintended and unnecessary costs in cloud environments. Thus, having the right data to make more complete decisions results in better outcomes.

For total visibility, it's necessary to access data in one place across all of your IT silos. It's important to understand the underlying data supporting your services and applications — defining KPIs that determine health and performance status. As you move beyond data aggregation, search and visualizations to monitor and troubleshoot your IT, machine learning becomes the key to achieving predictive analysis and automation.

Key AIOps Use Cases

According to Gartner, there are five primary use cases for AIOps:

1. Performance analysis

2. Anomaly detection

3. Event correlation and analysis

4. IT service management

5. Automation


1. Performance analysis:

It has become increasingly difficult for IT professionals to analyze their data using traditional IT methods, even as those methods have incorporated machine learning technology. The volume and variety of data are just too large. AIOps helps address the problem of increasing volume and complexity of data by applying more sophisticated techniques to analyze bigger data sets to identify accurate service levels, often preventing performance problems before they happen.

2. Anomaly detection:

Machine learning is especially efficient at identifying data outliers — that is, events and activities in a data set that stand out enough from historical data to suggest a potential problem. These outliers are called anomalous events. Anomaly detection can identify problems even when they haven’t been seen before, and without explicit alert configuration for every condition.

Anomaly detection relies on algorithms. A trending algorithm monitors a single key performance indicator (KPI) by comparing its current behavior to its past. If the score grows anomalously large, the algorithm raises an alert. A cohesive algorithm looks at a group of KPIs expected to behave similarly and raises alerts if the behavior of one or more changes. This approach provides more insight than simply monitoring raw metrics and can act as a bellwether for the health of components and services.
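A stripped-down version of such a trending algorithm might look like the sketch below, which flags a KPI sample whose z-score against its own recent history exceeds a threshold; the window size and threshold are illustrative, and real AIOps platforms use far richer models:

```python
# Toy trending detector: flag a KPI value that deviates too far from its own history.
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag the latest KPI sample if it sits more than `threshold` standard
    deviations away from the mean of its recent history."""
    if len(history) < 10:
        return False                      # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    z = abs(latest - mean) / stdev
    return z > threshold

response_times = [120, 118, 125, 119, 121, 117, 123, 120, 122, 119]  # ms
print(is_anomalous(response_times, 121))   # False - normal behaviour
print(is_anomalous(response_times, 410))   # True  - anomalous spike
```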

AIOps makes anomaly detection faster and more effective. Once a behavior has been identified, AIOps can monitor and detect significant deviations between the actual value of the KPI of interest versus what the machine learning model predicts. Accurate anomaly detection is vital in complex systems as failures often exist in ways that are not always immediately clear to the IT professionals supporting them.

3. Event correlation and analysis:

The ability to see through an "event storm" of multiple, related warnings to identify the underlying cause of events. The reality of most complex systems is that something is always "red" or alerting. It's inevitable. The problem with traditional IT tools, however, is that they don't provide insights into the problem, just a storm of warnings. This creates a phenomenon known as "alert fatigue"; teams see a particular alert that turns out to be trivial so often that they ignore the alert even on the occasions when it's important.

AIOps automatically groups notable events based on their similarity. Think of this as drawing a circle around events that belong together, regardless of their source or format. This grouping of similar events reduces the burden on IT teams and reduces unnecessary event traffic and noise. AIOps focuses on key event groups and performs rule-based actions such as consolidating duplicate events, suppressing alerts or closing notable events. This enables teams to compare information more effectively to identify the cause of the issue.
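The grouping idea can be sketched very simply: pick a similarity key (here just host, check and a time bucket, which is a deliberate simplification) and consolidate everything that shares it into one notable event:

```python
# Toy event grouping: consolidate duplicate alerts instead of emitting an alert storm.
from collections import defaultdict

events = [
    {"host": "web-01", "check": "cpu",  "ts": 100, "msg": "CPU > 95%"},
    {"host": "web-01", "check": "cpu",  "ts": 130, "msg": "CPU > 95%"},
    {"host": "web-01", "check": "cpu",  "ts": 170, "msg": "CPU > 95%"},
    {"host": "db-01",  "check": "disk", "ts": 140, "msg": "Disk 90% full"},
]

BUCKET_SECONDS = 300
groups = defaultdict(list)
for event in events:
    key = (event["host"], event["check"], event["ts"] // BUCKET_SECONDS)
    groups[key].append(event)

for key, members in groups.items():
    first = members[0]
    # One consolidated notable event per group instead of one alert per occurrence.
    print(f"{first['host']}/{first['check']}: {first['msg']} (x{len(members)})")
```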

4. IT service management (ITSM): A general term for everything involved in designing, building, delivering, supporting and managing IT services within an organization. ITSM encompasses the policies, processes and procedures of delivering IT services to end-users within an organization. AIOps provides benefits to ITSM by letting IT professionals manage their services as a whole rather than as individual components. They can then use those whole units to define the system thresholds and automated responses to align with their ITSM framework, helping IT departments run more efficiently.

AIOps for ITSM can help IT departments to manage the whole service from a business perspective rather than managing components individually. For example, if one server in a pool of three machines encounters problems during a normal-load period, the risk to the overall service may be considered low, and the server can be taken offline without any user-facing impact. Conversely, if the same thing were to happen during a high-load period, an automated decision could be taken to add new capacity before taking any poor-performing systems offline.

In addition, AIOps for ITSM can help:

• Manage infrastructure performance in a multi-cloud environment more consistently

• Make more accurate predictions for capacity planning

• Maximize storage resource availability by automatically adjusting capacity based on forecasting needs.

• Improve resource utilization based on historical data and predictions

• Manage connected devices across a complex network

5. Automation: Legacy tools often require manually cobbling information together from multiple sources before it’s possible to understand, troubleshoot and resolve incidents. AIOps provides a significant advantage — automatically collecting and correlating data from multiple sources into complete services, increasing the speed and accuracy of identifying necessary relationships. Once an organization has a good handle on correlating and analyzing data streams, the next step is to automate responses to abnormal conditions.

An AIOps approach automates these functions across an organization’s IT operations, taking simple actions that responders would otherwise be forced to take themselves. Take for example a server that tends to run out of disk space every few weeks during high-volume periods due to known-issue logging. In a typical situation, a responder would be tasked with logging in, checking for normal behavior, cleaning up the excessive logs, freeing up disk space and confirming nominal performance has resumed. These steps could be automated so that an incident is created and responders are notified only if normal responses have already been tried and have not remedied the situation. These actions can range from the simple, like restarting a server or taking a server out of load-balancer pools, to more sophisticated, like backing out a recent change or rebuilding a server (container or otherwise).
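The disk-space example above might reduce to a sketch like this one, where the routine fix is attempted automatically and an incident is opened only if it does not help; the paths, threshold and ticketing helper are all hypothetical:

```python
# Toy automated remediation: clean old logs first, open an incident only if that fails.
import shutil
from pathlib import Path

THRESHOLD = 0.90
LOG_DIR = Path("/var/log/myapp")            # hypothetical application log directory

def disk_usage_ratio(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def clean_old_logs() -> None:
    for log_file in sorted(LOG_DIR.glob("*.log.*")):   # rotated logs only
        log_file.unlink(missing_ok=True)

def open_incident(summary: str) -> None:
    print(f"INCIDENT: {summary}")            # stand-in for a real ticketing integration

def remediate() -> None:
    if disk_usage_ratio() < THRESHOLD:
        return
    clean_old_logs()                          # the routine, pre-approved fix
    if disk_usage_ratio() >= THRESHOLD:
        open_incident("Disk still above threshold after automated cleanup")

remediate()
```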

AIOps automation can also be applied to:

•Servers, OS and networks: Collect all logs, metrics, configurations and messages to search, correlate, alert and report across multiple servers.

•Containers: Collect, search and correlate container data with other infrastructure data for better service context, monitoring and reporting.

•Cloud monitoring: Monitor performance, usage and availability of cloud infrastructure.

•Virtualization monitoring: Gain visibility across the virtual stack, make faster event correlations, and search transactions spanning virtual and physical components.

•Storage monitoring: Understand storage systems in context with corresponding app performance, server response times and virtualization overhead.

•Application monitoring: Identify application service levels and suggest or automate response to maintain defined service level objectives.

AIOps and the Shift to Proactive IT

One of the primary benefits of AIOps is its ability to help IT departments predict and prevent incidents before they happen, rather than waiting to fix them after they do. AIOps, specifically the application of machine learning to all of the data monitored by an IT organization, is designed to help you make that shift today.

By reducing the manual tasks associated with detecting, troubleshooting and resolving incidents, your team not only saves time but adds critical “slack” to the system. This slack allows you to spend time on higher-value tasks focused on increasing the quality of customer service. Your customer experience is maintained and improved by consistently maintaining uptime.

AIOps can have a significant impact in improving key IT KPIs, including:

• Increasing mean time between failures (MTBF)

• Decreasing mean time to detect (MTTD)

• Decreasing mean time to investigate (MTTI)

• Decreasing mean time to resolution (MTTR)

IT organizations that have implemented a proactive monitoring approach with AIOps have seen significant improvement across these metrics.
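As a small illustration, two of these KPIs, MTBF and MTTR, can be computed directly from incident records; the timestamps below are invented:

```python
# Compute MTTR and MTBF from a list of incident records (illustrative data).
from datetime import datetime

incidents = [
    {"start": datetime(2021, 9, 1, 2, 0),  "resolved": datetime(2021, 9, 1, 3, 30)},
    {"start": datetime(2021, 9, 8, 14, 0), "resolved": datetime(2021, 9, 8, 14, 45)},
    {"start": datetime(2021, 9, 20, 9, 0), "resolved": datetime(2021, 9, 20, 10, 0)},
]

# MTTR: average time from failure to resolution.
mttr_hours = sum(
    (i["resolved"] - i["start"]).total_seconds() for i in incidents
) / len(incidents) / 3600

# MTBF: average time between the start of consecutive failures.
gaps = [
    (incidents[i + 1]["start"] - incidents[i]["start"]).total_seconds()
    for i in range(len(incidents) - 1)
]
mtbf_hours = sum(gaps) / len(gaps) / 3600

print(f"MTTR: {mttr_hours:.1f} h, MTBF: {mtbf_hours:.1f} h")
```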

How to Get Started With AIOps

The best way to get started with AIOps is an incremental approach. As with most new technology initiatives, a plan is key. Here are some important considerations to get you started.

Choose Inspiring Examples

If you’re evaluating AIOps solutions, platforms and vendors for your organization, you’ve got a big task ahead of you. The most challenging aspect may not be the evaluation process itself, but gaining the support and executive buy-in you need to conduct the evaluation.

If you choose inspiring examples of other, similar organizations that have benefited from AIOps — and have metrics to prove it — you’ll have a much easier time getting the go-ahead. A good partner can help you do that.

Consider People and Process

It’s obvious that technology plays an important role in AIOps, but it’s just as important to make a plan to address people and process.

For example, if an AIOps solution identifies a problem that’s about to happen and pages a support team to intervene, a responder might ignore the warning because nothing has actually happened yet. This can undermine trust in the AIOps solution before it has a chance to be proven in operation.

It’s also important to give IT teams the time to work on building, maintaining and improving systems. This vital work can’t be assigned as a side project or entry-level job if you expect meaningful change. Put your best people on it. Make it a high priority so other work can’t infringe on it. AIOps practices are iterative and must be refined over time; this can only be done with a mature and consistent focus on improvement.

You’ll also need to re-examine and adjust previously manual processes that had multiple levels of manager approval, like restarting a server. This requires trust in both technology and team practices. Building trust takes time. Start with simple wins to build cultural acceptance of automation. For example, be prepared to build historical reports that show previous incidents were correctly handled by a consistent, simple activity (such as a restart or disk cleanup) and offer to automate those tasks on similar future issues. Choose a solution that allows for “automation compromise” by inserting approval gates for certain activities. Over time, those gates should be removed to improve speed as analytics proves its value in selecting correct automation tasks.

Finally, include in your plans a campaign to reassure staff that AIOps is not intended to replace people with robots. Show them how AIOps can free up key resources to work on higher-value activities — limiting the unplanned work your teams have to endure each day.

The Bottom Line: Now Is the Time for AIOps

If you’re an IT and networking professional, you’ve been told over and over that data is your company’s most important asset, and that big data will transform your world forever. Machine learning and artificial intelligence will be transformative and AIOps provides a concrete way to leverage its potential for IT. From improving responsiveness to streamlining complex operations to increasing productivity of your entire IT staff, AIOps is a practical, readily available way to help you grow and scale your IT operations to meet future challenges. Perhaps most important, AIOps can solidify IT’s role as a strategic enabler of business growth.

Evgeny Belenky: Great article, @Shibu Babuchandran! Thank you for sharing your knowledge with…
Shibu Babuchandran
Regional Manager/ Service Delivery at ASPL Info Services
Aug 31 2021
Future of NOC

Future of NOC transformation unifies IT teams

NOC transformation could lead to unified IT operations with cross-domain teams, but not all enterprises need radical change when smaller upgrades and modernization do the job.

In the technology world, it can be easy to throw around the word transformation and lose the nuances of what it entails.

Consider the networking industry. Remote work requires enterprises to rethink VPN strategies and management. Network automation means network practitioners have to shift from manual tasks and trust automated processes. Advances in security and visibility result in more collaboration with security teams. Like a chain reaction, each of these developments influences other areas, spurring more change, such as Network Operations Center (NOC) transformation.

Transformation occurs in varying increments and levels, depending on enterprise strategies, risk and motivation -- and the same applies to NOCs. Some companies don't have -- or need -- NOCs, some are gradually modernizing their NOCs and some are pursuing full-blown NOC transformation.

The role of traditional NOCs

For years, organizations have used NOCs to maintain an operational view of the network and the services running across it. NOC technicians and analysts follow certain best practices to monitor network performance, handle service desk tickets, triage and troubleshoot, and, if needed, escalate problems.

But many businesses don't function the same way they did five years ago -- or even one year ago -- and various factors are reshaping network operations strategies and priorities. The global pandemic is one obvious stimulant. But progress in server virtualization, IoT, cloud, containers and microservices has also sparked NOC transformation.

As technology has evolved, network traffic flows have changed, and application support is more complicated. As a result, network operations need to be more proactive and implement comprehensive visibility tools for their environments.

For example, end users recognize one-third of all IT service problems before NOC technicians or other teams are alerted, which means one-third of all problems can impede business productivity before IT is aware of them. Remote work has exacerbated many of those management concerns, prompting network technicians to retool so they can achieve visibility into home office networks. Those tools include remote desktop access, endpoint transaction monitoring and laptop agents that generate test traffic to gauge latency and dropped packets.
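One of those agent checks could be as simple as the following sketch, which measures TCP connect latency toward a corporate service and counts failed attempts as drops; the host and port are placeholders:

```python
# Toy endpoint probe: TCP connect latency and drop count toward a corporate service.
import socket
import statistics
import time

def tcp_connect_latency(host: str, port: int, samples: int = 5) -> dict:
    """Measure TCP connect time; failed attempts are counted as drops."""
    times, drops = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=2):
                times.append((time.monotonic() - start) * 1000)   # milliseconds
        except OSError:
            drops += 1
    return {
        "avg_ms": statistics.mean(times) if times else None,
        "drops": drops,
        "samples": samples,
    }

print(tcp_connect_latency("rds-farm.example.com", 443))   # hypothetical RDS gateway
```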

As operating models change, network teams should shift from tactical tasks -- in which they simply deploy, fix and maintain operations -- to strategic tasks that enable innovation and automation.

NOC Transformation doesn't look the same for every organization.

Virtualization and automation drive NOC modernization

Many NOC upgrades aren't radical transformations; rather, they're part of business strategies to virtualize, consolidate or modernize networks. Network teams undertake these upgrades to meet their goals of reduced downtime, improved end-user satisfaction and increased innovation within IT.

Within networking, teams are prioritizing modernization in the following areas:

  1. network security
  2. network virtualization
  3. network automation
  4. network operations optimization

With network operations optimization, teams look at how they can improve service-level agreement compliance and accelerate their mean time to resolution. In some cases, NOC teams troubleshoot issues that are originally perceived to be network problems, which they later discover to be security incidents. That time-lapse could be critical in the event of a breach or attack -- and could be shortened if network teams worked with security teams.

"Over the last four or five years, network operations teams -- whether they're in a NOC or a cross-domain team -- are trying to work more closely with security,"

Enterprises are also shifting their network operations strategies to prioritize integrated network and security management, as networking and security become increasingly bonded. As the integration of these two previously siloed departments strengthens, so does IT innovation.

NOC transformation with unified operations

Enterprises that are focused on IT innovation and optimizing network operations could pursue a more transformational operations strategy. Perhaps the most ambitious NOC transformation is one that eliminates the standalone NOC and security operations center in favor of a unified operations center that includes networking, security, cloud and applications teams.

The goal of this unified approach is to streamline operations so that all applications and services are highly resilient and avoid long downtimes. Cross-domain teams collaborate to prevent trouble proactively instead of reacting to issues, helping enterprises achieve the innovation they desire.

Operations teams, however, need IT leadership guidance if they want to implement a unified operations approach. Different teams might not always get along, but the initiative is more likely to succeed with leadership support.

Another important factor to consider is data, which could be an asset or obstacle to a unified IT operations approach.

"[Networking and security] might have their own data repositories they guard jealously and don't want to share. If they do share, they might find their data conflicts with each other.

One way to address that issue is to establish a common data set. Enterprises can implement a fabric that centrally distributes traffic to the individual tools each team uses. Because those tools draw data from the same fabric, teams can collaborate and share data more easily. The teams can also share an analysis tool -- with clear processes on how to use it -- to provide common views, reports and dashboards.
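
As an illustrative sketch of that common-data idea, the snippet below normalizes simplified network flow records and security alerts into a single shared event schema that both teams, and a shared analysis tool, could query. The field names and record formats are hypothetical and not taken from any particular product.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shared event schema; field names are illustrative only.
@dataclass
class OpsEvent:
    source_team: str   # "network" or "security"
    device: str
    metric: str
    value: float
    timestamp: str

def normalize_flow_record(rec: dict) -> OpsEvent:
    # Map a (simplified) flow record from the network team's tooling into the shared schema.
    return OpsEvent(
        source_team="network",
        device=rec["exporter"],
        metric="flow_bytes",
        value=float(rec["bytes"]),
        timestamp=rec.get("ts", datetime.now(timezone.utc).isoformat()),
    )

def normalize_ids_alert(alert: dict) -> OpsEvent:
    # Map a (simplified) IDS alert from the security team's tooling into the same schema.
    return OpsEvent(
        source_team="security",
        device=alert["sensor"],
        metric="alert_severity",
        value=float(alert["severity"]),
        timestamp=alert.get("ts", datetime.now(timezone.utc).isoformat()),
    )

if __name__ == "__main__":
    shared_store = [
        asdict(normalize_flow_record({"exporter": "edge-rtr-01", "bytes": 1_250_000})),
        asdict(normalize_ids_alert({"sensor": "ids-dc-02", "severity": 7})),
    ]
    # Both teams, and any shared analysis tool, would query this single store.
    for event in shared_store:
        print(event)
```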

NOC transformation is not for everyone

Moving away from a standalone NOC to a unified operations approach can help streamline IT operations and improve overall service delivery. But independent NOCs are still an established and reliable way to monitor operations -- and moving away from them is a disruptive strategy that might not be for every organization.

"NOC transformation isn't going to be for everyone, and it isn't necessarily a best practice to go from a traditional NOC to something like an integrated cross-domain operation center" .

Shibu Babuchandran
Regional Manager/ Service Delivery at ASPL Info Services
Aug 18 2021

IT Operations Management (ITOM) refers to the administration of technology and application requirements within an IT organization. Under the ITIL framework, ITOM’s objective is to monitor, control, and execute the routine tasks necessary to support an organization’s IT infrastructure.

In addition to the above, an ITOM solution ensures effective provisioning and management of capacity, cost, performance, and security of the IT infrastructure within the organization.

It’s interesting to see how mainstream technology seeps into various management paradigms such as AI supporting IT Service Management (ITSM) and IT Operations Management (ITOM). What’s more exciting is when these processes inspire and spread outside the IT infrastructure to the rest of the organization’s departments such as in the case of Enterprise Service Management (ESM).

2021-2022 will see a phenomenal shift in ITOM and its objective of providing cost-effective, efficient, and high-quality delivery of services. In this article we discuss five upcoming ITOM trends that are essential for securing and maintaining your IT infrastructure:

1. Data-driven IT Operations

A study conducted by Gartner estimates that by 2022, 60% of enterprise IT infrastructure will focus on “centers of data” that will inherently drive the majority of IT operations workflows and decision-making.

IT operations have always depended on incoming data to revisit previous assumptions, improve processes, and increase performance efficiency. Now more than ever, teams will need data from multiple sources such as logs, metrics, and traces to keep pace.

This becomes more daunting as private sector companies increasingly turn to private cloud systems for data sovereignty, latency, or compliance reasons. Private cloud allows for enterprise services that need the flexibility and agility of the cloud but require siloed IT infrastructure.

Additionally, Artificial Intelligence for IT Operations (AIOps) will also play a role in monitoring, organizing, and managing large amounts of IT operations and event data.
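
To make that concrete, the toy example below flags metric samples that deviate sharply from a trailing baseline -- a highly simplified stand-in for the statistical baselining and anomaly detection that AIOps tools apply to large volumes of operations and event data.

```python
import statistics

def flag_anomalies(samples: list[float], window: int = 20, z_threshold: float = 3.0) -> list[int]:
    """Return indices of samples that deviate strongly from the trailing window.
    A toy stand-in for the baselining AIOps tools apply to metric streams."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero on flat data
        if abs(samples[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

if __name__ == "__main__":
    # 40 "normal" latency readings followed by a sudden spike.
    latency_ms = [20.0 + (i % 3) for i in range(40)] + [95.0]
    print(flag_anomalies(latency_ms))  # -> [40]
```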

2. Increased Adoption of ESM Solutions

In an ever-changing digital landscape, IT operations data and expertise are being utilized to improve non-IT areas of the organization such as human resources and marketing. The impact this data has on the rest of the organization has led to increased adoption of Enterprise Service Management solutions as a long-term strategy for business growth.

All departments within the organization can adopt ESM, but adoption should follow an order of priority, and it is management's responsibility to lead the organization through this developmental process.

Adopting an ESM solution helps your organization deliver services successfully even when market competition is fierce, consumer expectations are constantly changing, and the margin for error is minimal.

3. Automation-based Infrastructure Operations

Gartner has identified a rise in the trend of companies adopting automation strategies in an attempt to repurpose IT staff to perform tasks of greater value.

By automating repetitive tasks in the execution process, ITOM solutions help mitigate possible inconsistencies or issues that usually occur when the process is carried out manually.

Because ITOM extends visibility and reach into other IT management processes such as ITAM and ITSM, automation-based infrastructure operations reduce the manual effort those processes demand, freeing up skilled staff for more complex tasks.
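
The sketch below shows the kind of repetitive remediation task that is typically scripted once and then triggered consistently by an ITOM platform -- in this case, clearing an application cache when disk usage crosses a threshold. The path and threshold are illustrative placeholders; a real runbook would pull them from the platform's policy.

```python
import logging
import shutil
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# Illustrative threshold and path; a real runbook would come from the ITOM tool's policy.
DISK_USAGE_THRESHOLD = 0.90
CACHE_DIR = Path("/var/tmp/app-cache")

def disk_usage_fraction(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def remediate_low_disk() -> None:
    # The repetitive, error-prone task that gets scripted once and then run consistently.
    if disk_usage_fraction() < DISK_USAGE_THRESHOLD:
        logging.info("Disk usage below threshold; nothing to do.")
        return
    removed = 0
    if CACHE_DIR.exists():
        for item in CACHE_DIR.iterdir():
            if item.is_file():
                item.unlink()
                removed += 1
    logging.info("Cleared %d cached files from %s", removed, CACHE_DIR)

if __name__ == "__main__":
    remediate_low_disk()
```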

4. Unified Management Solution for All Hybrid Infrastructure

Another emerging trend in the ITOM space is a unified management solution for hybrid infrastructure, also called Hybrid Digital Infrastructure Management (HDIM).

The technology integrates multiple functionalities of routine IT operations such as infrastructure management, data management, cloud management, security, and other ITSM functions into one unified solution.

Because managing hybrid IT infrastructure is challenging, HDIM technologies promise a viable solution that addresses the key pain points of the operational processes and tools required to manage it.

Although HDIM technologies are still in the early stages of development, Gartner predicts that 20% of IT organizations will adopt them to optimize hybrid IT infrastructure operations.

5. Transitioning from Traditional ITSM and ITOM to AIOps

Touted as the next big thing in IT management, Artificial Intelligence for IT Operations or AIOps is the application of advanced technologies such as machine learning and artificial intelligence to automate IT operations within the organization.

Modern IT infrastructure is becoming increasingly complex as enterprises look to adopt newer and more efficient solutions to meet modern-day IT challenges. AIOps helps enhance traditional ITSM and ITOM operations by automating key components of the process.

For instance, an AIOps solution can identify a network outage or problem in real time and use automation to diagnose and fix the error before customers even notice. This shortens incident response times and increases performance efficiency, thereby improving the customer experience.
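
A minimal sketch of that detect-then-remediate flow, assuming a hypothetical alert payload, a Linux/systemd host, and a service restart as the vetted remediation, might look like the following; the escalation hook is a placeholder for a real paging or ticketing integration.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

def escalate_to_on_call(alert: dict) -> None:
    # Placeholder for a real paging or ticketing integration (PagerDuty, ServiceNow, etc.).
    logging.warning("Escalating alert to on-call engineer: %s", alert)

def handle_alert(alert: dict) -> None:
    # Try a known-safe remediation first; involve a human only if it fails.
    if alert.get("type") == "service_down" and alert.get("service"):
        service = alert["service"]
        try:
            # Restarting the service is one plausible remediation; the real action
            # would come from a vetted runbook in the AIOps/ITOM platform.
            result = subprocess.run(["systemctl", "restart", service],
                                    capture_output=True, text=True)
        except OSError as exc:  # e.g., no systemctl on this host
            logging.error("Could not run remediation for %s: %s", service, exc)
        else:
            if result.returncode == 0:
                logging.info("Auto-remediated: restarted %s before users were impacted", service)
                return
            logging.error("Restart of %s failed: %s", service, result.stderr.strip())
    escalate_to_on_call(alert)

if __name__ == "__main__":
    handle_alert({"type": "service_down", "service": "nginx"})
```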

Conclusion:

In the future, IT Operations Management will serve as an anchor for all organizational processes, IT-related and otherwise, to ensure that the delivery of quality IT support services is continuously optimized and improved with time.

ITOM automation will be capable of monitoring alerts and initiating the required responses to events such as network intrusions or server shutdowns, while AI collects operations data from those incidents and helps prevent future occurrences, surfacing its findings through user-friendly dashboards and forecast reports.

An effective ITOM solution lays the foundation for the successful and efficient management of an organization’s IT infrastructure.
