Zabbix Competitors and Alternatives

Get our free report covering Nagios, Centreon, Splunk, and other competitors of Zabbix. Updated: November 2021.
552,695 professionals have used our research since 2012.

Read reviews of Zabbix competitors and alternatives

BU
Systems Engineer at a tech vendor with 201-500 employees
Real User
Top 10
Saves us cost-wise in the amount of time we're not spending with false errors

Pros and Cons

  • "The solution’s overall reporting capabilities are pretty powerful compared to ones that I have used previously. It seems like it has a lot of customizations that you can put in, but some of the out-of-the-box reports are useful too, like user logon duration and website latency. Those type of things have been helpful and don't require a lot of, if any, changes to get useful content out of them. They have also been pretty easy to implement and use."
  • "It needs better access for customizing and adding monitoring from the repository. That would be helpful. It seems like you have to search through the forums to figure out what specific pieces you need to get in for specific monitoring, if it's a nonstandard piece of equipment or process. You have to hunt and find certain elements to get them in place. If they could make it a bit easier rather having to find the right six-digit code to put in so it implements, that would be helpful."

What is our primary use case?

We use it in a few different ways:

  • For general monitoring of operating systems. 
  • Leveraging some customized offerings, specifically for creating application monitoring. 
  • Some external site-to-site monitoring in various places, ensuring that our websites and external pieces are available over an Internet connection. 

How has it helped my organization?

It has given us a clearer view into our environment because it's able to look in and pull things off of the event viewer or log files. We have been able to build dashboards and drill down on things, which has helped improve our time to respond. Also, in the case of specific conditions being met in X log, we have been able to get in and take a look at it a lot faster, rather than trying to connect, parse through the log, and figure it out. It's able to flag that and work us towards a solution faster than normal.

We have a few custom data sources that we have defined, especially for our application. It is able to leverage a specific data source and build monitoring rather than just having it be a part of the general monitoring. It is segmented and customized for what we actually need, which has been pretty helpful.

Custom data sources have given us a bit more information, both from a point-in-time and a historical viewpoint. In the console, it is easy to compare week-over-week or month-over-month traffic and numbers. As changes are made in the environment, we can look and have better historical knowledge, and say, "We started seeing this spike three months ago and this is the change we made," or, "We started seeing this CPU usage reduced after the last patch or software update." It lets us compare and get better insight into the environment over a longer period, rather than just at a point in time, when investigating an issue.

The solution has allowed us to have specific alerting for specific messages. If we know that X message on a notification lets us know this state has happened, we can then set that to be either an email notification or a tracking notification. In cases where a log means that we have a specific issue, we can have it send an email and let us know. Thus, we have a better, faster response. We also have integrations with PagerDuty, which allow us to make things very specific as to the level of intervention and the specific timing of that intervention. It has been nice to be able to customize that down to even a message type and timing metric.
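
To illustrate the kind of message-specific paging described above, here is a minimal Python sketch that triggers a PagerDuty incident through PagerDuty's Events API v2 when a known log signature appears. This is not LogicMonitor's built-in PagerDuty integration; the routing key, log message, and host name are placeholders.

    import requests

    ROUTING_KEY = "YOUR_PAGERDUTY_INTEGRATION_KEY"  # placeholder integration key

    def page_on_log_match(log_line, host):
        """Trigger a PagerDuty incident for a specific, known log message."""
        event = {
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": f"Known error signature detected: {log_line[:100]}",
                "source": host,
                "severity": "error",
            },
        }
        response = requests.post(
            "https://events.pagerduty.com/v2/enqueue", json=event, timeout=10
        )
        response.raise_for_status()  # surface HTTP errors instead of failing silently

    if __name__ == "__main__":
        page_on_log_match("ERROR 1234: payment queue stalled", "app-server-01")

In practice, the monitoring platform fills in the message and host automatically; the point is simply that a specific log signature maps to a specific escalation path and timing.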

The solution’s ability to alert us if the cloud loses contact with the on-prem collectors has been helpful to know about, e.g., if we are having an issue with our Internet connection or in some of our less monitored environments, such as our lower environments in different data centers where we don't have as heavy monitoring. It's helpful to have that external check there, versus our production environments, which are heavily monitored. Typically, we are intervening before it times out and reports that it has lost the connection. It's been helpful to have that kind of information. This way, we know, either via a page or email, if there is any sort of latency or timing issue with it connecting to the cloud. It's been helpful that it's not just relying on the Internet connection at our site; it is able to see into our environment and monitor when there are connectivity or timeout issues.

We use it for anomaly detection because our software is designed to function in a specific way. Anomaly detection is helpful for issues that may not be breaking the software but have it running in a nonstandard way; we can be alerted and notified so we can jump on the issue. Whether the issue is fixed in the moment or handed off to development to find a solution, it's helpful to have that view into how it's running over the long term.

It is a pretty robust solution. There are a lot of customizations that you can put in for what you want it to be checking, viewing, and alerting on. As we get alerts and realize that something isn't an issue we need to be alerted on, or that it happens to be normal behavior, a lot of that information can be put back into the system to say, "Alright, this may look like an anomaly, but it isn't." Therefore, we can customize it so it gets smarter as it goes on, and we're really only being notified about actual issues rather than suspected issues.

It's been helpful to have very specific information about the issues to pass along to development. E.g., we can see an anomaly during the periods of time when this is running, then pass that along so development can figure out, "Is it a database issue, an application issue, or possibly a DNS-level issue?" They also determine if there are further things that need to be dug into or if it is something that can just be fixed by a code change.

The solution’s automated and agentless discovery, deployment, and configuration seems to work pretty well for standard pieces, like Windows servers and your standard hardware. It has been able to find and add those pieces in. Normally, if I'm running into an issue with finding something, it's usually because it's missing a plugin or piece that simply needs to be added in manually. However, 99 percent of the time, it finds things automatically without a problem.

What is most valuable?

The flexibility to build a custom monitor is its most valuable feature. A general CPU or memory check doesn't always give you a full picture, but we can dig into it and say, "These services are using this much, and if these services are using more than 50 percent of the CPU, then alert us." We can put those types of customizations in rather than use the generic out-of-the-box things with maybe a few flags. It's been very nice to be able to customize it to what we need. We can also put in timings if we know there are services restarting at 11 o'clock at night (or whenever). We can put those in so that, as long as it's doing exactly what we want it to do, which is restarting the service, it won't alert us. However, if there are any issues or errors, then it alerts us right away. That's been really helpful to leverage.
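
As a rough illustration of that kind of per-service CPU threshold, here is a small Python sketch using psutil. It is not a LogicMonitor DataSource; the watched process names and the 50 percent threshold are example values.

    import psutil

    WATCHED = {"postgres", "nginx"}  # example service process names
    THRESHOLD = 50.0                 # percent of CPU, as in the example above

    def check_service_cpu():
        """Return alert messages for watched processes exceeding the CPU threshold."""
        alerts = []
        procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] in WATCHED]
        for p in procs:
            p.cpu_percent(None)           # prime the per-process counters
        psutil.cpu_percent(interval=1.0)  # sample over one second
        for p in procs:
            usage = p.cpu_percent(None)
            if usage > THRESHOLD:
                alerts.append(f"{p.info['name']} (pid {p.pid}) at {usage:.1f}% CPU")
        return alerts

    if __name__ == "__main__":
        for message in check_service_cpu():
            print("ALERT:", message)

A real deployment would feed these values into the monitoring platform's datapoints and apply maintenance windows (for example, around the 11 o'clock restart) rather than printing them.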

We use a few dashboards. A couple are customized for specific groups and what they maintain. As I am doing projects, I'm able to make a quick dashboard for some of the things that I'm working on so I can keep track without having to flip between multiple pages. It seems pretty flexible for making simple use cases as well.

I have a custom dashboard which monitors each site and does virtual environment monitoring, such as CPU, memory, timing, etc. It was easy to get in place and adjust for what I wanted to see. It has been one of the go-to dashboards that I have ended up utilizing.

We get a single pane of glass and can view specific functions, whether it be sites or the entire environment. We are able to quickly get in and see what's going on and where issues are coming from, rather than having to hunt down where those issues are. Therefore, it has helped us more with our workflow than with automating functions.

The solution’s overall reporting capabilities are pretty powerful compared to ones that I have used previously. It seems like it has a lot of customizations that you can put in, but some of the out-of-the-box reports are useful too, like user logon duration and website latency. Those types of things have been helpful and don't require a lot of, if any, changes to get useful content out of them. They have also been pretty easy to implement and use.

What needs improvement?

It needs better access for customizing and adding monitoring from the repository. That would be helpful. It seems like you have to search through the forums to figure out what specific pieces you need to get in for specific monitoring, if it's a nonstandard piece of equipment or process. You have to hunt and find certain elements to get them in place. If they could make it a bit easier, rather than having to find the right six-digit code to put in so it implements, that would be helpful.

For how long have I used the solution?

Personally, I've been using the solution for about a year. We've had it in place for about a year and a half, but I came to the organization about a year ago.

What do I think about the stability of the solution?

I don't think we've really had a time where the application or monitoring nodes have failed. The connection to LogicMonitor has been very stable. We haven't had any connection issues to the SaaS offering. It's been pretty resilient and stable from our end.

What do I think about the scalability of the solution?

The scalability seems fine. Every time we've had to expand and add elements, we've not run into any delays or issues with it. It seems to expand with us as we've needed to use more features. We haven't had any issues with delays or timing. It's been able to handle what we've thrown at it.

There are at most 10 users at our company, who do everything from application monitoring to platform engineering, plus some developers who have access to the solution for certain monitoring pieces. Varying segments have been able to get in, and they all seem to have had pretty good luck with accessing and using it.

We are using LogicMonitor pretty extensively. We're using it from low-level environments, development, and quality assurance, all the way up to user testing and production. We have leveraged it in as many segments and parts of the business as we can. It has been really helpful to have it handle different workloads, but also be customized. This way, we're not getting alerted at 2:00 AM because a switch in the office is reporting an issue; instead, we can adjust those timings to report at specific times of the day rather than any time during the day.

We have about 1,000 devices in total, including VMs and physical devices.

How are customer service and technical support?

The technical support has been pretty good. I haven't had to leverage it myself, but some of the people I work with have taken it on when we have had questions or issues. They seem to be fairly responsive and the timing is usually good. We usually hear back in minutes instead of hours. We haven't had any major issues with them.

Which solution did I use previously and why did I switch?

We've eliminated three different monitoring tools by leveraging LogicMonitor. We had two different in-house, custom built tools that were used for a long time that we were able to roll off, and we also used Nagios. I have also used Zabbix and Orion.

LogicMonitor has reduced our number of false positives compared to how many we were getting with other monitoring platforms. We leveraged the solution to focus it down and only look at the specific things that need monitoring. For example, rather than getting notified every time a service is down, if it's not a critical service, we can just get a flag, go back, and check it, instead of getting spammed with hundreds of emails about specific things being down. Thus, we can customize it for what we actually want to know and filter out the non-issues.

How was the initial setup?

It had already been implemented before I joined the company. We've added a few functions since then, but the core and initial launch of it had already been implemented and was heavily used by the time I joined.

What was our ROI?

We have definitely seen ROI.

We have seen probably an 80 or 90 percent decrease in false flag alerts.

Our people are able to be more proactive, rather than having to parse through everything and figure out whether something is an issue or a non-issue, which cuts down on the personnel time spent managing the day-to-day processes. That's been helpful. At least from conversations I've had with management, they seem to have found it to be a good investment and a solution for getting our normal work done, but also for making sure that we're ready to go if something does go wrong.

What's my experience with pricing, setup cost, and licensing?

It definitely pays for itself in the amount of time we're not spending on false errors or on things that we haven't quite dealt with in monitoring. It has been good cost-wise.

What other advice do I have?

I would definitely recommend LogicMonitor. It's something to look at, either by signing up for a trial or through a use case process. It's been a great product. It has customizations when you want them, and out-of-the-box solutions if you don't. It works and is reliable. Compared to other monitoring platforms I've used in the past, it seems to be the most powerful and robust that I've dealt with.

The solution monitors most devices out-of-the-box, such as Windows, Windows Server, Linux, F5 load balancers, Cisco firewalls, and Cisco switches. Those have been pretty easy to monitor. Our issues have been with one-off or nonstandard platforms that we've implemented. Otherwise, everything has been pretty easy to implement.

I would rate it as a solid nine (out of 10).

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Maharajan S
CISO / Associate Vice President - IT Infrastructure at a pharma/biotech company with 501-1,000 employees
Real User
Top 5
Provides data accuracy for availability and policy harmonization

Pros and Cons

  • "Our response time is within 30 minutes for any support. This solution provides alerts immediately, so we are within our SLA, giving efficiency to our support."
  • "This solution is available in SaaS. The reason why we have not gone to SaaS is they do not have a country-specific separation of assets. There are GDPR and other requirements that might require country-specific sensitive information to be filtered as well as other things that need to be taken care of. Normally, if we need to do any compliance, like ISO27000 compliance, they don't have such a report within their system. This kind of report is missing from their SaaS. That is one of the reasons that we have gone to the on-prem version, where I am assured that my data is secure."

What is our primary use case?

We are geographically spread across 11 countries. At each location, we have a firewall and other critical IT infrastructure. We have to log in to all the systems and different URLs, so we are very dependent on some individuals who have the knowledge, control, or access. Moving to this system, I have a single portal where I can access all 10 locations' firewalls from that portal with easy manageability.

We are in the life sciences domain with a lot of customer-hosting apps in our AWS cloud. We deployed this monitoring system in our on-premises environment to monitor all the critical IT infrastructure.

We are using the latest version.

How has it helped my organization?

We use the solution to automatically trigger processes that help resolve issues when it detects compliance violations. While they don't have a compliance report, this capability exists in our environment. For example, our system is ISO 27000 compliant, and although the tool can miss this, our system goes through the on-premises process. We cover segregation of duty, data storage, and the level of data encryption, as well as how the server is protected from the outset. We took all of these things and kept them, since it is under our validated environment. Any system implemented with us has to follow this process. We can confidently say that our system is compliant, but the moment we move to SaaS or hybrid, we won't have that control because they don't provide this. So, they need to build this sort of capability into SaaS or hybrid.

I have a Moscow office. In Moscow, I don't have an IT engineer; we have a very small team in a satellite office. We can easily manage the firewall, servers, and other things from here. Operating a centralized kind of implementation for any new initiative is a big challenge for us. However, by implementing this monitoring tool, I can write any policies or procedures centrally. The process is harmonized, so I don't need to worry about whether these policies play well with a particular Germany or Moscow firewall. This is more of a control mechanism. We could see the results after implementing this tool.

Manual or time-consuming activities have been reduced by implementing this solution. Getting this information from each site takes a lot of time, and sometimes we get the wrong updates, where the accuracy is not intact. Implementing a centralized tool that manages the availability and health of faraway systems was ideal, rather than doing it manually. It was a learning curve for us, though.

What is most valuable?

The most important part is the real-time network monitoring dashboard. It pops up when you log into the system so it gives you clear-cut, real-time availability of the firewall/gateway-level infrastructures.

My network team, the server team, and I have different dashboards. There is also a complaints manager who has different access. These different dashboards are important because we are in the life sciences domain, and segregation of duty is very important.

The role-based dashboards summarize data points as well as provide charts and topology diagrams in a single window. We support all other regions from India, so it helps that the dashboard is a single point of entry to each site for managing those infrastructures.

The dashboards tell us the details. For example, even in the firewall, I can go to the port level. Then, on the port level, I can deep dive on the configuration. It will also go into the level of services, memory, CPU, and storage availability. From the dashboard, you can look at that specific infrastructure or asset.

The graphical user interface is very good. It is readable and doesn't require a technical expert, which is critical. You don't need a network administrator or some other administrator to see the monitoring or anything else. Non-technical people can log in and understand it.

Infraon's individual tunnel monitoring capabilities are more critical on the firewall side because we have a lot of Point-to-Point Tunnels created. Tunnel usage is more critical when a ransomware attack or any other attack has happened. When I implement a policy for a particular configuration, it will apply to all the tunnels. That makes it easy for us to manage and maintain. This is a very important feature.

What needs improvement?

The reporting capabilities are a challenge and could be improved. We have been trying to connect to it from our help desk ticketing system, because the ticketing system manages asset tracking, which has been a bit challenging for us. Otherwise, they give some reports that are okay, but we do not use them much because we work in the dashboard. 

This solution is available in SaaS. The reason why we have not gone to SaaS is they do not have a country-specific separation of assets. There are GDPR and other requirements that might require country-specific sensitive information to be filtered as well as other things that need to be taken care of. Normally, if we need to do any compliance, like ISO27000 compliance, they don't have such a report within their system. This kind of report is missing from their SaaS. That is one of the reasons that we have gone to the on-prem version, where I am assured that my data is secure. I can take the report and show it to them from a compliance point of view. However, the moment we go to a SaaS model, I don't have control of the data and where the data is stored. I don't receive any complaints-based reports from the SaaS model.

For how long have I used the solution?

We have been using this solution for four to five months, including the implementation and PoC. We did the PoC in November 2020.

What do I think about the stability of the solution?

It is stable. We have never had an issue.

What do I think about the scalability of the solution?

Since it is on-prem, storage and our virtual environment are within our control. There has been no issue in terms of scaling up with the system. The scalability is good.

We have five to six people working in the system for different purposes. I log into the system purely for availability, systems' health statuses, and other things. At the same time, a network engineer will have much more involvement than that.

Within our system, we have around a 34-member team. Out of those 34 members there are only five or six people using this system because I don't want to give everybody a login with access to it. Since we centralize the management of the system, there are only a few people who have access. We built it in such a way that we manage it with limited resources.

How are customer service and technical support?

The technical support is good. They are very aggressive. They understand that requirements are very important. 

Which solution did I use previously and why did I switch?

Earlier, we were using Zabbix, which is open source. We had a lot of challenges with it. We had to build a distributed Zabbix environment to get a different kind of reporting, and we were set up on that. While the product was very good, we were not capable of properly implementing it.

Infraon IMS reads firewall logs, which is an important reason why we chose this product. There were other products where we had an issue reading the logs of firewalls and other things. Most of the tools provide an SNMP log, but we can read syslog and other firewall logs with this solution. The best part: our policies can be driven from this system and applied to multiple firewalls. For example, if I am writing a rule for some URLs or specific sites to be blocked, I can write one single policy which can be pushed to all 10 different locations. Earlier, we used to log into each system and do this process. Now, the system takes care of pushing these common policies.
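
To show the mechanism being described (receiving firewall syslog rather than relying only on SNMP), here is a minimal Python sketch of a UDP syslog listener. It is only an illustration of the underlying idea, not Infraon IMS's implementation; the parsing is simplified, and standard syslog normally uses UDP port 514, which requires elevated privileges, so 5514 is used as a stand-in.

    import socketserver

    class SyslogHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data, _socket = self.request  # UDP handlers receive (bytes, socket)
            message = data.decode("utf-8", errors="replace").strip()
            # A real collector would parse the priority/facility and match policy rules here.
            print(f"{self.client_address[0]}: {message}")

    if __name__ == "__main__":
        with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
            server.serve_forever()

Each firewall would then be pointed at the collector's address as a syslog destination, which is what allows one central system to see logs from all locations.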

This tool was introduced by one of our vendors. Through them, we got to know this tool and engage with it. 

How was the initial setup?

We built a PoC where we provided all of this information. The PoC was up and running within 30 days. Once the PoC was complete, we effectively upgraded the system to production. That is how it happened, so the implementation was very smooth.

We started with a PoC for around 20 assets. This takes a day or two, but it took a lot of time to understand the configuration and make changes. That took a couple of weeks because we were not familiar with their dashboard and they were not familiar with our life sciences domain requirements and regulatory requirements. That was the challenge. Once they understood our requirements, the configuration part was more like a day-to-day job.

What about the implementation team?

The team is very eager and aggressive on this. Priya put a lot of effort into the system. She provided more clarity on how to implement it. She also understood our requirements. Any tool implementation is successful based on the people who were involved and how well they understand the customer requirements and implementation. In this case, the vendor's team was good. 

At most, two to three major players were involved from our end, maybe someone from network administration and another person on the server side. They were directly involved, but there were a few other people, like the site engineers, who contributed but weren't directly involved.

For setup and training, we only ever worked with the Everest team.

What was our ROI?

It gives us a lot of time savings. 60% to 70% of our time has been saved.

We are able to see the availability. Before people know that the infrastructure is down, we are able to get this information from the system. That is critical as far as infrastructure operations go. This solution provides cost savings and is effective.

Our response time is within 30 minutes for any support. This solution provides alerts immediately, so we are within our SLA, giving efficiency to our support.

It improved our data and availability accuracy over doing the work manually. Once we installed this central system, our site engineers who provide the data started believing in the data's accuracy.

What's my experience with pricing, setup cost, and licensing?

The cost model is within our budget. I have less than 180 critical assets, but the moment that I have 1,000 assets, then the license model is totally different. I don't know whether they are capable of handling that kind of a load. They could revisit the licensing model. They are not mature enough to define this license. We had a discussion about that. 

They have given us different services as separate licenses, but the cost is not broken out proportionally against those services. The cost was one number, while the number of services was specific to the license. For example, server licenses have X quantity and network licenses also have X quantity, but they cumulate the cost and then provide it. They don't provide the unit cost. Normally, when you work on costing, you should have some kind of clarity about standard, professional, or enterprise models, or go with a unit-based license. So, we redid our licensing cost and they provided it. They should work on their licensing model.

Which other solutions did I evaluate?

We evaluated ManageEngine and this solution. After doing the PoC for Infraon IMS, we were happy with it so we ended up implementing it. We didn't go with other tools because of cost and the support from the bigger players is limited. We got burned with an implementation of a bigger player previously and were not keen on going that way.

Normally, you have a product for different sectors. For example, network management will have a separate tool from server management. Here, it is a mixture of these tools in one system. Additionally, you can do vulnerability and penetration testing from this associated product. You can do network auditing, vulnerability assessment, and penetration tests on a particular critical infrastructure. Plus, you can do monitoring. I didn't see many tools that had this combination of services. There are many enterprise tools available, but we cannot afford those. This solution was something that we could afford and achieve what we really required.

What other advice do I have?

I would recommend this tool for people who want to have data accuracy in terms of availability and policy harmonization. They should look for this tool.

We are very good at integrating it with third-party applications, like AWS and other information security platforms. For our SOC, we build using some other tools, like Acunetix as SAS programming. We have integrated all these things.

I haven't seen any workflow automations.

We plan to increase our licenses going forward. However, Everest is a small company, and that has risks. I don't know their five- or 10-year plan. They need a proper roadmap for customer support, engagement, etc.

I would rate this solution as an eight out of 10. The licensing model, the compliance report, and the integration of other tools are the little challenges that we have with the tool. Aside from that, our requirements have been fulfilled, and we are happy with the tool.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
SB
Director of Professional Services at a tech services company with 11-50 employees
Real User
Top 20
Good visibility and support, but it would be easier to have remote sessions into the box

Pros and Cons

  • "It lets you know what your infrastructure is like and what state you are in."
  • "It would be nice to have everything in one place. Now they have Intune for the desktops and SCCM to handle their servers."

What is our primary use case?

We use it and our clients use it for device patch management, servers, and management processes.

We deploy it for clients but we don't usually maintain it for them.

What is most valuable?

The best thing about SCCM is the patch management. You can make sure that all of your devices are there. You can see all of them and see your levels.

It lets you know what your infrastructure is like and what state you are in.

SCCM internally works great. On your internal infrastructure, it is fantastic. It gives you everything you want it to do.

What needs improvement?

Because of the way SCCM is, we are moving to the Intune platform, just like everybody else. Microsoft is slowly migrating SCCM to the new Intune product for management.

There are so many issues with SCCM, but they are already working on migrating the desktop to the Intune platform. They have already improved the management and the patch management. They are also looking at cloud integration and being able to deploy it in Azure properly and run the Azure infrastructure.

The main or legacy issue is not being able to do remote management of devices without being on a VPN to get their updates. It didn't work well on non-corporate networks. This has been resolved by the new Intune platform.

It's Microsoft, they have their issues, but they are getting better. They are integrating it with their office products, and their platforms.

In the next releases, I would like to see them make it easier to do remote sessions into the boxes.

It would be nice to have everything in one place. Now they have Intune for the desktops and SCCM to handle their servers.

For how long have I used the solution?

I have been using SCCM for ten years.

We were using some of the older versions.

What do I think about the stability of the solution?

The stability is only as good as your infrastructure.

What do I think about the scalability of the solution?

The scalability of SCCM is good, but now that it is on the Intune platform, it's even better.

The usage and how extensively it is being used depends on the client and the client's roadmap.

How are customer service and technical support?

As gold partners, you have a direct line to Microsoft technical staff. It is easy for us to get support.

Our experience with the support is a positive one.

Which solution did I use previously and why did I switch?

I have been using Zabbix for ten years. I have deployed it in my infrastructure.

I have integrated it with Grafana.

How was the initial setup?

The initial setup is pretty straightforward.

Depending on the customer and their infrastructure, it can be easy. If it is a small infrastructure, the installation can be quite quick. You can fire up SCCM, send the probes, let them detect everything, and put it in.

For large infrastructures or complex networks, it can be more difficult. It can take as long as a day to get it all set up and running or it could even take a week.

One of the joys of SCCM is that one person could easily maintain it but we have two people from the service desk.

What's my experience with pricing, setup cost, and licensing?

They are always changing their price model, which I don't like. It would be better if they didn't keep adjusting their price model.

The price model is different for every client. It depends on the corporation, the company's subscription balance, and how many machines they have. For us, it fluctuates. 

Some clients have a smaller infrastructure, and for those with large infrastructures, it will cost them more. Others will also have multiple versions of it for backup and failovers.

Which other solutions did I evaluate?

I was looking for a comparison to see if I want to propose them to some of my clients.

What other advice do I have?

If you are implementing from new, go with Intune directly, don't use the on-premises version.

With the transitioning state to the cloud versions, I would rate SCCM a seven out of ten.

They have handled desktops very well but they haven't transitioned servers very well.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Johnathan Bennett
Sr. ServiceNow Developer at a retailer with 10,001+ employees
Real User
Top 20 Leaderboard
Reliable, helps improve efficiencies, and uses AI to help build events

Pros and Cons

  • "It does a good job of collecting the data that's necessary for data centers, and IT's operations."
  • "When you switch versions, for example, when you go from Paris to Quebec they will introduce many new things and occasionally things break when they do that."

What is our primary use case?

We use this solution to populate the CMDB and to track changes in our environment.
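
As a hedged illustration of what "populating the CMDB" looks like from the outside, the sketch below queries server CIs through ServiceNow's REST Table API. The instance URL, credentials, and field list are placeholders, and cmdb_ci_server is just one of the CI classes Discovery can populate.

    import requests

    INSTANCE = "https://your-instance.service-now.com"  # placeholder instance URL
    AUTH = ("cmdb.reader", "password")                  # placeholder credentials

    def list_discovered_servers(limit=10):
        """Fetch a few server CIs from the CMDB via the Table API."""
        response = requests.get(
            f"{INSTANCE}/api/now/table/cmdb_ci_server",
            params={
                "sysparm_fields": "name,ip_address,os,sys_updated_on",
                "sysparm_limit": limit,
            },
            auth=AUTH,
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["result"]

    if __name__ == "__main__":
        for ci in list_discovered_servers():
            print(ci["name"], ci.get("ip_address"), ci.get("sys_updated_on"))

Checking a field like sys_updated_on is one simple way to confirm that Discovery is keeping records current, which ties into tracking changes in the environment.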

How has it helped my organization?

It improves process efficiencies. 

When people need to query for information, they don't have to go to seven different people. They can go to a single source.

What is most valuable?

It does a good job of collecting the data that's necessary for data centers, and IT's operations.

When it comes to the internal data centers and the on-premise data centers, they are pretty good.

What needs improvement?

When you switch versions, for example, when you go from Paris to Quebec they will introduce many new things and occasionally things break when they do that. You usually find out after the fact when you stumble into it.

I currently have an issue that we just stumbled into, where our bucket wasn't populating correctly from our own Google Cloud. They're trying to figure out how to fix that.

They should include support for Google Cloud.

For how long have I used the solution?

I have been working with ServiceNow Discovery for ten years.

The version we are using is Paris.

What do I think about the stability of the solution?

It's a reliable product.

What do I think about the scalability of the solution?

Scalability is pretty good, but they are lacking a bit in the Google Cloud realm.

They are really good with others, such as Azure, AWS, and IBM. They seem to have matured those a little better. Maybe the problem is with Google Cloud being a partner and keeping up with them. I am not sure where it's lacking. Is it ServiceNow or just Google Cloud?

Our users vary from technical people to managers to business people. We do it that way to assign costs so things could be lower.

With the first couple of reports, managers were surprised at the cost, but when you do it in increments throughout the year, you don't realize what the total is coming to until you get to the end of the year.

How are customer service and technical support?

For the most part, ServiceNow has gone through a few growing pains. They have grown rapidly. 

The first line of support is sometimes lacking. Once you get to the second and third-level people, they are good.

I would rate technical support an eight out of ten. After the first line, you can tell the person is pretty new.

Which solution did I use previously and why did I switch?

We used Maximo Tatem and it was really cumbersome. 

The data was excellent but you had two different vehicles.

You had your Discovery pool, and you had another server that took the data from Discovery. That had to be mapped into the Maximo database. You then had a Discovery server, with an in-between server that did the translation and put it into Maximo.

It was convoluted.

The Discovery engine was wonderful, but getting the data from Discovery back into the Maximo database was difficult.

How was the initial setup?

The initial setup was pretty straightforward.

What's my experience with pricing, setup cost, and licensing?

The price could be better. It's a bit on the pricey side.

Which other solutions did I evaluate?

As a product, they are probably right at the top. Their only competition I would say right now would be Dynatrace.

Dynatrace is missing a few things, but so is ServiceNow. You can take your pick, as they are both good.

What other advice do I have?

The data is really good, it's reliable and they keep adding to it. 

They are using AI with a lot of cases to help build the events, which is good.

I would rate ServiceNow a nine out of ten.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
IT Architect at a comms service provider with 10,001+ employees
Real User
Top 5 Leaderboard
Multi-customer approach, easy to scale, and reliable

Pros and Cons

  • "The most valuable feature is the event correlation mechanism."
  • "If the integration is simplified or improved, it will be a unique selling point in comparison to the competition on the market."

What is our primary use case?

We use DX Spectrum for monitoring secured networks.

What is most valuable?

The most valuable feature is the event correlation mechanism.

I also like the product's multi-customer approach.

What needs improvement?

They need seamless integration with Broadcom's cloud-based management products.

In the future, more cloud-based solutions will be offered. It is necessary to strengthen the integration with other cloud-based products.

They also have another product, Operational Intelligence, which is currently standalone and requires tighter integration. That integration should be simplified as well; it is rather complex to create and doesn't match the ideal situation.

If the integration is simplified or improved, it will be a unique selling point in comparison to the competition on the market.

For how long have I used the solution?

I have been working with DX Spectrum for 15 years.

What do I think about the stability of the solution?

Spectrum is quite stable.

What do I think about the scalability of the solution?

DX Spectrum is very scalable.

How are customer service and support?

Technical support is okay. A few years ago it was better, and I would have given them a ten out of ten, but now I would give them an eight out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We have extensive knowledge when it comes to using Spectrum. Some teams around me are using Zabbix.

How was the initial setup?

The initial setup is quite complex. It takes a lot of knowledge, but it is well worth the effort.

What about the implementation team?

We have our own consultants who are very knowledgeable key users.

What's my experience with pricing, setup cost, and licensing?

The price is high enough to expect an ideal solution. It's expensive.

The price should be lower. It's not cheap, but we are willing to pay for it. For example, if you are asking for a Rolls Royce, you have to pay for a Rolls Royce.

Which other solutions did I evaluate?

I have been evaluating the differences between Spectrum and Zabbix, as well as the pricing.

What other advice do I have?

For us, within that technology area, it is the standard for solutions that we can select. We haven't seen a better product.

Overall, I am happy with this solution.

Based on the current market, I would rate DX Spectrum a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.