
Network Monitoring Software Linux Reviews

Showing reviews of the top-ranking products in Network Monitoring Software that contain the term Linux
SCOM: Linux
Information Technology Auditor at a financial services firm with 10,001+ employees

My primary use case for SCOM is monitoring service availability and performance, such as operating systems. We also integrate some Linux-based operating systems to monitor our databases, and we monitor Microsoft Exchange. We are having some difficulty monitoring a couple of our networking devices, so I wouldn't say that monitoring networking devices is part of the primary use cases.

I also monitor Internet Information Server and Application Services from Microsoft.

View full review »
NM
IT Officer at a financial services firm with 501-1,000 employees

We are using the SCOM server for monitoring our network devices, our Windows servers, and Linux servers.

View full review »
Solution Architect at KIAN company

In recent years, no doubt it's improved. 

That said, at the time I used it, System Center only provided upgrade and update features for Windows clients and Windows systems, and did not support Linux, Android, iOS, or other operating systems. They need to provide better integration with other operating systems if they don't already.

The initial setup should be a bit more straightforward.

View full review »
Nagios XI: Linux
AS
IT-OSS Manager at a comms service provider with 501-1,000 employees

The initial setup was straightforward. It's easy to install, but you need some information about the Linux operating system.

It took less than one hour to deploy.

View full review »
ManageEngine OpManager: Linux
IA
Network Engineer at a non-tech company with 5,001-10,000 employees

The initial setup is very easy to deploy. It supports the SMB (Server Message Block) and WMI (Windows Management Instrumentation) protocols, so it can support multiple platforms such as Linux, Windows, and Cisco devices.

View full review »
Zabbix: Linux
Engineer of Telecommunication at Gold Telecom

The initial setup is very simple, and for simple network monitoring we didn't need to activate any special features on the Linux server.

View full review »
Principal Technical Consultant at Ciber

We, an MSP, use this solution for enterprise-wide monitoring and alerting for network devices, appliances, Linux, Windows, Exadata, ODAs, Oracle, PostgreSQL, SQL Server, and MySQL databases.

View full review »
MD
ICT Network Infrastructure & Architect at a transportation company with 1-10 employees

The NetFlow integration really could be improved upon. In general, integration with other solutions and services needs to be worked on.

The documentation could be improved. I find that it's a bit limited.

It runs on a Linux backend, and we're using it to monitor our Windows platforms. For people who are not familiar with Linux, there may be issues. You do need to have a bit of Linux knowledge before beginning; otherwise, you may have problems working with it.

View full review »
Co-Founder at Nobius IT

I've deployed so many times that the initial setup is straightforward, but I would say that for someone who is totally inexperienced in Linux, it can be a little time consuming. If you understand a little about Linux, then it's no problem. A full system can easily be configured in two hours but it took two days the first time I did it. If you're not a technical person you can still install it but it will likely take some time.

As an example, configuring SNMP trapping into Zabbix needs configuration outside of Zabbix itself. This is not complex, but can slow down the process for inexperienced installers.
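
To illustrate what that outside configuration typically looks like, here is a minimal sketch based on the standard snmptrapd-plus-trap-receiver approach that Zabbix documents; the file paths and community string are illustrative and vary by distribution and Zabbix version:

    # /etc/snmp/snmptrapd.conf -- hand incoming traps to Zabbix's Perl trap receiver
    authCommunity execute public
    perl do "/usr/share/zabbix/zabbix_trap_receiver.pl";

    # /etc/zabbix/zabbix_server.conf -- make the server read the file the receiver writes
    SNMPTrapperFile=/tmp/zabbix_traps.tmp
    StartSNMPTrapper=1

After restarting snmptrapd and zabbix_server, traps should show up against SNMP trap items on the matching hosts, but it is this extra, non-Zabbix configuration that can slow down an inexperienced installer.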

View full review »
Engineering Supervisor - Corporate Data Solutions and Services at TZ Telecoms Corporation

Zabbix is open-source so if one wishes to implement it in-house, they must have qualified professionals to set up and optimize databases, Linux/Unix OS, PHP, Apache, and depending on what is monitored, a full-stack network and systems administrator may be needed.

Zabbix provides support although we have not subscribed to the support. We implemented the instances on our own and we also operate and maintain them on our own.

View full review »
President / Director of IT Services at Atig network

The basic setup is very easy. I installed it in a Raspberry Pi with a Linux version. It took perhaps 20 minutes to install, at the most.

View full review »
EB
Infrastructure Manager and Security at ITG

My advice is that the person who is responsible for implementing Zabbix in their environment should be familiar with Linux because then the process is more simple, efficient, and takes less time.

I would rate Zabbix an eight out of ten. 

View full review »
JL
Senior Specialist Critical Infrastructure at an educational organization with 5,001-10,000 employees

I would definitely recommend it. It is very good for what I want it to do. I would recommend getting your Linux and database teams involved very early on in the journey, and when you are deploying, make sure that you are targeting the more important applications in your portfolio. Don't just try and deploy it on everything straight off the bat. Try and pick some critical applications to look at and build the value in the product in the initial phase, and that usually gets people interested in the application and moving forward. That would be my advice to people. One of my drawbacks was that I waited a bit too long, and when I brought them on board, I had already built most of the environment myself. I should have gotten them involved a lot earlier. It is not really a bad thing, but you can't do everything yourself, so try and get people on board.

I would rate Zabbix a nine out of ten. I am pretty biased. I really enjoy using Zabbix, and I feel it does what I need it to do. It definitely ticked the boxes. In my current role and in three years, I've gone from demoing Zabbix, doing a proof of concept, and integrating it with a few things to the boss turning around and saying, "Right, make it production." I have to admit that everybody that has come into contact with it or I've presented it to has been very pleased with the results. It has been a very good fit. I can only compliment the tool. 

I am not giving it ten because it's not perfect. I don't think any monitoring tool is absolutely a hundred percent perfect. There is always room for improvement, but this has to be one of the better ones. I know what I'm doing, and I could do more if I had support from them, but what you can do with the tool is very good as compared to other tools that I've tried out in the past, such as Nagios. With Nagios, if you really want the full functionality, you have to pay for it. Here, they give you that functionality. You've just got to know how to use it. It is very clever, and it has definitely won me over as a tool. Thanks to deploying and using Zabbix, I have learned a lot of stuff around Zabbix as well. I have learned a lot about different tools such as Linux, MySQL, and Postgres that are needed to run the service. It has been good. I have enjoyed it a lot.

View full review »
CEO/Founder at Zen Networks

Its initial setup is very straightforward. You need prior knowledge of Linux, but you don't need specific knowledge of Zabbix to deploy it. It is really straightforward and lightweight. Its deployment could take as little as one hour per person.

You simply download the packages. For a small deployment, you install them in the same box. There are three main components that you have to put in the same place, and that's it. It is not really complex to set up. Zabbix isn't really geared like some of the other solutions where there are different modules for each part. Zabbix is monolithic. You have a core system that can do everything, and it is extended with the plugins that provide additional integration and monitoring, but the framework and the UI are in one package or software.
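
As a rough illustration of how small that package set is, a minimal install sketch on a Debian-family server with a local MySQL database might look like the following; package names and the schema path shift between Zabbix versions, so treat it as an outline rather than exact commands:

    # install server, web frontend, agent, and a local database
    sudo apt install zabbix-server-mysql zabbix-frontend-php zabbix-agent mysql-server

    # create the database and load the schema shipped with the server package
    sudo mysql -e "CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;"
    sudo mysql -e "CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost';"
    zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -pchangeme zabbix

    # point the server at the database and start the services
    sudo sed -i 's/^# DBPassword=$/DBPassword=changeme/' /etc/zabbix/zabbix_server.conf
    sudo systemctl enable --now zabbix-server zabbix-agent apache2

Everything above runs on one box for a small deployment, which matches the reviewer's point that the core, frontend, and database can simply be co-located.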

You definitely need someone to administer the platform after it is deployed. Otherwise, it is a bad deal. The number of people required for maintenance depends on the site. It can start with someone part-time, and it can end with two full-time persons developing scripts and plugins. Post-deployment maintenance also depends on the monitoring requirements. You can't have a monitoring solution that is central to your network and sees everything but doesn't change as your network changes. If your network changes, your solution has to adapt to it, which is normal for all monitoring solutions. Similarly, if you have too many metrics, you would require some database tuning as the solution gets bigger. 

View full review »
PRTG Network Monitor: Linux
JG
Senior Network Analyst at New Signal Systems

The up-to-date graphs and the history are very good.

They keep adding services, such as MS SQL monitoring and email monitoring, to keep the different parts of the email flow going.

The implementation is easy, initially.

Also, the servicing of Linux and Windows operating systems is useful.

View full review »
SevOne Network Data Platform: Linux
MD
Sr. IT Engineer

The initial setup is a little bit complex: you first have to create the main database. After you create the database, make sure you start collecting. Then you have multiple collectors that collect the information and send it to the database. It is really technical, and it's Linux-based.

The setup takes about one or two days.

Usually, when you do an upgrade it takes eight hours.

View full review »
ITRS Geneos: Linux
SG
Senior Enterprise Management Administrator at a financial services firm with 501-1,000 employees

It's absolutely scalable. 

The only place that we have a problem with scalability is with what is called the UL Bridge dashboard. That is an API stream that goes to the net probe. We're just sending so much data that sometimes the net probe suspends, so we're not seeing the data. That's the only place where we really have an issue. But I don't think it's the ITRS functionality that is responsible. I think it's our software just sending too much data.

In terms of the possibility of increasing usage, everything is pretty stable. The servers that we have them on are all Linux servers with more than enough CPU and memory. I've never really run into a utilization problem on any of the servers where ITRS is running.

View full review »
Zenoss Service Dynamics: Linux
TM
Principal Infra Developer at a computer software company with 10,001+ employees

As a Zenoss partner, the use cases have varied, based on the requirements. When I started using it in 2012, we were asked to migrate a set of technologies, like Windows and Linux monitoring, infrastructure monitoring, from different tools like SolarWinds and Tivoli to Zenoss, so that we would have a single pane for unified monitoring; a global operations manager. That, in turn, was integrated with ITSM tools like ServiceNow or CA. 

When I joined a different organization, it was used mostly for direct monitoring of vSphere and NetApp, with Windows and Linux of course, and some integration with SolarWinds and trap-based integrations. Internally we integrated it with ServiceNow. I implement that, integrate it, and hand it over to the daily support team.

We implement some customers on-prem and some customers we do using AWS or Azure cloud. It varies.

View full review »
DX Spectrum: Linux
MT
System Administrator at a government with 10,001+ employees

The visuals are good but a little archaic. I think it's the way it's compiled, because I've been struggling recently with deploying it in different departments. The software still uses 32-bit libraries on the Linux boxes for installation. Every admin scratches their head when I ask them to install 16 or 17 32-bit packages on a 64-bit architecture.

Also, they dumped all the documentation from the old versions, re-tagged it with the new version, and haven't updated it for a few years. That's a bit of a problem. They need to get rid of Java and the 32-bit applications, and that would be good.

View full review »
LogicMonitor: Linux
DD
System Engineer at IFM Efector, Inc.

We have a network that comprises a bunch of Windows Servers, Linux servers, CentOS, and a variety of network devices, such as Cisco routers, Cisco switches, Riverbeds, and some VeloCloud. We use the service to monitor and alert us to any potential issues that we may be having. We also use it to do some ping tests and to monitor the availability of some websites as well.

The whole purpose is to give the IT team a heads-up, before the user base is aware of an issue. There are different levels in the system from "warning" up to "critical" that can let us see that a situation might be developing, that we might be having a problem with a system. We can proactively take care of it before it transitions to a level where it might affect our users and prevent them from doing their daily work.

View full review »
Network Operations Center (NOC) Manager at a tech services company

The automated and agentless discovery, deployment, and configuration were major selling points for us — the fact that LogicMonitor doesn't require any agent installations. That makes our lives much easier because we don't have to take care of the installations and the maintenance, etc. It's very good and very useful.

The setup is straightforward. We just create the credentials, install the needed collectors to the networks we want to monitor and that is pretty much it. I don't think a monitoring solution could be any more ready for setup than this.

It monitors pretty much everything we have, all devices, out-of-the-box, including 

  • Windows Servers
  • Linux servers running Tomcat and Apache applications
  • Microsoft SQL databases
  • a variety of network devices monitored by SNMP
  • MySQL databases
  • Dell EMC Avamar backup systems
  • Veeam backup solutions.

There are plenty of them. It's hard to list more than a few of them because we are speaking about tens or hundreds of different technologies.

There are some exceptions at certain customers, but we are working with LogicMonitor to have them monitor them properly in the future. We understand that this is a tool which is being developed on a constant basis. There are so many technologies and it's already covering so many of them. It's understandable that there might be some that are missing at this point.

Overall, the time it takes to deploy LogicMonitor depends on the particulars and the size of the customer. One customer can have, for example, five or 10 different networks, so we may have to install 10 different collectors. We might have a customer that has only one internal network where we can monitor everything by installing only one collector.

Onboarding a client requires one person from our company, and for monitoring it depends on the client. The whole network operation center team is involved, but it doesn't require an army of IT admins for monitoring. It's something which can be done pretty much by one person.

We have several users of LogicMonitor in our company. They include people on the service desk team and some network administrators. We have people in the network operation center, which is the team I'm part of, and we take responsibility for monitoring, but we are not the networking team, which is separate. And our sales team monitors the numbers so that we can bill our customer for these services.

View full review »
Senior Director, US Operations at Optimal+ Ltd.

The initial setup was very easy. I did it by myself in a matter of a few weeks.

The automated and agentless discovery, deployment, and configuration is very easy. You put the one agent in place and, from this agent, you can monitor the rest of the infrastructure you have in that specific environment.

In terms of the solution monitoring devices out-of-the-box, for us it was about 50/50. What we did is that we started to monitor at the application level, not only the infrastructure. The ETL is part of the application level — how many files we have in different stages. For this, we had to develop all kinds of scripts inside the tool. But even if you need to develop a new, custom data source, it's pretty easy. For us it's a matter of a few hours and we can deploy the new data source across our infrastructure. We know how to manage that pretty well in the tool.

But for the rest of what we had to monitor it was pretty okay. It was able to monitor all the Windows and Linux devices, as well as SQL databases. They have a lot built into the tool.

We don't monitor networking and I think that they put a lot of effort into the networking devices, like CO networking or storage devices. They are not part of what we need to monitor. 

We did a PoC. We deployed LogicMonitor in two of our environments. We have central and edge-types of environments. We put one on a central and one on an edge and we ran the tool. We made sure that we had everything we needed. I then gave my team some training — I have teams in the U.S., Asia, and in — and they took it on themselves.

For maintenance, to change configuration and that kind of thing, in our organization it's a matter of three to four people, because they are in different locations. So in the U.S. they manage their stuff, and in Asia they manage their stuff, because there are different infrastructures in the U.S. and Asia.

We give LogicMonitor to our customers as well. We build dashboards for them, and they log in to our instance and can see their infrastructure. About 30 people are using the tool as a result. They are mainly IT engineers and DBAs.

We use this tool at different tiers. Tier 1 is our customers; Tier 2 is the field engineer; Tier 3 is our support headquarters, and Tier 4 is R&D. All of them log in to the tool and see what is going on in the infrastructure.

View full review »
AP
Principal IT Consultant at a tech services company with 51-200 employees

I wasn't involved in the initial deployment. However, in terms of configuration, I have done many rearrangements of specific hardware and discovery of new equipment. That was pretty easy. It didn't take that much time for the configuration, mostly for storage or infrastructure, like hypervisors. It was pretty straightforward. The Help page is pretty straightforward too; you will find what you're looking for.

LogicMonitor monitors most devices out-of-the-box. I was pretty amazed with all the documentation on how to configure specific hardware, like Citrix NetScaler ADC and PureStorage FlashArray. Those were pretty easy to configure. Other things it was able to monitor out-of-the-box include Veeam Backup, NetBackup, VMware, Windows Server — all the versions that we're using are supported — SQL Server, Linux servers, Red Hat, Oracle. Those are a few that come to mind.

View full review »
BU
Systems Engineer at a tech vendor with 201-500 employees

I would definitely recommend LogicMonitor. It's something to look at, either by signing up for a trial or through a use-case process. It's been a great product. It has customizations when you want them and out-of-the-box solutions if you don't. It works and is reliable. Compared to other monitoring platforms I've used in the past, it seems to be the most powerful and robust that I've dealt with.

The solution monitors most devices out-of-the-box, such as Windows, Windows Server, Linux, F5 load balancers, Cisco firewalls, and Cisco switches. Those have been pretty easy to monitor. Our issues have been with one-off or nonstandard platforms that we've implemented. Otherwise, everything has been pretty easy to implement.

I would rate it as a solid nine (out of 10).

View full review »
Sr. Systems Engineer, Infrastructure at NWEA

I know that we have added extra Collectors, and it's super simple. We get to a point where we have too many instances on a Collector and it starts working too hard because it's just a VM. So, we spin up another Linux VM, download their Collector code, install it, and then you have another Collector running in 30 minutes. It's pretty straightforward. We add collectors fairly regularly, and it's pretty easy.
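
For context, the Linux Collector install described here generally amounts to downloading an installer from the LogicMonitor portal (under the Collectors settings) and running it on the new VM; the filename and service name in this sketch are assumptions and will differ by portal and version:

    # run on the new Linux VM after downloading the installer from your portal
    chmod +x LogicMonitor_Collector_64bit.bin      # filename is illustrative
    sudo ./LogicMonitor_Collector_64bit.bin        # registers the collector with the portal
    sudo systemctl status logicmonitor-agent       # service name may vary by version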

I know getting it installed is not that big of a deal, but getting things migrated off of old stuff can be time consuming. However, I wasn't around for it.

If we were implementing LogicMonitor now, we would need to identify when to pull the plug on Nagios, then identify what we wanted to monitor so we were not running duplicates.

View full review »
Nagios Core: Linux
JE
Network Engineer at a retailer with 1,001-5,000 employees

The installation is initially a little bit complex.

The process took several months. Originally, we were using Linux systems.

View full review »
Sr. System Administrator at Guj Info Petro Limited

We found Nagios Core to be a great product. We started working with Nagios Core when RHEL 5 was current, and it is still in use; various types of hosts and services have been configured, such as IBM AIX, Linux, Microsoft Windows, Cisco routers and switches, and certain gateway-level firewalls (Fortigate, Juniper NetScreen, Check Point, Radware).

As my knowledge has grown, I've started deploying additional plug-ins to increase our productivity, like monitoring data center temperature using the Cisco 3560/3560 chassis temperature readings. This was really helpful, especially when your data center doesn't have a dedicated temperature sensor mechanism. Nagios Exchange is a great source of plug-ins.

To date, a total of 127 hosts and 729 services are being monitored under Nagios on an old Intel dual-core desktop computer with 4 GB of RAM. Our future plan is to have an HA (High Availability) setup for Nagios Core. To achieve this, we'll be using Heartbeat (HAProxy) for two individual Linux nodes, with common NFS storage for the Apache nodes. For the backup mechanism, we would use rsync between the NFS node and a cloud node over a secure site-to-site VPN connection.

The only disadvantage of Nagios Core is its shell-based interface. Unlike Nagios XI, Core doesn't have an intuitive dashboard to configure everything (hosts and services) in a GUI. Certain open-source GUI configuration tools are also available, but I never used them because of their negative reviews.

We monitor both kinds of checks, active and passive. However, active checks mostly work well in my architecture. I also provide Nagios Core training and support per the end client's needs, and I've prepared custom Nagios Core documents for those who want to learn and deploy it.

The biggest difference between Nagios Core and XI is that everything comes pre-built with Nagios XI, while for Nagios Core all the add-ons need to be configured individually. So, Nagios Core requires a hard-core system administrator who understands advanced Linux commands, file editing, directory permissions, Nagios log generation, etc.
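
As a concrete example of that hand editing, a minimal host and service definition in Nagios Core looks roughly like the following; the file path, host name, address, and thresholds are illustrative, and check_local_disk comes from the stock commands.cfg shipped with Core:

    # /usr/local/nagios/etc/objects/db01.cfg
    define host {
        use                 linux-server          ; inherit the stock Linux host template
        host_name           db01
        alias               Database Server 01
        address             192.0.2.10
    }

    define service {
        use                 generic-service
        host_name           db01
        service_description Root Partition
        check_command       check_local_disk!20%!10%!/
    }

The file then has to be referenced from nagios.cfg with a cfg_file line and validated with "nagios -v /usr/local/nagios/etc/nagios.cfg" before reloading, which is exactly the kind of step Nagios XI hides behind its GUI.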

View full review »
Centreon: Linux
IT Analyst at La Corporation D'urgences-Santé

The stability is good because the operating system is Linux, and we are using the latest version.

We use it with vBackup, and the server backs up to vBackup every two to three days.

View full review »
Managing Director, CANADA at eva

Because we are a consulting company, we usually work for large organizations, such as banks, industry and retail companies. We have a deep knowledge of monitoring tools such as SolarWinds or HPE. We have extensive knowledge of what's on the market and, knowing that, that's why we chose Centreon.

The advantages of Centreon are its flexibility and that the licensing is pretty easy compared to other solutions.

Where Centreon is weaker is that the initial deployment could be easier. It's based on the open source solution, so if you are not from the open source world, and you're not good at Linux, that could be a barrier. But for people who are familiar with Linux this would be a pro. I'm not from the Linux world so for me it is a con.

View full review »
Pandora FMS: Linux
User

What I value most about Pandora FMS is the simplicity of working with it.

The speed of locating problems and being able to solve them quickly, so that our client's network infrastructure is affected as little as possible, is very valuable.

Thanks to Pandora FMS, we have everything unified in the same place, and it is highly efficient.

This software is used to monitor several elements in the network. For example, it can detect if a network interface has gone down or if a website has suffered a defacement attack, and it alerts if there has been a memory leak in any application server. Among its characteristics, it can interact with other applications or platforms on the web, and it can also send SMS if a system fails or alert about changes to an application on the web.

Pandora FMS can collect information from any operating system, using specific agents for each platform, which collect data and send it to the server. Specific agents are used for GNU/Linux, AIX, Solaris, and Windows 2000, XP, 7, 2003, and 2008. You can monitor services over the TCP/IP protocol without installing agents, and you can also monitor network systems such as load balancers, routers, switches, operating systems, applications, or printers. Pandora FMS also supports WMI to communicate directly with Windows systems remotely and SNMP to collect data or receive traps. It can supervise the resources of devices, such as processor load, disk use, and RAM, and analyze the processes that are running on the system; in general, it can receive information from anything that can be collected automatically.
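
As a small illustration of the agent side described above, a software-agent module in pandora_agent.conf is just a block that runs a command and reports the value back to the server; the server address, module name, and command below are placeholders:

    # /etc/pandora/pandora_agent.conf
    server_ip    192.0.2.20                 # Pandora FMS server this agent reports to

    module_begin
    module_name    cpu_load_1min
    module_type    generic_data
    module_exec    cut -d' ' -f1 /proc/loadavg
    module_end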

View full review »
MC
Systems Analyst at a university with 501-1,000 employees

We use its latest version to monitor all our Windows and Linux servers.

View full review »
Plixer Scrutinizer: Linux
Network Manager at IOOF Holdings

It's very stable. It can go up to a year or two without a reboot. It mainly gets rebooted when I do an upgrade.

During 2015 there were a couple of releases and I had a few stability issues. That was mostly because I moved the database from a Windows appliance to the Linux back-end. It didn't quite sync across. I just deleted the maps and rebuilt them from scratch and that fixed all the problems. That was the only real stability issue we've had across the journey.

We had one upgrade that didn't go as well as it could have, but Anna was able to jump on it with our support engineer and fix it within 15 minutes. It was just a matter of reaching out. They were on the phone within 20 to 30 minutes and got it sorted for us.

View full review »
PS
Network Manager at a energy/utilities company with 5,001-10,000 employees

We have tried to extract a map of data flow information, but I think we have to use a JSON query with the API to query Scrutinizer and pull out some information to correlate with other third-party tools. We never had the opportunity to do this. It is something that would be nice to do, but it's very labor-intensive.

I really would like to exploit the metadata to match it with other applications using the API, but this is not yet available. I'm not sure that we'll go that way because of all the work we would have to do just to extract the metadata from Scrutinizer. We would have to correlate it with all the information from other systems. For that reason, I'm not sure it's going to happen. It would be very interesting though.

I would like them to improve the update process. It's more complicated now that it has switched to Linux. The switch makes the server more stable, because before we were running it on Windows; the fact that they use Linux is very good and makes it more stable. However, updates never happen in one day or on our own. Every time, we need to call Plixer to proceed with the update, and they are very efficient at that. Still, if they could make it a bit easier to upgrade, e.g., a click from the web interface to update the system, that would be nice.

For updating the Scrutinizer platform, when we have the actual data, it never happens in one day. Every time we have the data, we are obliged to install a new server in order to integrate the old data, and every time it has a problem. Most of the time, we were obliged to scrap all the data because we couldn't transfer it to the new server. So, it would be very good if they could improve this part.

Concerning NetFlow, we have encountered many issues with some routers that don't send proper packets. All the time, we're obliged to log on via SSH and run pcap. Pcap is just a packet capture. We are obliged to go into the Linux system and run pcap on the command line, which is not great. It would be very nice if they integrated the pcap features through the web interface in order to analyze them. It's very easy; most of the tools that we're using, and that are on the market, provide this feature. It would be great if Plixer integrated the pcap functionality through the web interface without having to go into the Linux system.
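
For reference, the kind of ad hoc capture described here is typically a one-line tcpdump against the NetFlow export port; 2055 and eth0 are common defaults but are placeholders here:

    # capture a sample of NetFlow export packets arriving from the routers
    sudo tcpdump -i eth0 -n udp port 2055 -c 100 -w netflow_sample.pcap
    # read the capture back to confirm which routers are actually exporting
    sudo tcpdump -r netflow_sample.pcap -n | head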

The security part could also be improved. It would be great if they could implement a better algorithm inside the Scrutinizer to detect if there were attacks. The current algorithm to check if there has been a DNS attack is very light.

View full review »
MM
Network Infrastructure at a tech vendor with 1,001-5,000 employees

The workflow integration within a single platform has allowed us to remove redundant tooling. So, it streamlines that process into fewer workflows. It's allowed us to consolidate network statistical information. We have eliminated tools like SolarWinds, ntop, and some Linux utilities.

The primary reason that we switched to Scrutinizer was the interface. I saw a demonstration of the product at one of the security seminars where it was advertised as Splunk for network data. That's exactly the type of product we were looking for and it gave us that functionality. It was also able to deliver as expected.

Other requirements that we had were that it was multi-vendor, scalable, and a single-appliance solution. So, we didn't need to have a lot of database servers or Microsoft Servers and could run it as a virtual machine.

View full review »
Datadog: Linux
Sr. Architect - SaaS Ops at CommVault

We need the ability to create a service dependency map like Splunk ITSI has. We have to build this in PagerDuty, and it's not the best user experience. The ability to create custom inventory objects based on ingested logs would be a value-add. It would be better if Datadog made this a simple click to enable.

It would be helpful to have the ability to upgrade agents via the Datadog portal. Once agents are connected to the Datadog portal, we should be able to upgrade them quickly.

Security monitoring for Azure and for operating systems (Windows and Linux) are features that need to be addressed.

Dashboards for Azure Active Directory metrics and events should be improved.

View full review »
Auvik: Linux
Founder, Managing Director at AssureStor Limited

We think the pricing is actually really cool. The fact that only certain network devices are billable makes the pricing really cost-effective for us. We can monitor 50 servers, and essentially one server or 50 servers makes no difference in cost. The one thing I think is crucial is just to make sure that you understand how many billable network devices you have in your estate before you move forward.

Typically, in our environment, VM hosts, storage arrays, virtual machines, and physical Windows or Linux machines all have no impact on cost. The only things that really impact cost are our network switches and our firewalls.

View full review »
SR
Systems Support Specialist at a government with 501-1,000 employees

I was involved in the initial setup of Auvik at my location. It was straightforward, and I was surprised by how much information Auvik can give you. The way they deploy is the smartest way to deploy anything. You go through that trial period with them where you'll give it all the time to gather the information about your gear. When you're actually talking to the guys, they give you a demonstration of Auvik in your environment related to your gear and the information Auvik will use, which is very important. 

Before we got down to the purchase, I wanted to see information related to the gear that I actually have, and that's important for anybody. I didn't want to see the hypotheticals of if we had a specific gear. Instead of deploying it in my environment with the belief that it is going to be great, and then realizing it is not compatible with this, I wanted to know that first, see it, and then decide whether or not that's going to be a deal-breaker. For example, I might get to know that Auvik is not going to show me information about the access points that I have because the manufacturer's access points don't have a feature that allows Auvik to see that information.

In terms of the duration, we gave it a weekend. There are different methods for using Auvik, and you can spin up a Linux box and install Auvik that way, or you can use their appliance. Based on your environment, they have their recommendations, and then you just let it sit for some time while you configure all your devices to communicate with Auvik. The setup configuration took me half a day. I had to make sure that I had the traffic all permitted through the firewall, the switches and routers were all set up to send information to Auvik, and SNMP communication was all good. After all that was set up, I just had to wait for Auvik to gather the information. I came in on Monday and saw all the information Auvik had gathered about the network topology and other things over the weekend.

Comparing Auvik's setup time with other solutions, I haven't seen better. Auvik does the work for you. I spent half a day setting up the SNMP information and entering whatever credentials I needed to enter into Auvik for the WMI communication. After that point, you'd have to kind of trim it down. You have to say that you don't want to see a particular subnet, because it'll scan everything. When you give it the information to look at your route, it'll be able to grab any route that your router can see. If you're not concerned with the public WiFi that you might provide and that your router might handle, you can just eliminate that from the map. You just say don't scan the network, and this way, you're only looking at the data that you want to see, which is really handy. So, in terms of the setup time, it is about how fast you can get into your devices and how quickly you can enter the credentials into the devices that you manage.
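
For anyone repeating that device-side step, the SNMP configuration involved is usually only a few lines per switch or router; the Cisco IOS sketch below uses a placeholder community string and collector address, and SNMPv3 is the better choice where the hardware supports it:

    ! allow read-only SNMP polling from the collector only
    access-list 10 permit 192.0.2.50                 ! collector IP (placeholder)
    snmp-server community MyR0community RO 10        ! read-only community tied to ACL 10
    snmp-server location DC1-Rack4                    ! optional metadata the map can display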

View full review »
PP
IT Manager at a computer software company with 51-200 employees

It's missing the license checker feature. We are using Salesforce and the license is a really crucial part of the development, and we have to monitor it. Now, I have to write a script and then run it on a random Linux box and get a notification if it's expiring. It's a really specific feature. I'm not sure Auvik will develop it.
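
A hypothetical sketch of the kind of script the reviewer means, run from cron on that Linux box, could be as simple as comparing a known expiry date against today and emitting a warning; the date, threshold, and notification wiring are all assumptions:

    #!/usr/bin/env python3
    # warn when a manually recorded license expiry date is getting close
    import sys
    from datetime import date

    LICENSE_EXPIRY = date(2025, 12, 31)   # placeholder: taken from the license record
    WARN_DAYS = 30                        # start warning a month ahead

    days_left = (LICENSE_EXPIRY - date.today()).days
    if days_left <= WARN_DAYS:
        # cron can mail this output, or it can be posted to a webhook
        print(f"WARNING: license expires in {days_left} days")
        sys.exit(1)
    print(f"OK: license valid for another {days_left} days")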

We used Nagios for monitoring. Since it's an open-source thing, you can easily extend it with plugins. We had the license-checker in Nagios and I miss it in Auvik. There might be a solution to check this license. I just haven't had time to check it.

View full review »