
All-Flash Storage Arrays AI Reviews

Showing reviews of the top ranking products in All-Flash Storage Arrays, containing the term AI
NetApp AFF (All Flash FAS): AI
Storage Engineer at Missile Defense Agency

We don't use NetApp AFF for machine learning or artificial intelligence applications.

With respect to latency, we basically don't have any. If it's there then nobody knows it and nobody can see it. I'm probably the only one that can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.

The RAID-DP protection level is great. It can survive two disks failing in a RAID group and you would still get your data; it takes a third simultaneous failure before data becomes inaccessible. The snapshot capability is there, which we use a lot, along with other really wonderful tools. We depend very heavily on RAID-DP because it is so reliable. We have not had any data become inaccessible because of a drive failure since we started, going back to our original FAS8040. This is a robust, reliable system, and we don't worry much about the data on it. In fact, I don't worry about it at all because it just works.

Using this solution has helped us by making things go faster, but we have not yet implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability to virtualize systems; we're still putting the infrastructure in place. We deal with different locations around the world, and rather than shipping hard drives separately from PCs and then re-installing them at the main site, we want to use VDI. With VDI, we turn on a thin client that has no permanent storage, the user runs the application, and we control it all from one location in our data center. That's what we're moving towards. The reason for the A300 is that its latency is low enough to support large-scale virtualization. We use VMware a tremendous amount.

NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.

I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.

Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.

With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.

We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.

Talking about application response time, I know that it has improved since we started using this solution, but I don't think the users have really noticed. They know it is a little snappier, but I don't think they understand how much faster it really is. I notice because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we are one hundred percent flash, so the performance graph is barely moving along the bottom. The users have not really noticed yet because they're not really putting a load on it, at least not yet. Give them a chance, though. Once they figure it out, they'll use it. I would say that in another year, they'll figure it out.

NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040 then it would have cost us four and a half million dollars, and we would not have even had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that the amount of electricity is going to be less. There may also be savings in terms of cooling in the data center.

As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.

View full review »
Systems Engineer at Nordstrom, Inc.

Our primary use for NetApp AFF is backing up our production environment, primarily the databases behind all of Nordstrom's retail operations. We've got to keep it running every day, so we've got to make sure that all the databases are backed up for three years or more.

View full review »
IT Director at a legal firm

This product was brought in when I started with the company, so it's hard for me to say how it has improved my organization. I would say that it has improved the performance of our virtual machines because we weren't using flash before this; we were only using Flash Cache. Stepping up from Flash Cache with SAS drives to an all-flash system made a notable difference.

Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything we need to get started with is going to be smaller at the beginning than what the salespeople who sell our services tell us. For example, we're about to bring in five terabytes of data from our clients, but due to the nature of our business operations, that could arrive over a series of months or even a year. Thin provisioning allows us to use only the storage we need, when we need it.
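The accounting the reviewer describes can be sketched as simple over-commit bookkeeping. This is a hypothetical illustration of the thin-provisioning concept, not NetApp's implementation; the `ThinPool` class and its numbers are invented for the example:

```python
# Hypothetical sketch of thin-provisioning accounting: volumes are
# promised their full logical size up front, but physical capacity
# is consumed only as data is actually written.

class ThinPool:
    def __init__(self, physical_tb):
        self.physical_tb = physical_tb
        self.used_tb = 0.0          # physically consumed
        self.provisioned_tb = 0.0   # sum of logical volume sizes promised

    def provision(self, logical_tb):
        # Logical promises may exceed physical capacity (over-commit).
        self.provisioned_tb += logical_tb

    def write(self, tb):
        # Physical space is consumed only on actual writes.
        if self.used_tb + tb > self.physical_tb:
            raise RuntimeError("pool out of physical space")
        self.used_tb += tb

pool = ThinPool(physical_tb=10)
pool.provision(5)   # client is promised 5 TB immediately
pool.provision(8)   # total promised: 13 TB against 10 TB physical
pool.write(2)       # only 2 TB physically consumed so far
print(pool.provisioned_tb, pool.used_tb)  # 13 2.0
```

The over-committed capacity only has to be purchased once writes approach the physical limit, which is exactly the "use only the storage we need, when we need it" behavior described above.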

The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.

I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so in regards to any cost modeling, I wasn't part of that conversation.

The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data. It has to do with what our customers send us to store and process through on their behalf. Redundant storage due to business workflows doesn't penalize us on the storage side when we get to block-level deduplication and compression. It can make a really big difference there. In some cases, some of the data we host for clients gets the same type of compression you would see in a VDI type environment. It's been really advantageous to us there.

View full review »
Infrastructure Team Lead at a pharma/biotech company with 51-200 employees

The procurement process could be improved. It takes a long time for us to receive stuff. The product is good. It's not the product, it's just that it takes forever to get it. It's not our reseller's problem; it's usually held up at NetApp.

Waiting for equipment is one of our biggest hiccups. I live in Pennsylvania and we flew out to Washington state to do an install. We were there for three days, but the product didn't show up. We left and the product came the next day. Then we had to send somebody else out. That's because things were getting held up in shipping and stuff like that. The shipping is my only beef with NetApp.

View full review »
Sr Storage Engineer at a financial services firm with 1,001-5,000 employees

The initial setup of this solution is straightforward, at least for me. I've deployed NetApp before in my previous jobs, and it was easy with my experience. That said, it is not very complex.

View full review »
Storage Administrator at an energy/utilities company with 1,001-5,000 employees

During a maintenance cycle, there are outages for NAS. There is a small timeout when there is a failover from one node to another, and some applications are sensitive to that.

We are in the process of swapping our main controller, and there is no easy way to migrate the data without doing a volume move. I would like a better way to swap hardware.

Technical support could use some improvement.

View full review »
Storage Analyst at a financial services firm with 10,001+ employees

Our primary use case for NetApp AFF is performance-based applications. Whenever our customers complain about performance, we move their data to an all-flash system to improve it.  

We have our own data center and don't share our network with others.

View full review »
Technical Lead at USAF

I've set up a NetApp network previously. The setup was pretty straightforward.

View full review »
Systems Engineer at a tech services company with 51-200 employees

I would like to see NetApp improve more of its offline tools and utilities. Drilling down to their Active IQ technology: it's great if your cluster is online and attached to the internet, with the ability to post and forward AutoSupport, but for a standalone, offline cluster, none of those utilities work. If NetApp offered something similar to Unified Manager but on-premises, where the user could deploy it and have AutoSupport forwarded to it, perhaps as a slimmed-down Active IQ solution, I'd be interested in that.

I need a FlexVol to FlexGroup conversion solution.

I would like to see the FAS and AFF platforms simplified so that the differences will disappear at some point. This would reduce the complexity for the end-storage engineers.

View full review »
System Programmer at an energy/utilities company with 5,001-10,000 employees

Our primary use case for NetApp AFF is unstructured data. We set it up for high availability and minimum downtime.

View full review »
Specialist Senior at a consultancy with 10,001+ employees

Prior to deploying this product, we were having such severe latency issues that certain applications and certain services were becoming unavailable at times. Moving to the AFF completely obliterated all those issues that we were having.

With regard to the overall latency, NetApp AFF is almost immeasurably fast.

Data protection and data management features are simple to use with the web management interface.

We do not have any data on the cloud, but this solution definitely helps to simplify IT operations by unifying data that we have on-premises. We are using a mixture of mounting NFS, CIFS, and then using fiber channel, so data is available to multiple platforms with multiple connectivity paradigms.

The thin provisioning has allowed us to add new applications without having to purchase additional storage. The best example is our recent deployment of an entire server upgrade from Windows 2008 to Windows 2016. Had we not been using thin provisioning then we never would have had enough disk space to actually complete it without upgrading the hardware.

We're a pretty small team, so we have never had dedicated storage resources.

NetApp AFF has reduced our application response time. In some cases, our applications have gone from almost unusable to instantaneous response times.

Storage is always a limiting factor, simply because it's not unlimited. However, this solution has enabled us to present the option of less expensively adding more storage for very specific application uses, which we did not have before.

View full review »
Director at a tech services company with 11-50 employees

Prior to NetApp AFF, we were using an HPE Storage solution. It was a little more difficult to swap out the drives on the XP series. You have to shut down the drive and then wait for a prompt to remove it. It's a long process and if somebody pulls it out hot and puts another one in then you're going to have to do a complete rebuild. It is not as robust or stable when you are swapping parts.

View full review »
Storage Engineer at a computer software company with 10,001+ employees

Speed, reliability, and ease of use are the most valuable features.

The overall latency in our environment is very good.

We don't use the solution for artificial intelligence or machine learning applications.

The simplicity around data protection and data management is very good. We use SnapVault for data protection which works very well. SnapMirror is also good. We mainly use the command line a lot, so we don't tend to use many provisioning tools.

View full review »
Senior Data Center Architect at a financial services firm with 1,001-5,000 employees

We have not used this solution for artificial intelligence or machine learning applications as of yet. This product has reduced our total latency by moving from spinning disks to flash disks. We rarely see any latency, and when we do, it is not the disks, it's the network. The overall latency right now is about two milliseconds or less.

AFF hasn't enabled us to relocate resources, or employees that we were previously using for storage operations.

It has improved application response time. With latency, we had applications that had thirty to forty milliseconds latency, now they have dropped to approximately one to three, a maximum of five milliseconds. It's a huge improvement.

We use both SAN and NAS technologies, and we have simplified them. We are trying to shift away from the SAN because it is not as easy to fail over to the opposite data center.

We are trying to switch over to have everything one hundred percent NFS. Once the switch to NFS is complete our cutover time will be one hour versus six.

View full review »
Tech Solutions Architect at a healthcare company with 10,001+ employees

The primary use case is enterprise storage for our email database system.

We have just been using on-premise. We are looking to move the workloads to the cloud, but right now it's just on-premise.

View full review »
Sr Data Storage at an energy/utilities company with 10,001+ employees

We stay away from what is called a silo architecture. NetApp clustering enables us to do a volume move to different nodes and share the entire cluster among the various sub-setups, making the most of the storage we have on ONTAP. We are able to tailor and carve out storage at the file level, block level, or power level for our various clients.

View full review »
Storage Architect and Engineer at United Airlines

On the fiber channel side, there is a limit of sixteen terabytes per LUN, and we would like to see this raised because we are having to use other products to work around it.

View full review »
Manager at Pramerica

ONTAP has improved my organization because we now have better performance. We can scale up and we can create servers a lot faster now. With the storage that we had, it used to take a lot longer, but now we can provide the business what they need a lot faster.

It simplifies IT operations by unifying data services across SAN and NAS environments. We use our own type of SAN and NAS for CIFS and also for virtual servers. It's pretty basic. I didn't realize how simple it was to create storage and manage storage until I started using NetApp ONTAP. We use it daily.

Response time has improved. Read IOPS between the storage and the end-users are a hundred times faster than they used to be. When we migrated from 7-Mode to cluster mode and went to an all-flash system, the speed and performance were amazing. The business commented on it, which was good for us.

Data center costs have definitely been reduced with the compression that we get with all-flash. We're getting 20:1, so it's definitely a huge saving.
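An efficiency ratio like the one reported above translates directly into raw-capacity savings. A quick back-of-the-envelope check, using purely illustrative numbers (the 100 TB figure is invented for the example):

```python
# Illustrative arithmetic for a 20:1 storage-efficiency ratio:
# the physical capacity needed is the logical data divided by the ratio.
logical_tb = 100   # hypothetical logical (pre-reduction) data stored
ratio = 20         # reported efficiency ratio, 20:1

physical_tb = logical_tb / ratio                          # raw capacity consumed
savings_pct = 100 * (logical_tb - physical_tb) / logical_tb
print(physical_tb, savings_pct)   # 5.0 95.0
```

At 20:1, every 100 TB of logical data needs only 5 TB of raw flash, a 95 percent reduction, which is why the reviewer calls it a huge saving.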

It has enabled us to stop worrying about storage as a limiting factor. We can thin provision data now and we can over-provision compared to the actual physical hardware that we have. We have a lot of flexibility compared to what we had before. 

View full review »
Senior Network Technical Developer and Support Expert at a healthcare company with 10,001+ employees

Prior to bringing in NetApp, we did a lot of Commvault backups. We utilize Commvault, so we were backing up the data that way and recovering that way. Utilizing Snapshots and SnapMirror allows us to recover a lot faster. We use it on a daily basis to recover end-users' files that have been deleted. It's a great tool for that.

We use Workflow Automation. Latency is great on our writes, although we do find with the AFF systems, and it may just be what we're doing with them, that the read latency is a little higher than we would expect from SSDs.

With regard to the simplicity of data protection and data management, it's great. SnapMirror is a breeze to set up and to utilize SnapVault is the same way.

NetApp absolutely simplifies our IT operations by unifying data services.

The thin provisioning is great, and we have used it in lieu of purchasing additional storage. Talking about the storage efficiencies that we're getting, on VMware for instance, we are getting seven to one on some volumes, which is great.

NetApp has allowed us to move large amounts of data between data centers. We are migrating our data center from on-premises to a hosted data center, so we're utilizing this functionality all the time to move loads of data from one center to another. It has been a great tool for that.

Our application response time has absolutely improved. In terms of latency, before when we were running Epic Caché, the latency on our FAS was ten to fifteen milliseconds. Now, running off of the AFFs, we have perhaps one or two milliseconds, so it has greatly improved.

Whether our data center costs are reduced remains to be seen. We've always been told that solid-state is supposed to be cheaper and go down in price, but we haven't been able to see that at all. It's disappointing.

View full review »
Storage Architect at an energy/utilities company with 10,001+ employees

The stability of the solution is very good. The reliability is just top-notch. We have not had any outage or unscheduled downtime. Sometimes a disk or an SSD fails, but it gets replaced without any service interruption, so users never know about it.

View full review »
Storage Administrator at a computer software company with 5,001-10,000 employees

This solution has helped simplify our IT operations. We can easily move data from on-premises to the cloud, or from one cloud to another cloud. NetApp SnapShots and SnapMirror are also helpful.

The thin provisioning has allowed us to add new applications without having to purchase additional storage. We are shrinking the data with functions like deduplication and giving almost two hundred percent. It is very helpful.

This solution has allowed us to move very large amounts of data without affecting IT operations. We have moved four petabytes to the cloud. We have moved data from on-premises to the cloud, and also between clouds. It is easy to do. For example, if you want DR or a backup in a second location, then you just use SnapShot. If you have a database that you want to have available in more than one location then you can synchronize them easily. We are very happy with these features.

Our application response time has been improved since implementing this solution. The AFF cluster is awesome. Our response time is now below two milliseconds, whereas it used to be four or five milliseconds. This is very useful. 

The costs of our data center have definitely been reduced by using this solution. The power consumption and space, obviously, because this solution is very small, have been reduced.

We have been using this solution to automatically tier cold data to the cloud. I would not say that it has affected our TCO.

This solution has not changed our position in terms of worrying about storage as a limiting factor.

View full review »
Consulting Storage Engineer at a healthcare company with 10,001+ employees

I can't remember the last time we had an issue or an outage.

It is one of the best solutions out there right now. It is extremely simple, reliable, and seldom ever breaks. It's extremely easy to set up. It's reliable, which is important for us in healthcare. It doesn't take a lot of management or support, as it just works correctly.

Our NetApp environment has been so stable and simple that we don't have a lot of resources allocated to support it right now. We have perhaps three engineers across our entire enterprise supporting our entire NetApp infrastructure. So we haven't necessarily reallocated resources, but we already run pretty thin as it is.

View full review »
Data Protection Engineering at a manufacturing company with 10,001+ employees

This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly in power consumption and density. One of our big data center costs, which was clear when we built our most recent data center, is that each rack space basically has a value tied to it. Going to a flash solution gave us a lower power footprint as well as higher density, which essentially means more capacity in a smaller space. When it costs several hundred million dollars to build a data center, each of those spots has a cost associated with it, so each server rack in there is worth that much in the end. When we looked at those costs and everything else, it saved us money to go to AFF, where we have that really high density. It's getting even better, because the newer models coming out will have even higher density.

Being able to easily and quickly pull data out of snapshots is something that benefits us. Our times for recovery on a lot of things are going to be in the minutes, rather than in the range of hours. It takes the same amount of time for us to put a FlexClone out with a ten terabyte VM as it does a one terabyte VM. That is really valuable to us. We can provide somebody with a VM, regardless of size, and we can tell them how much time it will take to be able to get on it. This excludes the extra stuff that happens on the back end, like vMotion. They can already touch the VM, so we don't really worry about it.

One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.

With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services they offer, like SnapCenter, are terrible. There was an issue in our environment where, if you had a fully qualified domain name that was too long or had too many periods in it, it wouldn't work. They recently fixed this, but clearly, after a problem like that, the solution is not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which utilizes those direct NetApp features. They have the technology, but personally, I don't think their implementation is there yet on the data protection side.

I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.

In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.

The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. I would say that the thin provisioning coupled with the storage efficiencies are really helpful. The one thing we've had to worry about as a result of thin provisioning is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which you always know is not good. The problem is that you don't really have any insight into how much you're actually utilizing.
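The stacked over-commit risk mentioned above can be made concrete. This is a hypothetical sketch of why thin-on-thin hides true utilization; the capacities and 2x factors are invented for illustration:

```python
# Hypothetical sketch: when a hypervisor thin-provisions on top of an
# already thin-provisioned array LUN, the promises multiply and neither
# layer sees the true end-to-end over-commit on its own.

array_physical_tb = 50
array_promised_tb = 100   # array thin-provisions 2x to the hypervisor
vm_promised_tb = 200      # hypervisor thin-provisions 2x again to VMs

# Each layer, viewed alone, looks only 2x over-committed...
array_overcommit = array_promised_tb / array_physical_tb
vm_overcommit = vm_promised_tb / array_promised_tb

# ...but end to end, VMs hold promises for 4x the real capacity.
effective_overcommit = vm_promised_tb / array_physical_tb
print(array_overcommit, vm_overcommit, effective_overcommit)  # 2.0 2.0 4.0
```

Because the effective ratio is the product of the per-layer ratios, a modest over-commit at each layer compounds, which is why visibility into actual utilization matters when teams thin-provision on top of thin provisioning.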

This solution has enabled us to move lots of data between the data center and cloud without interruption to the business. We have SVM DR relationships between data centers, so for us, even if we lost the whole data center, we could failover.

This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.

We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.

I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason, as funny as it sounds, is that our network is now the limiting factor. We can easily max out links with the all-flash array. Now we are looking at going back and upgrading the rest of the infrastructure to be able to keep up with the flash. Right now we don't even have a strong NDMP footprint, because we couldn't support it; we would need far too much speed.

View full review »
Senior Storage Engineer at Hyundai autoever

We have been using the FAS series products, and AFF is pretty similar to the FAS products, as it still runs the ONTAP operating system. We are using AFF because it comes with all-flash disks, which gives us better performance with a smaller footprint. We use it mainly to store our block and NAS data.

View full review »
Storage Team Lead at a manufacturing company with 10,001+ employees

Speed is the most valuable feature. It is all-flash, so it is fast.

It simplifies operations since it is integrated with the other platforms as well. It's maintainable; it does not take too much effort to maintain. Creating users and sessions is easy on it.

View full review »
Systems Engineer at Cleveland Clinic

The primary use case for AFF is as SAN storage for our SQL database and VMware environment, which drives our treatment systems. We do not currently use it for AI or machine learning.

We are running ONTAP 9.6.

View full review »
System Administrator at Bell Canada

The most valuable features are dedupe, compression, compaction, and the flexibility to offload your cold data to StorageGRID. This is the biggest key point, which drove our whole move to the NetApp AFF solution.

AFF has opened our eyes to how storage delivers value. In the past, we looked at it more as just a container where we could dump our customers' DBMS data and let the customers use it. Today, being able to replicate that data to a different location, use it to recover your environment, and have that flexibility with the solution and the data: these are the things that piqued our interest. It's something we're now willing to provide as a solution to our customers.

View full review »
Systems Management Engineer at Linklaters

The primary use case for AFF is for use in our production environment. Within our production environment, we have a number of different data stores that AFF serves. We use a number of protocols from NFS to CIFS, as well from the file system protocols, and in the block level we use iSCSI.

We are a fully on-prem business as far as the positioning of our data sets goes.

Being a law firm, we don't have real-time applications that we run in-house. The most important thing is the availability of the environments and applications that we serve to our client base. We don't have real-time applications whose gains could be measured in a tangible way that would make a huge difference for us. Nevertheless, the way it goes: the faster, the better; the more powerful, the better; and the more resources you can get from it, the better.

View full review »
Director of Infrastructure Engineering at a financial services firm with 10,001+ employees

We did it to consolidate eight filers. We needed the speed to make sure that it worked when we consolidated.

View full review »
Unix Engineer at a healthcare company with 5,001-10,000 employees

We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point. 

Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.

One of the ways that we leverage data now, that we wouldn't have been able to do before — and we're talking simple file shares. One of the things we couldn't do before AFF was really search those things in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have or that we shouldn't have? And we can do those searches in a reasonable timeframe now, whereas before, it was just so long that it wasn't even worth bothering.

AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.

AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.

AFF has probably reduced our data center costs. It's been so long since we considered anything else that it's hard to say. I do know that doing some of the things we do without AFF would certainly cost more, because we'd have to buy more storage to pull them off. So with AFF dedupe and compression, and the fact that they work so well on our files, I think it has saved us some money, probably at least 10 to 20 percent versus other solutions, if not far more.

View full review »
Consulting Manager at a tech services company with 1,001-5,000 employees

We primarily use it for storage for VMs and backup units.

We use this solution on a daily basis. In Sweden, typically small to medium-sized companies use this solution.

View full review »
Consultant and Co-founder at OS4IT

We evaluated HP and EMC. The main differences were the support, functionality, and cost of NetApp. 

View full review »
Head of Infrastructure, Network & Security Management at Vos Logistics N.V.

The only problem is that when you change to NetApp, it may have a large impact on your backups or something else.

When comparing with Pure, for example: with Pure you have no maintenance anymore, while with NetApp you still need maintenance. For that maintenance, you need an external company to maintain the system. Pure requires less maintenance, which is a plus.

I think it could have better monitoring.

View full review »
Vice President Data Protection Strategy at a computer software company with 1,001-5,000 employees

We like the fact that we also use it and therefore can tell our clients about it from an actual user perspective, not just a sales perspective. 

No one has a price-to-earnings ratio like NetApp's; everyone else's is inflated. NetApp's is below market, NetApp pays a two-and-a-half percent dividend, and NetApp stock has doubled in the past 12 months. NetApp's largest customer is probably the federal government, which accounts for more than 50% of NetApp's business, from my understanding, if you subtract cloud, although I'm not privy to how much cloud the federal government uses that is actually NetApp under the covers.

The fact of the matter is, if you need the top-selling, top-performing file-serving appliance to deliver your files to your end-users, NetApp pretty much invented the technology. While no one can really take credit for inventing file serving, NetApp has been doing it for more than 25 years. They do it better than anyone, and they have utilities around it. They can do with one solution things that their competition needs multiple different solutions to do. I'm sure there are some obscure things in vertical markets that their competition does better; however, I'm not going to comment on radiology or genetics or things of that nature. They do a lot of things, yet not like a Swiss army knife: they do a lot of things as a best-of-breed set of products put together.

Other manufacturers claim simplicity. In fact, frankly, they do have an advantage in that regard, however, they don't have the functionality. If you were to compare one of those products to NetApp, head to head from a feature perspective, NetApp would wind up in the top 10.

View full review »
IT Manager at a wholesaler/distributor with 201-500 employees

I primarily use the solution for basically all my main data for all my ESXi hosts.

View full review »
Solutions Consultant at a financial services firm with 5,001-10,000 employees

NetApp is a good choice because it's not only for a normal application, but it can also integrate with Nvidia for AI solutions.

View full review »
Dell EMC XtremIO: AI
G. Manager- Technical Services with 51-200 employees

The initial setup is straightforward. The deployment is not complex and not much time is required. The step that takes time is data migration.

View full review »
SolidFire: AI
IT Infrastructure Consultant at a manufacturing company with 1,001-5,000 employees

The initial setup of this solution is straightforward.

I would say that you can deploy this solution in an hour if you know how to do it.

View full review »
Technical Consultant at a tech services company with 51-200 employees

I work as a technical consultant and our company is a reseller. We sell hyper-converged solutions to our customers. We use mainly NetApp HCI and SolidFire. We use a variety of versions depending on the customer's requirements. Our main use of the product is for ESX environments and Hyper-V environments.

View full review »
CTO at a tech services company with 1-10 employees

The initial setup is straightforward and quick.

View full review »
Presales Engineer at Tech Data Corporation

I have a SolidFire grid set up and I find that it is a stable solution. I did have to replace a disk on one occasion, which is something that the technical support contacted me about. While I have not used SolidFire in production, I have not heard complaints about stability from any of our customers.

View full review »
Founder, President and CEO with 201-500 employees

One of the most valuable aspects of the solution is the fact that it's all in one, and all in a very small physical footprint. It has all of your major components, including your storage area network, servers, and networking footprint.

The delivery of the product is very fast and the solution itself deploys quickly; it is up and running within hours.

The product is competitively priced and technical support is good.

You can easily and effectively scale this solution. It's one of the main selling points and one of the features that makes it far superior to competitors.

View full review »
Manager IT at a tech services company with 201-500 employees

In terms of SolidFire's most valuable features, simplicity is the key component and key feature. It means that the administrator or user does not need to learn about storage RAID groups or anything like that; they only need to provision the storage space they need for the host. SolidFire is used in NetApp HCI (Hyper-Converged Infrastructure) solutions, which come with a Deployment Engine that makes the solution simpler, faster, and easier to deploy. When you need a software-defined storage system, SolidFire is really, really good.

When a customer needs a complete software-defined data center solution with compute and storage, NetApp HCI is a far better choice than any other HCI solution.

SolidFire is really simple to deploy; you don't need to learn a lot. When you compare the NetApp storage system with SolidFire, both are very simple to deploy, but compared to other products from NetApp, SolidFire is even simpler to deploy.

View full review »
Presales Engineer at a tech services company with 10,001+ employees

The only time I had an issue was with a motherboard. In fact, with the SolidFire technology, NetApp was able to acquire Active IQ. Active IQ is the software layer that pushes all the information on the health of the SolidFire platform.

Therefore, the support is really quite proactive, in fact. Each time there was something to do, a component to change, or an upgrade to do on the platform, it was followed by emails from NetApp support, who would remind me of necessary changes. Even with Active IQ, we've had advice on what we could do on the system to get better performance or better organization of the data that resides on the SolidFire platform.

It's got really great proactive support, and we're quite satisfied with them.

View full review »
Tintri VMstore: AI
Director at Festino Indonesia

The initial setup is straightforward. It didn't take that long to deploy. 

View full review »
Pure Storage FlashArray: AI
VMware and Windows Server Team Lead with 1,001-5,000 employees

With respect to comparing other solutions, when you put all of the features in a box, leverage them and migrate your application to one of these arrays, it will give you a lot of benefits. Some people have compared benchmark performance tests against other arrays and from my point of view, overall as a whole package when you sum everything up, Pure Storage is the winner.

View full review »
Technology and Architecture Deputy Manager at a financial services firm with 1,001-5,000 employees

The problem is that we can only make a few groups, around five or six groups. I like groups and we need a lot of them. We had to put all the information in only a few groups and cannot make a more detailed separation of them.

This is the only problem that we have in the two years of working with Pure Storage and it is not an important problem. The interface that this solution has is really good. It senses all the errors. We get good support from the vendor. 

The price doesn't really matter, but it's very expensive; it could be cheaper.

View full review »
IT System Engineer at a tech services company with 501-1,000 employees

If I compare it to SAN Symphony, for instance, it's much faster and much more reliable.

The maintenance is very good. The support is very, very good. If you do any maintenance on it you have the support, and it's nice to know they are there to assist.

It's a very good product. It's very easy to manage everything.

With a snapshot, you can schedule it and you can remove it afterward. You can make a kind of production copy. That's very, very good, and it's performing very well. The storage is amazing. It's so fast.

The total data reduction you can expect is excellent. You buy the bundled storage and they give you a ratio of what you can achieve with it.

The mobile app is very helpful. I have an application on my smartphone. I view the latency in real-time on my app. You can see everything on your smartphone. You can also set up alerts on it, and things like this. I don't think you can do this on Dell storage. 

View full review »
Systems Engineer at a tech services company with 1-10 employees

The solution is stable. It's superb. We've done upgrades in which multiple controllers were involved and, while changing from one model of the array to another, a single controller was removed, swapped out, and a new one introduced. Once it's stable, they proceed to the next one. We have never experienced an outage at any of the three companies in which I've deployed the solution. Even when a controller went down, the arrays remained up.

View full review »
Implementation and Support Engineer at PRACSO S.R.L.

The solution itself is pretty solid. Perhaps the time available for selecting upgrades or for scheduling things could be improved. On a couple of occasions, the waiting time for an upgrade has been pretty substantial. 

In the next release, I would like for them to support file systems on the lower-end models, like the X-10 or X-20. 

View full review »
Professional Test Engineer at a computer software company with 10,001+ employees

The integration and migration features have been really good.

We're getting good performance, and the compression ratio is also very good in Pure Storage FlashArray.

It has an Evergreen model and always maintains the controllers, so the controllers never let you down.

View full review »
Fresh Operations Manager at Jerónimo Martins

We did the implementation of the solution ourselves with the supervision of the integrator.

We have a team that does the maintenance and operations of the solution.

View full review »
Manager, Enterprise Infrastructure at a tech services company with 1,001-5,000 employees

The administration is very easy and quite minimal.

The performance is very good.

The installation is pretty straightforward. 

Technical support is good.

View full review »
Solutions Architect at a wholesaler/distributor with 1,001-5,000 employees

I'm a pre-sales architect. I architect, and I sell them as a partner with Pure Storage on the VAR side. Our customers use it for storage, mainly block-based storage and virtualization storage. Some solutions have both block and file storage, and some solutions only have file storage from Pure. 

View full review »
Chief Consultant and Architect at Tahir Professional Services

There is definitely room for improvement.

Overall, the solution is pretty good, although it does have certain gaps. There are many features which need to be added, particularly on the replication side. 

View full review »
Storage Solutions Architect at a manufacturing company with 1,001-5,000 employees

The initial setup was very straightforward. The deployment took a couple of hours.

We did a PoC with the product and checked to make sure that it worked in our environment.

We have a group of about 6 to 10 people managing the system.

View full review »
Cloud Solutions Architect at a tech services company with 10,001+ employees

We've had different types of storage, and three things of this solution are valuable. The first one is its outstanding performance. The second one is its stability. In the about three years that we've had it, we've had component failures, but we never had a service interruption or any data loss. The third one, which is really critical, is that it is super easy to use in terms of provisioning, storage, and managing the arrays. I'm able to maintain a multi-site environment with a couple of dozen arrays with a single mid-level storage admin.

We do a lot of data replication as well, and the replication features are all easy to set up. The networking controls for setting up interfaces and sub-interfaces are also easy to manage.

View full review »
Enterprise Solutions Architect at a logistics company with 10,001+ employees

The setup was extremely simple. 

The solution offers amazing performance. The speed and reliability of their flash arrays are great. In terms of flash storage, they're on it. The performance is there.

Their evergreen solution is probably the most needed in any industry. Especially today, in unprecedented times with supply chain issues, their evergreen solution is amazing.

Whenever they come out with a new feature for our system, we just swap out the storage controller. We don't change anything on our disks and we get the new features. That evergreen approach costs you nothing in your third year.

It's a great company, great solution. They're dominating in their space for a good reason.

Even their management, their interface, is just the best in the industry. 

The solution can scale.

View full review »
Project Manager at WFSFAA

I have not contacted technical support.

My main point of contact has been the reseller.

View full review »
HPE Nimble Storage: AI
Lead Infrastructure Architect at ThinkON

Nimble Storage is our primary production storage vendor. We use it with VMware on a daily basis, including a new AFA5000 all-flash array for our DMS system.

View full review »
Lead Infrastructure Architect at ThinkON

Straightforward and very easy, as always.

View full review »
Technical Specialist at a tech vendor with 11-50 employees

This product is definitely stable and we use it on a daily basis.

View full review »
Network & System Support Engineer at a recruiting/HR firm with 5,001-10,000 employees

The solution needs higher availability.

The pricing of the solution isn't ideal. They should work to make it more affordable. It's very expensive.

I'd like to be able to configure the solution from vCenter, which isn't possible right now.

It would be great if the solution offered even more integrations and plugins.

View full review »
Product Manager at a comms service provider with 11-50 employees

The problem is the price. It needs to be improved.

I would like to see an added feature to auto-fix, or a dynamic alerting system on storage. This is very important because we would like to prevent a disk failure before it happens.

If we had some sort of AI in place to alert us then we could replace the disk before it occurs.

Also, we would like to receive alerts if space is over the limits. This is necessary for us.

View full review »
Owner at a tech services company with 1-10 employees

We have a team of five to maintain this product.

View full review »
Senior IT Officer at a financial services firm with 201-500 employees

The dashboard can be improved. I would like to see more details on the dashboard in the next release.

View full review »
IT Support Engineer at a computer software company with 501-1,000 employees

Scalability is something we are looking into with HA and failover features at other locations. This is part of that project we are working on. I do not know right now exactly how scalable the solution is but I might know soon.

Our whole company of approximately 700 employees uses the solution as its SAN.

View full review »
Technical Manager at a tech services company with 11-50 employees

When I'm competing against someone, I would like Nimble to be an active-active controller. As it is now, Nimble is an active-passive controller. If the customer is looking for an active-active controller, then we can't use Nimble and have to go with Primera. Nimble gives us a lot of good things, but without the active-active controller, it's pointless.

View full review »
Systems Engineer at a tech services company with 51-200 employees

We use HPE Nimble for deduplication and to compress data.

We have a large number of customers that rely on high availability from this product.

View full review »
HPE Technical Support Manager at Servicios GZ, C.A.

I think the scalability of HPE Nimble Storage could be improved.

In Venezuela, we have to purchase the solution for two years and cannot obtain a secondary storage platform. So from my perspective, the scalability is not as easy as that of 3PAR StoreServ or HPE Primera.

View full review »
Team Leader at PT.Helios Informatika Nusantara

I'm working with one of the distributors here in Indonesia, Helios Informatika Nusantara. Mainly I'm involved with managing the SMB company. 

I rate HPE Nimble Storage as a nine out of ten. 

View full review »
ICT Architect / Team Leader at a tech services company with 51-200 employees

The performance and reliability are excellent. Due to the fact that we are a provider, we need systems which just run and run and run the whole day and the whole night without issue. This product does just that. We are selling services, and therefore we need a system which works 24/7.

The initial setup is very easy.

The stability is very good.

The scalability is straightforward.

View full review »
ICT Director KA Infra at a transportation company with 1,001-5,000 employees

The installation is straightforward.

View full review »
VP - Engineering Operations at WPG Consulting

In general, the solution works great. We haven't had any issues with it.

It's very reliable. Even if one control fails, it automatically turns on the other controller and everything moves from one to the second instantaneously without any issues.

The product is quite robust.

View full review »
Enterprise Administrator at a outsourcing company with 51-200 employees

HPE Nimble Storage is quick to release updates that fix bugs or problems and the failover has been good.

View full review »
Senior Storage Specialist, Digital Systems at Shaw Communications

The first installation we did was at a mine in Chile, South America, in a place called Ike, where the elevation was so high that spinning disks were failing; the mean time to failure was low. The main reason we deployed our first all-flash array was that it was solid-state, which has no moving parts. This solution allowed our organization to operate in that location.

View full review »
HPE 3PAR StoreServ: AI
System Administrator at ON Semiconductor Phils. Inc.

HPE 3PAR provides fast and reliable storage for our critical systems like the database (MSSQL and Oracle). It also improved the availability of the system and at the same time provides a Disaster Recovery solution by using the remote-copy feature.

The adaptive optimization is also a factor in maximizing the capability of the system.

View full review »
Head of IT Department at Sonepar

We paid for five years of support when we purchased the product.

View full review »
Sr, Storage Engineer at a manufacturing company with 10,001+ employees

The primary system comes with not too much software and is pretty simple and straightforward. You're not really using too much. The solution doesn't make things that are too complicated.

The replicating software is very good and the duplication part of it is very efficient.

Technical support is pretty good.

The solution offers good stability.

It has a good ability to scale.

View full review »
Responsible for information processing at a manufacturing company with 1,001-5,000 employees

It's stable, although we experienced malfunctions when a virus was running and it failed.

View full review »
Responsible for information processing at a manufacturing company with 1,001-5,000 employees

The initial setup was not complex or difficult. It was pretty straightforward and easy.

View full review »
Presales Engineer at a tech services company with 51-200 employees

We use the StoreServ as our main storage device. We have one unit in the main site and the other in the DR site, with replication between them. We have a file server running, it hosts our database, and it acts as our Exchange server.

View full review »
Deputy Manager at CESC Ventures

Its stability is the most valuable. It has soft alerts. When an alert is raised, we get a call from HP saying that there is this type of alert, and they need to do a remote session to check things. Similarly, for firmware updates, they get in touch to say that a firmware upgrade is required on your storage. They schedule a time and take control remotely to upgrade the firmware. In all such cases, there is no downtime. Everything is done when a full-fledged operation is going on.

Its user interface is also quite good. We are quite accustomed to this user interface. We can easily take a look at the current usage or the amount of storage. It is quite easily understandable, and I can present those things to my seniors or other people who are not that tech-savvy, and they can easily understand what we are trying to tell them. We can easily show them that we are using around 87% of the storage, so we need to plan for another tree and things like that.

View full review »
IT Infrastructure & Data Center Operation Engineer at Ministry of Communications and Information Technology (MCIT), Egypt

I use 3PAR as the standard storage. The main production is VMware, and it is connected to 3PAR across fabric switch. The fabric switch between them is MDS Switch and Notebook 8. We also have a Hyper-V environment, which is connected to the same storage. The main service is the exchange service. I have a public cloud and a private cloud. I use 3PAR as a private cloud.

View full review »
Service & Infrastructure Manager at a tech services company with 201-500 employees

Our storage team deploys the HPE 3PAR system. Sometimes, we also need some support from the local HPE support team. Its maintenance is done by a vendor.

View full review »
SAN Consultant at a tech services company with 201-500 employees

The initial setup is straightforward; it's not too complex. If you have any SAN experience, it's pretty easy, as you just have to understand some of the terminology so you can pre-prep the architectural design you're looking for. Overall, however, it's pretty easy from a GUI perspective. You can do everything from the GUI; you don't need to work at the CLI level. Of course, if you'd like to do that, you can do that as well.

View full review »
Systems Engineer at a tech services company with 51-200 employees

The deployment and maintenance are done in-house.

We are always using the latest version because I upgrade every year.

View full review »
CCO at a construction company with 11-50 employees

The features which are most valuable are the availability of the system and the management.

View full review »
Systems Engineer at a educational organization with 11-50 employees

I think the main thing of value to me is the intelligence around the solution and how data is taken care of on the storage side of things, with your LUNs and your virtual volumes and how it sort of manages everything else.

View full review »
Storage Manager at a financial services firm with 10,001+ employees

We use HPE 3PAR StoreServ for data storage. Hewlett Packard Enterprise (HPE) had a clause in the contract: if you can compress data very well, you don't need as much capacity in your systems, and if it was not possible to compress to a certain degree, they would put extra capacity in the systems. We bought that borrowed capacity, but they added separate one-piece storage boxes to our environment to keep up with the bigger growth in capacity. Despite that, it is a wonderful system with an excellent graphical user interface, and new functions are still being rolled out.

View full review »
Senior Systems Engineer at a university with 501-1,000 employees

The initial setup was straightforward, it took approximately 45 minutes.

View full review »
Infrastructure and Networks at a financial services firm with 10,001+ employees

The initial setup is straightforward with the help of the technicians from the company.

The deployment was completed in one week.

View full review »
Data Feeding at a computer software company with 1-10 employees

The initial setup is straightforward.

View full review »
System Administrator at a government with 1,001-5,000 employees

We use old disks, and we can't make a VMO in HPE 3PAR without getting an upgrade. We activated a new disk but it failed; it's hard to manipulate and format. The disk might be okay, but when we put it in the storage array, it failed.

View full review »
Technical Account Manager at a tech services company with 201-500 employees

The main use for this solution is storing large data in the backend. It is used in many companies, such as financial institutions. Unfortunately, it has come to the end of its service life; companies are moving to other solutions because there are cheaper products, and it costs too much to maintain the 3PAR.

View full review »
Sr. Manager - IT Systems at a transportation company with 501-1,000 employees

The initial setup is not overly difficult or complex. It's pretty simple and straightforward.

The deployment is fast and within a day you can have everything up and running. 

We have engineers available that can assist with deployment and maintenance. 

View full review »
Storage Infrastructure Engineer at Cambridge Health Alliance

HPE 3PAR StoreServ has improved our organization through its ease of use and high availability.

View full review »
Hitachi Virtual Storage Platform F Series: AI
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

We are a solution provider and I work with a lot of different SAN products, depending on the needs of the customers. We have implemented this solution, as well as the G series, for some of our clients.

I have a project right now that involves revising and fine-tuning a storage network. This network contains two Hitachi VSP G Series units. There is not a major difference between the F Series and the G Series. Both of them are enterprise-scale and efficient for many data centers. It is used as primary storage in industries such as banking, automotive, health care, and insurance, by large companies or companies that have an IBM mainframe.

If the solution requires very high IOPS (input/output operations per second) with sub-millisecond response times, then they should select the F Series because it has better performance.

View full review »
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

The primary use case of this solution is for storage in some industries such as banking, automotive, healthcare, and insurance companies. It is for large companies or companies with a large mainframe such as an IBM mainframe. Our primary use case was for core banking.

View full review »
Chief System Engineer at a media company with 501-1,000 employees

I think the management should be improved, because it is not very user-friendly at all. I also think the support could be better, at least for mid-range users. Historically, Hitachi systems were made for really big organizations like banks or insurance companies. Usually those companies have dedicated personnel dealing with the storage, and because they're very valuable clients, their support contracts are much more tailored to their businesses. Since this product line is aimed more toward smaller companies, they don't have support offerings suited to mid-range companies.

I also think that their management is not very user-friendly. If it were, more support wouldn't be necessary. It's a very reliable system, but the whole management user interface is very unfriendly. I know they're aware of that. They have a lot of management solutions, but none of them has reached maturity yet.

View full review »
Team Manager at a tech services company with 1,001-5,000 employees

The initial setup is straightforward.

View full review »
Product Manager at Storageone

Hitachi should launch some small machines in Brazil. The smallest machine here in Brazil is VSP 350, which can be quite big for some of the customers. In China, Hitachi has small models of this equipment, but those models are not available in our region. 

Its pricing is a big issue for us. We are resellers, and we face some competition from other vendors. Hitachi doesn't always have a good position in terms of the price. Its user interface is also not as good as some of the other competitors, and it can be improved.

View full review »
Engineer at Secretaria de Educacion del Gobierno del Estado de Mexico

I have worked with this equipment for the last two years. When I worked before at Hitachi Data Systems, I worked as a data architect and designed complete solutions, so I had a lot of interaction with the clients. I handle the solutions, the capacity of the disks, configuration, initial setup, definitions of the DP pools, assigning the volumes, creating the entire SAN, etc. I also manage the SAN switches. I worked for Pearson, Sonic, Mobiistar, Macromer, Mynorte, and Santander. Every time, I created the whole environment, both open systems and mainframe. For example, at Ponavid, I created the whole solution, assigned the space, performed all the troubleshooting, and supported all the hardware and the performance.

View full review »
IBM FlashSystem: AI
VP - Head Enterprise IT Infrastructure at MIB

We have three administrators who take care of the different applications and data that are hosted on this storage. We don't perform maintenance on a daily basis. We may extract some stats for the performance and for evaluating capability. However, when it comes to maintenance, we probably work on it once or twice a month.

View full review »
Infrastructure Architect Supervisor; Solution Delivery Supervisor at a financial services firm with 1,001-5,000 employees

They can include the Amazon S3 file system protocol in upcoming releases. It is a cloud file system. IBM FlashSystem doesn't have this feature in the box for high-end or mid-range models. We have received requests for this from customers because we need to use S3 for EDI application storage.

At the beginning of every year, IBM releases firmware. When I find any bugs in the firmware during the year, I am unable to find any information from IBM regarding the bug. I need to open a ticket, and the IBM engineering team makes a patch only for me. This patch is not public. By creating a customized patch for a client, they don't really solve the issue for everyone. If multiple users have the same bug, IBM should upload the patch on the official website so that we can download it.

IBM FlashSystem has a monitoring tool in the box, but it is not advanced. I need a more advanced tool for more advanced equations and monitoring. All top three storage vendors, that is, EMC, IBM, and Pure Storage, don't have a powerful monitoring tool. To monitor our box to show the statistics for I/Os and latency, I need to pay for extra software. The built-in monitoring storage is not mature enough to handle all requests and generate all reports that I need.

They can include the functionality to stretch a cluster natively without using any additional boxes. In addition, there are some features that EMC has integrated with the box. These features are not available in IBM FlashSystem.

View full review »
Director Technical at a tech services company with 11-50 employees

There is always room for improvement, but IBM is less interested in on-premises storage solutions going forward.

They're highly focused on the cloud. I don't see IBM being as major a player as they used to be, because they are moving away from this and trying to move all their customers to the cloud.

Nothing really comes to mind for needing improvement. Some years ago, there would have been an answer to what could be better about this product, but nowadays virtually all of the vendors match each other's features.

More and more, we would like to see it become easier for customers to buy these solutions on a pay-per-use basis. That would certainly be an improvement.

Going forward, customers expect the same experience from on-premises deployments that they get in the cloud.

They want to pay per use rather than own equipment and get stuck with what they bought.

They want flexibility.

IBM does that in a few products, and more and more you see the business model changing in that direction. We'd like to see it in all IBM products.

View full review »
Senior System Administrator at a tech services company with 1,001-5,000 employees

The maintenance service and support from IBM are very good.

View full review »
Storage Manager at a financial services firm with 10,001+ employees

I have been in touch with IBM support a lot. Normally, they respond within an acceptable time with a sufficiently detailed answer. Around 90% of the time, you will get the answer straight back. In some rare cases, you need to ask for more: you send them an email asking them to clarify something or provide more detail on how to do a specific task, but normally they provide a satisfactory answer.

View full review »
Deputy Chief Technology Officer at a comms service provider with 51-200 employees

The initial setup was not complex at all. It's pretty straightforward and easy to handle.

The deployment was very fast as well and may have only taken about a day or two.

We had a team of three that handled everything. They don't just handle the storage, however; they handle the servers and network as well.

View full review »
Senior Systems Engineer at a tech services company with 1,001-5,000 employees

I like most of the features. Its speed, performance, and availability are valuable. We are implementing the data reduction technology the most.

View full review »
BT Area Champion/Trainer at a financial services firm with 5,001-10,000 employees

Recently, we deployed the SS9100. The code level deployed on that storage is not stable. We had an incident not too long ago: both controllers rebooted simultaneously, within 15 seconds of each other. There was a threshold value defined in the code level, and the system exceeded that threshold.

We logged a case with IBM. IBM did internal checks and provided an interim fix, which we deployed. The permanent fix will be available in the first quarter of 2021. It seems to be an issue on IBM's side. Obviously, we were surprised that both controllers rebooted. We faced downtime on our applications and services.

The issue we recently faced relates to the code level. Code levels should first be tested in IBM's labs and only then introduced for general release.

IBM should improve its data reduction development.

View full review »
Network and System Administrator at TWD Technologies Ltd.

The solution is primarily a file infrastructure. It contains all the virtual machines for our company.

View full review »
Hybrid IT Enterprise Executive at a tech services company with 11-50 employees

The initial setup is straightforward. 

It is well documented.

View full review »
Storage Consultant at E-Storage

The installation is straightforward. The solution is delivered with an operating system, which can be updated in about half an hour. 

View full review »
Technology Specialist at InfoTech Group

The initial setup was straightforward.

View full review »
Storage Infrastructure Engineer at Cambridge Health Alliance

We do the implementation and maintenance of the solution using our in-house team.

View full review »
NetApp EF-Series All Flash Arrays: AI
Senior Systems Engineer at Indra

There is a lot of room for improvement. What I don't like is that they do not create barriers between areas. Data management is software-based and there is no segmentation on the storage. That is the main problem. You cannot segment the data: you put the data there, but you don't know which disk it lands on. The information will be there, but there is no segmentation. Data segmentation needs improvement.

In future releases, I'd like to see federation and segmentation. Those are the big problems with NetApp at the moment. Compared to HP, Dell, and HPE 3PAR, NetApp cannot do federation, which is very important. We have to do remote replication and work with two or more storage sites in different locations. If I have one site plus a second or third site, that requires working federation, and NetApp cannot do this right now.

View full review »
Associate Executive - Technical Engineer at a tech services company with 51-200 employees

The All Flash Array is stable and highly available.

View full review »
IT Systems Engineer at Adaptive Solutions

This solution is very stable. We use it for our critical workloads and it is used on a daily basis.

View full review »
Director at a computer software company with 1,001-5,000 employees

The most valuable feature of this solution is the performance of the database access.

It's simple to operate and maintain. They also provide a guarantee on I/O throughput.

View full review »
Manager, Cloud workload Migration & Onboarding Lead at Globe Telecom

The initial setup for installation was straightforward. 

View full review »
Technical Advisor at Synnex Metrodata Indonesia

The solution is very good. It offers very good performance, and very good data services to customers. 

The ONTAP is excellent. 

SnapMirror is very useful. It allows the customer to see the entire replication relationship. It's one of the best features of the product.
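As an aside for admins who script this relationship view: recent ONTAP releases expose SnapMirror relationships over a REST endpoint (`/api/snapmirror/relationships`). The sketch below only parses a response payload locally; the sample JSON and the field selection are illustrative assumptions, so verify the payload shape against your ONTAP version's API documentation.

```python
import json

# Illustrative payload shaped like a response to
# GET /api/snapmirror/relationships?fields=source,destination,state
# (made-up SVM and volume names, not captured from a real cluster)
SAMPLE = json.loads("""
{
  "records": [
    {"source": {"path": "svm1:vol_data"},
     "destination": {"path": "svm_dr:vol_data_dst"},
     "state": "snapmirrored"}
  ],
  "num_records": 1
}
""")

def summarize_relationships(payload):
    """Return one 'source -> destination (state)' line per relationship."""
    return [
        f"{r['source']['path']} -> {r['destination']['path']} ({r['state']})"
        for r in payload.get("records", [])
    ]

for line in summarize_relationships(SAMPLE):
    print(line)
```

A real script would fetch the payload with authenticated HTTPS before summarizing it this way.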

The initial setup is pretty straightforward.

For the most part, the solution is stable.

The technical support has been pretty good for the most part.

View full review »
Dell EMC Unity XT: AI
Responsable de Production at Office National des Forets

We found the implementation to be very straightforward. It's not complex in any way.

The deployment didn't take too long. We had everything up and running in two days or so. It's pretty quick.

We have 12 technicians for the maintenance of all of our equipment in the enterprise. They would also handle any maintenance required for this product.

View full review »
Cloud Engineer at a tech services company with 51-200 employees

The built-in monitoring tools in the System > Performance tab are good, and from CloudIQ you can reach out to vCenter as well. ESRS (Call Home) is valuable on the service delivery side.

Remote code update support (interactive or not) is free of charge, as you wish; nonetheless, you are free to do it yourself, as updates are cumulative and retained at each new code level.

View full review »
Huawei OceanStor: AI
Senior Consultant at a tech services company with 11-50 employees

At first, we had some component failures - none that were critical because our system has built-in redundancy. We had a number of these failures at a component level, but that was quickly resolved through a firmware upgrade. We had some doubts we were going in the right direction by using the OceanStor solution, but all the problems got resolved through the firmware upgrade. I don't have much to say. Maybe our period of using it is too limited for me to say much about it.

View full review »
Project Manager at a tech services company with 11-50 employees

The initial setup isn't complex. It's quite easy and straightforward.

View full review »
Manager Infrastructure and Security (SCADA) at a government with 1,001-5,000 employees

Initial setup is very straightforward, very easy. We didn't have any challenges. 

View full review »
IT Solution Architect at Huatech a.s.

There are some small things in the solution that can be improved. Supporting software is one of them and the integration with mainstream solution technologies could be better. They are small issues and generally the technology functions well. It's not an issue caused by the vendor but rather due to external circumstances and the cessation of cooperation between Chinese and US companies. 

View full review »
Technical Lead at Computer Marketing Company Pvt Ltd

We have some Huawei OceanStor 9000 systems for our virtualization, and for block-level storage and NAS as well, plus HyperMetro on the 18560 and 6800. This solution is critical for our company. It is the main product in the line and serves the entire company. Our usage expands daily. We are partners with Huawei and I'm the technical lead.

View full review »
Technology Solutions Architect at a computer software company with 201-500 employees

We are a system integration company and we are working with Huawei OceanStor products. It is mainly used for virtualization.

View full review »
Chief Executive Officer at a consultancy with 201-500 employees

Huawei OceanStor is priced very well, but the pricing is a bit complicated.

You have a price list but have to wait for a discount, which depends on the distributor you get it from. This is not transparent; I would prefer a clear pricing policy.

I would prefer a price quote based on whether you are a reseller with silver or gold status, or an end customer.

The list prices are high and you get a large discount off the list. I would prefer a more realistic price list and a smaller discount.

View full review »
Huawei OceanStor Dorado: AI
Senior Storage Consultant at a tech services company with 51-200 employees

The logistics can be improved, because sometimes we have to wait a long time for the product to be delivered despite there being stock available in Europe. Some of our customers are discouraged by this long wait time.

The marketing for this product needs to be improved because it does not have enough exposure.

This solution does not support VMware VVols 2.0. However, I do not feel that this is necessary.

View full review »
Solution Delivery Expert at a tech services company with 11-50 employees

My company has an in-house team that handles the implementation and services for our clients. If the client purchases a maintenance contract then we take care of that, as well.

View full review »
Solution Delivery Expert at a tech services company with 11-50 employees

Huawei's technical support is very advanced. Customers can directly approach a Huawei technical person by logging a ticket and providing an email address. They will definitely respond in one or two hours.

The support is fast, but the customer needs to understand what data to provide to support.

View full review »
IT Service Manager at a financial services firm with 1,001-5,000 employees

We like that the solution is all-flash.

The solution has a sufficient amount of storage on offer.

The solution is very stable.

So far, we have found it to be quite scalable.

The initial setup is fairly straightforward and not overly complex. The installation and management are very easy.

The performance is very good. 

Technical support is quite good.

View full review »
Solutions Architect (Huawei & Lenovo) at Computer Marketing Company Pvt Ltd

There are plenty of features available in this solution.

View full review »
Senior Consultant at a tech vendor with 1-10 employees

The installation is straightforward. The team did not do a lot of training and was able to handle the installation without issues.

View full review »
Head of Enterprise Horizondal Division at Computech Limited

So far, technical support has been great. I have no complaints; they are helpful and responsive.

View full review »
Dell EMC SC Series: AI
Senior Consultant at a tech services company with 1,001-5,000 employees

Compellent's setup is straightforward.

View full review »
Technical Director at Allot Group

The initial setup is involved but straightforward. It takes approximately 45 minutes to configure.

View full review »
EMC Storage & Backup Implementation Specialist at a tech vendor with 1-10 employees

The setup was straightforward. I implement it for my customers. It can be deployed within an hour. 

You only need one person for the deployment. One person is more than enough. There's no dedicated team required.

View full review »
IT Director - Enterprise Storage and Data Protection at a manufacturing company with 10,001+ employees

We use it for various types of data, but mainly for virtual environments.

View full review »
Managing Director at Consult BenJ Ltd

The solution is used as shared storage for the ESX, VMware, or vCenter cluster. It hosts the virtual machines and provides space for virtual servers. The primary reason we use the solution is to host the core infrastructure: the virtual servers, including file servers, domain controllers, application servers, SQL Servers, etc. Basically, the servers that run the business.

View full review »
Senior Systems Consultant at a tech services company with 11-50 employees

It's fairly scalable, as long as you size it properly, to begin with. The expansion options are pretty good. Our clients for this product range in size from small to large businesses.

View full review »
Storage Architect at a healthcare company with 10,001+ employees

We use it for multiple databases. We use it for Oracle and for SQL. We also use it for file systems — Oracle, SQL, file system storage. Most of our use cases involve Oracle, SQL, VMware, and large file systems.

I am a storage architect. I have a storage administrator who works with me as well.

Internally, within our company, there are a few dozen employees using this solution. Externally, we literally have millions of people that hit that storage system every day.

As far as our database administrators, they're always looking at the storage performance. Some of them actually have read-only log-ins to the storage array itself. They can log in and look at directly what the storage performance is for their database.

Currently, we are not using this solution extensively because it's becoming a sunset solution. There's no option to increase usage. It's like saying you want to buy a '65 Mustang — no, you can't get one brand new, they don't make those anymore. There's no expansion being done because the product's no longer available. 

View full review »
Information Technology Operations Manager at Weber Metals

The technical support is great. We have 24/7 support with a four-hour response. They are responsive and they stay to help until the problem is resolved.

One time, we had a drive fail and we were notified before we even saw it on the device. Then, the new hardware was shipped to us the next day.

View full review »
Director at a tech company with 11-50 employees

We use this product for daily business and normal IT operations. We also use it for replication from one system to another, and for hosting HANA virtualization: those kinds of replication and IT applications. We use a hybrid version.

View full review »
Senior Consultant at a tech company with 11-50 employees

The solution is mainly used for primary storage. In more than half of the cases, we use Live Volume with synchronous replication and automatic failover. Its function is primary storage across two sites.

View full review »
Lenovo ThinkSystem DM Series: AI
Group IT Architect & Network Engineer at a engineering company with 11-50 employees

Scalability is available and could be done, but we bought everything that we needed at the beginning. The system covers our needs so we don't have any reason to improve, update, or scale.

View full review »
Technical Specialist at a tech vendor with 11-50 employees

In this solution, I like the option of clustering two storage systems together for HA availability. You can take two units from the DM Series, such as the DM5000 series, cluster them, and build a chained solution. This is a very important feature. If you wanted to build this with a brand such as EMC, you would pay a lot of money, because it requires buying equipment like a VPLEX, which is very expensive. With the DM Series, you only need the two units. It's a very good feature.

View full review »
IT Solutions Architect at nds Netzwerksysteme GmbH

It's difficult to calculate pricing on the solution. Lenovo does help us, as a partner, however, there are different types of storage at different price points, and also certain items that are built into the cost already. 

Normally, they break down and show you how they get to the final price. Typically, a client will tell you what they need and you run some calculations based on workloads. Once you have a plan that aligns with the customer's needs, you go to Lenovo for pricing, and you need to negotiate with them to try to lessen costs. 

Once you have decided on the costs, there are no extras beyond that which a customer would have to pay for. It's all one set negotiated cost.

View full review »
Lenovo ThinkSystem DE Series: AI
Solutions Developer at Next Dimension Inc.

The advice I would give to others looking into implementing this product is that it is a good product if it fits your needs. Products in this category are not all the same and they each have something unique, but ThinkSystem is a good offering and can work out well if it fits your use cases.

On a scale from one to ten (where one is the worst and ten is the best), I would rate this product an eight out of ten overall. I would give it that rating because it is a good product: it is stable and well supported. It just lacks some depth in the area of features. That isn't necessarily a strike against the product because, price-wise, it is a good value. If you want all the bells and whistles, Lenovo wants you to move up the ladder to something else they offer.

View full review »
IT Department - System Administration at a healthcare company with 501-1,000 employees

As we use VMware with only three hosts, we are not able to scale it too much, but it is scalable enough for our present needs.

The solution could be more scalable and I believe we could increase certain resources should the need arise. 

View full review »
Pure Storage FlashBlade: AI
IT Business Consultant, Presales Specialist and Solution Designer at Veracomp EOOD

Our use cases vary but usually we use the solution for cloud-based solutions. We use it for containers, which provide security on premises for our customers to test environments. Most of our customers are medium to large enterprises based in Bulgaria where we are located. We are resellers, distributors and system integrators. We have a partnership with FlashBlade. 

View full review »
Architecte technique at a energy/utilities company with 10,001+ employees

Compared to, for example, Hitachi NAS, the solution is not mature at all. It's just in its infancy as far as technology goes.

That means there are some features that just aren't available on the product yet. When we ask for customization or certain features, we'll get a response saying "it's not available yet" or "that's in the pipeline". We have a complicated enterprise, so we perhaps need more features than the average user, and in this sense the product is limited.

We're on a dark site. We don't have internet access. This isn't great for FlashBlade, which needs to be connected to the internet. It's a website, so it needs to be connected in order to provide reports. Therefore, reporting isn't available to us.

We'd love a better dashboard that offers more accurate metrics. We'd like more details about what is happening on the system, so we can notify the clients as necessary.

View full review »
Technical Consultant Storage at a tech services company with 51-200 employees

At the moment, I can't think of anything that needs to be improved; however, the feature that we're waiting on is better integration with the cell services. I know Pure has a company that's working on the cell system, but it's still not completely there yet.

View full review »
Program Manager at Máxima Medisch Centrum

The initial setup is very easy.

We require a team of two to maintain the solution.

View full review »
CTO at a tech services company with 201-500 employees

This solution is mainly used in a very performance-sensitive environment for enterprise software storage.

View full review »
Fresh Operations Manager at Jerónimo Martins

The initial setup is straightforward.

It took approximately 20 minutes to deploy.

View full review »
Cloud Solutions Architect at a tech services company with 10,001+ employees

It's just file IO, and we've used it for training AI systems. We also use it for network boot for a number of our diskless machines, and it has proven rock-solid on that front as well.

View full review »
HPE Primera: AI
Service Delivery Manager at a tech services company with 11-50 employees

In my opinion, HPE is making good progress and moving ahead with InfoSight, which is predictive analytics and artificial intelligence. I think that this is one of the selling points for their storage products and it comes at no extra cost.

InfoSight is built by gathering intelligence from all of their storage systems worldwide. Even if the AI is being used locally, here in Malta, InfoSight uses baseline data from the whole world to make predictions. It is really robust.

Cases are opened automatically and they give you trend analysis as well.

Anything that helps administratively for a data center is important, and Primera gives you both efficiency and effectiveness.

View full review »
Principal Consultant at a consultancy with 1-10 employees

The technical support is very well organized. There is a call-home feature where the hardware calls their NOC and reports any alerts or events. Then they have a pretty standard protocol for calling back. They're quite insistent: they'll send you emails, and if you don't acknowledge, they start calling.

The mechanics of the support are very well put in place.

View full review »
Head of IT Infrastructure Solutions at a tech services company with 51-200 employees

The initial installation, setup, and deployment were simple for Primera, versus other hyper-converged infrastructure.

HPE, much like Dell Technologies with its VxRail and HCI solutions, has done this job very well, because you can deploy and start up your services and your new infrastructure in one or two days: one day for checking the specs and one day for deploying the whole infrastructure. This is because the virtualization is already pre-installed inside the solution; the groundwork has already been done.

Each customer is looking for high-end solutions. They may decide to purchase the Primera solution partly because of its installation and deployment. Whether it takes two days, as is the case for HCI, or three days in the case of Primera, is not a problem for most customers. It depends on the customer and their mindset.

View full review »
Associate Vice President - IT at a transportation company with 1,001-5,000 employees

The most important thing that we had found when we purchased this product around six months back was its 100% availability and uptime.

View full review »
CTO at a financial services firm with 5,001-10,000 employees

The initial setup was straightforward.

The deployment process was easy and it took between three and four months to complete.

View full review »
Head of Hosting & LAN Services at Lanka Communication Services (Pvt) Ltd.

We really like HPE InfoSight. It is an AI-driven interface for hybrid cloud. It gives more insight into any virtualized infrastructure, such as performance issues and proactive recommendations, for example, alerts and items of that nature. It's been quite helpful so far.

The performance of the solution is excellent. It performs far better than you would expect.

The initial setup is very straightforward. It's not overly complex. 

It outperforms on latency. It's very fast; we're talking about 0.4 milliseconds. The faster the data access, the faster you get a response. This is really low latency, and it's great performance.

The dashboards are very good. It has a very user-friendly dashboard that is easy to understand, with no complex clutter or information overload. That said, it has deep integration into HPE InfoSight, which gives you a wealth of information, even more than you may want.

The customization capabilities are excellent.

View full review »
Technical Account Manager at a tech services company with 201-500 employees

We used HPE to do the full implementation of the solution. 

We have two engineers for the maintenance of the solution.

View full review »
Violin System 7000 Series: AI
Enterprise Solutions Architect at a tech vendor with 11-50 employees

The initial setup was pretty straightforward. It takes about half a day to one day for the initial deployment.

View full review »
Dell EMC PowerMax NVMe: AI
Presales Engineer Information System and Security at a tech services company with 10,001+ employees

The primary use case is data storage consolidation for mission-critical applications, like billing, the charging system, mobile payment, and the intelligent network. There is also virtualization and cloud infrastructure, where the customer uses many virtualization solutions, like Hyper-V, Oracle Virtual Machine, OpenStack, VMware, Solaris, Linux, Kubernetes, and Docker. Disaster recovery was also a main focus of the customer, to guarantee RPO and RTO. The last use case was a NAS solution through the eNAS provided by PowerMax; the previous eNAS hosted on the VMAX 10K had its limits in terms of file system size.

View full review »
Senior BDM at a tech services company with 51-200 employees

The deduplication of the solution is excellent and the compression is quite helpful. These are the most useful aspects of the solution for us.

The initial setup is quite straightforward.

Technical support has been excellent.

View full review »
VP Global Markets, Global Head of Storage at a financial services firm with 10,001+ employees

Uptime and availability are first and foremost. The deduplication and compression capabilities are also excellent, allowing us to be very efficient with the physical hardware that we need to deploy on-prem in order to fulfill our requirements. It has given us excellent value for money without compromising performance.

The solution's snapshot capabilities and replication are very good features. Snapshots are allowing us to quickly build analytical models directly from production data. This gives us amazing insights into market trends and allows us to build more effective trading algorithms. Replication offers us unparalleled levels of resilience.

The management overall is excellent. Dell EMC continues to build on very solid foundations, which have been evolving for over two decades. 

The REST APIs are great.

The solution exposes excellent automation opportunities.
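As a sketch of what that REST-driven automation can look like: Unisphere for PowerMax serves its REST API under `/univmax/restapi`. The helper below only builds a request URL; the host name, array ID, and API version are placeholders, and the exact path layout is an assumption to verify against the Unisphere REST documentation for your release.

```python
def sloprovisioning_url(host, array_id, resource, version="91"):
    """Build a Unisphere-for-PowerMax-style REST URL (path layout assumed)."""
    return (f"https://{host}:8443/univmax/restapi/{version}"
            f"/sloprovisioning/symmetrix/{array_id}/{resource}")

# Placeholder host and array ID, for illustration only:
url = sloprovisioning_url("unisphere.example.com", "000197900123", "storagegroup")
print(url)

# A real call would then authenticate and fetch JSON, e.g.:
#   import requests
#   resp = requests.get(url, auth=("user", "password"), verify=True)
#   storage_groups = resp.json().get("storageGroupId", [])
```

Wrapping the path construction in one helper keeps scripts readable when the same array is polled for capacity, performance, and provisioning data.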

We have found the performance to be very good so far.

View full review »
Storage Team Manager at a government with 10,001+ employees

There are so many ways it has helped. It provides efficiencies through compression and it provides high availability through its solid-state drives. We literally turn it on and it does its thing.

When it comes to storage provisioning, a lot of it has been automated. This was true even prior to PowerMax, back with the VMAX. The days of provisioning the mapping and masking, and doing all those things manually, are over. A lot of that is automated through their tools. Overall, that automation is saving us about four hours a week.

View full review »
Infrastructure Lead at Umbra Ltd.
  • The cost of the entire solution
  • Their dedupe rates
  • Ease of use
  • Simplicity

Data availability is very high. Data security is also very good. There are a lot of encryption methods available.

We use the solution’s NVMe SCM storage tier feature. There is almost no overhead or management time involved. It was kind of set it and forget it.

View full review »
Senior Solution Architect at Rackspace

We are a very large customer of Dell EMC. We have several different deployments or installations. The biggest use case is probably a multi-tenant or shared environment where we provide many petabytes of storage for multiple customers who utilize that same infrastructure. We are a managed services provider in the cloud sector so we have to deliver high performance storage for thousands of customers who have to be up all the time.

In general, there are a lot of different use cases: having large quantities of storage that is always available, because uptime is important, as is performance. As a service provider, we deliver storage on demand for our customers. This is important because we can adjust storage needs on a per-customer basis. Whether it be increases or decreases in storage, this platform allows us to do that very easily.

We are using the latest release.

View full review »
Pure FlashArray X NVMe: AI
Manager of IT Department with 201-500 employees

The initial setup was very straightforward. It took two weeks from start to finish.

View full review »
Managing Director at Dr. Netik & Partner GmbH

In the future, I would like to see integration with enterprise backup systems. They should have direct integration available using Pure APIs. Good candidates would be Rubrik and Veeam.

View full review »
VP Infrastructure & Security at a financial services firm with 51-200 employees

Fundamentally, we have more visibility to what is happening in the storage for the databases. We can determine if the problem is something that is bound by IO or the problem is related to the database structure itself. 

The amount of time that a DBA has to spend figuring out whether it is a physical problem versus a programmatic problem has been reduced significantly. Before moving to this solution, when the database was running slow, we were asked to check our disks, but we had no way of verifying that. It was a nightmare. Now, we have reports that we can send on a daily basis, and they know what their performance is like.

We can now ascertain that it is not a physical problem with the array that is causing the delays on the database. The DBAs can then look at the database and figure out various reasons or solutions, such as whether the table structure is the issue, whether they need to run more optimal queries, or whether they should change the way they are accessing the data. You can pretty much take the possibility that the hardware is the problem out of the equation.

View full review »
Senior Administrator/IT Systems & Cloud Operations at a comms service provider with 10,001+ employees

The software layer has to improve. The software is promising but not prominent.

We have performed more than 21 upgrades. We have four arrays and have had to upgrade their code.

There are several upgrades required, but we are slowly catching up to them.

There are not many drill-down options available. EMC is providing many reporting tools that are not available in Pure.

They need better reporting. Some of the tools are missing. EMC is a step ahead in that area.

The usage options at the host level are limited.

View full review »
Implementation and Support Engineer at PRACSO S.R.L.

Being able to have block and file on-site on the same appliance is quite useful.

The newer NVMe generation shows a really noticeable difference in quality versus the last generation. It's better in terms of latency, and it allows for much more I/O.

The initial setup was extremely simple and straightforward. 

The stability is quite good.

We've found the scalability to be excellent.

The price of the product isn't too high.

View full review »
IBM FlashSystem 9100 NVMe: AI
Senior Client Specialist at a tech services company with 201-500 employees

The initial setup is straightforward and it usually takes about a day to deploy.

View full review »
Microfinance at a financial services firm with 5,001-10,000 employees

The administration is easy and very straightforward. It is the same software for managing both flash and non-flash storage. It can be managed using either the GUI or the command-line interface.

It is a high-performance product that serves our requests well.

The IBM Professional Services give us technical support that is both timely and of high quality.

View full review »
General Manager at SinergyHard Ecuador

The high performance and high availability improved our overall processes.

View full review »
Information Technology Senior Administrator at Genpa

IBM's support is not good. I experienced a big problem where I opened the IBM Storage console and saw that something was broken. I called the call center and said, "I have a problem. My drive is not working." They wanted me to give them the serial number. I gave it to them, and they told me, "I cannot find your product. Your product is not here."

It was unbelievable. I had purchased it a week prior. I was the first person in Turkey to buy it. They said they couldn't help me. I ended up fixing it myself. 

View full review »
Dell EMC PowerStore: AI
CTO at Universita' degli Studi di Pisa
  • Flexibility
  • Performance
  • Ease of use

It also has some very good compression capabilities. 

We were looking for a solution that was flexible and easy to install in our VMware environment. PowerStore X is a type of VMware cluster that you install inside your environment. If you have a VMware environment, like we have in production, it's easy to install and use.

It enables us to add compute or capacity independently. We have also deployed some apps on PowerStore, even though the PowerStore we have is not the biggest one you can buy. One of the main characteristics of PowerStore is that it is like another piece of VMware, so you can run applications on top, applications that have direct access to the storage. The ability to add compute or capacity independently is great because it adds more flexibility to our environment. You are not adding only storage, but you're adding some not-so-big computing capability. You have the possibility of adding some virtual machines, running NVMe storage, and that is a real plus for this solution.

In addition, PowerStore's built-in intelligence for helping to simplify IT operations is incredible. When we approached PowerStore, we had an idea that it was an innovative platform, but we were still impressed by the capability of the solution. It's probably one of the best pieces of storage that we have installed here.

View full review »
Chief Information Officer at a computer software company with 5,001-10,000 employees

We've had no issues with the machines since turning them on.

The machine supports "six nines" of availability. Anything they could do to push it closer to "seven nines" of availability would be extremely beneficial.

View full review »
Founder and CEO at Desktoptowork

The way you're able to manage all your PowerStores as one solution is very good for us. PowerStore enables you to federate or cluster multiple appliances with automated load balancing. In terms of management, it's helpful that we can do it from a single interface where we're able to manage multiple PowerStores. When you have multiple PowerStores, it works intelligently by running the workloads based on the needs of the infrastructure.

The solution's machine learning and automation work well when it comes to optimizing resources. It does inline deduplication, which means that the net amount of storage you have available is bigger. It can do all kinds of optimization in the system itself, and that works very well.

View full review »
Co-Founder at a tech services company with 11-50 employees

We have a team of four engineers that do the implementation and maintenance of the solution.

View full review »
NetApp NVMe AFF A800: AI
Director Global Storage at a healthcare company with 10,001+ employees

The solution's initial setup was not complex at all. We found the process of the implementation to be very straightforward. It was easy to execute on.

The deployment took less than two weeks. It was a fairly quick process.

We don't really need to have any staff for maintenance.

View full review »
Pavilion HyperParallel Flash Array: AI
Network Manager at a transportation company with 1,001-5,000 employees

The rail system that Pavilion uses to mount up into a standard Dell or APC cabinet extends further back than normal rails, and the rails cover up the zero-U PDU slot. So, I don't like the rail system that comes with the device. That is my biggest complaint.

View full review »
Manager of Production Systems at a media company with 10,001+ employees

The solution's performance and density are excellent.

Typically, there is a trade-off. You can have incredibly dense storage in a small footprint sometimes, but the trade-off to that is you need a lot of horsepower to access it, which ends up counterbalancing the small footprint. Then, sometimes you can have very fast access to a storage array, but that usually requires a more comprehensive infrastructure.

This kind of balance, to somehow fit it all into one chassis, in a 4U server rack, is unheard of. You have the processing power for accessing the data and almost a petabyte of flash accessible.

It's a very small footprint, which is important to our type of industry because we don't have massive servers.

We have benefited from this technology because we were able to centralize a lot of workflows. There is normally a trade-off, where you can have very fast local storage on the computer, but in a collaborative environment that's counterproductive because it requires people to share files and then copy them onto their system in order to get the very fast local performance. But with Pavilion, basically, you get that local NVMe performance but over a fabric, which makes it easier to keep things in sync.

We have been able to consolidate storage and as part of a multi-layer storage system, it plays a very important part. For us, it cuts down on costs because we essentially get an NVMe tier that's large enough to hold everyone's data, but the other thing for us is time and collaboration. Flexibility is worth a lot to us, as is creativity, so having the resources to do that is incredibly valuable.

If we wanted to do so, Pavilion could help us create a separation between storage and compute resources. It's one of those things where, in some environments, such separation is natural, and in other environments, there's an inclination to minimize the separation between compute and data. But to that point, Pavilion has the flexibility to allow you to really do whatever you want.

In that sense, you have some workloads where compute is very close to the data, such as iterative stuff, whereas we have some things where we simply want bulk data processing. You can do any of that but for us, that type of separation is not necessarily something we are concerned with, just given our type of workflows. That said, we have that flexibility if necessary.

This system has allowed us to ingest a lot of data in parallel at once, and that has been very useful because it's a parallel system. It's really helped eliminate a lot of the traditional bottlenecks we've had.

Pavilion could allow for running additional virtual machines on existing infrastructure, although in our case, the limitation is the core densities in our hardware. That said, it is definitely useful for handling the storage layer in a lot of our VMs. The problem is that the constraints of our VM deployments are really in just how many other boxes we have to handle the cores and the memory.

View full review »
Manager of Platform Software at a healthcare company with 51-200 employees

Performance-wise, this product is faster than pretty much anything we've seen. In terms of density, our in-house basis for comparison is not very extensive, but based on our research and on what we have actually used, the density is much higher than anything else we've seen.

We can basically store the entire company's data inside of one unit, when the unit is properly configured. As it is now, it's equivalent to replacing three or four racks of equipment. The density is incredibly high.

This solution provides us with flexibility in our storage operations. It's software-defined storage, so we can allocate capacity however we want. It uses thin provisioning, which is convenient for us, and all sorts of other enterprise features that come with it that we haven't used quite yet. But, we can imagine we'll be taking advantage of them as the usage against the unit rises.

Our use case is primarily about performance, so consolidation has not saved us in terms of costs or capital expenditures. Our implementation of the product is an add-on to what's currently at the company. We've taken data out of the existing infrastructure and just moved it. The migration has allowed us to use it a lot faster, but we haven't gone through a consolidation exercise where we've gotten rid of the old equipment and now just depend on the new unit.

Absolutely, we are able to run more virtual machines on our existing infrastructure. With respect to storage management, we've reduced the amount of work that was required. In fact, we can eliminate most of the staff that had been dedicated to doing that on the old equipment. Now, we need very few people to administer the entire company using Pavilion. We can basically have one person manage all of the company's engineering data.

In terms of cost savings, in our situation, the cost we're saving is not headcount but rather, engineering time spent doing those kinds of activities. Where we may have had to spend a lot more time administering storage and IT equipment, we now have to spend much less time doing it, even though the headcount dedicated to IT is the same. Basically, opportunity costs have improved dramatically, as we've been able to assign staff to more value-added tasks.

We probably had three people spending between 25% and 50% of their time doing related activities, whereas now, we have one person spending perhaps 10% of their time.

View full review »
HPC CTO at a tech services company with 10,001+ employees

The initial setup was straightforward and it was ready to go within one day.

View full review »
Hitachi Virtual Storage Platform E990: AI
General Manager - IT Operations at a tech services company with 201-500 employees

One improvement I am hoping for in the next release is unified storage. The other storage systems we use are unified, but E990 is not unified. Only block storage is available. To get the same functionality with the E990, you have to configure it through the NAS setup, which adds some additional costs. I would also like more capacity in a single box — expand the space up to 4 petabytes.

View full review »
IntelliFlash: AI
Lead Systems Engineer at a retailer with 5,001-10,000 employees

The initial setup is straightforward. 

View full review »
IT Manager at an agriculture company with 1,001-5,000 employees

We used the solution for basically all operations. We had our ERP, Citrix, email (about 18 terabytes of email), et cetera, on the hybrid arrays, and our ERP system, which demands performance, on the solution's all-flash.

View full review »
Zadara: AI
CTO at Pratum

We have a significant amount of data that is stored and retained. We have a rolling 365 days' worth of data. There are about 35,000 to 45,000 events per second that come into this solution and then get stored, long-term. That data also needs to be readily accessible, meaning that it can be searched on at any point in time. We have real-time security metrics that are run against that volume of data, so we need the data to not only be persistent and stored long-term, but also on fast storage arrays that can be readily accessed in a minimal amount of time. We've leveraged Zadara to house that storage as well as have it available to us so that we can query it very quickly.

View full review »
CEO at Momit Srl

We use iSCSI and Object multi-protocols. These simplify our operations a lot because otherwise we would need a lot of different products or interconnections. With Zadara Storage Cloud, all of this is just one type of connection. It works only with Ethernet, which means no Fibre Channel nor other protocols, like InfiniBand. It is just Ethernet, which is easy and simple. You can just use the protocols that you need. Today, this means we are not using NFS. But, never say never. Probably tomorrow or one day, if someone asks us to implement something mounted via NFS, we are ready to go. This is good because we don't need to buy additional hardware or features. The best part is the fact that the cost covers everything, so you don't have to activate features by license; e.g., we don't need to pay more to activate NFS or CIFS even though we are not using them today. We still have them. So, we are free to use them whenever we want, which is good.

All our customers report the same story when we ask for a case study. With Zadara Storage Cloud, you simplify the management, which is absolutely true. 

Zadara Storage Cloud's agility is the most important part because all customers want agility today. Everyone wants quick answers, support, and features as well as the ability to provide storage with just some clicks or a simple request. 

Zadara Storage Cloud is elastic in all directions. We create a lot of events (marketing events, technical events, and public speaking) with VMware. They have always been available to sponsor, participate, or just integrate their experience. Even with features, we requested some specific features for the Italian market, then they just put them into the roadmap, which was great.

View full review »
CTO at a tech services company with 51-200 employees

Having dedicated cores and memory absolutely provides us with a single-tenant experience. We have use cases in both categories, but we have customers who have a completely dedicated and private environment and it is particularly important for them. For example, if they are dealing with medical or patient data then they have a dedicated core and a dedicated disk, which is essentially their own private cloud.

It is important that we also have the flexibility for some of the lower-end services that we can have multi-tenant storage because not all of our clients require completely dedicated cores and disk space.

It is very important to us that Zadara provides various drive options because we're more of a niche cloud player, and we don't compete with Azure, AWS, or other large providers. We tend to have bespoke solutions, so having the different drive options gives us the flexibility we need to do that.

Zadara has improved our business with the main benefit being that we generate quite a bit of revenue every month from the services that we provide others, and I don't think that would have been possible without this product. What really attracted us to Zadara was the fact that they have the pay as you grow model.

As a new cloud provider, say three years ago, it would have been quite a large investment for us to take on. Not just in the hardware but also in the skills and the knowledge required to set up and operate it. Zadara was a key enabler for us to be able to enter the cloud business because if it wasn't for them, it would have taken us a lot longer. We would have had to invest in more people and as well in more hardware. As it is now, we are generating revenue and it gives us some credibility with our larger customer base.

Although we only use cloud storage services, Zadara is an agile solution that offers compute and networking, as well. This agility means that they are very quick at turning things around, which is key for us because we're able to implement solutions for customers quite quickly. Ultimately, we can start bringing in revenue for sales quite fast, as opposed to some of our traditional business. 

If you take fiber, for example, it could take up to three months to realize the value of the sale before it actually starts to build. Whereas with Zadara, it is so agile and so quick and easy to set up that even in a few days, we can turn a sale around into billing. This quick conversion from sale to revenue is also important for the business.

We didn't have a cloud before we had Zadara so, in that regard, it has increased our performance by 100%. In fact, we have been able to redeploy people and budgets to more strategic projects because it helped us to enter the cloud environment and to offer new services to our customers.

View full review »
Chief Technology Officer at Harbor Solution

Our initial application was probably the simplest one. We were sunsetting a product, but we needed to do some movement and we needed some additional storage, but we knew that what we needed was going to change within six months as we got rid of one product and brought in another. To handle this, we started deploying Block storage with Zadara, which we then changed to Object storage and effectively sent back the drives related to the Block storage as we did that migration. This meant that we did not have to invest in new technology or different platforms but rather, we could do it all on one platform and we can manage that migration very easily.

We use Zadara for most of our storage and it provides us with a single-tenant experience. We have a lot more customer environments running on it and although we don't use the compute services at the moment, we do use it for multi-tenant deployment for all of our storage.

I appreciate that they also offer compute services. Although we don't use it at the moment, it is something that we're looking at.

The fact that Zadara provides drive options such as SSD, NL-SAS, and SSD Cache is really useful for us. Much like in the way we can offer different deployments to our customers, having different drive sizes and different drive types means that we can mix and match, depending on customer requirements at the time they come in.

With available protocols including NFS, CIFS, and iSCSI, Zadara supports all of the main things that you'd want to support.

In terms of integration, Zadara supports all of the public and private clouds that we need it to. I'm not sure if it supports all of them on the market, but it works for everything that we require. This is something that is important to us because of the flexibility we have in that regardless of whether our customers are on-premises, in AWS, or otherwise, we can use Zadara storage to support that.

I would characterize Zadara's solution as elastic in all directions. There clearly are some limits to what technology can do, but from Zadara's perspective, it's very good.

With respect to performance, it was not a major factor for us so I don't know whether Zadara improved it or not. Flexibility around capacity is really the key aspect for us.

Zadara has not actually helped us to reduce our data center footprint but that's because we're adding a lot more customers. Instead, we are growing. It has helped us to redeploy people to more strategic projects. This is not so true with the budget, since it was factored in, but we do focus on more strategic projects.

View full review »
Platform and Infrastructure Manager at a tech services company with 1,001-5,000 employees

We are a disaster recovery company, and we use Zadara as the storage platform for all of our disaster recovery solutions. We do not make use of the compute and networking services they offer; rather, we only use the storage facility.

Our main environment is Zadara Storage, and then we have multiple VMware and Hyper-V virtual clusters that run the services we provide to our customers. We've also got numerous recovery platforms as well, onto which we can recover customers' environments. Zadara is a key underpinning of that because, without that common storage layer and the services running on top of it, we wouldn't have a business to run.

It's key for us, as a DR specialist, that we have the confidence that all of our systems and services are available all the time. Picking a vendor, be it Zadara or any other vendor, is really important to us because we have to trust that they're going to be there 24/7, every day.

View full review »
Chief Information Officer at a tech services company with 201-500 employees

The fact that we have offsite storage that is provided to us using iSCSI as a service has allowed me to offload certain storage-related workloads into Zadara. This means that when I have a planned failover, if I need to maintain the local storage that I have in my data center, I simply shift all of the new incoming traffic into Zadara storage. None of my customers even know that it has happened. In this regard, it allows us to scale in an infinite way because we do not have to keep adding more capacity inside our physical data center, which includes power, networking, footprint, and so on. The fact that Zadara handles all of that for me behind the scenes, somewhere in Virginia, is my biggest selling point.

With its dedicated cores and memory, we feel that Zadara provides us with a single-tenant experience. This is important for us because we are aware that in the actual physical environment, where Zadara is hosting our data, they have other clients. Yet, the fact that we have not had any kind of performance issues, and we don't have the noisy-neighbor problem, makes it feel like we are the only ones on that particular storage area network (SAN). It's really important for us.

Zadara provides drive options such as SSD and NL-SAS, as well as SSD cache, and this has been important for us. These options allow us to decide for different volumes, what kind of services we're going to be running on them. For example, if it happens to be a database that requires fast throughput, then we will choose a certain type of drive. If we require volume, but not necessarily performance, then we can choose another drive.

A good thing about Zadara is you do not buy a solution that is fixed at the time of purchase. For instance, if I buy an off-the-shelf storage area network, then whatever that device can do at the time of purchase, give or take one or two upgrades, is where I am. With Zadara, they always improve and they always add more functionalities and more capacities.

One example is that when we became customers, their largest drives were only nine terabytes in size. A year or so later, they improved the technology and they now have 14 terabyte drives available, which is almost a 50% increase. It is helpful because we were able to take advantage of those higher densities and higher capacities. We were able to migrate our volumes from the nine terabyte drives to the 14 terabyte drives pretty much without any downtime and without any kind of interruption to service. This type of scalability, and the fact that you are future-proofing your purchase and your operations, is another great advantage that we see with Zadara.

As far as I know, Zadara integrates with all of the public cloud providers. The fact that they are physically located in the vicinity of public cloud regions is a major selling point for them. From my perspective, it is not yet very important because we are not in the public cloud. We have our own private cloud in Miami, and not part of Amazon or Azure. This means that for us, the fact that they happen to be in Virginia next to Amazon does not play a major role. That said, they are in a place where there is a lot of connectivity, so in that regard, there is an advantage. We are not benefiting from the fact that they are playing nice with public clouds, simply because we are not in the public cloud, but I'm sure that's an advantage for many others who are.

Absolutely, we are taking advantage of the fact that they integrate with private clouds.

Zadara saves me money in a couple of ways. One is that my operational costs are very consistent. The second is that the system is consistent and reliable, and this avoids a lot of the headaches that are associated with downtime, reputation, and all of that. So, knowing that we have a reputable, reliable, and consistent vendor on our side, that to me is important.

It is difficult to estimate how much we have saved because it wouldn't be comparing apples to apples. We would be buying a system versus paying for it operationally and I don't really have those kinds of numbers off-hand. Of course, I cannot put a price tag on my reputation.

View full review »