
Read reviews of IBM FlashSystem alternatives and competitors

Haseeb Sheikh
Assistant Manager IT Infrastructure at a comms service provider with 5,001-10,000 employees
Real User
Top 5
Simplified storage provisioning for us, enabling us to assign any volumes in two to three minutes
Pros and Cons
  • "The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure."
  • "There is also room for improvement in the PowerMax architecture and hardware itself. They should design the PowerMax on the basis of PCIe 4.0. I would like to see the possibility of an NVMe drive that operates on PCIe 4.0 and not PCIe 3.0."

What is our primary use case?

Our primary use case for PowerMax is hosting our VMware environment, with VMware SRM connected to both arrays. The PowerMax does the SRDF replication for VMware SRM, and some of the workload on it is for the physical environment, which consists of Unix, AIX, and Sun Solaris. In addition to that, we have physical Windows and Linux servers as well. We have 1,200-plus virtual machines hosted on PowerMax.

We have two PowerMax 8000s, each deployed at a different site. The capacity of the PowerMax at the primary site is 500 terabytes, and approximately 200 terabytes at the DR site.

How has it helped my organization?

We are coming from the VMAX environment where the storage provisioning was a bit complex. We had to create volumes manually from the command line. But with the introduction of the PowerMax, it's a piece of cake for us. We can assign whatever volumes we want in two to three minutes. Storage provisioning has become very simple for us and is a real improvement.
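
As a rough illustration of what scripted provisioning can look like, here is a minimal sketch of creating volumes through an array management REST API. The host name, endpoint path, payload fields, and credentials are hypothetical placeholders, not the actual Unisphere for PowerMax API, so treat it as a sketch rather than a working recipe.

```python
# Illustrative sketch only: automating volume provisioning against a storage
# array's management REST API. The endpoint path and payload are hypothetical
# placeholders, not the exact Unisphere for PowerMax API.
import requests

UNISPHERE = "https://unisphere.example.local:8443"   # hypothetical management host
ARRAY_ID = "000123456789"                            # hypothetical array ID

def create_volumes(storage_group: str, num_volumes: int, size_gb: int) -> None:
    """Create a storage group with the requested volumes (illustrative only)."""
    payload = {
        "storageGroupId": storage_group,
        "numberOfVolumes": num_volumes,
        "volumeSizeGb": size_gb,
    }
    resp = requests.post(
        f"{UNISPHERE}/api/provisioning/array/{ARRAY_ID}/storagegroup",  # placeholder path
        json=payload,
        auth=("admin", "password"),  # placeholder credentials
        verify=False,
    )
    resp.raise_for_status()
    print(f"Provisioned {num_volumes} x {size_gb} GB volumes in {storage_group}")

if __name__ == "__main__":
    create_volumes("vmware_prod_sg", num_volumes=4, size_gb=512)
```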

What is most valuable?

The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure.

Along with that, the NVMe response time is very good. We used to have a VMAX 20K, but we have just upgraded and moved two or three generations ahead to PowerMax, and the response time is great. Because we are coming from a hybrid storage scenario, the performance of NVMe is a huge upgrade for us. The 0.4-millisecond response time means our applications work great and we are seeing huge performance improvements in our VMware and physical environments.

Regarding data security, Dell EMC has introduced the CloudIQ solution with the PowerMax environment, which enables live monitoring of the telemetry and security data of the PowerMax array. CloudIQ also has a feature called Cybersecurity that monitors for security vulnerabilities or security events occurring on the array itself. That feature is very helpful. We have been able to do some vulnerability assessment tests on the array, which have helped us to resolve issues regarding data security and security vulnerabilities. We are not using the encryption feature of the PowerMax, because we didn't order the PowerMax configuration for it.

CloudIQ also lets us monitor and manage each connected environment. A good feature in CloudIQ is the health score for each connected infrastructure. It gives you timely alerts and informs you when a health issue occurs on the arrays and needs to be fixed. Those reports and health notices are also sent to Dell EMC support, which proactively monitors all the infrastructure and will open service requests on its own.

In terms of efficiency, the compression we are currently receiving is 4.2x, which is very good efficiency. We are storing 435 terabytes of data in just 90 TB. In addition to what I mentioned about the NVMe performance, which is very good, we were achieving 150k IOPS on the VMAX, but on the PowerMax the same workload is hitting 300k-plus IOPS. That is sufficient for the workload and means the application is performing as required, according to the SLAs as defined on the PowerMax.
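
For anyone sanity-checking efficiency figures like these, the data reduction ratio is simply the logical data stored divided by the physical capacity consumed. The short sketch below uses illustrative numbers rather than the exact figures above.

```python
def reduction_ratio(logical_tb: float, physical_tb: float) -> float:
    """Data reduction ratio = logical data stored / physical capacity consumed."""
    return logical_tb / physical_tb

# Illustrative numbers only: a 4.2x ratio means roughly 420 TB of data fits on 100 TB of flash.
print(f"{reduction_ratio(420, 100):.1f}x")
```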

When it comes to workload congestion protection, we have not faced any congestion yet in our environment. We have some spikes on Friday evenings, but they are being handled by PowerMax dutifully. It can beautifully handle up to 400k IOPS, even though it is only designed for 300k IOPS. That is another illustration of its good performance.

What needs improvement?

The CloudIQ features still need to be improved because CloudIQ does not support PowerProtect DD capacity, although it is working well overall.

Their mobile app also still needs improvement. 

In addition, the web GUI is good and shows all the related reports, but I would like to see more granularity in the reports, and CloudIQ reporting should also be available in the web GUI.

There is also room for improvement in the PowerMax architecture and hardware itself. They should design the PowerMax on the basis of PCIe 4.0. I would like to see the possibility of an NVMe drive that operates on PCIe 4.0 and not PCIe 3.0. The design could be very much better if they did some R&D and introduced a version based on PCIe 4.0.

For how long have I used the solution?

I manage the IT infrastructure of a telco company in Pakistan. I look after the servers and storage infrastructure and I've been with the company for the last eight years. Recently, we have deployed PowerMax, PowerProtect DD and PowerScale Isilon, with the help of Dell EMC and their partners.

What do I think about the stability of the solution?

In terms of availability, Dell EMC claims PowerMax will give you six nines. We have not faced a single issue in the last six months with PowerMax. The storage has been very stable for us and it's performing well. It's giving us the right amount of uptime and availability.
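
To put "six nines" in perspective, the quick calculation below shows the downtime budget that availability level implies per year; it is generic arithmetic, not a Dell EMC figure.

```python
# Rough arithmetic only: the yearly downtime allowed at a given availability level.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines, availability in [(5, 0.99999), (6, 0.999999)]:
    downtime = SECONDS_PER_YEAR * (1 - availability)
    print(f"{nines} nines -> about {downtime:.0f} seconds of downtime per year")
# 5 nines -> about 316 seconds (~5.3 minutes); 6 nines -> about 32 seconds
```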

What do I think about the scalability of the solution?

The NVMe scale-out capabilities were a factor we had in mind when we were evaluating the PowerMax against competitors, including IBM and Huawei. The scale-out capabilities are very important. We have 4 TB of cache with four directors right now, and we can add capacity in the future. If that capacity is met and we need to add more engines for our workload, we can do that very easily.

We are not currently using the NVMe SCM storage tier feature, but that is in the pipeline. If there is a high-demand workload in the future, we will consider the SCM storage.

How are customer service and support?

Dell EMC's support for PowerMax has worked great for us. If we have to open a severity-one issue, we call their support line; otherwise, the support portal works great. Whether we open a severity-two or they open a service request with the proper severity themselves, the infrastructure and storage support are very good. They will also escalate an issue to the next level when required.

There is some margin for improvement in that they should develop an application for support where you could see support tickets and escalate them if you want.

How would you rate customer service and support?

Positive

How was the initial setup?

I was involved from the initial design to the product evaluation from different vendors, and I was involved in the whole migration project through to its conclusion.

Dell EMC dedicated project managers and members of its professional services team to handle all of our migration from VMAX to the PowerMax without any hassle. And all of our data was successfully migrated within 1.5 months. It was a very good experience for us. There was no downtime and it was a totally non-disruptive migration for VMware, AIX, Windows, and Linux. Only some of the Solaris environment experienced a disruption because we had to reboot the servers. The rest of the migration was non-disruptive and the deployment was very good for us.

For maintenance and admin of the solution, two people report to me. They manage the PowerMax series along with me as the team lead. On the user side, there are different stakeholders. We provision storage to them and then they map the storage to various OSs for VMware, Linux, Solaris, AIX, and Windows. That team is a bit larger and has separate departments, with approximately 25 to 30 people.

Which other solutions did I evaluate?

We evaluated PowerMax against IBM FlashSystem 9200R and against the Huawei Dorado V6. At that time, Huawei did not have the VMware certification due to US policies and enforcement, but Dorado now has VMware certification. That's why we rated the PowerMax highest.

What other advice do I have?

The solution is very stable and performs well. If you are doing research, look at the architecture of all the available vendors. Evaluate every storage solution with respect to architecture, the NVMe version they are using, and the hardware which they are using.

Out of 10, I would give PowerMax a nine. It has worked very well for us.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Consultant at a tech services company with 10,001+ employees
Consultant
Top 20
Straightforward to set up, good performance for database applications, and supports volume encryption
Pros and Cons
  • "We recently started using the volume encryption feature, which is helpful because there are some federal projects that require data at rest to be encrypted."
  • "We would like to have a feature that automatically moves volumes between aggregates, based on the performance. We normally need to do this manually."

What is our primary use case?

The main purpose of the AFF is to work with applications that require high-intensity I/O operations. For example, we run some open-source DBs, as well as Oracle, that require high-intensity I/O. We also have a high-performance computing setup.

We have two locations. In the first location, we have an AFF cluster. In the second location, we have an AFF cluster that works in combination with ASAs.

Our environment is primarily made up of open-source applications. 

How has it helped my organization?

We are not using the NetApp cloud backup services. Instead, we have a storage solution on the back end and AFF on the front end. In this setup, we have high I/O with a low storage expenditure.

Our company is mainly concerned with software development and we have VMs as part of our infrastructure. We have a large number of VMs and they require a large data capacity, although we don't know which ones require high-intensity input and output. The reason for this is that some scenarios demand a high level of I/O, whereas, with others, the demand is low. We have AFFs set up at the front end, and at the backend, we have ECD boxes, which are the storage grid.

We treat the system as a fabric pool setup. When a high level of I/O is required, the data will be stored on NetApp AFF at the front end. We created a policy so that pooled data will move automatically to the lower-end capacity units, which are configured from the storage unit.
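
The tiering decision described above can be pictured as a simple age-based rule: data that has not been read within a cooling period moves from the performance (flash) tier to the capacity (object-store) tier. The sketch below is a conceptual illustration of that policy only, not NetApp's FabricPool implementation, and the 31-day threshold is an assumed example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

COOLING_PERIOD = timedelta(days=31)  # illustrative threshold, not a vendor default

@dataclass
class Block:
    name: str
    last_read: datetime
    tier: str = "performance"  # starts on the flash tier

def apply_tiering_policy(blocks: List[Block], now: datetime) -> None:
    """Move blocks not read within the cooling period to the capacity tier."""
    for block in blocks:
        if block.tier == "performance" and now - block.last_read > COOLING_PERIOD:
            block.tier = "capacity"  # cold data goes to the object-store back end

blocks = [
    Block("db-hot-extent", last_read=datetime(2022, 1, 10)),
    Block("old-vm-image", last_read=datetime(2021, 10, 1)),
]
apply_tiering_policy(blocks, now=datetime(2022, 1, 15))
for b in blocks:
    print(b.name, "->", b.tier)
```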

NetApp helps to accelerate some of the demanding enterprise applications that we have, in particular, our database applications. 

NetApp AFF has helped to simplify our infrastructure while still getting a very high performance. Prior to setting up AFFs, we had latency issues. Now, things are more balanced, including the volumes that are on SAS or SATA.

Using NetApp AFF has helped to reduce support issues, including performance-tuning. About a year and a half ago, we were experiencing some performance issues. Lately, this has not been the case, although occasionally, we still have problems. We are exploring whether it is the server hardware or an issue with VMware and drivers.

The ONTAP operating system has made things somewhat simpler, although we don't use it very much. I normally work on the CLI so for me, it is not a big difference. That said, as features are released with the latest versions, I review them to stay updated.

We also use NetApp's StorageGRID and the combination of it with AFF has reduced our overall cost while increasing performance. We see benefits on both sides. 

What is most valuable?

The most valuable feature is its ability to handle high-intensity read and write operations. It works very well in terms of this.

We recently started using the volume encryption feature, which is helpful because there are some federal projects that require data at rest to be encrypted.

SnapMirror is another feature that we use, but we don't have MetroCluster set up. SnapMirror is used for replication across multiple geographical data centers. In these locations, we have production data, and we are exploring how to minimize the bandwidth while improving DR capabilities. With respect to DR, we don't use the AFF in secondary nodes.

What needs improvement?

In some situations, we would like to have an additional storage shelf but do not want to use an SSD. Unfortunately, AFF won't work in conjunction with SATA. Having these together might give some benefit in terms of capacity.

We would like to have a feature that automatically moves volumes between aggregates, based on the performance. We normally need to do this manually.

In some cases, we would like to have the ability to expand our units to handle two additional target ports. As of now, we are using four or eight target ports, which come with the A300 model. There are situations where we need to extend this but we have limited slots available. 

For how long have I used the solution?

We have been using NetApp AFF for the past six years.

What do I think about the stability of the solution?

The stability of this solution is fine.

What do I think about the scalability of the solution?

The scalability is seamless. Without any downtime, we can upgrade and scale up.

As of now, we have a 40TB SSD front-end fabric pool capacity. At the back end, we have a two-petabyte storage grid. We are not experiencing any performance-related issues, although we have encountered a few time sync-related problems.

Which solution did I use previously and why did I switch?

I have also worked on an IBM DS8000 series and some similar products from EMC.

IBM had released the 8700 with the AFF configuration. However, I was with another company at the time. The majority of my experience is with NetApp using the CLI, but with the IBM product, I was using the GUI. I prefer the CLI in both systems.

With respect to the pros and cons between the vendors, it is difficult for me to judge. Each filesystem has benefits with respect to the vendor and the technology that they use.

How was the initial setup?

The initial setup is straightforward. It is not a big, complex job.

We are in the process of setting up and transitioning to a hybrid cloud environment, but it takes some time. We are currently exploring it. We have thousands of servers in AWS and Google Cloud, and we have an internal VMware cloud as well.

What about the implementation team?

The NetApp team helped us with the deployment and also helps with the patches.

What was our ROI?

We invested a lot of money in our NetApp AFF setup, but we have a huge capacity. We balance it that way.

What's my experience with pricing, setup cost, and licensing?

NetApp AFF is an expensive product, although not when compared to other vendors.

Which other solutions did I evaluate?

We chose the A300 model based on recommendations from existing users. There are lower-end versions, such as the A250 and A260, but we didn't explore them.

What other advice do I have?

Based on my experience, whether I would recommend this product depends on what the budget is. We have to determine whether we are achieving the right cost for the right product because the budget is the primary objective. Some cases may not require the capacity. Perhaps, for example, software-defined storage can manage it. To decide, we need to see what the application is, how much demand it needs, and what kind of performance it requires. All of these things need to be reviewed before we decide which products suit which situation.

Overall, NetApp AFF is a good product.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner
SajithEruvangai
IT System Specialist - Operations & Infrastructure at an insurance company with 1,001-5,000 employees
Real User
User-friendly, fast performance, good data compression and deduplication capabilities
Pros and Cons
  • "The management features are well organized and they have a very good dashboard."
  • "Data reduction is an area that needs improvement. There is a garbage collection service that runs but during that time, system utilization increases."

What is our primary use case?

We are in the health industry and use this product for block storage. We have VMware hosted on our Pure FlashArrays and we have a Citrix environment. We also have Oracle running as our SQL database. Our VMs run from Pure.

We have also done a couple of PoCs with the Blade solution for using the file share system.

How has it helped my organization?

One of the requirements from our developers and test and development team is that from time to time, they want to clone the production environment. We are able to accomplish this within seconds, using a script. This is one of the best parts that I have seen. This feature is not available with other storage solutions.
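
A clone script of the kind described above might look roughly like the sketch below, assuming Pure's purestorage Python REST client; the array address, API token, volume names, and the exact copy_volume signature are assumptions that should be checked against the SDK documentation.

```python
import purestorage  # Pure Storage's REST 1.x Python client (assumed available)

# Placeholder address and token; real values would come from the array admin.
array = purestorage.FlashArray("flasharray.example.local", api_token="REPLACE-ME")

# Hypothetical production/dev volume pairs to clone.
VOLUME_PAIRS = [
    ("prod-sql-data", "dev-sql-data"),
    ("prod-sql-logs", "dev-sql-logs"),
]

for source, dest in VOLUME_PAIRS:
    # overwrite=True replaces the existing dev volume with a copy of production
    array.copy_volume(source, dest, overwrite=True)
    print(f"cloned {source} -> {dest}")
```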

What is most valuable?

Performance-wise, it is giving us a very good result.

We are happy with the data compression and deduplication capabilities.

The interface is user-friendly and very easy to use.

Taking a snapshot and cloning data is very easy to do. We can create a script and it will clone the environment. Similarly, we can replicate the environment from one site to another site, and we can restore the environment where we choose.

The management features are well organized and they have a very good dashboard. For example, I can see all of the utilization and it has port monitoring capabilities. With other storage vendors, multiple tools are required for this, and there is an additional charge.

What needs improvement?

Data reduction is an area that needs improvement. There is a garbage collection service that runs but during that time, system utilization increases.

Integration with VMware tools can be improved.

The reporting can be better.

For how long have I used the solution?

We have been using Pure Storage FlashArray for between five and six years.

What do I think about the stability of the solution?

This is a very stable product and we haven't had any downtime. We use this product extensively and I have seen that we have a 90% I/O load in our environment.

What do I think about the scalability of the solution?

This is a flexible system that is easy to scale.

We initially purchased two FlashArray systems. One of them was small or midsized, and the other was high-end. Then, later, we started upgrading. As per the Evergreen contract, we get free upgrades. Every three years, we get a new controller upgrade, free of cost.

We have also upgraded our capacity and now everything is on the X series. We have four FlashArrays in total and all of our database users are connected to them. The infrastructure and database teams are directly involved with it.

How are customer service and support?

The response from the technical support team is very good. We have not found any difficulties with their ability or engagement.

Which solution did I use previously and why did I switch?

We have worked with solutions from HPE, IBM, and Hitachi. We don't work with any of these vendors now. We switched because Pure Storage is much easier to manage. It is also more stable and very easy to work with.

For example, there is no shutdown procedure. If you want to power down the environment then you just unplug the power and that's it. Once you reconnect the power, it is up. With legacy storage, there is a shutdown procedure. You have to shut down the host, then the SAN switch, then the storage.

With legacy storage, there is also a procedure to bring it up. You have to power up the enclosures, then the controller, then the SAN environment, and then the server. We had to follow a long set of steps with more dependencies.

After a power outage, the storage devices from the other vendors did not always come back online. For example, we implemented a PoC with the IBM FlashSystem and a power outage occurred. The management tool crashed and did not come back up. We had to wait for IBM engineers to come and fix the issue. Whereas, with Pure, when the power came back on, the system came back online immediately.

The other storage systems were not as user-friendly. For example, I had a Hitachi G600 and I wanted to extend the block capacity. I had to spend between 30 minutes and one hour to complete it. It's quite complex. With Pure, that would be taken care of in seconds by going to the console, selecting the volume, and resizing it.

How was the initial setup?

The initial setup is straightforward and very easy.

The day that we received the box, we unpacked it, racked it, and configured it. The next day, we were able to utilize it for production.

Upgrading the hardware, such as performing a controller upgrade, is a seamless process. We are planning to do a major upgrade and it will be done on the fly.

What about the implementation team?

We engaged Pure to assist us with our implementation, and our experience with them was very good. The technical team came onsite for the deployment. If we have any problems then they will return to our site to help.

Only one person is required for deployment and maintenance.

What's my experience with pricing, setup cost, and licensing?

You can pay extra for Evergreen support, which gives you free upgrades when new features are introduced.

Which other solutions did I evaluate?

We completed a PoC with most of the leading brands.

What other advice do I have?

My advice for anybody who is considering this product is that I can recommend Pure. We were the first customer for Pure Storage in the UAE. It's stable, reliable, and you can trust it.

The biggest lesson that I have learned from using Pure FlashArray is that it's user-friendly, easy to manage, and very flexible. You can scale out and it's easy to upgrade. The upgrade process is not complex and it can be done on the fly, without any disruption.

My main complaint is that the garbage collection mechanism draws heavily on the resources. They have integration with VMware tools, although they can improve it slightly, and I would also like to see some improvements in the reporting.

We have been using it heavily and all of our people are happy with it. This includes the DBA team. Whenever we have a requirement, it's very easy and can be done within seconds. With our previous storage solutions, we had to spend more time looking into problems and they were not user-friendly.

I would rate this solution a ten out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Storage Manager at a financial services firm with 10,001+ employees
Real User
Top 20
User-friendly graphical user interface and simplifies reporting for easy management
Pros and Cons
  • "There are a lot of screens for easy management where you can change settings. After a few years, the important settings improved with an upgrade; every vendor has its own way of upgrading its systems."
  • "HPE 3PAR StoreServ has limited flexibility in building replication solutions. There are limitations to the number of IOPS the system can do. It's not bad, as it is doing its job. However, if you need a full toolbox for the application (periodic replication in synchronous or asynchronous mode, three-site or four-site topologies with supported cascading), you would have to buy an IBM product. It also takes a few hours to one day to upgrade the system, and sometimes it takes more time because some HPE 3PAR StoreServ 20000 systems have eight nodes. If you do an upgrade, you do it node by node, and every node might take more than an hour."

What is our primary use case?

We use HPE 3PAR StoreServ for data storage. Hewlett Packard Enterprise (HPE) included a compression guarantee in the contract: if you can compress data very well, you don't need as much capacity in your systems, and if the data could not be compressed to a certain degree, they would put some extra capacity in the systems. We ended up buying that borrowed capacity, and they added separate storage boxes to our environment to keep up with the bigger growth in capacity. Despite that, it is a wonderful system with an excellent graphical user interface, and new functions are still being rolled out.

How has it helped my organization?

I've seen a lot of data storage systems, and it's the only storage system where you can watch the application response time over time; it keeps measuring it. We have set some thresholds on our end, and it has a very good graphical user interface and reporting.

What is most valuable?

There are a lot of screens for easy management where you can change settings. After a few years, the important settings improved with an upgrade; every vendor has its own way of upgrading its systems.

What needs improvement?

HPE 3PAR StoreServ has limited flexibility in building replication solutions. There are limitations to the number of IOPS the system can do. It's not bad, as it is doing its job. However, if you need a full toolbox for the application (periodic replication in synchronous or asynchronous mode, three-site or four-site topologies with supported cascading), you would have to buy an IBM product.

It also takes a few hours to one day to upgrade the system, and sometimes it takes more time because some HPE 3PAR StoreServ 20000 systems have eight nodes. If you do an upgrade, you do it node by node, and every node might take more than an hour.

For how long have I used the solution?

I have been using HPE 3PAR StoreServ for the past seven years.

What do I think about the stability of the solution?

Last year, after the summer, HPE had to investigate replication groups that kept getting stopped, and it took a lot of time to find out what was happening; we still don't know what is happening there. The message itself seems quite clear: after replicating from A to B, it states that B is not responding very well.

There is a timeout, and it stops the replication group because there is no stability or consistency at that moment. That might be a negative, but the last time it happened was, I think, in November of last year.

What do I think about the scalability of the solution?

At one point, some remote copy groups stopped working and we had to use the disaster recovery plan, because in production we replicate everything from A to B and then split it up into remote copy groups that gather together data sources and clusters. If one of those remote copy groups stops, you don't have DFP anymore and you have to restart them. Last year, after restarting one of those replication groups, we had some performance issues because it tries to get back in sync as soon as possible using all the resources, so we had to plan it carefully outside of business hours.

How are customer service and technical support?

We have proactive datacenter care, with what I call a storage advocate: we can send every question to them and we get quick answers. They also help us find out whether new releases are available, among other services; for now, they have more insight on that. They have better sources sometimes, and sometimes I have better sources than them, but they do a great job, and they also assisted us with the compression issue we had at the beginning.

What's my experience with pricing, setup cost, and licensing?

It is quite difficult to comment on the cost. At one point, I was the project lead working on this with some other people; the price was important and we had the compression calculated. At that moment, the price was fair, because cost was one of the things driving the product, as HPE 3PAR StoreServ was competing with Hitachi and the IBM A9000, which I'm not sure is still available.

We have done total cost of ownership calculations over five years, and we also ask for cost prices for the sixth and seventh year so that we can get some insight into what happens after those five years. We have some systems that are five years old and we keep them because it's flash data storage; they still use almost three-terabyte solid-state drives, and the support cost is not that high. We'll have a look after that. I see other things happening on the Hitachi boxes with all those license fees. This is also a positive for HPE 3PAR StoreServ: everything is in the license. When we bought the systems, that was the case, and I've been reading that you can buy the remaining licenses separately. If you buy a system that will not be replicated to another system, you get a license without the replication software.
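
A total cost of ownership comparison along these lines can be reduced to a simple sum of the purchase price and the yearly support costs for however long the system is kept; the sketch below uses made-up figures purely for illustration.

```python
# Simple illustration of a total-cost-of-ownership comparison; all figures are
# made-up placeholders, not real quotes.
def tco(purchase_price: float, support_per_year: list) -> float:
    """TCO = purchase price + support cost for each year the system is kept."""
    return purchase_price + sum(support_per_year)

five_year = tco(400_000, [20_000] * 5)
seven_year = tco(400_000, [20_000] * 5 + [35_000, 45_000])  # support usually rises later
print(f"5-year TCO: {five_year:,.0f}, 7-year TCO: {seven_year:,.0f}")
```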

What other advice do I have?

HPE 3PAR StoreServ is not at end of life or end of support through the direct channel, but HPE Primera has now replaced it, and I hope they get all of the HPE 3PAR functionality into it. I recall that HPE 3PAR and HPE Primera have support for volume plugins, and it will be a big gain if they can implement volumes on their systems, because that kind of release works much better than at the datastore level.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Solutions Developer at Next Dimension Inc.
MSP
Top 10
Mostly typical of production storage for virtual machines but with fewer features and a lower price
Pros and Cons
  • "Fewer features is not necessarily a negative as it leads to simpler operation and lower price."
  • "The features are limited."

What is our primary use case?

We sell the Lenovo ThinkSystem DE 4000 or 6000 series storage arrays. The clients that we sell the products to are mostly manufacturers and the use case for ThinkSystem is always production for storing and operating their virtual environment, which means the units are almost always used for VMware. It is just typical storage for production and virtual machines.  

How has it helped my organization?

It does not improve the way we function as we use it with clients. It gives us another product to offer to fill particular client needs.  

What is most valuable?

As far as features go, the product does not really have a lot of them. The products we sell are Lenovo's low to mid-range stuff. They are not really feature-laden products with added value that make them stand out from other solutions. The lack of complexity and the lower price — because of that lack of complexity — may be the most valuable features.  

The interface is all good.  

What needs improvement?

You cannot buy a lot of options for these devices. There are a lot of things that it does not do. One thing it does not do that we would like is easy tiering: if you have spindles and you want to cache a couple of terabytes of storage on SSD, that is something we would like to see, but currently it does not have the capability to do it.

The thing it comes down to is that Lenovo needs to add some more of the software features that would allow the ThinkSystem line to compete with other products that we sell. Other than that, it is what it is.  

For how long have I used the solution?

We have had experience with the product for as long as Lenovo has been out with it. I do not know exactly how long that is, but it is maybe seven or eight years.  

What do I think about the stability of the solution?

I think these Lenovo products are very stable. We generally do not have issues with their products going down.  

What do I think about the scalability of the solution?

ThinkSystem is scalable. Our clients are small to medium-large businesses and, within that range, it works well.

How are customer service and technical support?

When we had to use them, the technical support was very good. If I had to rate them, I would give them an eight out of ten. Maybe they need to work on consistency of response time, but the bigger concern comes down to actual know-how. A lot of times, we have to defer to IBM for support in order to get an answer.

Which solution did I use previously and why did I switch?

We previously used a different solution from IBM.  

How was the initial setup?

The installation and the initial setup for these is very simple.  

What's my experience with pricing, setup cost, and licensing?

The pricing is fine for what the product does.  

What other advice do I have?

The advice that I would give to others looking into implementing this product is that it is a good product if it fits your needs. Products in this category are not all the same and they all have something unique, but ThinkSystem is a good offering and can work out well if it fits the use cases that you have.

On a scale from one to ten (where one is the worst and ten is the best), I would rate this product as probably an eight out of ten overall. I would give it that rating because it is a good product. It is stable. It is well supported. It just lacks some depth in the area of features. That ends up not necessarily being a strike against the product because, price-wise, it is a good value. If you want all the bells and whistles, Lenovo wants you to move up the ladder to something else they have to offer.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller