Top 8 Hyper-Converged (HCI) Tools

Nutanix Acropolis AOS, VxRail, VMware vSAN, StarWind Virtual SAN, StarWind HyperConverged Appliance, HPE SimpliVity, NetApp HCI, Cisco HyperFlex HX-Series
  1. Nutanix Acropolis AOS is easy to use, integrates with other hardware configurations, and is simple to manage. The most valuable feature is the integration of all parts in Prism Element, the browser-based management tool.
  2. Updating the product has been very easy. VxRail performs well in the VDI environment.
  3. One of the valuable features of vSAN is that it is a universal technology that can be deployed on any server or hardware. Competitors such as Nutanix provide AOS, which can be deployed only on certified hardware; vSAN does not require certified hardware.
  4. We found vSAN simple to set up, easy to configure and manage, and it allows us to achieve storage redundancy. It is extremely stable.
  5. We opted for 24-hour monitoring and support, which has already paid for itself. We also substantially reduced network complexity by eliminating the standalone SAN, which has allowed us to concentrate on improving other areas of our network.
  6. The most valuable features of SimpliVity are the built-in backup and immunity to ransomware. Our clients are very comfortable with the single management of the complete stack. The creation of VM systems is also very fast.
  7. The most valuable feature, currently, is the density of the system as hardware. I'm able to leverage that density and retire bigger hardware that requires more space, cooling, and power, which yields cost savings.
  8. Overall, the solution is extremely easy, flexible, and secure. The scalability of the product is quite good overall, as long as you plan correctly from the outset.

Advice From The Community

Read answers to top Hyper-Converged (HCI) questions. 523,535 professionals have gotten help from our community of experts.
Hi community, what are the key factors that businesses should take into consideration when choosing between traditional SAN and hyper-converged solutions?
author avatarTim Williams
Real User

Whether to go 3 Tier (aka SAN) or HCI boils down to asking yourself what matters the most to you:

- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- Single number to call support (HCI)
- Opex vs Capex
- Pay-as-you-grow (HCI)/scalability
- Budget cycles

If you are a company that only gets budget once every four or five years, and you can't get incremental capital expenditures for storage and the like, pay-as-you-grow becomes less viable; HCI is designed with pay-as-you-grow in mind. That doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times, and you have no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes capital budgets don't work out that way.

There are also some workloads that will work better on a 3-tier solution than on HCI, and vice versa. HCI works very well for anything except VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload consisting of a single large VM will require essentially two full HCI nodes to run, and will need more storage than compute. Video workloads come to mind here: body cameras for police, surveillance cameras for businesses and schools, graphic editing. Those workloads don't reduce (deduplicate or compress) well, and are better suited to a SAN with very few features, such as an HPE MSA.

HCI runs VDI exceptionally well, and nobody should ever do 3 Tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management.

3-tier requires complex management and time, as you have to manage the storage, the storage fabric, and the hosts separately, with different toolsets. This also leads to support issues, as you will frequently see the three vendors' support teams blame each other. With HCI, you call a single number and they support everything, so you can drastically reduce your opex by simplifying support and management. If you're planning for growth up front and cannot pay as you grow, 3-tier will probably be cheaper. HCI gives you the opportunity to not spend capital if you end up missing growth projections, and to grow past planned growth much more easily, since adding a node is much simpler than expanding storage, networking, and compute independently.

In general, it's best to start with HCI and work to disqualify it rather than the other way around.
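Tim's pay-as-you-grow point can be made concrete with a rough sketch. All prices and the growth shortfall below are hypothetical assumptions for illustration, not figures from any vendor:

```python
# Pay-as-you-grow (HCI) vs. buy-up-front (3-tier), when growth
# projections are missed. All numbers are illustrative assumptions.

def three_tier_cost(projected_units, unit_cost=30_000):
    """3-tier: capacity for projected growth is bought on day one,
    so you pay for the full projection regardless of actual growth."""
    return projected_units * unit_cost

def hci_cost(actual_units, unit_cost=35_000):
    """HCI: nodes are added only as growth materializes
    (per-unit cost assumed slightly higher than 3-tier components)."""
    return actual_units * unit_cost

projected, actual = 10, 6   # growth projections missed by 40%
print(three_tier_cost(projected))   # spent up front regardless of growth
print(hci_cost(actual))             # only the nodes actually deployed
```

Under these assumed numbers the incremental model comes out ahead when growth falls short; if growth matches or exceeds the projection, the cheaper up-front unit cost can reverse the result, which is exactly the trade-off described above.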

author avatarShivendraJha
Real User

There are multiple factors to look at when selecting one over the other.
1. Price: HCI is cheaper if you are refreshing your complete infrastructure stack (compute/storage/network); however, if you are only buying individual components, such as compute or storage alone, then 3-tier infrastructure is cheaper.
2. Scalability: HCI is highly and easily scalable.
3. Support: with a 3-tier architecture, you have multiple vendors/departments to contact for support on the solution, whereas with HCI you contact a single vendor for all your issues.
4. Infrastructure size: for a very small infrastructure, a 3-tier architecture based on iSCSI SAN can be a little cheaper. However, for a medium or large infrastructure, HCI comes out cheaper every time.
5. Workload type: if you are running VDI, I strongly recommend HCI. Similarly, for a passive secondary site, 3-tier could be fine. Run benchmarking tools to understand your requirements.

I am sure HCI can do everything though.

author avatarreviewer1234203 (Pre-sales Engineer at a tech services company with 11-50 employees)
Real User

There are so many variables to consider.

First of all, keep in mind that a trend is not a rule; your needs should be the basis of the decision, so you don't have to choose HCI just because it's the new kid on the block.

To start, think with your pocket. SAN is costly if you are building the infrastructure from scratch: cables, switches, and HBAs all cost more than traditional LAN components, and SAN requires more experienced experts to manage the connections and issues. But SAN has particular benefits in sharing storage among servers; for example, you can keep primary and backup data on the same SAN and use specialized backup software to move data between storage components without directly impacting server traffic.

SAN cabling has details to consider, such as distance and speed: the quality (purity) of the fiber is critical to the achievable distance, the greater the distance the lower the supported speed, and transceiver costs can be your worst nightmare. On the other hand, SAN can connect storage boxes hundreds of miles apart, whereas the LAN cables used by HCI have a 100-meter limit, unless you add a WAN, repeaters, or cascaded switches to connect everything, each of which adds risk to the scenario.

Think about required capacity: do you need terabytes or petabytes? A few dozen TB can be fine on HCI, but if you are dealing with PBs, think SAN. What about availability? Several commodity nodes replicating around the world, within the latency rules, can be handled by HCI; but if you need the highest availability while replicating large amounts of data, choose a SAN.
As for speed, if that is your pain point: LAN for HCI starts at a minimum of 10 Gb and can rise to 100 Gb if you have the money, while SAN tops out at 32 Gb, and your storage controller must match that speed, which can drive the cost sky-high.

Scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world. With SAN storage you have a limited number of replicas between storage boxes; depending on the manufacturer, you can normally have at most four copies of the same volume distributed around the world, and scalability is capped by the controllers. SAN is a scale-up model; HCI is a scale-out model.

Functionality: SAN storage can handle things like deduplication, compression, and multiple kinds of traffic (file, block, or object) in hardware; HCI handles only block storage and needs extra hardware to accelerate processes such as dedupe.

HCI is a way to share storage over the LAN, with dependencies such as the hypervisor and software or hardware accelerators. SAN is the way to share storage with servers; it is like a VIP lounge, where exclusive server guests share the buffet and can draw on the performance of hundreds of hard drives to support the most critical response times.

author avatarBart Heungens

It all depends on how you understand and use HCI.
If you see HCI as an integrated solution where storage is integrated into servers, and software-defined storage is used to create a shared pool of storage across compute nodes, performance will be the deciding factor between HCI and traditional SAN. Most vendors' HCI solutions write data two or three times for redundancy across compute nodes, so there is a performance impact on applications due to the latency of the network between the nodes. Deploying 25Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that defines performance.

Low-latency application requirements might push customers to a traditional SAN in this case. If you use HCI for ease of management through a single pane of glass, note that many storage vendors now deliver plugins for server and application software, eliminating the need to use legacy SAN tools to create volumes and present them to servers. Often it is possible to create a volume directly from within the hypervisor console and attach it to the hypervisor servers. So for this scenario, I don't see a reason to choose one over the other.

Today there is a vendor (HPE) combining a traditional SAN with an HCI solution, calling it dHCI. It gives you an HCI user experience, independent scalability of storage and compute, and the low latency that is often required. In time, I expect other vendors to follow the same path and deliver these kinds of solutions as well.

author avatarKrishna Randadath

Business-wise, through direct savings across architecture, hardware, software, backup, and recovery, hyperconvergence can transform IT organizations from cost centers into frontline revenue drivers. A major issue with traditional IT architecture was that as complexity rose, the focus shifted from business problems to tech problems. The business's focus should be on what IT can do for the bottom line, not what the bottom line can do for IT.

Capital expenditures (CAPEX): the one-time purchase and implementation expenses associated with the solution.
Operational expenditures (OPEX): the running costs of an IT solution, better known as the total cost of ownership (TCO), incurred for managing, administering, and updating the existing IT infrastructure.
Considering the separate areas of cost reduction discussed above, organizations can evaluate the expense differential between their traditional infrastructure and the HCI environment.

Hyperconvergence helps meet current and future needs, so it’s essential to calculate the TCO accurately. The TCO of a hyperconverged infrastructure includes annual maintenance fees for data centers and facilities, telecom services, hardware, software, cloud systems, and external vendors. Other costs include staff needed for deployment and maintenance, staff training and efforts to integrate with existing and legacy systems.
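The CAPEX/OPEX split above can be tallied in a simple model. Every figure below is a hypothetical placeholder, not a quote from any vendor; substitute your own numbers:

```python
# Illustrative 5-year TCO comparison: traditional 3-tier vs. HCI.
# All dollar figures are hypothetical assumptions.

def five_year_tco(capex, annual_opex, years=5):
    """TCO = one-time CAPEX + OPEX recurring every year."""
    return capex + annual_opex * years

three_tier = five_year_tco(
    capex=500_000,        # SAN + servers + fabric switches (assumed)
    annual_opex=120_000,  # maintenance, admin staff, power/cooling (assumed)
)
hci = five_year_tco(
    capex=350_000,        # initial node cluster (assumed)
    annual_opex=90_000,   # single support contract, less admin time (assumed)
)
print(f"3-tier 5-year TCO: ${three_tier:,}")
print(f"HCI    5-year TCO: ${hci:,}")
```

The point of the model is not the specific totals but that OPEX dominates over a 5-year horizon, which is why the management-simplification savings discussed above weigh so heavily in the comparison.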

HCI overcomes the enormous wastage of resources and budgets common in the early phases of traditional infrastructure deployments because their scale dwarfs business needs at the time of purchase. HCI lends itself to incremental and granular scaling, allowing IT to add/remove resources as the business grows.

author avatarManjunath V
Real User

Scalability and agility are the main factors to consider when deciding between SAN and HCI. SAN infrastructure demands a huge amount of work when it reaches end of support or end of life. Budgeting and procurement frequency also play a role.

Also, the limitation of HCI to a single datastore in a VMware environment is a problem when disk or data corruption happens.

author avatarKashifNaseer

If things are already working in a traditional way and not much growth is expected, then SAN is suitable. However, if you are on a cloud journey or already virtualized, then HCI suits better.

author avatarJOAO BONNASSIS

There are two kinds of SAN (FC SAN and IP SAN); both use the SCSI-3 protocol:
- FC-SAN achieves a bandwidth of 16 and 32 Gbps.
- IP SAN achieves a bandwidth of 1, 10, and 25 Gbps.

SAN generally uses CI (Converged Infrastructure): “n” COMPUTE nodes, “n” NETWORK nodes, and “n” STORAGE nodes.

HCI (Hyper-Converged Infrastructure) uses only an Ethernet network (1, 10, and 25 Gbps), over the SCSI-3 protocol. Each node is connected to an aggregate of nodes (a cluster of up to 64 nodes), and every node provides all three functions (compute + network + storage). These nodes are managed by a hypervisor (VMware, Nutanix, ...).

If STORAGE capacity grows rapidly, HCI (Hyper-Converged Infrastructure) will not be the most suitable solution!

The two main problems are the network and the SCSI-3 protocol: high latency, and a limit of 25 Gbps!
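The bandwidth ceilings above matter most when large datasets must be replicated or rebuilt across nodes. A rough back-of-the-envelope sketch (the 70% usable-line-rate factor and the 100 TB dataset size are assumptions, and protocol overhead and latency are ignored):

```python
# Rough time to copy a dataset across nodes at a given link speed.
# Figures are illustrative; real throughput depends on protocol
# overhead, latency, and controller limits.

def replication_hours(dataset_tb, link_gbps, efficiency=0.7):
    """Hours to move dataset_tb terabytes over a link_gbps link,
    assuming `efficiency` of the nominal line rate is usable."""
    bits = dataset_tb * 8 * 10**12                 # TB -> bits (decimal)
    seconds = bits / (link_gbps * 10**9 * efficiency)
    return seconds / 3600

print(f"100 TB over 25 GbE : {replication_hours(100, 25):.1f} h")
print(f"100 TB over 32G FC : {replication_hours(100, 32):.1f} h")
```

Even under these optimistic assumptions, moving 100 TB takes half a day on either fabric, which is why rapid storage growth strains HCI clusters that must rebalance data across nodes over the LAN.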

What are some important factors to keep in mind and to compare when choosing between HCI solutions? 
author avatarShivendraJha
Real User

1. Support

2. Migration or Conversion process from existing solution

3. Cost 

4. Hardware compatibility 

5. Integration with all critical and non-critical solutions

6. Cloud readiness

author avatarGaurav Vyas
Real User

Availability, support, cost, compatibility and scalability, cloud readiness.

author avatarSteffen Hornung
Real User

Can we do a proof-of-concept?

Does the solution support my critical/legacy application?

Does the solution support my current backup solution?

How does migration work, and what downtime is to be expected (e.g., based on hours per terabyte)?

Which other aspects are possible with this solution?

How responsive is vendor support?

It does not matter whether the solution works natively with the hypervisor or through virtual machines that do the magic.
Native hypervisor integration will likely mean vendor lock-in.
What matters most is "how well does it work?" See this through with my first question, the proof of concept.

author avatarVaibhav Saini

> Integration with the existing running apps and solutions.

> Support Parameters.

> Ease of Scaling up and out the solution.

> Cost of the overall solution.

> Technical architecture of the solution.

> Integration with Cloud services/Solution should be cloud adaptive.

> Solution should be truly ready for the complete SDDC platform.

author avatarMichael Samaniego
Real User

There are several solutions on the market that claim to be HCI; however, the best factor is native integration with the hypervisor, without the need for additional virtual machines that "perform HCI". In the several cost-efficient scenarios I have run, across different hardware manufacturers, I can personally say that the best option is VMware vSAN. Its main strength is the correct management of hardware resources.
author avatarAbdelrahman Mahmoud

For me, the most important component of an HCI solution is the software-defined storage (SDS), so you always need to take great care when comparing SDS offerings from different HCI vendors.
Check the following points:
- Data locality
- SDS offerings (block storage, file storage, object storage)

Ariel Lindenfeld
There are a lot of vendors offering HCI solutions. What is the #1 most important criteria to look for when evaluating solutions? Help your peers cut through the vendor hype and make the best decision.

Cost metrics, Rob: capex and opex savings, and even a TCO, should be accounted for.

1) Operational efficiency assumptions based on assessments. These should yield time to deploy, VM-to-admin ratios, device consolidation, and power usage.
2) Most important to me are the recovery objectives and how well the solution sustains operation without data loss. Recovery Point Objective (RPO) measures how far back you can go without loss; Recovery Time Objective (RTO) measures how quickly mission-critical systems are brought back online.
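The RPO/RTO distinction can be made concrete with a small check. The 15-minute and 1-hour targets below are hypothetical thresholds, not recommendations:

```python
from datetime import datetime, timedelta

# Hypothetical targets: lose at most 15 min of data (RPO),
# restore service within 1 hour (RTO).
RPO = timedelta(minutes=15)
RTO = timedelta(hours=1)

def meets_objectives(last_backup, failure, restored):
    """RPO check: data written after last_backup is lost.
       RTO check: downtime runs from failure until restored."""
    data_loss_window = failure - last_backup
    downtime = restored - failure
    return data_loss_window <= RPO, downtime <= RTO

fail = datetime(2021, 7, 1, 12, 0)
ok_rpo, ok_rto = meets_objectives(
    last_backup=fail - timedelta(minutes=10),  # 10 min of writes lost
    failure=fail,
    restored=fail + timedelta(minutes=45),     # service back in 45 min
)
print(ok_rpo, ok_rto)
```

Here both objectives are met; stretch the backup gap past 15 minutes and the RPO check fails even though the restore itself was fast, which is exactly the "how far back can you go" vs. "how fast are you back" distinction.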

Since you will find yourself managing VMs, you might consider a cost analysis there as well. (Remember, you won't be managing devices any longer.)

Your benefits in using HCI are:
1) A VM Centric Approach
2) A software-defined datacenter- ( less replacement, better utilization, pay as you go)
3) Data Protection
4) Lower costs
5) Centralized and even automated self-management tools.

author avatarBart Heungens

For me an HCI solution should provide me:
- ease of management: one console does it all, no experts needed, a cloud experience but with on-premises guarantees
- invisible IT, don't care about the underlying hardware, 1 stack
- built-in intelligence based on AI for monitoring and configuration
- guaranteed performance for any workloads, also when failures occur
- data efficiency with always-on dedupe and compression
- data protection including backup and restore
- scalability, ease of adding resources independent of each other (scale up & out)
- a single line of support

author avatarBharat Bedi (SolarWinds)

While there is a long list of features/functions to look at for HCI, in my experience of creating HCI solutions and selling them to multiple customers, here are some of the key things most customers boil it down to:

1) Shrink the data center:
This is one of the key customer pitches all the big players make: "We will help you reduce your carbon footprint with hyperconverged infrastructure." It is worth understanding how much reduction they are actually offering. Can 10 racks come down to two, fewer, or more? With the many data-reduction technologies included, and compute and storage residing in the same nodes, that kind of reduction is possible, especially if you are sitting on legacy infrastructure.

2) Ease of running it:
The other reason for buying and running HCI is "set it and forget it". Look not only at how easy the system is to set up and install, but also at how long it takes to provision new VMs, storage, etc. It is worth probing your vendors about what they do for QoS, centralized policy management, and so on. Remember that most HCI companies' portfolios differ at the software layer, and some of the features mentioned above are bundled in their code and work differently with different vendors.

3) Performance:
This can be an architecture-level difference. In the race to shrink the hardware footprint, you could take a performance hit. Here is an example: when you switch on deduplication and compression, how much does it affect overall CPU performance, and thereby the VMs? Ask your vendors how they deal with it. Some of them offload such operations to a separate accelerator card.

4) Scaling up + scaling out:
How easy is it to add nodes, both for compute and for storage?
How long does adding a node take, and is there any disruption to service?
What technologies do the vendors use to create a multi-site cluster? Can the cluster include remote sites?
Can you add "storage only" or "compute only" nodes if needed?
All of the above have cost implications in the long run.

5) No finger pointing:
Remember point number two? Most HCI offerings are built on other vendors' hardware, wrapped in the vendor's own HCI software to make it behave in a specific way. If something goes wrong, will your vendor take full accountability rather than asking you to speak to the hardware vendor? It is a good idea to look for a vendor with a large customer base (not just for HCI but for compute and storage in general), making them a single point of contact with more resources to help you if anything goes wrong.

author avatarSamuelMcKoy
Real User

In my opinion, the most important criterion when assessing HCI solutions, other than the obvious one of performance, is how the solution scales; in other words, how you add storage and compute resources to it. Without understanding how the solution scales, you can easily request resources without understanding how and why the overall costs have ballooned. Costs balloon not only because you add nodes to the HCI cluster for the extra storage and compute you need, but also because each additional compute node requires additional licensing for whichever hypervisor the HCI solution depends upon, usually on a per-compute-node basis. Some HCI architectures allow admins to add only storage to the cluster when additional storage is needed, without requiring any additional hypervisor licensing. Other architectures require you to add a full compute node to get the additional storage, even if you don't need the compute resources that come with it; that compute node must then be properly licensed as well. The latter type of architecture can, and usually does, force its customers to spend more money than circumstances initially dictated. So for me, how the HCI solution scales is most important, because it ultimately determines how cost-effective the solution really is.
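The licensing effect described above can be sketched numerically. All prices here are hypothetical assumptions chosen only to show the shape of the comparison:

```python
# Illustrative cost of adding capacity under two HCI scaling models.
# All prices are hypothetical placeholders, not vendor figures.

NODE_HW = 40_000            # full compute+storage node hardware (assumed)
STORAGE_NODE_HW = 25_000    # storage-only node hardware (assumed)
HYPERVISOR_LICENSE = 7_000  # hypervisor license per compute node (assumed)

def cost_storage_only(nodes_needed):
    """Architecture that allows storage-only expansion:
    no new hypervisor licenses are triggered."""
    return nodes_needed * STORAGE_NODE_HW

def cost_compute_coupled(nodes_needed):
    """Architecture that forces full compute nodes for more storage:
    hardware plus a license for each node, even if compute is unused."""
    return nodes_needed * (NODE_HW + HYPERVISOR_LICENSE)

for n in (1, 3):
    print(n, cost_storage_only(n), cost_compute_coupled(n))
```

The gap widens linearly with every expansion step, which is why the scaling model, not the initial purchase price, tends to dominate long-run cost-effectiveness.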

author avatarlobo
Real User

1) Ease of operation



author avatarLuciano Zoccoli (Lenovo (United States))
Real User

The most important aspects are absolutely:

1. Simplification: simple to implement, simple to manage, and simple to use.

2. Reliability: there is always more reliability compared with a traditional solution.

On these two counts, when you look at the cost, or better, compare the TCO, a hyper-converged solution always comes out ahead.

author avatarBhaskarRaman (AspireNXT)

HCI solutions have matured over time. While the global market swings like a yo-yo between VxRail and Nutanix, quite a few new vendors have brought hardware-agnostic solutions to the market. Management and ease of implementation were the key criteria yesterday; of late, I see a plethora of customers who need multi-cloud connectors. Nutanix has taken a decent lead here with the acquisition of Calm. It is pricey, though: a minimum pack with a yearly subscription provides for 25 VMs. VxRail from Dell EMC has a lot of catching up to do there, though it offers a free API connector to AWS, free for the first three TB and then priced per TB of movement between private and public cloud. DISA STIG compliance is yet another point customers are interested to see in these solutions. Nutanix claims its code is built to comply with these rigorous standards for a secure virtualization layer with AFS, whereas Dell EMC offers pretested scripts to ensure the environment can comply with the standards.

Backup companies are vying to get their products certified. I wonder what Nutanix will offer the currently certified solutions after its acquisition of "Mine". It still has miles to go.

author avatarSimon-Leung

Data protection is my primary concern; backup/restore is a must-have feature.

How does hyper-converged differ from converged?  Is one better than the other? When would one choose converged, rather than hyper-converged? Are there pros and cons to each type of solution?
author avatarDan Reynolds
Real User

Hyper-converged is typically an "all in one box/rack" solution. It consists of compute, storage, and network resources all tied together physically (and through software).

A pro of hyper-converged is that it is a complete solution: you don't have to architect it. All you have to know is how much "power" you need (i.e., what you want to do with it). With converged infrastructure (which can still be software-defined), you have to match and configure the components to work together.

More often than not, converged infrastructure is cheaper. You might already have the storage and networking resources, for example, and manufacturers put a premium on packaging the solution together.

author avatarPierreChapus

Hyperconverged is a system cluster of at minimum three nodes. The system mirrors data between nodes and runs virtual machines.

Converged systems are anything between a classic server and a hyperconverged platform. The converged concept was useful while hyperconverged technology was being developed, and should disappear in the near future.

author avatarSatish Dg

Converged infrastructure still incorporates hardware, running the technology natively on it. Hyperconvergence, on the other hand, is fully software-defined and completely integrated.

author avatarSteffen Hornung
Real User

Oh, you can't get rid of hardware in any way.

But it is true that HCI is a software-defined approach, which has the advantage of delivering new features without new hardware.

Another thing that distinguishes hyperconverged solutions from converged ones is their scale-out nature: simply add more nodes to the system to support new workloads without losing performance, because you add all resource types at once (compute, storage, and networking).


Hyper-Converged (HCI) Articles

IT Central Station

Members of the IT Central Station community are always happy to take a few minutes to help other users by answering questions posted on our site. In this Q&A round-up, we’re focusing on our users’ answers about SIEM, Identity and Access Management, and the Differences between Hyper-converged Infrastructure vs Converged Infrastructure.

Which is the best SIEM tool for a mid-sized enterprise financial services firm: Arcsight or Securonix?

One of our users was looking for SIEM recommendations, and was specifically looking at ArcSight and Securonix. As always users were very helpful, and suggested possible tools based on their own experience.

ArcSight appeared to be the more popular recommendation of the two; one user, Himanshu Shah, suggested that Securonix may be better suited for a mid-sized business, as ArcSight "works on EPS (Events per second) costing", which can become costly. Users also suggested looking at other options, such as QRadar, Splunk, and LogRhythm.

However, Consulta85d2 responded, “Neither, or both. Having done literally thousands of SIEM deployments, I can tell you from experience that the technology choice isn’t the most important choice. The critical choice is in the resources and commitment to manage and use the system.”

Aji Joseph held similar sentiments and highlighted the key role that the SoC team plays: “The success of SIEM solutions depends a lot on the expertise of the SoC team that will be managing the alerts generated by SIEM solutions.” He also suggested evaluating the forensics capabilities of the various solutions before buying.

What are some tips for effective identity and access management to prevent insider data breaches?

Insider breaches can be a real issue in businesses. Users gave advice on how to effectively implement Identity and Access Management to tackle this issue.

Mark Adams, a Senior Manager, IT Security and Compliance / CISO at a large construction company, gave great advice for implementing a solution, noting that it’s important to “make the implementation a formal project and involve all key stakeholders, including those from the business, not just IT folks.” He gave practical tips, including identifying and classifying all information assets and creating rules for access to those assets. He also highlighted the importance of reviewing access periodically. He stated, “Data owners should be involved in the review since they are usually in a better position to determine if individuals’ access is still legitimate.”

What are the key differences between converged and hyper-converged solutions?

Users helped to clarify key differences between hyper-converged (HCI) and converged infrastructure. Based on the users’ answers, the key differences revolve around ease of use, flexibility, and price.

HCI solutions are typically more expensive, but have significant advantages. Steffen Hornung pointed to the scale-out nature of HCI, noting that you can "add more nodes to the system to support new workloads without losing Performance because you add all types at once (compute, storage and networking)."

Dan Reynolds summarised the appeal of HCI really well, pointing out that it’s a complete solution: “Hyper-converged is typically an “all in one box/rack” solution. It consists of compute, storage & network resources all tied together physically (and through software)….You don’t have to architect it. All you have to know is how much “power” you need (what you want to do with it).” In contrast, he noted that “with converged infrastructure (which can still be ‘software defined’) you have to match and configure the components to work together.”

Thanks, as always, to all the users who are taking the time to ask and answer questions on IT Central Station!

IT Central Station is here for you, to learn and help your peers. In a market full of vendor hype, we enable you to get real, unbiased information from people like you.

Do you have a question that you’d like to ask our IT Central Station Community? Ask now!

Julia Frohwein
Content and Social Media Manager
IT Central Station

What is the difference between Converged Infrastructure (CI) and Hyper-Converged Infrastructure (HCI)? When is it best to use each one? This article helps you sort out the question of Converged vs. Hyper-Converged Infrastructure (or architecture) and what each choice might mean for your data center. It is a companion to our Top 10 Converged Infrastructure Solutions Report and the Top 10 Hyper Converged Infrastructure Solutions Report.

What Came Before Converged and Hyper-Converged Infrastructure?

Before delving into Converged vs. Hyper-Converged Infrastructure, it makes sense to ask, “Converged as opposed to what?” CI and HCI are alternatives to the traditional approach to IT infrastructure that most of us work with every day. CI and HCI are not necessarily replacements for traditional infrastructure. Rather, they represent different ways of organizing and managing the four basic components of infrastructure: compute, storage, networking and server virtualization.

Most infrastructure in use today was set up using a "best of breed" approach. Typically, architects would devise an infrastructure plan that called for the most suitable solutions for compute, data storage, networking, and server virtualization for a specific workload. Each solution might come from a different vendor and have its own separate management tools. Even if the components came from the same vendor, they might still be controlled by different management software. An overall infrastructure management solution might sit above everything, with some degree of integration between the management tools for storage, compute, and so forth.

There are several advantages to the traditional approach to infrastructure, but also a number of drawbacks. It can provide a high degree of flexibility and customization, but at a cost. There are direct costs for all the specialized hardware and software elements. Administrative costs can also be high, given how many separate elements have to be managed all at once. As organizations start to move systems to the cloud, traditional infrastructure may not port over well. New modes of desktop deployment like Virtual Desktop Infrastructure (VDI) may also not fit well with traditional infrastructure. These are the challenges that CI and HCI try to address.

Download our monthly report on the top converged infrastructure solutions to get the most up to date reviews and ratings.

Why do storage professionals switch over to Converged and Hyper-Converged Infrastructure?

For some IT professionals it’s a matter of cost. A Senior Systems Engineer at an enterprise company reviewed FlexPod, currently ranked as the top converged infrastructure solution according to the IT Central Station community. He writes, “We needed to migrate away from our older servers. When we did the cost analysis through the FlexPod, and the cost of replacing each individual server, it just made more financial sense going with FlexPod in the long term. Previously to this solution, we were using individual Dell and HP servers. It was kind of a mishmash.”

For this CTO at a large healthcare company it was a combination of both cost and reliability. In his review of VMware vSAN he writes, “The value that vSAN brings to our organization, really there are two major areas. One is the ability to replace very expensive proprietary SANs. The other is the need to replicate and keep data available at all times across three separate data centers. Those two elements are really where vSAN plays.”

When this IT Systems Engineer moved to Nutanix, his main consideration was speed. He notes, “We can deploy new servers faster than ever. Our capacity to grow is bigger than when we had SAN storage dependency. We are now able to deploy a pool of QA virtual machines for testing purposes in minutes rather than in hours.”

What is Converged Infrastructure?

Like many IT concepts, the concept of a CI platform means different things depending on whom you ask. Industry buzz can do that sometimes. For some, CI is a reference architecture that specifies the elements and configuration of a converged appliance. The organization adopting the reference architecture is able (or expected) to build the appliance itself. For others, CI means a distinct vendor software offering that embodies CI concepts.

Ultimately, both versions end up in the same place. Whether through an open source CI reference architecture or a vendor-specific software package, a Converged Infrastructure appliance is a single-box system comprising networking, storage, compute and server virtualization.

Structurally, CI is a three-tier architecture: the UI, business logic, compute, storage and data access sit in separate architectural layers, and each can be maintained and updated separately. Hyper-Converged Infrastructure also follows the three-tier architecture pattern.

CI is hardware-driven, with each component able to be separated and used independently if necessary. The package of components, though, is controlled through a centralized management platform. For this reason, CI is often simpler and more cost effective to manage than traditional infrastructure. Additionally, CI makes it easier for IT departments to save money using lower-priced commodity hardware instead of proprietary or vendor-specific hardware.

Virtualization is a key enabling technology for Converged Infrastructure. Indeed, without virtualization it would be essentially impossible to have CI or HCI. The ability to set up, reconfigure and spin down VMs on demand is what makes Converged Infrastructure so efficient. Virtualization vendors like VMware are building convergence and software-defined infrastructure capabilities into their main products, applications and hypervisors, as is the case with VMware vCenter Server and vSphere Enterprise. Other examples include the Microsoft Hyper-V hypervisor, Windows PowerShell, VMware’s EVO:RAIL and HPE’s Hyper-Converged offerings.

You can read user reviews for CI solutions from the IT Central Station community here.

What is Hyper-Converged Infrastructure?

Hyper-Converged Infrastructure takes convergence a step further. In this sense, “hyper” means more, as in hypersonic or hyperactive. (It also implies a smaller system, despite the general meaning of the word.) HCI is software-defined: compute, storage and network are abstracted away from the physical hardware. An HCI system bundles virtualization software into the built-in management package and single hardware appliance. As a result, a Hyper-Converged solution gives users an experience similar to what they enjoy with cloud service providers. It’s possible to add nodes, systems, virtual machines, storage and so forth without any awareness of the underlying physical hardware. Of course, unlike the public cloud, there is a clear physical capacity limit, but the software/physical abstraction is comparable.

An HCI solution may enable functions like cloud bursting as well as disaster recovery. It can be configured to enable the management of virtual and physical infrastructure through a single interface. This is known as infrastructure federation.

You can read user reviews for HCI solutions from the IT Central Station community here.

Differences Between CI and HCI

CI and HCI overlap a great deal but there are some clear differences between the two architectures. They each deal differently with hardware, systems, compute and storage. And of course, the hyper in Hyper-Converged connotes a higher level of compactness and ease of use.

Data storage is one area where the differences between CI and HCI are pronounced. Given its software-defined approach, HCI is able to pool compute and storage resources such as data storage arrays. The user does not have to be aware of whether the data storage is local, direct-attached storage, Network Attached Storage (NAS) or a Storage Area Network (SAN).
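The pooling idea can be sketched in a few lines. The following toy Python pool presents a mix of backends as a single logical volume namespace; the class names and the first-fit placement logic are invented for illustration and do not reflect any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    """One physical storage device contributed to the pool.
    kind may be local disk, NAS or SAN; the consumer never sees it."""
    name: str
    kind: str        # "local", "nas", "san"
    free_gb: int

class StoragePool:
    """Toy software-defined pool: capacity from mixed backends is
    presented as one logical namespace (a sketch of the concept,
    not any product's implementation)."""

    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        return sum(b.free_gb for b in self.backends)

    def provision(self, name: str, size_gb: int) -> str:
        # Place the volume on whichever backend has room; the caller
        # only ever learns the logical volume name.
        for b in self.backends:
            if b.free_gb >= size_gb:
                b.free_gb -= size_gb
                return f"vol/{name}"
        raise RuntimeError("pool exhausted")

pool = StoragePool([Backend("disk0", "local", 100),
                    Backend("nas1", "nas", 500),
                    Backend("san1", "san", 2000)])
vol = pool.provision("app-data", 300)   # lands on nas1, invisibly
print(vol, pool.total_free_gb())
```

The point of the sketch is the interface: the consumer asks for a named volume of a given size and never learns which tier of physical storage backs it.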

Storage innovations abound in both converged and Hyper-Converged architectures. For instance, there are a number of virtual storage area networks (vSANs or virtual SANs) available for CI and HCI solutions. A vSAN mimics the characteristics of a SAN but does not require the SAN’s usual specialized hardware or software. It’s all virtual. Many CI and HCI solutions offer data storage with inline deduplication and compression, processes that reduce the overall data footprint, make utilization of data storage hardware more efficient and speed up data backups. HPE StoreVirtual VSA is an example of such an offering.
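To make the deduplication idea concrete, here is a minimal Python sketch of an inline-deduplicating store: data is split into chunks, each chunk is identified by a content hash, and identical chunks are stored only once. Real arrays do this in the data path with tuned chunking and compression; this is purely illustrative:

```python
import hashlib

class DedupStore:
    """Toy inline-deduplication store: identical chunks are kept once.
    (Illustrative only; real systems dedupe fixed- or variable-size
    blocks in the I/O path, usually combined with compression.)"""

    def __init__(self):
        self.chunks = {}      # content hash -> chunk bytes
        self.logical = 0      # bytes written by clients
        self.physical = 0     # bytes actually stored

    def write(self, data: bytes, chunk_size: int = 4096) -> list:
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            if key not in self.chunks:       # new content: store it
                self.chunks[key] = chunk
                self.physical += len(chunk)
            self.logical += len(chunk)       # logical size always grows
            refs.append(key)
        return refs

    def dedup_ratio(self) -> float:
        return self.logical / max(self.physical, 1)

store = DedupStore()
vm_image = b"base-os-block" * 1000
store.write(vm_image)   # first copy is stored in full
store.write(vm_image)   # a clone of the same VM: chunks already present
print(round(store.dedup_ratio(), 1))   # -> 2.0
```

Writing the same VM image twice doubles the logical size but not the physical size, which is exactly why HCI clusters full of cloned VMs report high dedup ratios.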

The two architectures also scale differently, according to most sources. CI is known as a “scale up” solution, where growth is achieved by adding CPUs, disk drives, switches and virtual machines. In contrast, Hyper-Converged Infrastructure is considered a “scale out” approach to infrastructure and storage. With HCI scaling out, one grows by adding “building blocks” of HCI capacity when it’s needed.

Reviewing a variety of online commentary on the scaling issue, however, reveals some ambiguity. Some vendors and architects see CI as a building-block architecture that can scale out as well as scale up: with compute, storage and networking in a single chassis, it is possible to add capacity or nodes by adding more chassis. This is not always the case, however. When using a CI reference architecture instead of a vendor solution, it is possible to add needed elements, such as storage, without proportionally adding compute and networking.
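The contrast between the two growth patterns can be captured in a tiny capacity model. The numbers below are illustrative, not drawn from any product datasheet:

```python
# Toy capacity model contrasting scale-up (CI) with scale-out (HCI).

def scale_up(cluster, extra_storage_tb):
    """CI-style growth: add disk shelves to the existing chassis;
    compute stays fixed."""
    cluster["storage_tb"] += extra_storage_tb
    return cluster

def scale_out(cluster, nodes, node_cpu=32, node_storage_tb=10):
    """HCI-style growth: each building-block node brings compute
    and storage together."""
    cluster["cpu_cores"] += nodes * node_cpu
    cluster["storage_tb"] += nodes * node_storage_tb
    return cluster

ci = {"cpu_cores": 64, "storage_tb": 20}
hci = {"cpu_cores": 64, "storage_tb": 20}

scale_up(ci, 30)     # storage grows alone
scale_out(hci, 3)    # 3 nodes: +96 cores, +30 TB
print(ci, hci)
```

Both clusters end up with the same storage, but the HCI cluster has also (and unavoidably) gained compute, which is the trade-off the "building block" approach implies.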

CI and HCI Use Cases

When do you use CI or HCI rather than traditional infrastructure? The following are some popular use cases for CI and HCI:

  • To build private and hybrid clouds – The Lego-like nature of Hyper-Converged Infrastructure makes it a natural hardware basis to build a private cloud or hybrid cloud environment. The stackable blocks of compute/storage/network/VM capacity make it possible to build and expand a private cloud without excessive concern about hardware integration and infrastructure management.

This Senior Systems Administrator writes, “Having my private cloud within my Simplivity infrastructure has given me so much more than I could have ever expected. I love the ability to fire off a full, application-aware backup of a VM and have it complete in just under four seconds. Also, I can now fail my entire data center over to my DR site and have everything up an running in well under 30 minutes, with my mission-critical servers up in under 10 minutes (the servers do have to power on!). It's awesome. You do need EZ-DR by VM20/20 to accomplish this, but it is a fraction of the cost of VMware Site Recovery Manager.”

  • To consolidate the data center – In response to appliance and storage sprawl in cramped and costly data centers, infrastructure managers are finding CI and HCI to be an appealing consolidation solution. With a single, consolidated infrastructure, CI and HCI enable better, or even optimal utilization of resources like data center space, racks, servers and so forth. Provisioning, scaling and system changes tend to become faster with CI and HCI. Some even claim that they’ve seen drops in costs for cabling, power and cooling. There can be savings in infrastructure management software expenses and administrative overhead as well.

A Senior Systems Engineer, reviewing Flexpod writes, “It benefits the organization in that we had no downtime. In almost five years of operation, we have never had a single hour of downtime that was directly related to a storage problem. There weren't things like hard drive failures.

In any other company, it would have legitimately been an issue for us to get a hard drive out. But usually it involves some sort of extreme discussion with customer service agents about how important this is to our business operation, and there was none of that with NetApp. They adhered to the SLA.

I was willing to wait if the guy was willing to reset the hard drive. And that's more-or-less what happened. I had a failure, and within two hours of the notification of the failure, I had a new hard drive in my hands on-site. That's pretty impressive, regardless of how you put it.”

  • To protect data – CI and HCI make possible the centralized management of backup and restore functions. Centralized control of data storage systems leads to consistent backup policy enforcement. It also helps with data retention and location policy compliance. For example, an IT manager can easily track whether data is stored on multiple virtual machines if that is required for data protection and integrity. Additionally, some HCI solutions facilitate deduplication of data, which allows for better use of storage resources and an improved data lifecycle.

A virtualization system administrator in the IT Central Station community reviewed vSAN as his HCI. He writes, “It is precisely the possibility of being able to extend the capacities of the cluster of storage and calculation by the simple addition of one or more physical server that makes us lean on this solution and that in a secure way.

Moreover, with the storage policy, we were able to create different security policies depending on the virtual machines according to their needs for performance or availability.”

When reviewing HPE Simplivity HCI solution, this Head of IT writes, “We consolidated five servers and one SAN with three arrays into two OmniCubes. Our SAN was full with 20 TB at the time of migration, and we accomodate the whole amount of data into less as the half of the disk space. Since then the infrastructure grew constantly, but the data foot print on the Omnicubes barely increases. Dedup is currently around 3.3. Additionally, we implemented DR with a third OmniCube residing in a second datacenter, where we replicate our data. Recovery of VMs is done in a couple of seconds. We now almost never use our backup software to restore data anymore.”

Advantages of converged and hyperconverged infrastructure
Why change over to a converged or hyperconverged solution?

  • To optimize workloads and applications - Centralized infrastructure visibility helps IT managers optimize workloads and applications. With a single management and monitoring interface, it’s possible to react quickly and easily to shifts in application load. Resources can be reallocated based on demand. It is also possible to move data from one resource pool to another, which can drive faster application performance and better resource utilization. These functions can be especially helpful with VDI where end users can be quite sensitive to relatively subtle changes in response times. In other cases, CI and HCI can be helpful for optimizing workloads and applications by enabling scale outs of resource clusters.

A Database Administrator at a large government organization reviewed Oracle Exalogic on IT Central Station. He writes, “It has improved the way my organization functions by migrating apps to one consolidated platform that is dedicated to WebLogic and Oracle apps.”

  • To enable VDI – Organizations where information workers perform similar, clerical tasks are good candidates for desktop virtualization. With VDI, workers use what is essentially a terminal that replicates the functioning of a desktop PC; the PC is actually running on a virtual machine somewhere else. There are a range of benefits to this approach, including reduced maintenance, better protection against malware, lower hardware costs and so forth. HCI is well suited to the task of provisioning virtual desktops.

The Pros and Cons of Each Approach

Which one is better? That depends on many factors, of course. Neither is superior in all use cases. And, what might be considered a good feature in one scenario can be a negative in another. For instance, some CI solutions are from a single vendor, while others offer a multi-vendor capability. A single vendor CI stack could be an excellent choice for an organization that wants simplicity. If there are specific requirements best met by a multi-vendor CI stack, then that is preferable. Software licensing costs may add up, however, in a multi-vendor solution.

Industry research suggests that organizations choose CI for mission-critical workloads. They like the efficiencies of CI compared to traditional infrastructure, but they want to maintain a highly granular level of control over systems and data. They want the customization inherent in CI. In contrast, Hyper-Converged solutions trade customization for simplicity.

HCI is seen as being better for infrastructure consolidation, and it tends to be favored for ease of use. The software-defined approach is also considered more flexible than either CI or traditional infrastructure: having everything in a single appliance, with a completely centralized “pane of glass” for management, makes for greater agility.

The ease of use in HCI, driven by its single, software-defined management toolset, is seen as being easier for IT generalists. HCI doesn’t require as much infrastructure specialization as traditional infrastructure or even CI. It’s designed for ease of use. You don’t have to be a storage or network expert to configure and manage an HCI platform. This plays well for smaller IT departments or those with hiring constraints.

IT managers who are devising private clouds also like HCI better than CI in many cases. Some HCI solutions function effectively the same as a cloud platform. The ability to add appliances quickly and easily makes it advantageous for private cloud environments. With cloud bursting and hybrid cloud capabilities, the use case is even more compelling.

It’s also important to remember that in many circumstances the best choice is “neither.” Traditional infrastructure is not going away, nor should it. It would be a mistake to think that CI and HCI are the “new things” and can therefore be your only choices for new infrastructure projects. For example, some CI and HCI solutions may be rigid when compared to their traditional counterparts. Assuming you have the expertise to configure the storage, compute and network elements the way you want them, you might find their pre-packaged natures to be restrictive.

Approaches to Realizing CI and HCI

What is the best way to implement Converged or Hyper-Converged Infrastructure? As is the case with the pros and cons, there is no one right way to do it. That said, a number of best practices are emerging as the technologies receive a wider embrace.

  • Focus on the big picture – Moving forward with a converged approach to architecture is part of a bigger conversation inside IT. It’s about how to best realize the vision of virtualized infrastructure and the cloud.

  • Consider an incremental approach – The nature of Converged and Hyper-Converged Infrastructure lends itself to starting with small projects. A single workload or a single department can test-drive the CI concept for your organization. Some applications, servers and systems are more suited to a converged approach than others. Then, based on what you learn from that experience, you can plan a bigger rollout if that is what’s needed. New CI and HCI instances can be introduced to build a “system of systems” over time.

  • Look at where you are currently experiencing stress – Converged solutions can be great stress relievers if they’re applied in the right areas. Where are you stressed? Where are your people having trouble keeping up? Where are service level agreements falling apart? For example, if storage is a pain point for your IT organization, that might be a good place to start with a converged approach. Alternatively, if you’re short of expertise in a particular area, that might be the place to introduce HCI and turn the administration over to IT generalists.

  • Keep the business case and value in perspective – This is good advice not just for Converged Infrastructure; it’s a general principle when considering any new technology. An assessment of CI or HCI needs to answer fundamental business and value questions: Will it be worth the investment? Will it make the business operate better or more profitably? Will it save money? These are the fundamentals that must be addressed.

  • Try to automate as much as possible – Converged and Hyper-Converged Infrastructure solutions lend themselves to infrastructure automation due to their centralized management. Automating virtual machine provisioning and data protection processes, for example, can pay off in terms of faster time to market for new systems as well as reduced administration costs.
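As a sketch of what such automation can look like, the following Python snippet reconciles a declarative list of desired VMs against a management endpoint. The `HciClient` class and its methods are invented stand-ins for illustration; real platforms expose comparable REST APIs or SDKs:

```python
# Sketch of declarative VM provisioning against a hypothetical HCI
# management API. "HciClient" and its methods are invented for
# illustration, not any vendor's actual SDK.

DESIRED_VMS = [
    {"name": "web-01", "cpus": 4, "ram_gb": 16, "backup_policy": "daily"},
    {"name": "web-02", "cpus": 4, "ram_gb": 16, "backup_policy": "daily"},
    {"name": "db-01",  "cpus": 8, "ram_gb": 64, "backup_policy": "hourly"},
]

class HciClient:
    """Stand-in for a platform SDK; tracks created VMs in memory."""
    def __init__(self):
        self.vms = {}
    def exists(self, name):
        return name in self.vms
    def create_vm(self, spec):
        self.vms[spec["name"]] = dict(spec)
    def apply_backup_policy(self, name, policy):
        self.vms[name]["backup_policy"] = policy

def reconcile(client, desired):
    """Create anything missing and enforce backup policy on everything.
    Rerunnable, so the same script doubles as drift correction."""
    created = []
    for spec in desired:
        if not client.exists(spec["name"]):
            client.create_vm(spec)
            created.append(spec["name"])
        client.apply_backup_policy(spec["name"], spec["backup_policy"])
    return created

client = HciClient()
print(reconcile(client, DESIRED_VMS))   # first run creates all three VMs
print(reconcile(client, DESIRED_VMS))   # second run finds nothing to do
```

Because the script describes the desired end state rather than a sequence of manual steps, running it again after an outage or a config drift simply converges the cluster back to policy.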


Thinking about Converged Infrastructure versus Hyper-Converged Infrastructure takes you quickly into some pretty deep IT topics. Their very converged nature pulls in discussions of storage, compute, virtualization, cloud and more. It’s a lot to consider. Each has a distinct advantage for a given set of workloads. Neither is a cure-all or a singular replacement for other infrastructures that may be working well. Best practices are emerging to ensure a positive, cost-effective experience of deployment.
