
Top 8 Hyper-Converged (HCI) Tools

  1. Nutanix Acropolis AOS
    The initial setup is straightforward. The hyperconvergence service, as well as the DR solution, are game-changers for Nutanix.
  2. VxRail
    The scalability of VxRail is very good. VxRail has high performance and great efficiency. There is a single place for us to manage all of our virtual machines. The ability to right-size instead of overcommitting VMs is a large benefit.
  3. VMware vSAN
    I have found the solution to be scalable. vSAN is very integrated.
  4. StarWind Virtual SAN
    Having the ability to scale horizontally if needed is a huge plus for future growth. The ability for us to manage all of our nodes from the same console makes systems administration very easy.
  5. HPE SimpliVity
    Backups happen very quickly. The initial setup was straightforward.
  6. StarWind HyperConverged Appliance
    Being able to log on to the GUI to see specific data and usage statistics, execute clones, and start and stop VMs is great. We also opted for 24-hour support monitoring for any issues. They are extremely quick to respond, even to issues we cause ourselves, such as bumping a network cable.
  7. NetApp HCI
    The ability to size the available space in a way that matches our company's needs is most valuable. For instance, you can decide whether you want an 80/20, 70/30, or 60/40 split. Redundancy depends on your needs, without changing the appliance. You just add space and decide the percentage of space that you need free and the percentage that you need for backup. It is all automatic; you don't have to do anything. You just add space, and the system configures itself with the chosen option (see the sketch below this list).
  8. Cisco HyperFlex HX-Series
    Performance-wise, everything is good. So far, we haven't had any issues, and there has been no downtime at all. The price of the solution is good, especially when it comes to complex network solutions such as UCS and Connect.
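
The capacity-split policy described for NetApp HCI above is simple percentage arithmetic. A minimal sketch with hypothetical numbers (real appliances apply the split automatically):

```python
# Hypothetical capacity-split calculator for an 80/20, 70/30, or 60/40 policy.
def capacity_split(raw_tb: float, usable_pct: int) -> tuple[float, float]:
    """Return (usable_tb, reserved_tb) for a usable/reserved percentage split."""
    if not 0 < usable_pct < 100:
        raise ValueError("usable_pct must be between 1 and 99")
    usable = raw_tb * usable_pct / 100
    return usable, raw_tb - usable

usable, reserved = capacity_split(100.0, usable_pct=80)  # an 80/20 policy
print(f"usable: {usable:.0f} TB, reserved for redundancy/backup: {reserved:.0f} TB")
```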

Advice From The Community

Read answers to top Hyper-Converged (HCI) questions.
Nurit Sherman
We all know that it's important to conduct a trial and/or proof-of-concept as part of the buying process.  Do you have any advice for the community about the best way to conduct a trial or POC? How do you conduct a trial effectively?  Are there any mistakes to avoid?
Manish Bhatia (HCL Technologies)
Consultant

I would say: gather and understand the requirements; share and check them with vendors; invite the vendors to propose a solution, with a POC in your environment; ask for use cases and for any legacy application/hardware; and ask for the compatibility matrix. Then you will have an idea of the capabilities of that solution and vendor.

JefeDeIna6eb (Head of Technology Infrastructure at a non-tech company with 1,001-5,000 employees)
Real User

If you want to do a proof of concept of HCI, I recommend that you do it and that you try all the hypervisors of your choice; with any of them you will find very good results.

Of course, performing the proof of concept on HCI equipment similar to your production workload would be best, as it gives an almost real-world test.

In my case, I have migrated VMs running databases like MSSQL, PostgreSQL, and MySQL, and with all of them I have seen better response times for read and write operations and zero data corruption.

VM cloning is very fast, and the simplicity of HCI operations lets me concentrate on other activities. Operating HCI is very simple.

Bob Whitcombe
Real User

Selecting an HCI path is pretty straightforward, and it goes through the cloud. You first select your workloads and the performance needed for success. Since the key differentiation across HCI platforms today is software, you should be able to construct a target load of the apps you want to test and run them in a vendor's cloud sandbox. You want to align your hardware choices so you can leverage your existing support models and contracts, but you are testing software platforms for usability, performance, and adaptability to your current operations model.

Once your workload homework is complete, you have selected an application type (VDI, OLTP, data warehouse, etc.), and you have determined worst-case response times, you can throw a target workload at the cloud for evaluation. At this point you are looking for hiccups and deployment gotchas. HCI and cloud processes may be new to you, so you may need to stretch beyond your deployment models. This is a good thing. Recognize that HCI is a leading-edge trend and is one step removed from the cloud, which is where you will be in 5-10 years.

You want to look for key software features that lower the cost and complexity of managing the installation. Barring a corner case or three, most applications will fit squarely in the middle of the "good" zone for today's SSD-based HCI solutions.

With cloud testing of a target HCI platform, you should learn how your applications perform, see the key features you really want, and satisfy yourself that these systems can be managed without significant incremental effort by your current staff.

Then you do the grid: is the target aligned with my current hardware vendor; are there endorsements from people running similar applications; are there killer features and a drop-dead signing bonus that justifies adding this platform to my portfolio of aging IT equipment? If and only if you come down to a near tie between two vendors should you go to the trouble of a full-meal-deal on-site PoC. It may not provide any more information than the version in the cloud, it requires physical hosting on your site and an assigned project manager, and then you get to deal with the loser (who may very well be your current vendor), and what a joy that will be.

MohamedMostafa1
Real User

There are several ways to evaluate HCI solutions before buying. Customers need to contact the HCI vendors, or one of the local resellers who propose the technology, and both will be able to demonstrate the technology in three different scenarios:

1 – A cloud-based demo, in which the presenter illustrates product features and characteristics in a ready-made environment and can also demonstrate daily administration activities and reporting.

2 – A hosted POC, in which the presenter works with the customer to build a dedicated environment for them and simulate their current infrastructure components.

3 – A live POC, in which the presenter ships appliances to the customer's data center, deploys the solution, and migrates/creates VMs for testing, to evaluate performance, manageability, and reporting.

If the vendor or a qualified reseller is doing the POC, there should be no mistakes, because it's a straightforward procedure.

anush santhanam (HCL Technologies)
Consultant

Hi,

When evaluating HCI, it is absolutely essential to run a trial/POC to evaluate the system against candidate workloads it will be expected to run in production. However, there are quite a few things to watch out for. Here is a short list:

1. Remember that most HCI solutions depend on a distributed architecture, which means they are NOT the same as a standard storage array. If you want to do any performance benchmarking with tools such as IOMeter, you need to be extremely careful about how you create your test VMs and how you provision their disks. Vendors such as Nutanix have their own tool, X-Ray. However, I would still stick to a more traditional approach (see the sketch after this list).
2. Look at the list of apps you will be looking to run. If you are going for a KVM-type hypervisor solution, you need to check whether the apps are certified. More importantly, keep an eye on OS certification. While HCI vendors will claim they can run anything and everything, you need the certification to come from the app/OS OEM.
3. Use industry-standard benchmarking tools. Unless you are using a less "standard" hypervisor such as KVM or Xen, you don't need to waste time on the hypervisor part, as VMware is the same anywhere.
4. Your primary interest should be, without question, the storage layer and the distributed architecture. Remember, with HCI the compute does not change and the hypervisor (assuming VMware) does not change; what changes is the storage. Then there are the ancillary elements, such as management, monitoring, and other integration pieces. Look at these closely.
5. Use workload-specific testing tools. Examples include Login VSI, JMeter, and Paessler/Badboy for web server benchmarking.
6. Remember to look at best practices on a per-app basis. You may have been running an app like Oracle in your environment for ages in a monolithic way; when you try the same app on HCI, it may not give you the performance you want. This has to do with how the app has been configured/deployed, so reviewing per-app best practices is worth noting.
7. If you are looking at DR/backup, evaluate your approaches. Are you using a native backup or replication capability, or an external tool? Evaluate these accordingly, and remember your RTO/RPO. Not all HCI solutions support synchronous replication.
8. If you are looking at native HCI capabilities around data efficiency (inline dedupe and compression), you will need to design the testing for these carefully.
9. Lastly, if you are looking at multiple HCI products, ensure you use a common approach across products. Otherwise your comparison will be apples and oranges.
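
To make points 1 and 9 concrete, here is a minimal sketch of a common benchmarking harness. It assumes the open-source fio tool is installed inside a test VM on each cluster under evaluation; the job parameters are illustrative choices, not vendor guidance:

```python
# Run the SAME fio job inside a test VM on each HCI cluster under evaluation,
# so the comparison stays apples-to-apples. Parameters are illustrative only.
import json
import subprocess

FIO_ARGS = [
    "fio", "--name=hci-randrw", "--filename=/data/testfile",
    "--size=10G", "--rw=randrw", "--rwmixread=70", "--bs=4k",
    "--iodepth=32", "--numjobs=4", "--direct=1",
    "--runtime=300", "--time_based", "--group_reporting",
    "--output-format=json",
]

def run_benchmark() -> dict:
    """Run the shared fio job and pull out IOPS and mean latency."""
    out = subprocess.run(FIO_ARGS, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    return {
        "read_iops": round(job["read"]["iops"]),
        "write_iops": round(job["write"]["iops"]),
        "read_lat_ms": job["read"]["lat_ns"]["mean"] / 1e6,
        "write_lat_ms": job["write"]["lat_ns"]["mean"] / 1e6,
    }

if __name__ == "__main__":
    print(run_benchmark())
```

A single job in a single VM understates a distributed system (point 1), so in practice you would clone the test VM across nodes and run the job concurrently; the key is that every product sees the identical job file.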

Hope this helps.

Shibu Babuchandran
Real User

Hi Nurit,


Some of the best POCs to run, to decide whether a solution suits your requirements, cover any of the following workloads:

-VDI and Desktop-as-a-Service (DaaS)
-Test and development
-Edge computing
-Cloud migration
-Backup and DR
-Logging and analytics

Mistakes to avoid (while doing a POC and deciding on the right solution):
-Not giving storage enough consideration
-Misjudging network needs
-Not thinking through how you will scale up
-Not deciding deliberately between hard and soft HCI
-Failing to avoid supplier lock-in
-Ignoring the trade-offs of mixing multiple suppliers with HCI
-Not considering the whole SDDC stack

MohamadBadran
Real User

Hello,


The VxRail team has a free lab environment in which you can fully test their HCI solution. You may contact Dell or a Dell partner in your area to get access.

When buying HCI, you need to accurately size the CPU, memory, and storage, including future growth.

Deepen Dhulla
User

We found that a trial of Proxmox VE to deploy HCI is possible with 3-4 entry-level servers (we tried it with our spare old servers), which was great for gaining confidence with the HCI setup and for later planning a full-fledged HCI deployment accordingly.

Rony_Sklar
Hi community,  What are key factors that businesses should take into consideration when choosing between traditional SAN and hyper-converged solutions?
Fernando Salado
User

Well, there are many things to consider, but I will start with scalability.


In HCI solutions, scalability is achieved by adding nodes, while in dHCI (disaggregated HCI: hyper-converged solutions that use a SAN), you can expand the compute nodes or the storage independently. That means dHCI is more flexible, and you can address your compute or storage needs in a tailored way.

The other thing to consider is availability.

HCI solutions base their availability on RAIN (Redundant Array of "Inexpensive" Nodes). This means that you have more than one copy of your data, located on different nodes. If you experience a failure in a node, your data is protected and accessible. Moreover, it is extremely easy to set up a stretched cluster.

SAN-based architectures usually include just one copy of your data, unless you use more than one storage system and a replication solution.

Another thing to consider is operations. HCI environments are easy to use, set up, and scale. On the other hand, SAN-based solutions require more knowledge and maintenance effort (fabric OSes to update, HBAs, etc.).

Tim Williams
Real User

Whether to go 3 Tier (aka SAN) or HCI boils down to asking yourself what matters the most to you:

- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- Single number to call support (HCI)
- Opex vs Capex
- Pay-as-you-grow (HCI)/scalability
- Budget cycles

If you are a company that only gets budget once every 4-5 years and you can't get capital expenditure for storage, etc., pay-as-you-grow becomes less viable, and HCI is designed with pay-as-you-grow in mind. That doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times, and you have no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes capital budgets don't work out that way.

There are also some workloads that will work better on a 3-tier solution than on HCI, and vice versa. HCI works very well for anything but VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload that is a single large VM will require essentially two full HCI nodes to run and will need more storage than compute. Video workloads come to mind: body cams for police, surveillance cameras for businesses/schools, graphic editing. Those workloads don't reduce (dedupe/compress) well and are better suited to a SAN with very few features, such as an HPE MSA.

HCI runs VDI exceptionally well, and nobody should ever do 3-tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management.

3-tier requires complex management and time, as you have to manage the storage, the storage fabric, and the hosts separately, with different toolsets. This also leads to support issues, as you will frequently see the three vendors' support teams blame each other. With HCI, you call a single number and they support everything. You can drastically reduce your opex with HCI by simplifying support and management. If you're planning for growth up front and cannot pay as you grow, 3-tier will probably be cheaper. HCI gives you the opportunity not to spend capital if you end up missing growth projections, and to grow past planned growth much more easily, since adding a node is much simpler than expanding storage, networking, and compute independently.

In general, it's best to start with HCI and work to disqualify it rather than the other way around.

ShivendraJha
Real User

There are multiple factors to look at when selecting one over the other.
1. Price: HCI is cheaper if you are refreshing your complete infrastructure stack (compute/storage/network); however, if you are just buying individual components, such as compute or storage only, then 3-tier infrastructure is cheaper.
2. Scalability: HCI is highly and easily scalable.
3. Support: With a 3-tier architecture, you have multiple vendors/departments to contact for support on the solution, whereas with HCI you contact a single vendor for all your issues.
4. Infrastructure size: For a very small infrastructure, a 3-tier architecture based on an iSCSI SAN can be a little cheaper. However, for a medium or large infrastructure, HCI comes out cheaper every time.
5. Workload type: If you are running VDI, I strongly recommend HCI. Similarly, for a passive secondary site, 3-tier could be OK. Run benchmarking tools to establish what your requirements are.

I am sure HCI can do everything, though.

reviewer1234203 (Pre-sales Engineer at a tech services company with 11-50 employees)
Real User

There are so many variables to consider.

First of all, keep in mind that a trend is not a rule: your needs should be the basis of the decision, so don't choose HCI just because it's the new kid on the block.

To start, think with your pocket. SAN is high-cost if you are building the infrastructure from scratch: cables, switches, and HBAs all cost more than traditional LAN components, and SAN requires more experienced experts to manage the connections and issues. But SAN has particular benefits in sharing storage and server functions; for example, you can have disk and backup on the same SAN, and use specialized backup software and features to move data between storage components without directly impacting server traffic.

SAN cabling has details to consider, such as distance and speed. Cable quality (purity) is critical to achieving distance; the longer the distance, the lower the supported speed, and transceiver costs can be the worst nightmare. But a SAN can connect storage boxes hundreds of miles apart, whereas the LAN cables of HCI have a 100-meter limit unless you add a WAN, repeaters, or cascaded switches, which introduce some risk to the scenario.

Think about required capacities: do you need terabytes or petabytes? Some dozens of TB can be fine on HCI, but if you are into petabytes, think SAN. What about availability? Several commodity nodes replicating around the world, while respecting latency limits, can be considered with HCI; but if you need the highest availability while replicating a large amount of data, choose a SAN.

As for speed, if that is your pain point: LAN for HCI starts at a minimum of 10 Gb and can rise to 100 Gb if you have the money, while SAN is available at only up to 32 Gb, and your storage controller must run at the same speed, which can drive the cost sky-high.

Scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world. With SAN storage you have a limited number of replications between storage boxes; depending on the manufacturer, you can normally have at most four copies of the same volume distributed around the world, and scalability goes up to the controllers' limits. SAN is a scale-up model; HCI is a scale-out model.

Functionality: SAN storage can handle in hardware things like deduplication, compression, and multiple kinds of traffic (file, block, or object); HCI handles just block traffic and needs extra hardware to accelerate some processes, such as dedupe.

HCI is a way to share storage over the LAN, with dependencies such as the hypervisor and software or hardware accelerators. SAN is a way to share storage with servers that is like a VIP lounge: an exclusive set of server guests shares the buffet and can draw on the performance of hundreds of hard drives to support the most critical response times.

Bart Heungens
Reseller

It all depends on how you understand and use HCI.

If you see HCI as an integrated solution where storage is integrated into the servers, and software-defined storage is used to create a shared pool of storage across compute nodes, performance will be the deciding factor between HCI and a traditional SAN. Most vendors' HCI solutions write data two or three times for redundancy across compute nodes, so there is a performance impact on the applications due to the latency of the network between the nodes. Putting in 25Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that determines performance.
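
To make the bandwidth-versus-latency point concrete, here is a back-of-the-envelope sketch; every figure is a hypothetical round number, not a measurement of any product:

```python
# Effective write latency when an HCI layer mirrors each write to a remote
# node before acknowledging it. All figures are hypothetical round numbers.
LOCAL_SSD_WRITE_US = 100  # assumed local NVMe/SSD write latency (microseconds)

NETWORK_RTT_US = {
    "10GbE, same rack": 50,
    "25GbE, same rack": 40,     # more bandwidth barely moves the RTT
    "stretched link, 5 km": 500,
}

for link, rtt_us in NETWORK_RTT_US.items():
    # the write completes only after the remote replica acknowledges it
    print(f"{link}: ~{LOCAL_SSD_WRITE_US + rtt_us} us per replicated write")
```

Moving from 10GbE to 25GbE widens the pipe but barely changes the round trip, which is exactly why extra bandwidth does not fix write latency.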

Low-latency application requirements might push customers to a traditional SAN in this case. If you use HCI for ease of management through a single pane of glass: I see many storage vendors delivering plugins for server and application software, eliminating the need to use legacy SAN tools to create volumes and present them to the servers. Often it is possible to create a volume directly from within the hypervisor console and attach it to the hypervisor hosts. So for this scenario, I don't see a reason to choose one over the other.

Today there is a vendor (HPE) combining a traditional SAN with an HCI solution, calling it dHCI. It gives you an HCI user experience, the independent scalability of storage and compute, and the low latency that is often required. In time, I expect other vendors to follow the same path and deliver these kinds of solutions as well.

Cesar Danecke
Real User

Maybe what I say will be a little redundant.

As mentioned earlier, with new technologies I don't see why you would not use HCI.

Team size is an important factor: when you have a small team, you end up opting for a fully integrated solution.

HCI is wonderful: you can work with scalability and redundancy, and there are tools that provide agile backup.

The traditional structure makes many analysts more comfortable, but it ends up overloading small teams.

I use both architectures. For large, volatile data volumes, I believe a pure investment in HCI comes at a high cost, since adding storage means adding hosts.

As for abandoning the SAN you already have: in my opinion, that is very drastic. Each product has its strengths; storage-based replication is still my favorite, even though there are very good replication solutions in HCI.

It's worth analyzing the whole picture: the size of the infrastructure, the technical team and its qualifications, and the kind of applications you want to run. The upfront financial investment is important, but the cheaper option can be more expensive in the end.

I've seen companies connect their SAN to HCI, not always for performance reasons, but because it already exists, because there are low-cost solutions, or because of capacity requirements.

But when everything is new, with HCI it is possible to buy the minimum, whereas with a SAN you need to pre-size the number of ports, the capacity, the processing, and the speed that will be used over its growth journey, and this can make the project more expensive.

Krishna Randadath
User

Business-wise, with direct savings across the architecture, hardware, software, backup, and recovery, hyperconvergence can transform IT organizations from cost centers into frontline revenue drivers. A major issue with traditional IT architecture was that as complexity rose, the focus shifted from business problems to tech problems. The business's focus should be on what IT can do for the bottom line, not what the bottom line can do for IT.

Capital expenditures (CAPEX): the one-time purchase and implementation expenses associated with the solution.
Operational expenditures (OPEX): the running costs of an IT solution, incurred for managing, administering, and updating the existing IT infrastructure; together with CAPEX, these make up the total cost of ownership (TCO).
Considering the separate areas of cost reduction discussed above, organizations can evaluate the expense differential between their traditional infrastructure and an HCI environment.

Hyperconvergence helps meet current and future needs, so it's essential to calculate the TCO accurately. The TCO of a hyperconverged infrastructure includes annual maintenance fees for data centers and facilities, telecom services, hardware, software, cloud systems, and external vendors. Other costs include the staff needed for deployment and maintenance, staff training, and the effort to integrate with existing and legacy systems.
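
As a rough illustration of that TCO calculation, here is a minimal sketch using the cost categories from the paragraph above; every figure is a placeholder to be replaced with your own quotes, not a benchmark:

```python
# Rough TCO sketch over an assumed 5-year horizon; all figures are placeholders.
YEARS = 5

capex = {  # one-time purchase and implementation
    "hardware": 250_000,
    "software_licenses": 120_000,
    "deployment_and_integration": 30_000,
}
annual_opex = {  # recurring running costs
    "maintenance_fees": 40_000,
    "datacenter_and_facilities": 15_000,
    "telecom_services": 8_000,
    "staff_and_training": 60_000,
    "cloud_and_external_vendors": 12_000,
}

tco = sum(capex.values()) + YEARS * sum(annual_opex.values())
print(f"{YEARS}-year TCO: ${tco:,}")  # compute this per candidate stack and compare
```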

HCI overcomes the enormous wastage of resources and budgets common in the early phases of traditional infrastructure deployments because their scale dwarfs business needs at the time of purchase. HCI lends itself to incremental and granular scaling, allowing IT to add/remove resources as the business grows.

Manjunath V
Real User

Scalability and agility are the main factors in deciding between SAN and HCI. SAN infrastructure requires a huge amount of work when it reaches an end-of-support or end-of-life situation. Budgeting and procurement frequency also play a role.

Also, HCI's limitation of presenting a single datastore in a VMware environment is a problem when disk or data corruption happens.

Rony_Sklar
Hi community members, What are some important factors to keep in mind and to compare when choosing between HCI solutions? 
ShivendraJha
Real User

1. Support
2. Migration or conversion process from the existing solution
3. Cost
4. Hardware compatibility
5. Integration with all critical and non-critical solutions
6. Cloud readiness

Gaurav Vyas
Real User

Availability, support, cost, compatibility, scalability, and cloud readiness.

Steffen Hornung
Real User

Can we do a proof-of-concept?

Does the solution support my critical/legacy applications?

Does the solution support my current backup solution?

How does migration work, and what downtime is to be expected (e.g., based on hours per terabyte)?

Which other capabilities are possible with this solution?

How responsive is vendor support?

It does not matter whether the solution works natively with the hypervisor or through virtual machines that do the magic; native hypervisor integration will likely mean vendor lock-in. The main question is "how well does it work?" See that through with my first question.


Vaibhav Saini
User

> Integration with the existing running apps and solutions.
> Support parameters.
> Ease of scaling the solution up and out.
> Cost of the overall solution.
> Technical architecture of the solution.
> Integration with cloud services; the solution should be cloud-adaptive.
> The solution should be truly ready for the complete SDDC platform.

Michael Samaniego
User

There are several solutions in the market that claim to be HCI; however, the best differentiator is native integration with the hypervisor, without the need for additional virtual machines that "perform HCI". Based on several cost-efficient deployments I have done with different hardware manufacturers, I can personally say that the best option is VMware vSAN. Its main strength is its correct management of hardware resources.

KABELO INNOCENT SELELEKO
Real User

Collaboration

Abdelrahman Mahmoud
Real User

For me, the most important component of an HCI solution is the software-defined storage (SDS), so you always need to take great care when comparing SDS offerings from different HCI vendors.
Check the points below:
- Data locality
- SDS offerings (block storage, file storage, object storage)


Ariel Lindenfeld
There are a lot of vendors offering HCI solutions. What is the #1 most important criterion to look for when evaluating solutions? Help your peers cut through the vendor hype and make the best decision.
it_user936603 (Executive Vice President of Sales and Marketing with 11-50 employees)
User

Cost metrics such as ROI, capex and opex savings, and even a TCO should be accounted for.

1) Operational efficiency assumptions based on assessments. These should yield time to deploy, VM-to-admin ratios, device consolidation, and power usage.
2) Most important to me are the Recovery Time Objective and how well the system sustains operation without data loss. The Recovery Point Objective (RPO) measures how far back you can go without loss, and the RTO is how long it takes to bring mission-critical systems back online.
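
A minimal sketch of how those two numbers fall out of a protection schedule (both inputs are hypothetical):

```python
# RPO vs. RTO in one toy calculation; both inputs are hypothetical.
snapshot_interval_h = 4.0  # replication/backup runs every 4 hours
restore_time_h = 1.5       # measured time to bring services back online

worst_case_rpo_h = snapshot_interval_h  # up to one interval of data can be lost
rto_h = restore_time_h                  # downtime until critical systems return

print(f"worst-case RPO: {worst_case_rpo_h} h of potential data loss")
print(f"RTO: {rto_h} h until mission-critical systems are back online")
```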

Since you will find yourself managing VMs, you might consider a cost analysis there as well. (Remember you won't be managing devices any longer)

Your benefits in using HCI are:
1) A VM-centric approach
2) A software-defined datacenter (less replacement, better utilization, pay as you go)
3) Data protection
4) Lower costs
5) Centralized and even automated self-management tools

Bart Heungens
Reseller

For me, an HCI solution should provide:
- ease of management: one console does it all, no experts needed, a cloud experience with on-premises guarantees
- invisible IT: no need to care about the underlying hardware, one stack
- built-in intelligence based on AI for monitoring and configuration
- guaranteed performance for any workload, even when failures occur
- data efficiency with always-on dedupe and compression
- data protection, including backup and restore
- scalability: ease of adding resources independently of each other (scale up & out)
- a single line of support

Bharat Bedi (SolarWinds)
Vendor

While there is a long list of features/functions we could look at for HCI, in my experience of creating HCI solutions and selling them to multiple customers, here are some of the key things most customers boil it down to:

1) Shrink the data center:
This is one of the key customer pitches that all the big giants have for you: "We will help you reduce the carbon footprint with hyperconverged infrastructure." It is good to understand how much reduction they are actually offering. Can 10 racks come down to two, fewer, or more? With the many data-reduction technologies included, and compute + storage residing in the same nodes, what I mentioned above is possible, especially if you are sitting on legacy infrastructure.

2) Ease of running it:
The other point of running and buying HCI is "set it and forget it." Look not only at how easy the system is to set up and install, but also at how long it takes to provision new VMs, storage, etc. It is worth probing your vendors about what they do for QoS, centralized policy management, and so on. Remember that most HCI companies' portfolios differ at the software layer, and some of the features mentioned above are bundled in their code and work differently with different vendors.

3) Performance:
This can come down to architecture-level differences. In the race to shrink the hardware footprint, you could face performance glitches. Here is an example: when you switch on deduplication and compression, how much does it affect overall CPU performance, and thereby the VMs? Ask your vendors how they deal with this. I know some of them offload such operations to a separate accelerator card.

4) Scaling up + Scaling out:
How easy is it to add nodes, both for compute and for storage?
How long does adding nodes take, and is there any disruption in service?
What technologies does the vendor use to create a multi-site cluster? Keep in mind whether the cluster can include remote sites, too.
Can you add "storage only" or "compute only" nodes if needed?
All of the above have cost implications in the long run.

5) No finger pointing:
Remember point number two? Most of these HCI products are based on other vendors' hardware, wrapped with the HCI company's own software to make it behave in a specific way. If something goes wrong, will your vendor take full accountability rather than asking you to speak with the hardware vendor? It is a good idea to look for a vendor with a bigger customer base (not just for HCI, but for compute and storage in general), making them a single point of contact, with more resources to help you in case anything goes wrong.

SamuelMcKoy
Real User

In my opinion, the most important criterion when assessing HCI solutions, other than the obvious performance, is how the HCI solution scales; in other words, how one adds storage and compute resources to the solution. Without understanding how the solution scales, one can easily request resources without understanding how and why the overall costs have ballooned.

The costs can balloon not only because you're adding nodes to your HCI cluster for the additional storage and compute resources you need, but also because each additional compute node added to the cluster requires additional licensing for whichever hypervisor the HCI solution depends upon, usually on a per-compute-node basis.

Some HCI architectures allow admins to add only storage to the HCI cluster when additional storage is needed, requiring no additional licensing from the hypervisor's perspective. Other HCI architectures require you to add a compute node along with the additional storage, even if you don't need the compute resources, and that compute node must then be properly licensed as well. This type of architecture can, and usually does, force its consumers to spend more money than circumstances initially dictated.

So for me, how the HCI solution scales is most important, because it ultimately determines how cost-effective the HCI solution really is.
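
A minimal sketch of the licensing arithmetic described above; the node sizes and prices are invented for illustration and do not reflect any vendor's pricing:

```python
# Cost of adding 40 TB: storage-only nodes vs. compute nodes that each
# require a hypervisor license. Every figure here is invented.
NEEDED_TB = 40
TB_PER_NODE = 20

STORAGE_ONLY_NODE = 18_000           # adds capacity, no hypervisor license
COMPUTE_NODE = 25_000                # adds CPU/RAM we may not need
HYPERVISOR_LICENSE_PER_NODE = 7_000  # per-compute-node licensing

nodes = -(-NEEDED_TB // TB_PER_NODE)  # ceiling division: nodes required

print(f"storage-only expansion: ${nodes * STORAGE_ONLY_NODE:,}")
print(f"compute-node expansion: ${nodes * (COMPUTE_NODE + HYPERVISOR_LICENSE_PER_NODE):,}")
```

The gap grows with every expansion cycle, which is why the scaling model, not the sticker price, determines cost-effectiveness.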

lobo
Real User

1) Easy to operate or not
2) Cost
3) Scalable

Luciano Zoccoli (Lenovo (United States))
Real User

Absolutely, the important aspects are:

1- Simplification: simple to implement, simple to manage, and simple to use.

2- Reliability: there is always more reliability compared with a traditional solution.

Given these two items, when you look at the cost, or better, compare the TCO, a hyper-converged solution always comes out better.

BhaskarRaman (AspireNXT)
Consultant

HCI solutions have matured over time. While the global market swings like a yo-yo between VxRail and Nutanix, there are quite a few new vendors who have brought hardware-agnostic solutions to the market. Management and ease of implementation were key yesterday; of late, I see a plethora of customers who need multi-cloud connectors. Nutanix has taken a decent lead here with the acquisition of Calm. It is pricey, though: a minimum pack with a yearly subscription provides for 25 VMs. VxRail from Dell EMC has a lot of catching up to do there, although it offers a free API connector to AWS, free for the first three TB and then priced per TB of movement between private and public cloud. DISA STIG compliance is yet another point customers are interested to see in these solutions. Nutanix claims its code is built to comply with these rigorous standards for a secure virtualization layer with AFS, whereas Dell EMC offers pretested scripts to ensure the environment can comply with the standards.

Backup companies are vying to get their products certified. I wonder what Nutanix will have for the currently certified solutions, post its acquisition of "Mine". It still has miles to go.

Simon-Leung
User

Data protection is my primary concern; backup and restore is a must-have feature.


Hyper-Converged (HCI) Articles

Rony_Sklar
IT Central Station

Members of the IT Central Station community are always happy to take a few minutes to help other users by answering questions posted on our site. In this Q&A round-up, we’re focusing on our users’ answers about SIEM, Identity and Access Management, and the Differences between Hyper-converged Infrastructure vs Converged Infrastructure.

Which is the best SIEM tool for a mid-sized enterprise financial services firm: Arcsight or Securonix?

One of our users was looking for SIEM recommendations, and was specifically looking at ArcSight and Securonix. As always users were very helpful, and suggested possible tools based on their own experience.

ArcSight appeared to be the more popular recommendation of the two; one user, Himanshu Shah, suggested that Securonix may be better suited to a mid-sized business because ArcSight "works on EPS (Events per second) costing", which can become costly. Users also suggested looking at other options, such as QRadar, Splunk, and LogRhythm.

However, Consulta85d2 responded, “Neither, or both. Having done literally thousands of SIEM deployments, I can tell you from experience that the technology choice isn’t the most important choice. The critical choice is in the resources and commitment to manage and use the system.”

Aji Joseph held similar sentiments and highlighted the key role that the SoC team plays: “The success of SIEM solutions depends a lot on the expertise of the SoC team that will be managing the alerts generated by SIEM solutions.” He also suggested evaluating the forensics capabilities of the various solutions before buying.

What are some tips for effective identity and access management to prevent insider data breaches?

Insider breaches can be a real issue in businesses. Users gave advice on how to effectively implement Identity and Access Management to tackle this issue.

Mark Adams, a Senior Manager, IT Security and Compliance / CISO at a large construction company, gave great advice for implementing a solution, noting that it’s important to “make the implementation a formal project and involve all key stakeholders, including those from the business, not just IT folks.” He gave practical tips, including identifying and classifying all information assets and creating rules for access to those assets. He also highlighted the importance of reviewing access periodically. He stated, “Data owners should be involved in the review since they are usually in a better position to determine if individuals’ access is still legitimate.”

What are the key differences between converged and hyper-converged solutions?

Users helped to clarify key differences between hyper-converged (HCI) and converged infrastructure. Based on the users’ answers, the key differences revolve around ease of use, flexibility, and price.

HCI solutions are typically more expensive but have significant advantages. Steffen Hornung pointed to the scale-out nature of HCI, noting that you can "add more nodes to the system to support new workloads without losing Performance because you add all types at once (compute, storage and networking)."

Dan Reynolds summarised the appeal of HCI really well, pointing out that it's a complete solution: "Hyper-converged is typically an "all in one box/rack" solution. It consists of compute, storage & network resources all tied together physically (and through software)… You don't have to architect it. All you have to know is how much "power" you need (what you want to do with it)." In contrast, he noted that "with converged infrastructure (which can still be 'software defined') you have to match and configure the components to work together."

Thanks, as always, to all the users who are taking the time to ask and answer questions on IT Central Station!

IT Central Station is here for you, to learn and help your peers. In a market full of vendor hype, we enable you to get real, unbiased information from people like you.

Do you have a question that you’d like to ask our IT Central Station Community? Ask now!
