Here’s what you must know before investing in HyperConverged Infrastructure

  • Many organisations are evaluating HCI when looking to augment their IT infrastructure.
  • With every major vendor bringing about “game changing” innovations into the fray at a rapid pace, this document summarises important criteria to decide which HCI solution to buy.
  • Form factor (hardware appliance vs. software), third-party hardware support, cloud integration, Kubernetes integration and analytics engines are some of the important factors to consider.
  • Below are some of the questions that we use to guide our customers when making a decision on which HyperConverged Infrastructure vendor to choose.

Hyper Converged Infrastructure

What is Hyper Converged Infrastructure?

  • In HCI, all three tiers of a traditional DC are packed into a single block, called a ‘node’. Many such nodes are then clustered together to form a ‘resource pool’ which can then be managed / orchestrated via software.
  • As a result, you get a single system that’s supposedly much less complex to manage across data centers.
  • While you get the benefit of simplicity, you may lose out on the ability to customise the infrastructure to meet your business needs.

Why should you consider HCI over traditional infrastructure?

While there is a strong focus on cloud, on-premises infrastructure is far from dead. One segment which is thriving (even during the pandemic!) is HCI. Gartner expects that by 2023, 70% of enterprises will be running some form of HCI. Public cloud providers such as Amazon, Google and Microsoft are all providing connections to on-prem HCI products for hybrid deployment and management.
Hyperconverged infrastructure offers a simplified design with converged tiers, cloud integration, a reduced footprint with the promise of “Pay as you Grow”, and abstraction layers that let a “storage generalist” manage the entire infrastructure.
With almost all OEMs providing critical storage features such as protection against multi-node failure, data de-duplication and encryption at rest and in transit, let’s look at some criteria that should be on your checklist when evaluating any OEM’s HCI solution.

Form Factor

  • Option 1 is that you buy the software / licence and deploy it on servers that are on the vendor’s hardware compatibility list.
  • Option 2 is that you buy pre-configured nodes of compute, storage and networking resources packed into one box. This seems to be all the rage at the moment, with major OEMs competing in the space, including Cisco (Hyperflex), Dell EMC (VxRAIL), HPE (SimpliVity), Scale Computing (HC3), NetApp, etc.
  • A pre-configured solution should mean that you get a tightly integrated, tweaked and tuned box from the OEM. The OEM takes on the onus of ironing out compatibility issues between hardware and software and offers you a “validated solution”.
  • While this makes your life easier as the client, with promises such as “one support number to call if you need help”, you are potentially locking yourself in to one vendor and entering “inter-op hell” with incumbent infrastructure.
  • If you’ve already invested in “converged infrastructure” with three disparate layers, consider Option 1. VMware vSAN (the current market leader in HCI as per Statista) and Nutanix Acropolis are some software-based products that fall in this category.
  • The “software” in Option 1 is essentially a bundle of code that includes the hypervisor, the storage management layer, the network management engine, the orchestrator and the analytics solution. The solution is hardware agnostic as long as the hardware and its firmware are on the HCI software’s list of supported third parties.
  • An obvious downside to the software-based approach is that you now need to deal with multiple vendors if things fall apart.


Scalability

  • You’d typically want to go for an HCI solution that lets you grow your storage pool independently of your compute. Some vendors call this “Disaggregated HCI”.
  • Secondly, the HCI solution should not try to limit you to a certain storage type – SSD vs HDD, for example.
  • Another thing to factor in is the bloat added by the storage management and orchestration layer, which will compete for the available resources. Evaluate the OEM’s overhead when making a buying decision.
  • Next: does the solution limit you to “X number of compute nodes”, “Y number of storage nodes”, etc.? Evaluate these upper limits against your future plans and growth strategy. The last thing you’d want is two HCI pods / clusters in the same DC / colo, each treated as a separate management entity – making management, assurance, data protection and server migrations a nightmare.
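To make the overhead and replication trade-offs above concrete, here is a rough back-of-the-envelope sketch. The replication factor, de-duplication ratio and overhead fraction used are illustrative assumptions, not any OEM’s published figures – plug in the numbers your vendor quotes.

```python
def usable_capacity_tib(nodes, raw_per_node_tib, replication_factor=2,
                        dedup_ratio=1.5, overhead_fraction=0.10):
    """Rough usable-capacity estimate for an HCI cluster (illustrative only).

    replication_factor: copies of each block kept for node-failure protection
    dedup_ratio:        assumed data-reduction ratio (workload dependent)
    overhead_fraction:  capacity consumed by the storage/management layer
    """
    raw = nodes * raw_per_node_tib
    after_overhead = raw * (1 - overhead_fraction)
    effective = after_overhead / replication_factor * dedup_ratio
    return round(effective, 2)

# e.g. 4 nodes x 20 TiB raw each, replication factor 2:
print(usable_capacity_tib(4, 20))  # 54.0 TiB usable, under these assumptions
```

Running the same numbers at replication factor 3 (tolerating two node failures) drops usable capacity to 36 TiB – which is exactly why the OEM overhead and protection scheme belong on your evaluation checklist.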

Flexibility in Management and Orchestration

  • Let’s say you’re trying to create an HCI cluster with varying hardware (e.g. cores, RAM and disk sizes differ across nodes). Does the OEM allow such a combination, or would they expect you to bring parity amongst all nodes in a cluster?
  • Beware of the “single pane of glass” sales phrase from OEMs when it comes to management / orchestration. Your workloads may require access to NAS or SAN resources for performance reasons, and the HCI orchestrator may not give you the visibility or controls to manage these components.


Third-Party Hardware and Hypervisor Support

  • Be it software-based or hardware-based, is the solution agnostic when allowing you to add third-party components? I believe that’s the promise of HCI – it’s meant to abstract the hardware and let you add gear of your choice. Any OEM that doesn’t do this is locking you into an ecosystem it controls.
  • Does the solution let you choose the hypervisor of your choice? Better still, does it allow you to mix and match hypervisors?
  • Does the solution let you add third-party peripherals of your choice – sensors, actuators, GPUs, et al.? If you have a requirement to develop bespoke nodes for your unique needs, ensure you choose vendors that support such customisation.

Cloud integration

  • Imagine the power of an HCI solution that handles temporary burst capacity (which it can’t meet locally) by letting you provision compute in the public cloud. For this, the orchestration layer needs the intelligence to allow this and to treat the cloud node as “yet another HCI node”.
  • This scale-out / scale-in elasticity has evolved into a very attractive feature of a few leading HCI vendors.
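Conceptually, the burst-out behaviour described above can be sketched as a simple policy. The thresholds and action names below are hypothetical illustrations, not any vendor’s API:

```python
# Illustrative sketch of a burst-out policy an HCI orchestrator might apply.
# Thresholds and action names are hypothetical, not a real product's API.

BURST_OUT_THRESHOLD = 0.85  # provision a cloud node above 85% utilisation
SCALE_IN_THRESHOLD = 0.50   # release cloud nodes below 50% utilisation

def plan_burst(cluster_utilisation, cloud_nodes_active):
    """Return the action the orchestrator should take for this cycle."""
    if cluster_utilisation > BURST_OUT_THRESHOLD:
        # Treat the provisioned cloud node as "yet another HCI node".
        return "provision_cloud_node"
    if cluster_utilisation < SCALE_IN_THRESHOLD and cloud_nodes_active > 0:
        return "decommission_cloud_node"
    return "no_action"

print(plan_burst(0.90, 0))  # provision_cloud_node
print(plan_burst(0.40, 1))  # decommission_cloud_node
```

The real differentiator between vendors is not this decision loop but whether the storage and management fabric genuinely extends to the cloud node, so workloads can move without re-platforming.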


High Availability / Disaster Recovery

  • What kind of intra- and inter-DC HA / DR features does the solution offer?
  • Does the solution natively offer the ability to automatically stripe data across multiple nodes?
  • Does the solution offer remote replication services to a DR node in a different DC or to an integrated cloud service?
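To illustrate what automatic striping with replica placement means in practice, here is a deliberately simplified, hypothetical sketch: each data block gets a fixed number of copies, placed on distinct nodes so the cluster survives a node failure. Real HCI products use far more sophisticated placement logic.

```python
# Hypothetical illustration of replica placement in an HCI storage layer:
# each block's copies land on distinct nodes (round-robin for simplicity).

def place_replicas(block_id, nodes, replication_factor=2):
    """Pick replication_factor distinct nodes for one block."""
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication_factor)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
for block in range(3):
    print(block, place_replicas(block, nodes))
# 0 ['node-a', 'node-b']
# 1 ['node-b', 'node-c']
# 2 ['node-c', 'node-d']
```

The point of the checklist question is whether this placement (and its rebalancing after a node failure) happens natively and automatically, or requires manual intervention.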

Backup / VTL support

  • Is backup / recovery part of the solution’s feature set, or do you need to invest in solutions such as Veeam, Druva, etc.?
  • Does the orchestration engine allow you to attach a tape library to a node for backups?
  • Does the solution allow you to attach purpose-built backup appliances such as Cohesity, Rubrik, Commvault, etc.?
  • Does the solution support public cloud backup natively?

Product Support

  • When buying an appliance, will you need to call only a single number, irrespective of what fails – hardware, hypervisor or orchestrator?
  • Will the appliance vendor own the response to zero-day vulnerabilities across the entire solution?

The HCI space is exciting, with vendors shipping rapid improvements, for example:

  • Cisco launched Intersight enhancements to its HCI offering Hyperflex, providing predictive failure analysis, proactive problem solving, and remote edge deployment and management of HX edge nodes.
  • Dell provides disaggregation features as well as VMware Cloud Foundation as a turnkey hybrid cloud platform on top of VxRAIL.
  • Microsoft’s Azure Stack HCI integrates with Azure core services such as Azure Backup.
  • HPE SimpliVity 4.1.0 offers the ability to run both container and VMware virtual machine workloads at the edge, with a Kubernetes container plugin and cloud-native backup from multiple edge sites via HPE Cloud Volumes.
  • IBM’s new Spectrum Fusion HCI – targeted at their Red Hat OpenShift platform with a focus on AI and GPU-enhanced workloads.

HCI brings a lot of benefits that help improve the agility of the IT organisation and can also help improve the bottom line. Be careful, though, that the increased agility does not come at the cost of increased complexity. Your hyperconverged infrastructure decision needs to serve your application needs today and as your business grows.

Zindagi Technologies offers IT consulting services to help guide you in your IT decision-making. We are an IT managed services organisation which provides planning, design, implementation and managed services to customers across the globe. We’re reachable on +919773973971 (call or WhatsApp) to help out.


Abhijit Anand
