A leading HCI vendor has recently been portraying itself as a cloud company by engaging AWS and Azure to supply bare metal machines on which to run its cluster operating system. It has also been adding grey areas to its messaging by telling customers that HCI now stands for Hybrid Cloud Infrastructure rather than Hyper Converged Infrastructure.
This approach is not truly cloud or hybrid in any way; it is simply a distinct set of bare metal server hardware that AWS and Azure operate in their data centers to host the vendor's cluster OS.
I therefore firmly reject the notion that this is Hybrid cloud, as it does not utilize the native cloud services of either AWS or Azure.
A cloud connector still has to be configured for that sort of integration.
The more precise terminology is actually Hyper Converged Hybrid Cloud Infrastructure, as this accurately describes the configuration: without cloud connector services, only another HCI-based cluster running AOS can establish a connection to it.
Anyone paying close attention to these games may wonder: what does the HCHCI concept entail if native public cloud hosting services are not included?
Simply put, it is a way of asserting that you are using the cloud with your existing applications, without restructuring them for cloud compatibility, because they are merely traditional bare metal servers operating within AWS or Microsoft Azure data centers.
This is no different from using Dell, HPE, Cisco or Lenovo servers, except that you cannot configure the RAM, storage or CPU in these public cloud bare metal server offerings; all you can do is select how many nodes you want in a cluster.
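Because you cannot tune RAM, storage or CPU per node, sizing one of these clusters collapses into a single question: how many nodes? A minimal sketch of that arithmetic, using entirely hypothetical fixed node specs (the figures below are illustrative, not actual AWS or Azure NC2 SKUs):

```python
import math

# Hypothetical fixed bare metal node spec -- illustrative only,
# not an actual AWS or Azure NC2 SKU.
NODE_CORES = 48
NODE_RAM_GB = 768
NODE_RAW_TB = 30.0

def nodes_needed(cores, ram_gb, raw_tb, ha_minimum=3):
    """Smallest cluster that covers every resource dimension,
    never dropping below the HA-minimum node count."""
    per_dimension = [
        math.ceil(cores / NODE_CORES),
        math.ceil(ram_gb / NODE_RAM_GB),
        math.ceil(raw_tb / NODE_RAW_TB),
    ]
    return max(ha_minimum, *per_dimension)

print(nodes_needed(cores=200, ram_gb=4000, raw_tb=120))  # → 6
```

The point of the sketch: whichever resource you run out of first dictates the node count, and you buy the other two resources along with it whether you need them or not.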
AWS provides a wider range of bare metal server types for AOS NC2 shenanigans than Microsoft's Azure bare metal server offerings (for AOS).
The current Azure NC2 server node offerings remain unchanged. Microsoft claims to be collaborating with the Technology Drive folk in San Jose to introduce additional bare metal server options, but as of now this is just a vicious rumor.
So, in a nutshell, what sets HCHCI apart from traditional HCI is that in this architectural model the HCI cluster operating system can operate anywhere: on premises, on public cloud bare metal hardware, or at the edge. The reason you would do this is to avoid rewriting your legacy applications to be cloud friendly while still claiming you have migrated to the cloud, since you are purchasing it from AWS or Azure.
Some larger organizations have to spend their cloud services budget or lose it, and a good many purchase NC2 just to zero out that pile of dollars on something useful.
Technically, the correct statement for NC2 usage is that you have migrated to the public cloud bare metal server offerings of either AWS or Microsoft Azure, and you can even connect the two to each other.
I have some big global financial organizations as clients who run NC2 clusters on AWS and Microsoft Azure across all the zones they both offer.
The big attraction here for these organizations is saving a ton of money by continuing to use their existing application platforms without spending a fortune to make them cloud friendly.
Before delving further into this topic, let's take a trip back to the time when I was known as an expert in storage systems from EMC, Hitachi, NetApp, and IBM.
It was around this period, in 2012, that I started feeling uneasy about the large storage systems we were deploying for customers, which came with very hefty price tags.
These storage systems had scalability issues - when one filled up, another had to be added, leading to overly complex configurations.
This approach was akin to building separate infrastructures for different cities, a stark contrast to the model adopted by Hyperscalers like AWS, Google, and Azure, who treat all servers as a unified entity within their data center domain.
For years, I had my eye on projects like Sun Microsystems' Amber Road and Microsoft's clustered compute architectures.
I envisioned a system that could pool resources from all cluster nodes seamlessly.
Essentially, the goal was to create a single Data Center appliance that could perform the functions of multiple Data Center appliances with ease.
The idea was to have the cluster infrastructure resources utilized by applications, making the infrastructure transparent to the applications through intelligent software.
Scaling up or down would simply involve adding or removing servers from the cluster, with a minimum of three servers for basic High Availability and four if you were in any way serious about HA.
One of the advantages of a cluster-based system is that performance theoretically increases as more servers are added, providing linear scalability and enhanced High Availability.
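The "three for basic HA, four if serious" point can be made concrete with simple majority-quorum arithmetic. This is a generic sketch of quorum math, not any specific cluster OS's algorithm:

```python
def majority(n):
    """Votes required to keep quorum in an n-node cluster."""
    return n // 2 + 1

def node_failures_tolerated(n):
    """Node failures survivable while a majority still remains."""
    return n - majority(n)

for n in (3, 4, 5):
    print(f"{n} nodes -> survives {node_failures_tolerated(n)} failure(s)")
```

Three and four nodes both survive only a single failure, but the fourth node leaves spare capacity to rebuild onto while degraded, which is why four is the serious-about-HA choice.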
Furthermore, complex automation and orchestration tools could be layered on top of the cluster function, embedded into the Cluster operating system software through standard API sets.
Security features, such as encryption and firewalls, along with easy to deploy micro segmentation were integrated into the Cluster operating system, ensuring data protection and integrity of the VM and the VM data.
This architecture enabled the delivery of services that stand-alone appliances offer as part of the cluster OS feature set, with full API support for easy integration.
Virtual machines are managed by a Virtual Machine service, while storage services are enabled through Block, NAS, and Object services, ensuring High Availability through Replication and other HA services.
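Replication-based HA trades raw capacity for extra copies of every write. A sketch of the usual usable-capacity arithmetic under a replication factor (RF), ignoring any real product's metadata and reserve overheads:

```python
def usable_tb(nodes, raw_tb_per_node, rf=2, rebuild_reserve_nodes=1):
    """Usable capacity when every write is stored rf times, keeping
    one node's worth of raw space free to rebuild after a failure."""
    raw = nodes * raw_tb_per_node
    reserve = rebuild_reserve_nodes * raw_tb_per_node
    return (raw - reserve) / rf

print(usable_tb(nodes=4, raw_tb_per_node=30.0, rf=2))  # → 45.0
```

So a four-node cluster with 120 TB raw yields roughly 45 TB usable at RF2, which is the kind of overhead a cluster OS pays for its HA guarantees.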
One of the key benefits of this cluster-based system type architecture was the ability to accommodate various specialist security services, covering a range of aspects from Ransomware to Data Protection.
Essentially, the cluster OS aimed to simplify infrastructure management and provide a seamless experience for administrators.
By focusing on cluster computing systems, the goal was to lower compute and IT costs significantly for businesses, streamlining operations and enhancing efficiency.
Through the development of a robust cluster operating system, the aim was to create a unified architecture that could address the complexities of modern IT systems effectively.
Cluster-based solutions were designed to offer superior performance, resilience, and scalability compared to traditional three-tier data center solutions, providing a more efficient and cost-effective approach to infrastructure management.
As the industry evolved, the focus shifted towards Hyper Converged Infrastructure (HCI), combining the benefits of cloud and converged infrastructure to deliver a more flexible and scalable solution.
This shift towards HCI represented a significant advancement in IT architecture, offering a more integrated and efficient approach to infrastructure management that was also stupid simple to operate versus legacy three-tier silos of data center gear.
As organizations increasingly adopt HCI solutions, the need for robust and scalable cluster-based systems becomes paramount, driving innovation and efficiency in the IT landscape.
We have observed of late that some HCI vendors have been making some questionable decisions with their sales strategies after years of operating in a certain set way.
Many eyebrows have been raised by these two vendors' recent “interesting” decisions around how they sell their fine wares.
This has actually resulted in customers seeking totally different solutions to HCI and abandoning both vendors as a result.
The channel itself is also not much amused by all this disturbing action as they were the ones that poured time, people and money into making virtualization a big thing in the first place.
It seems like a new vendor, utilizing NVIDIA DGX-2 powered equipment, is poised to surpass them all in one fell swoop and the smart channel folks are tooling up for it big time.
I am starting to believe that HCI is becoming outdated and is already yesterday's brief hero...
A case of while the fox dithers the eagle steals his dinner.....
The leading HCI vendor would be better served fixing the massive defects in Prism Central and hiring back the coders they recently lost, to deliver the software advances they are now clearly never going to deliver...
It's the bane of a data center operator, all these broad commercial antics that are going on...
In the meantime, HCI exists in the public cloud and offers considerable cost avoidance by not having to refactor all of an organization's legacy applications for native public cloud use.
Nobody is saving money rushing to cloud, the costs are sobering.
This refactoring concept is itself horrendously expensive and makes no sense to my observing eyes, given how slow and costly the process of refactoring applications for cloud really is.
One large entertainment company I used to sell large storage iron to switched to public cloud in 2015 with a list of 126 software systems they were going to refactor for cloud; now, in 2024, the count of successes has just grown to 5.
This reality would get me to embrace HCHCI in a tight bear hug if I was CIO at any corporation undertaking such refactoring marathons.
Something to spend some cycles on in deep contemplation methinks…