To understand a Datacenter, let us first recall a computer. A computer consists of three major components: CPU (computation), memory (RAM) and disk (storage). Similarly, a Datacenter traditionally had three discrete but interconnected components: servers (for compute), storage arrays (for storage) and network switches (for connectivity). These components evolved over time, and that evolution is the evolution of the Datacenter itself.
Server: The most important part of a server/computer is its processor. Processors evolved primarily in the number of cores per physical chip: from single-core to dual-core to today's multi-core designs. Each core acts as an independent processing unit, and technologies such as hyper-threading can further expose a single core to the operating system as multiple logical processors.
Storage: Storage evolved from the 1.44 MB floppy disk (as far as I remember) and the 700 MB CD to 2 TB USB drives and 30 TB SSDs. The protocols used to connect to storage evolved as well: from IDE (PATA) and SCSI, to SATA and SAS, to today's NVMe.
Networking: Networking evolved from 10 Mbps links to today's 100 Gbps links. The devices ranged from hubs and switches to routers, firewalls and so on. The network types evolved from wired LANs to Wi-Fi and WANs.
All these discrete components played their vital roles, and then came the era of Virtualization. In virtualization, a physical component is divided into logical ones, each of which is then presented to an application as if it were a separate physical component.
All the building blocks of a computer (hardware, kernel, user space) got virtualized: hardware virtualization, memory virtualization, software virtualization, network virtualization, application virtualization, storage virtualization and so on. With virtualization, things that earlier seemed impossible became possible. A single computer could run multiple operating systems.
All of these operating systems boot at the same time, each as a virtual machine. All virtual machines run as applications on a single kernel called the hypervisor. A set of virtual machines forms a cluster to serve a single service or application, and a single storage disk can be shared across multiple servers to form a storage cluster.
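The partitioning described above can be expressed as a small conceptual sketch: a hypervisor slices one physical host's CPU cores and RAM into logical allocations handed to virtual machines. This is an illustrative model only, not actual hypervisor code; all class and method names are made up for the example.

```python
# Conceptual sketch: a hypervisor divides one physical host's resources
# (cores, RAM) into logical slices, each presented to a VM as if it were
# a dedicated physical machine. Illustrative model, not a real hypervisor.

class Host:
    def __init__(self, cores, ram_gb):
        self.cores, self.ram_gb = cores, ram_gb
        self.vms = []

    def create_vm(self, name, cores, ram_gb):
        used_cores = sum(vm["cores"] for vm in self.vms)
        used_ram = sum(vm["ram_gb"] for vm in self.vms)
        # This simple model refuses to oversubscribe physical capacity.
        if used_cores + cores > self.cores or used_ram + ram_gb > self.ram_gb:
            raise RuntimeError("insufficient physical resources")
        vm = {"name": name, "cores": cores, "ram_gb": ram_gb}
        self.vms.append(vm)
        return vm

host = Host(cores=16, ram_gb=64)
host.create_vm("web", cores=4, ram_gb=8)
host.create_vm("db", cores=8, ram_gb=32)
print(len(host.vms))  # 2 VMs sharing one physical machine
```

Real hypervisors go much further (CPU time-slicing, memory overcommit, device emulation), but the core idea is the same: logical slices carved out of physical hardware.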
The idea of virtualizing components, then integrating and managing them together, gave rise to the term Converged Infrastructure, or CI. In CI, the different components are grouped together to form a single CI node. Datacenter administrators get a single management utility/interface for all the components of the CI. This management of discrete components via a single interface allowed CI to offer features such as scale-out, scale-up, high availability and so on.
Then came the era of software-defined components: software-defined networking, software-defined storage and software-defined compute, which together formed Software-Defined Infrastructure. The term "software-defined" means that the services once expected from dedicated physical devices are instead programmed and performed by pieces of code running on a node. This reduced the need for discrete components and gave rise to the term Hyper-Converged Infrastructure, or HCI.
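As a toy illustration of "software-defined": the core service a physical Ethernet switch performs in hardware, learning MAC addresses and forwarding frames, can be expressed in a few lines of code. This is a deliberately minimal sketch; real software switches are far more involved.

```python
# Sketch: the basic service of a network switch -- MAC learning and
# frame forwarding -- implemented purely in software. Illustrative only.

class SoftSwitch:
    def __init__(self):
        self.mac_table = {}  # learned MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn which port the source MAC address lives on.
        self.mac_table[src_mac] = in_port
        # Forward to the known port, or flood when the destination is unknown.
        return self.mac_table.get(dst_mac, "FLOOD")

sw = SoftSwitch()
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # unknown dst -> FLOOD
sw.receive("bb:bb", "aa:aa", in_port=2)         # switch learns bb:bb is on port 2
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # now forwards to port 2
```

The same pattern applies to software-defined storage and compute: behaviour that once required a dedicated appliance becomes a program.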
In HCI, the software implementations of the discrete components are clubbed together to form a single HCI node. Because the components are software-defined and integrated into one node, managing them via a single interface is easy; however, because compute, storage and networking are bundled inside each node, it is difficult to scale one resource independently (scale-up of, say, just storage), and growth instead tends to happen by adding whole nodes (scale-out).
In a nutshell: clubbing the individual hardware components of a Datacenter into a single managed node is Converged Infrastructure; clubbing software-defined components into a single node is Hyper-Converged Infrastructure.
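The HCI side of that summary can be sketched as follows: each node bundles software-defined compute, storage and network services, and the cluster grows by scale-out, adding whole nodes at a time. All names here are illustrative, not any vendor's API.

```python
# Sketch: an HCI node bundles software-defined compute, storage and
# network services; the cluster scales out by adding whole nodes, so
# every resource grows together. Conceptual model with made-up names.

class HCINode:
    def __init__(self, cores, storage_tb):
        self.cores = cores
        self.storage_tb = storage_tb
        self.services = ["compute", "storage", "network"]  # all in one node

class HCICluster:
    def __init__(self):
        self.nodes = []

    def scale_out(self, node):
        # Scale-out: add a whole node; cores AND storage grow together.
        self.nodes.append(node)

    def total_cores(self):
        return sum(n.cores for n in self.nodes)

    def total_storage_tb(self):
        return sum(n.storage_tb for n in self.nodes)

cluster = HCICluster()
cluster.scale_out(HCINode(cores=32, storage_tb=20))
cluster.scale_out(HCINode(cores=32, storage_tb=20))
print(cluster.total_cores())       # 64
print(cluster.total_storage_tb())  # 40
```

Note how there is no way in this model to add storage without also adding cores; that coupling is exactly the scale-up limitation discussed above.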
The next era is likely to be the era of Hybrid Infrastructure, where some components will be hardware-based and some will be software-based.