But how many different IT infrastructures, cloud platforms, locations, and vendors can a given enterprise reasonably deploy and manage in parallel? And at what cost?
Consistency is Key
Statista reported poll results showing that, as of 2020, 69% of the hybrid-cloud companies it surveyed had adopted hyperconverged infrastructure (HCI) platforms, or were in the process of doing so, to make mixed-cloud management easier. HCI combines server compute, storage, and networking into a software-defined platform designed to scale easily as needs change.
As HCI matures and proliferates, it’s moving beyond centralized data centers to the edge. That’s because HCI’s core capabilities, such as provisioning, monitoring, management, and on-demand scaling, can significantly reduce the complexities associated with edge computing.
Earlier this year, CRN cited a 451 Research study indicating that in 2021, 33% of organizations had deployed HCI in remote and branch office (ROBO) locations, up from 19% in 2020. ResearchandMarkets expects a compound annual growth rate (CAGR) of nearly 39% for edge computing through 2030.
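For a sense of scale, the quick calculation below shows what sustaining a roughly 39% CAGR would mean; the starting value and horizon are hypothetical, not figures from the forecast.

```python
# Illustrative only: what a compound annual growth rate (CAGR) of ~39% implies.
# The starting value and horizon are hypothetical, not figures from the cited forecast.
cagr = 0.39
years = 9  # e.g., 2021 through 2030
growth_multiple = (1 + cagr) ** years
print(f"A {cagr:.0%} CAGR sustained for {years} years is roughly a {growth_multiple:.0f}x increase.")
```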
HCI’s tight integration of hardware components makes the technology a strong fit for small but powerful edge sites that can spring up anywhere data is collected, generated, and needed for employee or customer access. Its virtualized architecture enables software-defined, unified management of IT resources remotely across sites, which is more efficient and less error-prone than manually configuring and maintaining separate hardware appliances.
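As a rough illustration of that difference, the sketch below (generic Python, not any vendor’s actual API or tooling) contrasts pushing one centrally defined configuration to many edge sites with configuring each appliance by hand:

```python
# Minimal sketch of software-defined, centralized management across edge sites.
# The Site class and desired_state values are hypothetical and for illustration only;
# real HCI platforms expose their own management APIs and tooling.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    state: dict = field(default_factory=dict)

    def apply(self, desired_state: dict) -> None:
        # Reconcile this site toward the centrally defined desired state.
        self.state.update(desired_state)

# One desired state, defined once and pushed to every site,
# instead of logging into each appliance and configuring it by hand.
desired_state = {"vm_template": "edge-small", "monitoring": True, "backup_window": "02:00"}
sites = [Site("store-001"), Site("store-002"), Site("clinic-017")]

for site in sites:
    site.apply(desired_state)
    print(f"{site.name}: {site.state}")
```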
Why the Edge Now?
While ROBO sites have been around for some time, a few drivers are now pushing new types of edge solutions into mainstream deployment and management practices. One is companies seeking higher-performance IT support for distributed customers, stores, and the large remote workforces created by the pandemic. Local infrastructure that’s geographically closer to users helps deliver that improvement by reducing distance-induced latency. Needs now span a widening spectrum, from thick (enterprise edge) to thin (IoT) edge approaches.
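To see why geographic proximity matters for latency, consider propagation delay alone; the distances below are hypothetical, and real-world latency adds routing, queuing, and processing overhead on top:

```python
# Back-of-the-envelope propagation latency, ignoring routing, queuing, and processing.
# The distances are hypothetical; light travels at roughly 200,000 km/s in fiber.
FIBER_KM_PER_MS = 200.0  # about two-thirds of the speed of light in a vacuum

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("edge site 50 km away", 50), ("regional cloud 1,500 km away", 1500)]:
    print(f"{label}: ~{round_trip_ms(km):.1f} ms round trip (propagation only)")
```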
In addition, an escalating number of real-time applications rely on local sensors, cameras, processing and storage in edge locations, many of which are unmanned and wholly contained. In particular, edge data centers often process streaming data to support emerging Internet of Things (IoT) and artificial intelligence (AI)-driven analytics and automation applications.
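As a rough illustration of that pattern, a hypothetical edge node might score incoming sensor readings locally in real time, acting on anomalies on-site rather than shipping every raw record to a central cloud:

```python
# Minimal sketch of local, real-time processing at an edge site.
# The sensor feed and anomaly threshold are hypothetical, for illustration only.
import random

ANOMALY_THRESHOLD = 80.0  # e.g., a temperature limit in degrees Celsius

def read_sensor() -> float:
    # Stand-in for a local camera or sensor stream.
    return random.gauss(70.0, 8.0)

anomalies = []
for _ in range(1000):              # process the stream locally as it arrives
    reading = read_sensor()
    if reading > ANOMALY_THRESHOLD:
        anomalies.append(reading)  # act on-site; only exceptions need to leave the edge

print(f"Processed 1,000 readings locally; {len(anomalies)} anomalies flagged.")
```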
Some edge sites are served by emerging 5G wireless networks, which are highly distributed by design. 5G connections may carry only a subset of locally collected data to locations with greater compute power for aggregated, multisite analytics.
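Continuing the sketch above, an edge node on such a network might forward only a compact summary upstream for multisite aggregation; the summary fields and upload stub here are hypothetical:

```python
# Minimal sketch: forward a compact summary upstream rather than the raw stream.
# The readings, summary fields, and send_upstream() stub are all hypothetical.
import json
import statistics

readings = [71.2, 69.8, 83.4, 70.1, 68.9, 84.7]  # stand-in for locally processed data

summary = {
    "site": "store-001",
    "count": len(readings),
    "mean": round(statistics.mean(readings), 1),
    "anomalies": sum(r > 80.0 for r in readings),
}

def send_upstream(payload: dict) -> None:
    # Stand-in for sending a small payload over the 5G link to a core site.
    print(f"Uploading {len(json.dumps(payload))} bytes:", payload)

send_upstream(summary)  # aggregated, multisite analytics then happen at the core
```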