Networking with Data Gravity, Telecom News, ET Telecom

By Ryan Perera, Vice President, Asia Content & Subsea Networks, India and Subcontinent, Ciena

Data is created everywhere: in and around our homes, offices, factories and machines. As companies continue their digital transformation and move towards Industry 4.0, data growth will be further driven by digital twin strategies built on the connected Internet of Things (IoT), cognitive services and cloud computing. Emerging applications like the Metaverse will also drive growth and put increased pressure on our underlying communication networks. In fact, Credit Suisse[1] estimates that the growing interest in immersive applications and the 3D environment of the Metaverse will require telecom access networks to support 24 times more data usage over the next ten years, delivered reliably, cost-effectively and with lower latency.

With exabytes of data created daily, data lakes are used by enterprises and public cloud providers to process, store and transform data to deliver insights and improve consumer experiences. These large datasets are now becoming centers of Data Gravity[2] for enterprise systems, pulling other data and applications closer together, similar to the effect gravity has on objects around a planet: as the mass (of data) increases, the force of (data) gravitational attraction also increases. In the past, data centers were built at locations optimal for space and power. Now, storage-oriented "data lakes" are being built closer to end users, and these CPU/GPU-powered data lakes attract applications and workloads to them.

The effect of data gravity

The Digital Realty Data Gravity Index[3] report estimates that by 2024, G2000 enterprises across 53 metros will create 1.4 million gigabytes of data per second, process an additional 30 petaflops, and store an additional 622 terabytes per second. This will certainly amplify data gravity. Data Gravity Intensity[4], which is determined by data mass, data activity level, bandwidth and, of course, latency, is expected to grow at a 153% CAGR in the Asia-Pacific region, with some metros exerting greater attraction than others.
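Digital Realty's index is commonly summarized as intensity = (data mass × data activity × bandwidth) / latency²; the exact inputs and weightings are proprietary, so the sketch below is only an illustration of how the four variables interact, with made-up units and values:

```python
def data_gravity_intensity(data_mass_gb: float,
                           data_activity_gbps: float,
                           bandwidth_gbps: float,
                           latency_ms: float) -> float:
    """Illustrative form of the commonly cited Data Gravity Intensity
    relationship: intensity grows with data mass, activity and bandwidth,
    and falls off with the square of latency. Units and weightings here
    are assumptions, not Digital Realty's proprietary model."""
    return (data_mass_gb * data_activity_gbps * bandwidth_gbps) / (latency_ms ** 2)

# The latency-squared term is what favors metro-local data lakes:
# halving latency to a dataset quadruples its gravitational pull.
metro_far = data_gravity_intensity(1000, 50, 100, latency_ms=10)
metro_near = data_gravity_intensity(1000, 50, 100, latency_ms=5)
assert metro_near == 4 * metro_far
```

This also shows why bandwidth upgrades alone cannot offset distance: bandwidth enters the numerator linearly, while latency is squared in the denominator.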

Figure 1: Data gravity centers in Asia

Data gravity intensity in Asia-Pacific is concentrated where the major public cloud data center regions are located. These centers (red bubbles, with megawatt capacity, in Figure 1) are well served by terrestrial and submarine networks (blue cylinders, with terabit/s capacity). Additionally, more than 17 new open submarine cable systems are expected to be commissioned between 2023 and 2025 to interconnect these regions with the lowest latency and highest spectral efficiency. Major regional telecom providers are partnering with public cloud providers to build these new undersea network corridors.

Given the ever-increasing gravitational pull of these data clusters, we expect them to grow further while drawing smaller clusters closer. As Figure 1 shows, high-intensity data gravity sites are mostly found in densely populated urban metros. To alleviate power and space limitations, these data centers are growing in clusters on underlying optical WAN mesh networks, including campus-style data center clusters. Gone are the days when hyperscale data centers were built only in remote locations around the world.

Data gravity can, however, create unforeseen challenges for digital transformation, given business locations, proximity to users (latency), bandwidth (availability and cost), regulatory constraints, compliance, security and data confidentiality. Public clouds, with their vast service portfolios, have long been considered the obvious destination for all enterprise workloads. But given egress costs, data security, over-dependency and disaster recovery concerns, the majority of enterprises are now pursuing hybrid multi-cloud strategies while trying to overcome data gravity barriers.

Figure 2: Data creation cycle and data gravity attraction

Navigating data gravity barriers

To address data gravity challenges, enterprises are rapidly adopting neutral colocation sites (data clearinghouses) to store data with low-latency connectivity to public and on-premises clouds. In fact, 451 Research[5] found that 63% of enterprises still own and operate data center facilities, and many expect to leverage third-party colocation sites such as multi-tenant data centers (MTDCs) with multi-cloud access and other ecosystems, while navigating disaster recovery and data gravity barriers.

Distributed compute, networking and storage infrastructure will increasingly involve specialized resources, such as chipsets for artificial intelligence (AI) training and inference, alongside general-purpose hardware. Edge cloud systems will also be limited in scale, given space and power constraints. To avoid stranding resources, the industry has identified the need for a balanced system[6] that optimizes the use of distributed compute, storage and network connectivity. Achieving such a balanced system requires a declarative programming model that tightly couples application context with infrastructure state. In this application-driven networking paradigm, applications care about the execution times of remote procedure call (RPC) sessions between compute nodes, not just connection latencies. The ecosystem of network operators, public cloud providers, MTDCs and telecom service providers must participate in this paradigm with scalable, programmable network infrastructure while exposing relevant APIs to application providers.
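As a hypothetical sketch of what coupling application context to infrastructure state could look like, the snippet below declares an RPC completion-time target and derives the bandwidth a controller would need to provision to meet it. The class and function names are invented for illustration, and this serialization-only model ignores the many other factors (queuing, retransmission, compute time) a real controller would weigh:

```python
from dataclasses import dataclass

@dataclass
class RpcIntent:
    """Hypothetical declarative intent: the application states the
    outcome it needs (RPC completion time), not a network path."""
    src_node: str
    dst_node: str
    payload_mb: float          # data to move per RPC
    target_completion_ms: float  # end-to-end deadline the app cares about

def required_bandwidth_gbps(intent: RpcIntent, rtt_ms: float) -> float:
    """Bandwidth to provision so the transfer finishes within the target
    after subtracting round-trip latency (serialization-only model)."""
    transfer_budget_ms = intent.target_completion_ms - rtt_ms
    if transfer_budget_ms <= 0:
        raise ValueError("latency alone exceeds the completion target")
    # payload_mb * 8 bits over a budget in ms: Mb/ms is numerically Gb/s
    return (intent.payload_mb * 8) / transfer_budget_ms

intent = RpcIntent("gpu-cluster-sg", "lake-tokyo",
                   payload_mb=512, target_completion_ms=200)
print(required_bandwidth_gbps(intent, rtt_ms=70))  # ~31.5 Gb/s
```

The point of the sketch is the inversion of responsibility: the same intent demands very different network resources depending on round-trip latency, which is exactly the infrastructure state the article argues must be exposed through APIs.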

How can the industry adapt?

In the era of distributed centers of Data Gravity, MTDCs will play a vital role. They will serve as colocation and data exchange points with high-capacity, low-latency interconnects to public clouds, mitigating data gravity barriers for businesses.

Moreover, in a distributed cloud computing environment, a balanced system is needed more than ever, with a tighter coupling between application context and network state. Network providers and the provider ecosystem have a key role to play in creating scalable and programmable adaptive networks with the appropriate API exposure to application providers.

Disclaimer: All views, opinions and data expressed herein are solely the author's and for general information only. The publisher makes no representations or warranties of any kind, express or implied, regarding the accuracy, adequacy, validity, availability, reliability or completeness of any information contained in this blog.