Flexibility and return on investment (ROI) benefits are driving most companies to move their IT infrastructure to cloud computing. In addition to this business-driven expansion, consumers are also fueling cloud computing growth through accelerated adoption of mobile applications like maps, social networking, search, and photo/video sharing, all of which are served from hyper-scale cloud data centers.
Looking ahead, businesses are spending vast sums to perform both data mining and data analytics on large datasets to gain valuable insights, make astute decisions, and offer premium services. These analyses are typically run on mammoth server farms in hyper-scale data centers using innovative AI techniques, and they require high-throughput, low-latency network performance.
As a result of these trends, Cisco predicts that by 2020, 485 large-scale cloud data centers will account for 47 percent of all servers deployed in data centers.
400 Gigabit Ethernet (400GbE) Emergence in Hyper-Scale Data Centers
Within hyper-scale data centers, network traffic has consistently shown exponential growth. For example, Google’s data centers have seen network performance requirements double every 12-15 months. Similarly, AWS views data center networking as a critical concern, as its costs are accelerating relative to other infrastructure spending. And Cisco predicts that traffic within large-scale data centers will quintuple by 2020.
As hyper-scale data centers transition to faster, more scalable network architectures, such as the 2-tier leaf-spine, the need for higher bandwidth with efficient connectivity becomes more critical. The leaf-spine architecture requires massive interconnects, as each leaf switch fans out to every spine switch, maximizing connectivity between servers. Hardware accelerators, artificial intelligence, and deep learning functions in data centers all consume high bandwidth, forcing high-end data centers to move quickly to next-generation interconnects operating at higher data rates.
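To make the fan-out concrete, the sketch below counts leaf-spine links and aggregate capacity for a hypothetical fabric (the switch counts are illustrative assumptions, not figures from this article):

```python
# Hypothetical 2-tier leaf-spine fabric sizing (illustrative numbers only).
# Every leaf switch connects to every spine switch, so the fabric needs
# leaves * spines inter-switch links.

def leaf_spine_links(leaves: int, spines: int) -> int:
    """Number of leaf-to-spine links in a full-mesh 2-tier fabric."""
    return leaves * spines

def fabric_capacity_tbps(leaves: int, spines: int, link_gbps: int) -> float:
    """Aggregate leaf-to-spine capacity in Tb/s."""
    return leaf_spine_links(leaves, spines) * link_gbps / 1000

# Example: 32 leaves, 8 spines -> 256 links.
# At 100GbE per link that is 25.6 Tb/s of fabric capacity;
# moving the same links to 400GbE quadruples it to 102.4 Tb/s.
links = leaf_spine_links(32, 8)
bw_100g = fabric_capacity_tbps(32, 8, 100)
bw_400g = fabric_capacity_tbps(32, 8, 400)
```

The multiplicative link count is why higher per-link rates matter so much here: every step up in link speed scales the whole fabric without adding switches or cabling.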
The majority of hyper-scale data centers have used 100 Gigabit Ethernet (100GbE) links and are in the process of transitioning to 400 Gigabit Ethernet (400GbE) links to achieve higher throughput. Per Crehan Research, 400G deployments started in 2018 and will become routine in data centers by 2020 as rapid 400GbE adoption by cloud vendors enables dramatically lower unit pricing in the initial phase of its lifecycle.
As 400GbE deployments grow in hyper-scale data centers, the price-sensitive enterprise data center market will start taking advantage of this latest generation of Ethernet technology and initiate its own transition from existing 10/40/100GbE networking to 400GbE.
Enterprise Applications for 400GbE
In enterprise networks, traffic from mobile devices has consistently migrated from mobile networks to Wi-Fi, placing added strain on wireless networking in enterprise campus networks and branch offices. Enterprise IT organizations have been struggling to increase network capacity to meet these mounting throughput requirements.
Simultaneously, developments in enterprise storage, such as all-flash arrays (AFAs) and Remote Direct Memory Access (RDMA) interfaces, are requiring dramatic improvements in network latencies and throughput. The benefits of these new storage technologies are completely dependent on the underlying network infrastructure.
New types of applications are also driving improvements in network performance. Throughput-sensitive rich media enterprise applications require higher bandwidth. Stored and live streaming video and digital marketing all benefit from 400GbE becoming the de facto networking standard.
400GbE Standardization and Product Development
The Institute of Electrical and Electronics Engineers (IEEE) officially approved the 802.3bs standard for 200GbE and 400GbE on December 6, 2017. The standard defines the technical requirements to support 200 Gb/s and 400 Gb/s Ethernet data rates over distances suited to intra- and inter-data center applications. Specifically, the IEEE 802.3bs project defines physical layer specifications for 400GbE operation over 100m, 500m, 2km, and 10km distances.
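The four reach points correspond to distinct physical medium dependent (PMD) variants in 802.3bs. A small lookup table, sketched below, maps each reach to its PMD name (names per the standard; the comments summarize the lane structure):

```python
# 400GbE PMD variants defined by IEEE 802.3bs, keyed by supported reach.
REACH_TO_PMD = {
    "100m": "400GBASE-SR16",  # multimode fiber, 16 parallel lanes
    "500m": "400GBASE-DR4",   # single-mode fiber, 4 parallel lanes
    "2km":  "400GBASE-FR8",   # single-mode fiber, 8 WDM lanes
    "10km": "400GBASE-LR8",   # single-mode fiber, 8 WDM lanes
}

def pmd_for_reach(reach: str) -> str:
    """Return the 802.3bs PMD name covering the requested reach."""
    return REACH_TO_PMD[reach]
```

For example, a 500m single-mode link inside a data center would use 400GBASE-DR4 optics, while a 10km inter-building link would use 400GBASE-LR8.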
Beyond a four-fold performance boost, the transition to 400GbE promises both power and area savings, as 400GbE optical modules are expected to consume only 2.5x the power of 100 Gigabit Ethernet links while maintaining the same small form factors, increasing interconnect density.
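The power-per-bit arithmetic behind this claim is worth spelling out: 2.5x the power for 4x the bandwidth means each bit costs 2.5/4 = 0.625 of the previous power, a 37.5% reduction. A minimal sketch (the 100GbE module wattage is a hypothetical placeholder; the ratio is what matters):

```python
# Power-per-bit comparison based on the 2.5x figure cited in the text.
P_100G = 4.5            # watts; hypothetical 100GbE module power (assumption)
P_400G = 2.5 * P_100G   # watts; per the expected 2.5x power of 100GbE

per_gbit_100 = P_100G / 100   # W per Gb/s at 100GbE
per_gbit_400 = P_400G / 400   # W per Gb/s at 400GbE

# Relative power per bit is independent of the absolute wattage chosen:
# (2.5 * P / 400) / (P / 100) = 0.625, i.e. 37.5% less power per bit.
savings = 1 - per_gbit_400 / per_gbit_100
```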
Consistent with 400GbE standardization, networking ASIC, router, test, and optical module vendors have developed products supporting the 400GbE standard. For example, during 2018, top-tier networking vendors formally launched products into the 400GbE market. In July 2018, Juniper Networks announced plans to roll out 400 Gigabit Ethernet (400GbE) capabilities across its QFX data center and MX WAN product lines, while in October 2018, Arista Networks announced 400GbE support across its line of data center switches. Similarly, Cisco launched 400GbE support across its line of switches targeting cloud and enterprise data centers in November 2018.
Interface Masters Technologies has for over 20 years been providing off-the-shelf innovative networking solutions with customization services to OEMs, Fortune 100 and startup companies. Our headquarters is located in San Jose, California in the heart of Silicon Valley where we are proud to design and manufacture all of our products. Based on MIPS, ARM, PowerPC and x86 processors, Interface Masters appliance models enable OEMs to significantly reduce time-to-market with reliable, pre-tested and pre-integrated appliance solutions that can meet the most challenging networking requirements.