100Gbps is Increasingly Popular and is Creating a Host of Management Challenges

Name virtually any technology trend — digital transformation, cloud-first operations, data center consolidation, mobility, streaming data, AI/ML, the application explosion — and they all have one thing in common: an insatiable need for higher bandwidth (and, often, lower latency). The result is a steady push of 10Gbps and 25Gbps network infrastructure toward the edge and growing adoption of 100Gbps in enterprise core, data center, and service provider networks.

Initial deployments focused on backbone interconnects (historically a dual-ring failover topology; more recently, mesh connectivity), driven primarily by north-south traffic. Data center adoption has followed, generally in a spine-leaf architecture built to handle growth in east-west traffic.

Why Is 100Gbps Adoption Growing?

Beyond the raw hunger for bandwidth, 100Gbps is having a moment for several reasons: commodity-driven price declines, increasing availability of 100Gbps-enabled components, and the ability to break a single 100Gbps port into multiple 10Gbps or 25Gbps links (e.g., a 4×25Gbps breakout). In light of these trends, analyst firm Dell’Oro expects 100Gbps adoption to hit its stride this year and remain strong over the next five years.

Nobody seriously disputes that enterprises and service providers will continue to adopt ever-faster networks. However, the very thing that makes 100Gbps desirable — speed — creates a host of challenges for managing and monitoring the infrastructure. The simple truth is that the faster the network, the faster things can go wrong. That makes monitoring for regulatory compliance, load balancing, incident response and forensics, capacity planning, and the like more important than ever.

At 10Gbps, a minimum-size Ethernet frame occupies the wire for only about 67 nanoseconds; at 100Gbps that window shrinks tenfold, with packets flying by every 6.7 nanoseconds. And therein lies the problem: at 100Gbps, traditional management and monitoring infrastructure simply can’t keep up.
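
The arithmetic is easy to verify. A minimum-size 64-byte frame plus its 8-byte preamble and 12-byte inter-frame gap puts 672 bits on the wire, and the sketch below (a back-of-the-envelope calculation, not a benchmark) converts that into per-frame times and packet rates:

```python
# Time on the wire per minimum-size Ethernet frame, by line rate.
FRAME_BYTES = 64      # minimum Ethernet frame
OVERHEAD_BYTES = 20   # 8-byte preamble + 12-byte inter-frame gap
BITS_PER_FRAME = (FRAME_BYTES + OVERHEAD_BYTES) * 8  # 672 bits

for gbps in (10, 25, 100):
    ns_per_frame = BITS_PER_FRAME / gbps  # Gbps equals bits per nanosecond
    mpps = 1000 / ns_per_frame            # million packets per second
    print(f"{gbps:>3} Gbps: {ns_per_frame:5.2f} ns/frame  ~{mpps:6.1f} Mpps")
# 10 Gbps: 67.20 ns/frame   ~14.9 Mpps
# 100 Gbps: 6.72 ns/frame  ~148.8 Mpps
```

At full line rate, then, a monitoring device has less than 7 nanoseconds to act on each worst-case packet.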

Network TAPs Must Mirror Data at 100Gbps Line Speeds

The line-rate requirement varies with where a device sits in the monitoring stack. Network TAPs must mirror traffic to packet brokers and tools at 100Gbps line speed. Packet brokers must handle 100Gbps traffic on multiple ports simultaneously, processing and forwarding every packet to the tool rail at line rate. Capture devices must sustain 100Gbps bursts through the capture-to-disk path. And any analysis layer must ingest information at 100Gbps to support correlation, analysis, and visualization.
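
To put the capture-to-disk requirement in perspective, 100Gbps works out to 12.5 gigabytes every second. The sketch below runs the numbers; the per-drive write speed is an assumed figure for illustration, not a product specification:

```python
# Rough capture-to-disk budget at 100Gbps line rate (illustrative only).
LINE_RATE_GBPS = 100
ingest_gb_per_s = LINE_RATE_GBPS / 8         # 12.5 GB/s at full line rate
per_hour_tb = ingest_gb_per_s * 3600 / 1000  # ~45 TB per sustained hour

NVME_WRITE_GB_S = 3.0  # ASSUMPTION: sustained sequential write per drive
drives = -(-ingest_gb_per_s // NVME_WRITE_GB_S)  # ceiling division

print(f"Ingest: {ingest_gb_per_s:.1f} GB/s ({per_hour_tb:.0f} TB/hour)")
print(f"Stripe across at least {int(drives)} drives at {NVME_WRITE_GB_S} GB/s each")
```

No single drive keeps up at that rate, which is why 100Gbps capture appliances stripe writes across many drives in parallel.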

Complicating matters are various “smart” features, each of which demands additional processing resources. Packet brokers, for example, may offer filtering, slicing, and deduplication. If a system is already straining at line rate, any added processing load degrades performance further.
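
Deduplication illustrates the cost. Conceptually, a broker hashes an invariant slice of each packet and suppresses repeats seen within a short window, and that work must fit inside the per-packet budget computed above. The toy model below (plain Python, with an assumed 50-microsecond window; real brokers do this in hardware) shows the idea:

```python
import hashlib

def dedupe(packets, window_ns=50_000):
    """Drop duplicate packets seen within a short time window.

    A toy model of a packet-broker dedup feature. Real implementations
    also mask mutable header fields (TTL, checksums) before hashing;
    the 50-microsecond window is an assumption, not a vendor default."""
    seen = {}  # digest -> last-seen timestamp (ns)
    out = []
    for ts_ns, pkt in packets:
        digest = hashlib.sha1(pkt[:64]).digest()  # hash a fixed header slice
        last = seen.get(digest)
        if last is not None and ts_ns - last < window_ns:
            continue  # duplicate inside the window: drop it
        seen[digest] = ts_ns
        out.append((ts_ns, pkt))
    return out

# Usage: the same frame tapped at two points arrives twice, 10µs apart.
frames = [(0, b"\x00" * 60), (10_000, b"\x00" * 60), (100_000, b"\x00" * 60)]
print(len(dedupe(frames)))  # -> 2: the 10µs-later copy is suppressed
```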

For any infrastructure not designed with 100Gbps in mind, the failure mode is always the same: dropped packets. That, in turn, creates network blind spots. When visibility is the goal, blind spots are — at the risk of oversimplification — bad. The fallout can include skewed metrics, slower time-to-resolution and incident response, longer malware dwell time, greater swings in application performance, compliance or SLA challenges, and more.

Lossless Mirroring Requires Visibility Stacks Designed for 100Gbps Line Speeds

Lossless monitoring requires that every part of the visibility stack be designed around 100Gbps line speeds. Packet brokers in particular, given their central role in visibility infrastructure, are the critical choke point. Where possible, a two-tier monitoring architecture is recommended: a high-density 10/25/100Gbps aggregation layer to aggregate TAPs and feed tools, and a high-performance 100Gbps core packet broker to process the packets. Upgrading older gear is possible, but beware: upgrades add cost and may still fall short of true 100Gbps line rate if smart features share a centralized processing engine at the core. Newer systems with distributed, dedicated per-port processing (versus shared central processing) are designed specifically for 100Gbps line rates and eliminate these bottlenecks.
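
The difference is easy to model. In a shared design, every active port competes for the same engine, so the per-packet time budget shrinks as ports are added; a per-port design keeps the full budget regardless of scale. A simple illustration (a toy model, not vendor data):

```python
# Per-packet processing budget: shared central engine vs. dedicated
# per-port engines, using the 100Gbps minimum-frame slot from above.
MIN_FRAME_NS = 6.72  # minimum-size frame slot at 100Gbps

def budget_ns(ports: int, per_port: bool) -> float:
    # A dedicated engine per port keeps the full 6.72ns slot;
    # a shared engine divides that slot across every active port.
    return MIN_FRAME_NS if per_port else MIN_FRAME_NS / ports

for ports in (2, 8, 32):
    print(f"{ports:>2} ports: shared {budget_ns(ports, False):4.2f} ns/pkt, "
          f"per-port {budget_ns(ports, True):4.2f} ns/pkt")
```

At 32 ports, a shared engine has roughly a fifth of a nanosecond per worst-case packet, which is why centralized designs tend to shed packets exactly when smart features are enabled.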

The overarching point is that the desire for 100Gbps performance cannot be allowed to outrun 100Gbps visibility, or the entire network suffers as a result. The visibility infrastructure needs to match the forwarding infrastructure. While 100Gbps line rates are achievable with the latest monitoring equipment and software, IT teams must not assume that existing network visibility systems can keep up with the new load.

About the Author

Nadeem Zahid
Vice President Product Management & Marketing

Nadeem has spent more than 23 years in the IT industry in various leadership roles with companies like Alcatel-Lucent, Cisco Systems, Brocade, Juniper Networks, Extreme Networks, LiveAction and tFinery. He is a prolific author with published books and articles on product management, networking, and the cloud.