The Rise of 100G Ethernet
Digital transformation is driving trends such as data center consolidation, increased mobility, and complexity of applications and IT infrastructure. This is resulting in denser infrastructures, higher connectivity bandwidth, and a need for more management instrumentation.
Network architectures increasingly require 100 Gbps (100G) line rates. 100G is on a dramatic growth curve in the enterprise and service provider space, driven by the exponential growth of applications and data in both the consumer and business worlds. These higher data rates require new tools that can keep pace and provide network operators with accurate and comprehensive network visibility.
Upgrade Cycles Of 100G Are Accelerating
Additionally, adoption and upgrade cycles of 100G are accelerating due to the falling cost of 100G technology, growing volumes, broad component availability, and derivative capabilities (the ability to break a 100G port out into 25G and 10G lanes). The chart below (source: Dell’Oro) shows how fast 100G is being adopted along with its derivative technologies. In the 25G space, only part of the volume is native 25G; a large share comes from 100G breakout. Meanwhile, 40G is being displaced by both 100G and 25G.
To find out more about monitoring a high-speed 100G network, watch our on-demand webinar: “100G Enabled High-Performance Visibility”.
History Of Upgrading Network Speeds
Let’s rewind a couple of decades. 1G was the technology widely deployed at enterprise campuses and branch offices. Then 10G replaced 1G and became the standard across campus, data center, and service provider networks; it is still widely deployed. The next generation of bandwidth to gain traction was 40G, which addressed some upgrade needs in data centers. But 40G wasn’t suitable for everyone, especially service providers.
When 100G came to production, service providers were among the first to adopt and deploy it. Soon after, the backbone in the enterprise followed when smaller form-factors for higher densities of 100G and much lower costs were available. The enterprise data center followed.
Below is a typical campus backbone interconnect. For a long time, large distributed enterprises ran 100G in the backbone in a dual-ring failover topology, which has since transitioned to newer mesh connectivity. This shift was driven by the increase in north-south traffic between data centers, campuses, and branches.
Data centers followed. Volume deployment of 100G has significant potential within the data center spine-leaf. 1G/10G has long been deployed widely for server and storage connectivity, and 10G and 25G are now mainstream for intensive workloads.
Most of the newer generation leaf switches have high-density 100G ports with breakout options or with 100G uplink options, which is making 100G popular for spine-leaf interconnections. 100G is also deployed at the edge/border.
The Need for High-Speed Visibility
Monitoring is even more important at 100G speeds since more things can go wrong in less time. Different enterprises and service providers have different use cases for why they need monitoring and visibility they can trust. Common reasons are ensuring day-to-day business continuity or strengthening the security posture, but the list extends to capacity planning, regulatory compliance, audit trails, legal evidence, incident response, and forensic analysis.
For 100G monitoring, it’s important that all components in the visibility service chain are capable of performing flawlessly at high-speed and can provide high-performance intelligence features. Otherwise, your visibility is limited and handicapped by the weakest link in the chain.
Starting at the bottom layer, you need 100G TAPs capable of mirroring the wire data to the packet broker and tools – this is where monitoring starts. Next, the packet broker layer needs the performance to handle 100G traffic simultaneously on multiple ports.
It also needs to process and forward each packet at line rate. If you are planning to capture-and-store any traffic for later analysis or record keeping, your capture devices need to handle at least the 100G burst capture-to-disk. And finally, the analysis layer needs to be able to ingest the information at these faster rates to correlate, analyze, and visualize the data.
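To put the capture-to-disk requirement in perspective, here is a rough back-of-the-envelope sizing sketch. The numbers are simple arithmetic, not a statement of any product’s actual storage specifications:

```python
# Back-of-the-envelope storage sizing for capture-to-disk (illustrative only).

def capture_storage_tb(line_rate_gbps: float, seconds: float,
                       utilization: float = 1.0) -> float:
    """Terabytes written when capturing a link at the given utilization."""
    bytes_per_sec = line_rate_gbps * 1e9 / 8 * utilization
    return bytes_per_sec * seconds / 1e12

# A fully utilized 100G link writes 12.5 GB every second.
print(capture_storage_tb(100, 3600))        # 45.0 TB per hour at line rate
print(capture_storage_tb(100, 3600, 0.3))   # 13.5 TB per hour at 30% load
```

Even at modest utilization, sustained 100G capture consumes terabytes per hour, which is why both burst capture-to-disk performance and elastic storage scaling matter.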
Challenges of Monitoring at 100G Speed
Network monitoring at 100G speed is not easy, which is why dropped or missed packets are one of the leading issues with network monitoring equipment on high-speed networks. At 10G, a minimum-size packet occupies the wire for only about 67 nanoseconds.
When you upgrade to 100G, the rate rises tenfold and that interval shrinks to roughly 6.7 nanoseconds. That is really fast! So whatever equipment you deploy to monitor your 100G network – TAPs, Network Packet Brokers (NPBs), or packet capture devices – you have to consider its performance and its ability to capture, save, and mirror every packet without any loss.
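The timing figures above can be reproduced with simple arithmetic. A minimum-size 64-byte Ethernet frame carries 20 extra bytes on the wire (preamble, start-of-frame delimiter, and inter-frame gap), so:

```python
# Time on the wire for a back-to-back stream of minimum-size Ethernet frames.
MIN_FRAME_BYTES = 64
WIRE_OVERHEAD_BYTES = 8 + 12  # preamble + SFD (8 B) and inter-frame gap (12 B)

def packet_interval_ns(line_rate_gbps: float) -> float:
    """Nanoseconds per minimum-size frame at the given line rate."""
    bits_on_wire = (MIN_FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8
    return bits_on_wire / line_rate_gbps  # bits / (Gb/s) gives nanoseconds

print(f"10G : {packet_interval_ns(10):.1f} ns per packet")   # 67.2 ns
print(f"100G: {packet_interval_ns(100):.2f} ns per packet")  # 6.72 ns
```

At worst case, a 100G monitoring device has under 7 nanoseconds to make a decision about each packet.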
Dropped Packet Data Causes Visibility Blind Spots
The primary purpose of adding a Network Packet Broker is to reliably capture packets. If it loses packets, then it’s simply not doing its job.
As you’re building your visibility architecture for high-performance 100G networks, you must deploy the monitoring tools and brokering systems that fill blind spots, not create blind spots of their own.
Imagine a 100G link coming into a network packet broker in a financial institution, healthcare, or retail network. On a typical workday, normal activity pumps a lot of traffic through the packet broker, which forwards it to the performance and security monitoring tools you rely on to make important decisions.
The packet broker has to process the traffic before distributing it, performing operations such as filtering, slicing, and deduplication. As you start turning on those “smart features,” typical packet brokers, because of their highly centralized architecture, start dropping packets.
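To see why these smart features cost processing headroom, here is a minimal sketch of one of them, deduplication, under illustrative assumptions. Real packet brokers implement this in hardware at line rate; the class and window size below are hypothetical, not cPacket’s implementation:

```python
# Minimal sketch of packet deduplication: drop a packet if an identical
# copy (e.g. a second mirrored copy of the same frame) was seen recently.
# Every packet incurs a hash plus a lookup -- the per-packet work that
# overwhelms a centralized CPU at 100G rates.
import hashlib
from collections import OrderedDict

class Deduplicator:
    """Suppress packets whose digest appeared within the last `window` packets."""
    def __init__(self, window: int = 1024):
        self.window = window
        self.seen = OrderedDict()  # digest -> None, oldest first

    def is_duplicate(self, packet: bytes) -> bool:
        digest = hashlib.sha1(packet).digest()
        if digest in self.seen:
            return True
        self.seen[digest] = None
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)  # evict the oldest digest
        return False

dedup = Deduplicator()
print(dedup.is_duplicate(b"pkt-A"))  # False: first sighting, forwarded
print(dedup.is_duplicate(b"pkt-A"))  # True: mirrored copy, suppressed
```

Multiply this per-packet work by filtering, slicing, and DPI, at one packet every ~7 nanoseconds, and it becomes clear why a single shared processor is the bottleneck.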
How To Prevent Network Blind-spots
Packets are dropped due to oversubscription because the single shared CPU cannot keep up. One of the ways these architectures attempt to solve this CPU-overloading problem is by requiring costly add-on hardware to perform advanced processing functions.
When these types of Network Packet Brokers are overloaded, the result is “blind spots”: you do not get the visibility you set out to gain, and at best you are trading one blind spot for another. The data forwarded to the tools is incomplete, which adds risk through slow or incorrect actions that can have adverse business implications.
What is uniquely different about cPacket Networks’ packet broker+ technology is that every port has a dedicated processing brain using silicon developed by cPacket. This distributed “smart-port” architecture is the only way to assure that all packets and advanced processing features are handled losslessly and flawlessly, including at 100G speeds.
View the infographic “cVu vs. Generic Packet Brokers”
Building a High-Speed 100G Visibility Architecture
Fortunately, cPacket provides a complete visibility stack to meet 100G performance at every layer.
At the first layer, the cPacket cTAP series offers a comprehensive line of fixed and modular TAPs for passive monitoring and mirroring. cTAP devices replicate the network traffic in a lossless manner to cVu series packet brokers for consolidation, processing, and distribution.
The cPacket cVu packet broker+ series supports a range of port densities and speeds up to 100G to meet various configuration requirements. The cVu architecture is based on a fully non-blocking, any-to-any monitoring fabric that simplifies the architecture with its all-inclusive solution.
Smart-ports with pre-ingress and post-ingress filtering allow line-rate, high-speed packet processing and deep packet inspection (DPI) with zero packet drop. The cVu series packet broker+ provides nanosecond-accurate timestamping and millisecond analytics for handling microburst network events. A network-level RESTful API is fully supported for integration with customers’ network ecosystems and with NetOps and SecOps tools.
cVu Is Ideally Suited For Data Centers
The cVu series is ideally suited for data center north-south and edge traffic monitoring. Its flexible port configurations allow for a mix of 100G, 40G, 25G, and 10G speeds in the same device. That serves well for consolidations, simplifications, and cost reductions as well as future-proofing.
Working hand in hand with the cVu devices is the cPacket cStor series capture-to-disk appliances. cStor series supports up to 100G burst capture-to-disk, fast querying, and elastic scaling of storage capacity. cStor is used in 100G network monitoring environments for multi-stream packet capture and analysis. With cStor, you gain the capability of recording and replaying network traffic for troubleshooting and forensics.
Finally, the cPacket cClear series – the central provisioning, management, and analytics platform – collects metadata from all cPacket devices, correlates and analyzes it, and provides high-resolution analytics in single-pane-of-glass fashion.
cClear has an integrated workflow and customizable, easy-to-use dashboards. A RESTful API is available for integration with external ITOps/AIOps service management tools. Its machine-learning capabilities, baselining, and alerting, coupled with the open integration, enable network automation and prescriptive/predictive AIOps.
3 Keys to Building High-Speed Network Visibility
Consider this scenario – you are dealing with high-performance 100G networks and deploying them today. Things are happening really fast, and you need an architecture that will provide 20/20 visibility into your high-speed network. Sound familiar? If so, cPacket has the solution that you need.
Key #1: Work With a Vendor Who Provides a Complete Solution
cPacket provides you a complete visibility stack. It’s that simple! We have all the necessary layers of the visibility stack, including tapping, brokering, capturing, converting, analyzing, and alerting. At the same time, we have full horizontal coverage across branch offices, data centers, and multi-cloud with single-pane-of-glass visibility. Therefore, you can be assured that the tightly integrated cPacket solutions seamlessly work together to deliver the clear picture you need to see.
Key #2: Simplify Your Network – Less is More
One of the most important considerations when selecting a high-speed monitoring solution is complexity. You will of course want to realize the benefits quickly with little effort. This is why cPacket prides itself on offering consistently simple solutions, so you can get more done. There are additional benefits to choosing a single vendor for your visibility infrastructure – it reduces the number of devices you work with, reduces the number of software upgrades you’ll need to make, and centralizes dashboards with a common user interface and consistent workflows. Even better, all of your administration, visualizations, and analytics are in a single pane of glass.
Key #3: Build an Economical Visibility Architecture
Last but not least is building a network visibility architecture and practice that is economical to sustain and easy to scale. At cPacket, we understand that. First, you need to look for a model that scales with demand, is extensible as you go digital and shift processing and apps to the cloud, and has no hidden costs. With other vendors, you must take into consideration the upfront cost, add-on costs, and the operational costs over the life of the solution. With cPacket, it’s simple because our flat and transparent pricing never has add-on licenses or hidden costs. We offer the best total cost of ownership of any commercially available solution.
To find out more about monitoring a high-speed 100G network, watch our on-demand webinar: “100G Enabled High-Performance Visibility”
About The Author
Vice President Product Management & Marketing
Nadeem has spent more than 23 years in the IT industry in various leadership roles with companies like Alcatel-Lucent, Cisco Systems, Brocade, Juniper Networks, Extreme Networks, LiveAction and tFinery. He is a prolific author with published books and articles on product management, networking and cloud.