100G transceivers in today’s data centers

The growing demand for services like cloud computing, IoT, AI and faster content delivery means data centers need more resilient architectures just to keep pace. As a result, today's companies are in constant pursuit of future-proof optical solutions to support their networks. Gartner has reported that network transceivers account for 10-15% of capital spending by enterprises. That leaves data center teams tasked with modernizing infrastructure, optimizing efficiency and improving network security, all while keeping IT costs down to ensure long-term business gains.

The variety of technologies and options for optical transmission has created confusion for many IT buyers. With so many standards available for interconnecting devices, it's no wonder we often end up making the wrong choice. Choosing the right transceivers always involves a trade-off between technology and cost. Data demands are increasing, and to cope with them, interconnects are moving from 10G and 40G to 100G. The problem is that the cost of optical interconnect hardware is rising faster than the cost of other hardware. And if we cut corners on interconnect bandwidth, we risk network downtime and increased vulnerability, all of which can be detrimental in the long run.

Advantages of a vendor agnostic approach

In situations like these, a vendor-agnostic approach has its advantages. Full flexibility on the monitoring path becomes essential, especially when we aggregate and/or filter traffic from several links. Consider what happens when we become bound to the vendor of a network element (switch/router/firewall). Take the leaf-spine architecture shown in Figure 1 below as an example. This design has become the preferred choice over the traditional core/aggregation/access layout because of its adaptability and scalability. In today's modern data centers, it's not uncommon to have hundreds of transceivers on the tool side. Suppliers typically sell transceivers at a very steep markup while giving the impression of a discount that is not necessarily real. Ultimately, when buying transceivers directly from the manufacturer, we are pressured into their devices rather than being free to choose the ones we prefer at a lower cost.

Figure 1: Example of a TAP deployment in a Spine-And-Leaf Architecture

Many IT buyers also face a lack of transparency from packet broker vendors. Often, when we purchase their device, we're offered a very limited set of transceivers that come at a high cost but fail to deliver in performance. To make matters worse, the transceivers we do purchase are OEM-coded to that vendor, and the device's software will refuse other optics. Take optical BiDi transceivers, for example. These transceivers use a single fiber to send and receive signals, but they also require additional power to support the transmission of higher bandwidth. That higher power demand requires additional features that not all vendors can provide.

Reduce costs, not performance

Nowadays, IT leaders are looking to minimize costs by opting for vendor-agnostic hardware. Being vendor agnostic gives users complete flexibility to choose the features that best suit their infrastructure needs. Users can change the technology on the infrastructure side while still maintaining a monitoring architecture that delivers flexibility and scalability. And when companies choose to become vendor agnostic, there is little concern about being dragged into the next proprietary lock-in.

Returning to the example of optical BiDi transceivers, choosing a vendor-agnostic packet broker like cPacket gives buyers access to the full feature set despite the complexity of the technology. Users receive real-time data, accurate status information about the transceiver, and any operational warnings and/or alarms that may occur. Figure 2 below shows a snapshot of cPacket's real-time alerts.

Figure 2: Snapshot of real-time alerts and warnings
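The transceiver telemetry behind alerts like these (temperature, optical power, alarm flags) comes from the module's digital diagnostics (DOM/DDM) interface. As a rough illustration of what that data looks like outside any particular packet broker, the sketch below polls it on a Linux host with `ethtool -m`. This is not cPacket's API: the interface name, the exact field labels and the alarm threshold are assumptions and will vary by NIC driver and optic type.

```python
#!/usr/bin/env python3
"""Minimal sketch: read transceiver DOM/DDM diagnostics on a Linux host
via `ethtool -m` (module EEPROM / diagnostics dump).

Assumptions: a Linux host with ethtool installed, an interface named
"eth0" (hypothetical), and an SFP-style diagnostics dump. Field labels
and thresholds differ across drivers and optic types.
"""

import re
import subprocess


def read_module_diagnostics(interface: str) -> dict:
    """Run `ethtool -m <interface>` and parse its 'key : value' lines."""
    out = subprocess.run(
        ["ethtool", "-m", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    fields = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def check_rx_power(fields: dict, min_dbm: float = -14.0) -> None:
    """Warn if receive power falls below an example threshold (-14 dBm here)."""
    # Field name as printed for SFP modules; QSFP dumps use different labels.
    rx = fields.get("Receiver signal average optical power")
    if rx:
        match = re.search(r"(-?\d+(?:\.\d+)?)\s*dBm", rx)
        if match and float(match.group(1)) < min_dbm:
            print(f"WARNING: RX power {match.group(1)} dBm below {min_dbm} dBm")


if __name__ == "__main__":
    diags = read_module_diagnostics("eth0")  # hypothetical interface name
    for key in ("Identifier", "Vendor name", "Module temperature", "Laser output power"):
        if key in diags:
            print(f"{key}: {diags[key]}")
    check_rx_power(diags)
```

In practice a monitoring platform polls these values continuously and raises warnings and alarms against the thresholds programmed into the module itself; the sketch only shows where the raw numbers come from.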

Updating existing hardware used to be a straightforward task that simply involved upgrading to the next available package. Today, maintaining and upgrading data center hardware is a balancing act: lowering costs while maintaining consistent reliability and meeting ever-increasing data needs. As automation becomes more pervasive and companies transition to 100G speeds, the demands on the data center, and its costs, will only intensify.

For most IT professionals, cutting data center costs while ensuring optimal network performance may seem like an insurmountable feat, but it doesn't have to be. Choosing a vendor that offers the right set of solutions, ones that improve security, agility and functionality, is a long-term investment and the one that yields the greatest ROI.
