Why You Need Highly Reliable Network Monitoring That Is Independent of Your Network
This blog addresses the question: “Isn’t the visibility I already have from my on-premises Network Management System and/or cloud environment monitoring dashboard good enough?” The answer is “no!”
There are many attributes to visibility, but unfortunately much of it comes from sources that are not reliable, consistent, or unbiased. When that is the case, you cannot ensure that your IT infrastructure and application services are highly reliable, available, scalable, responsive, and secure. Partial visibility into your IT infrastructure is a risk factor, not much better than having no visibility at all. You, your organization, your employees, and your customers can be exposed to cybercrime, poor experiences, and correspondingly poor end-user satisfaction.
Catastrophic consequences can result from visibility gaps, blind spots, and other weaknesses that are discovered and exploited by cybercriminals. Poor or partial visibility means that security and performance problems are more difficult to resolve, which adds stress to the IT team and adversely impacts employee satisfaction. What cannot be seen cannot be managed, measured, easily fixed, or prevented. When mean time to resolution is slow because of poor or partial visibility, automated processes, productivity, end-user experiences, customer satisfaction and loyalty, revenue, and profit margins may be adversely impacted.
You should not drive a car with poor or partial visibility because doing so is unsafe; likewise, you should not operate an IT infrastructure with poor visibility. A dedicated monitoring fabric (and/or visibility extension services for cloud environments) and the high-quality visibility it provides are essential to successfully operating your network and your overall IT infrastructure. Network visibility is the only way to assure optimal performance, great end-user and employee experiences, automation that meets its objectives, and protection from cyberattacks.
Visibility Varies by Vendor and Product
The preceding is widely understood; what is not widely understood is the difference between visibility and reliable, comprehensive, and unbiased visibility. When visibility is promoted as a feature, especially when it is not a core feature of a product, there is rarely any detailed specification of the reliability, consistency, completeness, breadth, and depth of that visibility.
Therefore, the visibility you actually get will vary by vendor and product, and it will likely have gaps, blind spots, biases, and false positives. It is important to understand that data acquisition and delivery constitute visibility, so the attributes of data apply: precision, resolution, accuracy, source, provenance, reliability, consistency, velocity, volume, variability, veracity, and so on. Better data acquisition and delivery therefore means better visibility.
Vendors wrongly and misleadingly promote visibility as though it were all-or-nothing. In reality, data and visibility sit on a continuum from bad to good depending on the attributes just mentioned. This misunderstanding originates with, and is perpetuated by, products and vendors that talk about providing visibility without specifics. Here are some examples:
- Most network components generate log data that is promoted as a method of visibility. While this is true, log data is far from a panacea, which is why a company called Splunk built a billion-dollar business to help IT index, analyze, and derive value from log data. The breadth and depth of the visibility varies by vendor and each specific product line. Log data also varies across different releases and generations of the same product. More about log data below.
- Enterprise networks necessarily include a Network Management System (NMS). Some organizations’ networks consist of equipment from multiple vendors and therefore have more than one NMS. Each NMS provides some functionality for network health and visibility.
- Cloud infrastructure includes networking as a core service in addition to compute and storage. Each cloud platform provides functionality for traffic visibility, although the type and depth of the visibility varies for each provider's combined IaaS and PaaS platform.
Reliable, Comprehensive, and Unbiased Network Visibility
Given the types of visibility just mentioned, a common feeling and statement is “We already have visibility.” This is often followed by, “Why do we need more visibility than what we already have?”
There is also a reluctance to increase costs and initiate a project to scope a monitoring fabric, upgrade an existing fabric, or replace the fabric. The response to these feelings, statements, and questions is this basic fact: products that perform a primary function other than visibility will not provide reliable, comprehensive, and unbiased visibility. So, it is important to frame the remainder of this discussion by defining reliable, comprehensive, and unbiased visibility.
Reliable visibility is obtained from solutions that acquire and provide the data that constitutes visibility without compromise. Visibility is available non-stop, regardless of system utilization and loading.
Comprehensive visibility is obtained by solutions that can tap into multiple strategic locations such that there are no blind spots and unseen paths.
Unbiased visibility is obtained by solutions that convey data and generate metrics without being impacted by the perspective (i.e., the source of the data). The data and metrics must also be unaffected by system utilization and loading.
Request a demo today from cPacket Networks to see how ideal Network Visibility will lower operational and business risks.
Ways to Gain Visibility
Diving into the three broadly categorized ways of obtaining network visibility, let's see how they compare against a fourth category: a dedicated monitoring fabric and/or visibility extension services for cloud environments.
Log Data Pros
- The functionality rarely has an explicit cost (e.g., it is a standard feature)
- Log data is human readable
Log Data Cons
- Log data by its very nature is not comprehensive; it is very myopic, with a perspective confined to the component
- Storing log data has a cost and the storage must be managed
- Log data is varied and inconsistent and difficult to use for deep insights
- Additional network instrumentation is required to deliver log data to security and performance management tools and AIOps solutions
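To make the inconsistency concrete, here is a minimal Python sketch. The two log lines and vendor formats are hypothetical (not taken from any specific product); the point is that every format needs its own parser before the data can feed deeper analysis:

```python
import re

# Two hypothetical log lines describing the same kind of event,
# formatted differently by two fictional device vendors.
LINE_A = "2024-05-01T12:00:03Z switch-01 IF_DOWN port=eth1/7 reason=link-flap"
LINE_B = "May  1 12:00:03 rtr9 %LINK-3-UPDOWN: Interface eth1/7, changed state to down"

# One vendor-specific pattern per format, mapped onto a common schema.
PATTERNS = [
    re.compile(r"^(?P<ts>\S+) (?P<device>\S+) IF_DOWN port=(?P<iface>\S+)"),
    re.compile(r"^(?P<ts>\w+ +\d+ [\d:]+) (?P<device>\S+) %LINK-3-UPDOWN: "
               r"Interface (?P<iface>[^,]+), changed state to down"),
]

def normalize(line):
    """Return a common {device, iface, event} record, or None if unparsed."""
    for pat in PATTERNS:
        m = pat.match(line)
        if m:
            return {"device": m.group("device"),
                    "iface": m.group("iface"),
                    "event": "interface_down"}
    return None  # unknown format: a visibility gap until another pattern is added

records = [normalize(line) for line in (LINE_A, LINE_B)]
```

Every new vendor, product line, or firmware release can require another pattern, and anything that matches no pattern silently falls out of view, which is exactly the gap described above.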
Read this blog to learn more about the pros and cons of log data, as well as network packet and flow data: “Packets, Flows, Events – Which is Best for Troubleshooting?”
Network Management Systems, and Why Their Visibility is Limited
A Network Management System (NMS) uses a combination of log data and telemetry data from the infrastructure components to monitor the network and provide visibility. There is a big difference between monitoring for high-level health and failure detection, and making data readily available to multiple analyzers and tools for detailed real-time analysis.
Historical analysis is also valuable because it enables replaying traffic and interactions to determine root cause and understand patterns and trends. These data-driven benefits are only possible when using an independent, dedicated monitoring plane and/or a visibility extension service for cloud environments.
Network Management System Pros
- The functionality may have little or no additional cost (e.g., it is a standard feature)
- Network health and related metrics are integrated into the NMS dashboards
Network Management System Cons
- An NMS will not reliably deliver packet and flow data to security and performance management tools and AIOps solutions
- The visibility is from the perspective of the infrastructure, which makes it inherently biased (i.e., the infrastructure cannot know its blind spots and faults and will prevent reliable problem reporting)
- Visibility is rigid because you cannot add strategic monitoring TAPs
- Monitoring is not a primary feature of the network infrastructure or the NMS, so performance is typically not robust
- Telemetry data from the infrastructure is varied, inconsistent, and will have gaps during periods of high traffic when the components are forced to operate at peak capacity
- While it is common for an NMS to make PCAP data available for download, the PCAP capture has gaps because it is event-triggered rather than continuous, and this mechanism does not provide properly governed packet data for real-time and historical analysis
- Historical data, especially with accurate timestamping and event tagging, is typically not stored reliably, if at all
Cloud Visibility Services
Each public cloud (e.g., AWS, Azure, Google Cloud, etc.) provides different types and levels of monitoring and visibility. As an example, some make network packet/traffic mirroring available to their customers. A good analogy to consider is plumbing in your home. The water provider brings water to the property line, but it is your responsibility (and the home builder’s) to pipe the water where you need it throughout your house.
If you look at cloud visibility services as the water provider, you can quickly understand that you are not getting the full delivery of data/visibility needed. This is why you need to extend the delivery of data throughout your cloud environments beyond what is available from the cloud provider. Unlike physical infrastructure where you have to add a monitoring plane, in the cloud you need visibility extensions such as the cCloud® Suite.
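To continue the plumbing analogy with a concrete (and heavily simplified) example: AWS VPC Traffic Mirroring can copy traffic from an instance's network interface to a collector, but that is where the provider's delivery stops. The sketch below uses the AWS CLI with placeholder IDs (all the `eni-`, `tmt-`, and `tmf-` values are made up); fan-out, processing, and delivery beyond the target are up to you:

```shell
# Hypothetical AWS CLI sketch of VPC Traffic Mirroring. All IDs are
# placeholders. The provider copies packets to the target; distributing
# them onward to multiple tools is your responsibility.

# 1. Define where mirrored packets should be sent (the "property line").
aws ec2 create-traffic-mirror-target \
    --network-interface-id eni-0collector0000000 \
    --description "packet collector interface"

# 2. Define which traffic to copy (rules are added to the filter separately).
aws ec2 create-traffic-mirror-filter \
    --description "select traffic to mirror"

# 3. Start mirroring from a source interface to the target through the filter.
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0source00000000 \
    --traffic-mirror-target-id tmt-0example0000000 \
    --traffic-mirror-filter-id tmf-0example0000000 \
    --session-number 1
```

Other providers expose different mechanisms with different depth, which is why a visibility extension layer is needed to deliver a consistent stream to every tool regardless of cloud.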
Cloud Visibility Services Pros
- Provides packet/traffic mirroring
- Provides monitoring and management services similar to what an NMS provides for on-premises networks
Cloud Visibility Services Cons
- Intra-cloud visibility is limited to what the cloud provider offers (typically from virtual NICs)
- Does not provide data/visibility delivery services typically found in a virtualized network packet broker appliance such as replication to multiple tools
- If mirroring services are available, they have an explicit cost
Dedicated Monitoring Fabric and Visibility Extensions for Cloud
Dedicated Monitoring Fabric and Cloud Visibility Extensions Pros
- Will provide reliable, comprehensive, and unbiased network visibility to IT personnel and the tools they use
- Provides real-time and historical network packet and flow data to IT personnel and the tools they use for security, performance management, and troubleshooting
- Historical data is more than just a raw data dump. Data can be timestamped, tagged, secured, and governed with policies (view this video to learn more about the benefits of historical network data: “Capturing and Analyzing Network Data with the cStor Appliances”)
- Data is replicated, processed, and rate adjusted so it can be tailored to each receiver. This ensures effectiveness and increases the useful life of tools receiving data from an intelligent monitoring fabric.
- Segmentation and governance of the monitored data
- Security, forensic analysis, problem troubleshooting, compliance record keeping, and business continuity all rely on reliable and complete packet data
- Monitored traffic from the point of acquisition to each endpoint is out-of-band so it does not add traffic to the core network
Dedicated Monitoring Fabric and Cloud Visibility Extensions Cons
- Has an explicit cost
- Is an additional overlay network that must be managed
Relying on visibility from the infrastructure is a flawed strategy that increases operational and business risk because its perspective is biased and unreliable. When there are failures in the network, the visibility is not reliable (i.e., the network may be too sick to communicate that it is sick). Visibility is often a secondary, best-effort feature, so when network utilization is high, the visibility provided will be compromised and unreliable.
The typical NMS gets a lot of information from the infrastructure, but the infrastructure can only convey what it sees. For the same reasons just discussed, the NMS visibility is biased and unreliable. It is entirely possible for the infrastructure to convey that it is fully operational while performance issues impede productivity and IoT interactions, resulting in poor end-user experiences, undesired consequences, and help desk trouble tickets. This is always a conundrum for IT: the NOC dashboard, powered by the NMS, shows that everything is OK, yet help desk activity reveals otherwise.
In addition to forcing IT to react to issues and problems, this same lack of unbiased visibility affects troubleshooting and MTTR. These all-too-common situations arise because the infrastructure is only capable of evaluating and conveying status and metrics from its own perspective. It cannot determine and convey the status and metrics experienced by end-user and IoT devices.
Data Acquisition, Processing, and Delivery
Humans, IT tools, and automation all rely on data, and the visibility it provides, to do their jobs effectively. Visibility is the combination of data acquisition and delivery. Data delivered to analytics and dashboards helps humans act with agility. Data delivered to tools and systems makes advanced automation possible. Visibility is therefore fundamental to maximizing security and optimizing the performance of an organization's IT infrastructure, applications, and workloads.
Packet processing, including data rate matching, prior to delivery ensures that receivers/collectors such as tools operate efficiently by eliminating problems that can result from receiving incorrectly formatted data and/or being overrun by too much data. Acquisition, processing, data rate matching, and replication form a delivery chain that provides the right data to the right tools, heightening the effectiveness and useful life of the tools.
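The replication and rate-matching steps of such a delivery chain can be sketched in a few lines of Python. This is a minimal illustrative model under assumed names (`TokenBucket`, `DeliveryChain`, the two tool names), not cPacket's implementation:

```python
class TokenBucket:
    """Deterministic token bucket used for per-tool rate matching."""
    def __init__(self, rate, burst):
        self.rate = rate        # packets allowed per time step
        self.burst = burst      # maximum burst size
        self.tokens = burst

    def refill(self):
        """Called once per time step to add `rate` tokens back."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def allow(self):
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # excess packet is dropped for this tool, not queued

class DeliveryChain:
    """Replicate each acquired packet to every attached tool, rate-matched per tool."""
    def __init__(self):
        self.tools = {}         # tool name -> (bucket, packets received)

    def attach(self, name, rate, burst):
        self.tools[name] = (TokenBucket(rate, burst), [])

    def deliver(self, packet):
        for bucket, received in self.tools.values():
            if bucket.allow():  # slower tools see a thinned stream, never an overrun
                received.append(packet)

chain = DeliveryChain()
chain.attach("fast_analyzer", rate=10, burst=10)   # can absorb the whole burst
chain.attach("slow_dashboard", rate=2, burst=2)    # only needs a sampled view
for i in range(10):                                # one burst of 10 packets
    chain.deliver(f"pkt-{i}")
counts = {name: len(recv) for name, (_, recv) in chain.tools.items()}
# fast_analyzer receives all 10 packets; slow_dashboard only its burst of 2
```

The design choice worth noting is that each receiver gets its own rate policy: replication fans the same stream out to every tool, while per-tool buckets tailor the delivery rate so no tool is overrun.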
Final Thought: Full Visibility is a Must Have
Visibility for IT personnel and the tools they use is a matter of data acquisition and delivery, which is why better data acquisition and delivery means better visibility. Therefore, make sure that data is reliably acquired from strategic points in your network and reliably delivered to your tools and storage. Choosing and tuning your monitoring fabric with the key attributes of data in mind will maximize visibility, its quality, and its usefulness.
Lastly, poor and partial visibility are an exposure and a risk. The only way to reduce that exposure and risk is to have reliable, comprehensive, and unbiased visibility that spans your entire IT infrastructure and its perimeter. This is easily accomplished using a dedicated monitoring fabric and/or visibility extensions for cloud and other virtualized environments.
Request a demo today from cPacket Networks to see how ideal Network Visibility will lower operational and business risks.
About the Author
Ron Stein is the director of product marketing at cPacket Networks. Ron possesses technical expertise in the areas of networking, experience assurance, cloud, Big Data, AI, ML, Advanced Analytics, and IoT. His market and industry experience spans technology, healthcare, financial services, utilities, telecommunications, public safety, smart cities, and IT Operations.
The topics covered in this blog are explored in greater detail in the related content listed below.