Reducing Customer Churn by Investing in Network Visibility

Great experiences drive customer loyalty, lifetime value, and competitive advantage; conversely, poor experiences drive churn. Assuring a great customer and end-user experience requires investing in reliable and trustworthy network visibility – especially as the role of the network becomes strategic for digital transformation.

This panel looks beyond simple ROI metrics to examine both the hard and soft costs of lacking full network visibility. These include poor experiences caused by inadequate network and application performance that can go unseen without a consistent and reliable visibility practice.

Read on to hear from Mike Fratto, Sr. Analyst with 451 Research (now part of S&P Global), and Brendan O’Flaherty, CEO of cPacket Networks, for a business case conversation around:

  • The relationship between digital experiences and customer loyalty, customer satisfaction, and customer lifetime value
  • The role of network visibility infrastructure as a strategic asset
  • How to measure success – ROI, customer satisfaction, and churn

Want to Watch the On-Demand Recording Instead?


Request a demo today to learn how cPacket can improve your customers’ experience and reduce customer churn.

Webinar Transcript

Introduction

Hello everyone. This is Colleen Lungsden, and on behalf of cPacket and 451 Research, I’d like to welcome you and say thanks for attending today’s webcast titled, Reducing Customer Churn by Investing in Network Visibility. Leading off our discussion today will be Mike Fratto, who’s Senior Research Analyst in Applied Infrastructure and DevOps at 451 Research. Following Mike will be Brendan O’Flaherty, who is CEO at cPacket.

And with that, I’ll turn it over to Steve.

Steve:

Hi everybody, I hope your day is going well wherever you are in the world, and thank you for joining us. Let’s go over the agenda really quickly. We’re going to cover quantifying customer churn, user experience dependencies, digital transformation as the larger construct, network visibility impact, and of course measuring ROI, the big dog in the room.


Question

What impacts are you and 451 seeing from poor user experience?

Mike:

Thanks Steve. So the thing that 451 is seeing from our Voice of the Connected User survey, which we conduct quarterly, is that users who experience poor application performance, whether it’s lag, delay, or disrupted service, are more likely to cancel or switch services. Now, this particular survey was about the consumer space, but we see the same kind of thing across business-to-business and business-to-employee. Basically, nobody wants to waste their time.

Mike:

Poor experience isn’t always the thing that will drive customers away. Sometimes it is, but not always; it can also be the nail in the coffin when there’s more than one factor contributing to a bad user experience. Oftentimes it’s simply the cause of a rage quit. In the workforce, it’s also one of the drivers for employees going rogue, right? Going and using cloud services or their own services, because the ones provided by IT are slow, don’t perform well, and basically just get in the way.

Mike:

And, frankly, employees, they want to get work done, especially when they’re remote. Like when they’re working from home, they’ve already been disrupted, taken out of the office, and they’re now working from home and they’ve got a bunch of challenges in doing that. And they just kind of, they want to get their job done. They want to be good performers and good employees, and poor network and application performance is just one more thing that they have to get over. And so it causes customer churn, it can cause a lot of disruption among your employees. It can certainly cause disruption among your technology and trading partners. Steve?


Question

How does user experience impact customer churn? Could you get into that a little bit?

Brendan:

Yeah. Like Mike said, customer churn is impacted by a number of different things, including usability, but also performance and pricing, all of those things. But they’re all interrelated to a certain extent. So like Mike said, the pricing may be better at a competitor, but if the performance is good and the customer’s happy, they’re less likely to [inaudible 00:03:49].

Brendan:

So all of these things are interrelated to a certain extent, and depending on the circumstances of the customer, some of them are more important than others. So for instance, on the performance side, we have a customer in the market data space that uses our equipment to make sure they’re monitoring market feeds. They’ve been able to actually go in and show that customers, when performance is sluggish, shift to other venues to get their market feeds. And this is the finance space, obviously, where time is money.

Brendan:

They wanted to resolve that. They wanted to take that issue away from their customers. So by using network visibility to really reduce their mean time to resolution of problems, they were able to directly show how providing their customers better performance improves the bottom line and reduces their churn. [inaudible 00:05:11].


Question

With so many things competing for attention, why should CISOs and CIOs measure churn, and how should they?

Brendan:

Well, I mean they should measure it obviously because it’s your business, it’s the business. The service you’re providing for that customer has to be reliable. So, it’s hard enough at the start just to acquire the customer. So what you don’t want to do is once you go through all the hard work and effort and money and resources of acquiring the customers, to lose those customers. You want to maintain those customers. You want to make them happy.

Brendan:

In addition, a happy customer base, a customer base that likes your service, that finds it reliable, will be actually your best competitive advantage, if they’ll promote you. So it’s very, very important and a key thing that CIOs, CISOs, et cetera, need to keep in mind. Because ultimately that’s what they’re trying to do. They’re trying to make sure their customers are happy, and acquire new customers.

Brendan:

Now, how do you measure it? In terms of churn rate, it’s the customers you have at the start of a period, minus the customers you have at the end of the period excluding the new customers acquired during that period. So it’s pretty straightforward. You should be able to measure it. The timeline you measure over will be different for different businesses, but it’s something you definitely should be looking at. Otherwise you’ll be losing customers, whether for performance or otherwise, without knowing it.
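Brendan’s arithmetic can be sketched in a few lines of Python; the function name and the numbers are illustrative, not from any particular tool:

```python
def churn_rate(start_count, end_count, new_count):
    """Churn rate over a period: customers lost, relative to the starting base.

    Churned customers = start - (end - new), because the end-of-period count
    includes new customers acquired during the period.
    """
    churned = start_count - (end_count - new_count)
    return churned / start_count

# Example: 100 customers at the start, 95 at the end, 10 acquired in between.
# 15 customers churned, so the churn rate for the period is 15%.
rate = churn_rate(100, 95, 10)
```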


Question

Looking at network visibility as a way to help reduce churn, what are the KPIs network management teams should be targeting to reduce customer churn?

Brendan:

Well, first of all, just to set the stage a little bit: for CIOs and so forth, the network historically has been treated as kind of the stepchild to applications and security and things like that. So CIOs focus, obviously the network is important, but they focus initially on security, applications, things like that. But when you think about it, the network is the base for everything, right?

Brendan:

So you need to make sure that you get that right. And user experience is impacted directly by network performance. So it’s something that you need to monitor, something you need to spend your time on. Some of the KPIs that we would suggest you look at are user experience metrics like latency and jitter – if you have video applications, jitter is something that bothers the heck out of people – plus web page load times, application response times, things like that.

Brendan:

Also measure your mean time to resolution. Is that number going up or down? If it’s going down, you’re solving problems much more quickly, and obviously you’re going to provide better service to your customers. And then, more specific to networking in general, there are things like the ratio of [inaudible 00:09:05], retransmissions, and resets. All of these key performance indicators can allow you to identify when things are starting to degrade the end user experience.


Question

What are your thoughts on what other metrics are useful to measure to reduce customer churn?

Mike:

Sure, and to echo Brendan’s point, a lot of the things that he has just mentioned there towards the end, it’s really data that you can only get off the network. You might be able to get it off of a server or server farm by instrumenting the NIC, but there’s a wealth of data that’s coming from network packet capture and traffic flows that is extremely useful, and it’s unique. You’re not going to get it from other places.

Mike:

So the big thing in measuring user experience, whether it’s application experience or network performance, is to pull in data from a variety of different sources to get the big picture, right? The network is not always going to be the single source of truth for what’s happening to applications. It’s going to be an important component of it, but you can learn other things such as end user monitoring, where you put an agent or instrument the application or the browser code to take measurements of things that Brendan had mentioned, like page load times and so forth. Because you’re looking for what is that user experience like, it doesn’t have to be across your entire user base. It could just be over a statistically significant portion of them, but it gives you that visibility.

Mike:

From a network perspective, oddly enough, traffic forwarding and routing is incredibly important. Especially as applications become more distributed in where they are hosted, and, by the same token, as end users become more distributed, right? Work from home is a good example, but employees work remotely all the time. And so understanding what the network paths look like, whether it’s over a private WAN or over the internet, and looking for things like suboptimal routes.

Mike:

So there may be traffic that’s hopping across continents. There may be a server closer within the same region that a user could connect to, rather than having to go back across a continent. And that’s only if it’s possible, right? There are a bunch of factors like privacy regulations and so forth that limit where end users can connect, but you’re looking for those suboptimal routes, stopping things like hairpinning across the country.

Mike:

And so, you might think that if you’ve got a user on the West Coast of the US and you’ve got your servers on the West Coast of the US, the traffic is going to remain on the West Coast of the US, and that’s not always the case. There are a number of times where I’ve talked to enterprises, and they’ve suffered bad application performance only to find out that their traffic was being routed from the West Coast to the East Coast and back, right?

Mike:

They were going over high-speed links, but they were still taking the long way. And then there’s traffic passing through a VPN. And again, this is fairly relevant for the work-from-home initiatives that folks are undertaking. The VPN gateway will pull the traffic to and from it, so not necessarily hairpinning, but traffic may be taking a longer route through the gateway. But also, congestion at the VPN gateway and in the networking infrastructure at that central site can significantly impact performance.

Mike:

And so instrumenting the network, and understanding what’s happening both at the data center where that VPN gateway is sitting and where the end user or customer is sitting, is fairly important. And the VPN situation is fairly significant because all of a sudden, starting in February, organizations went from a manageable, predictable number of users on their VPN gateway to all of their employees.

Mike:

So they had to go buy new licenses. They had to go buy new hardware, or maybe increase licenses for services if they were using a service. And some companies just got to a point where they were telling employees, “If you’re doing activities like web conferencing, get off the VPN while you’re doing that.” Because it’s not only the number of users, but the amount of traffic passing through the VPN gateway that was killing performance.

Mike:

And so these are things that you can learn by looking at the gateway performance information, by looking at network data: packet rates, flow rates, throughput in and out of the VPN gateway, and so forth, and looking at latency, delay, jitter, and other factors. Steve?

Steve:

Well, networks are always getting more and more complicated. With the recent influx and all the moving parts you just talked about, there’s so much going on. Where are you seeing organizations investing in IT now?

Mike:

So part of 451 Research looks at what digital leaders and digital laggards are doing. A digital leader is basically an organization that started its digital transformation early, right? At this point, in 2020, these organizations are fairly far along in their digital transformation. They’ve got a strategy and architecture, and they’re executing against their plan. And this is impacting everything, not only within IT, but across the business, right? Looking for ways to optimize.

Mike:

Digital laggards are those that don’t have that architecture, or they’re starting late. Late’s not a bad term, it’s a relative term, right? By 2020, if you don’t have a digital transformation architecture plan in place, you’re a digital laggard, and that’s fine. So analytics is the number one thing that organizations we would classify as digital leaders are engaged in, because they can’t fix what they can’t measure. And operational efficiencies, you just got to s-

Steve:

[crosstalk 00:15:27], yeah.

Mike:

… Yeah, thank you. Operational efficiency, and all the rest of the benefits that come from measurement and understanding where degradation and impacts are happening, become critical for creating successful environments. The majority of the laggards, those still early in their journey, have a strategy focusing more on the foundational aspects of customer outreach, but they’re not really measuring performance or customer satisfaction. So they have no idea what the customer sees. And that becomes a significant blind spot for them as they develop their IT systems and respond to changing market demands.

Steve:

I can see we have a lot of folks with us today. And so I want to remind everybody, we’re going to be taking questions here in a little bit. So feel free to start asking your questions in the Q&A box here. With all this happening, I’m wondering what happens during a black swan event? What are some of the potential impacts for IT that we want to look at?

Mike:

So the black swan events are interesting because you can’t predict them. And there’s a lot of talk among IT professionals when talking to CIOs, CxOs, CTOs, both at conferences, or in the meeting room, or in their planning sessions, that they want to be responsive and agile and able to respond to dynamic changes. That’s the goal, and it’s a good goal. And then along comes something like the coronavirus, and then the shutting of businesses and the work from home orders. And that impacts not only IT, but obviously it impacts the entire business. I mean, it’s rather remarkable that IT went from supporting probably the majority of their employees in offices, to now supporting them at home pretty much overnight. Within a couple of days.

Mike:

That’s impressive. That’s very impressive. But what it shows us is that it’s also disruptive. This chart is from a flash survey that we conducted back in March around COVID and its impact on IT. You’ll notice that over this eight-day period, the level of disruption doubled for most of the things we asked about. So there was increased disruption to the services organizations were using in-house, as well as the cloud-based services they were using. There was additional IT strain, not only from having to support all of these services, but from doing that in addition to getting everybody working from home efficiently. And that puts a strain on everyone: on customers, on employees, on partners, that entire chain.

Mike:

And then there was an indication of a reduction in customer demand and employee productivity. Now, network performance may not have been entirely responsible for that, or even a major influence, but it certainly didn’t help if your services were under load and application performance was suffering. That doesn’t help in a stressful time, not for IT, not for employees, and not for customers.

Mike:

So knowing the impacts of network performance means understanding the baseline, not only of how the network normally performs, but comparing that against those KPIs that Brendan mentioned earlier, and making sure that you’re meeting those goals. Then, at the time of a black swan event, having the information to understand everything from low-level network statistics all the way up to application performance is going to help you make better decisions about what kinds of mitigations, controls, and optimizations to put in place. Steve?

Steve:

The whole change in what’s happening has been an incredibly interesting inflection point in terms of how networks are responding, what people are doing. Brendan, I’m kind of curious from a CxO perspective, how is network visibility becoming strategic for digital transformation in this time?

Brendan:

Well, as Mike has gone through, I think it is much more strategic because, I mean, let’s face it, your customers are more impatient. They expect things to just work, in just about any industry.

Steve:

Even during extraordinary times.

Brendan:

Yeah, not to mention extraordinary times. Keep in mind, with black swan events, we’ve actually had three in the last 20 years. So it’s not a once-in-a-century type of thing; it’s more common, something you have to be prepared for, as prepared as you can be. So I think, on a number of levels, this is a strategic imperative to be competitive in the marketplace. Your customers are going to demand it more. To the point about churn, they will move on to other services if they’re not happy with the combination of factors we discussed before. And on top of that, having that visibility can allow you, the CxO, to run a more efficient organization and a more efficient delivery system or infrastructure.

Brendan:

So Mike mentioned a little while ago realizing that traffic went across the country and got hairpinned, all these types of things. We see that a lot: when the products are pulled in, when the visibility is pulled in, the underlying assumptions made by our customers are not necessarily the case. They look at it and they say, “Wow, we didn’t realize traffic was going here, there, and everywhere. If we’d known that, we would have made different decisions.” And it really does impact how the applications work and how they service their customers.

Steve:

Mike, with more shared applications and users’ digital experience, how does it complicate security and network monitoring?

Mike:

So it’s no secret that the application landscape is changing, right? And by that, I mean the operating environments that IT has to support, either directly or as driven and chosen by application developers and business units. And even who’s making and driving those choices is changing. More business folks are influencing those decisions, and application developers are influencing the decisions of where an application is going to be developed and run. If they have more familiarity with, say, Microsoft Azure, applications will more likely be hosted there. If not, there’s more of an opportunity for AWS or Oracle or Google, or name your cloud service.

Mike:

That’s where your applications are going to end up, at least for some time. If they’re only in development, then they’ll get moved to a production environment, but oftentimes applications are more and more being developed in the environment in which they’re going to end up running. Eventually this falls to IT to manage. And the networking folks have to manage the networking component. And this is an area under very, very active development, not only for connectivity between cloud services and data centers and users, but for the networking within those environments themselves. And that includes network monitoring, everything from packet collection to analysis, for performance and for security.

Mike:

So as applications become more distributed across a greater variety of environments, it’s going to come up to IT to manage, and to maintain consistency across all of those different environments. And this becomes very complicated. Operationally, it can be very complicated. Now, it’s a solvable problem, it’s addressable. You can reduce the chaos, and it’s going to take some discipline, and these are things that IT professionals do today, but it will take some effort and takes some discipline to sort of rein in all of this diversity and get control of it.

Mike:

From a security-specific standpoint, the more diverse and the greater the number of environments being supported, the more places there are to attack, and the more opportunities for the attacker to gain entry. And there are going to be different kinds of attack potential depending on what the service is and the security controls placed in those services. It’s going to take a more concerted effort to have consistent security monitoring across all of those environments, whether it’s intrusion detection, prevention, or security event monitoring. And it’s going to take, again, IT applying uniformity and consistency to rein in that chaos and regain control. Steve?

Steve:

Brendan, I would love to have you dig in on that a little bit.

Brendan:

Sure. I mean, as Mike said, you’re networking across multiple environments: clouds, data centers, branch offices, a whole set of both new and expanding environments. So, going back to the end customer, the end user, they don’t really care what environment you’re using. What they care about is, are they getting the results [inaudible 00:26:25]? Are things working for them? Are they working for them in a reliable manner?

Brendan:

So that’s what the end user cares about, and that’s what drives them toward your business. So the magnitude of what, as Mike said, you need to stay ahead of, what you need to monitor, what you need to make sure is all working together, has greatly expanded. On the other hand, the resources you have to monitor and manage all of this, from a CxO side of things, have not necessarily expanded. In fact, in some cases budgets have been cut, and we find that our customers have to do more with less, as well as handle all these new environments, and do it in a uniform manner.

Brendan:

So that makes the challenges much, much greater. And what it leads to, I think, ultimately is that you need to automate things more. The more basic tasks need to be automated so your network or security ops crew can focus their time and energy on some of the higher-level things. Now, it’s all great to say, “We’re going to automate everything.” But if you don’t have the underlying monitoring infrastructure to provide information to the team, to the tools, to the automation, at the right resolution, from the right places in the network, with the right level of accuracy, then your automation is going to be limited.

Brendan:

So it’s very important not to just rely on what I would call your circa-2000 visibility standards. You need to get to a level of visibility that feeds directly into the challenges we’re facing now, with distributed environments, increased customer demand, and more impatient customers.

Steve:

That’s interesting. Mike, Brendan has been talking about automation. I’d love to get your take on what you see as the role of AI and machine learning for IT.

Mike:

Sure. So a lot of the work in AI and ML, which is being applied in networks is actually being driven, or is being conducted by vendors. They’ve got the expertise in-house, not only the data scientists, but also the network engineers, the designers, developers, and so forth to build out the machine learning, and AI algorithms that actually do the work, and then to validate and verify the output. So that’s where we see a lot of the work being conducted.

Mike:

They also have these very large data lakes. So they’re pulling in information from across their devices. Oftentimes that also contains packet data or may contain packet data or data that’s pulled off of the network. And this huge data lake of diverse data really does become this breeding ground for machine learning and AI.

Mike:

A lot of it is what’s called supervised learning. So there’s a human involved in that feedback loop, trying to adjust and take into account accurate or inaccurate analysis and recommendations. But it all starts with collecting data. So there’s already, in the security space, been a whole lot of work done in say network anomaly detection systems that are able to detect all kinds of, sort of known and unknown attacks just by looking at what’s occurring on the network. Everything from the number of packets to an increase in errors, and we have this concept of seasonality and a whole bunch of stuff going on.

Mike:

So there’s been a lot of work. And the whole idea behind AI and ML is to pull the signal out of the noise, and to make sense of what’s happening in the network and to correlate across different kinds of environments, what different events are saying. And to do so in a way that doesn’t add to IT overhead, management overhead.

Mike:

Security event management systems were great when they first came out; they were very powerful tools. But what we found very quickly was that you had to hire experts who could sift through the data and look for the patterns. They were basically the AI and ML agents. And then you’d create the pattern matching and the correlation rules, test and verify them, and make sure that over time they didn’t drift from the data that was coming in. And so security event management never really took off in a big way, because it was an awful lot of work.

Mike:

So with machine learning and AI, the goal is to simplify: get those benefits of automated analysis, reduce the workload on IT, and not make them the AI and ML. And we’re seeing strides. Again, a lot of this is coming out of cloud managed services, but we’re starting to see more products coming that use data derived from on premises and not shared with a larger audience, for a variety of reasons.

Mike:

We haven’t gotten to robots gone wild yet; that’s in the future, and I think it’s still a ways out. But it all comes down to this: the first step is to collect that data so you have something to operate on. And it doesn’t matter if the data is three years old or current. I mean, it matters for alerting, but for doing the analysis and looking for patterns, just having that breadth of data is what organizations are going to need to be successful with ML and AI.

Steve:

Brendan, how does AIOps help improve customer assurance? Can you get into that a little bit?

Brendan:

Yeah, sure. I mean, like Mike said, the point behind AI and ML is also to reduce the stress on IT teams more generally. One thing, just going back to what you mentioned, that is important is the resolution, the accuracy, of the information that you’re pulling in, that you’re using for machine learning and AIOps. So it’s very important that you’re not using things like averages from [inaudible 00:34:38] that are five-minute averages [inaudible 00:34:42]. And there are a lot of tools that give you that type of data.

Brendan:

You really need more accurate information in order to take advantage of AI. So like Mike mentioned, you can go back and look at multiple years of information to come up with a lot of the data you need to utilize AI, but then you also have to put it into practice. Putting it into practice involves making sure that you’re baselining your current network, which is not an easy thing to do; comparing that against what should be happening; pointing out the anomalies and identifying them very quickly; then taking whatever corrective actions, either automated through AI or ML, or by a person; and then continuing to monitor and asking, “Did this have the intended result?”
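A minimal sketch of the baseline-and-compare step Brendan describes: flag samples that deviate from a rolling baseline by more than a few standard deviations. Real tools would use seasonality-aware baselines; the window and threshold here are illustrative assumptions:

```python
import statistics

def find_anomalies(samples, window=20, threshold=3.0):
    """Return indices of samples that deviate from the rolling baseline.

    The baseline is the mean of the previous `window` samples; a sample is
    anomalous if it is more than `threshold` standard deviations away.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev > 0 and abs(samples[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# A latency series that is steady around 10 ms, with one spike at the end:
latencies = [10.0, 10.5, 9.5] * 10 + [50.0]
spikes = find_anomalies(latencies)  # flags the index of the final spike
```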

Brendan:

And the intended result goes back to the churn question: does this make the customer happy and make sure they’re getting the experience they expect? So things like predictive analytics and baselining, all of those types of functions, become more and more important with respect to making sure the end customer has the experience they expect. [crosstalk 00:36:35].

Steve:

Mike, when you’re looking at … I’m sorry.

Brendan:

I mean, we have an example of that. We have customers in the streaming services space. Everybody knows there are multiple streaming services now, and they’re very competitive with each other. This particular customer wants to make sure they can look at data in their video applications in real time by monitoring their multicast traffic.

Brendan:

So they’re able to make adjustments in real time, through the use of that information, and they’re automating more of those adjustments. So once again, it impacts the end user experience, which ultimately impacts customer churn.

Steve:

Mike, when you’re looking at a well architected network performance and security monitoring solution, what are some of the other benefits? And could you give us a real world example?

Mike:

Yeah. So riffing off of what Brendan just said: in the application space, this whole movement to composable infrastructure, DevOps, and being agile all speaks to how applications are developed and deployed, but there’s also the operational side, which is being able to respond to changes more quickly and in an automated fashion. That’s what Brendan was wrapping up with in the example of the streaming media company: being able to look at the data coming off the network, do the analysis, measure something very specific or across multiple systems, compare that against your KPIs, and then take an action, in real time, without an operator necessarily going and making adjustments.

Mike:

That ultimately starts with the data collection and the visibility that you gain. What happens with a lot of organizations, and this example is a medium-size manufacturing firm that I’ve talked to a couple of times over the years, they remain unnamed because they don’t want to be known, is that their monitoring system, if you want to call it that, just sort of grew up over the years. It was emergent. They would buy an intrusion detection system, and the integrator or VAR would deliver a tap and say, “Here, go install this in front of your router or firewall.”

Mike:

They would go get a network performance monitoring software system, and the VAR or integrator would say, or the best practices from that particular vendor would say, “Pull a tap off a switch or a router,” or, “Do a mirror port, a SPAN port,” something of that nature. And as the company continued to add more and more visibility tools for security, performance monitoring, governance, et cetera, they started to have more and more ways of accessing the packet data, and it became unmanageable. It was kind of untenable. Whenever they wanted to do any physical moves or changes, there was always the fear that you’re going to pull the wrong wire, or you’re going to drop visibility, or something.

Mike:

As their application started moving, as they grew and they had more than one data center and applications started showing up in more places, they didn’t have the visibility in all of those places necessarily. As the speeds increased in the underlying network, from one to 10 to, well, they went to 40, then back to 25 and up. They were constantly having this accordion effect of having to fan out more detection, more targets, because they couldn’t handle the capacity, they couldn’t make the jump from 10 Gig to 40 Gig on their services.

Mike:

So now they needed 4x more targets to send data to. And it was this constant churn of equipment, basically, that was complicating IT, and they were spending a lot of time doing this, and it was very error prone. So they finally said enough. They were actually making changes across all of their IT, so this was a good time to get control of their monitoring system. They came up with an architecture; they treated it like a network. As an architecture, it has a set of goals. They’re going to have a roadmap for how their monitoring system is going to expand, and it’s going to serve multiple needs. And that’s important.

Mike:

When they were buying equipment to serve a single purpose and then buying basically redundant equipment to serve another purpose, this just gets very expensive. If you have a multi port tap, why can’t you use it for multiple purposes? So they embarked on this goal. They identified their goals, what they wanted to do, and their set of requirements today. And they created a roadmap for where they wanted to go into the future. They wanted to reduce operations and reduce management overhead, and that was a big component.

Mike:

So they wanted to settle on a single vendor, so they’d have a common management system and a common set of capabilities, so they could understand the capabilities and limitations of the equipment and didn’t have to continually relearn everything. They wanted some intelligence, so they could have some buffer space. Well, bad term. So they had some versatility in how they were deploying their products. So for example, I mentioned that when they made the jump from 10 Gig to 40 Gig, all of a sudden they needed four times more intrusion detection systems to look at the traffic. But they weren’t looking at all the traffic, and they didn’t have any way to filter it. So filtering became very important.

Mike:

So as the interface speeds went from 10 to 40, they were still able to use that 10 Gig link, their existing infrastructure, and just filter out the traffic they didn’t want to see. That gave them some breathing room to then increase the interface speeds on their own timeline without being driven to it because of some externality. And they were able to fit this into their IT automation roadmap, because the product did have exposed APIs and other integration tools.
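To make the filtering idea concrete, here is a minimal sketch of the arithmetic involved: dropping traffic classes a tool doesn’t need to see, so that the remaining aggregate still fits an existing 10 Gig tool’s capacity. The traffic classes and rates are invented for illustration.

```python
# Hypothetical sketch: filter a 40 Gbps traffic mix down to what a 10 Gbps
# tool can ingest. Traffic classes and rates are illustrative only.

def filter_to_capacity(flows, keep, capacity_gbps=10.0):
    """Keep only flows whose class is in `keep`; report whether the
    remaining aggregate fits the tool's capacity."""
    kept = [f for f in flows if f["class"] in keep]
    total = sum(f["gbps"] for f in kept)
    return kept, total, total <= capacity_gbps

# A 40 Gbps link carrying several traffic classes.
flows = [
    {"class": "backup", "gbps": 18.0},  # bulk backup traffic, not security-relevant
    {"class": "video",  "gbps": 14.0},  # streaming replication
    {"class": "web",    "gbps": 5.0},   # user-facing HTTP(S)
    {"class": "dns",    "gbps": 0.5},
    {"class": "ssh",    "gbps": 0.5},
]

# The IDS only needs to inspect user-facing and interactive traffic.
kept, total, fits = filter_to_capacity(flows, keep={"web", "dns", "ssh"})
print(total, fits)  # 6.0 True: the existing 10 Gig tool still copes
```

In practice this filtering happens in the packet broker hardware or software, not in Python, but the capacity math is the same.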

Mike:

So now, when they deploy an application or move an application in the course of daily operations, if an application moves from one rack to another rack, or one data center to another data center, or a new application comes up, the networking gets installed, it gets configured, and it gets tested to make sure that all of it works. It gets put into the model, and the monitoring follows suit, so that all of this is set up. They’re treating it as a critical part of the infrastructure. And it’s just automated.
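A rough sketch of the “monitoring follows the application” automation described above: a deployment step configures the packet broker through its API so visibility is never left behind. The broker client and its `add_forwarding_rule` call are hypothetical stand-ins, not any vendor’s actual API.

```python
# Hypothetical sketch: when an app is deployed or moved, a step in the
# automation pipeline configures monitoring via the broker's API.
# PacketBrokerClient and its methods are invented for illustration.

import json

class PacketBrokerClient:
    """Stand-in for a vendor's REST client; stores rules locally."""
    def __init__(self):
        self.rules = []

    def add_forwarding_rule(self, app, source_port, tool):
        rule = {"app": app, "source_port": source_port, "tool": tool}
        self.rules.append(rule)
        return rule

def deploy_application(app, rack_port, broker, tools=("npm", "ids")):
    """Deployment hook: networking is configured, then monitoring follows suit."""
    return [broker.add_forwarding_rule(app, rack_port, t) for t in tools]

broker = PacketBrokerClient()
deploy_application("billing-api", "leaf3:eth12", broker)
print(json.dumps(broker.rules, indent=2))
```

The design point is the exposed API: because the broker is scriptable, the monitoring change rides along with the application change instead of being a separate manual task.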

Mike:

And as they need to make changes to their infrastructure, they can do so very quickly and nearly non-disruptively. So it’s all about taking that chaos of those distributed environments that Brendan and I have been talking about, and the competing demands, and using that IT discipline to gain control of it, and it actually sets you up for success going into the future. And then you can use that infrastructure across multiple use cases, whether it’s performance monitoring, whether it’s just basic data collection, whether it’s security, or what have you; everybody has their slice of the pie. Steve?

Steve:

I know we’re almost out of time. I have a couple of quick questions I still want to ask, and I want to encourage the audience to use the Q&A feature. We have a lot of great questions coming in. Please ask questions. We’ll get to as many as possible. Brendan, I want to kind of throw you a tough one, even though we don’t have a lot of time, but from a CxO perspective, what are some of the hard and soft costs you want to look at when we’re talking about network visibility, when we’re doing our ROI calculation?

Brendan:

I mean, the hard costs are pretty straightforward. I mean, it’s the equipment, the software, the support, the personnel that you’re going to need to run the monitoring equipment. The soft costs, I guess, are the things that you have to weigh against those in terms of figuring out what your return on investment is: if you don’t have visibility in the places that you need visibility, and there’s a problem, you’re not going to be able to solve that problem. It’s going to be very difficult.

Brendan:

So to a certain extent, it becomes a must-have. You need to be able to identify a problem, and if you can’t see a problem, you can’t identify it. Ultimately, it’s a series, it’s a tiered system. You’ve got to determine, is it an application problem or a network problem? And then you need to say, “Okay, if it’s clear it’s a network problem, then where in the network?” And so forth.

Brendan:

And you need that visibility. So it’s really about what you’re giving up. If your service doesn’t work, or works sluggishly, then you’re going to lose revenue. You lose your reputation. As you lose your reputation, your ability to attract new customers is going to be impacted. I mean, I know in my day job, or I guess my night job, I own a couple of bars, one called O’Flaherty’s in San Jose. So anybody’s welcome to come visit and have a drink, when we can all go back and have a drink in the bar.

Brendan:

But one of the things that I use there is Yelp, just in terms of getting visibility into what a “third party” thinks of what’s going on in the establishment. That information is valuable because it allows me to ask questions, to get direct feedback, and so forth. So understanding that you’re providing good service, in any business, is very, very important to your return on investment. And so, like I said, I think it’s fairly straightforward as to the hard costs; the soft costs are what you give up if you don’t make that investment.

Steve:

Well, from your perspective as a business owner and a CxO, what are you seeing in the world? I mean, are industry leaders ready for … What’s been the reaction of businesses regarding network visibility? I mean, there are so many things they’re doing, what’s been their take on it?

Brendan:

Well, I mean, with our customers, their take is clearly that it’s becoming more of a necessity, and it’s more of a necessity because their customers are demanding it. So you’re dealing with an impatient customer group growing more impatient, and you have to be able to solve problems quickly, and you can’t solve problems quickly if you can’t even get the information you need from your network.

Brendan:

So what I’ve seen is it’s become much more strategic, it’s become almost table stakes for most enterprises that they need to be able to have the visibility necessary to run their networks, period.

Steve:

I think on that note, we’re going to go to Q&A. I just want to mention briefly that we have a bunch of great resources in the resource section; all the links are there if you check it out. And I think with that, I’m going to turn it over to Colleen to do the Q&A. Thank you guys.

Colleen:

Thanks Mike and Brendan, it’s time for our Q&A session. As a reminder, simply type your question in the box on your screen, and we will get to as many as we can. Our first question today: it’s often hard to get budget authority, even for monitoring. To help justify this as a needed budget item, what data points about my network will be the most convincing for my bosses?

Mike:

I can start. So I think there’s going to be a couple of things to look for. And one, it’s the kind of data that IT already is looking for. Things like being able to show current capacity and being able to forecast what the capacity growth will look like, so that you can increase your speeds and feeds before congestion occurs.

Mike:

Looking at what happens, being able to do a longer time series over a week, a month or a quarter to look for seasonal types of changes, it depends on what kind of company it is, if there are seasonal changes. But to look for those seasonal changes and see if there’s something that can be done to head off any kinds of capacity shortages, or impacts on performance and so forth.
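The capacity-forecasting idea above can be sketched as a simple trend fit: estimate the growth rate of peak utilization and project when a congestion threshold will be hit. Real tools account for seasonality as Mike notes; this minimal sketch uses a plain linear fit, and the utilization series is made up for illustration.

```python
# Sketch: fit a linear trend to weekly peak utilization and estimate how
# many weeks until the link hits a congestion threshold. Data is synthetic.

def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def weeks_until(ys, threshold):
    """Weeks until the trend crosses `threshold`, or None if flat/declining."""
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None  # no growth: threshold never reached on this trend
    return (threshold - intercept) / slope

# Weekly peak utilization (% of a 10 Gbps link) over eight weeks.
peaks = [42, 45, 44, 48, 51, 53, 55, 58]
print(round(weeks_until(peaks, 80), 1))  # → 16.8 weeks to the 80% threshold
```

That projected crossing date is exactly the kind of number that lets IT increase speeds and feeds before congestion occurs, rather than after.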

Mike:

On a more sort of daily operational basis, showing how using network data can get you to a proof of innocence faster. So basically, the first step when you run into a problem is to do a root cause analysis, or at least understand what’s causing the issue. Oftentimes that also means excluding sources of problems, so you don’t have to go look at them.

Mike:

But the typical finger-pointing exercise is the application owners come along and say, “Hey, my application’s slow. It’s the network.” And the network folks look at the network and go, “No, we’re good, it’s you.” Well, that happens all the time. And so by being able to point out and to show, yeah, it is the network, or no, it’s not the network, you can end a lot of the sort of fruitless work. “Mean time to innocence” is the phrase.

Mike:

You can end a lot of fruitless work very quickly, and then go focus on what is actually causing the problem. And if it is a network issue, then the more data, the more granular the data that you have, and the ability to go back in time, the quicker IT is going to be able to resolve the issues. So it’s not so much a data point that’s going to do it to get that budget, it’s going to be showing those operational benefits. And then one of the things that is really important is to show how, for example, data collection network monitoring can be used in multiple scenarios. So it’s not just a one off kind of process and solution, but it can be used in networking, it can be used by help desk, it can be used by security, it can be used by multiple entities.

Brendan:

Yeah, I agree with Mike on that. We find that the information that we’re providing is used by multiple entities within an organization; they may look at it for different reasons, or they may look at it under a different lens, but the data that’s provided can serve multiple purposes. And as Mike said, the mean time to innocence argument is actually very powerful, because not only can the networking team say it’s not the network, they actually have the data and graphed information that they can show. “Hey, I’m not just saying that; here’s the data, here’s the information.” And that’s very, very powerful to basically make the whole operation much more efficient.

Colleen:

Okay. For the next question, I am wondering if you can give an overview of important network analytics tools. How do these tools differentiate themselves?

Mike:

I guess we’re on this one too.

Brendan:

Yeah.

Mike:

Okay, thanks Brendan.

Brendan:

Sure.

Mike:

So I’m not going to name vendors. But there are a couple. In the security space, there are network anomaly detection tools. These are either packet based or they’re flow based, so pulling NetFlow, IPFIX, sFlow, et cetera, off routers and switches that support that kind of export of data. Anomaly detection, using ML or just statistical analysis, depending on your definition, will show anomalies not only at a point in time, but over time and trending. And most of them are smart enough to do things like show a moving average, and adapt to changing capacity. So they’ll learn what a season looks like, in their view, and then be able to say this spike in data usage for a particular application, protocol, destination, what have you, is normal, because that’s when the backup service kicks off.
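A minimal sketch of the statistical flavor of anomaly detection described above: compare each sample against a trailing moving-average baseline and flag large deviations. Real products are far more sophisticated (seasonal learning, adaptive baselines); the counter values here are synthetic.

```python
# Sketch: flag samples that deviate from a trailing moving-average baseline
# by more than k standard deviations. Counter values are synthetic.

from statistics import mean, stdev

def anomalies(samples, window=5, k=3.0):
    """Indices whose value is more than k sigma from the trailing-window baseline."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# TCP resets per minute: steady baseline, then a sudden spike.
resets = [10, 12, 11, 9, 13, 10, 12, 11, 95, 10]
print(anomalies(resets))  # the spike at index 8 stands out
```

The same trailing-baseline trick is what lets a tool say a nightly backup spike is “normal” once it has seen enough history.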

Mike:

And they’re useful for finding other kinds of anomalies. So if all of a sudden you see a huge spike in, say, TCP resets, well, that’s probably unusual. Or you see a bunch of ICMP unreachables. That, combined with other sources of data, can tell you something about what’s happening on the network. Performance-wise, being able to look at all kinds of information at a really deep level. So particularly with data that’s not encrypted, but even if it is, there’s some metadata that can be pulled out to be able to detect what kinds of applications are being used, and sub-applications.

Mike:

So Microsoft Teams, Facebook, et cetera: you have texting and IM and posting and video and voice, and whiteboard sharing and what have you. So those kinds of applications can be detected. Those are interesting. And then there are more sort of the, I hate to call them mundane, more common use cases: just looking at things like capacity, traffic types, top talkers, top destinations, getting an understanding of usage. So, I mean, those are a couple of them. Yeah. Brendan, if you have anything to add to that.

Brendan:

Yeah, I mean, I think you mentioned most of them. I mean, what we find is a matter of, I’ve mentioned accuracy and resolution a number of times. We find that some of the tools out there, many of the historical tools out there, give you averages. An average may be a five-minute average, maybe a one-minute average, maybe a 30-second average. But those averages mask the spikes and the microbursts that cause the problems, that cause dropped packets and traffic loss and so forth. So in our mind it’s important to make sure that the tools are getting the information at the right resolution in order for them to be effective.
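The point about averages masking microbursts can be shown with a few lines of arithmetic: a link that bursts to line rate for 50 milliseconds still looks nearly idle in its one-second average. The numbers are illustrative, not measured.

```python
# Sketch: per-millisecond throughput (Gbps) on a 10 Gbps link over one
# second, mostly quiet with a 50 ms burst at line rate. Synthetic data.

samples = [1.0] * 950 + [10.0] * 50

one_second_avg = sum(samples) / len(samples)
peak = max(samples)

print(round(one_second_avg, 2))  # 1.45 Gbps: the average looks healthy
print(peak)                      # 10.0 Gbps: the burst that drops packets
```

Any tool reporting only the one-second (let alone one-minute) average would show a link at 14.5% utilization while the buffer overruns during the burst, which is exactly why resolution matters.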

Colleen:

We have time for one final question. What emerging technologies do you see becoming relevant for network visibility in the near future?

Mike:

Sure. So I would say that there are a couple. We’ve already talked about AI and ML, both to help IT understand what’s happening on the network, and to automatically pull out information that otherwise wouldn’t be seen, needle-from-a-haystack type of stuff. And on the AI side, developing recommendations for actions, for corrective actions, either optimizations or to resolve a problem. So that’s a big one coming up.

Mike:

Software-based packet brokers. So pure software packet brokers that would run in environments like cloud services or containerized environments, so that you can collect data across a larger variety of operating environments. We know that many of the cloud services now have very basic sort of packet capture and forwarding capabilities. And so the software packet broker, or the virtualized packet broker software, can actually add capabilities to that traffic stream, like being able to fan it out to multiple destinations, provide some filtering, and do some other things in software itself. So again, you have that consistency, but it’s having that visibility in those different environments. And then, yeah, those are probably the two biggest that I see.
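A toy sketch of the two packet-broker capabilities named above, filtering and fan-out to multiple tool destinations. The packet and tool representations are invented for illustration; a real software broker does this at line rate on raw packets.

```python
# Sketch: apply a filter predicate, then replicate each surviving packet
# to every tool destination. Packets and tools are simplified stand-ins.

def broker(packets, predicate, destinations):
    """Filter packets, then fan each survivor out to every destination."""
    delivered = {d: [] for d in destinations}
    for pkt in packets:
        if predicate(pkt):
            for d in destinations:
                delivered[d].append(pkt)
    return delivered

packets = [
    {"proto": "tcp", "dport": 443},
    {"proto": "udp", "dport": 53},
    {"proto": "tcp", "dport": 22},
]

# Forward only TCP to both the performance monitor and the IDS.
out = broker(packets, lambda p: p["proto"] == "tcp", ["npm", "ids"])
print(len(out["npm"]), len(out["ids"]))  # 2 2
```

The fan-out is what lets one tap feed multiple teams, performance, security, and help desk, without redundant capture equipment.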

Brendan:

Yeah. And then, just to follow on that, we see these cloud services, or virtual packet brokers, virtual packet capture, for all the different cloud services. But we also see that most of our customers are going to have a hybrid environment, where they’re going to run some of their applications in the cloud, some in the data center, and so forth. And so what they’re looking for is solutions that will give them uniformity across these environments, so that the information is in one place, one dashboard, regardless of the environment in which it’s processed. And it’s able to be used by the same team that’s familiar with whatever they’re using in the data center, same as in the cloud. So we see that ability to make things simpler and more uniform is going to be very important going forward as well.

Colleen:

Thank you so much to all of our speakers today; that concludes today’s webcast. As a reminder, the on-demand replay will be available on 24scene. On behalf of cPacket and 451 Research, thank you so much for attending, and have a great day.