
DDoS Is Not Something You Can Prevent

Published on 10/26/2016 | Technology


Tom Smith

Marketing strategist, research analyst, story creator, and writer who conducts one-on-one interviews to obtain insights for content and to identify and solve business problems.


Overview

We recently spoke to Avi Freedman, CEO of Kentik, the maker of a cloud-based network visibility and analytics solution that provides deep insights into network traffic behavior. The Kentik Detect platform allows service providers and enterprises to make quicker decisions about network performance, based on the analysis of tens of billions of data records each day. In the following Q&A, Avi explains how telecommunications companies are adopting network analytics to create new service offerings for their customers.

DZone: To start, how can service providers benefit from the use of network analytics?

AF: Service providers must continually innovate to stay relevant in such a hyper-competitive market. Many telecommunications firms are facing cannibalization by newer Over-the-Top (OTT) service providers and other cloud-based services proliferating across the internet.


Security services are a bright spot, though, with a 25%-35% annual growth rate according to Gartner, so investing in next-generation technology for these services is a smart move.  Network analytics and visibility tools can provide powerful insights for security purposes and serve as the detection layer for DDoS protection services.  In addition, network traffic analytics can drive other revenue opportunities, such as identifying networks that receive traffic for free today but could be brought on as paying customers, as well as optimization, usually around planning, consolidation, and ensuring the profitability of customers and services.


DZone: How can service providers turn such insights into services that can be monetized?

AF: Many service providers started offering DDoS protection services using first-generation detection approaches that are based on appliances.  These legacy DDoS detection platforms are constrained by low-scale compute and storage power, which means they have accuracy issues, leading to a significant number of false negatives (attacks that aren’t caught).  They also don’t retain any traffic details, so there are no deep analytics to serve as the basis for the kind of consultative relationship that service providers would like to build with their customers.

That’s why more service providers are adopting big data network analytics platforms that can perform more accurate anomaly detection for DDoS attacks, as well as provide in-depth analytics capabilities.  Particularly if the detection and analytics platform is offered as SaaS, service providers can maximize their monetization of security services by rapidly onboarding potential customers into trials and then delivering consultative insights.  This puts the service provider in the role of a trusted advisor, and in many cases that’s exactly what enterprise organizations need, so it’s very appealing.  This then turns into a package of services that can include DDoS detection and mitigation, plus ongoing consultation and analyses of network traffic as value-added professional services.

DZone: Does automation figure into how big data handles detection of traffic anomalies?

AF: Great question.  There are two factors behind why big data is more accurate in detecting DDoS attacks.  The first factor is how comprehensively the data is examined.  In traditional, appliance-based systems, you have to make a number of analytical compromises due to limited resources.  For example, in order to perform any kind of baselining, it’s common for appliances to have to segment traffic flow data by which devices exported the flow records.  So let’s say a host IP is being hit by a DDoS attack, but the traffic is coming in via multiple routers.  Then, instead of seeing one large bump of network-wide traffic going to that host, the appliance detection algorithm sees a small bump of traffic on each of several routers—none of which may trigger an alert.  A big data approach doesn’t have those computing constraints, so it can always look at network-wide traffic, and it will naturally notice attacks that would otherwise get missed.
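To make that concrete, here’s a minimal sketch of per-device versus network-wide thresholding; the flow records, router names, and 1 Gbps threshold are invented for illustration, and this is not Kentik’s actual algorithm:

```python
from collections import defaultdict

# Hypothetical flow records: (exporting_router, dest_ip, bits_per_second).
flows = [
    ("router-1", "198.51.100.7", 400e6),
    ("router-2", "198.51.100.7", 450e6),
    ("router-3", "198.51.100.7", 500e6),
]

ALERT_BPS = 1e9  # alert when a destination receives more than 1 Gbps

# Appliance-style: traffic is segmented by exporting device, so each
# router's partial view is thresholded independently.
per_router = defaultdict(float)
for router, dest, bps in flows:
    per_router[(router, dest)] += bps
appliance_alerts = [k for k, v in per_router.items() if v > ALERT_BPS]

# Big-data-style: aggregate network-wide first, then threshold.
network_wide = defaultdict(float)
for _, dest, bps in flows:
    network_wide[dest] += bps
bigdata_alerts = [d for d, v in network_wide.items() if v > ALERT_BPS]

print(appliance_alerts)  # [] -- each router sees only 0.4-0.5 Gbps
print(bigdata_alerts)    # ['198.51.100.7'] -- 1.35 Gbps in aggregate
```

The same 1.35 Gbps attack is invisible when split three ways but obvious in aggregate.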


The second factor has to do with automation.  With a big data approach, it’s possible to take an adaptive approach to baselining.  This means that instead of having a static set of IP addresses that you’re baselining (or worse, a large set where accuracy is heavily diluted by averaging across the whole set), the system continuously adjusts which IPs are “interesting” based on how much total traffic they’re receiving within a given segment of time.  Interesting IP addresses are included in baselining for that period, and thresholds are evaluated against that baseline.  This kind of automation makes for superior detection.
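A rough sketch of that adaptive loop might look like the following; the top-N cutoff, 3x multiplier, and moving-average weighting are placeholder assumptions, not Kentik’s actual parameters:

```python
from collections import Counter

def check_window(window_flows, baselines, top_n=100, multiplier=3.0, alpha=0.2):
    """One detection pass over a time window of (dest_ip, nbytes) flow records.

    Only the top-N destinations by volume (the "interesting" IPs) are
    baselined; each is compared against its own baseline before updating it.
    """
    totals = Counter()
    for dest_ip, nbytes in window_flows:
        totals[dest_ip] += nbytes

    alerts = []
    for ip, total in totals.most_common(top_n):  # the "interesting" set
        baseline = baselines.get(ip)
        if baseline is not None and total > multiplier * baseline:
            alerts.append(ip)
        # An exponentially weighted moving average keeps the baseline adaptive.
        baselines[ip] = total if baseline is None else (1 - alpha) * baseline + alpha * total
    return alerts

# Usage: feed successive windows, carrying `baselines` between calls.
baselines = {}
print(check_window([("10.0.0.1", 900), ("10.0.0.2", 800)], baselines))   # []
print(check_window([("10.0.0.1", 950), ("10.0.0.2", 9000)], baselines))  # ['10.0.0.2']
```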


DZone:  What are “best practices” for reducing the incidence of, and the harm caused by, DDoS attacks?

AF:  Well, first I think it’s important to say that DDoS isn’t something you can really prevent, since it’s an attack from outside your network that can be launched for a variety of motivations.  So in terms of reducing the harm, the first step is to be educated.


Many IT leaders are still coming to grips with the fact that DDoS is an industrialized threat, and that it is trivial to purchase DDoS attacks from a broad variety of dark providers.  A huge number of businesses report being attacked multiple times.  Yet many IT teams are still attempting to deal with DDoS using inappropriate tools such as firewalls or Intrusion Prevention Systems (IPS), which rely on stateful tracking of connections and are therefore susceptible to resource exhaustion when faced with volumetric attacks.
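Some back-of-envelope arithmetic shows why stateful devices buckle under volumetric attacks; every figure below is an illustrative assumption, not a vendor specification:

```python
# Why a stateful firewall/IPS collapses under a spoofed SYN flood.
state_table_entries = 2_000_000  # concurrent connections the device can track
syn_flood_pps = 500_000          # spoofed SYNs/sec, each allocating an entry
half_open_timeout_s = 30         # seconds before an unanswered entry expires

entries_demanded = syn_flood_pps * half_open_timeout_s    # 15,000,000
seconds_until_full = state_table_entries / syn_flood_pps  # 4.0

print(f"Entries the flood demands: {entries_demanded:,}")
print(f"State table fills in ~{seconds_until_full:.0f}s; "
      "legitimate connections are then dropped")
```

A device sized for two million connections is saturated in about four seconds, which is why stateless detection plus upstream scrubbing is the more appropriate toolset.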


So the second best practice is to have a plan for an appropriate defense based on the risk of losing network or application availability.  That might mean purchasing a package of one-time cloud-based mitigations from your Internet service provider or CDN provider if you just need occasional protection on a contingency basis.  It might mean a dedicated cloud-based service, or a hybrid deployment of on-premises and cloud-based defenses.  If you face a high degree of risk from business disruption due to DDoS, then the best practice is not just to implement a perimeter defense strategy, but to have deep analytics so you can understand the changes happening in your network and application traffic and adjust accordingly.


And third, something that our customers have been focusing on more recently, especially after the recent wave of Mirai-based IoT attacks, is to look inward and detect and adapt to attacks originating from your own network.  These attacks consume time and resources and impact performance, and with more modern big data-based approaches it’s finally possible to find them, even with hundreds of thousands or millions of locally infected nodes.
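As a rough illustration of that inward-looking detection (the internal address range and per-host threshold below are assumptions for the sketch, not anything Kentik-specific):

```python
import ipaddress
from collections import defaultdict

INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # assumed internal range
OUTBOUND_LIMIT_BPS = 50e6  # illustrative per-host outbound ceiling

def outbound_attack_suspects(flows):
    """Flag internal hosts pushing unusual outbound volume to external IPs.

    flows: iterable of (src_ip, dst_ip, bits_per_second) records.
    """
    per_host = defaultdict(float)
    for src, dst, bps in flows:
        if (ipaddress.ip_address(src) in INTERNAL
                and ipaddress.ip_address(dst) not in INTERNAL):
            per_host[src] += bps
    suspects = {h: v for h, v in per_host.items() if v > OUTBOUND_LIMIT_BPS}
    return sorted(suspects, key=suspects.get, reverse=True)
```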


DZone:  Do different clouds vary in the security they provide?

AF:  When it comes to DDoS specifically, there isn’t any real difference between cloud providers—any additional protection is going to be an add-on service, because there’s a real cost, as well as real demand, for fee-based DDoS protection services.  And if you threaten their infrastructure, they’ll rate-limit or shut you off, usually without consultation.


In general, I think it’s safe to assume that the big cloud providers like Amazon, Microsoft, Google, and IBM are in most cases going to be at least as secure in their practices as any private IT group, if not more so, simply because they have put a lot of systemic work into automation, compliance, and other processes to protect their revenue and brand.


But you should always understand the security practices of anyone you outsource critical assets, resources, or processes to, and that applies to cloud providers as well.  You should also ask, and understand, what happens if you are attacked while on their infrastructure.


DZone:  What’s the future of big data analytics in the cloud?

AF:  I think the future is already here in many ways.  There are now a variety of big data cloud options, such as Google’s BigQuery engine, Amazon’s Elastic MapReduce, and many ways to run Hadoop, Spark, ELK stacks and other big data platforms in the cloud. 


And companies like Kentik are an example of a big data platform built around a specific use case—a SaaS rather than a PaaS, built as a network-savvy but open big data platform.  I would anticipate that there will be many more big data-powered SaaS providers for various use cases in time.


DZone:  How can Kentik help developers and engineers be more successful?

AF:  One major way that Kentik helps both devs and ops engineers is by providing a cloud- and network-relevant way to distinguish network performance issues from application performance issues.  Developers have had cloud-friendly application performance management (APM) tools for a while now, but network performance monitoring has been stuck in the pre-cloud era.  Kentik gives engineers an easy-to-deploy host agent that collects network performance metrics on real application traffic and sends those metrics to our cloud back-end, which can generate alerts and provide ad-hoc analytics so teams can work together to deliver a strong user experience.  Ultimately that translates into better proactive alerts and insights, access to granular traffic data, and most importantly, time saved and the ability to do architecture and engineering instead of putting out fires.


DZone:  What are a couple of use cases where end users are benefiting from Kentik today?

AF:  As we do more and more of our personal and work business over the internet, it’s critical to both companies and their users to get a great network experience.  One core set of Kentik customers who use us to focus on that are web enterprises like ad-serving technology companies, who have gained a lot more insight into network performance by deploying the nProbe host agent onto their application servers.  nProbe gathers and sends performance metrics like TCP retransmits and latency to the Kentik Data Engine.  Engineers can then easily be alerted when there is increased latency or an outage, be pointed to the local or remote root cause, and trigger mitigations or re-routing via API push.
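As a sketch of what that alert-to-action glue can look like (the endpoint, payload fields, and 5% retransmit threshold are hypothetical stand-ins, not Kentik’s or any vendor’s real API):

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical route-controller endpoint; substitute whatever your stack exposes.
ROUTE_CONTROLLER = "https://route-controller.example.net/api/v1/reroute"

def handle_performance_alert(alert):
    """React to a host-agent performance alert by pushing a re-route via API."""
    if alert["metric"] == "tcp_retransmit_pct" and alert["value"] > 5.0:
        resp = requests.post(
            ROUTE_CONTROLLER,
            json={
                "prefix": alert["affected_prefix"],
                "action": "shift_to_backup_transit",
                "reason": f"retransmits at {alert['value']:.1f}%",
            },
            timeout=10,
        )
        resp.raise_for_status()

# Example alert payload such a back-end might deliver:
# handle_performance_alert({"metric": "tcp_retransmit_pct", "value": 7.2,
#                           "affected_prefix": "203.0.113.0/24"})
```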


From a DDoS protection perspective, I’m thinking of a service provider that started using our analytics platform and noticed they were seeing many attacks that their legacy DDoS protection platform wasn’t catching.  They decided to use Kentik Detect as their DDoS detection platform.  It integrates with some of the top mitigation solutions—in their case, they configured alerts to automatically trigger mitigation from their Radware solution.  They saw a roughly 30% improvement in catching and scrubbing DDoS attacks as a result, and fewer false positives as well, which let them automate the entire solution.  That meant customers didn’t have to experience severe service impacts from attack traffic, end users got improved service quality, and engineers could sleep at night.
