In network monitoring today, no two environments are identical and no network stays still, so the only thing we can expect is the unexpected!
The network has become a living, dynamic and complex environment that requires a flexible approach to monitoring and analysis. Network and security teams are under pressure to go beyond simple monitoring techniques, to quickly identify the root causes of issues, de-risk hidden threats and monitor network-connected things.
A common gripe among network engineers is that their current network monitoring solution doesn't provide the depth of information needed to quickly ascertain the true cause of a network issue.
Many have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or by relying on disparate systems that often don't interface well with each other.
There is also a mistaken belief that the network monitoring solutions they have already invested in will suddenly provide the depth of visibility required to manage complex networks.
Networks today are exponentially faster, bigger and more complex than those of just a few years ago.
With thousands of devices going online for the first time each minute, and the data influx continuing unabated, it’s fair to say that we’re in the throes of an always-on culture.
As the network becomes arguably the most valuable asset of the 21st century business, IT departments will be looked at to provide not just operational functions, but, more importantly, strategic value.
Compliance in IT is not new. Laws and standards such as HIPAA, PCI DSS and SCADA security requirements already regulate how organizations must manage their customer data, and network transaction logging is beginning to be required of businesses. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer data. Government and industry regulations, and their enforcement, are becoming increasingly stringent.
Most recently many countries have begun to implement Mandatory Data Retention laws for telecom service providers.
Usage-based billing refers to methods of calculating and passing back the costs of running a network to the consumers of the data that flows across it. Both Internet Service Providers (ISPs) and corporations need usage-based billing, albeit with different billing models.
NetFlow is an ideal technology for usage-based billing because it captures the transactional information pertaining to that usage, and smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.
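To make the counting and allocation step concrete, here is a minimal sketch of usage accounting over flow records. The record layout, the customer-to-IP mapping and the per-gigabyte rate are all illustrative assumptions, not any vendor's schema:

```python
from collections import defaultdict

# Hypothetical flow records (src_ip, dst_ip, bytes), as a collector
# might export them after aggregating NetFlow data.
flows = [
    ("10.0.0.5", "93.184.216.34", 1_500_000_000),
    ("10.0.0.5", "151.101.1.69", 500_000_000),
    ("10.0.0.9", "93.184.216.34", 2_000_000_000),
]

# Assumed mapping of internal source addresses to billing accounts.
customer_of = {"10.0.0.5": "acme", "10.0.0.9": "globex"}

RATE_PER_GB = 0.08  # assumed tariff, dollars per gigabyte


def bill(flows):
    """Sum bytes per customer and convert the totals into charges."""
    usage = defaultdict(int)
    for src, _dst, nbytes in flows:
        if src in customer_of:
            usage[customer_of[src]] += nbytes
    return {cust: round(total / 1e9 * RATE_PER_GB, 2)
            for cust, total in usage.items()}


print(bill(flows))  # {'acme': 0.16, 'globex': 0.16}
```

The same aggregation loop extends naturally to per-application or per-interface billing models by keying on additional flow fields.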
In today's global marketplace there has never been more pressure on organizations to reduce costs in order to stay competitive. No longer can an organization afford to ignore the ever-escalating costs that come with increasing complexity and a lack of visibility into network traffic. Time is money! A great deal of time is spent on painstaking and often unsuccessful searches for the causes of performance and security incidents, and on monitoring inappropriate network use and risk. Applications continue to grow in complexity and consume ever-increasing bandwidth, while network infrastructure grows in both cost and complexity.
Few would debate legendary martial artist Chuck Norris' ability to take out any opponent with a quick combination of lightning-fast punches and kicks. Norris, after all, is legendary for his showdowns with the best of fighters and for being the last man standing in some of the most brutal and memorable fight scenes. It's no surprise, then, that hackers named one of their most notorious botnet attacks after "tough guy" Norris, and it wreaked havoc on internet routers worldwide. The "Chuck Norris" botnet, or CNB, was strategically designed to target poorly configured Linux MIPS systems and network devices such as routers, CCTV cameras, switches and Wi-Fi modems. In a study on CNB, Masaryk University in the Czech Republic examined the attack's inner workings and demonstrated how NetFlow could be employed as a countermeasure to actively detect and incapacitate the threat.
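CNB propagated by brute-forcing telnet logins on embedded devices, and that spreading behavior leaves a distinctive flow signature: one source fanning out to many distinct destinations on TCP/23. The sketch below flags that pattern; the record format and the fan-out threshold are assumptions, not the study's exact parameters:

```python
from collections import Counter

# Synthetic flow records: one host probing many telnet (TCP/23) targets,
# plus one benign host making a single connection.
flows = [
    {"src": "203.0.113.7", "dst": f"198.51.100.{i}", "dport": 23}
    for i in range(1, 60)
] + [{"src": "10.0.0.2", "dst": "198.51.100.1", "dport": 23}]


def suspected_scanners(flows, threshold=50):
    """Flag sources contacting >= threshold distinct hosts on TCP/23."""
    fanout = Counter()
    seen = set()
    for f in flows:
        pair = (f["src"], f["dst"])
        if f["dport"] == 23 and pair not in seen:
            seen.add(pair)
            fanout[f["src"]] += 1
    return [src for src, n in fanout.items() if n >= threshold]


print(suspected_scanners(flows))  # ['203.0.113.7']
```

In a real deployment the threshold would be tuned per network, and the same fan-out test applies to other worm-like propagation ports.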
Part 3 in our series on how to counter-punch botnets, viruses, Tor and more with NetFlow focuses on Tor threats to the enterprise.
Tor (a.k.a. onion routing) and anonymized peer-to-peer relay services such as Freenet are where we can expect to see many more attacks, as well as malevolent actors out to deny your service or steal your valuable data. It's useful to recognize that flow analytics provides the best and cheapest means of de-anonymizing or profiling this traffic.
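One reason flow analytics is so cheap for this job is that Tor relay addresses are public: even though payloads are encrypted, a flow whose destination is a known relay reveals that the host is using Tor. A minimal sketch of that matching, using an assumed, abbreviated relay list (in practice you would pull the current consensus from the Tor directory service):

```python
# Abbreviated, illustrative set of known Tor relay IPs (assumption;
# refresh from the published Tor consensus in a real deployment).
KNOWN_TOR_RELAYS = {"185.220.101.1", "199.87.154.255"}

# Hypothetical flow records: (internal_src, dst, dst_port).
flows = [
    ("10.0.0.4", "185.220.101.1", 443),
    ("10.0.0.4", "93.184.216.34", 443),
    ("10.0.0.8", "199.87.154.255", 9001),
]


def tor_users(flows):
    """Return internal hosts whose flows terminate at a known Tor relay."""
    return sorted({src for src, dst, _port in flows if dst in KNOWN_TOR_RELAYS})


print(tor_users(flows))  # ['10.0.0.4', '10.0.0.8']
```

The same lookup profiles rather than blocks: pairing it with byte and duration counters from the flow records builds a picture of how heavily each host relies on the anonymizing network.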
Data Retention Compliance
Hosts that communicate with more than one known threat type should be designated high risk, and repeated threat breaches involving those hosts or their dependent hosts can be marked as repeat offenders, providing an early warning of a breach or an attack.
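The escalation logic above can be sketched in a few lines. The threat-event format, labels and repeat-offender threshold are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical threat events: (host, threat_type) pairs produced by
# matching flow end-points against threat intelligence feeds.
events = [
    ("10.0.0.3", "botnet_c2"),
    ("10.0.0.3", "phishing"),
    ("10.0.0.3", "botnet_c2"),
    ("10.0.0.6", "phishing"),
]


def classify(events, repeat_threshold=3):
    """Escalate hosts by distinct threat types and total breach count."""
    types = defaultdict(set)
    hits = defaultdict(int)
    for host, threat in events:
        types[host].add(threat)
        hits[host] += 1
    risk = {}
    for host in types:
        if hits[host] >= repeat_threshold:
            risk[host] = "repeat offender"
        elif len(types[host]) > 1:
            risk[host] = "high"
        else:
            risk[host] = "low"
    return risk


print(classify(events))  # {'10.0.0.3': 'repeat offender', '10.0.0.6': 'low'}
```

Extending the classifier to dependent hosts is a matter of propagating the score along observed flow relationships between internal hosts.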
It would be negligent of me not to mention that the same flow-based end-point threat detection techniques can be used as part of Data Retention compliance. In my opinion this actually enables better individual privacy: rather than retaining everything, you can focus on profiling known bad end-points and qualifying visitors to them, such as end-points used in illicit p2p swap sessions or specific kinds of subversive or dangerous sites known to have hosted such traffic in the past.