If there's a serious security threat, or a network or application problem, it's critical to find out what happened quickly and respond appropriately, so you can minimize the damage. The problem is that the investigation process can be extremely time-consuming and is often inconclusive.
The ever-increasing flood of alerts facing SecOps, NetOps, DevOps and IT teams results in a backlog of unexamined alerts that represents an unknown risk of costly outages, or even costlier security breaches. Unfortunately, these unexamined issues can often be the very ones that end up causing the damage. In the post-event forensic analysis that follows a serious breach or outage, all too often it turns out the warning signs were there before it happened; the issue just never got looked at.
There are a number of flaws in the typical investigation process:
- Evidence must be collected from multiple sources, such as syslogs, NetFlow data, authentication logs and application logs, and collated to reconstruct what happened. The process is slow and cumbersome, and often results in inconclusive answers:
• Issues may not have been logged
• Logs may have been compromised
• There may be insufficient detail to see what took place
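To make the collation step above concrete, here is a minimal sketch of merging events from two log sources into a single time-ordered view. The log lines and their formats are invented for illustration; real syslog and authentication-log formats vary widely and usually need per-source parsers.

```python
from datetime import datetime

# Hypothetical, simplified log lines from two separate sources.
# Real-world formats differ; these are illustrative only.
syslog_lines = [
    "2024-03-01T10:02:11 fw01 DROP src=10.0.0.5 dst=10.0.1.9",
    "2024-03-01T10:02:45 fw01 ACCEPT src=10.0.0.5 dst=10.0.1.9",
]
auth_lines = [
    "2024-03-01T10:02:40 srv01 failed login for admin from 10.0.0.5",
]

def parse(line, source):
    # Split off the leading ISO-8601 timestamp; keep the rest verbatim.
    ts, _, rest = line.partition(" ")
    return (datetime.fromisoformat(ts), source, rest)

# Collate both sources into one time-ordered timeline of events.
timeline = sorted(
    [parse(l, "syslog") for l in syslog_lines]
    + [parse(l, "auth") for l in auth_lines]
)

for ts, source, event in timeline:
    print(ts.isoformat(), f"[{source}]", event)
```

Even this toy example shows the fragility of the approach: if any source failed to log the event, or logged it with insufficient detail, the reconstructed timeline has gaps that no amount of collation can fill.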
- When troubleshooting performance issues, inter-team finger-pointing is common. The DevOps team blames the network, the NetOps team blames the application. The to-ing and fro-ing means problems can persist for extended periods.
- The cost of deploying analytics appliances is high. Often, teams we talk to say they’d like to deploy a particular analytics appliance in 50 places on their network, but they can only afford to deploy in 20 places, leaving them with blind-spots. If an issue occurs in a part of the network where there’s no analytics deployed, chances are it won’t get picked up. And if it gets reported to the help-desk by a user, there’s no evidence to troubleshoot the problem.
- The vast majority of monitoring tools are focused on what's happening right now, in "real-time". If an issue is missed when it first happens, and insufficient evidence was collected to prove what took place, it's often impossible to determine what happened. So teams often deploy packet capture after the event and hope the problem recurs.
Killing Issues Stone Dead with Network History
With an accurate, packet-level history of what has happened on the network, investigations are both fast and conclusive.
Access to Network History lets analysts quickly and precisely determine the severity of security threats or the root cause of performance issues. And it removes the ambiguity that can otherwise result from relying on Syslogs, NetFlow data and log files to investigate security or performance issues.
Rapid Domain Isolation
Often performance issues are discovered when a user calls the help-desk complaining of slow application performance. But should the issue be referred to the NetOps team or the DevOps team for investigation? Understanding which team is responsible can be difficult without the right evidence.
NetFlow will tell you whether a connection occurred between a client device and an application server. But it doesn’t provide sufficient detail to show whether that application responded in a reasonable timeframe or not.
Likewise, a SQL server log will show that a query was triggered, and a transaction occurred, but it won’t tell you whether a network issue caused a problem with the response, or whether an overloaded web server is preventing the response from being returned in a reasonable timeframe.
Network History provides definitive evidence for quickly isolating where the root cause of issues lies. Examining the packet-level detail will show conclusively whether it’s the web-server, or the database-server - or perhaps a network routing issue - that’s causing the problem, allowing the resolution of the problem to be directed to the right team for further investigation.
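As a sketch of why packet-level detail is so conclusive for domain isolation, the example below separates network delay from server delay using per-packet timestamps. The packet records are invented illustrations of what a capture might contain, not real Endace output: the TCP handshake gives an estimate of network round-trip time, and anything beyond that in the request/response gap is time the server spent processing.

```python
# Invented packet timeline for one HTTP transaction: (seconds, description).
# In a real investigation these timestamps would come from recorded packets.
packets = [
    (0.000, "client SYN"),
    (0.020, "server SYN-ACK"),   # handshake reply: network round trip
    (0.021, "client ACK"),
    (0.022, "client HTTP GET"),
    (1.250, "server HTTP 200"),  # response arrives over a second later
]

# Network RTT estimated from the TCP handshake (SYN -> SYN-ACK).
network_rtt = packets[1][0] - packets[0][0]

# Total request latency, and the share attributable to the server itself.
request_latency = packets[4][0] - packets[3][0]
server_time = request_latency - network_rtt

print(f"network RTT:     {network_rtt * 1000:.0f} ms")
print(f"request latency: {request_latency * 1000:.0f} ms")
print(f"server time:     {server_time * 1000:.0f} ms")
```

In this invented case the server accounts for almost all of the latency, so the issue would be routed to the DevOps team; if the handshake RTT dominated instead, it would go to NetOps. NetFlow records and server logs alone cannot support this breakdown, because neither captures the individual packet timings.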
Throwing Off the Shackles of Real-Time-Only Monitoring
With EndaceProbe Analytics Platforms deployed across the network, it becomes possible to use real-time monitoring tools to go back in time to analyze historical security or performance issues, even if a monitoring tool was not deployed at the time.
With recorded Network History on hand, it no longer matters if a monitoring tool was not in the right place, or wasn’t active, when an issue occurred. As long as the traffic related to that issue was recorded and is stored on an EndaceProbe, a monitoring or analytics application can be deployed on-demand into Application Dock on that EndaceProbe, and the traffic can be replayed to it using Playback™.
This provides powerful, back-in-time, automated investigation capability that is impossible to achieve without packet-level Network History.
Network History Breadth and Depth
The ability to connect hundreds of EndaceProbes into a network-wide packet capture, recording and analytics-hosting fabric, EndaceFabric, enables massive scalability. You can increase the depth of storage for a longer history, and extend coverage across the network to increase visibility and eradicate blind-spots.
With our largest EndaceProbes providing up to a petabyte of effective packet storage and eight monitoring ports per platform, fabrics can scale to hundreds of petabytes and thousands of individual recording points: enough for the world's largest networks.
How about a Demo?
Integrating Network History into your security and performance monitoring tools gives you definitive evidence at your fingertips.
Find out just how fast and accurate your investigations could be.