Optimize IT operations, pinpoint vulnerabilities

Log Management and Pragmatic Operations

Last month, I introduced the concept of the Pragmatic CSO methodology, a 12-step program to help security professionals overcome their addiction to throwing new products at every new attack vector and security problem. The process also helps security professionals build a value proposition, interface with senior management more effectively, and run their security operation as a business. As a high-level construct, the 12 steps are helpful, but ultimately security professionals need to do something, and that's what we are going to discuss this month.

The next step in the journey is to understand how Pragmatic CSOs operate their businesses and stay on top of a seemingly infinite attack surface, with new, innovative threats appearing pretty much every day. Then we'll look at how log management helps you keep your environment secure.

The operational disciplines of running your security business are discussed in Step 7: Operations and Monitoring of the Pragmatic CSO. The approach is largely predicated on understanding what is happening on your network. Many organizations have no idea what is going on with their networks. Seriously. So they have no way to know that they've been compromised. That is, until it becomes painfully obvious, and by then it is way too late.

I don’t know a lot, but I can tell you if you don’t have a very clear idea about what is “normal” on your network, you will have a hard time figuring out when something is NOT normal. Determining these anomalies is the first step in figuring out if you have a problem in your environment. That is the first clue to the fact that you’ve been had.

Nowadays, the attack surface is pretty much infinite, so the idea of protecting every flank is neither practical nor achievable. Thus, the idea of “getting ahead of the threat” is bunk. The best we can hope for is to REACT FASTER when we identify an issue. I’ve alluded to how you react faster above, but let me be more specific. We’ve got to baseline the environment, make sure the baseline is clean (so we aren’t normalizing on a compromised environment), and then monitor to detect when something is not “normal.”
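The baseline-then-monitor loop can be sketched in a few lines. Here's a minimal, hypothetical example: summarize "normal" as the mean and standard deviation of hourly event counts taken from a known-clean period, then flag any hour that strays too far from that baseline. The function names and sample counts are mine, not part of any particular product.

```python
from statistics import mean, stdev

def build_baseline(hourly_counts):
    """Summarize 'normal' as the mean and standard deviation of hourly counts."""
    return mean(hourly_counts), stdev(hourly_counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag an hour whose event count strays too far from the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hourly login-failure counts from a clean period (hypothetical data)
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10, 12, 13, 9, 11]
baseline = build_baseline(history)

print(is_anomalous(11, baseline))   # within normal range: False
print(is_anomalous(250, baseline))  # spike worth investigating: True
```

The point is not the arithmetic; it's that you cannot call 250 failures an anomaly unless you first established that 11 is normal, and that the history you baseline against is clean.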

This idea of "looking forward" has proven to be very effective in combating both known attacks and those new, innovative attacks that make practitioners crazy. But once you determine something bears more investigation, what then? It's critical to complement the ability to look forward with a capability of "looking back" to investigate an issue, identify the root cause and remediate the problem.

Since we are trying to react faster, the sooner we can investigate the issue, the better it will be for our ability to contain potential damage. The good news is that a lot of the information we need to investigate these issues exists. Amazing, eh? You’ve got pretty much everything you need to get to the bottom of the issue in your logs.

Yes, your logs. Pretty much every device, server, application and database generates logs. This log data can provide the basis to analyze what happened, when, and by whom. When you are trying to isolate bad behavior, this information is invaluable. It's a good idea to gather as much data as you can from as many sources as you can. You have to balance how much data is too much, but I'd rather err on the side of gathering more data rather than less.
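To see why logs answer "what, when and by whom," consider pulling those fields out of a single auth log line. This is a sketch only; the line format below is a simplified, illustrative syslog-style record, not a spec.

```python
import re

# A simplified syslog-style auth log line (format is illustrative only)
LINE = "Mar 14 09:26:53 web01 sshd[2212]: Failed password for admin from 203.0.113.9"

PATTERN = re.compile(
    r"(?P<when>\w{3} +\d+ [\d:]+) "       # when: timestamp
    r"(?P<host>\S+) "                     # where: originating host
    r"(?P<proc>\w+)\[\d+\]: "             # process name and pid
    r"Failed password for (?P<user>\S+) " # who: account targeted
    r"from (?P<src>[\d.]+)"               # from where: source IP
)

m = PATTERN.match(LINE)
if m:
    print(m.group("when"), m.group("host"), m.group("user"), m.group("src"))
```

One line answers all the investigative questions; a log management platform does this extraction across billions of lines and many formats.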

So if you are going to gather all this log data, what is important to think about? First you need to make sure the data is protected. The first thing a bad actor does is to go back and erase their tracks by messing with the logs. Why leave evidence when the objective is to remain undetected? So the log files must be moved off the main system, so the bad folks can’t get to them. Other layers of security can include locking down the data store, and hashing and sequencing the log records to further prevent tampering.
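The "hashing and sequencing" idea can be illustrated with a simple hash chain: each record's digest covers its sequence number, its content, and the previous record's digest, so altering, deleting, or reordering any record breaks verification from that point on. This is a minimal sketch of the concept, not any vendor's implementation.

```python
import hashlib

def chain_logs(records):
    """Sequence and hash-chain log records so later tampering is detectable."""
    prev = "0" * 64  # genesis value for the first record
    chained = []
    for seq, rec in enumerate(records):
        digest = hashlib.sha256(f"{seq}|{prev}|{rec}".encode()).hexdigest()
        chained.append((seq, rec, digest))
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute the chain; return False if any record was altered or reordered."""
    prev = "0" * 64
    for seq, rec, digest in chained:
        expected = hashlib.sha256(f"{seq}|{prev}|{rec}".encode()).hexdigest()
        if digest != expected:
            return False
        prev = digest
    return True

logs = ["user=alice action=login", "user=alice action=sudo", "user=alice action=logout"]
chain = chain_logs(logs)
print(verify_chain(chain))  # True

# An attacker rewrites record 1 to hide their tracks: verification now fails
chain[1] = (1, "user=alice action=nothing-to-see", chain[1][2])
print(verify_chain(chain))  # False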

Scalability is also a pretty important aspect of your log management platform. Most log data gets tossed in the virtual circular bin because of the sheer volume of information. We are talking about a LOT of data. A typical large enterprise can (and does) generate billions of log messages a day. Yes, that's billions with a B. Clearly this is not something a human can handle alone. You need help, and that's where a log management platform comes into play.

What else do you need to worry about? Given that we are trying to react faster, the ability to analyze and drill down into specific log sources quickly and effectively is also pretty important. I didn’t say in real time because that’s not going to happen. First the log files need to be sent to the platform and analyzed, which takes a bit of time. In reality, it’s more like “pseudo real-time.” Odds are by the time you figure out you need to investigate, you’ll have the data at your disposal.

Next month we are going to dig deep into using the log management platform within the context of incident response, so I’ll table the rest of that discussion until then.

So what else can this log data be used for? Finally, log data can be a very important decision-support tool. Beyond incident investigation, trend analysis of log data can pinpoint scalability issues, new attack vectors, potentially troubling internal activity and the like.
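Trend analysis from centralized logs can be as simple as counting an event type per day and checking the direction of the curve. A toy sketch, with hypothetical data:

```python
from collections import Counter

# Hypothetical stream of (day, event_type) pairs from the centralized log store
events = [
    (1, "auth_failure"), (1, "auth_failure"),
    (2, "auth_failure"), (2, "auth_failure"), (2, "auth_failure"),
    (3, "auth_failure"), (3, "auth_failure"), (3, "auth_failure"),
    (3, "auth_failure"), (3, "auth_failure"),
]

# Count auth failures per day
daily = Counter(day for day, kind in events if kind == "auth_failure")
ordered = [daily[d] for d in sorted(daily)]  # counts in day order

# A count of failures that rises every day is a trend worth reporting upward
rising = all(a < b for a, b in zip(ordered, ordered[1:]))
print(ordered, "rising" if rising else "flat/declining")
```

The same aggregation, charted over weeks rather than days, is what feeds the management reports discussed next.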

This trend analysis is also a critical tool in your ongoing efforts to substantiate your value to senior management, as Pragmatic CSOs must do, and to prove compliance to auditors. Remember, senior management likes reports that show what you do and why. Trend analysis and good, colorful reports add a measure of credibility, so that is an added benefit of centralizing logs.

Until next month, be Pragmatic.

Industry News

Survey: Security policies neglect off-network devices 
A majority of companies put confidential data at risk every day when equipment such as servers, desktops, laptops and portable storage devices leave the confines of their network, according to a recent survey of 735 IT security practitioners.

Related content – protect the server where the data resides, and not just the perimeter, to minimize theft of confidential/sensitive data.