
The EPS Myth

Often when I engage with a prospect, their first question is “How many events per second (EPS) can EventTracker handle?” People tend to confuse EPS with scalability, so simply quoting back a large enough number (usually larger than the one from the previous vendor they spoke with) convinces them your product is, indeed, scalable. The fact is that scalability and events per second are not the same thing, and many vendors sidestep the real scalability issue by intentionally using the two interchangeably. A high EPS rating does not guarantee a scalable solution. If the only measure of scalability available is an EPS rating, you as a prospect should be asking yourself a simple question: what is the vendor’s definition of EPS? You will generally find that the answer is different with each vendor.

  • Is it number of events scanned/second?
  • Is it number of events received/second?
  • Is it number of events processed/second?
  • Is it number of events inserted in the event store/second?
  • Is it a real time count or a batch transfer count?
  • What is the size of these events? Is it some small, non-representative size, for instance 100 bytes per event, or a real event like a Windows event, which may vary from 1,000 to 6,000 bytes?
  • Are you receiving these events in UDP mode or TCP mode?
  • Is the measurement taken while correlation rules are running against the event stream? If so, how many rules are being run?
  • And let’s not even talk about how fast the reporting function runs; EPS does not measure that at all.

At the end of the day, an EPS rating generally measures the receipt of small, non-typical, normalized events. It says nothing about actually doing something useful with those events, which makes it pretty much useless.

With no accepted definition of what an event actually is, EPS is also a terrible comparative measure. You cannot assume that one vendor claiming 12,000 EPS is faster than another claiming 10,000 EPS, as they are often measuring very different things. A good analogy would be asking someone how far away an object is and getting the reply “100.” For all the usefulness of the EPS measure, the unit could be inches or miles.

EPS is even worse for ascertaining true solution capability. Some vendors market appliances that promise 2,000 EPS and 150 GB of disk space for log storage. They also promise to archive security events for multiple years to meet compliance requirements. For the sake of argument, let’s assume the system is receiving, processing, and storing 1,000 Windows events per second with an average event size of 1 KB (a common size for a Windows event). In 24 hours you will receive over 86 million events. Even compressed at 90%, that consumes roughly 8.6 GB, or nearly 6% of your storage, in a single day; at that rate the appliance can hold only a few weeks of data, never mind multiple years. Think of buying a car with an engine that can race to 200 MPH and a set of tires and suspension that cannot go faster than 75 MPH. The car can’t go 200; the engine can, but the car can’t. A SIEM solution is the car in this example, not the engine. Having the engine does you no good at all.
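To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The inputs (1,000 events per second, roughly 1 KB per event, 90% compression, 150 GB of storage) come straight from the example above; everything else is just unit conversion.

```python
# Back-of-the-envelope retention math for the appliance example above.
events_per_sec = 1_000          # sustained ingest rate
avg_event_bytes = 1_024         # ~1 KB per Windows event
compression = 0.90              # 90% compression -> keep 10% of raw size
storage_gb = 150                # advertised log storage

events_per_day = events_per_sec * 86_400             # seconds in a day
raw_gb_per_day = events_per_day * avg_event_bytes / 1024**3
stored_gb_per_day = raw_gb_per_day * (1 - compression)
retention_days = storage_gb / stored_gb_per_day

print(f"{events_per_day / 1e6:.1f} M events/day")    # ~86.4 M
print(f"{stored_gb_per_day:.1f} GB stored/day")      # ~8.2 GB
print(f"{retention_days:.0f} days of retention")     # ~18 days
```

Run the numbers yourself before believing any multi-year retention claim; the rated EPS and the disk size have to agree.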

So when asked about EPS, I sigh, say “it depends,” and try to explain all this. Sometimes it sinks in, sometimes not. All in all, don’t pay a lot of attention to EPS – it is largely an empty measure until the unit of measure is standardized, and even then it will only capture a small part of overall system capability.

Steve Lafferty

EventTracker review; Zero-day attack protection and more

Creating lasting change from security management

Over the past year, I’ve covered how to implement a Pragmatic approach to security management and then dug deeper into the specifics of how to successfully implement a security management environment. Think of those previous tips as your high-school-level education in security management.

Now it’s time to kiss the parents, hug the dog, and head off to the great unknown of college, university, or some other higher education. The tools are in place and you have a quick win to celebrate, but the reality is that these are still just band-aids. The next level of your education is about creating lasting change that results in constant improvement of your security posture. Creating this kind of change means that your security management platform needs to:

  • Make you better – If there isn’t a noticeable difference in your ability to do your job, then the security management platform wasn’t worth the time or the effort to set it up. Everybody loses in that situation. You should be able to pinpoint issues faster and figure out what to investigate more accurately. These may sound like no-brainers, but many organizations spend big money to implement technology that doesn’t show any operational value.
  • Save you time – The reality is, as interesting as reports are for compliance, if using your platform doesn’t help you do your job faster, then you won’t use it. No one has discretionary time to waste doing things less efficiently. Thus, you need to be able to utilize your dashboard daily to investigate issues quickly and ensure you can isolate problems without having to gather data from a variety of places. Those penalties in time can make the difference between nipping a problem in the bud or cleaning up a major data breach.

I know those two objectives may seem a long way off when you are just starting the process, but let’s take a structured approach to refining our environment and before you know it, your security management environment will be a well-oiled machine, and dare I say it, you will be the closest thing to a hero on the security team.

Step 1: Revisit the metrics

Keep in mind that in the initial implementation (and while searching for the quick win), you gathered some data and started pulling reports on it to identify the low-hanging fruit that needed to be fixed right away. This is a good time to make sure you are gathering enough data to draw broader conclusions. Remember that we are looking mostly for anomalies. Since we defined normal for your environment during the initial implementation, we now need to focus on what is “not normal.” Here are a few areas to focus on (see the sketch after this list for one way to baseline “normal”):

  • Networks – This is the easiest data to gather because you are probably already monitoring much of it. Yes, the data coming out of your firewalls, IPS devices, and content gateways (web filtering and anti-spam) should already be pumped into the system.
  • Data center – Many attacks now target databases and servers because that’s where the “money” is. Thus, pulling log feeds from databases and server operating systems gives you another set of data sources to leverage. Again, once you have the baseline, you are in good shape to start focusing on behavior that is not “normal.”
  • Endpoints – Depending on the size of your organization this may not be feasible, but another area of frequent compromise is end-user devices. Maybe users are copying data to a USB thumb drive or installing unauthorized applications. Periodically gathering system log information and analyzing it can also yield a treasure trove of information.
  • Applications – Finally, you can also gather data directly from the application logs: who is accessing the application and what transactions they are performing. You can look for patterns, which in many cases can indicate a situation that needs to be investigated.
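Here is the baseline sketch promised above, in Python. It is not tied to any particular product, and the source names and counts are made up for illustration; the idea is simply to learn a per-source “normal” daily event volume from history and flag days that deviate sharply from it.

```python
from statistics import mean, stdev

# Hypothetical daily event counts per data source (illustrative numbers only).
history = {
    "firewall": [48_200, 51_900, 50_400, 47_800, 52_100, 49_500, 50_700],
    "database": [3_100, 2_950, 3_300, 3_050, 3_200, 2_980, 3_150],
}

def is_anomalous(source: str, todays_count: int, k: float = 3.0) -> bool:
    """Flag today's count if it falls outside mean +/- k standard deviations."""
    baseline = history[source]
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(todays_count - mu) > k * sigma

# A database suddenly logging 4x its usual volume is worth a look.
print(is_anomalous("database", 12_400))   # True  -> investigate
print(is_anomalous("firewall", 50_000))   # False -> a normal day
```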

Step 2: Refine the thresholds

Remember the REACT FASTER doctrine? That’s all about learning of an issue as quickly as possible and acting decisively to head off any real damage. Since you are now gathering a very comprehensive set of data (from Step 1), the key to wading through all that data and making sense of it is thresholds.

To be clear, initially your thresholds will be wrong and the system will tend to be a bit noisy. You’ll get notified about too much stuff, because you are better off setting loose thresholds initially than missing the iceberg (yes, it’s a Titanic reference). But over time (and time here can be measured in weeks, not months), you can and should tighten those thresholds to really narrow in on the “right” time to be alerted to an issue.

The point is automation. You’d rather not have your nose buried in log data all day or sit watching the packets fly by, so you need to learn to trust your thresholds. Once you have them in a comfortable place (like the Three Bears: not too many false positives, but not too few either), you can start spot-checking some of the devices, just to make sure. Constant improvement is all about finding the right mix of data sources and monitoring thresholds to make an impact. And don’t think you are ever done tuning the system. What’s right today is probably wrong tomorrow, given the dynamic nature of IT infrastructure and the attack space.
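To make “start loose, then tighten” concrete, here is a minimal sketch in Python that builds on the baseline example from Step 1. The multiplier k is the tuning knob: a small k means more alerts (and more noise), a large k means fewer, higher-confidence alerts. All numbers are illustrative, not a recommendation.

```python
from statistics import mean, stdev

# Seven days of baseline event counts for one source (illustrative only).
baseline_counts = [48_200, 51_900, 50_400, 47_800, 52_100, 49_500, 50_700]
mu, sigma = mean(baseline_counts), stdev(baseline_counts)

def alert(todays_count: int, k: float) -> bool:
    """Alert when today's volume exceeds the baseline by more than k sigmas."""
    return todays_count > mu + k * sigma

# Week 1: loose threshold -- noisy, but you won't miss the iceberg.
print(alert(54_000, k=2.0))   # True  (mu + 2*sigma is ~53,400)
# Weeks later: tightened threshold -- the same day no longer pages anyone.
print(alert(54_000, k=4.0))   # False (mu + 4*sigma is ~56,800)
```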

Step 3: Document thyself

Finally, once your system is operating well, it’s time to revisit all of those reports you generate. Look from a number of different perspectives:

  • Operational reporting – You probably want to be getting daily (weekly at a minimum) reports that pinpoint things like attacks dropped at the perimeter, login failures, and other operational data. Looking at the ops reports should give you a good feel for what is going on within your networks, data centers, and applications. Remember that security professionals hate surprises; these reports help eliminate them.
  • Compliance reporting – The reports that help you run your security operation are not necessarily what an auditor is going to want to see. Many security platforms have pre-built reports for regulations like PCI and HIPAA. Use these templates as a starting point, and work with your auditor or assessor to make sure your reports are tuned to what they expect and need. The less time you spend generating compliance reports, the more time you can spend fixing issues and building a security strategy.

Congratulations, you are ready for your diploma. If you generally follow some of the tips and utilize many of the resources built into your security management platform, you can make a huge impact in how you run your security environment. I won’t be so bold as to say you can “get ahead of the threat,” because you can’t. But you can certainly REACT FASTER and more effectively.

Good luck on your journey, and you can always find me at http://blog.securityincite.com.

Industry News

Adobe zero-day flaw being actively exploited in the wild

The widely used Adobe Flash Player has a zero-day flaw that is being targeted by a number of attackers, who have set up more than 200,000 Web pages to exploit it.

Exploiting Security Holes Automatically

Software patches, which are sent over the Internet to protect computers from newly discovered security holes, could help the bad guys as well as the good guys, according to research recently presented at the IEEE Symposium on Security and Privacy. The research shows that attackers could use patches to automatically generate software to attack vulnerable computers, employing a process that can take as little as 30 seconds.

Learn how you can protect your IT systems from zero-day attacks

There is always a lag between the time a new virus hits the web and the time a patch is created and antivirus definitions are updated, which often gives the virus several hours to proliferate across thousands of machines (the Adobe flaw is a perfect case in point). In addition, virus signatures change constantly, and often the same virus comes back with a slight variation that is enough to elude antivirus systems.

Auditing Drive Mappings – TECH TIP

Windows does not track drive mappings for auditing out of the box. To audit drive mappings, you will need to perform the following steps:

  1. Turn on Object Access Auditing via Group Policy on the system(s) in question. You will then need to perform the remaining steps on each system where you want to track drive mappings.
  2. Open the registry and drill down to HKEY_CURRENT_USER\Network
  3. Right-click on Network and choose Permissions (if you click on the plus sign you will see each of your mapped drives listed)
  4. Click on the Advanced button
  5. Click on the Auditing tab then click on the Add button
  6. In the Select User or Group box, type in Everyone and click OK
  7. This will open the Auditing dialog box
  8. Select the settings that you want to audit; stay away from the Full Control and Read Control options. I recommend the following settings: Create Subkey, Create Link, and Delete.

Windows will now generate event IDs 560, 567, and 564 when drive mappings are added or deleted: 564 is generated when a mapping is deleted, 567 when a mapping is added or deleted, and 560 is generated in both cases as well. Event IDs 567 and 564 will not give you the full information you are looking for; they will tell you what was done to the mappings, but not WHICH mapping. To determine which mapping, you will need the Handle ID found in the event description of the 564/567 events. The Handle ID will allow you to track back to the 560 event, which gives you the mapping that is being added or deleted. Note that event ID 567 is only generated on Windows XP and Windows 2003 systems; Windows 2000 will not generate 567.
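To illustrate the Handle ID correlation described above, here is a minimal sketch in Python. The event records are hypothetical stand-ins for parsed security-log entries (in practice they would come from the Windows event log or your log management tool); the join on Handle ID is the point.

```python
# Hypothetical parsed security-log records; field names are illustrative.
events = [
    {"id": 560, "handle_id": "0x3a8", "object": r"HKCU\Network\Z"},
    {"id": 567, "handle_id": "0x3a8", "access": "Create Subkey"},
    {"id": 564, "handle_id": "0x51c"},
    {"id": 560, "handle_id": "0x51c", "object": r"HKCU\Network\Y"},
]

# Index the 560 (handle-opened) events by Handle ID ...
opens = {e["handle_id"]: e["object"] for e in events if e["id"] == 560}

# ... then resolve each 564/567 back to the mapping it touched.
for e in events:
    if e["id"] in (564, 567):
        action = "deleted" if e["id"] == 564 else e.get("access", "changed")
        print(f"Mapping {opens.get(e['handle_id'], '?')}: {action}")
```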

– Isaac