Archive

Detecting Zeus, Logging for incident response, and more


Logging for Incident Response: Part 1 – Preparing the Infrastructure Of all the uses for log data across the spectrum of security, compliance, and operations, using logs for incident response presents a truly universal scenario – you can be forced to use logs for incident response at any moment, whether you’re prepared or not.

Logs vs Bots and Malware Today


Despite the fact that the security industry has been fighting malicious software – viruses, worms, spyware, bots, and other malware – since the late 1980s, malware still represents one of the key threats to organizations today. While the silly viruses of the 1990s and the noisy worms (Blaster, Slammer, etc.) of the early 2000s have been replaced by commercial bots and so-called “advanced persistent threats,” the malware fight rages on.

Portable drives and Working remotely in Today’s IT Infrastructure


So, Wikileaks announced this week that its next release will be seven times as large as the Iraq logs. The initial release brought a very common problem that organizations of all sizes face to the top of the global stage: anyone with a USB drive or writeable CD drive can download confidential information and walk right out the door. The reverse is true as well – harmful malware, Trojans, and viruses can be introduced onto the network, as seen with Stuxnet. These pesky little portable media drives are more trouble than they are worth! OK, you’re right, let’s not cry “the sky is falling” just yet.

But, Wikileaks and Stuxnet aside, how big is this threat?

  • A 2009 study revealed that 59% of former employees stole data from their employers before leaving (learn more)
  • A new study in the UK reveals that USB sticks (23%) and other portable storage devices (19%) are the most common devices for insider theft (learn more)

Right now, there are two primary schools of thought on this significant problem. The first is the alarmist approach: disable all drives so that no one can steal data or infect the network. The other is to turn a blind eye and have no controls in place.

But how does one know who is doing what, and which files are being downloaded or uploaded? The answer is in your device and application logs, of course. The first step is to define your organization’s security policy concerning USB and writeable CD drives:

1. Define the capabilities for each individual user as tied to their system login

  • Servers and folders they have permission to access
  • Allow/disallow USB and writeable CD drives
  • Create a record of the serial numbers of the CD drive and USB drive

2. Monitor log activity for USB drives and writeable CD drives to determine what information may have been taken, and by whom

Obviously, this is like closing the barn door after the horse has left. You will be able to know who did what, and when… but by then it may be too late to prevent any financial loss or harm to your customers.

The ideal solution is to enforce this organization-wide policy with automation: define the abilities of each individual user, determine who has permission to use the writeable capabilities of the CD or USB drive at the workstation, and monitor and control serial numbers and information access from the server level. Done by hand, combing through all of the logs to look for such an event and tracing what happened would seem almost impossible.

With a SIEM/log management solution, this process can be automated, and your organization can be alerted to any event where the transfer of data does not match the user profile/serial number combination. It is even possible to prevent that data from being transferred by automatically disabling the device. In other words, if someone with a sales ID attempts to copy a file from the accounting server onto a USB drive whose serial number does not match their profile, you can have the drive automatically disabled and open an incident to investigate the activity (see the sketch below). By the same token, if someone with the right user profile/serial number combination copies a file they are permitted to access – a normal, everyday event in conducting business – they would be allowed to do so.
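As a rough illustration of that check – not EventTracker’s actual implementation, and with made-up field and table names – the logic reduces to a lookup of the device serial against the user’s profile:

```python
# Hypothetical sketch of the profile/serial-number check described above.
# Field and table names are illustrative, not EventTracker's actual API;
# a real SIEM would populate them from device and file-access logs.

# Example policy: which removable-device serials each user profile may use.
ALLOWED_DEVICES = {
    "sales_jdoe": {"USB-4F2A91"},
    "acct_msmith": {"USB-77C3B0", "CDRW-0012AA"},
}

def check_transfer(event):
    """Decide what to do about a single file-transfer log event."""
    allowed = ALLOWED_DEVICES.get(event["user_id"], set())
    if event["device_serial"] not in allowed:
        # Profile/serial mismatch: disable the device, open an incident.
        return ("disable_device", "open_incident")
    return ("allow",)

# A sales user copying to an unregistered drive triggers the response:
print(check_transfer({"user_id": "sales_jdoe",
                      "device_serial": "USB-UNKNOWN"}))
```

In a real deployment the policy table would come from your directory and device inventory, and the resulting actions would be carried out by the SIEM’s response engine rather than returned as strings.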

This solution prevents many headaches and will keep your confidential data from making the headlines of the Los Angeles Times or the Washington Post.

To learn how EventTracker can automate this security initiative for you, click here.

-John Sennott

100 Log Management uses #68 Secure Auditing – HP-UX


Today we continue our series on Secure Auditing with a look at HP-UX. I apologize for the brief hiatus; we are now back on our regular schedule.

Lessons from Honeynet Challenge “Log Mysteries”


Ananth, from Prism Microsystems, provides an in-depth analysis of the Honeynet Challenge “Log Mysteries” and his thoughts on what it really means in the real world. EventTracker’s Syslog monitoring capability protects your enterprise infrastructure from external threats.

Log review for incident response; EventTracker Excels in UNIX Challenge


Log Review for Incident Response: Part 2 Of all the uses for log data across security, compliance, and operations (see, for example, LogTalk: 100 Uses for Log Management #67: Secure Auditing – Solaris), using logs for incident response presents a truly universal scenario: you can be forced to use logs for incident response at any moment, whether you are prepared to or not.

EventTracker 7 is here; Detailed FISMA guidance and more


Logging for FISMA, Part 2: Detailed FISMA logging guidance in NIST 800-92 and SANS CSC20 The Federal Information Security Management Act of 2002 (FISMA) “requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.”

FISMA How To; Preview EventTracker 7 and more


The Federal Information Security Management Act of 2002 (FISMA) “requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.”

100 Log Management uses #67 Secure Auditing & Solaris


Today we continue our series on Secure Auditing with a look at Solaris and the C2 or BSM (Basic Security Module) option.

Logging for HIPAA Part 2; Secure auditing in Linux


HIPAA Logging HOWTO, Part 2 The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery” (HIPAA Act of 1996 http://www.hhs.gov/ocr/privacy/). A recent enhancement to HIPAA is the Health Information Technology for Economic and Clinical Health (HITECH) Act.

100 Log Management uses #66 Secure Auditing – LAuS


Today we continue our series on Secure Auditing with a look at LAuS, the Linux Audit-Subsystem, a secure auditing implementation for Linux. Red Hat and openSUSE both ship supported implementations, but LAuS is available in the generic Linux kernel as well.

[See post to watch Flash video] -Ananth

HIPAA Logging Howto; New attack bypasses AV protection


The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery”.

Is correlation killing the SIEM market?


Correlation – what’s it good for? Absolutely nothing!*

* Thank you Edwin Starr.

Ok, that might be a little harsh, but hear me out.

The grand vision of Security Information and Event Management is that it will tell you when you are in danger, and the means to deliver this is by sifting mountains of log files looking for trouble signs. I like to think of that as big-C correlation. Big-C correlation is an admirable concept: associating events with importance. But whenever a discussion occurs about correlation – or, for that matter, SIEM – it quickly becomes a discussion about what I call little-c correlation, that is, rules-based multi-event pattern matching.

To its proponents, correlation can detect patterns of behavior so subtle that it would be impossible for an unaided human to do the same. It can deliver the promise of SIEM: telling you what is wrong in a sea of data. Heady stuff indeed, and partially true. But the naysayers have numerous good arguments against it as well; in no particular order, some of the more common ones:

• Rules are too hard to write
• The rule builders supplied by the vendors are not powerful enough
• Users don’t understand the use cases (usually a vendor rebuttal to the above)
• Rules are not “set and forget” and require constant tuning
• Correlation can’t tell you anything you don’t already know (you have to know the condition to write the rule)
• Too many false positives

The proponents reply that this is a technical challenge: the tools will get better and the problem will be conquered. I have a broader concern about correlation (little c), however, and that is just how useful it is to the majority of customer use cases. And if it is not useful, is SIEM, with a correlation focus, really viable?

The guys over at Securosis have been running a series defining SIEM that is really worth a read. The method they recommend for approaching correlation is to look at your last 4-5 incidents when it comes to rule-authoring. Their basic point is that if the goals are modest, you can be modestly successful. OK, I agree, but then how many of the big security problems today are really the ones best served by correlation? Heck, it seems the big problems are people being tricked into downloading and running malware, and correlation is not going to help with that. Education and change detection are both better ways to avoid those types of threats. Nor will correlation help with SQL injection. Most of the classic scenarios for correlation are successful perimeter breaches, but with a SQL attack you are already within the perimeter. It seems to me correlation is potentially solving yesterday’s problems – and doing it, because of technical challenges, poorly.

So, to break down my fundamental issue with correlation: how many incidents 1) are serious, 2) have occurred, 3) cannot be mitigated in some other more reasonable fashion, and 4) are best discovered in the future by detecting a complex pattern?

Not many, I reckon.

No wonder SIEM gets a bad rap on occasion. SIEM will make a user safer, but the means to that end is focused on a flawed concept.

That is not to say correlation does not have its uses – certainly, the bigger and more complex the environment, the more likely you are to have cases where correlation could and does help. In the Fortune 500, the very complexity of the environment can mean other mitigation approaches are less achievable. The classic correlation-focused SEM market started in the large enterprise, but is it a viable approach for the broader market?

Let’s use Prism as an example, as I can speak for the experiences of our customers. We have about 900 customers that have deployed EventTracker, our SIEM solution. These customers are mostly smaller enterprises – what Gartner defines as SME – however, they still purchased predominantly for the classic Gartner use case: the budget came from a compliance drive, but they wanted to use SIEM as a means of improving overall IT security and sometimes operations.

In the case of EventTracker the product is a single integrated solution so the rule-based correlation engine is simply part of the package. It is real-time, extensible and ships with a bunch of predefined rules.

But only a handful of our customers actually use it, and even those who do, don’t do much.

Interestingly enough, most of the customers looked at correlation during evaluation, but when the product went into production only a handful actually ended up writing correlation rules. So the reality was that although they thought they were going to use the capability, few did. A larger number, but still a distinct minority, are using some of the preconfigured correlations, as there are some use cases (such as failed logins on multiple machines from a single IP – see the sketch below) for which a simple correlation rule makes good sense. Even with the packaged rules, however, customers tended to use only a handful, and regardless, these are not the classic “if you see this on a firewall, and this on a server, and this in AD, followed by outbound FTP traffic, you are in trouble” complex correlation examples people are fond of citing.
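For concreteness, here is a minimal sketch of that simple rule – not EventTracker’s rule engine, just the underlying logic, with illustrative field names:

```python
# A minimal sketch of the "failed logons on multiple machines from a
# single IP" rule. Field names are illustrative; a real rule would map
# them from Windows event 4625 (failed logon) or a syslog feed.
from collections import defaultdict

THRESHOLD = 3  # alert when one source IP fails against this many hosts

def correlate(events):
    hosts_by_ip = defaultdict(set)
    for e in events:
        if e["type"] == "logon_failure":
            hosts_by_ip[e["source_ip"]].add(e["target_host"])
    return [ip for ip, hosts in hosts_by_ip.items()
            if len(hosts) >= THRESHOLD]

events = [{"type": "logon_failure", "source_ip": "10.0.0.9",
           "target_host": h} for h in ("dc01", "file01", "web01")]
print(correlate(events))  # ['10.0.0.9']
```

Note that even this is a single-condition aggregation over one event type – a far cry from the multi-source patterns correlation is sold on.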

Our natural reaction was that there was something wrong with the correlation feature, so we went back to the installed base and started nosing about. The common response was: no, nothing wrong, we just never got to it. On further questioning, we surfaced the fact that, for most of the problems they were facing, rules were simply not the best approach.

So we have an industry that, if you agree with my premise, is talking up a core value that is impractical for all but a small minority. We are, as vendors, selling snake oil.

So what does that mean?

Are prospects overweighting correlation capability in their evaluations to the detriment of other features that they will actually use later? Are they setting themselves up to fail with false expectations into what SIEM can deliver?

From a vendor standpoint, are we all spending R&D dollars on capability that is really just demoware? A case in point is correlation GUIs. Lots of R&D dollars go into correlation GUIs because writing rules is too hard and customers are supposedly going to write a lot of them. But the compelling value promised for correlation is the ability to look for highly complex conditions. Inevitably, when you make a development tool simpler, you compromise power in favor of speed of development. In truth, you have not only made it simpler but also stupider and less capable. And if you are seldom writing rules, per the Securosis approach, does it need to be fast and easy at all?

That is not to say SIEM is valueless – SIEM is extremely valuable – but we are focusing on its most difficult and least valuable component, which is really pretty darn strange. There was an interesting and amusing exchange a week or so ago when LogLogic lowered the price of their SEM capability. This, as you might imagine, raised the hackles of the SEM apologists. Rocky uses ArcSight as an example of successful SIEM (although he conveniently talks about SEM as SIEM, and the SIEM use case is broader now than SEM) – but how much ESM is ArcSight selling down-market? I tend to agree with Rocky on the large enterprise, but using that as an indicator of the broad market is dangerous. Plus, the example of our customers above would lead one to believe that people bought for one reason but are using the products in an entirely different way.

So hopefully this will spark some discussion. This is not, and should not be, a slagging match between Log Management, SEM, or SIM, because it seems to me the only real difference between SEM and LM these days is the amount of lip service paid to real-time rules.

So let’s talk about correlation – what is it good for?

-Steve Lafferty

100 Log Management uses #65 Secure Auditing – Introduction


This post introduces the concepts behind secure auditing. In subsequent posts we will look at secure auditing implementations in several of the Unix (Solaris, AIX, HP-UX) and Linux distributions. My apologies that this intro is a bit long at about 10 minutes, but I think the foundation is worthwhile.

SIEM or Log Management?


Mike Rothman of Securosis has a thread titled Understanding and Selecting SIEM/Log Management. He suggests the two disciplines have fused and defines the holy grail of security practitioners as “one alert telling exactly what is broken.” In the ensuing discussion, there is a suggestion that SIEM and Log Management have not fused and that there are vendors that do one but not the other.

After a number of years in the industry, I find myself uncomfortable with either term (SIEM or Log Management) as it relates to the problem the technology can solve, especially for the mid-market, our focus.

The SIEM term suggests it’s only about security, and while that is certainly a significant use case, it’s hardly the only use for the technology. That said, if a user wishes to use the technology for only the security use case, fine, but that is not a reflection of the technology. Oh, and by the way, Security Information Management would perforce include other items, such as change audit and configuration assessment data, which are outside the scope of “Log Management.”

The trouble with the term Log Management is that it is not tied to any particular use case, and that makes it difficult to sell (not to mention boring). Why would you want to manage logs anyway? Users only care about solutions to real problems they have, not generic “best practice” because Mr. Pundit says so.

SIEM makes sense as “the” use case for this technology as you go to large (Fortune 2000) enterprises, where SIEM is often a synonym for correlation. But to do this in any useful way, you need not just the box (real or virtual) but, especially, the expert analyst team to drive it and keep it updated and ticking. What is this analyst team busy with? Updating the rules to accommodate constantly changing elements (threats, business rules, IT components) to get that “one alert.” This is not like antivirus, where rule updates come directly from the vendor with no intervention from the admin or user. This is a model only large enterprises can afford.

Some vendors suggest that you can reduce this to an analyst-in-a-box for the small enterprise: just buy my box, enable these default rules with minimal intervention, and bingo, you will be safe. All too often the results are either irrelevant alerts or a magic box that acts as the dog in the night-time – a major reason for “pissed-off SIEM users.” And of course a dedicated analyst (much less a team) is simply not available.

This is not to say that the technology is useless absent the dedicated analyst, or that SIEM is a lost cause, but rather to paint a realistic picture: any “box” can only go so far by itself, and given the more-with-less needs of the mid-market, obsessing over SIEM features obscures the greater value offered by this technology.

Most medium-enterprise networks are “organically grown architectures” – a response to business needs; there is rarely an overarching security model that covers the assets. Point solutions dominate, based on incidents, perceived threats, or specific compliance mandates. See the results of our virtualization survey, for example. Given the resource constraints, the technology must have broad features beyond the (essential) security ones. The smarter the solution, the less smart the analyst needs to be – so really it’s a box-for-an-analyst (and of course all boxes now ought to be virtual).

It makes sense to ask what problem is solved, as this is the universe customers live in. Mike identifies reacting faster, security efficiency, and compliance automation, to which I would add operations support and cost reduction. More specifically, across the board: show what is happening (track users; monitor critical systems, applications, and firewalls; USB activity; database activity; hypervisor changes; physical equipment; etc.), show what has happened (forensics, reports, etc.), and show what is different (change audit).

So, back to the question: what would you call such a solution? SIEM has been pounded by Gartner et al. into the budget line items of large enterprises, so it is the easier label to get recognized as a need. However, it is a limiting description. If I had only these two choices, I would favor Log Management, of which one (essential) application is SIEM.

-Ananth

PCI HOWTO Part 2; Revised NIST guidelines


PCI Logging HOWTO, Part 2 The Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS continues to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide. Despite its focus on reducing payment card transaction risk, PCI DSS also makes an impact on broader data security, as well as network and application security.

100 Log Management uses #64: Tracking user activity, Part III


Continuing our series on user activity monitoring, today we look at something that is very hard to do in Vista and later, and impossible in XP and earlier: reporting on system idle time. The only way to accomplish this in Windows is to set up a domain policy to lock the screen after a certain amount of time, and then calculate the interval from when the screen saver is invoked to when it is cleared. In XP and prior, the invocation of the screen saver does not generate an event, so you are out of luck. In Vista and later, an event is triggered, so it is slightly better; but even there, the information generated should only be viewed as an estimate, as the method is not fool-proof (a rough sketch of the calculation follows). We’ll look at the pros (few) and cons (many). Enjoy.
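By way of illustration only – the timestamps below are made up, and this assumes the Vista-and-later Windows Security events 4802 (screen saver invoked) and 4803 (screen saver dismissed) – the idle-time estimate amounts to pairing the two events per user and summing the gaps:

```python
# Rough sketch of the idle-time estimate: pair Windows Security events
# 4802 (screen saver invoked) and 4803 (screen saver dismissed) per user
# and sum the gaps. Timestamps are hard-coded for illustration; a real
# script would pull them from the collected event log.
from datetime import datetime

events = [  # (timestamp, event_id) for one user, already sorted
    (datetime(2010, 11, 1, 10, 15), 4802),
    (datetime(2010, 11, 1, 10, 47), 4803),
    (datetime(2010, 11, 1, 14, 0), 4802),
    (datetime(2010, 11, 1, 14, 20), 4803),
]

start = None
total_seconds = 0.0
for ts, event_id in events:
    if event_id == 4802:
        start = ts
    elif event_id == 4803 and start is not None:
        total_seconds += (ts - start).total_seconds()
        start = None

print(f"Estimated idle time: {total_seconds / 60:.0f} minutes")  # 52
```

The pairing logic is also why the number is only an estimate: an unpaired 4802 (say, from a reboot) contributes nothing, and time idle below the screen-saver timeout is never counted.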

Logging for PCI HOWTO; New Trojan masquerades as Adobe update


PCI Logging HOWTO The Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands – Visa, MasterCard, American Express, JCB, and Discover – and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS continues to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide.

100 Log Management uses #63 Tracking user activity, Part II


Today we continue our series on user activity monitoring using event logs. Any analysis of user activity starts with the system logon. We will take a look at some sample events and describe the types of useful information that can be pulled from the log (a simple illustration follows below). While we are looking at user logons, we will also take a short diversion into failed logons. While perhaps not directly useful for activity monitoring, paying attention to failed logon attempts is also critical.
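As a hypothetical taste of what that looks like – the sample record below is invented, though the event IDs are the real Windows ones – a few fields carry most of the value:

```python
# Hypothetical sketch of pulling useful fields from a Windows logon
# event (4624 = success, 4625 = failure on Vista and later; 528/529 on
# older systems). The sample record below is made up.
INTERESTING = ("TargetUserName", "IpAddress", "LogonType", "WorkstationName")

sample_event = {
    "EventID": 4625,
    "TargetUserName": "jdoe",
    "IpAddress": "10.0.0.9",
    "LogonType": "3",          # 3 = network logon, 10 = remote interactive
    "WorkstationName": "LAPTOP07",
}

label = "FAILED" if sample_event["EventID"] in (4625, 529) else "OK"
summary = ", ".join(f"{k}={sample_event.get(k, '?')}" for k in INTERESTING)
print(f"[{label}] {summary}")
```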

100 Log Management uses #62 Tracking user activity


Today we begin a new miniseries: looking at and reporting on user activities. Most enterprises restrict what users are able to do – playing computer games during work hours, for example. This can be done through software that restricts access, but often it is simply enforced on the honor system. Regardless of which approach a company takes, analyzing logs gives a pretty good idea of what users are up to. In the next few sessions we will take a look at the various logs that get generated and what can be done with them.

100 Log Management uses #61: Static IP address conflicts


Today we look at an interesting operational use case for logs that we learned about through painful experience: static IP address conflicts. We have a pretty large number of static IP addresses assigned to our server machines. Typical of a smaller company, we assigned IP addresses and recorded them in a spreadsheet. Well, one of our network guys made a mistake and we ended up having problems with duplicate addresses. The gremlins came out in full force and nothing seemed to be working right! We used logs to quickly diagnose the problem (a sketch of the kind of search involved is below). Although I mention a Windows pop-up as a possible means of being alerted to the problem, I can safely say we did not see it – or if we did, we missed it.
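The diagnosis boils down to searching the collected logs for the conflict symptom. A minimal sketch, with made-up sample lines (Windows reports an address conflict in the System log as event 4199, source Tcpip):

```python
# Illustrative sketch of spotting duplicate-IP symptoms in collected
# logs. Windows records an address conflict in the System log (event
# 4199, source Tcpip); the sample lines below are made up.
import re

log_lines = [
    "Nov 30 09:12:01 host22 Tcpip: EventID 4199: The system detected "
    "an address conflict for IP address 192.168.1.50",
    "Nov 30 09:12:05 host22 sshd: accepted connection",
]

conflict = re.compile(r"address conflict for IP address (\d+\.\d+\.\d+\.\d+)")
for line in log_lines:
    match = conflict.search(line)
    if match:
        host = line.split()[3]  # hostname field in this syslog layout
        print(f"Possible duplicate IP {match.group(1)} reported by {host}")
```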

– By Ananth

Anomaly detection and log management; State of virtualization security and more


Anomaly Detection and Log Management: What We Can (and Can’t) Learn from the Financial Fraud Space Have you ever been in a store with an important purchase, rolled up to the cash register, and handed over your card, only to have it denied? You scramble to think why: “Has my identity been stolen?” “Is there something wrong with the purchase approval network?” “Did I forget to pay my bill?” While all of the above are possible explanations…

100 Log Management uses #60 The top 10 workstation reports that must be reviewed to improve security and prevent outages


In the conclusion of our three part series on monitoring workstations we look at the 10 reports that you should run and review to increase your overall security and prevent outages.

100 Log Management uses #59 – 6 items to monitor on workstations


In part 2 of our series on workstation monitoring, we look at the six things that are in your best interest to monitor – the types of things that, if monitored proactively, will save you money by preventing operational and security problems. I would be very interested if any of you monitor other things that you feel would be more valuable. Hope you enjoy it.

100 Log Management uses #58 The why, how and what of monitoring logs on workstations


Today we are going to start a short series on the value of monitoring logs on Windows workstations. It is commonly agreed that log monitoring on servers is a best practice, but until recently the complexity and expense of log management on workstations made most people shy away. Log monitoring on the workstation is valuable, though, and easy as well, if you know what to look for. These next three blogs will cover the why, how, and what.

SQL injection leaves databases exposed; zero-day flaw responsible for Google hack


Turning log information into business intelligence with relationship mapping Now that we’re past January, most of us have received all of our W2 and 1099 tax forms. We all know that it’s important to keep these forms until we’ve filed our taxes and most of us also keep the forms for seven years after filing in case there is a problem with a previous year’s filing. But how many of us keep those records past the seven year mark? Keeping too much data can be as problematic as not keeping records at all. One of the biggest problems with retention of too much information is that storage needs increase and it becomes difficult to parse through the existing data to find what’s most important.

Sustainable vs. Situational Values


I am often asked: if Log Management is so important to the modern IT department, then why has more than 80% of the market that “should” have adopted it not done so?

The cynic says that unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.

One reason, I think, is that earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome, even in this day and age, after endless punditry and episode after episode have clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times that quotes Dov Seidman, the C.E.O. of LRN, who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back-in-the-day there were no power tools and you dug ditches with your bare hands outside when it was 40 below and tweets were for the birds…but…that was then and this is now.

Get Log Management; it’s a sustainable value.

Ananth

100 Log Management uses #57 PCI Requirement XII


Today we conclude our journey through the PCI standard with a quick look at Requirement 12, which documents the necessity to set up and maintain an information security policy for employees and contractors. While this is mostly a documentation exercise, it does have requirements for monitoring and alerting that log management can certainly help with.

5 cyber security myths, the importance of time synchronization, and more


Time won’t give me time: The importance of time synchronization for Log Management

Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take.

100 Log Management uses #56 PCI Requirements X and XI


Today we look at the grand-daddy of all logging requirements in PCI: Section 10 (specifically, Section 10.5) and Section 11. As with most of PCI, the requirements are fairly clear, and it is hard to understand how someone could accomplish them without log management.
