100 Log Management uses #59 – 6 items to monitor on workstations


In part 2 of our series on workstation monitoring we look at the 6 things it is in your best interest to monitor — the types of things that, if proactively monitored, will save you money by preventing operational and security problems. I would be very interested to hear if any of you monitor other things that you feel are more valuable. Hope you enjoy it.

100 Log Management uses #58 The why, how and what of monitoring logs on workstations


Today we are going to start a short series on the value of monitoring logs on Windows workstations. It is commonly agreed that log monitoring on servers is a best practice, but until recently the complexity and expense of log management on workstations made most people shy away. Log monitoring on the workstation is valuable, however, and easy as well, if you know what to look for. The next three posts will cover the why, how and what.

SQL injection leaves databases exposed; zero-day flaw responsible for Google hack


Turning log information into business intelligence with relationship mapping

Now that we’re past January, most of us have received all of our W2 and 1099 tax forms. We all know that it’s important to keep these forms until we’ve filed our taxes, and most of us also keep the forms for seven years after filing in case there is a problem with a previous year’s filing. But how many of us keep those records past the seven-year mark? Keeping too much data can be as problematic as not keeping records at all. One of the biggest problems with retaining too much information is that storage needs increase and it becomes difficult to parse through the existing data to find what’s most important.

Sustainable vs. Situational Values


I am often asked: if Log Management is so important to the modern IT department, why has more than 80% of the market that “should” have adopted it not done so?

The cynic says that unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.

One reason, I think, is that earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome even in this day and age, after endless punditry and episode after episode have clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times which quotes Dov Seidman, the C.E.O. of LRN, who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back in the day there were no power tools and you dug ditches with your bare hands outside when it was 40 below and tweets were for the birds… but that was then and this is now.

Get Log Management, it’s a sustainable value.

Ananth

100 Log Management uses #57 PCI Requirement XII


Today we conclude our journey through the PCI Standard with a quick look at Requirement 12, which documents the necessity to set up and maintain an information security policy for employees and contractors. While this is mostly a documentation exercise, it does have requirements for monitoring and alerting that log management can certainly help with.

5 cyber security myths, the importance of time synchronization, and more


Time won’t give me time: The importance of time synchronization for Log Management

Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take.

100 Log Management uses #56 PCI Requirements X and XI


Today we look at the grand-daddy of all logging requirements in PCI — Section 10 (specifically, Section 10.5) and Section 11. As with most of PCI, the requirements are fairly clear and it is hard to understand how someone could accomplish them without log management.

100 Log Management uses #55 PCI Requirements VII, VIII & IX


Today we look at PCI-DSS Requirements 7, 8 and 9. In general these are not quite as applicable as the audit requirements in Requirement 10, which we will look at next time, but log management is still useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

New EventTracker 6.4; 15 reasons why your business may be insecure


Tuning Log Management and SIEM for Compliance Reporting

The winter holidays are quickly approaching, and one thing that could probably make most IT Security wish lists is a way to produce automated compliance reports that make auditors say “Wow!” In last month’s newsletter, we took a look at ways to work better with auditors. This month, we’re going to do a deeper dive into tuning log management and SIEM for more effective compliance reporting.

Panning for gold in event logs


Ananth, the CEO of Prism, is fond of remarking that “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the guy with the pencil neck and the 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold – squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt and I feel glad I am not a prospector. At Prism we are in the business of gold extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is they are not quite sure what the gold looks like. When you are panning for gold and you are not sure that glinty thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves “what can we do for the person who does not know enough to really look, or to ask the right questions?” A lot of log management is looking for the out-of-the-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and enable you to analyze them and make a decision. Little things that are interesting – like a new IP address coming into your enterprise 5,000 times; or a user who generally performs 1,000 activities in a day suddenly doing 10,000; or even something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely. But a lot of the manual log review being performed by the guys with the alphabets after their names is really just manually chasing trends – this enables you to stop wasting significant time detecting the trend — all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
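To make the idea concrete: the statistics behind the Enterprise Activity Monitor are EventTracker’s own, but the general technique of baselining activity and flagging outliers can be sketched in a few lines. The example below is purely illustrative (the names, history length and threshold are all invented) and simply flags a user whose daily event count deviates sharply from their historical mean:

```python
import statistics

def find_unusual_activity(daily_history, today, threshold=3.0):
    """Flag users whose activity today deviates sharply from their baseline.

    daily_history: dict of user -> list of past daily event counts
    today: dict of user -> today's event count
    Returns (user, count, z_score) tuples worth investigating.
    """
    unusual = []
    for user, history in daily_history.items():
        if len(history) < 7:                        # too little history to baseline
            continue
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0    # avoid divide-by-zero
        count = today.get(user, 0)
        z = (count - mean) / stdev
        if abs(z) > threshold:
            unusual.append((user, count, z))
    return unusual

# A user who averages ~1,000 events suddenly generates 10,000:
history = {"alice": [980, 1020, 1010, 990, 1000, 1015, 995]}
print(find_unusual_activity(history, {"alice": 10000}))
```

A real implementation also has to handle seasonality (weekends, month-end spikes) and persist its baselines, which is precisely the kind of tuning a product does for you.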

The response from the Beta customers indicates that we are onto something. After all, anything that can make our (hopefully more numerous) customers’ lives less tedious, and their backs hurt less, is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI


Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 talks about using AV software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about developing and maintaining secure systems and applications, for which log management is a great aid.

-By Ananth

100 Log Management uses #53 PCI Requirements III & IV


Today we continue our journey through the Payment Card Industry Data Security Standard (PCI-DSS). We left off last time with Requirement 2, so today we look at Requirements 3 and 4, and how log management can be used to help ensure compliance.

-By Ananth

Tips for working well with auditors; Inside the Walmart breach


Working Well with Auditors

For some IT professionals, the mere mention of an audit conjures painful images of being trussed and stuffed like a Thanksgiving turkey. If you’ve ever been through an audit that you weren’t prepared for, you may harbor your own unpleasant images of an audit process gone wrong. As recently as 10-15 years ago, many auditors were just learning their way around the “new world” of IT, while just as many computer and network professionals were beginning to learn their way around the audit world.

PCI-DSS under the gun


Have you been wondering why some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, yet the PCI guys are still claiming they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you certainly have an ineffective real-world standard, or at the very least a problematic certification process.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and were deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet, at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, although perhaps this was just a bizarre Catch-22 in play: you are compliant until you are hacked, but when you are hacked, the success of the hack makes you non-compliant.

Logically it seems that four things, or some combination of the four, could have occurred at Heartland:

1) The audit could have been inadequate or the results incorrect, leading to a faulty certification.
2) Heartland made a material change to the infrastructure in the intervening month that threw them out of compliance.
3) The hack was accomplished in an area outside the purview of the DSS.
4) Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate litigation-averse response to a problem. Anytime something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one or another (preferably larger) target, because when the manure hits the fan, the person in the very front is going to get covered. Unfortunately this behavior does little to actually solve the problem, as everyone has their lawyers and no one is talking.

Regardless, maybe the PCI Council should not be saying things like “no compliant entity has ever been breached” and instead be asking “perhaps we have a certification issue here?”, “how do we reach continuous compliance?” or even “what are we missing here?”

-Steve Lafferty

100 Log Management uses #52 PCI Requirements I & II – Building and maintaining a secure network


Today’s blog looks at Requirements 1 and 2 of the PCI Data Security Standard, which cover building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring logs coming from security systems.

-By Ananth

100 Log Management uses #51 Complying with PCI-DSS


Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

– By Ananth

Lessons from the Heartland – What is the industry standard for security?


I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, though I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that “lowest common denominator” does not mean minimal or barely adequate; but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days, often with good reason. We have heard, and made, the arguments many times before: compliant does not equal secure, and further, don’t embrace the standard, embrace its spirit or intent. But to be honest, the PCI DSS standard is far from minimal, especially by comparison with most other compliance regulations.

The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks by sophisticated hackers? Well, no, but no single regulation, standard or practice out there will. You can make yourself hard or harder to attack, and PCI-DSS does make it harder, but impossible? No.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? In the case of PCI DSS, I would venture that it is. That there was significant groaning and a lot of work on the part of the industry to implement the standard would lead one to believe that companies were not doing these things before, and that the DSS does not contain many worthless requirements. PCI DSS makes a company take positive steps like running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards were not doing these things prior to the standard, imagine what it was like before.

The second claim is where the real absurdity lies — the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about, exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in terms of budget and resources – which is as little as possible (remember, this is capitalism – profits do matter). By that measure, the actual standard for security is to do as little as possible, for the least amount of money, that avoids being successfully sued, having your executives put in jail, or losing business. Indeed PCI DSS forced companies to do more – emphasis on forced. (So, come to think of it, maybe Heartland did not follow the industry standard, as they are getting sued – but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what others in the industry were doing – the “industry standard”. The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known the limitations; this no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were its customers had as well, and even if they did not, Mr. Carr went so far as to announce it on the call noted above. If his statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – knowing full well that the risk of a data breach and its follow-on expense existed. I thought the nature of risk was that you occasionally lose, and that in business, risk impacts your profits. This lawsuit seems like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)


Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated, and we have examined how log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15, data loss prevention, and examine the many ways logs help in preventing data leakage.

By Ananth

Leverage the audit organization for better security; Bankers gone bad and more


Log Management in virtualized environments

Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time.

IT: Appliance sprawl – Where is the concern?


Over the past few years you have heard an increasing drumbeat in the IT community for server consolidation through virtualization, with all the trumpeted promises of cheaper, greener, more flexible, customer-focused data centers with never a wasted CPU cycle. It is a siren song to all IT personnel, and quite frankly it actually looks like it delivers on a great many of the promises.

Interestingly enough, while reduced CPU wastage, increased flexibility and fewer vendors are all being trumpeted for servers, there continues to be little thought given to the practice of purchasing hardware appliances willy-nilly. Hardware appliances started out as specialized devices built or configured in a certain way to maximize performance. A SAN device is a good example: you might want high-speed dual-port Ethernet and huge disk capacity, with very little requirement for a beefy CPU or memory. These make sense as appliances. Increasingly, however, an appliance is a standard Dell or HP rack-mounted system with an application installed on it, usually on a special Linux distribution. The advantages to the appliance vendor are many and obvious — a single configuration to test, increased customer lock-in, and a tidy upsell potential as the customer finds their event volume growing. From the customer perspective it suffers all the downsides that IT has been trying to get away from: specialized hardware that cannot be re-purposed, more locked-in hardware vendors, excess capacity or not enough, wasted power from all the appliances running; the list goes on and on, and contains the very things that caused the move to virtualization. And the major benefit of appliances? Easy installation seems to be the main one. Provisioning a new machine and installing software might take an hour or so; that is what the end-user saves, and the downstream cost of maintaining a different machine type eats it up in short order.

Shortsighted IT managers still manage to believe that, even as they move aggressively to consolidate servers, it is permissible to buy an appliance, even if it is nothing but a thinly veiled Dell or HP server. This appliance sprawl represents the next clean-up job for IT managers, or will simply eat all the savings they have realized from server consolidation. Instead of 500 servers you have 1 server and 1,000 hardware appliances – what have you really achieved? You have replaced relationships with multiple hardware vendors with relationships with multiple appliance vendors, and worse: when a server blew up, at least it was all Windows/Intel configurations, so in general you could keep the applications up and running. Good luck doing that with a proprietary appliance. This duality in IT organizations reminds me somewhat of people who go to the salad bar and load up on the cheese, nuts, bacon bits and marinated vegetables, then act vaguely surprised when the salad bar regimen has no positive effect.

-Steve Lafferty

100 Log Management Uses #49 Wireless device control (CAG control 14)


We now arrive at CAG Control 14 – Wireless Device Control. For this control, specialized WIDS scanning tools are the primary defense; that, and a lot of configuration policy. This control is primarily a configuration problem, not a log problem. Log management helps in all the standard ways — collecting and correlating data, monitoring for signs of attack, etc. Using EventTracker’s Change component, configuration data in the registry and file system of the client devices can also be collected and alerted on. Generally, depending on how one sets the configuration policy, a change will generate either a log entry or a change in the registry or file system. In this way EventTracker provides a valuable means of enforcement.
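As a rough illustration of the snapshot-and-compare idea (this is not how the Change component is implemented, and the registry path here is only an example), Python’s standard winreg module can watch for new wireless-related registry entries:

```python
import winreg  # standard library, Windows only

# Illustrative path: wireless profiles live under WlanSvc on many Windows
# versions; adjust for your environment.
WLAN_KEY = r"SOFTWARE\Microsoft\WlanSvc\Interfaces"

def list_subkeys(path=WLAN_KEY):
    """Snapshot the subkey names under a registry key (read-only)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        subkey_count, _, _ = winreg.QueryInfoKey(key)
        return {winreg.EnumKey(key, i) for i in range(subkey_count)}

# Take a baseline, then re-check on a schedule; a new subkey appearing here
# could mean a wireless interface or profile was added to the machine.
baseline = list_subkeys()
# ... later ...
for added in list_subkeys() - baseline:
    print(f"ALERT: new wireless registry entry: {added}")
```

A production tool would persist the baseline, watch the file system as well, and feed alerts into the log stream rather than printing them.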

By Ananth

Can you count on dark matter?


Eric Knorr, the Editor in Chief over at InfoWorld, has been writing about “IT Dark Matter,” which he defines as system, device and application logs. It turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors, and hopefully for SIEM vendors like us! He described these logs as “widely distributed and hidden,” which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims, otherwise posts end up being online books. The good thing about that simplification, however, is that it often provides an opportunity to point out other topics of discussion.

There are two great challenges in log management. The first is providing the tools and knowledge to make the log data readily available and useful, which speaks to Eric’s observation that Dark Matter is “hidden”: it is simply too hard to mine without some advanced equipment. The second challenge is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed,” and there is an implied assumption that it is simply there to be mined – that the Dark Matter will and does exist, and even more, that it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – and this fatally weakens their usefulness.

Let’s use a simple illustration we all know well in computer security: almost the first thing a hacker will do once he penetrates a system is shut down logging, or, as soon as he finishes whatever he is doing, delete or alter the logs. Or take the analogy of video surveillance at your local 7/11. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not very useful, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be compromised or simply deleted.
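Securing the record does not have to be exotic. One well-known tamper-evidence technique (offered here as a generic sketch, not as a description of any particular product’s mechanism) is to hash-chain records as they are collected, so that any later edit or deletion breaks every subsequent hash:

```python
import hashlib

GENESIS = b"\x00" * 32   # fixed starting value for the chain

def chain_records(records):
    """Link each log record to its predecessor with a SHA-256 hash.

    Altering or deleting any record changes every digest after it, so the
    chain can be re-verified against a final digest archived elsewhere.
    """
    prev, sealed = GENESIS, []
    for rec in records:
        digest = hashlib.sha256(prev + rec.encode()).digest()
        sealed.append((rec, digest.hex()))
        prev = digest
    return sealed

def verify(sealed):
    prev = GENESIS
    for rec, stored in sealed:
        digest = hashlib.sha256(prev + rec.encode()).digest()
        if digest.hex() != stored:
            return False               # chain broken: record was tampered with
        prev = digest
    return True

logs = ["logon: alice", "logging service stopped", "logon: bob"]
sealed = chain_records(logs)
sealed[1] = ("nothing to see here", sealed[1][1])   # attacker edits a record
print(verify(sealed))                                # -> False
```

The point is not the particular algorithm but the principle: collect the records away from the systems that generate them, and seal them so changes are detectable.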

This is not to imply that there are no useful nuggets to be mined even if the records are distributed. Without an attempt to secure and preserve them, though, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened, simply because 1) it was trash and had no value, and 2) no one worried that someone 1,000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of the record is 1) a matter of luck that it was preserved, and 2) directly inverse to the interest of the creating parties in modifying it.

Steve Lafferty

Security threats from well-meaning employees, new HIPAA requirements, SMB flaw


The threat within: Protecting information assets from well-meaning employees

Most information security experts will agree that employees form the weakest link when it comes to corporate information security. Malicious insiders aside, well-intentioned employees bear responsibility for a large number of breaches today. Whether it’s a phishing scam, a lost USB or mobile device that bears sensitive data, a social engineering attack or a download of unauthorized software, unsophisticated but otherwise well-meaning insiders have the potential to unknowingly open company networks to costly attacks.

100 Log Management Uses #48 Control of ports, protocols and services (CAG control 13)


Today we look at CAG Control 13 – limitation and control of ports, protocols and services. Hackers search for exactly these kinds of things: a software install, for example, may turn on services the installer never imagined might be vulnerable, so it is critical to limit new ports being opened or services installed. It is also a good idea to monitor for abnormal or new behavior indicating that something has escaped internal controls — for instance, a system suddenly broadcasting or receiving network traffic on a new port is suspicious and should be investigated, and new installs or new services being run are likewise worth investigating. We take a look at how log management can help you monitor for such occurrences.
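To give a feel for what new-port detection involves at the host level, here is a minimal sketch using the third-party psutil package (the baseline handling is deliberately naive; a real deployment would persist the baseline and feed alerts into the log stream):

```python
import psutil  # third-party: pip install psutil

def listening_ports():
    """Return the set of (port, process_name) pairs currently listening.

    Note: enumerating all connections may require elevated privileges.
    """
    ports = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            ports.add((conn.laddr.port, name))
    return ports

# Baseline once against a known-good configuration, then compare on a schedule.
baseline = listening_ports()
# ... later ...
for port, proc in listening_ports() - baseline:
    print(f"ALERT: new listening port {port} opened by {proc}")
```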

By Ananth

Doing the obvious – Why efforts like the Consensus Audit Guidelines are valuable


I came across this interesting (and scary, if you are a business person) article in the Washington Post. In a nutshell: pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium-sized companies. They mount a targeted email attack on the finance guys and get them to click on a bogus attachment; when they do so, key-logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.

The article is definitely worth a read for a number of reasons, but two things surprised me: first, businesses do not have the same protection from electronic fraud as consumers do, so the banks don’t monitor commercial account activity as closely; and second, just how often this type of attack is happening. It turns out businesses have only 2 days to report fraudulent activity instead of a consumer’s 60 days, so businesses that suffer a loss usually don’t recover their money.

My first reaction was to ring up our finance guys and tell them about the article. Luckily their overall feel was that since Marketing spent the money as quickly as the Company made it, we were really not too susceptible to this type of attack as we had no money to steal – an unanticipated benefit of a robust (and well paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.

My other thought was how this demonstrates the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking and tackling. CAG Control 12 talks about monitoring for active and updated anti-virus and anti-spyware on all systems. Basic, but it really helps – remember, a business does not have 60 days but 2. You can’t afford to notice the malware a week after the signatures finally get updated.

There are a number of other activities in advanced tools such as EventTracker that can also really help to prevent these attacks – change monitoring, tracking first-time executable launches, monitoring that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day (see the sketch below for a taste of one of them). If you can’t do it all, at least start with the obvious – you might not be safe, but you will be safer.
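As that taste: tracking a first-time executable launch is conceptually simple. The sketch below uses an invented event format; in practice the events would come from parsed process-start audit records (such as Windows Security event 4688), and the seen-set would be persisted between runs:

```python
# Minimal first-seen-executable tracker with a hypothetical event format.
seen = set()   # (host, executable path) pairs already observed

def check_process_start(event):
    """Alert the first time a given executable launches on a given host."""
    key = (event["host"], event["exe_path"].lower())
    if key not in seen:
        seen.add(key)
        print(f"ALERT: first launch of {event['exe_path']} on {event['host']}")

for ev in [
    {"host": "ws-042", "exe_path": r"C:\Windows\System32\notepad.exe"},
    {"host": "ws-042", "exe_path": r"C:\Users\bob\AppData\evil.exe"},
    {"host": "ws-042", "exe_path": r"C:\Windows\System32\notepad.exe"},
]:
    check_process_start(ev)   # alerts on the first two, silent on the repeat
```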

Steve Lafferty

100 Log Management Uses #47 Malware defense (CAG control 12)


Today we continue our journey through the Consensus Audit Guidelines with a look at CAG 12 — Malware Defense. When people think about the pointy end of the stick for Malware prevention they typically think anti-virus, but log management can certainly improve your chances by adding defense in depth. We also examine some of the additional benefits log management provides.

By Ananth

Managing the virtualized enterprise, historic NIST recommendations and more


Every drop in the business cycle brings out the ‘get more value for your money’ strategies. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with fast initial payback that can then be applied to a wide range of other problems. This series looks at how different log management tasks can be applied to solve problems beyond the traditional compliance and security drivers, so that companies can get more value for their IT money.

100 Log Management Uses #46 Account Monitoring (CAG control 11)


Today’s Consensus Audit Guideline control is a good one for logs — account monitoring. Account monitoring should go well beyond simply having a process to get rid of invalid accounts. Today we look at tips and tricks on things to look for in your logs, such as excessive failed access to folders or machines, inactive accounts becoming active, and other outliers that are indicative of an account being hijacked.
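As one concrete example of spotting excessive failed access, a sliding-window counter over failed-logon events per account catches brute-force and hijack attempts. This is a minimal sketch with an invented event format and thresholds; on Windows, the raw material would be Security event 4625 records:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative sliding window
THRESHOLD = 5                    # illustrative: 5 failures in 10 minutes

def flag_failed_logons(events):
    """Yield (timestamp, account) whenever an account exceeds the threshold.

    events: (timestamp, account) failed-logon records, sorted by time.
    """
    recent = defaultdict(list)
    for ts, account in events:
        recent[account].append(ts)
        # keep only the failures still inside the sliding window
        recent[account] = [t for t in recent[account] if ts - t <= WINDOW]
        if len(recent[account]) >= THRESHOLD:
            yield ts, account

# Six failures 30 seconds apart against one service account:
events = [(datetime(2010, 2, 1, 9, 0) + timedelta(seconds=30 * i), "svc-backup")
          for i in range(6)]
for ts, account in flag_failed_logons(events):
    print(f"{ts}: possible hijacking attempt on {account}")
```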

By Ananth

100 Log Management Uses #45 Continuous vulnerability testing and remediation (CAG control 10)


Today we look at CAG Control 10 — continuous vulnerability testing and remediation. For this control, vulnerability scanning tools like Rapid7 or Tenable are the primary solutions, so how do logs help here? The reality is that most enterprises can’t patch critical infrastructure constantly. There is often a fairly lengthy gap between when a vulnerability becomes known and when the fix is applied, and during that window it becomes even more important to monitor logs for system access, anti-virus status, changes in configuration and more.
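One way to make that window visible is to join scanner findings with a remediation SLA and flag the hosts whose vulnerabilities have stayed open too long; those are the machines whose logs deserve the closest watch. A minimal sketch, with all data and field names invented:

```python
from datetime import date, timedelta

GRACE = timedelta(days=30)   # illustrative remediation SLA

# Hypothetical scanner export: host -> date the vulnerability was first seen.
open_vulns = {
    "db-01":  date(2010, 1, 2),
    "web-03": date(2010, 1, 28),
}

def overdue_hosts(findings, today):
    """Hosts whose oldest open finding has exceeded the grace period."""
    return {host: today - seen for host, seen in findings.items()
            if today - seen > GRACE}

for host, age in overdue_hosts(open_vulns, date(2010, 2, 15)).items():
    print(f"{host}: vulnerability open {age.days} days - watch its logs closely")
```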

By Ananth

100 Log Management Uses #44 Data access (CAG control 9)


We continue our journey through the Consensus Audit Guidelines and today look at Control 9 – data access on a need-to-know basis. Logs help with monitoring the enforcement of these policies; user activities such as file and folder access, and the trends in them, should all be watched closely.
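As a tiny illustration of watching folder access, flagging the first time a user touches a folder they have never accessed before is a simple starting point. The event fields here are invented; on Windows, such records come from object-access auditing (e.g. Security event 4663), and the history would be persisted:

```python
from collections import defaultdict

history = defaultdict(set)   # user -> folders previously accessed

def check_access(user, folder):
    """Note first-time folder access; repeated access stays quiet."""
    if folder not in history[user]:
        history[user].add(folder)
        print(f"NOTICE: {user} accessed {folder} for the first time")

check_access("carol", r"\\fileserver\engineering")   # expected for her role
check_access("carol", r"\\fileserver\payroll")       # outlier worth a look
check_access("carol", r"\\fileserver\engineering")   # repeat: no notice
```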

By Ananth