PCI-DSS under the gun

Have you noticed that some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, yet the PCI folks still claim they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you certainly have an ineffective real-world standard, or at the very least a problematic certification process.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and were deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has ever been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, although perhaps this was just a bizarre Catch-22 in play – you are compliant until you are hacked, but once you are hacked, the very success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland:

1) The audit could have been inadequate or the results incorrect, leading to a faulty certification.
2) Heartland made a material change to its infrastructure in the intervening month that threw it out of compliance.
3) The hack was accomplished in an area outside the purview of the DSS.
4) Ms. Richey of Visa (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate litigation-averse response to a problem. Any time something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one target or another (preferably a larger one), because when the manure hits the fan, the person out in front is going to get covered. Unfortunately, this behavior does not do much to foster solving the problem, as everyone has lawyered up and no one is talking.

Regardless, maybe the PCI camp should not be saying things like “no compliant entity has ever been breached” and should instead be saying something like “perhaps we have a certification issue here,” or “how do we reach continuous compliance?” or even “what are we missing here?”

-Steve Lafferty

100 Log Management uses #51 Complying with PCI-DSS

Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

– By Ananth

Lessons from the Heartland – What is the industry standard for security?

I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, but I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that “lowest common denominator” does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days, and often with good reason. We have heard, and made, the arguments many times before: compliant does not equal secure, and further, don’t embrace the letter of the standard, embrace its spirit or intent. But to be honest, the PCI DSS standard is far from minimal, especially by comparison to most other compliance regulations.

The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks by sophisticated hackers? Well, no, but there is no single regulation, standard or practice out there that will. You can make a successful attack harder, and PCI-DSS does make it harder, but impossible? No.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? In the case of PCI DSS, I would venture that it is. The fact that there was significant groaning, and a lot of work on the industry’s part to implement the standard, suggests both that companies were not doing these things before and that the DSS does not contain many worthless requirements. PCI DSS makes a company take positive steps: run vulnerability scans, examine logs for signs of intrusion, encrypt data. If the companies handling credit cards were not doing these things prior to the standard, imagine what the state of security was like before it.

The second claim is where the real absurdity lies – the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about, exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare or beg the executives into providing in terms of budget and resources – which is as little as possible (remember, this is capitalism – profits do matter). By that measure, the actual standard for security is to do as little as possible, for the least amount of money, that still avoids being successfully sued, having your executives put in jail or losing business. Indeed, PCI DSS forced companies to do more – emphasis on forced. (So, come to think of it, maybe Heartland did not follow the industry standard after all, since they are getting sued – but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing – indeed, that it did not even do what other people in the industry were doing, the supposed “industry standard.” The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known its limitations, and that no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were customers of Heartland had as well – and even if they did not, Mr. Carr went so far as to announce it on the call noted above. If his statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist that Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – knowing full well that the risk of a data breach, and the follow-on expense, existed. I thought the nature of risk was that you occasionally lose, and that in business, risk impacts your profits. This lawsuit seems a lot like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or you get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)

Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated and examined how log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15 – data loss prevention – and examine the many ways logs help in preventing data leakage.

By Ananth

IT: Appliance sprawl – Where is the concern?

Over the past few years there has been an increasing drumbeat in the IT community for server consolidation through virtualization, with all the trumpeted promises of cheaper, greener, more flexible, customer-focused data centers and never a wasted CPU cycle. It is a siren song to all IT personnel, and quite frankly it actually looks like it delivers on a great many of the promises.

Interestingly enough, while reduced CPU wastage, increased flexibility and fewer vendors are all being trumpeted for servers, there continues to be little thought given to the willy-nilly purchase of hardware appliances. Hardware appliances started out as specialized devices, built or configured in a certain way to maximize performance. A SAN device is a good example: you might want high-speed dual-port Ethernet and huge disk capacity, with very little requirement for a beefy CPU or memory. Those make sense as appliances. Increasingly, however, an appliance is a standard Dell or HP rack-mounted system with an application installed on it, usually on a special Linux distribution.

The advantages to the appliance vendor are many and obvious – a single configuration to test, increased customer lock-in, and a tidy upsell potential as the customer finds their event volume growing. From the customer’s perspective, it suffers all the downsides that IT has been trying to get away from: specialized hardware that cannot be re-purposed, more locked-in hardware vendors, excess capacity or not enough, wasted power from all the appliances running – the list goes on and on, and contains the very things that caused the move to virtualization in the first place. And the major benefit of appliances? Ease of installation seems to be the big one. But provisioning a new machine and installing software might take an hour or so; that is all the end user saves, and the downstream cost of maintaining a different machine type eats it up in short order.

Shortsighted IT managers still manage to believe that, even as they move aggressively to consolidate servers, it is permissible to buy an appliance even if it is nothing but a thinly veiled Dell or HP server. This appliance sprawl represents the next clean-up job for IT managers, or it will simply eat all the savings they have realized from server consolidation. Instead of 500 servers you have one server and 1,000 hardware appliances – what have you really achieved? You have replaced relationships with multiple hardware vendors with relationships with multiple appliance vendors, and worse, when a server blew up, at least it was all Windows/Intel configurations, so in general you could keep the applications up and running. Good luck doing that with a proprietary appliance. This duality in IT organizations reminds me somewhat of people who go to the salad bar, load up on the cheese, nuts, bacon bits and marinated vegetables, then act vaguely surprised when the salad bar regimen has no positive effect.

-Steve Lafferty

100 Log Management Uses #49 Wireless device control (CAG control 14)

We now arrive at CAG Control 14 – Wireless Device Control. For this control, specialty WIDS scanning tools are the primary defense – that, and a lot of configuration policy. This control is primarily a configuration problem, not a log problem. Log management helps in all the standard ways: collecting and correlating data, monitoring for signs of attack, and so on. Using EventTracker’s Change component, configuration data in the registry and file system of the client devices can also be collected and alerted on. Generally, depending on how the configuration policy is set, a change will generate either a log entry or a change in the registry or file system. In this way EventTracker provides a valuable means of enforcement.
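
To make the idea concrete – this is only an illustration of the general approach, not how EventTracker’s Change component is implemented – a minimal Python sketch that baselines a handful of client configuration files and flags any drift might look like this. The watched paths and baseline location are placeholders:

```python
import hashlib
import json
import os

# Hypothetical client configuration files to watch (placeholder paths).
WATCHED = [
    r"C:\ProgramData\Example\wlan_policy.xml",
    "/etc/wpa_supplicant/wpa_supplicant.conf",
]
BASELINE = "config_baseline.json"

def fingerprint(paths):
    """Return {path: sha256-of-contents} for every watched file that exists."""
    result = {}
    for path in paths:
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                result[path] = hashlib.sha256(fh.read()).hexdigest()
    return result

def check():
    current = fingerprint(WATCHED)
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as fh:
            json.dump(current, fh, indent=2)
        print("Baseline recorded.")
        return
    with open(BASELINE) as fh:
        baseline = json.load(fh)
    for path in set(baseline) | set(current):
        if baseline.get(path) != current.get(path):
            print(f"ALERT: configuration drift detected in {path}")

if __name__ == "__main__":
    check()
```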

By Ananth

Can you count on dark matter?

Eric Knorr, the editor in chief over at InfoWorld, has been writing about “IT Dark Matter,” which he defines as system, device and application logs. It turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors – and hopefully for SIEM vendors like us! He describes these logs, or Dark Matter, as “widely distributed and hidden,” which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims, otherwise posts end up being online books. The good thing about that simplification, however, is that it often provides an opportunity to point out other topics of discussion.

There are two great challenges in log management. The first is providing the tools and knowledge to make the log data readily available and useful, which speaks to Eric’s point that Dark Matter is “hidden” – it is simply too hard to mine without some advanced equipment. The second challenge is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed,” and there is an implied assumption that it is just sitting there waiting to be mined – that the Dark Matter will and does exist, and, even more, that it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – and this fatally weakens their usefulness.

Consider a simple illustration we all know well in computer security: almost the first thing hackers will do once they penetrate a system is shut down logging, and as soon as they finish whatever they are doing, they delete or alter the logs. Or take the analogy of video surveillance at your local 7-Eleven. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not very, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be tampered with or simply deleted.

This is not to imply that there are no useful nuggets to be mined even when the records are merely distributed. But without an effort to secure and preserve them, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened, simply because 1) it was trash and had no value, and 2) no one worried that someone 1,000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of such a record is 1) a matter of luck that it was preserved at all, and 2) inversely proportional to the interest the parties who created it have in modifying it.
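
As an aside, even a little effort toward preserving the record goes a long way. One simple way to make an archived log tamper-evident – purely a sketch of the concept, not any product’s implementation – is to chain a hash through the entries so that altering any one of them breaks every link that follows:

```python
import hashlib

def chain_logs(lines, seed="log-chain-seed"):
    """Yield (entry, chained_hash) pairs; altering any earlier entry
    changes every hash that follows it."""
    prev = hashlib.sha256(seed.encode()).hexdigest()
    for line in lines:
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        yield line, prev

def verify(pairs, seed="log-chain-seed"):
    """Recompute the chain; return the index of the first tampered entry, or None."""
    prev = hashlib.sha256(seed.encode()).hexdigest()
    for i, (line, recorded) in enumerate(pairs):
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        if prev != recorded:
            return i
    return None

if __name__ == "__main__":
    entries = ["user bob logged in", "config changed", "user bob logged out"]
    archive = list(chain_logs(entries))
    print(verify(archive))                            # None -> chain intact
    archive[1] = ("config unchanged", archive[1][1])  # simulate tampering
    print(verify(archive))                            # 1 -> tampering detected
```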

-Steve Lafferty

100 Log Management Uses #48 Control of ports, protocols and services (CAG control 13)

Today we look at CAG Control 13 – limitation and control of ports, protocols and services. Hackers search for exactly these kinds of things – a software install, for example, may turn on services the installer never imagined might be vulnerable – so it is critical to limit new ports being opened or new services being installed. It is also a good idea to monitor for abnormal or new behavior that indicates something has escaped internal controls. For instance, a system suddenly broadcasting or receiving network traffic on a new port is suspicious and should be investigated, and new installs or new services being run are also worth investigating. We will take a look at how log management can help you monitor for such occurrences.
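
To illustrate the kind of check involved – a rough sketch only, using the third-party psutil library rather than any particular product’s collector – a scheduled script can snapshot the ports a host is listening on and complain when something new appears:

```python
import psutil  # third-party: pip install psutil

BASELINE_FILE = "listening_ports.baseline"

def listening_ports():
    """Return the set of ports currently in a LISTEN state.
    May require elevated privileges to see every process's sockets."""
    ports = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr:
            ports.add(conn.laddr.port)
    return ports

def check():
    current = listening_ports()
    try:
        with open(BASELINE_FILE) as fh:
            baseline = {int(line) for line in fh if line.strip()}
    except FileNotFoundError:
        # First run: record the baseline instead of alerting.
        with open(BASELINE_FILE, "w") as fh:
            fh.writelines(f"{p}\n" for p in sorted(current))
        print("Baseline recorded:", sorted(current))
        return
    new_ports = current - baseline
    if new_ports:
        print("ALERT: new listening ports:", sorted(new_ports))

if __name__ == "__main__":
    check()
```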

By Ananth

Doing the obvious – Why efforts like the Consensus Audit Guidelines are valuable

I came across an interesting (and, if you are a business person, scary) article in the Washington Post. In a nutshell, pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium-sized companies. They run a targeted email attack on the finance guys and get them to click on a bogus attachment; when they do so, key-logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.

The article is definitely worth a read for a number of reasons, but two things surprised me: first, businesses do not have the same protection from electronic fraud as consumers do, so the banks don’t monitor commercial account activity as closely; and second, just how often this type of attack is happening. It turns out businesses have only two days to report fraudulent activity instead of a consumer’s 60 days, so businesses that suffer a loss usually don’t recover their money.

My first reaction was to ring up our finance guys and tell them about the article. Luckily, their overall feeling was that since Marketing spent the money as quickly as the company made it, we were really not too susceptible to this type of attack, as we had no money to steal – an unanticipated benefit of a robust (and well-paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.

My other thought was how this demonstrates the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking and tackling. CAG Control 12 calls for monitoring that anti-virus and anti-spyware are active and up to date on all systems. Basic, but it really helps – remember, a business does not have 60 days but two. You can’t afford to notice the malware a week after the signatures finally get updated.

There are a number of other capabilities in advanced tools such as EventTracker that can also really help prevent these attacks – change monitoring, tracking first-time executable launches, verifying that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day. If you can’t do it all, at least start with the obvious: you might not be safe, but you will be safer.
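
To make the AV point concrete, here is a minimal sketch of that kind of check. The process name and signature directory below are placeholders for whatever AV product is actually deployed, and a real deployment would feed the alert into a log management console rather than print it:

```python
import datetime
import os
import psutil  # third-party: pip install psutil

AV_PROCESS_NAME = "example_av_service.exe"        # placeholder process name
SIGNATURE_DIR = r"C:\ProgramData\ExampleAV\Defs"  # placeholder signature folder
MAX_SIGNATURE_AGE_DAYS = 2                        # a business has 2 days, not 60

def av_running():
    """True if the (placeholder) AV process is currently running."""
    return any(p.info["name"] and p.info["name"].lower() == AV_PROCESS_NAME
               for p in psutil.process_iter(["name"]))

def signatures_fresh():
    """True if the newest signature file is recent enough."""
    if not os.path.isdir(SIGNATURE_DIR):
        return False
    files = [os.path.join(SIGNATURE_DIR, f) for f in os.listdir(SIGNATURE_DIR)]
    files = [f for f in files if os.path.isfile(f)]
    if not files:
        return False
    newest = max(os.path.getmtime(f) for f in files)
    age = datetime.datetime.now() - datetime.datetime.fromtimestamp(newest)
    return age.days <= MAX_SIGNATURE_AGE_DAYS

if __name__ == "__main__":
    if not av_running():
        print("ALERT: anti-virus process is not running")
    if not signatures_fresh():
        print("ALERT: anti-virus signatures appear stale")
```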

-Steve Lafferty

100 Log Management Uses #47 Malware defense (CAG control 12)

Today we continue our journey through the Consensus Audit Guidelines with a look at CAG 12 – malware defense. When people think about the pointy end of the stick for malware prevention, they typically think anti-virus, but log management can certainly improve your chances by adding defense in depth. We also examine some of the additional benefits log management provides.

By Ananth

100 Log Management Uses #46 Account Monitoring (CAG control 11)

Today’s Consensus Audit Guidelines control is a good one for logs – account monitoring. Account monitoring should go well beyond simply having a process to get rid of invalid accounts. We look at tips and tricks on things to watch for in your logs, such as excessive failed access to folders or machines, inactive accounts suddenly becoming active, and other outliers that indicate an account has been hijacked.
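
For a flavor of the outlier logic, here is a small, hypothetical sketch that reads a CSV export of logon events and flags both accounts with excessive failures and dormant accounts that suddenly come back to life. The column names and thresholds are made up for illustration:

```python
import csv
from collections import defaultdict
from datetime import datetime, timedelta

FAILURE_THRESHOLD = 10          # failed logons per account worth a look (illustrative)
DORMANCY = timedelta(days=90)   # an account quiet this long, then active, is suspect

def analyze(csv_path):
    failures = defaultdict(int)
    logons = defaultdict(list)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):   # assumed columns: account,timestamp,result
            when = datetime.fromisoformat(row["timestamp"])
            if row["result"].lower() == "failure":
                failures[row["account"]] += 1
            else:
                logons[row["account"]].append(when)

    for account, count in failures.items():
        if count >= FAILURE_THRESHOLD:
            print(f"ALERT: {count} failed logons for {account}")

    for account, times in logons.items():
        times.sort()
        for earlier, later in zip(times, times[1:]):
            if later - earlier >= DORMANCY:
                print(f"ALERT: {account} dormant since {earlier:%Y-%m-%d}, "
                      f"active again {later:%Y-%m-%d}")

if __name__ == "__main__":
    analyze("logon_events.csv")
```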

By Ananth

100 Log Management Uses #45 Continuous vulnerability testing and remediation (CAG control 10)

Today we look at CAG Control 10 – continuous vulnerability testing and remediation. For this control, vulnerability scanning tools like Rapid7 or Tenable are the primary solutions, so how do logs help here? The reality is that most enterprises can’t patch critical infrastructure on a constant basis. There is often a fairly lengthy gap between when a vulnerability becomes known and when the fix is applied, and so it becomes even more important to monitor logs for system access, anti-virus status, changes in configuration and more.

By Ananth

100 Log Management Uses #42 Administrator privileges and activities (CAG control 8)

Today’s CAG control is a good one for logs – monitoring administrator privileges and activities. As you can imagine, when an admin account is hacked, or when an admin goes rogue, the impact of the breach can be devastating because of the power those accounts hold. Luckily, most admin activity is logged, so by analyzing the logs you can do a pretty good job of detecting problems.
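
As a taste of how much of this is already sitting in the logs: Windows records event ID 4732 (“a member was added to a security-enabled local group”) whenever someone is added to, say, the local Administrators group. The sketch below simply shells out to the built-in wevtutil tool to pull the most recent ones – run it from an elevated prompt; a SIEM automates and correlates this for you:

```python
import subprocess

# Event 4732: a member was added to a security-enabled local group
# (e.g., someone being added to Administrators). Requires admin rights to read
# the Security log.
QUERY = "*[System[(EventID=4732)]]"

def recent_group_additions(count=10):
    """Return the newest matching events from the Security log as text."""
    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{QUERY}",
         "/f:text", f"/c:{count}", "/rd:true"],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(recent_group_additions())
```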

By Ananth

100 Log Management Uses #38 Meeting CAG controls 3 & 4

Today we continue our look at the Consensus Audit Guidelines, in this case CAG Controls 3 and 4, which cover maintaining secure configurations on systems and network devices. We take a look at how log and configuration monitoring can ensure that configurations remain secure by detecting changes from the secured state.

By Ananth

100 Log Management uses #36 Meeting the Consensus Audit Guidelines (CAG)

Today we are going to begin another series on a standard that leverages logs. The Consensus Audit Guidelines, or CAG for short, is a joint initiative of SANS and a number of federal CIOs and CISOs to put in place lower-level guidelines for FISMA. One of the criticisms of FISMA is that it is very vague, and implementation can differ widely from agency to agency. The CAG is a series of recommendations that make it easier for IT to make measurable improvements in security by knocking off some low-hanging targets. There are 20 recommended controls in the CAG, and 15 of them can be automated. Over the next few weeks we will look at each one. Hope you enjoy it.

By Ananth

100 Log Management uses #34 Error handling in the web server

Today we conclude our series on OWASP vulnerabilities with a look at A6 – error handling in the web server. Careless configuration – or no configuration at all – of error handling in a web server gives a hacker quite a lot of useful information about the structure of your web application. While careful configuration can take care of many issues, hackers will still probe your application, deliberately triggering error conditions to see what information is there to be had. In this video we look at how you can use web server logs to detect whether you are being probed by a potential hacker.
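
As a simple illustration of the technique, the sketch below walks an access log in the common/combined format and flags client IPs that rack up an unusual number of 4xx/5xx responses – the trail a deliberate error-probing session tends to leave. The threshold and log path are placeholders:

```python
import re
from collections import Counter

# Matches the client IP and HTTP status code in common/combined log format lines.
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

ERROR_THRESHOLD = 50   # error responses from one IP worth investigating (illustrative)

def suspicious_clients(log_path):
    error_counts = Counter()
    with open(log_path, errors="replace") as fh:
        for line in fh:
            match = LINE.match(line)
            if match and match.group(2).startswith(("4", "5")):
                error_counts[match.group(1)] += 1
    return [(ip, n) for ip, n in error_counts.most_common() if n >= ERROR_THRESHOLD]

if __name__ == "__main__":
    for ip, count in suspicious_clients("access.log"):
        print(f"ALERT: {ip} triggered {count} error responses")
```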

-By Ananth

Compromise to discovery

The Verizon Business Risk Team publishes a useful Data Breach Investigations Report drawn from over 500 forensic engagements over a four-year period.

The report describes the “Time Span of Breach,” broken into four stages of an attack. These are:

– Pre-Attack Research
– Point of Entry to Compromise
– Compromise to Discovery
– Discovery to Containment

The top two are under the control of the attacker, but the rest are under the control of the defender, and discovery is where log management is particularly useful. So what does the 2008 version of the DBIR show about the time from Compromise to Discovery? Months. Sigh. Worse yet, in 70% of the cases, discovery meant the victim being notified by someone else.

Conclusion? Most victims do not have sufficient visibility into their own networks and equipment.

Gaining that visibility is not hard, but it is tedious. The tedium can be relieved, for the most part, by a one-time setup and configuration of a log management system. It is perhaps not the most exciting project you can think of, but it is hard to beat for effectiveness and return on investment.
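
For what it is worth, even the first step – getting logs off the box and onto a central collector where an attacker cannot quietly edit them and where someone might actually review them – is a small job for most applications. A minimal sketch using Python’s standard SysLogHandler (the collector address is a placeholder, not a real server):

```python
import logging
import logging.handlers

# Placeholder address for a central syslog / log-management collector.
COLLECTOR = ("logserver.example.com", 514)

def get_remote_logger(name="myapp"):
    """Return a logger whose records are shipped off-host via syslog (UDP by default)."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=COLLECTOR)
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

if __name__ == "__main__":
    log = get_remote_logger()
    log.info("application started")           # shipped off-host immediately
    log.warning("3 failed logins for admin")  # available for central review
```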

Ananth