Archive

PCI-DSS under the gun

Have you noticed that some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, yet the PCI guys still claim they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you have, at the very least, a problematic certification process, if not an ineffective real-world standard.

Not to pick on Heartland again, but Heartland passed its PCI-mandated audit and was deemed compliant by a certified PCI auditor approximately one month before the now-infamous hack. Yet at Visa’s Global Security Summit in Washington in March, Visa officials were adamant that no PCI-compliant organization has ever been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, though perhaps this was just a bizarre Catch-22 in play: you are compliant until you are hacked, but once you are hacked, the very success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland: 1) the audit was inadequate or its results incorrect, leading to a faulty certification; 2) Heartland made a material change to its infrastructure in the intervening month that took it out of compliance; 3) the hack was accomplished in an area outside the purview of the DSS; or 4) Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic litigation-averse corporate response to a problem. Any time something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one of those targets (preferably a larger one), because when the manure hits the fan, whoever is in front gets covered. Unfortunately, this behavior does little to foster actually solving the problem, since everyone has lawyered up and no one is talking.

Regardless, maybe the PCI Council should stop saying things like “no compliant entity has ever been breached” and start asking “do we have a certification issue here?”, “how do we reach continuous compliance?” or even “what are we missing?”

-Steve Lafferty

100 Log Management Uses #52 PCI Requirements I & II – Building and maintaining a secure network

Today’s blog looks at Requirements 1 and 2 of the PCI Data Security Standard, which cover building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring the logs coming from your security systems.

-By Ananth

100 Log Management Uses #51 Complying with PCI-DSS

Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

-By Ananth

Lessons from the Heartland – What is the industry standard for security?

I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, though I suspect the intended audience was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that “lowest common denominator” does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days, often with good reason. We have heard, and made, the arguments many times before: compliant does not equal secure, and don’t embrace the letter of the standard, embrace its spirit or intent. But to be honest, the PCI DSS is far from minimal, especially by comparison to most other compliance regulations.

The issue with standards has always been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks by sophisticated hackers? Well, no, but no single regulation, standard, or practice out there will. You can make yourself harder to attack, and PCI-DSS does make it harder, but impossible? No.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? In the case of PCI DSS, I would venture that it is. The significant groaning, and the considerable work the industry had to put into implementing the standard, suggest two things: companies were not doing these things before, and the DSS does not contain a lot of worthless requirements. PCI DSS makes a company take positive steps such as running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards were not doing these things prior to the standard, imagine what it was like before.

The second claim is where the real absurdity lies: the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about, exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in budget and resources, which is as little as possible (remember, this is capitalism; profits do matter). By that measure, the actual standard for security is to do as little as possible, for the least amount of money, while still avoiding being successfully sued, having your executives jailed, or losing business. PCI DSS did force companies to do more, with the emphasis on forced. (So, come to think of it, maybe Heartland did not follow the industry standard after all, as they are getting sued, but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. Taken together, the statements imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what others in the industry were doing, the “industry standard.” In reality, anyone with a basic knowledge of cyber security and the PCI DSS would have known its limitations, no doubt including many, many people on the staffs of the banks that are now suing. So whatever knowledge Heartland had, its bank customers had as well, and even if they did not, Mr. Carr went so far as to announce it on the earnings call noted above. If his statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks took a risk getting into the credit card issuing business, and no doubt made a nice sum of money, knowing full well that the risk of a data breach and its follow-on expense existed. I thought the nature of risk was that you occasionally lose, and that business risk impacts your profits. This lawsuit looks like the recent financial bailout: the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)

Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated and examined how log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15, data loss prevention, and examine the many ways logs help prevent data leakage.

-By Ananth

Leverage the audit organization for better security, bankers gone bad, and more

Log Management in virtualized environments

Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time. Since we were using hardcoded IP addresses for the internal desktops, I could tell, just by looking at the log information, which person or device inside the company was doing what out on the Internet. If someone outside the company was performing a ping sweep, I saw the evidence in the log file and could respond immediately. This system worked fine for a couple of months. Then we installed a firewall, and a new mail server, and distribution servers in the DMZ, and, well, you get the idea. There was more log information than a single human could parse, not to mention that while I worked a 50-hour week, the log files were on a 168-hour-per-week schedule.
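For the curious, that old workflow is easy to recreate. Below is a minimal Python sketch of the tail-the-syslog approach, assuming a conventional /var/log/syslog path; the IP-to-owner table is hypothetical, standing in for our hardcoded desktop addresses.

    import re
    import time

    # Hypothetical map of hardcoded internal IPs to their owners
    OWNERS = {
        "10.0.0.11": "alice's desktop",
        "10.0.0.12": "bob's desktop",
        "10.0.0.20": "build server",
    }

    IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def follow(path):
        """Yield lines as they are appended to a log file, like 'tail -f'."""
        with open(path) as f:
            f.seek(0, 2)  # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)  # wait for new log data
                    continue
                yield line.rstrip()

    for line in follow("/var/log/syslog"):
        for ip in IP_RE.findall(line):
            owner = OWNERS.get(ip, "unknown host")
            print(f"[{owner}] {line}")

Of course, this is exactly the approach that stops scaling the moment log volume outgrows one pair of eyes, which is the point of what follows.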

While my example may seem almost laughably archaic to many, we’re seeing a similar data overload phenomenon in today’s data centers and network operations centers (NOCs). Log management systems that were installed a few years ago to handle 100 servers and applications can’t scale to today’s needs. What started out as a few gigabytes of log information per week is now a terabyte a day. One reason for the log information explosion is that as companies become comfortable with the technology, they expand the scope of log monitoring coverage. Another significant driving factor: virtualization and the advent of the virtualized data center.

Virtualization brings new challenges to network monitoring and log management. It enables administrators and users to install multiple unique server instances on a single piece of hardware. The result is a marked increase in server and application installs and a corresponding increase in server and application log data. Beyond sheer volume, virtualization presents a few further challenges.

Inter-VM traffic refers to data moving between virtual machines running on the same physical machine under a single hypervisor. Because this traffic never leaves the physical device, it will not be seen by monitoring solutions that rely on physical network monitoring points such as SPAN or mirror ports. Monitoring solutions installed directly on hosts will log the device’s information, but if there is just one agent on the host and it is not integrated with the hypervisor itself, inter-VM data transfers can still be missed. An alternative is to install agents on each virtual machine. Keep in mind, however, that this could affect corporate use licenses by increasing the total number of agent installs, and for companies that want an entirely agent-less solution it won’t work. Some additional alternatives for inter-VM traffic monitoring are presented below.
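Where per-VM agents are acceptable, the agent itself can be quite simple. Here is a minimal sketch of a guest-resident forwarder, assuming a central collector at the hypothetical address logs.example.com listening for syslog over UDP; the message format is simplified (a full RFC 3164 message would also carry a timestamp).

    import socket

    # Hypothetical central collector; standard syslog-over-UDP port
    COLLECTOR = ("logs.example.com", 514)

    def forward(message, hostname, facility=1, severity=6):
        """Send one syslog datagram (PRI = facility * 8 + severity)."""
        pri = facility * 8 + severity
        datagram = f"<{pri}>{hostname}: {message}".encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(datagram, COLLECTOR)
        sock.close()

    # Example: forward a locally observed inter-VM connection event
    forward("inter-VM connection accepted from 192.168.50.7:443", "vm-web-01")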

What else changes in the virtualized data environment? Zone-based policy enforcement might. Consider databases: these are often repositories of sensitive information and are approved for installation only in protected network zones. Virtualization allows organizations to move servers and applications quickly between locations and zones using VMotion functionality. The problem comes when VMotion is used to move a service or server into a zone or location with an incompatible protection policy. Think of a database of healthcare information VMotioned from a high-sensitivity zone into a DMZ. Log management can help here by alerting administrators when a system or service is moved to a zone with a different policy control level. To do this, the log management solution must have access to VMotion activity information. VMware provides a migration audit trail that can be fed into an organization’s log management console.
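To illustrate, the alerting logic over such a migration audit trail can be a simple rule. The sketch below assumes a hypothetical event record and zone-sensitivity table; the actual field names in VMware’s audit trail will differ.

    # Hypothetical zone sensitivity levels; higher means more protected
    ZONE_LEVEL = {"pci-zone": 3, "internal": 2, "dmz": 1}

    def check_migration(event):
        """Alert when a VM moves to a zone with a weaker protection policy."""
        src = ZONE_LEVEL.get(event["source_zone"], 0)
        dst = ZONE_LEVEL.get(event["dest_zone"], 0)
        if dst < src:
            print(f"ALERT: {event['vm']} moved from {event['source_zone']} "
                  f"(level {src}) to {event['dest_zone']} (level {dst})")

    # Example: a sensitive database migrated into the DMZ
    check_migration({"vm": "patient-db-01",
                     "source_zone": "pci-zone", "dest_zone": "dmz"})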

So how do we perform comprehensive log management in virtualized environments? First, it’s critical to remove the inter-VM “blind spot.” One option has already been discussed: installing host-based log management agents on every virtual machine instance. If that’s not a good fit for your company, consider a log management or security information and event management solution with hypervisor-aware agents that can monitor inter-VM traffic. VMware has a partner program, VMSafe™, which provides application programming interfaces (APIs) so vendor partner solutions can monitor virtual machine memory pages, network traffic passing through the hypervisor, and activity on the virtual machines.

To keep a handle on mushrooming installs, track and report all new server, service, and application instances to a central operations or log management console. This is particularly helpful where unapproved services are being brought online. For example, a detected mail server install could indicate a server that hasn’t had core services turned off, or worse, an e-mail scam or bot-net.
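How might that detection work in practice? A real deployment would correlate service-start events out of the collected logs, but even a periodic reachability check can flag an unapproved listener. A sketch, with a hypothetical inventory of hosts and their approved ports:

    import socket

    # Hypothetical inventory: hosts and the ports they are approved to serve
    APPROVED = {"10.0.1.5": {80, 443}, "10.0.1.9": {25}}

    def open_ports(host, ports=(22, 25, 80, 443, 3306)):
        """Return the subset of ports accepting TCP connections on a host."""
        found = set()
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:
                    found.add(port)
        return found

    for host, approved in APPROVED.items():
        unexpected = open_ports(host) - approved
        if unexpected:
            print(f"ALERT: {host} listening on unapproved ports {sorted(unexpected)}")

An unexpected port 25, for instance, is exactly the rogue mail server case described above.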

If your log management provider isn’t VM-aware, check whether any of your firewall or IPS vendors are. If so, the virtualization-aware monitoring information from the firewall or IPS sensor on the hypervisor can be passed to your log management solution in the same way that physical span port information is aggregated. Regardless of how the inter-VM traffic is collected (on-host agent, inter-VM log management, inter-VM firewall/IPS, or another sensor), it’s imperative to bring the information into the existing log management solution; otherwise, you’ll have a significant blind spot in your log management coverage.
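On the receiving end, that hand-off can be as simple as a listener that tags sensor output before it enters the existing pipeline. A sketch, assuming a hypothetical VM-aware firewall/IPS that forwards its flow records as plain syslog text containing the marker "inter-vm":

    import socket

    # Minimal UDP syslog listener; binding to port 514 usually needs privileges
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 514))

    while True:
        data, addr = sock.recvfrom(4096)
        line = data.decode(errors="replace")
        if "inter-vm" in line.lower():
            # Tag and hand off to the existing log management pipeline
            print(f"[inter-VM via {addr[0]}] {line.strip()}")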

Finally, don’t forget to review existing rules and update or amend them as needed for the virtual environment. For example, have rules been added to manage virtual machine migration audit trails? Are new rules required for inter-VM traffic monitoring to meet policy or compliance mandates?

Virtualization has introduced great flexibility into networks and data centers, but with this flexibility come additional log data and new monitoring challenges. To make sure you aren’t missing out on any critical information, implement VM-aware monitoring solutions that work with your existing log management installation, and update rules and policies accordingly.

Related content: Managing the virtualized enterprise: New technologies, new challenges
Because of its many benefits, employing virtual technology is an apparent “no brainer,” which explains why so many organizations are jumping on the bandwagon. This whitepaper examines the technology and management challenges that result from virtualization, and how EventTracker addresses them.

Industry News

How CISOs can leverage the internal audit process
Say the word auditor at any gathering of information security folks, and you can almost feel the hackles rise. Chief information security officers (CISOs) and internal auditors, by definition of their roles, are typically not the best of friends…Yet, the CISO’s traditional adversary can be an effective deputy.

Did you know? EventTracker provides a number of audit-friendly capabilities that can enhance your collaboration efforts, such as over 2,000 audit-ready reports, automated audit trail creation, and more.

Lawsuit: Heartland knew data security standard was insufficient
Months before announcing the Heartland Payment Systems (HPY) data breach, company CEO Robert Carr told industry analysts that the Payment Card Industry Data Security Standard (PCI DSS) was an insufficient protective measure. This is the contention of a new master complaint filed in the class action suit against Heartland.

Note: We have a different take. Read the commentary by Steve Lafferty (Prism’s VP of Marketing) titled Lessons from the Heartland – What is the industry standard for security?, then leave a comment and tell us your thoughts.

Prism Microsystems named finalist in Government Security News annual homeland security awards
EventTracker recognized as a leader in the security incident and event management category

EventTracker officially in evaluation for Common Criteria EAL 2+
Internationally endorsed framework assures government agencies of EventTracker’s security functionality