Sustainable vs. Situational Values

I am often asked: if Log Management is so important to the modern IT department, then why has more than 80% of the market that “should” have adopted it not done so?

The cynic says, “Unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.”

One reason, I think, is that earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome, even in this day and age, after endless punditry and episode after episode has clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times which quotes Dov Seidman, the C.E.O. of LRN, who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back in the day there were no power tools and you dug ditches with your bare hands when it was 40 below and tweets were for the birds… but that was then and this is now.

Get Log Management; it’s a sustainable value.

Ananth

100 Log Management uses #57 PCI Requirement XII

Today we conclude our journey through the PCI Standard with a quick look at Requirement 12, which documents the necessity to set up and maintain an information security policy for employees and contractors. While this is mostly a documentation exercise, it does have requirements for monitoring and alerting that log management can certainly help with.

5 cyber security myths, the importance of time synchronization, and more

Time won’t give me time: The importance of time synchronization for Log Management

Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take. Could it really be 3:24:57 PM in Sydney, 1:36:02 PM in Tokyo, and 11:30:18 PM in New York? Of course not; time zones are separated by full hours – not minutes and seconds. The clocks have become de-synchronized and are showing incorrect readings.

But while de-synchronized clocks at a hotel are a minor nuisance, de-synchronized clocks across distributed servers in a corporate network are a serious and sometimes risky headache. This is all the more apparent when log aggregation and SIEM tools are in use to visualize and correlate activities across geographically distributed networks. Without an accurate timestamp on the log files, these solutions are unable to re-create accurate sequencing patterns for proactive alerting and post-incident forensic purposes.

Think a few minutes or even seconds of log time isn’t important? Consider the famous hacking case recounted by Clifford Stoll in his 1990 real-life thriller, The Cuckoo’s Egg1. Using log information, a 75-cent (USD) accounting error was traced back to 9 seconds of unaccounted computer usage. Log data and a series of impressive forensic and tracking techniques enabled Stoll to trace the attack back to Markus Hess in Hanover, Germany. Hess had been collecting information from US computers and selling the information to the Soviet KGB. A remarkable take-down that started with a mere 9 seconds of lost log data.

Needless to say, accurate synchronization of log file timestamps is a critical lynchpin in an effective log management and SIEM program. But how can organizations improve their time synchronization efforts?

Know what you have

If you don’t know what you’re tracking, it will be impossible to ensure all the log information on the targets is synchronized. First things first: start with a comprehensive inventory of systems, services, and applications in the log management/SIEM environment. Some devices and operating systems use a standardized time stamping format: for example, the popular syslog protocol, used by many Unix systems, routers, and firewalls, is an IETF standard2. The latest version of the protocol includes parameters that indicate whether the log and system are time-synchronized to a reliable external time source (isSynced) and how accurate that synchronization is (syncAccuracy).
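As a concrete illustration, here is a minimal Python sketch (not any particular product’s parser) that pulls the timeQuality parameters out of an RFC 5424-style message; the sample message and hostname are invented for illustration.

```python
import re

# Invented RFC 5424-style message carrying a timeQuality structured-data element
sample = ('<34>1 2010-01-15T18:45:02.003Z fw01.example.com kernel - ID47 '
          '[timeQuality tzKnown="1" isSynced="1" syncAccuracy="60000"] '
          'Denial of service detected')

def time_quality(msg):
    """Return the timeQuality SD-PARAMs (if any) as a dict of strings."""
    block = re.search(r'\[timeQuality ([^\]]*)\]', msg)
    if not block:
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', block.group(1)))

quality = time_quality(sample)
print(quality.get('isSynced'), quality.get('syncAccuracy'))
# A collector might trust the original timestamp only when isSynced is "1".
```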

Other parameters that can impact the accuracy of the synchronization process include the time zone of the device or system and the log time representation (24-hour clock or AM/PM format). Since not all logs follow exactly the same format, it’s also important that the log parsing engine in use for aggregation and management is capable of identifying where in the log file the timestamp is recorded. Some engines have templates or connectors that automatically parse the file to locate the timestamp and may also provide customizable scripts or graphical wizards where administrators can enter the parameters to pinpoint the correct location of timestamps in the log. This function is particularly useful when log management systems are collecting log data from applications and custom services which may not use a standard log format.
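To show what such a template-driven approach might look like, here is a small hedged sketch; the source names, regexes and sample log lines are invented, not taken from any particular engine.

```python
import re
from datetime import datetime

# Invented per-source templates: a regex to locate the timestamp in a line,
# plus the strptime format needed to interpret it.
TEMPLATES = {
    'apache': (r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})', '%d/%b/%Y:%H:%M:%S'),
    'custom': (r'^time=(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})', '%Y-%m-%d %H:%M:%S'),
}

def extract_timestamp(source, line):
    """Locate and parse the timestamp for a known source, or return None."""
    pattern, fmt = TEMPLATES[source]
    match = re.search(pattern, line)
    return datetime.strptime(match.group(1), fmt) if match else None

print(extract_timestamp('apache', '10.0.0.5 - - [15/Jan/2010:13:45:00 -0500] "GET / HTTP/1.1" 200'))
print(extract_timestamp('custom', 'time=2010-01-15 10:45:00 level=warn msg="login failed"'))
```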

Normalize

Once you know where the timestamp information is coming from (geography, time zone, system, application, and/or service), it’s time to employ normalization techniques within the log management system itself. If a log is being consumed from a device that is known to have a highly accurate and trustworthy external time source, the original timestamp in the log may be deemed acceptable. Keep in mind, however, that the log management engine may still need to normalize the time information to create a single meta-time for all the devices so that correlation rules can run effectively.

For example, consider a company with firewalls in its London, New York City, and San Jose offices. The log data from the firewalls is parsed by the engine, which alerts that a denial of service was detected at 6:45 pm, 1:45 pm, and 10:45 am respectively on January 15th, 2010. For their local zones, these are the correct timestamps, but if the log management engine normalizes the geographic time into a single meta-time, or Coordinated Universal Time (UTC), it’s clear that all three firewalls were under attack at the same time. Another approach is to tune the time reporting in the devices’ log files to reflect the desired universal time at the correlation engine rather than the correct local time.
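For readers who like to see the arithmetic, here is a minimal Python sketch of that normalization step using the firewall example above; it assumes Python 3.9+ for the zoneinfo module and is not meant to represent any particular product’s engine.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Local alert times from the example above: (office, local time, IANA zone)
alerts = [
    ('London',   datetime(2010, 1, 15, 18, 45), 'Europe/London'),
    ('New York', datetime(2010, 1, 15, 13, 45), 'America/New_York'),
    ('San Jose', datetime(2010, 1, 15, 10, 45), 'America/Los_Angeles'),
]

for office, local_time, zone in alerts:
    utc_time = local_time.replace(tzinfo=ZoneInfo(zone)).astimezone(ZoneInfo('UTC'))
    print(f'{office:9s} {utc_time.isoformat()}')

# All three lines print 2010-01-15T18:45:00+00:00 -- the same attack window.
```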

For devices and logs that are not accurately synchronized with external time sources, the log management engine could provide its own normalization by tracking the time the log file information was received and time stamping it with an internal time value. This approach guarantees a single time source for the stamping, but accuracy can be impeded by delays in log transfer times and would be ineffective for organizations that batch transfer log information only a few times a day.

Trust the Source

Regardless of which kinds of normalization are used, reliability of the time source matters. During a criminal or forensic examination, the timestamps on your organization’s network may be compared to devices outside it. Because of this, you want to make sure the source you are using is as accurate as possible. One of the most common protocols in use for time synchronization is NTP (Network Time Protocol)3, which provides time information in UTC. Microsoft Windows systems implement NTP via the Windows Time Service (W32Time), and some atomic clocks provide data to the Internet for NTP synchronization. One example of this is the NIST Internet Time Service4.

There are some security concerns with NTP because it is carried over a stateless transport protocol and is typically not authenticated. There have also been incidents of denial of service attacks against NTP servers, making them temporarily unavailable to supply time information. What can we do about that? Not much – despite the minor security concerns, NTP is the most widely used (and widely supported) protocol for network device time synchronization, so we can do our best to work around these issues. Consider adding extra monitoring and network segregation for authoritative time sources where possible.
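One simple monitoring idea, sketched below, is to periodically compare the local clock against an authoritative NTP source and alert when the offset drifts; this uses the third-party ntplib package (pip install ntplib), and the one-second threshold is an arbitrary illustration.

```python
import ntplib  # third-party: pip install ntplib

client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)

offset = response.offset  # seconds by which the local clock differs from the server
print(f'local clock offset: {offset:+.3f}s')

if abs(offset) > 1.0:
    # In a real monitoring script this could raise an alert to the log console.
    print('WARNING: clock drift exceeds 1 second; log timestamps may not correlate')
```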

All Together Now

When it comes to log management and alerting, the correct time is a must. Determine which devices and systems your log management system is getting inputs from, make sure the time information is accurate by synchronizing via NTP, and perform some kind of normalization on the information – either on the targets or within the log management engine itself. It’s a little tricky to make sure all log information carries correct and accurate time information, but the effort is time well spent.

Footnotes:

1 The Cuckoo’s Egg, by Cliff Stoll, 1990, Pocket Books, ISBN-13: 978-1416507789 and http://en.wikipedia.org/wiki/The_Cuckoo’s_Egg_(book)

2 IETF, RFC 5424, http://tools.ietf.org/html/rfc5424

3 The Network Time Protocol project at http://www.ntp.org/ and IETF, RFC 1305, http://www.ietf.org/rfc/rfc1305.txt

4 NIST Internet Time Service (ITS), http://tf.nist.gov/timefreq/service/its.htm

Next Month: Turning Log Management into Business Intelligence with Relationship Mapping, by Diana Kelley

Industry News

Tech insight: Learn to love log analysis
Log analysis and log management are often considered dirty words to enterprises, unless they’re forced to adopt them for compliance reasons. It’s not that log analysis and management have a negative impact on the security posture of an organization — just the opposite. But their uses and technologies are regularly misunderstood, leading to the potential for security breaches going unnoticed for days, weeks, and sometimes months.

Heartland pays Amex $3.6M over 2008 data breach
Heartland Payment Systems will pay American Express $3.6 million to settle charges relating to the 2008 hacking of its payment system network. This is the first settlement Heartland has reached with a card brand since disclosing the incident in January of 2009.

Did you know? A security breach can not only result in substantial clean-up costs, but also cause long-term damage to corporate reputation, sales, revenue, business relationships and partnerships. Read how log management solutions not only significantly reduce the risks associated with security breaches through proactive detection and remediation, but also generate significant business value.

Five myths about cyber security
While many understand the opportunities created through this shared global infrastructure, known as cyberspace, few Americans understand the threats presented in cyberspace, which regularly arise at individual, organizational and state (or societal) levels.  And these are not small threats…

Did you know? From sophisticated, targeted cyber attacks aimed at penetrating a company’s specific defenses to insider theft, Log Management solutions like EventTracker help detect and deter costly security breaches.

Ovum/Butler Group tech audit of EventTracker
In this 8-page technology audit, a leading analyst firm analyses EventTracker’s product offering, with a focus on functionality, operation, architecture and deployment.

Jack Stose joins Prism Microsystems as VP of Sales
Jack Stose joins Prism’s senior leadership team as VP of sales to help Prism take advantage of the tremendous opportunity presented by the growing adoption of virtualization and cloud computing, and the resultant demand for security solutions that can span both physical and virtual environments.

100 Log Management uses #56 PCI Requirements X and XI

Today we look at the grand-daddy of all logging requirements in PCI — Section 10 (specifically, Section 10.5) and Section 11. As with most of PCI, the requirements are fairly clear and it is hard to understand how someone could accomplish them without log management.

100 Log Management uses #55 PCI Requirements VII, VIII & IX

Today we look at PCI-DSS Requirements 7, 8 and 9. In general these are not quite as applicable as the audit requirements in Requirement 10, which we will be looking at next time, but log management is still useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

New EventTracker 6.4; 15 reasons why your business may be insecure

Tuning Log Management and SIEM for Compliance Reporting 

The winter holidays are quickly approaching, and one thing that could probably make most IT Security wish lists is a way to produce automated compliance reports that make auditors say “Wow!” In last month’s newsletter, we took a look at ways to work better with auditors. This month, we’re going to do a deeper dive into tuning of log management and SIEM for more effective compliance reporting.

Though being compliant and having a strong, well-managed IT risk posture aren’t always the same thing, they are intertwined. Auditors look for evidence – documentation and reporting that validates and supports compliance activities. For example, if a policy or mandate requires that access to a database be protected and monitored, evidence in the form of a log management or SIEM report can show who accessed that database and when. If the users who accessed the database have roles that are approved for access, the reports can provide proof that the access controls were working.

To ensure that the reports generated by the log management and SIEM solutions support compliance work, it’s important to understand the IT controls underlying the mandates. Last month we discussed some of the regulations and standards that mention log reviews (including HIPAA, PCI, and FISMA). Compliance frameworks also highlight the importance of log reviews. ISO/IEC 27001:2005 calls for audit logs that record “user activities, exceptions, and information security events”1, and COBIT 4.1 recommends that organizations “ensure that sufficient chronological information is being stored in operations logs.”

The trick is to know how to translate the log management and SIEM information into reports that speak directly to the requirements. Log review is a fairly broad category – it’s what’s being monitored and reported in the logs that counts. Getting the right set of criteria to monitor for can be challenging, but mapping policy to IT controls is a good place to start. Some mandates are more prescriptive than others. PCI, for example, calls out which areas of reporting will be of high interest to auditors. Is there a credit card number being captured in the logs? That’s an indicator that an application is out of compliance with PCI, because PANs (Primary Account Numbers) are not allowed to be stored, unencrypted, anywhere in the payment systems.
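As a rough illustration of that check, here is a hedged Python sketch that scans log lines for digit runs that look like card numbers and pass the Luhn test; the regex and sample line are invented, and a production scanner would need to handle far more formats.

```python
import re

def luhn_ok(digits):
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_CANDIDATE = re.compile(r'\b(?:\d[ -]?){13,16}\b')

def find_pans(line):
    """Return digit runs that look like PANs and pass the Luhn check."""
    hits = []
    for match in PAN_CANDIDATE.finditer(line):
        digits = re.sub(r'\D', '', match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

# Invented log line; 4111111111111111 is a well-known test card number.
print(find_pans('2010-01-15 10:45:00 app=checkout card=4111111111111111 status=ok'))
```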

Some log management and SIEM tools have compliance reporting built in – they might, for example, have a PCI report you can run that shows what an auditor might look for during an actual audit. This can help with the process by creating a baseline template for reporting, but keep in mind that the pre-canned reports may not tell the entire story. Review the reports to confirm that the correct information is being logged and reported on. Keep in mind that templates created by vendors are designed to suit a large number of customers, so although some event information is clearly in the scope of certain compliance reports, your environment is (probably) not exactly the same as the other guy’s.

To make sure that you’re getting the right level of detail and that you’re covering the right areas, map which systems and events are specifically required for your environment and the set of regulations in your scope. For example, if you’re a hospital or other covered entity, be mindful that HIPAA requires there to be separate/unique logins for access to protected health information. But many healthcare organizations have systems where logins are shared by employees in violation of the regulation. A report that simply looks for unique logins may not tell the whole story because one login could be shared across multiple users. In this case, a covered entity may need to create additional correlation rules to identify that each user has his/her own unique login ID and that logins are timed out on shared machines to force unique logins for access.
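A correlation rule of that kind might boil down to something like the following sketch, which flags a login ID seen on more than one workstation in the same hour; the event tuples and field names are invented for illustration.

```python
from collections import defaultdict

# Invented access events: (login_id, workstation, hour_of_day)
events = [
    ('nurse01', 'WS-301', 9), ('nurse01', 'WS-317', 9),   # same hour, two machines
    ('dr_lee',  'WS-210', 9), ('dr_lee',  'WS-210', 14),
]

def shared_login_suspects(events):
    """Flag login IDs seen on more than one workstation in the same hour."""
    usage = defaultdict(set)
    for login, workstation, hour in events:
        usage[(login, hour)].add(workstation)
    return {login for (login, _), stations in usage.items() if len(stations) > 1}

print(shared_login_suspects(events))  # {'nurse01'} -- a candidate shared account
```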

What isn’t being monitored may matter for compliance as well. Email logs can be integrated into the larger log management and SIEM reporting console, but not all critical business correspondence goes through email nowadays. Many companies are also using IM and other peer-to-peer solutions for important business communications – if an organization approves IM for use, adding these systems to the log management review will provide a more complete view of whether or not critical data is being shared. Collaboration workspaces, like Lotus Notes, Microsoft SharePoint, and Google Docs, are important data repositories where controlled or regulated information may be shared. If these tools are in use in your organization, be sure to capture the relevant log and event information in your reporting console to show auditors that the broader universe of protected data is being monitored and reported on.

Don’t forget that compliance reporting covers technical IT controls as well as written policy creation and distribution. While a log management solution isn’t a document management tool, it may be possible and advisable to capture the log data from the document tool. Events such as an employee reviewing an acceptable use policy can be brought into the reporting console to round out the compliance reporting coverage.

Finally, be prepared to continue the tuning work as new systems and regulations come online. IT environments and the regulatory landscape change frequently, so don’t expect reporting to stay static. Rather, use your existing mapping of policy to controls to leverage re-use where possible. For example, do you already have unique logins and tight access controls on a database? When a new regulation or standard is activated for your compliance program, look at what is already being reported on; it could be that you’re already gathering the right information. Another area for careful re-use is bringing new systems or applications online. Rather than re-invent the compliance reporting wheel, look at how previous (or similar) versions of the system were monitored by the log management or SIEM system and confirm that the same level and granularity of compliance reporting can be implemented in the new system. Knowing what exposures, if any, existed in previous versions of application and system logs can also provide a solid baseline for log and reporting requirements when introducing a new solution.

Log files are treasure troves of data, much of which can be used in effective compliance reporting. To make the most of your solutions, read through the mandates and regulations and translate the words into areas of reporting that can be managed by a log or SIEM solution. Look for exposures in any systems that aren’t already covered and continue to tweak the reporting for new mandates. While this may require a little upfront work, the ongoing benefits of automated compliance reporting will more than make up for the extra effort. And no matter what time of year, more efficient compliance reporting is a great gift we can all appreciate.

Footnotes:

1 ISO/IEC 27001:2005, A.10.10.1

Did you know? EventTracker provides over 2000 pre-configured reports mapped to specific FISMA, PCI-DSS, HIPAA, NISPOM and Sarbanes-Oxley requirements.

Industry News

State pilot shows a way to improve security while cutting costs
The State Department may have cracked a vexing cybersecurity problem. With a program of continuous monitoring…and a focus on critical controls and vulnerabilities (Consensus Audit Guidelines), the agency has significantly improved its IT security while lowering cost.

Did you know? EventTracker supports all 15 automated controls of the Consensus Audit Guidelines to help organizations mitigate the most damaging threats known to be active today.

Compliance as security: The root of insanity
How companies lose their way by confusing a completed compliance checklist with ironclad security…This leads us to the undeniable realization that while a byproduct of security is compliance, the reverse couldn’t be further from the truth.

Did you know? EventTracker doesn’t just help you comply with regulatory requirements; it fundamentally improves your security posture and protects your organization from a wide variety of attacks, including zero-day attacks.

EventTracker 6.4 launches with deep support for virtual infrastructures
EventTracker version 6.4 offers SIEM support for all layers of the virtual environment including the hardware, the management application, the barebones hypervisor, the guest OS, and all resident applications. Also new is a dashboard that identifies any new or out-of-ordinary behavior by user, admin, system, process and IP address to detect hitherto unknown attacks such as zero-day breaches and malware.

Panning for gold in event logs

Ananth, the CEO of Prism, is fond of remarking, “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the guy with the pencil neck and the 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold – squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt and I feel glad I am not a prospector. At Prism we are in the business of gold extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is that they are not quite sure what the gold looks like. When you are panning for gold and you are not sure that glinting thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold, we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves, “what can we do for the person who does not know enough to really look or ask the right questions?” A lot of log management is looking for the out-of-the-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and enable you to analyze them and make a decision. Little things that are interesting – like a new IP address hitting your enterprise 5,000 times, a user who generally performs 1,000 activities in a day suddenly performing 10,000, or something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely. But a lot of the manual log review performed by the guys with the alphabets after their names is really just manually chasing trends – this enables you to stop wasting significant time detecting the trend, and to catch all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
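To give a feel for the idea (and only the idea – this is not EventTracker’s actual algorithm), here is a toy sketch that flags today’s activity count when it sits well outside a user’s recent baseline; the counts and threshold are invented.

```python
from statistics import mean, stdev

def unusual(history, today, min_sigma=3.0):
    """Flag today's count if it sits more than min_sigma deviations above the baseline."""
    baseline, spread = mean(history), stdev(history)
    return spread > 0 and (today - baseline) / spread > min_sigma

# Invented daily activity counts for one user over two weeks, then two "today" values.
history = [950, 1010, 980, 1020, 990, 1005, 970, 1000, 995, 1015, 985, 1025, 960, 1000]
print(unusual(history, 10000))  # True  -> worth a look
print(unusual(history, 1040))   # False -> within normal variation
```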

The response from the Beta customers indicates that we are onto something. After all, anything that can make our (hopefully ever more numerous) customers’ lives less tedious and their backs hurt less is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI

Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 talks about using AV software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about developing and maintaining secure systems and applications, for which log management is a great aid.

-By Ananth

100 Log Management uses #53 PCI Requirements III & IV

Today we continue our journey through the Payment Card Industry Data Security Standard (PCI-DSS). We left off last time with Requirement 2, so today we look at Requirements 3 and 4, and how log management can be used to help ensure compliance.

-By Ananth

Tips for working well with auditors; Inside the Walmart breach

Working Well with Auditors 

For some IT professionals, the mere mention of an audit conjures painful images of being trussed and stuffed like a Thanksgiving turkey. If you’ve ever been through an audit that you weren’t prepared for, you may harbor your own unpleasant images of an audit process gone wrong. As recently as 10-15 years ago, many auditors were just learning their way around the “new world” of IT, while just as many computer and network professionals were beginning to learn their way around the audit world.

At that time, auditors were seen as the people that swooped in and made an IT staffer’s life miserable – by telling them where their controls were failing, by pointing out control deficiencies (both real and imaginary) to management, and by recommending difficult to implement fixes that may have satisfied a regulatory requirement but didn’t take into account the underlying business processes.

Caught in a communications stalemate, many IT and audit departments operated at odds for years. And, unfortunately, that’s where some of us still are. But the world keeps turning. It’s time to move on – to leverage the complementary roles that IT and audit fulfill to achieve maximum effectiveness in our risk management programs. By working cooperatively with the internal or external audit teams, IT and security can gain support and cost-justification for risk mitigation projects.

Turning Log Review into Log Management

Think it’s not possible for IT, security and audit to work well together? Not so – consider log management. Many regulations explicitly or implicitly require log review. PCI is explicit, requiring that every log, for every system in the cardholder data environment (CDE), be reviewed every day1. In healthcare, HIPAA calls for regular review of records2, such as audit logs, and FISMA, the Federal Information Security Management Act3, calls for log review by federal agencies. What’s interesting about these mandates is that while all of them call for review of the log files, none of them specify how to accomplish a comprehensive log review program. Depending on the size of the organization and the number of systems on the network, the log files could account for gigabytes or even terabytes of data per week. Parsing through all of that information manually would be extremely labor intensive and inefficient. Automated log management – aggregating the log information into a central spot and using an automated parsing engine to sift through it all – is a more effective and achievable approach.

Log management for security’s sake alone may be difficult to “sell” to executives as an investment that will benefit the organization. It’s not uncommon to hear budgetary war stories from IT and security administrators who unhappily watch log management funding get cut quarter after quarter in favor of other projects that are deemed more impactful to the company’s bottom line. And here is where the auditor/IT relationship can come into focus. Auditors are looking for controls and systems that enable them to sign off on log review requirements; IT and security are looking for ways to meet those requirements effectively. By linking a log management implementation project to a compliance requirement, the cost-justification for the program is elevated and the program is far more likely to stay in the budget after the next round of cuts.

Tips for Working Well with Auditors

Hopefully you’re now convinced that auditors and IT work better in a cooperative rather than competitive environment. But if you’ve never worked with auditors before, you might be wondering how you can bridge the communication gap. To help you with that, here’s a short list of tips that I’ve seen work in a number of organizations:

  • Speak their Language – Know the regulations and mandates the auditor is checking for and be sure you are using normalized terms to describe your controls. For example, NIST SP800-53 refers to “audit records” and “user activity logs.” If your department has a different name for this information, be sure to have a notation in your reporting that explains why your “syslogs” are functionally equivalent to NIST’s “activity logs.”
  • Know the Frameworks – Many auditors use well-known compliance frameworks to round out their regulatory-specific assessment process. If you have controls in place that map to these frameworks, call this out for the auditor. Using log management as an example, there are maps to ISO/IEC 27001:2005, A.10.10.1: “Audit logs recording user activities, exceptions, and information security events shall be produced” and COBIT 4.1 DS13.3: “Ensure that sufficient chronological information is being stored in operations logs to enable … reconstruction, review and examination…”
  • Write it Down – While techies are great at whiteboarding, they don’t always excel at written documentation. To an auditor, a perfectly implemented process and set of controls is still materially deficient without current documentation to go with it. Make sure not only that you have the required documents ready for the auditor, but also that they are up to date and accurate.
  • Make it Clear – Network maps that show zoning and segmentation as well as locations of relevant systems will help the auditors assess compliance and, where appropriate, help to reduce the scope of the audit zone. Name audit sensitive systems according to a standardized model, such as by location or purpose. While it might be fun to name your mail servers and firewalls Kenny, Cartman, Kyle, and Stan – it’s not going to help an auditor identify these systems during an assessment.
  • Anticipate their Reporting Needs – Generate reports that are mapped back to the regulations or mandates in question. In the case of log management systems, build rules that identify auditor hot-buttons, such as logging user access to a database that stores credit card information or proof of encryption controls in a database storing PII (a small illustrative sketch follows this list).
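Here is the small sketch promised above: a hedged example of what a “hot-button” rule might reduce to once the events are parsed; the field names and events are invented, and any real rule would run inside the log management product itself.

```python
# Invented parsed database audit events; field names are for illustration only.
events = [
    {'user': 'app_svc', 'object': 'cardholder_data', 'action': 'SELECT', 'approved_role': True},
    {'user': 'jsmith',  'object': 'cardholder_data', 'action': 'SELECT', 'approved_role': False},
    {'user': 'jsmith',  'object': 'inventory',       'action': 'SELECT', 'approved_role': False},
]

def cardholder_access_report(events):
    """Rows an auditor would ask about: cardholder-table access by unapproved roles."""
    return [e for e in events
            if e['object'] == 'cardholder_data' and not e['approved_role']]

for finding in cardholder_access_report(events):
    print(finding['user'], finding['action'], finding['object'])
```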

Summary

There’s an old aphorism that says you can catch more flies with honey than with vinegar. The same might be said of successful compliance work. While it may be tempting to recoil when you see the person with the compliance checklist, it’s more effective to work with, rather than against, the audit team. What you might find is that not only is your next audit season a little less contentious, but you may also have found an ally in the cost-justification process.

Footnotes:

1 PCI DSS Requirements 10.2 “Implement automated audit trails for all system components” and 10.6, “Review logs for all system components at least daily,” PCI DSS v1.2.1, July 2009
2 HIPAA 164.308(a)(1)(ii)(D): “. . . regularly review records of information system activity, such as audit logs,” Code of Federal Regulations (CFR) Part 164

3 NIST SP800-53, AC-13: “The organization reviews audit records (e.g., user activity logs) for inappropriate activities” and NIST SP800-92

Industry News

Big-Box breach – The inside story of Walmart’s attack
Internal documents reveal for the first time that the nation’s largest retailer was among the earliest targets of a wave of cyber attacks that went after the bank-card processing systems of brick-and-mortar stores around the United States beginning in 2005.

Did you know? EventTracker combines both Log Management and Change Monitoring capabilities to provide holistic protection from risks posed by hackers

Manage your Network right
Focus on specialized tools targeting specific areas of network management – As current IT trends push us to the lofty goal of cloud computing, and Software as a Service is promoted by all the biggest software vendors, now is the time to be sure that your network-management capabilities are as good as money can buy.

Note: EventTracker beats products from IBM, CA and BMC in the above article. Don’t miss the review on page 3.

PCI-DSS under the gun

Have you been wondering how some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, but the PCI guys are still claiming they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you certainly have an ineffective real-world standard, or a problematic certification process at the very least.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and was deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet, at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, although perhaps this was just a bizarre Catch-22 in play – you are compliant until you are hacked, but when you are hacked, the success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland: 1) the audit could have been inadequate or the results incorrect, leading to a faulty certification; 2) Heartland made a material change to its infrastructure in the intervening month that threw it out of compliance; 3) the hack was accomplished in an area outside the purview of the DSS; or 4) Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate litigation-averse response to a problem. Anytime something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one or another (preferably larger) target, because when the manure hits the fan the person in the very front is going to get covered. Unfortunately this behavior does not really foster solving the problem, as everyone has their lawyers and no one is talking.

Regardless, maybe the PCI folks should not be saying things like “no compliant entity has ever been breached” and should instead be asking “perhaps we have a certification issue here?”, “how do we reach continuous compliance?” or even “what are we missing here?”

-Steve Lafferty

100 Log Management uses #52 PCI Requirement I & II – Building and maintaining a secure network

Today’s blog looks at Requirements 1 and 2 of the PCI Data Security Standard, which are about building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring logs coming from security systems.

-By Ananth

100 Log Management uses #51 Complying with PCI-DSS

Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

– By Ananth

Lessons from the Heartland – What is the industry standard for security?

I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, but I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that the lowest common denominator does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days, and often with good reason. We have heard and made the arguments many times before: compliant does not equal secure, and further, don’t embrace the standard, embrace the spirit or intent of the standard. But to be honest, the PCI DSS standard is far from minimal, especially by comparison to most other compliance regulations.

The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks from sophisticated hackers? Well, no, but there is no single regulation, standard or practice out there that will. You can make it hard or harder to get attacked, and PCI-DSS does make it harder, but impossible, no.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? I would venture a guess that in the case of PCI DSS it is, in fact. The significant groaning and the amount of work it took the industry to implement the standard would lead one to believe that companies were not doing these things before, and that there are not a lot of worthless requirements in the DSS. PCI DSS makes a company take positive steps like running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards were not doing these things prior to the standard, imagine what it was like before.

The second claim is where the real absurdity lies – the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about, exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in terms of budget and resources – which is as little as possible (remember this is capitalism – profits do matter). Using this as a basis, the actual standard for security is to do as little as possible for the least amount of money to avoid being successfully sued, having your executives put in jail, or losing business. Indeed PCI DSS forced companies to do more, but the emphasis is on forced. (So, come to think of it, maybe Heartland did not follow the industry standard, as they are getting sued – but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what other people in the industry were doing – the “industry standard”. The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known the limitations; this no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were customers of Heartland had as well, and even if they did not, Mr. Carr went so far as to announce it in the call noted above. If this statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks here took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – knowing that the risk of a data breach and the follow-on expense existed. I thought the nature of risk was that you occasionally lose, and in the case of business, risk impacts your profits. This lawsuit seems to be like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)

Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated, and we have examined how log management and log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15 — data loss prevention and examine the many ways logs help in preventing data leakage.

By Ananth

Leverage the audit organization for better security; Bankers gone bad and more

Log Management in virtualized environments

Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time. Since we were using hardcoded IP addresses for the internal desktops, I could tell, just by looking at the log information, which person or device, inside the company, was doing what out on the Internet. If someone outside the company was performing a ping sweep, I saw the evidence in the log file and could respond immediately. This system worked fine for a couple of months. Then we installed a firewall, and a new mail server, and distribution servers in the DMZ, and, well, you get the idea. There was more log information than a single human could parse, not to mention the fact that while I worked a 50 hour week, the log files were on a 168 hour/week schedule.

While my example may seem almost laughably archaic to many, we’re seeing a similar data overload phenomenon occurring in today’s data centers and network operations centers (NOCs). Log management systems that were installed a few years ago to handle 100 servers and applications can’t scale to today’s needs. What started out as a few gigabytes of log information per week is now a terabyte a day. One reason for the log information explosion is that as companies become comfortable with the technology, they expand the log monitoring coverage scope. Another significant driving factor: virtualization and the advent of the virtualized data center.

Virtualization brings new challenges to network monitoring and log management. Virtualization enables administrators and users to install multiple unique server instances on a single hardware component. The result is a marked increase in server and application installs and a concurrent increase in server and application log data. In addition to more log information, virtualization presents a few additional challenges as well.

Inter-VM traffic refers to data moving between virtual machines running on the same physical machine under a single hypervisor. Because the traffic isn’t moving off the physical device, it will not be seen by monitoring solutions that use physical network-based monitoring points like span or mirror ports. Monitoring solutions that are installed directly on hosts will log the device’s information, but if there is just one agent on the host and it is not integrated with the hypervisor itself, inter-VM data transfer could still be missed. An alternative is to install agents on each virtual machine. Keep in mind, however, that this could impact corporate use licenses by increasing the total number of agent installs. And for companies that want an entirely agent-less solution, this alternative won’t work. Some additional alternatives for inter-VM traffic monitoring are presented below.

What else changes in the virtualized data environment? Well, zone-based policy enforcement might. Consider databases. These are often repositories of sensitive information and only approved for install in protected network zones. Virtualization allows organizations to move servers and applications quickly between locations and zones using V-motion functionality. The problem comes in when V-motion is used to move a service or server into a zone or location that has an incompatible protection policy. Think of a database of healthcare information that is V-motioned from a high-sensitivity zone into a DMZ. Log management can help here by alerting administrators when a system or service is being moved to a zone with a different policy control level. In order to do this, the log management solution must have access to V-motion activity information. VMware provides migration audit trail information which can be fed into an organization’s log management console.
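A minimal sketch of such an alert is shown below; the zone names, sensitivity levels and the shape of the migration event are all invented, since the real field names depend on the virtualization platform and the log management product.

```python
# Invented zone sensitivity levels and a parsed migration audit event.
ZONE_LEVEL = {'high_sensitivity': 3, 'internal': 2, 'dmz': 1}

def migration_alert(event):
    """Alert when a VM moves to a zone with a weaker protection policy."""
    src = ZONE_LEVEL[event['source_zone']]
    dst = ZONE_LEVEL[event['dest_zone']]
    if dst < src:
        return f"ALERT: {event['vm']} moved from {event['source_zone']} to {event['dest_zone']}"
    return None

event = {'vm': 'ehr-db-01', 'source_zone': 'high_sensitivity', 'dest_zone': 'dmz'}
print(migration_alert(event))
```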

So how do we perform comprehensive log management in virtualized environments? First, it’s critical that the inter-VM “blind spot” is removed. One option has already been discussed – installing host-based log management agents on every virtual machine instance. If that’s not a good fit for your company, consider purchasing a log management or security information and event management solution that has hypervisor-aware agents that can monitor inter-VM traffic. VMware has a partner program, VMsafe™, which provides application programming interfaces (APIs) so vendor partner solutions can monitor virtual machine memory pages, network traffic passing through the hypervisor, and activity on the virtual machines.

To keep a handle on mushrooming installs, track and report all new server, service and application instances to a central operations or log management console. In cases where unapproved services are being brought online, this can be particularly helpful. For example, if a mail server install is detected, this could indicate the installation of a server that hasn’t had core services turned off – or worse, it could be an indication of an e-mail scam or bot-net.
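One simple way to picture that check is a whitelist of expected roles per host, as in the invented sketch below.

```python
# Invented service-install events and a per-host whitelist of expected roles.
EXPECTED = {'web01': {'httpd'}, 'db01': {'postgres'}}

installs = [
    {'host': 'web01', 'service': 'httpd'},
    {'host': 'web01', 'service': 'sendmail'},   # a mail server on a web host?
]

for event in installs:
    if event['service'] not in EXPECTED.get(event['host'], set()):
        print(f"REVIEW: unexpected service '{event['service']}' installed on {event['host']}")
```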

If your log management provider isn’t VM-aware, check to see if any of your firewall or IPS vendors are. If so, the virtual-aware monitoring information from the firewall or IPS sensor on the hypervisor can be passed through to your log management solution in the same way that physical span port information is aggregated. Regardless of how the inter-VM traffic is collected (on-host agent, inter-VM log management, inter-VM firewall/IPS or other sensor), it’s imperative that the information is brought into the existing log management solution; otherwise, you’ll have a significant blind spot in your log management coverage.

Finally, don’t forget to review existing rules and update or amend them as needed for the virtual environment. For example, have rules that manage virtual machine migration audit trails been added? Are new rules required for inter-VM traffic monitoring to satisfy policy or compliance mandates?

Virtualization has introduced great flexibility into networks and data centers. But with this flexibility come additional log data and new monitoring challenges. To make sure you aren’t missing out on any critical information, implement VM-aware monitoring solutions that work with your existing log management installation, and update rules and policies accordingly.

Related content: Managing the virtualized enterprise: New technologies, new challenges
Because of its many benefits, employing virtual technology is an apparent “no brainer” which explains why so many organizations are jumping on the bandwagon. This whitepaper examines the technology and management challenges that result from virtualization, and how EventTracker addresses them.

Industry News

How CISO’s can leverage the internal audit process
Say the word auditor at any gathering of information security folks, and you can almost feel the hackles rise. Chief information security officers (CISOs) and internal auditors, by definition of their roles, are typically not the best of friends…Yet, the CISO’s traditional adversary can be an effective deputy.

Did you know? EventTracker provides a number of audit-friendly capabilities that can enhance your collaboration efforts, such as over 2000 audit-ready reports, automated audit trail creation and more.

Lawsuit: Heartland knew data security standard was insufficient
Months before announcing the Heartland Payment Systems (HPY) data breach, company CEO Robert Carr told industry analysts that the Payment Card Industry Data Security Standard (PCI DSS) was an insufficient protective measure. This is the contention of a new master complaint filed in the class action suit against Heartland

Note: We have a different take – Read Steve Lafferty’s (Prism’s VP of Marketing) commentary titled, Lessons from the Heartland – What is the industry standard for security? Leave a comment, and tell us your thoughts.

Prism Microsystems named finalist in Government Security News annual homeland security awards
EventTracker recognized as a leader in the security incident and event management category

EventTracker officially in evaluation for Common Criteria EAL 2+
Internationally endorsed framework assures government agencies of EventTracker’s security functionality

IT: Appliance sprawl – Where is the concern?

Over the past few years you have seen an increasing drumbeat in the IT community toward server consolidation through virtualization, with all the trumpeted promises of cheaper, greener, more flexible, customer-focused data centers with never a wasted CPU cycle. It is a siren song to all IT personnel, and quite frankly it actually looks like it delivers on a great many of the promises.

Interestingly enough, while reduced CPU wastage, increased flexibility and fewer vendors are all being trumpeted for servers, there continues to be little thought given to the willy-nilly purchase of hardware appliances. Hardware appliances started out as specialized devices built or configured in a certain way to maximize performance. A SAN device is a good example: you might want high-speed dual-port Ethernet and huge disk capacity, with very little requirement for a beefy CPU or memory. These make sense as appliances. Increasingly, however, an appliance is a standard Dell or HP rack-mounted system with an application installed on it, usually on a special Linux distribution. The advantages to the appliance vendor are many and obvious – a single configuration to test, increased customer lock-in, and a tidy up-sell potential as the customer finds their event volume growing. From the customer perspective it suffers all the downsides that IT has been trying to get away from – specialized hardware that cannot be re-purposed, more locked-in hardware vendors, excess capacity or not enough, wasted power from all the appliances running; the list goes on and on, and contains all the very things that have caused the move to virtualization. And the major benefit of appliances? Easy installation seems to be the major one. So provisioning a new machine and installing software might take an hour or so – the end-user saves that, and the downstream cost of maintaining a different machine type eats up that saving in short order.

Shortsighted IT managers still manage to believe that, even as they move aggressively to consolidate servers, it is permissible to buy an appliance, even if it is nothing but a thinly veiled Dell or HP server. This appliance sprawl represents the next clean-up job for IT managers, or will simply eat all the savings they have realized in server consolidation. Instead of 500 servers you have 1 server and 1000 hardware appliances – what have you really achieved? You have replaced relationships with multiple hardware vendors with relationships with multiple appliance vendors, and worse, when a server blew up, at least it was all Windows/Intel configurations, so in general you could keep the applications up and running. Good luck doing that with a proprietary appliance. This duality in IT organizations reminds me somewhat of people who go to the salad bar and load up on the cheese, nuts, bacon bits and marinated vegetables, then act vaguely surprised when the salad bar regimen has no positive effect.

-Steve Lafferty

100 Log Management Uses #49 Wireless device control (CAG control 14)

We now arrive at CAG Control 14 – Wireless Device Control. For this control, specialty WIDS scanning tools are the primary defense, that and a lot of configuration policy. This control is primarily a configuration problem, not a log problem. Log management helps in all the standard ways – collecting and correlating data, monitoring for signs of attack, etc. Using EventTracker’s Change component, configuration data in the registry and file system of the client devices can also be collected and alerted on. Generally, depending on how one sets the configuration policy, when a change is made it will generate either a log entry or a change in the registry or file system. In this way EventTracker provides a valuable means of enforcement.

By Ananth

Can you count on dark matter?

Eric Knorr, the Editor in Chief over at InfoWorld, has been writing about “IT Dark Matter,” which he defines as system, device and application logs. It turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors and hopefully for SIEM vendors like us! He described these logs, or dark matter, as “widely distributed and hidden,” which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims, otherwise posts end up being on-line books. The good thing about that simplification, however, is that it often gives a good opportunity to point out other topics of discussion.

There are two great challenges in log management. The first is being able to provide the tools and knowledge to make the log data readily available and useful, which leads to Eric’s comment on how Dark Matter is “hidden,” as it is simply too hard to mine without some advanced equipment. The second challenge is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed,” and there is an implied assumption that this Dark Matter is just there to be mined – that the Dark Matter will and does exist, and even more so, that it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – and this fatally weakens their usefulness.

Let’s use a simple illustration we all know well in computer security: almost the first thing a hacker will do once they penetrate a system is shut down logging, or, as soon as they finish whatever they are doing, delete or alter the logs. Or consider the analogy of video surveillance at your local 7/11. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not very useful, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be compromised or simply deleted.

This is not to imply that there are no useful nuggets to be mined even if the records are distributed. But without an attempt to secure and preserve them, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened, simply because 1) it was trash and had no value, and 2) no one worried that someone 1000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of the record is 1) a matter of luck that the record was preserved, and 2) inversely proportional to the interest of the creating parties in modifying it.
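One common way to make a preserved log record tamper-evident (offered here only as an illustration of “securing the record,” not as a description of any particular product) is to chain record hashes, as in this small sketch.

```python
import hashlib

def chain_logs(lines):
    """Tamper-evident chaining: each record's hash covers the previous hash."""
    prev = '0' * 64
    chained = []
    for line in lines:
        digest = hashlib.sha256((prev + line).encode()).hexdigest()
        chained.append((line, digest))
        prev = digest
    return chained

for line, digest in chain_logs(['login ok user=alice', 'file read /etc/passwd', 'logout user=alice']):
    print(digest[:12], line)
# Altering or deleting any earlier line changes every later digest, so a copy of
# the final digest stored off-box exposes tampering.
```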

Steve Lafferty

Security threats from well-meaning employees, new HIPAA requirements, SMB flaw

The threat within: Protecting information assets from well-meaning employees

Most information security experts will agree that employees form the weakest link when it comes to corporate information security. Malicious insiders aside, well-intentioned employees bear responsibility for a large number of breaches today. Whether it’s a phishing scam, a lost USB or mobile device bearing sensitive data, a social engineering attack or a download of unauthorized software, unsophisticated but otherwise well-meaning insiders can unknowingly open company networks to costly attacks.

These types of internal threats can be particularly hard to detect, especially if a company has placed most of its efforts on shoring up external security. For instance, some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small US companies. They send targeted phishing emails to the company’s treasurer that contain a link which, when opened, installs malicious software that harvests account passwords. Using this information, the criminals initiate wire transfers in amounts small enough to avoid triggering anti-money-laundering procedures. In cases like these, traditional defenses (firewalls, anti-virus, etc.) prove to be useless because legitimate accounts are used to commit the fraud. This story is not uncommon. A study conducted by the Ponemon Institute earlier this year found that over 88% of data breaches were caused by employee negligence. In another survey of over 400 business technology professionals by InformationWeek Analytics, a majority of respondents stated that locking down inside nodes was just as vital as perimeter security.

Employees, the weakest link

Let’s take a look at some of the easy ways that employees can compromise a company’s confidential data without really meaning to.

Social engineering attacks – In its basic form, this refers to hackers manipulating employees out of their usernames and passwords to gain access to confidential data. They typically do this by tracking down detailed information that can be used to gain the employee’s trust. With the growing popularity of social networking sites, and the amount of seemingly innocent data a typical employee shares on them, this information is not hard for a resourceful hacker to track down. Email addresses, job titles, work-related discussions, nicknames: all can provide valuable material for targeted phishing attacks or trick emails that lead an unsuspecting employee to hand over account information to a hacker posing as a trusted resource. Once the account information has been obtained, hackers can penetrate perimeter defense systems. Read more

Industry News

SANS interviews Ananth, CEO of Prism Microsystems, as part of their Security Thought Leader program
Ananth talks with Stephen Northcutt of SANS about trends in Log Management/SIEM, cloud computing, and the “shallow-root” problem of current SIEM solutions

Court allows suit against bank for lax security 
In a ruling issued last month, the District Court for the Northern District of Illinois denied a request by Citizens Financial Bank to dismiss a negligence claim brought against it by Marsha and Michael Shames-Yeakel. The Crown Point, Ind., couple — customers of the bank — alleged that Citizens’ failure to implement up-to-date user authentication measures resulted in the theft of more than $26,000 from their home equity line of credit.

HITECH Act ramps up HIPAA compliance requirements
The American Recovery and Reinvestment Act of 2009 (ARRA) includes a section that expands the reach of the Health Insurance Portability and Accountability Act (HIPAA) and introduces the first federally mandated data breach notification requirement.

Note: While this article is a few months old, it is a must-read, in particular the part about (stiffer) penalties being funneled back into the Department of Health and Human Services. HIPAA has essentially been a toothless tiger; this could be a sign that it is getting new teeth.

Former IT Specialist Hacks into Charity’s Network
A computer specialist has been arrested and indicted for breaking into his former employer’s computer network one year after he was let go. The admin is accused of causing significant damage by deleting records and crippling critical communication systems such as email and telephone.

Did you know? EventTracker offers advanced protection from insider threats, whether it’s a malicious employee or ex-employee looking to steal confidential data or an unsophisticated employee who accidentally causes a breach.

Attackers target Microsoft IIS; new SMB flaw discovered
Microsoft updated an advisory, warning customers that attacks have been detected against a zero-day flaw affecting its FTP Service in Microsoft Internet Information Services (IIS). Meanwhile, new exploit code surfaced last weekend, targeting a zero-day vulnerability in Microsoft Server Message Block (SMB).

Did you know? EventTracker’s integrated file integrity and registry monitoring module detects zero-day attacks that evade signature-based solutions such as antivirus.

100 Log Management Uses #48 Control of ports, protocols and services (CAG control 13)

Today we look at CAG Control 13 – limitation and control of ports, protocols and services. Hackers search for exactly these kinds of openings. A software install, for example, may turn on services the installer never imagined might be vulnerable, so it is critical to limit new ports being opened or services being installed. It is also a good idea to monitor for abnormal or new behavior that indicates something has escaped internal controls: a system suddenly broadcasting or receiving network traffic on a new port is suspicious and should be investigated, and new installs or new services being run are also worth investigating. We take a look at how Log Management can help you monitor for such occurrences.
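As a quick, hedged illustration of the "new port" check (a minimal sketch, not how any particular product implements it), the following compares the ports currently listening on a host against a hypothetical approved baseline using the psutil library.

```python
# Minimal sketch: compare the ports currently listening on a host against a
# known-good baseline and report anything new. Uses the psutil library
# (pip install psutil); the baseline set here is a hypothetical example.
import psutil

BASELINE_PORTS = {22, 80, 443, 3389}  # ports your policy says may listen

def listening_ports():
    return {
        conn.laddr.port
        for conn in psutil.net_connections(kind="inet")
        if conn.status == psutil.CONN_LISTEN
    }

if __name__ == "__main__":
    for port in sorted(listening_ports() - BASELINE_PORTS):
        print(f"ALERT: unexpected listening port {port} - investigate the owning process")
```

Run on a schedule and feed the alerts into your log stream, the same report becomes evidence that the port/service policy is actually being enforced.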

By Ananth

Doing the obvious – Why efforts like the Consensus Audit Guidelines are valuable

I came across this interesting (and scary, if you are a business person) article in the Washington Post. In a nutshell, pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium sized companies. They run a targeted email attack on the finance guys and get them to click on a bogus attachment – when they do so, key-logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.

The article is definitely worth a read for a number of reasons, but what I found surprising was, first, that businesses do not have the same protection from electronic fraud as consumers do, so the banks don’t monitor commercial account activity as closely; and second, just how much this type of attack is happening. It turns out businesses have only 2 days to report fraudulent activity instead of a consumer’s 60 days, so businesses that suffer a loss usually don’t recover their money.

My first reaction was to ring up our finance guys and tell them about the article. Luckily their overall feel was that since Marketing spent the money as quickly as the Company made it, we were really not too susceptible to this type of attack as we had no money to steal – an unanticipated benefit of a robust (and well paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.

My other thought was how this demonstrated the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking and tackling. CAG Control 12 talks about monitoring for active and updated anti-virus and anti-spyware on all systems. Basic, but it really helps – remember, a business does not have 60 days but 2. You can’t afford to notice the malware a week after the signatures finally get updated.

There are a number of other activities available in advanced tools such as EventTracker that can also help prevent these attacks – change monitoring, tracking first-time executable launches, verifying that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day. If you can’t do it all, at least start with the obvious – you might not be safe, but you will be safer.
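To make the "first-time executable launch" idea concrete, here is a minimal, hypothetical sketch (not EventTracker's implementation): it keeps a list of executables already seen in process-creation records and alerts when a new one appears. The CSV column names are assumptions for illustration only.

```python
# Minimal sketch of "first-time executable launch" detection: keep a list of
# executables seen before and alert when a process-creation log entry names a
# new one. The CSV layout (timestamp,host,image_path) is a hypothetical export,
# not any particular product's format.
import csv
from pathlib import Path

SEEN_FILE = Path("seen_executables.txt")

def load_seen():
    return set(SEEN_FILE.read_text().splitlines()) if SEEN_FILE.exists() else set()

def scan(process_log_csv: str):
    seen = load_seen()
    with open(process_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            image = row["image_path"].lower()
            if image not in seen:
                print(f"ALERT: first-ever launch of {image} on {row['host']} at {row['timestamp']}")
                seen.add(image)
    SEEN_FILE.write_text("\n".join(sorted(seen)))

if __name__ == "__main__":
    scan("process_creation.csv")
```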

Steve Lafferty

100 Log Management Uses #47 Malware defense (CAG control 12)

Today we continue our journey through the Consensus Audit Guidelines with a look at CAG 12 — Malware Defense. When people think about the pointy end of the stick for malware prevention they typically think anti-virus, but log management can certainly improve your chances by adding defense in depth. We also examine some of the additional benefits log management provides.

By Ananth

Managing the virtualized enterprise historic NIST recommendations and more

Smart Value: Getting more from Log Management

Every drop in the business cycle brings out the ‘get more value for your money’ strategies. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with a fast initial payback that can be applied to a wide range of other problems. This series looks at how different log management tasks can be applied to solve problems beyond the traditional compliance and security drivers, so that companies can get more value for their IT money.

Log Value Chain: data loss prevention, email trending for cost reduction and problem identification

The bubbling acronym soup of compliance regulations (HIPAA, PCI-DSS, FRCP, etc.) is putting more focus on data loss (leak) prevention (DLP) – in other words, preventing users from unintentionally giving out sensitive corporate information.

Computing gives us many ways to share data — USB drives, email, online file synchronization services, blogs, browser-based desktop sharing, Twitter — the list can seem endless. Every new innovation in data sharing creates a new way for employees to leak sensitive information. User education alone is not going to cut it. Most people know they shouldn’t send financial and medical records to people outside the company, just like they know they should eat fewer snack foods and more vegetables. But it’s hard to have good eating habits when grocery stores have most of their shelf space dedicated to snacks (as I know so well!). Similarly, the wide variety of data sharing mechanisms makes it hard for users to be responsible with business information all of the time.

Needless to say, every security vendor on the planet has unveiled a ‘comprehensive solution for DLP.’ Oh great – just what cash-strapped businesses need: another suite of security products (with one module to address each of those data sharing mechanisms) that they have to purchase just to keep a chip in the compliance game.

Well, maybe not.

Companies looking for a quick and cost-effective way to start addressing DLP should look at extending their log management solutions. Computing devices, for the most part, are capable of logging everything that is going on. It is analysis of that log data that helps knowledgeable people understand what is happening. Want to know what files were uploaded to a USB drive? Look at the logs for file writes. Want to know which users are using browser-based desktop sharing services? Look at the browser history logs. Want to know who is downloading specific files after hours? Look at the logs of the servers where the files reside. Want to know if employees are emailing files to their personal Gmail accounts? Look at the logs for specific IP addresses and correlate them with logs about email attachments. Alternatively, you can look at email trends for suspicious activity — a sharp spike in activity in the middle of the night is often evidence of a security attack or the malicious behavior of disgruntled employees.
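As one small, hypothetical example of the email check above (the column names, domains and size threshold are assumptions, not a product schema), the sketch below flags messages with large attachments addressed to personal webmail domains.

```python
# Minimal sketch of one DLP check: flag messages with large attachments
# addressed to personal webmail domains. The CSV columns
# (sender,recipient,attachment_name,attachment_bytes) are a hypothetical
# mail-log export.
import csv

PERSONAL_DOMAINS = {"gmail.com", "hotmail.com", "yahoo.com"}
SIZE_THRESHOLD = 1_000_000  # flag attachments over ~1 MB

def flag_external_attachments(mail_log_csv: str):
    with open(mail_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["recipient"].rsplit("@", 1)[-1].lower()
            if domain in PERSONAL_DOMAINS and int(row["attachment_bytes"]) > SIZE_THRESHOLD:
                print(f"REVIEW: {row['sender']} sent {row['attachment_name']} "
                      f"({row['attachment_bytes']} bytes) to {row['recipient']}")

if __name__ == "__main__":
    flag_external_attachments("mail_log.csv")
```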

If you have a scalable log management solution with analytics that make it easy to correlate events, and reporting capabilities that can easily group issues into top ten lists, then you have the makings of a DLP solution that can investigate any current (and future) data sharing mechanism.

But more than that — you also have an email trend analysis solution that can save you service or storage costs. A quick look at my own desktop email client shows email archive files doubling every six months. Why? Because there are hundreds of internal emails with 4MB Word and PowerPoint attachments that never get removed. I shudder to think of businesses with hundreds or thousands of employees with my email habits.

So if a business could prove that 70% of its email storage is large attachments sent between remote employees, it could come up with a more cost-effective internal file-sharing mechanism or automate a process to eliminate the attachment overkill. Proving these email trends should be just another job for your log analysis and reporting solution.
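A minimal sketch of that measurement, reusing the same hypothetical mail-log export as above (the corporate domain is an assumption): sum attachment bytes for internal sender/recipient pairs and divide by the total.

```python
# Minimal sketch: measure what share of total attachment bytes is internal
# employee-to-employee traffic, the kind of number that justifies a
# file-sharing alternative. Same hypothetical CSV columns as the DLP sketch.
import csv

INTERNAL_DOMAIN = "example.com"  # hypothetical corporate domain

def internal_attachment_share(mail_log_csv: str) -> float:
    internal = total = 0
    with open(mail_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            size = int(row["attachment_bytes"])
            total += size
            if (row["sender"].endswith("@" + INTERNAL_DOMAIN)
                    and row["recipient"].endswith("@" + INTERNAL_DOMAIN)):
                internal += size
    return internal / total if total else 0.0

if __name__ == "__main__":
    share = internal_attachment_share("mail_log.csv")
    print(f"{share:.0%} of attachment storage is internal employee-to-employee email")
```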

Speaking of analyzing email trends, I often have days when I seem to get very little email, and I always wonder if everyone is on holiday, or nobody wants to talk to me, or something is really wrong with my email service. So I spend time doing personal checks: can I get email from my Hotmail account or from a coworker, is my router working, is Vista downloading a massive patch? Then I call my ISP, who runs their tests and tells me “our service is working” — at which point I give up, because I’ve spent an hour of problem resolution on a problem that ‘doesn’t exist.’ But sometimes a chunk of email arrives the next day that clearly was supposed to be delivered the day before, so I know the problem was real, and I wonder what got lost in the process.

I suspect that a little trend analysis of my email logs would help with these transient customer service problems. In my case, since there is no evidence that I typically get 50 non-spam emails per day but today got only 5, my ISP doesn’t know what to do with my call, so they close the ticket, probably with a ‘couldn’t replicate problem’ tag. Would email trend analysis prevent the problem? Maybe not. However, if these types of customer service calls could be tagged with ‘abnormal email trends,’ I’d bet they would identify issues faster and I would get my chunk of email later the same day instead of 24-36 hours later — better customer service powered by log analysis.
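A toy sketch of that check (the daily counts are made up for illustration): compare today's message count to a trailing daily average and flag it when volume collapses.

```python
# Minimal sketch: compare today's inbound message count with a trailing average
# and flag an "abnormal email trend" worth attaching to a support ticket.
# daily_counts is a hypothetical series of messages-per-day from the mail logs.
from statistics import mean

def abnormal_volume(daily_counts, today_count, low_ratio=0.25):
    """Return True if today's volume is far below the recent daily average."""
    baseline = mean(daily_counts)
    return baseline > 0 and today_count < baseline * low_ratio

if __name__ == "__main__":
    history = [52, 47, 55, 49, 50, 48, 51]          # e.g. last seven days of non-spam mail
    print(abnormal_volume(history, today_count=5))   # True -> investigate delivery
```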

My point is that business requirements will always be adding more and more analysis tasks to IT’s to-do list. Most of the time the raw information to complete those tasks is buried somewhere in the logs. By leveraging a flexible reporting and analysis solution, IT can respond to these new tasks — and automate them if they are recurring — without ponying up more of IT’s precious budget for new solutions for every new task.

Industry News

Tenenbaum hit with $675,000 fine for music piracy
In another big victory for the Recording Industry Association of America (RIAA), a federal jury has fined Boston University student Joel Tenenbaum $675,000 for illegally downloading and distributing 30 copyrighted songs.

Did you know? EventTracker’s advanced network connection monitoring feature allows you to monitor network activity including web surfing, file sharing traffic, incoming network connections and more

NIST Issues Final Version of SP 800-53; Enables Rapid Adoption of the Twenty Critical Controls (Consensus Audit Guidelines)
The new version of 800-53 solves three fatal problems in the old version – calling for common controls (rather than system-by-system controls), continuous monitoring (rather than periodic certifications), and prioritized controls (rather than asking IGs to test everything). Those are the three drivers for the 20 Critical Controls (CAG).

Did you know? EventTracker supports all 15 automated security controls outlined in the Consensus Audit Guidelines (CAG)

Customer review of EventTracker
Northgate Minerals Corporation uses EventTracker for compliance with Sarbanes-Oxley and overall security.

Detecting ‘bot rot’ using Log Management and SIEM
There are many kinds of tools that can help detect the presence of a bot… Once a PC has been turned into a bot, it will begin exhibiting specific behaviors that include communicating with a command and control (C&C) master. This communication typically follows a pattern that is detectable by analyzing and/or correlating logs and looking for activities that stand out as “not the norm.” (A simple beaconing sketch appears at the end of this news roundup.)

Free Windows Security tools every admin must have
Since security and limited budgets are all the rage these days, here’s a set of free Windows server security tools you need to check out.
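Picking up the 'bot rot' item above: one "not the norm" pattern is beaconing, a bot checking in with its C&C host at regular intervals. The hedged sketch below (the log tuples stand in for parsed firewall or proxy records, not a specific product's format) flags source/destination pairs whose connection times are suspiciously evenly spaced.

```python
# Minimal sketch of beaconing detection: if a host keeps contacting the same
# destination at nearly fixed intervals, the inter-arrival times in the
# connection log will have very low variance.
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(events, min_hits=6, max_jitter=0.1):
    """events: iterable of (epoch_seconds, src_ip, dst_ip)."""
    by_pair = defaultdict(list)
    for ts, src, dst in events:
        by_pair[(src, dst)].append(ts)
    beacons = []
    for (src, dst), times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) + 1 >= min_hits and mean(gaps) > 0:
            if pstdev(gaps) / mean(gaps) < max_jitter:   # near-constant interval
                beacons.append((src, dst, mean(gaps)))
    return beacons

if __name__ == "__main__":
    sample = [(i * 300.0, "10.0.0.5", "203.0.113.9") for i in range(10)]  # every 5 minutes
    for src, dst, gap in find_beacons(sample):
        print(f"Possible C&C beacon: {src} -> {dst} every ~{gap:.0f}s")
```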

100 Log Management Uses #46 Account Monitoring (CAG control 11)

Today’s Consensus Audit Guideline control is a good one for logs — account monitoring. Account monitoring should go well beyond simply having a process to get rid of invalid accounts. Today we look at tips and tricks on things to look for in your logs, such as excessive failed access to folders or machines, inactive accounts becoming active, and other outliers that are indicative of an account being hijacked.
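To make two of those outliers concrete, here is a minimal, hypothetical sketch (the logon tuples stand in for parsed audit records, and the thresholds are examples): it flags an account with a burst of failed logons and an account that logs on after a long dormant period.

```python
# Minimal sketch of two account-monitoring outliers: a burst of failed logons
# for one account, and a logon by an account that had been dormant.
from collections import Counter
from datetime import datetime, timedelta

FAILED_THRESHOLD = 10
DORMANT_DAYS = 60

def review_logons(events, last_seen):
    """events: list of (account, datetime, success_bool); last_seen: account -> datetime."""
    failures = Counter(acct for acct, _, ok in events if not ok)
    for acct, count in failures.items():
        if count >= FAILED_THRESHOLD:
            print(f"ALERT: {count} failed logons for {acct}")
    for acct, ts, ok in events:
        if ok and acct in last_seen and ts - last_seen[acct] > timedelta(days=DORMANT_DAYS):
            print(f"ALERT: dormant account {acct} active again at {ts}")

if __name__ == "__main__":
    now = datetime(2009, 9, 1, 2, 15)
    review_logons(
        [("jsmith", now, False)] * 12 + [("old_contractor", now, True)],
        last_seen={"old_contractor": now - timedelta(days=120)},
    )
```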

By Ananth

100 Log Management Uses #45 Continuous vulnerability testing and remediation (CAG control 10)

Today we look at CAG Control 10 — continuous vulnerability testing and remediation. For this control, vulnerability scanning tools like Rapid7 or Tenable are the primary solutions, so how do logs help here? The reality is that most enterprises can’t patch critical infrastructure on a constant basis. There is often a fairly lengthy gap between when you have a known vulnerability and when the fix is applied, and so it becomes even more important to monitor logs for system access, anti-virus status, changes in configuration and more.

By Ananth

100 Log Management Uses #44 Data access (CAG control 9)

We continue our journey through the Consensus Audit Guidelines and today look at Control 9 – data access on a need-to-know basis. Logs help with monitoring the enforcement of these policies; user activities such as file and folder access, and the trends in that access, should all be watched closely.

By Ananth

100 Log Management Uses #42 Administrator privileges and activities (CAG control 8)

Today’s CAG control is a good one for logs – monitoring administrator privileges and activities. As you can imagine, when an Admin account is hacked or an Admin goes rogue, the impact of the breach can be devastating because of the power these accounts carry. Luckily, most Admin activity is logged, so by analyzing the logs you can do a pretty good job of detecting problems.
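As a small, hedged illustration (the record layout is an assumption, and the event-ID interpretation should be adapted to your environment), the sketch below pulls privileged logons out of parsed Windows Security events and flags those that occur outside business hours.

```python
# Minimal sketch: flag privileged logons that happen outside business hours.
# Windows Security event ID 4672 ("special privileges assigned to new logon")
# marks an administrative-level logon on Vista/2008-era systems; adjust as needed.
from datetime import datetime

BUSINESS_HOURS = range(7, 19)   # 07:00-18:59 local time

def after_hours_admin_logons(records):
    """records: iterable of dicts with 'event_id', 'account', 'time' (datetime)."""
    return [
        r for r in records
        if r["event_id"] == 4672 and r["time"].hour not in BUSINESS_HOURS
    ]

if __name__ == "__main__":
    sample = [
        {"event_id": 4672, "account": "CORP\\admin.jdoe", "time": datetime(2009, 9, 2, 3, 12)},
        {"event_id": 4624, "account": "CORP\\clerk",      "time": datetime(2009, 9, 2, 9, 5)},
    ]
    for r in after_hours_admin_logons(sample):
        print(f"REVIEW: privileged logon by {r['account']} at {r['time']}")
```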

By Ananth

100 Log Management Uses #41 Application Security (CAG control 7)

Today we move on to the Consensus Audit Guidelines’ Control #7 on application security. The best approach to application security is to design it in from the start, but web applications are vulnerable in several fairly common ways, many of which lead to attacks that can be detected by analyzing web server logs.
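For instance, a crude, hypothetical sketch like the one below can scan a web access log for request strings that resemble SQL injection, directory traversal or cross-site scripting probes. Real detection needs tuning and correlation, but even simple patterns surface obvious scanning.

```python
# Minimal sketch: scan a web server access log for request strings that look
# like SQL injection, directory traversal or reflected XSS attempts.
# Assumes the common Apache/IIS style of one request per line.
import re

SUSPICIOUS = [
    re.compile(r"union\s+select", re.IGNORECASE),   # classic SQL injection probe
    re.compile(r"(?:'|%27)\s*or\s+1=1", re.IGNORECASE),
    re.compile(r"\.\./\.\./"),                      # directory traversal
    re.compile(r"<script", re.IGNORECASE),          # reflected XSS attempt
]

def scan_access_log(path: str):
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SUSPICIOUS):
                print(f"REVIEW line {lineno}: {line.strip()}")

if __name__ == "__main__":
    scan_access_log("access.log")
```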

By Ananth

100 Log Management Uses #40 Monitoring Audit Logs (CAG control 6)

Today on CAG we look at a dead obvious one for logging — monitoring audit logs! It is nice to see that the CAG authors put so much value on the review of audit logs. We certainly believe it is a valuable exercise.

– By Ananth