Archive

100 Log Management uses #55 PCI Requirements VII, VIII & IX

Today we look at PCI-DSS Requirements 7, 8 and 9. In general these are not quite as applicable as the audit requirements in Requirement 10, which we will be looking at next time, but log management is still useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

New EventTracker 6.4; 15 reasons why your business may be insecure

Tuning Log Management and SIEM for Compliance Reporting 

The winter holidays are quickly approaching, and one thing that could probably make most IT Security wish lists is a way to produce automated compliance reports that make auditors say “Wow!” In last month’s newsletter, we took a look at ways to work better with auditors. This month, we’re going to do a deeper dive into tuning of log management and SIEM for more effective compliance reporting.

Though being compliant and having a strong, well-managed IT risk posture aren’t always the same thing, they are intertwined. Auditors look for evidence: documentation and reporting that validates and supports compliance activities. For example, if a policy or mandate requires that access to a database be protected and monitored, evidence such as a log management or SIEM report can show who accessed that database and when. If the users who accessed the database have roles that are approved for access, the reports can provide proof that the access controls were working.

To ensure that the reports generated by the log management and SIEM solutions support compliance work, it’s important to understand the IT controls underlying the mandates. Last month we discussed some of the regulations and standards that mention log reviews (including HIPAA, PCI, and FISMA). Compliance frameworks also highlight the importance of log reviews. ISO/IEC 27001:2005 calls for audit logs that record “user activities, exceptions, and information security events”(1), and COBIT 4.1 states that organizations should “ensure that sufficient chronological information is being stored in operations logs.”

The trick is to know how to translate the log management and SIEM information into reports that speak directly to the requirements. Log review is a fairly broad category – it’s what’s being monitored and reported in the logs that counts. Getting the right set of criteria to monitor for can be challenging, but mapping policy to IT controls is a good place to start. Some mandates are more prescriptive than others. PCI, for example, calls out which areas of reporting will be of high interest to auditors. Is there a credit card number being captured in the logs? That’s an indicator that an application is out of compliance with PCI, because PANs (Primary Account Numbers) are not allowed to be stored, unencrypted, anywhere in the payment systems.
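
As a concrete illustration, a first pass at spotting unmasked PANs in collected logs can be as simple as a regular expression plus a Luhn checksum to weed out random digit runs. This is only a sketch: the log file name is a hypothetical stand-in, and a production SIEM rule would handle separators, masking and false positives far more carefully.

```python
import re

# Candidate PAN: 13-16 digits, possibly separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum; filters out most random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_line(line: str):
    """Yield likely unmasked PANs found in a single log line."""
    for match in PAN_RE.finditer(line):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield digits

with open("app.log") as logfile:  # hypothetical aggregated log file
    for lineno, line in enumerate(logfile, 1):
        for pan in scan_line(line):
            print(f"line {lineno}: possible unmasked PAN ending in {pan[-4:]}")
```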

Some log management and SIEM tools have compliance reporting built in – they might, for example, have a PCI report that you can run that shows what an auditor might look for during an actual audit. This can help with the process by creating a baseline template for reporting, but keep in mind that the pre-canned reports may not tell the entire story. Review the reports to confirm that the correct information is being logged and reported on. Remember that templates created by vendors are designed to suit a large number of customers, so although some event information is clearly in the scope of certain compliance reports, your environment is (probably) not exactly the same as the other guy’s.

To make sure that you’re getting the right level of detail and that you’re covering the right areas, map which systems and events are specifically required for your environment and the set of regulations in your scope. For example, if you’re a hospital or other covered entity, be mindful that HIPAA requires there to be separate/unique logins for access to protected health information. But many healthcare organizations have systems where logins are shared by employees in violation of the regulation. A report that simply looks for unique logins may not tell the whole story because one login could be shared across multiple users. In this case, a covered entity may need to create additional correlation rules to identify that each user has his/her own unique login ID and that logins are timed out on shared machines to force unique logins for access.
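
One hedged sketch of such a correlation rule: flag any account that shows activity from two different machines within a short window, a common sign that one login ID is being shared by several people. The event tuples, user IDs and hostnames below are hypothetical stand-ins for whatever your log management solution normalizes logins into.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized login events: (timestamp, user_id, source_host)
events = [
    (datetime(2009, 11, 2, 9, 0), "rnurse01", "ws-er-01"),
    (datetime(2009, 11, 2, 9, 4), "rnurse01", "ws-icu-03"),
    (datetime(2009, 11, 2, 9, 6), "rnurse01", "ws-er-01"),
]

WINDOW = timedelta(minutes=10)

def find_shared_logins(events, window=WINDOW):
    """Flag accounts active from more than one host within a short window,
    a hint that one login ID is being shared by several people."""
    by_user = defaultdict(list)
    for ts, user, host in sorted(events):
        by_user[user].append((ts, host))
    for user, entries in by_user.items():
        for (t1, h1), (t2, h2) in zip(entries, entries[1:]):
            if h1 != h2 and t2 - t1 <= window:
                yield user, h1, h2, t1, t2

for user, h1, h2, t1, t2 in find_shared_logins(events):
    print(f"{user}: active on {h1} and {h2} within {t2 - t1}")
```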

What isn’t being monitored may matter for compliance as well. Email logs can be integrated into the larger log management and SIEM reporting console, but not all critical business correspondence goes through email nowadays. Many companies are also using IM and other peer-to-peer solutions for important business communications – if an organization approves IM for use, adding these systems to the log management review will provide a more complete view of whether or not critical data is being shared. Collaboration workspaces, like Lotus Notes, Microsoft SharePoint, and Google Docs, are important data repositories where controlled or regulated information may be shared. If these tools are in use in your organization, be sure to capture the relevant log and event information in your reporting console to show auditors that the broader universe of protected data is being monitored and reported on.

Don’t forget that compliance reporting covers technical IT controls as well as written policy creation and distribution. While a log management solution isn’t a document management tool, it may be possible and advisable to capture the log data from the document tool. Events such as an employee reviewing an acceptable use policy can be brought into the reporting console to round out the compliance reporting coverage.

Finally, be prepared to continue the tuning work as new systems and regulations come online. IT environments and the regulatory landscape change frequently, so don’t expect reporting on these to stay static. Rather, build on the existing mapping of policy to controls and re-use it where possible. For example, do you already have unique logins and tight access controls on a database? When a new regulation or standard is activated for your compliance program, look at what is already being reported on. It could be that you’re already gathering the right information. Another area for careful re-use is bringing new systems or applications online. Rather than re-invent the compliance reporting wheel, look at how previous versions of the system (or similar systems) were monitored by the log management or SIEM system and confirm that the same level and granularity of compliance reporting can be implemented in the new system. Knowing what gaps, if any, existed in the reporting on previous versions of application and system logs can also provide a solid baseline for log and reporting requirements when introducing a new solution.

Log files are treasure troves of data, much of which can be used in effective compliance reporting. To make the most of your solutions, read through the mandates and regulations and translate the words into areas of reporting that can be managed by a log or SIEM solution. Look for exposures in any systems that aren’t already covered and continue to tweak the reporting for new mandates. While this requires a little upfront work, the ongoing benefits of automated compliance reporting will more than make up for the effort. And no matter what time of year, more efficient compliance reporting is a great gift we can all appreciate.

Footnotes:

1 ISO/IEC 27001:2005, A.10.10.1

Did you know? EventTracker provides over 2000 pre-configured reports mapped to specific FISMA, PCI-DSS, HIPAA, NISPOM and Sarbanes-Oxley requirements.

Industry News

State pilot shows a way to improve security while cutting costs
The State Department may have cracked a vexing cybersecurity problem. With a program of continuous monitoring…and a focus on critical controls and vulnerabilities (Consensus Audit Guidelines), the agency has significantly improved its IT security while lowering cost.

Did you know? EventTracker supports all 15 automated controls of the Consensus Audit Guidelines to help organizations mitigate the most damaging threats known to be active today.

Compliance as security: The root of insanity
How companies lose their way by confusing a completed compliance checklist with ironclad security…This leads us to the undeniable realization that while a byproduct of security is compliance, the reverse couldn’t be further from the truth.

Did you know? EventTracker doesn’t just help you comply with regulatory requirements, but fundamentally improves your security posture and protects your organization from a wide variety of attacks, including zero-day attacks.

EventTracker 6.4 launches with deep support for virtual infrastructures
EventTracker version 6.4 offers SIEM support for all layers of the virtual environment including the hardware, the management application, the bare-metal hypervisor, the guest OS, and all resident applications. Also new is a dashboard that identifies new or out-of-the-ordinary behavior by user, admin, system, process and IP address to detect hitherto unknown attacks such as zero-day breaches and malware.

Panning for gold in event logs

Ananth, the CEO of Prism, is fond of remarking “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the pencil-necked guy with 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold – squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt and I am glad I am not a prospector. At Prism we are in the business of gold extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is that they are not quite sure what the gold looks like. When you are panning for gold and you are not sure that glinty thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold, we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves “what can we do for the person that does not know enough to really look or ask the right questions?” A lot of log management is looking for the out-of-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and we enable you to analyze them and make a decision. Little things that are interesting – like a new IP address coming into your enterprise 5,000 times. Or a user who generally performs 1,000 activities in a day suddenly doing 10,000. Or something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely, but a lot of the manual log review being performed by the guys with the alphabets after their names is really just manually chasing trends – this enables you to stop wasting significant time detecting the trend, recovering all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
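
The underlying idea is straightforward even if the engineering at scale is not. Here is a toy version, assuming you already aggregate per-entity daily event counts; all names and numbers below are made up, and this is a sketch of the general technique, not of EventTracker’s internals. Anything far outside its own baseline gets flagged, as does anything never seen before.

```python
from statistics import mean, stdev

def unusual_activity(history, today, min_sigma=3.0):
    """Flag entities whose activity today is far outside their own baseline.

    history: dict mapping an entity (user, IP, process...) to a list of
             past daily event counts; today: dict of today's counts.
    """
    alerts = []
    for entity, counts in history.items():
        if len(counts) < 7:          # need a baseline before judging
            continue
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(entity, 0)
        if sigma and abs(observed - mu) > min_sigma * sigma:
            alerts.append((entity, observed, mu))
    # Entities never seen before are "new" and interesting by definition.
    for entity in today.keys() - history.keys():
        alerts.append((entity, today[entity], 0.0))
    return alerts

history = {"jsmith": [950, 1020, 980, 1000, 990, 1010, 970]}
today = {"jsmith": 10000, "203.0.113.7": 5000}   # hypothetical counts
for entity, observed, baseline in unusual_activity(history, today):
    print(f"{entity}: {observed} events today vs baseline ~{baseline:.0f}")
```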

The response from the Beta customers indicates that we are onto something. After all, anything that can make our (hopefully more numerous) customers’ lives less tedious and their backs hurt less is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI

Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 talks about using AV software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about building and maintaining a secure network, for which log management is a great aid.

-By Ananth

100 Log Management uses #53 PCI Requirements III & IV

Today we continue our journey through the Payment Card Industry Data Security Standard (PCI-DSS). We left off last time with Requirement 2, so today we look at Requirements 3 and 4, and how log management can be used to help ensure compliance.

-By Ananth

Tips for working well with auditors; Inside the Walmart breach

Working Well with Auditors 

For some IT professionals, the mere mention of an audit conjures painful images of being trussed and stuffed like a Thanksgiving turkey. If you’ve ever been through an audit that you weren’t prepared for, you may harbor your own unpleasant images of an audit process gone wrong. As recently as 10-15 years ago, many auditors were just learning their way around the “new world” of IT, while just as many computer and network professionals were beginning to learn their way around the audit world.

At that time, auditors were seen as the people that swooped in and made an IT staffer’s life miserable – by telling them where their controls were failing, by pointing out control deficiencies (both real and imaginary) to management, and by recommending difficult-to-implement fixes that may have satisfied a regulatory requirement but didn’t take into account the underlying business processes.

Caught in a communications stalemate, many IT and audit departments operated at odds for years. And, unfortunately, that’s where some of us still are. But the world keeps turning. It’s time to move on – to leverage the complementary roles that IT and audit fulfill to achieve maximum effectiveness in our risk management programs. By working cooperatively with the internal or external audit teams, IT and security can gain support and cost-justification for risk mitigation projects.

Turning Log Review into Log Management

Think it’s not possible for IT, security and audit to work well together? Not so – consider log management. Many regulations explicitly or implicitly require log review. PCI is explicit, requiring that every log, for every system in the cardholder data environment (CDE), be reviewed every day(1). In healthcare, HIPAA calls for regular review of records(2), like audit logs, and FISMA, the Federal Information Security Management Act(3), calls for log review for federal agencies. What’s interesting about these mandates is that while all of them call for review of the log files, none of them specify how to accomplish a comprehensive log review program. Depending on the size of the organization and the number of systems on the network, the log files could account for gigabytes or even terabytes of data per week. Parsing through all of that information manually would be extremely labor intensive and inefficient. Automated log management, which aggregates the log information into a central spot and uses an automated parsing engine to sift through it all, is a more effective and achievable approach.
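
To make the “automated parsing engine” idea concrete, here is a minimal sketch, assuming BSD-style syslog lines have already been aggregated to one place. The sample lines and the failed-login pattern are illustrative assumptions, not any particular product’s rule set.

```python
import re
from collections import Counter

# BSD-style syslog line, e.g.:
# "Nov  2 09:14:01 dbhost01 sshd[4242]: Failed password for admin from 10.1.2.3"
SYSLOG_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<proc>[\w/-]+)(?:\[\d+\])?:\s(?P<msg>.*)$"
)

def summarize(lines):
    """Parse aggregated log lines and count auth failures per host,
    turning raw volume into a reviewable daily summary."""
    failures = Counter()
    for line in lines:
        m = SYSLOG_RE.match(line)
        if m and "Failed password" in m["msg"]:
            failures[m["host"]] += 1
    return failures

sample = [
    "Nov  2 09:14:01 dbhost01 sshd[4242]: Failed password for admin from 10.1.2.3",
    "Nov  2 09:14:07 dbhost01 sshd[4242]: Failed password for admin from 10.1.2.3",
]
for host, count in summarize(sample).most_common():
    print(f"{host}: {count} failed logins")
```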

Log management for security’s sake alone may be difficult to “sell” to executives as an investment that will benefit the organization. It’s not uncommon to hear budgetary war stories from IT and security administrators who unhappily watch log management funding get cut quarter after quarter in favor of other projects that are deemed more impactful to the company’s bottom line. And here is where the auditor/IT relationship can come into focus. Auditors are looking for controls and systems that enable them to sign off on log review requirements, while IT and security are looking for ways to meet those requirements effectively. By linking a log management implementation project to a compliance requirement, the cost-justification for the program is elevated and is far more likely to stay in the budget after the next round of cuts.

Tips for Working Well with Auditors

Hopefully you’re now convinced that auditors and IT work better in a cooperative rather than competitive environment. But if you’ve never worked with auditors before, you might be wondering how you can bridge the communication gap. To help you with that, here’s a short list of tips that I’ve seen work in a number of organizations:

  • Speak their Language – Know the regulations and mandates the auditor is checking for and be sure you are using normalized terms to describe your controls. For example, NIST SP800-53 refers to “audit records” and “user activity logs.” If your department has a different name for this information, be sure to have a notation in your reporting that explains why your “syslogs” are functionally equivalent to NIST’s “activity logs.”
  • Know the Frameworks – Many auditors use well-known compliance frameworks to round out their regulatory specific assessment process. If you have controls in place that map to these frameworks, call this out for the auditor. Using log management as an example there are maps to ISO/IEC 27001:2005, A.10.10.1: “Audit logs recording user activities, exceptions, and information security events shall be produced” and COBIT 4.1 DS13.3: “Ensure that sufficient chronological information is being stored in operations logs to enable … reconstruction, review and examination…”
  • Write it Down – While techies are great at white-boarding, they don’t always excel at written documentation. To an auditor, a perfectly implemented process and set of controls is still materially deficient without current documentation to go with it. Make sure not only that you have the required documents ready for the auditor, but also that they are up to date and accurate.
  • Make it Clear – Network maps that show zoning and segmentation as well as locations of relevant systems will help the auditors assess compliance and, where appropriate, help to reduce the scope of the audit zone. Name audit sensitive systems according to a standardized model, such as by location or purpose. While it might be fun to name your mail servers and firewalls Kenny, Cartman, Kyle, and Stan – it’s not going to help an auditor identify these systems during an assessment.
  • Anticipate their Reporting Needs – Generate reports that are mapped back to the regulations or mandates in question. In the case of log management systems, build rules that identify auditor hot-buttons such as: logging user access to a database that stores credit card information or proof of encryption controls in a database storing PII. (A sketch of one such rule follows this list.)
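
As a hedged example of an auditor hot-button rule, the fragment below flags reads of a cardholder table by any role an auditor would not expect to see there. The role names, field names and table prefix are assumptions for illustration, not any particular product’s schema.

```python
# Hypothetical rule: flag reads of the cardholder database by anyone
# outside the roles an auditor expects to see.
APPROVED_ROLES = {"dba", "billing_app"}          # assumption for illustration

def audit_db_access(db_events, approved=APPROVED_ROLES):
    """db_events: iterable of dicts parsed from database audit logs, e.g.
    {"user": "jdoe", "role": "helpdesk", "object": "cardholder.pan",
     "action": "SELECT"} (hypothetical field names)."""
    for ev in db_events:
        if ev["object"].startswith("cardholder.") and ev["role"] not in approved:
            yield f"{ev['user']} ({ev['role']}) performed {ev['action']} on {ev['object']}"

events = [
    {"user": "jdoe", "role": "helpdesk", "object": "cardholder.pan", "action": "SELECT"},
    {"user": "billsvc", "role": "billing_app", "object": "cardholder.pan", "action": "SELECT"},
]
for finding in audit_db_access(events):
    print("AUDIT FLAG:", finding)
```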

Summary

There’s an old aphorism that says you can catch more flies with honey than with vinegar. The same might be said of successful compliance work. While it may be tempting to recoil when you see the person with the compliance checklist, it’s more effective to work with, rather than against, the audit team. What you might find out is that not only is your next audit season a little less contentious, but also that you may have found an ally in the cost-justification process.

Footnotes:

1 PCI DSS Requirements 10.2 “Implement automated audit trails for all system components” and 10.6, “Review logs for all system components at least daily,” PCI DSS v1.2.1, July 2009
2 HIPAA 164.308(a)(1)(ii)(D): “. . . regularly review records of information system activity, such as audit logs,” Code of Federal Regulations (CFR) Part 164

3 NIST SP800-53, AC-13: “The organization reviews audit records (e.g., user activity logs) for inappropriate activities” and NIST SP800-92

Industry News

Big-Box breach – The inside story of Walmart’s attack
Internal documents reveal for the first time that the nation’s largest retailer was among the earliest targets of a wave of cyber attacks that went after the bank-card processing systems of brick-and-mortar stores around the United States beginning in 2005.

Did you know? EventTracker combines both Log Management and Change Monitoring capabilities to provide holistic protection from risks posed by hackers

Manage your Network right
Focus on specialized tools targeting specific areas of network management – As current IT trends push us to the lofty goal of cloud computing, and Software as a Service is promoted by all the biggest software vendors, now is the time to be sure that your network-management capabilities are as good as money can buy.

Note: EventTracker beats products from IBM, CA and BMC in the above article. Don’t miss the review on page 3.

PCI-DSS under the gun

Have you been wondering how some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, but the PCI guys are still claiming they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you certainly have an ineffective real-world standard or, at the very least, a problematic certification process.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and were deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet, at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has been breached.

Now, granted, Heartland was removed from their list of certified vendors after the breach, although perhaps this was just a bizarre Catch-22 in play – you are compliant until you are hacked, but when you are hacked the success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland. 1) The audit could have been inadequate or the results incorrect, leading to a faulty certification. 2) Heartland, in the intervening month, made a material change to the infrastructure that threw them out of compliance. 3) The hack was accomplished in an area outside the purview of the DSS. Or 4) Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate litigation-averse response to a problem. Anytime something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one or another (preferably larger) target, because when the manure hits the fan the person in the very front is going to get covered. Unfortunately this behavior does not do much to solve the problem, as everyone has their lawyers and no one is talking.

Regardless, maybe the PCI Council should not be saying things like “no compliant entity has ever been breached” and instead say something like “perhaps we have a certification issue here,” or “how do we reach continuous compliance?” or even “what are we missing here?”

-Steve Lafferty

100 Log Management uses #52 PCI Requirement I & II – Building and maintaining a secure network

Today’s blog looks at Requirement 1 of the PCI Data Security Standard, which is about building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring logs coming from security systems.

-By Ananth

100 Log Management uses #51 Complying with PCI-DSS

Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

– By Ananth

Lessons from the Heartland – What is the industry standard for security?

I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, but I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that “lowest common denominator” does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days and often with good reason. We have heard and made the arguments many times before that compliant does not equal secure and further, don’t embrace the standard, embrace the spirit or intent of the standard. But to be honest the PCI DSS Standard is far from minimal, especially by comparison to most other compliance regulations.

The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks by sophisticated hackers? Well, no, but there is no single regulation, standard or practice out there that will. You can make it hard or harder to get attacked, and PCI-DSS does make it harder, but impossible? No.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? I would venture that in the case of PCI DSS it is, in fact. The significant groaning, and the amount of work it took the industry to implement the standard, would lead one to believe that companies were not doing these things before, and that there are not a lot of worthless requirements in the DSS. PCI DSS makes a company take positive steps like running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards prior to the standard were not doing these things, imagine what it was like before.

The second claim is where the real absurdity lies — the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in terms of budget and resources – which is as little as possible (remember, this is capitalism – profits do matter). Using this as a basis, the actual standard for security is to do as little as possible for the least amount of money to avoid being successfully sued, having your executives put in jail, or losing business. Indeed PCI DSS forced companies to do more, but emphasis on the forced. (So, come to think of it, maybe Heartland did not follow the industry standard, as they are getting sued, but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what other people in the industry were doing – the “industry standard.” The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known the limitations; this no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were customers of Heartland had as well, and even if they did not, Mr. Carr went so far as to announce it in the call noted above. If this statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks here took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – and they knew the risk of a data breach and the follow-on expense existed. I thought the nature of risk was that you occasionally lose, and in the case of business, risk impacts your profits. This lawsuit seems to be like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or you get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)

Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated, and we have examined how log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15 – data loss prevention – and examine the many ways logs help in preventing data leakage.

By Ananth

Leverage the audit organization for better security; Bankers gone bad and more

Log Management in virtualized environments

Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time. Since we were using hardcoded IP addresses for the internal desktops, I could tell, just by looking at the log information, which person or device, inside the company, was doing what out on the Internet. If someone outside the company was performing a ping sweep, I saw the evidence in the log file and could respond immediately. This system worked fine for a couple of months. Then we installed a firewall, and a new mail server, and distribution servers in the DMZ, and, well, you get the idea. There was more log information than a single human could parse, not to mention the fact that while I worked a 50 hour week, the log files were on a 168 hour/week schedule.

While my example may seem almost laughably archaic to many, we’re seeing a similar data overload phenomenon occurring in today’s data centers and network operations centers (NOCs). Log management systems that were installed a few years ago to handle 100 servers and applications can’t scale to today’s needs. What started out as a few gigabytes of log information per week is now a terabyte a day. One reason for the log information explosion is that as companies become comfortable with the technology, they expand the log monitoring coverage scope. Another significant driving factor: virtualization and the advent of the virtualized data center.

Virtualization brings new challenges to network monitoring and log management. Virtualization enables administrators and users to install multiple unique server instances on a single hardware component. The result is a marked increase in server and application installs and a concurrent increase in server and application log data. In addition to more log information, virtualization presents a few additional challenges as well.

Inter-VM traffic refers to data moving between virtual machines running on the same physical machine under a single hypervisor. Because the traffic isn’t moving off the physical device, it will not be seen by monitoring solutions that use physical network-based monitoring points like span or mirror ports. Monitoring solutions that are installed directly on hosts will log the device’s information, but if there is just one agent on the host and it is not integrated with the hypervisor itself, inter-VM data transfer could still be missed. An alternative is to install agents on each virtual machine. Keep in mind, however, that this could impact corporate use licenses by increasing the total number of agent installs. And for companies that want an entirely agent-less solution, this alternative won’t work. Some additional alternatives for inter-VM traffic monitoring are presented below.

What else changes in the virtualized data environment? Well, zone-based policy enforcement might. Consider databases. These are often repositories of sensitive information and only approved for install in protected network zones. Virtualization allows organizations to move servers and applications quickly between locations and zones using V-motion functionality. The problem comes in when V-motion is used to move a service or server into a zone or location that has an incompatible protection policy. Think of a database of healthcare information that is V-motioned from a high-sensitivity zone into a DMZ. Log management can help here by alerting administrators when a system or service is being moved to a zone with a different policy control level. In order to do this, the log management solution must have access to V-motion activity information. VMWare provides migration audit trail information which can be fed into an organization’s log management console.
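
A hedged sketch of that alert logic, assuming migration audit events have already been parsed into source and destination zones; the zone names, numeric policy levels and VM name are invented for illustration:

```python
# Hypothetical zone sensitivity levels; higher = stricter controls.
ZONE_POLICY = {"phi_zone": 3, "internal": 2, "dmz": 1}

def check_migration(vm_name, src_zone, dst_zone, policy=ZONE_POLICY):
    """Alert when a migration lands a VM in a zone with a weaker
    protection policy than the one it left."""
    if policy[dst_zone] < policy[src_zone]:
        return (f"ALERT: {vm_name} moved {src_zone} -> {dst_zone}; "
                f"destination policy level {policy[dst_zone]} is below source level {policy[src_zone]}")
    return None

# Fed from the hypervisor's migration audit trail (hypothetical event):
alert = check_migration("ehr-db-01", "phi_zone", "dmz")
if alert:
    print(alert)
```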

So how do we perform comprehensive log management in virtualized environments? First, it’s critical that the inter-VM “blind-spot” is removed. One option has already been discussed – installing host-based log management agents on every virtual machine instance. If that’s not a good fit for your company consider purchasing a log management or security information and event management solution that has hypervisor-aware agents that can monitor inter-VM traffic. VMWare has a partner program, VMSafe™, which provides application programming interfaces (APIs) so vendor partner solutions can monitor virtual machine memory pages, network traffic passing through the hypervisor, and activity on the virtual machines.

To keep a handle on mushrooming installs, track and report all new server, service and application instances to a central operations or log management console. In cases where unapproved services are being brought online, this can be particularly helpful. For example, if a mail server install is detected, this could indicate the installation of a server that hasn’t had core services turned off – or worse – it could be an indication of an e-mail scam or bot-net.
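
In its simplest form this is an inventory diff. The sketch below assumes service-start events from the logs have been reduced to per-host sets; the hostnames, service names and "suspicious" list are all placeholders.

```python
# Known-good service inventory per host, built from earlier log review (assumption).
baseline = {"web01": {"httpd", "sshd"}, "app02": {"java", "sshd"}}

# Today's observed service-start events, parsed from logs (hypothetical data).
observed = {"web01": {"httpd", "sshd", "smtpd"}, "app02": {"java", "sshd"}}

SUSPICIOUS = {"smtpd", "sendmail", "postfix"}  # mail daemons outside the mail tier

for host, services in observed.items():
    for svc in services - baseline.get(host, set()):
        level = "ALERT" if svc in SUSPICIOUS else "NOTICE"
        print(f"{level}: new service '{svc}' started on {host}")
```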

If your log management provider isn’t VM-aware, check to see if any of your firewall or IPS vendors are. If so, the virtual-aware monitoring information from the firewall or IPS sensor on the hypervisor can be passed through to your log management solution in the same way that physical span port information is aggregated. Regardless of how the inter-VM traffic is collected (on-host agent, inter-VM log management, inter-VM firewall/IPS or other sensor), it’s imperative that the information is brought into the existing log management solution; otherwise, you’ll have a significant blind spot in your log management solution.

Finally, don’t forget to review existing rules and update or amend them as needed for the virtual environment. For example, have rules that manage virtual machine migration audit trails been added? Are new rules required for inter-VM traffic monitoring under policy or compliance mandates?

Virtualization has introduced great flexibility into networks and data centers. But with this flexibility come additional log data and new monitoring challenges. To make sure you aren’t missing out on any critical information, implement VM-aware monitoring solutions that work with your existing log management installation, and update rules and policies.

Related content: Managing the virtualized enterprise: New technologies, new challenges
Because of its many benefits, employing virtual technology is an apparent “no brainer” which explains why so many organizations are jumping on the bandwagon. This whitepaper examines the technology and management challenges that result from virtualization, and how EventTracker addresses them.

Industry News

How CISOs can leverage the internal audit process
Say the word auditor at any gathering of information security folks, and you can almost feel the hackles rise. Chief information security officers (CISOs) and internal auditors, by definition of their roles, are typically not the best of friends…Yet, the CISO’s traditional adversary can be an effective deputy.

Did you know? EventTracker provides a number of audit-friendly capabilities that can enhance your collaboration efforts such as over 2000 audit-ready reports, automated audit trail creation and more.

Lawsuit: Heartland knew data security standard was insufficient
Months before announcing the Heartland Payment Systems (HPY) data breach, company CEO Robert Carr told industry analysts that the Payment Card Industry Data Security Standard (PCI DSS) was an insufficient protective measure. This is the contention of a new master complaint filed in the class action suit against Heartland.

Note: We have a different take – Read Steve Lafferty’s (Prism’s VP of Marketing) commentary titled, Lessons from the Heartland – What is the industry standard for security? Leave a comment, and tell us your thoughts.

Prism Microsystems named finalist in Government Security News annual homeland security awards
EventTracker recognized as a leader in the security incident and event management category

EventTracker officially in evaluation for Common Criteria EAL 2+
Internationally endorsed framework assures government agencies of EventTracker’s security functionality

IT: Appliance sprawl – Where is the concern?

Over the past few years you have seen an increasing drumbeat in the IT community toward server consolidation through virtualization, with all the trumpeted promises of cheaper, greener, more flexible, customer-focused data centers with never a wasted CPU cycle. It is a siren song to all IT personnel, and quite frankly it actually looks like it delivers on a great many of the promises.

Interestingly enough, while reduced CPU wastage, increased flexibility and fewer vendors are all being trumpeted for servers, there continues to be little thought given to the willy-nilly purchase of hardware appliances. Hardware appliances started out as specialized devices built or configured in a certain way to maximize performance. A SAN device is a good example: you might want high-speed dual-port Ethernet and a huge disk capacity with very little requirement for a beefy CPU or memory. These make sense as appliances. Increasingly, however, an appliance is a standard Dell or HP rack-mounted system with an application installed on it, usually on a special Linux distribution. The advantages to the appliance vendor are many and obvious – a single configuration to test, increased customer lock-in, and a tidy upsell potential as the customer finds their event volume growing. From the customer perspective it suffers all the downsides that IT has been trying to get away from – specialized hardware that cannot be re-purposed, more locked-in hardware vendors, excess capacity or not enough, wasted power from all the appliances running; the list goes on and on, and contains all the very things that have caused the move to virtualization. And the major benefit of appliances? Easy to install seems to be the major one. So the end-user saves the hour or so it might take to provision a new machine and install software, and the downstream cost of maintaining a different machine type eats that up in short order.

Shortsighted IT managers still manage to believe that, even as they move aggressively to consolidate servers, it is still permissible to buy an appliance, even if it is nothing but a thinly veiled Dell or HP server. This appliance sprawl represents the next clean-up job for IT managers, or will simply eat all the savings they have realized in server consolidation. Instead of 500 servers you have 1 server and 1000 hardware appliances – what have you really achieved? You have replaced relationships with multiple hardware vendors with multiple appliance vendors, and worse, when a server blew up, at least it was all Windows/Intel configurations, so in general you could keep the applications up and running. Good luck doing that with a proprietary appliance. This duality in IT organizations reminds me somewhat of people who go to the salad bar and load up on the cheese, nuts, bacon bits and marinated vegetables, then act vaguely surprised when the salad bar regimen has no positive effect.

-Steve Lafferty

100 Log Management Uses #49 Wireless device control (CAG control 14)

We now arrive at CAG Control 14 – Wireless Device Control. For this control, specialty WIDS scanning tools are the primary defense, along with a lot of configuration policy. This control is primarily a configuration problem, not a log problem. Log management helps in all the standard ways – collecting and correlating data, monitoring for signs of attack, etc. Using EventTracker’s Change component, configuration data in the registry and file system of the client devices can also be collected and alerted on. Generally, depending on how one sets the configuration policy, when a change is made it will generate either a log entry or a change in the registry or file system. In this way EventTracker provides a valuable means of enforcement.

By Ananth

Can you count on dark matter?

Eric Knorr, the Editor in Chief over at InfoWorld, has been writing about “IT Dark Matter,” which he defines as system, device and application logs. Turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors and hopefully for SIEM vendors like us! He described these logs, or dark matter, as “widely distributed and hidden,” which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims; otherwise posts end up being on-line books. The good thing in that simplification, however, is that it often gives a good opportunity to point out other topics of discussion.

There are two great challenges in log management. The first is being able to provide the tools and knowledge to make the log data readily available and useful, which leads to Eric’s comment on how Dark Matter is “hidden,” as it is simply too hard to mine without some advanced equipment. The second challenge, however, is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed,” and there is an implied assumption that this Dark Matter is just there to be mined – that the Dark Matter will and does exist, and even more so, that it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – and this fatally weakens their usefulness.

Let’s use a simple illustration we all know well in computer security – almost the first thing a hacker will do once they penetrate a system is shut down logging, or, as soon as they finish whatever they are doing, delete or alter the logs. Consider the analogy of video surveillance at your local 7/11. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not real useful, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be compromised or simply deleted.
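
One common way to make tampering evident, shown here as a toy sketch rather than any particular product’s mechanism, is to chain a cryptographic hash through the records as they are collected: altering or deleting any record invalidates every digest after it. The seed value below is a stand-in for a properly protected secret.

```python
import hashlib

def chain_logs(records, seed=b"site-secret-seed"):  # seed is an assumption
    """Append a running SHA-256 over each record plus the previous digest.
    Altering or deleting any record breaks every digest after it."""
    prev = hashlib.sha256(seed).hexdigest()
    for record in records:
        prev = hashlib.sha256((prev + record).encode()).hexdigest()
        yield record, prev

def verify(chained, seed=b"site-secret-seed"):
    """Recompute the chain and compare against the stored digests."""
    prev = hashlib.sha256(seed).hexdigest()
    for record, digest in chained:
        prev = hashlib.sha256((prev + record).encode()).hexdigest()
        if prev != digest:
            return False
    return True

stored = list(chain_logs(["login root", "stop auditd", "drop table"]))
print(verify(stored))                  # True: untouched
stored[1] = ("nothing to see here", stored[1][1])
print(verify(stored))                  # False: tampering detected
```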

This is not to imply that there are no useful nuggets to be mined even if the records are distributed. But without an attempt to secure and preserve them, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened, simply because 1) it was trash and had no value, and 2) no one worried that someone 1000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of the record is 1) a matter of luck that the record was preserved and 2) inversely proportional to the interest of the creating parties in modifying it.

Steve Lafferty

Security threats from well-meaning employees; new HIPAA requirements; SMB flaw

The threat within: Protecting information assets from well-meaning employees

Most information security experts will agree that employees form the weakest link when it comes to corporate information security. Malicious insiders aside, well-intentioned employees bear responsibility for a large number of breaches today. Whether it’s a phishing scam, a lost USB or mobile device that bears sensitive data, a social engineering attack or downloading unauthorized software, unsophisticated but otherwise well-meaning insiders have the potential of unknowingly opening company networks to costly attacks.

These types of internal threats can be particularly hard to detect, especially if a company has placed most of its efforts on shoring up external security. For instance, some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small US companies. They send targeted phishing emails to the company’s treasurer that contain a link which, when opened, installs malicious software that harvests account passwords. Using this information, the criminals initiate wire transfers in small enough amounts to avoid triggering anti-money-laundering procedures. In cases like these, traditional defenses (firewalls, anti-virus etc.) prove to be useless as legitimate accounts are used to commit fraud. This story is not uncommon. In a study conducted by the Ponemon Institute earlier this year, it was found that over 88% of data breaches were caused by employee negligence. In another survey, of over 400 business technology professionals, by Information Week Analytics, a majority of respondents stated that locking down inside nodes was just as vital as perimeter security.

Employees, the weakest link

Let’s take a look at some of the easy ways that employees can compromise a company’s confidential data without really meaning to.

Social engineering attacks – In its basic form, this refers to hackers manipulating employees out of their usernames and passwords to get access to confidential data. They typically do this by tracking down detailed information that can be used to gain the trust of the employee. With the growing popularity of social networking sites, and the amount of seemingly innocent data that a typical employee shares on these sites, this information is not hard to track down for the resourceful hacker. Email addresses, job titles, work-related discussions, nicknames, all can provide valuable information to launch targeted phishing attacks or trick emails that lead an unsuspecting employee to hand over account information to a hacker posing as a trusted resource. Once the account information has been obtained hackers can penetrate perimeter defense systems. Read more

Industry News

SANS interviews Ananth, CEO of Prism Microsystems, as part of their Security Thought Leader program
Ananth talks with Stephen Northcutt of SANS about trends in Log Management/SIEM, cloud computing, and the “shallow-root” problem of current SIEM solutions

Court allows suit against bank for lax security 
In a ruling issued last month, the District Court for the Northern District of Illinois denied a request by Citizens Financial Bank to dismiss a negligence claim brought against it by Marsha and Michael Shames-Yeakel. The Crown Point, Ind. couple – customers of the bank – alleged that Citizens’ failure to implement up-to-date user authentication measures resulted in the theft of more than $26,000 from their home equity line of credit.

HITECH Act ramps up HIPAA compliance requirements
The American Recovery and Reinvestment Act of 2009 (ARRA) includes a section that expands the reach of the Health Insurance Portability and Accountability Act (HIPAA) and introduces the first federally mandated data breach notification requirement.

Note: While this article is a few months old, it is a must-read. In particular, the part about (stiffer) penalties being funneled back into the Department of Health and Human Services. HIPAA has essentially been a toothless tiger; this could be a sign that it is getting new teeth.

Former IT Specialist Hacks into Charity’s Network
A computer specialist has been arrested and indicted for breaking into his former employer’s computer network one year after he was let go. The admin is accused of causing significant damage by deleting records and crippling critical communication systems such as email and telephone.

Did you know? EventTracker offers advanced protection from insider threats, whether it’s a malicious employee or ex-employee looking to steal confidential data or an unsophisticated employee that accidentally causes a breach

Attackers target Microsoft IIS; new SMB flaw discovered
Microsoft updated an advisory, warning customers that attacks have been detected against a zero-day flaw affecting its FTP Service in Microsoft Internet Information Services (IIS). Meanwhile, new exploit code surfaced last weekend, targeting a zero-day vulnerability in Microsoft Server Message Block (SMB).

Did you know? EventTracker’s integrated file integrity and registry monitoring module detects Zero-day attacks that evade signature based solutions such as antivirus.

100 Log Management Uses #48 Control of ports, protocols and services (CAG control 13)

Today we look at CAG Control 13 – limitation and control of ports, protocols and services. Hackers search for these kinds of things – software installs, for example, may turn on services the installer never imagined might be vulnerable – and it is critical to limit new ports being opened or services installed. It is also a good idea to monitor for abnormal or new behavior that indicates that something has escaped internal controls: for instance, a system suddenly broadcasting or receiving network traffic on a new port is suspicious and should be investigated, and new installs or new services being run are also worth investigation. We will take a look at how log management can help you monitor for such occurrences.
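
A hedged illustration of the “new port” check: if scheduled port snapshots are already being collected into the log stream, flagging anything newly open is a simple set difference. The hostnames and port numbers below are made up for the sketch.

```python
# Yesterday's and today's listening ports per host, parsed from
# scheduled "netstat -an" output collected in the logs (hypothetical data).
yesterday = {"app01": {22, 443}, "db01": {22, 1433}}
today = {"app01": {22, 443, 6667}, "db01": {22, 1433}}

for host, ports in today.items():
    new_ports = ports - yesterday.get(host, set())
    for port in sorted(new_ports):
        # Anything newly listening deserves a look; 6667 (IRC) doubly so.
        print(f"INVESTIGATE: {host} began listening on port {port}")
```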

By Ananth

Doing the obvious – Why efforts like the Consensus Audit Guidelines are valuable

I came across this interesting (and scary if you are a business person) article in the Washington Post. In a nutshell, pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium sized companies. They do a targeted email attack on the finance guys and get them to click on a bogus attachment – when they do so, key-logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.

The article is definitely worth a read for a number of reasons, but what I found surprising was, first, that businesses do not have the same protection from electronic fraud as consumers do, so the banks don’t monitor commercial account activity as closely, and second, just how much this type of attack is happening. It turns out businesses have only 2 days to report fraudulent activity instead of a consumer’s 60 days, so businesses that suffer a loss usually don’t recover their money.

My first reaction was to ring up our finance guys and tell them about the article. Luckily their overall feel was that since Marketing spent the money as quickly as the Company made it, we were really not too susceptible to this type of attack as we had no money to steal – an unanticipated benefit of a robust (and well paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.

My other thought was how this demonstrated the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking and tackling activity. CAG Control 12 talks about monitoring for active and updated anti-virus and anti-spyware on all systems. Basic, but it really helps – remember, a business does not have 60 days but 2. You can’t afford to notice the malware a week after the signatures finally get updated.
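
As a sketch of what that monitoring can look like once AV heartbeat and signature-update events are in the log stream (the hostnames, field names and two-day threshold are illustrative assumptions):

```python
from datetime import datetime, timedelta

MAX_SIG_AGE = timedelta(days=2)   # a business has 2 days, not 60

# Latest AV heartbeat/signature-update state per host, parsed from logs
# (hypothetical field names).
av_status = {
    "finance-pc-01": {"running": True,  "sig_updated": datetime(2009, 9, 26)},
    "finance-pc-02": {"running": False, "sig_updated": datetime(2009, 9, 29)},
}

def stale_or_stopped(status, now, max_age=MAX_SIG_AGE):
    """Yield hosts whose AV is stopped or whose signatures are too old."""
    for host, info in status.items():
        if not info["running"]:
            yield host, "anti-virus service not running"
        elif now - info["sig_updated"] > max_age:
            yield host, f"signatures {(now - info['sig_updated']).days} days old"

for host, reason in stale_or_stopped(av_status, datetime(2009, 9, 30)):
    print(f"ALERT: {host}: {reason}")
```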

There are a number of other capabilities in advanced tools such as EventTracker that can also really help to prevent these attacks – change monitoring, tracking first-time executable launch, monitoring that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day. If you can’t do it all, at least start with the obvious: you might not be safe, but you will be safer.

Steve Lafferty

100 Log Management Uses #47 Malware defense (CAG control 12)

Today we continue our journey through the Consensus Audit Guidelines with a look at CAG 12 – malware defense. When people think about the pointy end of the stick for malware prevention they typically think anti-virus, but log management can certainly improve your chances by adding defense in depth. We also examine some of the additional benefits log management provides.

By Ananth

Managing the virtualized enterprise; historic NIST recommendations and more

Smart Value: Getting more from Log Management

Every drop in the business cycle brings out the ‘get more value for your money’ strategies. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with fast initial payback that can be used to solve a wide range of other problems. This series looks at how different log management tasks can be applied to solve a wider range of problems beyond the traditional compliance and security drivers, so that companies can get more value for their IT money.

Log Value Chain: data loss prevention, email trending for cost reduction and problem identification

The bubbling acronym soup of compliance regulations (HIPAA, PCI-DSS, FRCP, etc.) is putting more focus on data loss (leak) prevention (DLP) – in other words, preventing users from unintentionally giving out sensitive corporate information.

Computing gives us many ways to share data – USB drives, email, online file synchronization services, blogs, browser-based desktop sharing, Twitter – the list can seem endless. Every new innovation in data sharing creates a new way for employees to leak sensitive information. User education alone is not going to cut it. Most people know they shouldn’t send financial and medical records to people outside the company, just like they know they should eat fewer snack foods and more vegetables. But it’s hard to have good eating habits when grocery stores have most of their shelf space dedicated to snacks (as I know so well!). Similarly, the wide variety of data sharing mechanisms makes it hard for users to be responsible with business information all of the time.

Needless to say, every security vendor on the planet has unveiled their ‘comprehensive solution for DLP.’ Oh great – this is just what cash-strapped businesses need – another suite of security products (with one module to address each of those data sharing mechanisms) that they have to purchase just to keep a chip in the compliance game.

Well, maybe not.

Companies looking for a quick and cost-effective way to start addressing DLP should look at extending their log management solutions. Computing devices, for the most part, are capable of logging everything that is going on. It is analysis of that log data that helps knowledgeable people understand what is happening. Want to know what files were uploaded to a USB drive? Look at the logs for file writes. Want to know which users are using browser-based desktop sharing services? Look at the browser history logs. Want to know who is downloading specific files after hours? Look at the logs of the servers where the files reside. Want to know if employees are emailing files to their personal GMail accounts? Look at the logs for specific IP addresses and correlate them with logs about email attachments. Alternatively, you can look at email trends for suspicious activity – a sharp spike in activity in the middle of the night is often evidence of a security attack or the malicious behavior of disgruntled employees.
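
A hedged sketch of the last two ideas, assuming mail logs have already been parsed into simple records; the corporate domain, field names and after-hours window are invented for illustration:

```python
from datetime import time

CORPORATE_DOMAIN = "example.com"          # assumption for illustration
AFTER_HOURS = (time(22, 0), time(6, 0))   # 10pm-6am window

def dlp_findings(email_events):
    """email_events: dicts parsed from mail logs, e.g.
    {"sender": "bob@example.com", "recipient": "bob@gmail.com",
     "attachment": "roadmap.ppt", "when": time(2, 14)} (hypothetical fields)."""
    for ev in email_events:
        external = not ev["recipient"].endswith("@" + CORPORATE_DOMAIN)
        late = ev["when"] >= AFTER_HOURS[0] or ev["when"] <= AFTER_HOURS[1]
        if ev.get("attachment") and external:
            yield f"attachment {ev['attachment']} sent to external address {ev['recipient']}"
        elif late:
            yield f"after-hours mail from {ev['sender']} at {ev['when']}"

events = [
    {"sender": "bob@example.com", "recipient": "bob@gmail.com",
     "attachment": "roadmap.ppt", "when": time(14, 10)},
    {"sender": "eve@example.com", "recipient": "hr@example.com",
     "attachment": None, "when": time(2, 14)},
]
for finding in dlp_findings(events):
    print("REVIEW:", finding)
```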

If you have a scalable log management solution with analytics that make it easy to correlate events, and reporting capabilities that can easily group issues into top ten lists, then you have the makings of a DLP solution that can investigate any current (and future) data sharing mechanism.

But more than that — you also have an email trend analysis solution which can save you service or storage costs. A quick look at my own desktop email client shows my email archive files doubling every six months.  Why? Because there are hundreds of internal emails with 4MB Word and PowerPoint attachments that never get removed.  I shudder to think of businesses with hundreds or thousands of employees with my email habits.

So if these businesses could prove that 70% of their email storage is large attachments sent between remote employees, they could come up with a more cost-effective internal file-sharing mechanism or automate a process to eliminate the attachment overkill. Proving these email trends should be just another job for your log analysis and reporting solution.
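
The arithmetic behind that kind of proof is simple. A sketch, assuming a hypothetical message log that records total message size, attachment size, and whether the sender and recipient are internal:

    def attachment_share(messages, threshold_bytes=1_000_000):
        """Fraction of total mail volume taken up by large internal attachments."""
        total = sum(m["size_bytes"] for m in messages)
        large = sum(m["size_bytes"] for m in messages
                    if m.get("attachment_bytes", 0) >= threshold_bytes
                    and m["sender_internal"] and m["recipient_internal"])
        return large / total if total else 0.0

    # Hypothetical parsed mail-log records, not any real product's schema.
    msgs = [
        {"size_bytes": 4_200_000, "attachment_bytes": 4_000_000,
         "sender_internal": True, "recipient_internal": True},
        {"size_bytes": 12_000, "attachment_bytes": 0,
         "sender_internal": True, "recipient_internal": False},
    ]
    print(f"{attachment_share(msgs):.0%} of mail volume is large internal attachments")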

Speaking of analyzing email trends, I often have days when I seem to get very little email, and I always wonder if everyone is on holiday, or nobody wants to talk to me, or something is really wrong with my email service. So I spend time doing personal checks: can I get email from my Hotmail account or from a coworker, is my router working, is Vista downloading a massive patch?  Then I call my ISP, who runs their tests and tells me “our service is working” — at which point I give up, because I’ve spent an hour of problem resolution on a problem that ‘doesn’t exist.’  But sometimes a chunk of email arrives the next day that clearly was supposed to be delivered the day before, so I know the problem was real, and I wonder what got lost in the process.

I suspect that a little trend analysis of my email logs would help with these transient customer service problems.  In my case, without evidence that I typically get 50 non-spam emails per day but today got only 5, my ISP doesn’t know what to do with my call, so they close the ticket, probably with a ‘couldn’t replicate problem’ tag.  Would email trend analysis prevent the problem?  Maybe not. However, if these types of customer service calls could be tagged with ‘abnormal email trends,’ I’d bet they would identify issues faster and I would get my chunk of email later the same day instead of 24-36 hours later — better customer service powered by log analysis.
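
A trivial version of that check needs nothing more than a list of daily non-spam message counts (the threshold here is an arbitrary illustration, not a recommendation):

    def abnormal_day(history, today, floor=0.25):
        """Flag today's count if it falls below a fraction of the recent average."""
        baseline = sum(history) / len(history)
        return today < baseline * floor, baseline

    # Two weeks of hypothetical daily non-spam message counts.
    history = [48, 52, 50, 47, 55, 51, 49, 53, 50, 46, 52, 54, 49, 51]
    flagged, baseline = abnormal_day(history, today=5)
    if flagged:
        print(f"abnormal email trend: 5 messages vs. a baseline of about {baseline:.0f}")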

My point is that business requirements will always be adding more and more analysis tasks to IT’s to-do list. Most of the time the raw information to complete those tasks is buried somewhere in the logs. By leveraging a flexible reporting and analysis solution, IT can respond to these new tasks — and automate them if they are recurring — without ponying up more of IT’s precious budget for new solutions for every new task.

Industry News

Tenenbaum hit with $675,000 fine for music piracy
In another big victory for the Recording Industry Association of America (RIAA) a federal jury has fined Boston University student Joel Tenenbaum $675,000 for illegally downloading and distributing 30 copyrighted songs.

Did you know? EventTracker’s advanced network connection monitoring feature allows you to monitor network activity, including web surfing, file-sharing traffic, incoming network connections and more.

NIST Issues Final Version of SP 800-53; Enables Rapid Adoption of the Twenty Critical Controls (Consensus Audit Guidelines)
The new version of 800-53 solves three fatal problems in the old version – calling for common controls (rather than system-by-system controls), continuous monitoring (rather than periodic certifications), and prioritizing controls (rather than asking IGs to test everything). Those are the three drivers for the 20 Critical Controls (CAG).

Did you know? EventTracker supports all 15 automated security controls outlined in the Consensus Audit Guidelines (CAG).

Customer review of EventTracker
Northgate Minerals Corporation uses EventTracker for compliance with Sarbanes-Oxley and overall security.

Detecting ‘bot rot’ using Log Management and SIEM
There are many kinds of tools that can help detect the presence of a bot…Once a PC has been turned into a bot, it will begin exhibiting specific behaviors that include communicating with a command and control (C&C) master. This communication typically follows a pattern that is detectable by analyzing and/or correlating logs and looking for activities that stand out as “not the norm.”

Free Windows Security tools every admin must have
Since security and limited budgets are all the rage these days, here’s a set of free Windows server security tools you need to check out.

100 Log Management Uses #46 Account Monitoring (CAG control 11)

Today’s Consensus Audit Guideline Control is a good one for logs — account monitoring. Account monitoring should go well beyond simply having a process to get rid of invalid accounts. Today we look at tips and tricks on things to look for in your logs, such as excessive failed access to folders or machines, inactive accounts becoming active, and other outliers that are indicative of an account being hijacked.

By Ananth

100 Log Management Uses #45 Continuous vulnerability testing and remediation (CAG control 10)

Today we look at CAG Control 10 — continuous vulnerability testing and remediation. For this control, vulnerability scanning tools like Rapid7 or Tenable are the primary solutions, so how do logs help here? The reality is that most enterprises can’t patch critical infrastructure on a constant basis. There is often a fairly lengthy gap between when a vulnerability becomes known and when the fix is applied, so it becomes even more important to monitor logs for system access, anti-virus status, changes in configuration and more.

By Ananth

100 Log Management Uses #44 Data access (CAG control 9)

We continue our journey through the Consensus Audit Guidelines and today look at Control 9 – data access on a need to know basis. Logs help with monitoring of the enforcement of these policies, and user activities such as file, folder access and trends should all be watched closely.

By Ananth

100 Log Management Uses #42 Administrator privileges and activities (CAG control 8)

Today’s CAG control is a good one for logs – monitoring administrator privileges and activities. As you can imagine, when an Admin account is hacked or an Admin goes rogue, the impact of the breach can be devastating because of the power those accounts hold. Luckily, most Admin activity is logged, so by analyzing the logs you can do a pretty good job of detecting problems.

By Ananth

100 Log Management Uses #41 Application Security (CAG control 7)

Today we move on to the Consensus Audit Guideline’s Control #7 on application security. The best approach to application security is to design it in from the start, but web applications are vulnerable in several fairly common ways, many of which can lead to attacks that can be detected through analyzing web server logs.

By Ananth

100 Log Management Uses #40 Monitoring Audit Logs (CAG control 6)

Today on CAG we look at a dead obvious one for logging — monitoring audit logs! It is nice to see that the CAG authors put so much value on the review of audit logs. We certainly believe it is a valuable exercise.

– By Ananth

100 Log Management Uses #39 Boundary defense (CAG control 5)

Today, after a brief holiday (it is summer, after all), we continue our look at the SANS Consensus Audit Guidelines (CAG). Today we look at something very well suited for logs — boundary defense. Hope you enjoy it.

– By Ananth

EventTracker 6.3 review; Getting more from Log Management; Correlation techniques and more

Smart Value: Getting more from Log Management

Every dip in the business cycle brings out the ‘get more value for your money’ strategies, and our current “Kingda Ka style” economic drop only increases the urgency to implement them.  For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with a fast initial payback that can be used to solve a wide range of other problems. This series looks at how different log management tasks can be applied to solve a wider range of problems beyond the traditional compliance and security drivers, so that companies can get more value for their IT money.

Login attack identification is a common use of log management. Most folks monitor and analyze login failures from a security perspective. They use reporting and policy engines to identify anomalies in user login patterns, such as multiple login failures with different user names in a short amount of time, as indicators of a security attack, or for forensic or auditing purposes. Others take this one step further and apply this analysis to recognize the specific devices a customer uses to log in, as a means to prevent fraud or lower attack risks.  However, these login analysis and reporting tasks can have uses beyond this traditional security driver.
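
A minimal sketch of detecting that classic pattern (many user names failing from one source in a short window), assuming failures have already been normalized into dictionaries with hypothetical field names:

    from collections import defaultdict
    from datetime import datetime, timedelta

    def login_failure_bursts(failures, window=timedelta(minutes=5), min_users=5):
        """Flag sources that fail logins with many different user names quickly."""
        by_source = defaultdict(list)
        for ev in failures:
            by_source[ev["source_ip"]].append(
                (datetime.fromisoformat(ev["timestamp"]), ev["username"]))
        for src, attempts in by_source.items():
            attempts.sort()  # order each source's attempts by time
            for i, (start, _) in enumerate(attempts):
                users = {u for t, u in attempts[i:] if t - start <= window}
                if len(users) >= min_users:
                    yield src, len(users)
                    break

    failures = [{"source_ip": "10.0.0.9", "username": f"user{i}",
                 "timestamp": f"2009-07-01T03:0{i}:00"} for i in range(6)]
    for src, n in login_failure_bursts(failures):
        print(f"possible login attack from {src}: {n} user names in 5 minutes")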

Performance problem resolution

Login failures can also be an indicator of server or database misconfigurations, particularly since modern applications and databases depend on a complex collection of software modules. Those modules depend on login permissions to communicate just as much as we depend on login permissions to check our email.

Sometimes error messages about unknown login types or missing database connections are the result of duplicate installations of a particular module or slight variances in permissions within a database server cluster.  Depending on where the error sits, it may be fatal to the performance of a critical business service, or it may fly under the radar — until a specific set of circumstances causes service performance to rapidly unravel.

These types of performance problems will also be occurring more frequently because:

  • Security and compliance concerns:  Many companies are requiring more frequent password changes for both users and communicating software modules. More frequent changes mean more opportunities for problems, which creates more problem resolution work, which in turn eats up IT admin time that should be spent on problem prevention.
  • Virtualization: If misconfigurations get baked into virtual machine templates that are deployed over and over again, the situation definitely gets worse.  You end up with a template that causes the same performance problems, which then have to be solved over and over again.

These types of performance problems require log analysis solutions to identify error patterns and uncover unsuspected relationships between production environment deployment choices and error occurrences.
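
One simple way to start surfacing those relationships is to count error signatures per deployment attribute. A minimal sketch, with hypothetical field names (not any particular product’s schema):

    from collections import Counter

    def error_hotspots(events, attribute="vm_template"):
        """Count error signatures per deployment attribute to expose patterns."""
        counts = Counter((ev[attribute], ev["error_code"]) for ev in events)
        return counts.most_common(10)

    # Hypothetical parsed error events, tagged with the template they came from.
    events = [
        {"vm_template": "web-v2", "error_code": "LOGIN_TYPE_UNKNOWN"},
        {"vm_template": "web-v2", "error_code": "LOGIN_TYPE_UNKNOWN"},
        {"vm_template": "web-v1", "error_code": "DB_CONN_MISSING"},
    ]
    for (template, code), n in error_hotspots(events):
        print(f"{template}: {code} x{n}")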

Customer service

Login failures can also be a customer service indicator. For example, you can analyze the number of users who request password reminders and actually log in a few minutes later.  If your analysis shows that most users do not log in successfully after a failed login, then you have an indicator that a particular business goal is not being met.  The business is missing opportunities to connect with those users — and you have an opportunity to engage with business managers to figure out how to positively impact the business.
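
A sketch of that analysis, assuming password-reminder requests and successful logins are available as parsed events (the field names and ten-minute grace period are illustrative):

    from datetime import datetime, timedelta

    def reset_conversion(resets, logins, grace=timedelta(minutes=10)):
        """Fraction of password-reminder requests followed by a successful login."""
        ok = [(l["user"], datetime.fromisoformat(l["timestamp"])) for l in logins]
        converted = 0
        for r in resets:
            t0 = datetime.fromisoformat(r["timestamp"])
            if any(u == r["user"] and timedelta(0) <= t - t0 <= grace for u, t in ok):
                converted += 1
        return converted / len(resets) if resets else 0.0

    resets = [{"user": "alice", "timestamp": "2009-07-01T09:00:00"},
              {"user": "bob", "timestamp": "2009-07-01T09:05:00"}]
    logins = [{"user": "alice", "timestamp": "2009-07-01T09:04:00"}]
    print(f"{reset_conversion(resets, logins):.0%} of reminder requests led to a login")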

That’s the type of “tech hero” I think most IT managers aspire to be: the guys and gals who go beyond their day-to-day tasks to find ways to lighten burdens their colleagues didn’t know they were carrying.  The data to do this type of hero-work is in the logs. It just needs to be surfaced in a way that makes sense to business managers, web designers and application developers.

Doing more with the same

If you already have tools to consolidate and analyze log data for login failures and security breaches, you also have tools to prevent login misconfigurations from causing application performance problems, prevent login misconfigurations from creeping into VM templates, and provide insight into lost customer opportunities.  It is simply a matter of applying the tools to these additional situations.  However, we all know that just because something seems simple doesn’t mean that it is easy to achieve.  It’s when you apply a solution to multiple problems that you really put the claims of flexibility and usability to the test.  A good analysis tool should help you uncover patterns and relationships without creating a whole lot of extra work to bring in new data sources or run ad-hoc reports.

If you are trying to justify log management and analysis tools specifically for identifying login-based attacks, don’t forget to include an ROI roadmap that shows a timeline for benefits beyond security attacks. The reason I like ROI roadmaps is that they get business folks thinking about IT solutions and IT time saved as assets to be leveraged in the next round of efficiency and productivity improvements — instead of thinking about IT time as only a maintenance cost that should be eliminated.

The most effective roadmaps show how the solution will initially be used, the resulting benefits and the initial payback period as the first phase.  Subsequent phases would show how you would leverage the time saved to apply the solution to other areas, and the resulting benefits.  These subsequent phases don’t have to be completely fleshed out, but they should include enough substance to demonstrate that you are following one of the fundamental laws of good business execution — thinking strategically while acting tactically.

Industry News

4th of July hacker jailed after hospital hack
A Dallas hospital guard was ordered to jail following his arrest on charges of breaking into computers, planting malicious software and planning a massive distributed-denial-of-service (DDoS) attack on the Fourth of July.

Related Resource Read how Lehigh Valley Hospital uses EventTracker to get real-time alerts on unauthorized access, detect suspicious activity and security threats, and conduct forensic investigations.

Microsoft confirms another zero-day vulnerability
The vulnerability resides in Microsoft’s Office Web Components, which are used for publishing spreadsheets, charts and databases to the Web, among other functions. The company is working on a patch but did not indicate when it would be released, according to an advisory. “If exploited successfully, an attacker could gain the same user rights as the local user.”

Did you know? EventTracker’s powerful integrated Change Monitoring module detects zero-day attacks and prevents costly damage from these new attack types.

Insider arrested for stealing critical proprietary code from Financial Services Company
Wall Street is abuzz with news that a computer programmer has been arrested for stealing top-secret application code that drives his former company’s high-speed financial trading platform. Blogger says stolen code might have been Goldman Sachs’ ‘secret sauce’

Did you know? Log Management can not only proactively detect and help prevent incidents of insider theft, but also provide evidence to catch a culprit after the fact.

EventTracker 6.3 review
IT Pro Magazine review of EventTracker 6.3: “It [EventTracker] also provides a range of features not found in standard log management products…”

100 Log Management Uses #38 Meeting CAG controls 3 & 4

Today we continue our look at the Consensus Audit Guidelines, in this case CAG Controls 3 and 4 for maintaining secure configurations on system and network devices. We take a look at how log and configuration monitoring can ensure that configurations remain secure by detecting changes in the secured state.

By Ananth

100 Log Management Uses #37 Consensus Audit Guidelines (CAG) controls 1 and 2

Today we start in earnest on our Consensus Audit Guidelines (CAG) series by taking a look at CAG 1 and 2. Not hugely interesting from a log standpoint, but there are some things that log management solutions like EventTracker can help you with.

By Ananth

100 Log Management uses #36 Meeting the Consensus Audit Guidelines (CAG)

Today we are going to begin another series on a standard that leverages logs. The Consensus Audit Guidelines, or CAG for short, is a joint initiative of SANS and a number of Federal CIOs and CISOs to put in place some lower-level guidelines for FISMA. One of the criticisms of FISMA is that it is very vague, and implementation can be very different from agency to agency. The CAG is a series of recommendations that make it easier for IT to make measurable improvements in security by knocking off some low-hanging targets. There are 20 CAG recommended controls and 15 of them can be automated. Over the next few weeks we will look at each one. Hope you enjoy it.

By Ananth

New NIST recommendations; Using Log Management to detect web vulnerabilities and more

Log and security event management tame the wild west environment of a university network

Being a network administrator in a university environment is no easy task.  Unlike the corporate world, a university network typically has few restrictions over who can gain access; what type or brand of equipment people use at the endpoint; how those endpoint devices are configured and managed; and what users do once they are on the network.

A university network often has a higher volume of traffic than a private sector network does, as well as more wireless connections.  Rather than looking at faculty and students as users whose computing can be managed or dictated, university administrators must view them as customers whose needs must be met.  And the needs can be quite varied – everything from financial transactions at the campus bookstore to large file transfers for university research projects.  Needless to say, security for the network can be quite a challenge.

“In many ways, a university environment is much more complex than a corporate environment,” according to James Perry, the Information Security Officer at the University of Tennessee.  A university IT department almost functions more like an ISP than as a traditional IT department that sets computing standards and dictates how a network can be used.

Morris Reynolds, the Director of Information Security and Access Management at Wayne State University, echoes Perry’s comments.  “The students are basically our customers,” says Reynolds.  “Their computing needs present challenges, but if they complain, the IT group has to acquiesce.”

This requires a delicate balancing act.  On the one hand, the IT operations and security teams need to ensure the well-being of university computing resources, as well as compliance with regulations such as HIPAA, PCI and the Family Educational Rights & Privacy Act (FERPA).  On the other hand, universities must be careful to avoid control procedures that may be viewed as violating student privacy, suppressing the right of free speech, or stifling research programs and innovation.

In this “almost anything goes” environment, log and security event management are a boon to the university network administrator.  By correlating and analyzing log data from a wide range of devices, the admin is able to “see” so much more of what is happening on his network.  This helps him be more proactive in managing the operations and more effective in identifying security breaches based on university policies.  It’s a bit like bringing some semblance of order to the “Wild West” atmosphere of the college campus.

Log management helps bring order to chaos

For instance, Wayne State University has 33,000 students and 10,000 faculty members.  There are 10,000 concurrent users physically located on campus, and another 50,000 concurrent users coming into the network remotely.  The university network has more than 1,200 servers, 30,000 wired ports and 1,000 wireless access points.  The students provide their own PCs.  There’s no central control for the configuration of these endpoint devices, and they are largely unmanaged.

In this environment, a network firewall can easily experience more than 50,000 events per day.  When you take into consideration all the disparate event logs from all the devices, the total number of events logged in a single day is staggering.  And this is typical for many university networks.  Capturing the log data from all the network devices, normalizing it into a standard format, and correlating events can help to identify problems and lead to remediation.

For example, unmanaged endpoint devices like the students’ laptops are highly susceptible to viruses and malware that turn the PCs into nodes of a botnet.   When a botnet infection occurs, there is often a huge uptick in client-to-client session initiation.   As a result, there can be a major rise in the network bandwidth consumption by the infected machines.  There also may be an increase in the number of attempts to connect to the Internet.   These events are captured in device logs and can then be detected by a SIM/SIEM by correlating events across different devices such as routers and firewalls.  The SIM/SIEM can issue alerts and can remediate by restricting the students’ network access until their PCs have been cleaned.  This helps to limit further exposure and infection.
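
Reduced to a single signal, a sketch of that kind of check might look like this (the thresholds and field names are arbitrary illustrations, not any product’s rule set):

    from collections import Counter

    def botnet_suspects(sessions, baseline=20, multiplier=5):
        """Flag hosts whose client-to-client session count far exceeds a baseline."""
        counts = Counter(s["src"] for s in sessions if s["dst_is_client"])
        return [(host, n) for host, n in counts.items() if n > baseline * multiplier]

    # Hypothetical normalized session records from router/firewall logs.
    sessions = ([{"src": "lab-pc-17", "dst_is_client": True}] * 150 +
                [{"src": "lab-pc-02", "dst_is_client": True}] * 12)
    for host, n in botnet_suspects(sessions):
        print(f"{host}: {n} client-to-client sessions -- possible bot infection")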

Logs also provide specific insight into changes to network resources, such as updates to Active Directory or modifications to a server’s registry and .ini files.  The changes recorded in the logs can be cross-referenced to the university’s change management logs/system to assure the change was expected and approved.   When an unauthorized change has been detected, the appropriate alerting and remediation can take place by backing out unauthorized changes.

From a network operation perspective, logs can provide insight into operational reliability problems, such as when a device becomes “noisy” – in other words, it generates many log entries.  This usually means that there is a problem such as an imminent device failure, the need for a software patch, or a misconfiguration.  These events can trigger an alert to a technician who can tend to the device’s needs before a complete failure.
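
A sketch of a simple “noisy device” check along those lines, comparing today’s per-device log volume against a historical average (the field names and multiplier are arbitrary illustrations):

    from collections import Counter

    def noisy_devices(entries, history_avg, multiplier=10):
        """Flag devices whose log volume today far exceeds their historical average."""
        today = Counter(e["device"] for e in entries)
        return [(d, n) for d, n in today.items()
                if n > history_avg.get(d, 1) * multiplier]

    entries = [{"device": "switch-3"}] * 900 + [{"device": "router-1"}] * 40
    history = {"switch-3": 50, "router-1": 45}  # hypothetical daily averages
    for device, n in noisy_devices(entries, history):
        print(f"{device}: {n} entries today vs. about {history[device]} normally")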

In a university network environment where configuration standards and usage control just aren’t possible, log management and SIM/SIEM provide network administrators with a measure of control.  These tools help in identifying the root cause of issues by providing a holistic view into the network’s operational, security and audit logs in a centralized management tool, which in turn can assist in the detection of security breach, unauthorized change and operational events.

Compliance requirements also drive the need for log management

There is one way that university networks are similar to corporate networks.  A multitude of regulatory requirements is common in many large university environments, making compliance another driver for log and security information management.  Such regulations often dictate that logs be captured and monitored for events that violate a regulatory statute.  The University of Tennessee network is a typical example.

The UT network spans five campuses.  In addition to supporting the needs of the students and faculty, the network serves about 160 merchants, including bookstores, coffee shops and other sales operations.  Because these merchants accept payments via credit cards, this segmented portion of the network must meet PCI DSS compliance requirements.  Two of the UT campuses work with medical data, so HIPAA compliance is a must.  There’s financial data, meaning GLBA compliance, and student information that is governed by FERPA.  Log management is a vital tool in meeting compliance requirements and validating the efforts.

It’s a challenge to oversee the operations and security of a university network environment.  Perhaps that’s why so many university network administrators use their log management and SIM/SIEM tools to take the environment from “wild” to “mild.”

Brian Musthaler, CISA – is a Principal Consultant with Essential Solutions Corp.  A former audit and information systems manager, he directs the firm’s evaluations and analysis of enterprise applications, with a particular interest in security and compliance tools.

Industry News

Federal IT Security recommendations released in final NIST draft

The National Institute of Standards and Technology has collaborated with the military and intelligence communities to produce the first set of security controls for all government information systems, including national security systems.

Did you know? EventTracker offers a comprehensive solution that enables compliance with multiple regulations, standards and guidelines including NIST recommendations, FISMA, PCI-DSS, Sarbanes-Oxley, HIPAA, Consensus Audit Guidelines (CAG) and others.

T-Mobile net reportedly hit by hacker/extortion attack

T-mobile customers are awakening this morning to reports that hacker/extortionists have victimized the cellular carrier through a massive network breach resulting in the theft of untold amounts of corporate and customer data, which they’re threatening to sell to the highest bidder.

Did you know? EventTracker provides 24/7 insight into enterprise networks and detects security threats/breaches in real-time for immediate remediation before costly reputation-damaging consequences occur

Hackers hit US Army websites

A group of computer hackers based in Turkey breached the sites of two U.S. Army facilities, leveraging SQL injection attacks, according to reports. “The question of vulnerability to SQL injection attacks has come up frequently… The number is rising dramatically. SQL injection is a serious threat. Not enough organizations are paying attention to it.”

Did you know? Log Management can help you detect and prevent web attacks, including SQL injection attacks.

Revamped EventTracker KnowledgeBase

The EventTracker KnowledgeBase, a free repository of detailed descriptions and information on over 20,000 event logs, has a new look! The revamped web portal now provides easy Google-like searching and options for advanced search to quickly pinpoint specific events.

100 Log Management uses #35 OWASP web vulnerabilities wrap-up

We have been talking a lot recently about web vulnerabilities, specifically the OWASP Top 10 list. We have covered how logs can help detect signs of web attacks in OWASP A1 through A6. A7 – A10 cannot be detected by logging, but in this wrap-up of the OWASP series we’ll take a look at them.

-By Ananth

100 Log Management uses #34 Error handling in the web server

Today we conclude our series on OWASP vulnerabilities with a look at A6 — error handling in the web server. Careless configuration, or no configuration at all, of error handling in a web server gives a hacker quite a lot of useful information about the structure of your web application. While careful configuration can take care of many issues, hackers will still probe your application, deliberately triggering error conditions to see what information is there to be had. In this video we look at how you can use web server logs to detect whether you are being probed by a potential hacker.

-By Ananth

100 Log Management uses #33 Detecting and preventing cross site request forgery attacks

Today’s video blog continues our series on web vulnerabilities. We look at OWASP A5 — cross site request forgery hacks and we discuss ways that Admins can help both prevent these attacks and detect them when they do occur.

-By Ananth

100 Log Management uses #32 Detecting insecure object references

Continuing on our OWASP series, today we look at Vulnerability A4, using object references to grab important information, and how logs can be used by Admins to detect signs of these attacks. We also look at some best practices you can employ on your servers to make these attacks more difficult.

By Ananth

100 Log Management uses #31 Detecting malicious file execution in the web server

Today’s video continues our series on web vulnerabilities. We look at OWASP A3 — malicious code execution attacks in the web server — and discuss ways that Admins can help both prevent these attacks and detect them when they do occur.

-By Ananth

Compromise to discovery

The Verizon Business Risk Team publishes a useful Data Breach Investigations Report drawn from over 500 forensic engagements over a four-year period.

The report describes a “Time Span of Breach” event broken into four stages of an attack. These are:

– Pre-Attack Research
– Point of Entry to Compromise
– Compromise to Discovery
– Discovery to Containment

The top two are under the control of the attacker, but the rest are under the control of the defender. Log management is particularly useful in discovery. So what does the 2008 version of the DBIR show about the time between Compromise and Discovery? Months. Sigh. Worse yet, in 70% of the cases, Discovery was the victim being notified by someone else.

Conclusion? Most victims do not have sufficient visibility into their own networks and equipment.

It’s not hard but it is tedious. The tedium can be relieved, for the most part, by a one-time setup and configuration of a log management system. Perhaps not the most exciting project you can think of but hard to beat for effectiveness and return on investment.

Ananth

100 Log Management uses #30 Detecting Web Injection Attacks

Today’s Log Management use case continues our look at web vulnerabilities from the OWASP website. We will look at vulnerability A2: how injection techniques, particularly SQL injection, can be detected by analyzing web server log files.
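
As a rough illustration of the idea (not how any particular product implements it), a naive scan of Apache-style access log lines for a few well-known injection fingerprints might look like this:

    import re
    from urllib.parse import unquote

    # A few common SQL injection fingerprints; real rule sets are far larger.
    SQLI = re.compile(r"('|%27|--|\bunion\b.*\bselect\b|\bor\b\s+1=1)", re.I)

    def suspicious_requests(log_lines):
        """Yield decoded request strings that match a SQL injection signature."""
        for line in log_lines:
            m = re.search(r'"(?:GET|POST) (\S+)', line)
            if m and SQLI.search(unquote(m.group(1))):
                yield m.group(1)

    lines = ['1.2.3.4 - - [01/Jul/2009] "GET /item?id=1%27%20OR%201=1-- HTTP/1.1" 200 512']
    for req in suspicious_requests(lines):
        print("possible SQL injection:", req)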

By Ananth

100 Log Management uses #29 Detecting XSS attacks

Today we begin our series on web vulnerabilities. The number 1 vulnerability on the OWASP list is cross site scripting, or XSS. XSS seems to have replaced SQL injection as the new favorite of web attackers. We look at using web server logs to detect signs of these XSS attacks.

-Ananth

EventTracker gets 5 star review; 100 Log Management uses and more

Have your cake and eat it too: improve IT security and comply with multiple regulations while reducing operational costs and saving money

Headlines don’t lie. The number and severity of security breaches suffered by companies has consistently increased over the past couple of years, and statistics show that 9 out of 10 businesses will suffer an attack on their corporate network in 2009. At the same time, there is growing pressure to comply with regulations and standards such as PCI-DSS, HIPAA and Sarbanes-Oxley, where non-compliance can result in large fines and costly long-term damage to corporate reputations. However, in the midst of an economic recession, when companies are tightening their belts, reducing headcount and scrutinizing project costs, it is getting difficult for IT professionals to get the funding they need to meet their goals. The silver lining is that SIEM solutions allow you to reduce security risks and comply with multiple regulations, all while helping you save money – a win-win situation in the current environment.

The new IT landscape

From insider theft to highly targeted malware and zero-day attacks, cyber crime is evolving rapidly, and what was secure last year is not necessarily secure this year. With the proliferation of mobile devices, the new avenues for data theft are plentiful – USB thumb drives, PDAs and iPods are easy to conceal, and copying confidential data onto these devices often takes just a couple of minutes. And with corporate networks accommodating not just employees, but also outside contractors and third-party providers across multiple locations, the risk is real, serious and extremely hard to minimize without clamping down on productivity.

On the other hand, cyber crime has evolved from a hobbyist occupation into a multi-billion dollar industry. Organized, profit-driven groups use automated processes and highly targeted attacks to infiltrate networks in very little time and surreptitiously siphon off enterprise data. Certainly the threat to critical IT assets is only increasing in volume and sophistication. And with the global meltdown, the impetus behind data theft has grown multifold – from disgruntled ex-employees who have been victims of layoffs to desperate people willing to take desperate measures for financial gain. With the capabilities of IT departments being pushed to their limits, the recession has led to a perfect storm in the world of IT security, and criminals are taking advantage of this storm to attack. It is no longer a question of if but when and how – when will an attack occur, and how costly will it be?

While dealing with this widening threat landscape, IT departments are still tasked with maintaining compliance with regulatory standards and government stipulations that are often vague and difficult to translate into implementation guidelines. Non-compliance is not an option since the potential for costly repercussions, whether in the form of fines, lawsuits, litigation or corporate reputation damage, is high.

The challenge 

So the challenge for IT lies in managing multiple requirements in the face of budget cuts, increasing layoffs and shrinking resources. As companies scrutinize every investment, fear-factor arguments for funding security projects are waning for a number of reasons, including:

  • “We have not been attacked so far, therefore we must be immune” syndrome
  • Absence of a widespread, debilitating (9/11 style) malware attack
  • Absence of hard figures on the economic impact of a security breach
  • Measuring ROI on security investments is difficult because it is based on a company’s tolerance for risk, and the money “saved” is intangible.
  • It can be difficult to prove that the organization would have been attacked without the solution in place.

It is no wonder, then, that compliance remains the main driver for many security solutions. However, because of the recession, compliance projects are facing increased competition from other business and revenue-generating initiatives. So while companies understand that compliance is mandatory, a security professional may only get 30% of the funding requested. This gives rise to two challenges:

  1. Minimizing the cost of compliance
  2. Justifying expense

And the best way to minimize cost and justify funding is by demonstrating that the solution in question will address multiple requirements outside the limited scope of regulatory compliance, and provide a clear and tangible ROI.

The pressure is on to do more with less

The solution

The good news is that SIEM solutions like EventTracker can help you do just that – meet multiple requirements spanning compliance and security while providing tangible, demonstrable operational cost-savings. Benefits include:

  • In-depth protection of critical IT assets from both internal and external breaches
  • Compliance with multiple regulatory frameworks including Sarbanes-Oxley, HIPAA, PCI-DSS, FISMA, GLBA and more, as well as support for evolving mandates
  • Cost-savings in the form of reduced dependence on existing resources, optimized operations, improved system availability and quick resolution of issues before they escalate into costly disruptions.

SIEM for Security

A comprehensive SIEM solution like EventTracker allows you to:

  • Detect and prevent damage from Zero-Day and other new forms of attack vectors
  • Monitor user activity and USB device usage for unauthorized internal access to sensitive data
  • Monitor networks for suspicious activity that often precedes a security breach
  • Create customized correlation rules to detect common and critical security conditions in real-time.
  • React quickly and early to suspicious activity with instant alerts and automatic remediation for proactive prevention
  • Research the sequence of events that led to an attack and test your security improvements by playing back a saved event sequence.

SIEM for Compliance

SIEM solutions help you wade through the vague guidelines of compliance requirements with predefined reports mapped to specific regulatory requirements. A comprehensive SIEM solution will help you:

  • Automate the entire compliance process from securing your environment, establishing baselines, tracking user activity, alerting to potential violations to creating audit-ready reports
  • Demonstrate to auditors that periodic reviews are being conducted in compliance with internal and external policies
  • Comply with a variety of regulatory standards spanning multiple verticals

SIEM for Operations

SIEM solutions enable you to increase IT efficiency and decrease the total cost of ownership by:

  • Automating routine tasks and decreasing dependence on existing resources
  • Optimizing operations by monitoring, alerting and reporting on disk space trends, CPU usage trends, runaway processes, high-memory usage, service downtime
  • Enabling IT staff to quickly diagnose issues before they escalate into costly disruptions
  • Accelerating troubleshooting and simplifying forensic investigations

SIEM solutions such as EventTracker provide a fast and demonstrable ROI within 8-9 months and help you save on average $100 per server per month in ongoing maintenance and operational costs.

Selecting the right SIEM solution

Now that you are able to justify funding for a SIEM solution, the next step is to identify the right SIEM solution for your environment.  This is no easy task, for two reasons. First, there is a large number of products available, and vendors have done a great job of making their products sound roughly the same in core features such as correlation, reporting and collection. Second, vendors are busy differentiating themselves on features that in many cases have little or nothing to do with core functionality.

The reality is that SIEM solutions are typically optimized for different use-cases, and you need to find a solution that will best meet your own needs. To help define your requirements and determine the best solution for your organization, you should answer the following questions:

  • What is the easiest way to automate the collection of events?
  • How can I store all that data securely and efficiently so it is still accessible?
  • How can I gain actionable intelligence from all that data in real-time?
  • How do I generate reports out of consolidated data?
  • Can the solution handle my unique requirements without expensive customization?
  • How long will it take me to get a solution up and running, and what are my ongoing costs?
  • Which offering has the broadest feature set to maximize my investment?

A comprehensive SIEM solution should automate the secure collection and consolidation of all enterprise events to a central point and make them readily available to IT personnel for analysis. The architecture needs to be scalable and highly configurable while still being easy to install and quick to implement. It should provide an efficient, secure, tamper-proof event archive for reporting and compliance requirements, a powerful real-time correlation engine that operates on the event stream, and a reporting and analytics engine for ad-hoc and scheduled querying.

Make sure the solution can receive and process logs from all platforms and sources in your network including Syslog, Syslog NG, SNMP V1/V2, Windows, Solaris BSM, IIS, Exchange, Oracle, SQL Server and has the capability to monitor system thresholds such as CPU, disk usage and memory, as well as USB devices. Look for a solution where the agents can be centrally configured, managed and distributed and can perform sophisticated filtering of the event logs prior to transmission to the central collection point, so if reduction of the event stream is possible, it can be easily accomplished.

A good SIEM solution should allow you to access the data in the way that fits your organizational structure. You may want a single central console which includes a UI for administration, configuration and event viewing, reporting and analysis. Or support for multiple, distributed consoles. Or a role-based web interface integrated with Active Directory for single sign-on support.

For larger organizations that have multiple sites or are organized into multiple units within the same site, it may be necessary for all of the event log data to be consolidated and archived in a single place for compliance purposes, with the correlation and day to day management the responsibility of different, distinct IT groups.

Think about how events are stored – with millions of events generated daily, a database can be an expensive and slow medium for archiving data.  Storing even a small time period of event data can require a huge database, a big database server machine and additional expensive database licenses.  Databases also do not guarantee secure storage.  Look for a SIEM solution that can archive the original log in a compressed and secured archive optimized for the write-once/read-many-times nature of event log information.
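
As an illustration of the write-once idea (a sketch of the concept only, not any vendor’s archive format), a day’s raw logs can be compressed once and sealed with a digest so that later tampering is detectable:

    import gzip, hashlib, json

    def seal_logs(day, lines, prefix="archive"):
        """Compress a day's raw logs and record a digest so tampering is detectable."""
        blob = "\n".join(lines).encode("utf-8")
        archive = f"{prefix}-{day}.log.gz"
        with gzip.open(archive, "wb") as f:
            f.write(blob)                      # write once, never rewrite
        digest = hashlib.sha256(blob).hexdigest()
        with open(archive + ".sha256", "w") as f:
            json.dump({"file": archive, "sha256": digest}, f)
        return archive, digest

    archive, digest = seal_logs("2009-06-15", ["event one ...", "event two ..."])
    print("sealed", archive, "digest", digest[:16] + "...")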

A robust correlation and analytics engine is critical to ongoing security efforts and enables powerful real-time monitoring and rules-based alerting on the event stream. Rules can watch for multiple, seemingly minor unrelated events occurring on multiple systems across time that together represent clear indications of an impending system problem or security breach. Detecting these problems in real-time prevents or minimizes costly impact on the business.
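
As a toy illustration of such a rule (the event type names and window are invented for illustration, not any vendor’s rule language):

    from datetime import datetime, timedelta

    def correlate(events, sequence, window=timedelta(minutes=30)):
        """Fire when all event types in `sequence` occur, in order, within `window`."""
        needed = list(sequence)
        start = None
        for ev in sorted(events, key=lambda e: e["time"]):
            if ev["type"] == needed[0]:
                start = start or ev["time"]      # clock starts at the first match
                if ev["time"] - start > window:
                    return False
                needed.pop(0)
                if not needed:
                    return True
        return False

    t = datetime(2009, 6, 1, 2, 0)
    events = [{"type": "av_disabled", "time": t},
              {"type": "new_admin_account", "time": t + timedelta(minutes=4)},
              {"type": "bulk_file_read", "time": t + timedelta(minutes=9)}]
    if correlate(events, ["av_disabled", "new_admin_account", "bulk_file_read"]):
        print("correlation rule fired: possible breach in progress")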

Integrated change monitoring and configuration control allows you to monitor and manage changes that occur on the Windows file system and registry – often the only clue IT staff have of Zero-day and malware attacks or installation of unauthorized or unsupported software. By quickly identifying those hard to find changes you will enhance security, reduce system downtime, and lower overall IT costs.

A powerful report wizard enables you to create and generate meaningful reports, either ad-hoc or on a schedule, with scheduled reports generated during off-hours and distributed to subscriber lists. Look for flexibility in report delivery, such as PDF, CSV or DOC format delivered via email or RSS feed. In addition, you should be able to research the sequence of events that led to an attack or security breach and test your security improvements by playing back a saved event sequence.

Finally, evaluate solutions for long-term value rather than initial price. A vendor might offer you a great price that fits your budget initially, but what happens when your IT infrastructure grows? How will licensing scale when your log volume increases beyond solution capacity? Look also for hidden costs in terms of separate modules, compliance packs, storage, training and support. The last thing you need is unexpected costs that you never accounted for.

The bottom line

Limited-scope solutions may be beneficial for extremely specific requirements, but in the current economy, the investment required for such solutions is often hard to justify. Also, procuring a number of solutions to meet a variety of disparate requirements can prove a burden on shrinking staff and existing resources. In order to maximize spend, companies must purchase products that provide a wide range of functionality addressing multiple areas. SIEM solutions such as EventTracker not only provide broad capabilities that can be applied across compliance and security use cases but also help you save hard dollars on operational costs.

Industry News

EventTracker gets 5 star review from SC Magazine
“EventTracker is a robust security information and event log management (SIEM) tool that has a lot of useful features.”

SMBs often hit hardest by botnets
A small or midsize business (SMB) is ultimately a more attractive target for spammers, botnet operators, and other attackers than a home user mainly because it has a treasure trove of valuable data without the sufficient IT and security resources to protect it.

Did you know? Granular licensing, predictable pricing and modest resource requirements allow SMBs to take advantage of EventTracker’s advanced security, regulatory and operational monitoring capabilities without breaking the bank.

UC Berkeley says hacker broke into health services databases
The University of California at Berkeley Friday disclosed that hackers broke into restricted computer databases in the campus health-services center, as the university began notifying current and former Berkeley students their personal information may have been taken.

Did you know? EventTracker offers complete coverage from the server to the workstation and USB level, real-time correlation and alerting, to ensure that IT personnel are instantly notified of any suspicious activity before costly damage is caused.

100 Log Management uses #28 Web application vulnerabilities

During my recent restful vacation down in Cancun I was able to reflect a bit on a pretty atypical use of logs. This actually turned into a series of 5 entries that look at using logs to trace web application vulnerabilities, using the OWASP Top 10 Vulnerabilities as a base. Logs may not catch all of the OWASP top 10, but there are 5 that you can use logs to look for — and, through periodic review, ensure that your web applications are not being hacked. This is the intro. Hope you enjoy them.

-Ananth

100 Log Management Uses #27 Printer logs

Back from my vacation and back to logs and log use cases! Here is a fairly obvious one — using logs to manage printers. In this video, we look at the various events generated on Windows and what you can do with them.

-Ananth

Logs and forensics, a lesson in compliance and more

How logs support data forensics investigations

Novak and his team have been involved in hundreds of investigations employing data forensics.  He says log data is a vital resource in discovering the existence, extent and source of any security breach.  “Computer logs are central and pivotal components to any forensic investigation,” according to Novak.  “They are a ‘fingerprint’ that provides a record of computer and system activities that may demonstrate a data leak or security breach.”  The incriminating activities might include failed login attempts, user and system access, file uploads/downloads, database access or manipulation, access privilege modification, application system transactions, transmission of email messages or attachments, and many other common activities.

In many cases, when logs are set up and configured properly, they can tell the story of the tactics a hacker used during a breach.  They can give insight into how advanced (or not) the hacker is, and provide an understanding of the extent of a breach by showing how long a hacker was inside the confines of the firewall.  “You can see if the unauthorized person has been in your system for five minutes or five months,” explains Novak.

Given the security insight that logs can provide, it’s no surprise that data protection regulations such as the Payment Card Industry Data Security Standard (PCI DSS), the Federal Rules of Civil Procedure (FRCP),  the Sarbanes-Oxley Act (SOX), and the Health Insurance Portability and Accountability Act (HIPAA) all mandate the requirement for logs and log management.  The information captured by logs can be used to help protect sensitive data and to support incident response and forensic analysis in the event of a suspected data breach.

Often it’s these regulations that are driving organizations to become better at log management and event correlation.  In Novak’s experience, however, many organizations do need to improve in their log monitoring and management practices.  “It’s not uncommon to find that companies collect the logs but don’t review them as closely as they should,” says Novak.  “The monitoring of logs in many instances is hampered due to the extensive amounts of good data being captured and the lack of means to properly manage or analyze that data.  As a result, if there is a breach or questionable activity, it may take weeks or months to actually detect it – if it’s detected at all.”  Novak says the lack of logs or log management can increase the cost and length of an investigation substantially.

The dimension of data correlation is critically important in the support of a forensic investigation.  Correlating data from multiple sources provides the means to substantiate other evidence sources, and logs are a good way to do that.  “We use logs to corroborate what is seen in a forensic image or, vice versa, what we see in a forensic image to what we see in logs,” says Novak.

In investigations, it’s common to use logs to play off one another to validate each other.  For example, an environment has firewall, intrusion detection system (IDS), system and application logs.  If they are properly configured, an investigator can go through all the logs and “show” that a hacker got into the network or application at a specific time.  If all the logs aren’t in agreement about the illicit activity, this could be an indication the hacker manipulated one or more of the logs to make it difficult to follow his actions.  By correlating the log data, it’s possible to determine this manipulation.
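
A sketch of that corroboration step, assuming events from each source have been parsed into a common form (the field names and two-minute tolerance are illustrative):

    from datetime import datetime, timedelta

    def corroborate(sources, event_id, tolerance=timedelta(minutes=2)):
        """Check whether every log source saw the event at (roughly) the same time."""
        times = []
        for name, log in sources.items():
            hits = [datetime.fromisoformat(e["ts"]) for e in log if e["id"] == event_id]
            if not hits:
                return f"{name} has no record of {event_id} -- possible tampering"
            times.extend(hits)
        if max(times) - min(times) > tolerance:
            return f"timestamps for {event_id} disagree -- possible tampering"
        return "all sources agree"

    sources = {
        "firewall": [{"id": "login-4711", "ts": "2009-03-02T01:15:02"}],
        "ids":      [{"id": "login-4711", "ts": "2009-03-02T01:15:05"}],
        "app":      [],  # the application log is silent about this event
    }
    print(corroborate(sources, "login-4711"))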

Log data should be viewed and treated like a primary evidence source.  Hopefully it will never be needed to investigate or validate a data breach or hacking incident.  In any event, here are some best practices that can help ensure that log data and log management practices properly support forensic investigations.

  • Have a clear corporate policy for managing logs across the entire organization.
  • Have centralized storage and retention of all logs, with everything in one place and in one format.
  • Ensure the time synchronization of logs to facilitate correlating the data and retrieving data over specific timeframes.
  • Ensure the separation of duties over logs and log management systems to protect from potential internal threats such as a super user or administrator turning off or modifying logs to conceal illicit activity.
  • Always maintain backup copies of logs.
  • Document what is being logged and why, and how the log data is captured, stored and analyzed.  Ensure that 100 percent of log-able devices and applications are captured and the data is unfiltered.
  • Have a defined retention policy that specifies the retention period across the organization for all log data.  Organizations should work with counsel to determine the best time frames and have log data incorporated into an overall data retention policy.
  • Have a defined procedure to follow after an incident.
  • Test the incident response plan, including the retrieval of backup log data from off-site storage.

If an incident or data breach is suspected, there are several steps to take right away:

  • Increase the logging capability to the maximum and consider adding a network sniffer to capture additional detail from network traffic.  In an incident, it’s better to have more data rather than less.
  • Freeze the rotation or destruction of existing logs to prevent the loss of potential evidence.
  • Get backup copies of the logs and make sure they are secure.
  • Deploy a qualified investigations team to determine the situation.

With the right care and feeding, data logs can provide solid forensic evidence in the event of a security breach or data loss.  Analyzing the logs may not make for an exciting TV drama, but it can be rewarding nonetheless.

Brian Musthaler, CISA – is a Principal Consultant with Essential Solutions Corp. A former audit and information systems manager, he directs the firm’s evaluations and analysis of enterprise applications, with a particular interest in security and compliance tools.

Industry News

Conficker worm arms itself to steal and spam
The Conficker/Downadup worm is on the move again. After a relatively uneventful April 1, on which the worm began widening the number of Web sites that it scanned for instructions, a new Conficker variant has emerged and appears to be preparing to spam and steal information.

Did you know? EventTracker is the only SIEM solution that comes integrated with a powerful change and configuration monitoring solution that detects zero-day attacks and helps prevent costly damage from new, emerging threats.

A lesson in compliance from the chemical industry
Events occurring in the U.S. chemical-manufacturing industry, specifically those relating to security guidelines being enforced by the federal government, are likely foreshadowing what’s next in line for other industries.

Did you know? EventTracker provides support for the broadest set of compliance requirements among SIEM/Log Management vendors. Customizable reports and active defense in depth ensure that companies are able to comply with constantly evolving and new regulations.

In poor economy, more IT pros could turn to e-crime
In an annual security survey, sixty-six percent of respondents felt that out-of-work IT workers would be tempted to join the criminal underground, driven in part by threats to bonuses, job losses and worthless stock options.

Did you know?  EventTracker detects in real-time suspicious activity that often precedes a security breach, and enables instant remediation before costly data theft occurs.

Some thoughts on SAAS

A few months ago I wrote some thoughts on cloud security and compliance. The other day I came across an interesting article in Network World about SaaS security, and it got me thinking on the subject again. The Burton analyst quoted, Eric Maiwald, made some interesting and salient points about the challenges of SaaS security, but he stopped short of explicitly addressing compliance issues. If you have a SaaS service and you are subject to any one of the myriad compliance regulations, how will you demonstrate compliance if the SaaS app is processing critical data subject to the standard? And is the vendor passing a SAS-70 audit going to satisfy your auditors and free you of any compliance requirement?

Mr. Maiwald makes a valid point that you have to take care in thinking through the security requirements and put them in the contract with the SaaS vendor. The same holds true for any compliance requirement. But he raises an even more critical point when he states that SaaS vendors want to offer a one-size-fits-all offering (rightly so, or else I would put forward we would see a lot of belly-up SaaS vendors). My question then becomes: how can an SME that is generally subject to compliance mandates, but lacks the purchasing power to negotiate a cost-effective agreement with a SaaS vendor, take advantage of the benefits such services provide? Are we looking at one of those chicken-and-egg situations where the SaaS vendors don’t see the demand because the very customers they would serve are unable to use their service without this enabling technology?

At the very least, I would think that SaaS vendors would benefit from putting in the same audit capabilities that other enterprise application vendors are, and making them available (maybe for a small additional fee) to their customers. Perhaps it could be as simple as user and admin activity auditing, but it seems to me a no-brainer – if a prospect is going to let critical data and services go outside their control, they are going to want the same visibility as they had when those resided internally, or else it becomes a non-starter until the price is driven so far down that reward trumps risk. Considering we will likely see more regulation, not less, in the future, that price may well be pretty close to zero.

– Steve Lafferty

Log Monitoring – real time or bust?

As a vendor of a log management solution, we come across prospects with a variety of requirements — consistent with a variety of needs and views of approaching problems.

Recently, one prospect was very insistent on “real-time” processing. This is perfectly reasonable, but as with anything, when taken to an extreme it can be meaningless. In this instance, the “typical” use case (indeed the defining one) for the log management implementation was: “a virus is making its way across the enterprise; I don’t have time to search or refine or indeed for any (slow) user action; I need instant notification and the ability to sort data on a variety of indexes instantly”.

As vendors we are conditioned to think “the customer is always right,” but I wonder if the requirement is reasonable or even possible. Given the specifics of a scenario, I am sure many vendors can meet the requirement — but in general? Not knowing which OS, which attack pattern, or how logs are generated and transmitted?

I was reminded again by this blog by Bejtlich in which he explains that “If you only rely on your security products to produce alerts of any type, or blocks of any type, you will consistently be “protected” from only the most basic threats.”

While real-time processing of logs is a perfectly reasonable requirement, retrospective security analysis is the only way to truly understand attack patterns, and therefore to mount a defense.

 Ananth

100 Log Management uses #26 MS debug logs-Part II

Today we continue our earlier look at Microsoft debug logs, this time examining logs from the Time and Task Scheduler services.

-By Ananth

100 Log Management uses #25 MS debug logs

MS debug logs: pretty arcane stuff, but sysadmins occasionally need to get deep into OS services such as Group Policy to debug problems in the OS. Logging for most of these services has to be turned on in the registry, as there is generally a performance penalty. We are going to look at a few examples over the next couple of days. Today we look at logs that are important on some older operating systems, while next time we look at services such as Time and Task Scheduler that are most useful on later Windows versions.

-By Ananth

100 Log Management uses #24 404 errors

Today’s log tip is a case of a non-obvious, but valuable, use of log collection. Web server logs provide lots of good information for web developers; today we look at some of the interesting information contained in 404 errors.
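If you want to poke at the data yourself, here is a minimal sketch in Python, assuming an Apache-style combined-format log named access.log (both the file name and the format are assumptions; adjust the pattern for IIS or other servers):

    import re
    from collections import Counter

    # combined log format: host ident user [time] "request" status bytes ...
    LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)[^"]*" (\d{3})')

    not_found = Counter()
    with open("access.log") as f:
        for line in f:
            m = LINE.search(line)
            if m and m.group(2) == "404":
                not_found[m.group(1)] += 1

    # top offenders: broken internal links, stale bookmarks and
    # vulnerability scans all show up here
    for path, hits in not_found.most_common(20):
        print(f"{hits:6d}  {path}")

Repeated 404s against paths like /admin are a classic sign of scanning, while a spike on a single page usually means a link somewhere went stale.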

-By Ananth

The blind spot of mobile computing, detecting a hack attempt and more

Overcoming the blind spot of mobile computing

For many organizations, mobile computing has become a strategic approach to improve productivity for sales professionals, knowledge workers and field personnel.  As a result, the Internet has become an extension of the corporate network.  Mobile and remote workers use the Internet as the means to access applications and resources that previously were only available to “in-house” users – those who are directly connected to the corporate network.

Managing laptops and other portable devices such as smart phones and PDAs can be a real challenge for any organization. Because these devices aren't continuously connected to the corporate network in a secure manner, they pose a large security risk. Once a mobile device is disconnected from the network, IT operations has limited visibility into the device. For example, it's difficult to tell if the device has its firewall engaged, the anti-virus signatures are up to date, or the operating system has all the necessary security patches. What's more, a disconnected device can't "phone home" to provide the central systems management application with its log and intrusion detection system data.

Further exacerbating this challenge is the vast array of mobile devices with their unique mobile operating systems.  Depending on the manufacturer and brand, PDAs and smart phones use everything from Windows Mobile to Symbian OS.  Other popular mobile operating systems in play today include BlackBerry OS, Mac OS X, Palm OS, and various flavors of mobile Linux.  Moreover, the devices have diverse and often proprietary event logs.  It’s almost pure chaos for the IT department that is anxious to receive operational information from the devices to know if there are security events that can pose a risk to the individual devices, or worse, to the corporate network when the devices do connect again.  Unfortunately, there are no common methods of collecting, consolidating and reviewing these mobile device logs today.

Stephen Northcutt, president of the SANS Technology Institute, says this lack of mobile device log data creates a blind spot in the overall detective controls provided by log analysis.  This blind spot is a critical issue during forensic analysis when attempting to determine the source of an actual data breach or even in determining if attempts have been made to hack or corrupt a mobile device.

Without log data, organizations will have reduced situational awareness and difficulty in supporting device and application status reporting, the troubleshooting of problems with applications and equipment, incident response, and forensic investigation.

Knowing that you will not have this situational awareness of what is happening to mobile devices when they are not connected to the network, what can be done to improve the security of mobile devices and the data they hold?

First of all, recognize that log data management and analysis is just one part of a "controlled" mobility strategy and the overall IT system of internal controls, albeit an important one. While a continuous feed of log data from mobile devices would be ideal, all is not lost without it. When these devices do connect to the network, you can retrieve whatever log data is available and capable of being read in order to collect information on the software, hardware and security applications located on the mobile devices. This information can be used to support your compliance requirements, if nothing else. You can show, for example, that a group of laptops all had a personal firewall and anti-virus software, and that the anti-virus DAT files were updated at a certain time.

Second, assess the risks associated with the data and devices that you are attempting to protect.  It should be part of an organization’s overall data protection process to identify data which is critical or sensitive and to develop and implement the appropriate policies and procedures concerning the use and care of that data.  Where mobile computing is concerned, the biggest risks are when the information is in motion (i.e., moving to/from the outside world via the Internet) or at the endpoints of the network (i.e., on mobile PCs, on USB devices, on external drives, or on other highly mobile devices such as smart phones and PDAs).

Third, implement strong preventative controls that assure secure communications, force encryption of sensitive data, and provide automated processes to manage the mobile platform.   There are numerous mobile device management products and services you can use to apply timely security patches and software updates; prevent an infected device from attaching to the network; back up or encrypt sensitive information; ensure that corporate policies are enforced, and so on.

By taking these and other steps dictated by your unique business risk, your organization can feel more comfortable about its mobile computing security posture, as well as its ability to demonstrate that the mobile devices connected to the enterprise network comply with corporate security policies both while they are on and while they are off the network.

Brian Musthaler, CISA – is a Principal Consultant with Essential Solutions Corp. A former audit and information systems manager, he directs the firm’s evaluations and analysis of enterprise applications, with a particular interest in security and compliance tools.

Industry News

Get it free: Full-featured search engine for all log data
…A tip for any systems administrator who has had to dig through old log files, searching for clues about an event that happened on the network. Maybe it was a server configuration change, or an intrusion attempt, or a hardware device sending signals that it’s about to fail.

Workers stealing company data
Six out of every ten employees stole company data when they left their job last year, according to a study of US workers; 24% could still access data after leaving the company.

Did you know? EventTracker’s advanced user activity and USB monitoring provides in-depth protection from internal theft or inadvertent data loss without clamping down on normal usage.

Heartland breach as bad as Tylenol poisonings?
Heartland Payment Systems stock (HPY) was hit hard in the wake of what is being described as the biggest single breach of consumer and financial data security ever. The company issued statements Friday (1/23) in an effort at damage control in which the CEO compares the potential industry-wide impact of the breach to none other than that of the Tylenol poisonings of some twenty-five years ago that nearly brought down the drug maker.

Did you know? EventTracker detects in real-time suspicious activity that often precedes a security breach, and enables instant remediation before costly data theft occurs.

Considering a SIEM solution? Read this first
Cutting through SIEM vendor hype: SIEM solutions are optimized for different use cases, and one size never fits all. The good news is that with the number of potential solutions to choose from, if you do your homework, you will find a product that meets your requirements.

Prism Microsystems named finalist in the 2009 CODiE awards
EventTracker recognized as a top performer in the data security category; finalist selection was made from over 850 nominations submitted by 600 companies.

100 Log Management uses #23 Server shutdown

Today we look at monitoring server shutdowns. Typically I would recommend setting up an alert in your log management solution that notifies you immediately if any critical production server is shut down or restarted, but even for non-critical servers it is wise to check occasionally what is going on. I do it on a weekly basis. Servers shutting down can happen normally (Windows Update, maintenance, etc.), but shutdowns can also indicate crashes, instability in the machine, or someone simply screwing around; by eyeballing a short report (it should be short) you will be able to quickly spot any odd patterns.

100 Log Management uses #22 After hours login

Today we use logs to do a relatively easy check for unusual activity, in this case after-hours log-ons. If your organization is mostly day shift, for example, your typical users will not be logging in after hours, and if they are, it is something worth checking out. This kind of simple analysis is a quick and easy way to look for unusual patterns of activity that could indicate a security problem.
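As a rough sketch of the same check in Python, assuming the logon events have been exported to a CSV file (the file name, column names, event IDs and timestamp format below are all assumptions; a log management product would normally produce this report for you):

    import csv
    from datetime import datetime

    BUSINESS_HOURS = range(7, 19)        # 07:00-18:59 counts as day shift
    LOGON_IDS = {"528", "4624"}          # successful logon (pre-Vista / Vista and later)

    with open("security_events.csv") as f:
        for row in csv.DictReader(f):    # assumed columns: TimeGenerated, EventID, User
            if row["EventID"] not in LOGON_IDS:
                continue
            when = datetime.strptime(row["TimeGenerated"], "%Y-%m-%d %H:%M:%S")
            if when.hour not in BUSINESS_HOURS or when.weekday() >= 5:
                print(f"after-hours logon: {row['User']} at {when}")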

-By Ananth

100 Log Management uses #21 File deletes

Today’s use case is a good one. Windows makes it hard and resource-expensive to track file deletes, but there are certain directories (in our case, our price and sales quote folders) from which files should never be deleted. By making use of Object Access Auditing and a good log analysis solution, you can pull a lot of valuable information from the logs that points to unwarranted file deletions.
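Here is a minimal sketch of that filter in Python. It assumes the audit events have been exported to CSV and that the export already carries the resolved file name (in raw Windows auditing, the delete event 4660 has to be correlated with a 4663/560 handle event to recover the name); the share paths, file name and columns are all assumptions:

    import csv

    DELETE_IDS = {"564", "4660"}        # object deleted (pre-Vista / Vista and later)
    WATCHED = (r"\\fileserver\quotes", r"\\fileserver\pricing")   # hypothetical protected shares

    prefixes = tuple(w.lower() for w in WATCHED)
    with open("object_access.csv") as f:
        for row in csv.DictReader(f):   # assumed columns: TimeGenerated, EventID, User, ObjectName
            if row["EventID"] in DELETE_IDS and row["ObjectName"].lower().startswith(prefixes):
                print(f"{row['TimeGenerated']}  {row['User']} deleted {row['ObjectName']}")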

– By Ananth

Famous Logs

The Merriam-Webster dictionary defines a log as “a record of performance, events, or day-to-day activities”. Though we think of logs in the IT context, over the years many famous logs have been written. Here are some of my favorites:

Dr. Watson, who logged the cases of Sherlock Holmes

The Journals of Lewis and Clark, one of the greatest voyages of discovery in human history.

The Motorcycle Diaries: Notes on a Latin American Journey

The fictional Prof. Pierre Aronnax, who chronicled the fantastic travels of Capt. Nemo in Jules Verne’s 20,000 Leagues Under the Sea

The Diary of a Young Girl by Anne Frank, a vivid, insightful journal and one of the most moving and eloquent documents of the Holocaust.

Personal logs from captains of the Enterprise (Kirk, Picard, Janeway).

The diary of Samuel Pepys, the renowned 17th-century diarist who lived in London, England.

The record kept by Charles Darwin of his voyage on HMS Beagle

Bridget Jones’s Diary by Helen Fielding

Ananth

100 Log Management uses #20 Solaris BSM system boots

Today is another Solaris BSM example. The Basic Security Module of Solaris audits all system boots, and it is good practice to have checks in place to ensure that these critical systems are only being restarted at the correct times. Any unexpected activity is something that should be investigated.

– By Ananth

100 Log Management uses #19 Account Management

Today’s look at logs illustrates a typical use case: reviewing for unexpected behavior. Within Active Directory you have users and groups that are created, deleted and modified. It is always a good idea to go in and review the activities of your domain admins, just to be sure they match what you feel should be occurring. If anything differs, it is something to investigate further.

– By Ananth

100 Log Management uses #18 Account unlock by admin

Today we look at something a little different: reviewing admin activity for unlocking accounts. Sometimes a lockout occurs simply because a user has fat fingers, but often accounts are locked on purpose, and the unlocking of one of these should be reviewed to see why it was done.

100 Log Management uses #17 Monitoring Solaris processes

The Solaris operating system has some interesting daemons that warrant paying attention to. Today’s log use case examines monitoring processes such as sendmail, auditd and sadm, to name a few.

Security threats rise in recession; comply, secure and save with Log Management

How LM / SIEM plays a critical role in the integrated system of internal controls

Many public companies are still grappling with the demands of complying with the Sarbanes-Oxley Act of 2002 (SOX). SOX Section 404 dictates that audit functions are ultimately responsible for ensuring that financial data is accurate. One key aspect of proof is the absolute verification that sufficient control has been exercised over the corporate network where financial transactions are processed and records are held.

Where do auditors find that proof? In the data points logged by today’s SIEM tools, of course.

The logged data is a pure treasure trove of information that provides insight into every aspect of an organization’s information technology (IT) operations. As a compensating/detective control, the data is an integral part of an organization’s overall system of internal controls. Moreover, depending on the tools being utilized, the data can also be the starting point of a preventative control.

The proper distillation of critical log data is a bit like looking at a very large haystack and helping the auditor determine whether a needle (i.e., a violation of a control) is buried within. Some perspective on what guides the audit function as it pertains to SOX helps explain the search for this elusive needle, if it even exists.

The COSO control framework guides the SOX audit function

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) is a U.S. private-sector initiative whose major objective is to identify the factors that cause fraudulent financial reporting and to make recommendations to reduce its incidence. In 1992, COSO established a common definition of internal controls, standards and criteria against which companies and organizations can assess their control systems. This widely used framework provides a corporate governance model, a risk model and control components that together form the blueprint for establishing internal controls that minimize risk, help ensure the reliability of financial statements, and comply with various laws and regulations.

COSO is a general framework that is not specific to the IT area of a company, or to any other functional area, for that matter. However, the COSO framework can be, and often is, applied specifically to IT processes and controls that are governed by SOX Section 404 compliance, the Assessment of Internal Control for all controls related to financial data and reporting.

According to the COSO framework, internal controls consist of five interrelated components. These components are derived from the way management runs a business and are integrated with the organization’s management processes. The components are: the Control Environment, Risk Assessment, Control Activities, Information and Communication, and Monitoring. And, as described below, log management has a crucial role in each of them.

  • The Control Environment – Coming from the Board of Directors and the executive management, a company’s control environment sets the tone of how the organization will conduct its business, thereby influencing the control consciousness of the entire workforce. The control environment provides discipline and structure, and includes factors such as corporate integrity, ethical values, management’s operating style, delegation of authority systems, and the processes for managing and developing people in the organization.

Log management aids corporate management in designing, implementing, and refining controls via its ability to establish a baseline, or snapshot, of an organization’s IT infrastructure and its activities; for example, knowing what devices exist, what applications are running on them, and who is accessing the applications.

  • Risk Assessment – Every organization has business objectives; for example, to produce a product or provide a service. Likewise, every organization faces a variety of risks to meeting its objectives. The risks, which come from both internal and external sources, must be identified and assessed. This risk assessment process is a prerequisite for determining how the risks should be managed.

Log data/management is a starting point of the iterative IT risk management process by providing baseline and near real-time insight into the condition of an organization’s infrastructure. This helps the company identify and assess the risks that may threaten the business objectives and provides the opportunity for the revision of an organization’s acceptable risk posture. And then with a continual feed, log data can be used to ascertain current conditions and to alert someone to the need for appropriate corrective action to mitigate a risk if one arises.

  • Control Activities – Control activities are the policies and procedures that help ensure management directives are carried out and that necessary actions are taken to address the risks to achieving the organization’s objectives. Control activities occur throughout the organization, at all levels and in all functions. Numerous control activities are utilized in the IT area, including access control, change control and configuration control, to name a few.

Log management provides automated event correlation/consolidation and reporting, thereby providing assurance that log data entries are presented to control stakeholders accurately and in a timely fashion. This reporting allows management to take corrective action if needed, as well as measure the effectiveness of designed processes and controls.

  • Information and Communication – Information systems play a key role in internal control systems as they produce reports including operational, financial and compliance-related information that make it possible to run and control the business. An effective communication system ensures that useful information is promptly distributed to the people who need it – outside as well as inside the organization – so they can carry out their responsibilities.

Within log management, this takes the form of automated generation and delivery of detail and summary reports and alerts of key events for appropriate management review and/or action.

  • Monitoring – Internal control systems need to be monitored – a process that assesses the quality of the system’s performance over time. This is accomplished through ongoing monitoring activities, separate evaluations or a combination of the two.

From a log manager’s view, “monitoring” is what he is doing on a daily basis – i.e., performing a “control activity.” From the COSO view, “monitoring” is the assessment of how well the control activities are performing. In other words, the latter is looking over the shoulder of the former to make sure the control activities are effective.

Once an organization has established its control structure(s), an auditor is charged with the independent review of the controls that have been implemented. He is ultimately responsible for assessing the effectiveness of the controls, including those IT controls designed to protect the accuracy and reliability of financial data. This is the heart of SOX Section 404.

A unified and comprehensive log management approach will continue to be the cornerstone of an IT organization’s control processes. It is the best way to get timely insight into all activities on the network that have a material impact on all systems, including financial systems.

Brian Musthaler, CISA – is a Principal Consultant with Essential Solutions Corp. A former audit and information systems manager, he directs the firm’s evaluations and analysis of enterprise applications, with a particular interest in security and compliance tools.

Industry News

PCI costs slow compliance projects in down economy
The economic recession is making it difficult for some information security pros in financial services to get the funding they need to accomplish their goals. A good example of a project that can help both the bottom line and PCI compliance is automated log management.

Security threats rise in recession
Threats to data and network security increase during tough times, even as scarce resources make companies more vulnerable to attack.

Did you know? EventTracker allows you to meet a large number of requirements while helping you cut costs and boost productivity. Comply with standards such as PCI-DSS, secure critical servers, protect against insider theft and optimize IT operations, all while saving money. Need hard numbers? Take a look at our ROI calculator.

Feds allege plot to destroy Fannie Mae data
A fired Fannie Mae contract worker pleaded not guilty Friday to a federal charge he planted a virus designed to destroy all the data on the mortgage giant’s 4,000 computer servers nationwide.

Did you know? Employees, especially disgruntled ones, can significantly increase the risk exposure of a company. EventTracker helps companies minimize this risk by tracking and alerting on all unusual/unauthorized user activity.

Prism Microsystems continues record revenue into 4th quarter
We had a great 4th quarter – get a recap of our performance and key product innovations in 2008

100 Log Management uses #16 Patch updates

I recorded this on Wednesday, the day after Patch Tuesday, so fittingly we are going to look at using logs to monitor Windows Updates. Not being up to date on the latest patches leaves security holes open, but with so many machines and so many patches it is often difficult to keep up with them all. Using logs helps.

100 Log Management uses #15 Pink slip null

Today’s topic is a depressing one, but certainly a sign of the times. When companies are going through reductions in force, IT is called upon to ensure that the company’s IP is protected. This means that personnel no longer with the company should no longer have access to corporate assets. Today we look at using logs to monitor for any improper access.
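A minimal sketch of the check in Python, assuming HR can supply a list of separated employees and that logon events are available as a CSV export (the file names, columns and event IDs are assumptions):

    import csv

    # one account name per line, supplied by HR (hypothetical file)
    with open("terminated_users.txt") as f:
        terminated = {line.strip().lower() for line in f if line.strip()}

    LOGON_IDS = {"528", "540", "4624"}   # interactive / network logons, old and new IDs

    with open("security_events.csv") as f:
        for row in csv.DictReader(f):    # assumed columns: TimeGenerated, EventID, User
            if row["EventID"] in LOGON_IDS and row["User"].lower() in terminated:
                print(f"ALERT: separated user {row['User']} logged on at {row['TimeGenerated']}")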

-Ananth

100 Log Management uses #14 SQL login failure

Until now, we have been looking mostly at system, network and security logs. Today we shift gears and look at database logs, more specifically user access logs in SQL Server.
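For a quick offline look, here is a Python sketch that tallies failed logins straight from the SQL Server ERRORLOG text file (error 18456 is the generic login failure; the path and encoding are assumptions, as recent versions write the file as UTF-16):

    import re
    from collections import Counter

    FAILED = re.compile(r"Login failed for user '([^']+)'")

    failures = Counter()
    with open("ERRORLOG", encoding="utf-16", errors="ignore") as f:
        for line in f:
            m = FAILED.search(line)
            if m:
                failures[m.group(1)] += 1

    for user, count in failures.most_common():
        flag = "  <-- possible brute force" if count > 10 else ""
        print(f"{count:5d}  {user}{flag}")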

-By Ananth

100 Log Management uses #13 Firewall traffic analysis

Today we stay on the subject of firewalls, and Cisco PIX devices in particular. We’ll look at using logs to analyze trends in your firewall activity so you can quickly spot anomalies.
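One simple way to see the trend is to bucket connection-built messages by hour. Here is a Python sketch assuming PIX syslog output collected into a flat file (the file name and the choice of messages are assumptions):

    import re
    from collections import Counter

    # e.g. "Jan 28 14:03:52 fw01 %PIX-6-302013: Built outbound TCP connection ..."
    STAMP = re.compile(r"^(\w{3} +\d+ \d{2}):\d{2}:\d{2}")

    per_hour = Counter()
    with open("pix_syslog.log") as f:
        for line in f:
            if "%PIX-6-302013" in line or "%PIX-6-302015" in line:  # TCP / UDP connections built
                m = STAMP.match(line)
                if m:
                    per_hour[m.group(1)] += 1

    # crude text histogram; an unexplained spike at 3 a.m. is exactly the anomaly you want to see
    for hour in sorted(per_hour):
        print(f"{hour}h  {'#' * min(per_hour[hour] // 100, 60)}  ({per_hour[hour]})")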

-By Ananth

100 Log Management uses #12 Firewall management

Today’s and tomorrow’s posts look at your firewall. There should be few changes to your firewall and even fewer people making those changes. Changing firewall permissions is likely the easiest way to open up the most glaring security hole in your enterprise. It pays to closely monitor who makes changes and what the changes are, and today we’ll show you how to do that.

-By Ananth

100 Log Management uses #11 Bad disk blocks

I often get the feeling that one of these days I am going to fall victim to disk failure. Sure, most times the data is backed up, but what a pain. And it always seems as though the backup was done right before you made those modifications yesterday. Monitoring bad disk blocks is an easy way to get an early indication that you have a potential problem. Today’s use case looks at this activity.
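On a Unix host, a first cut can be as simple as scanning syslog for the disk driver’s complaints. A Python sketch, with the log path and message patterns as assumptions (they vary by OS and driver; on Windows you would watch for disk event ID 7, "has a bad block", instead):

    import re

    PATTERNS = [re.compile(p, re.IGNORECASE)
                for p in (r"bad block", r"medium error", r"i/o error", r"sector .* error")]

    with open("/var/log/messages") as f:
        for line in f:
            if any(p.search(line) for p in PATTERNS):
                print(line.rstrip())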

– By Ananth

100 Log Management uses #10 Failed access attempts

Today we are going to look at a good security use case for logs: reviewing failed attempts to access shares. Sometimes an attempt to access directories or shares is simply clumsy typing, but often it is an attempt by internal users or hackers to snoop in places they have no need to be.

100 Log Management uses #9 Email trends

Email has become one of the most important communication methods for businesses — for better or worse! Today we look at using logs from an ISP mail service to get a quick idea of overall trends and availability. Hope you enjoy it.

-By Ananth

100 Log Management uses #8 Windows disk space monitoring

Today’s tip looks at using logs for monitoring disk usage and trends. Many Windows programs (SQL Server, for example) count on certain amounts of free space to operate correctly, and when a Windows machine runs out of disk space it often handles the condition in a less than elegant manner. In this example we will see how reporting on free disk space and its trend gives you a quick and easy early warning system to keep you out of trouble.
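A bare-bones version of the check is easy to sketch in Python; the drive letters and warning threshold below are assumptions, and a real early-warning system would persist these samples over time so the trend is visible:

    import shutil
    from datetime import date

    DRIVES = ["C:\\", "D:\\"]            # hypothetical volumes to watch
    WARN_PCT = 15                        # warn when free space drops below 15%

    for drive in DRIVES:
        usage = shutil.disk_usage(drive)
        free_pct = usage.free * 100 // usage.total
        status = "WARN" if free_pct < WARN_PCT else "ok"
        print(f"{date.today()} {drive} {free_pct}% free [{status}]")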

100 Log Management uses #7 Windows lockout

A couple of days ago we looked at password resets; today we are going to look at something related: account lockouts. This is something that is relatively easy to check. You’ll see many caused by fat fingers, but when you start seeing lots of lockouts, especially admin lockouts, it is something to be concerned about.

-Ananth

Learning from Walmart

H. Lee Scott, Jr. is the current CEO of Walmart. On Jan 14, 2009, he reflected on his nine-year tenure as CEO as a guest on the Charlie Rose show.

Certain basic truths, that we all know but bear repeating, were once again emphasized. Here are my top takeaways from that interview:

1) Listen to your customers, listen harder to your critics/opponents, and get external points of view. Walmart gets a lot of negative press, and new store locations often generate bitter opposition from some locals. However, the majority (who vote with their dollars) would appear to favor the store. Walmart’s top management team, who consider themselves decent and fair business people with an offering that the majority clearly prefers, were unable to understand the opposition. Each side retreated to its trenches and dismissed the other. Scott described how members of the board, with external experience, were able to get Walmart management to listen carefully to what the opposition was saying and, with dialog, help mitigate the situation.

2) Focus like a laser on your core competency. Walmart excels at logistics, distribution and store management, the core business of retailing. It is, however, a low-margin business. With its enormous cash reserves, should Walmart go into other areas, e.g. product development, where margins are much higher? While it’s tempting, remember “Jack of all trades, master of none”? 111th Congress?

3) Customers will educate themselves before shopping. In the Internet age, expect everybody to be better educated about their choices. This means, if you are fuzzy on your own value proposition and cannot articulate it well on your own product website, then expect to do poorly.

4) In business, get the 80% stuff done quickly. We all know that the first 80% goes quickly; it’s the remaining 20% that is hard and gets progressively harder (Zeno’s Paradox). After all, more than 80% of code consists of error handling. While that 20% is critical for product development, it’s the big 80% done quickly that counts in business (and in government/policy).

The fundamentals are always hard to ingrain – eat in moderation, exercise regularly and all that. Worth reminding ourselves in different settings on a regular basis.

Ananth

100 Log Management uses #6 Password reset

Today we look at password reset logs. Generally the first thing a hacker does when hijacking an account is to reset the password. Any reset is therefore worth investigating, more so multiple password resets on the same account.

-By Ananth

100 Log Management uses #5 Outbound Firewall traffic

A couple of days ago we looked at monitoring incoming firewall traffic. In many cases outbound traffic is as much of a risk as incoming. Once hackers penetrate your network, they will try to gather information through spyware and attempt to get that information out. Outbound connections also often chew up bandwidth; file sharing is a great example of this. We had a customer who could not figure out why his network performance was so degraded; it turned out to be an internal machine acting as a file-sharing server. Looking at the logs uncovered it.
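To see who your top outbound talkers are, a few lines of Python over a firewall log export will do; the file name and CSV layout below are assumptions:

    import csv
    from collections import Counter

    talkers = Counter()
    with open("fw_outbound.csv") as f:
        for row in csv.DictReader(f):    # assumed columns: src_ip, dst_ip, dst_port, action
            if row["action"].lower() in ("permit", "built"):
                talkers[row["src_ip"]] += 1

    # a workstation outranking your mail or proxy servers here deserves a look;
    # file sharing and spyware both announce themselves this way
    for ip, conns in talkers.most_common(15):
        print(f"{conns:7d}  {ip}")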

By Ananth

100 Log Management uses #4 Solaris BSM SU access failure

Today brings a change of platform: we are going to look at how to identify superuser access failures on Solaris BSM systems. It is critical to watch for SU login attempts, since once someone is in at the SU or root level, the keys to the kingdom are in their pocket.
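The BSM audit trail itself is binary and needs praudit to read, but Solaris also keeps the plain-text /var/adm/sulog, which gives a quick approximation. A Python sketch (in sulog, a '-' in the fourth field marks a failed su attempt):

    # e.g.  SU 01/28 14:22 - pts/3 jdoe-root
    with open("/var/adm/sulog") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 6 and parts[3] == "-" and parts[5].endswith("-root"):
                date_, time_, tty, who = parts[1], parts[2], parts[4], parts[5]
                print(f"failed su to root: {who.split('-')[0]} on {tty} at {date_} {time_}")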

-By Ananth

100 Log Management uses – #3 Antivirus update

Today we are going to look at how you can use logs to ensure that everyone in the enterprise has received their automatic antivirus update. One of the biggest security holes in an enterprise is individuals who don’t keep their machines updated, or who turn auto-update off. In this video we will look at how you can quickly identify machines that are not updated to the latest AV definitions.
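Given an export of hosts and their current definition versions, a short Python sketch can flag the laggards; the file name, CSV layout and integer version format are all assumptions:

    import csv

    versions = {}
    with open("av_status.csv") as f:
        for row in csv.DictReader(f):           # assumed columns: host, dat_version
            versions[row["host"]] = int(row["dat_version"])

    # treat the newest version seen anywhere in the fleet as the target
    latest = max(versions.values())
    for host, ver in sorted(versions.items(), key=lambda kv: kv[1]):
        if ver < latest:
            print(f"{host}: DAT {ver} (latest seen: {latest})")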

-By Ananth

100 Log Management uses – #2 Active Directory login failures

Yesterday we looked at firewalls; today we are shifting gears and looking at leveraging logs from Active Directory. Hope you enjoy it.

– By Ananth

100 Log Management uses – #1 Firewall blocks

…and we’re back, with use case #1: firewall blocks. In this video, I will talk about why it’s important not just to block undesirable connections but also to monitor traffic that has been denied entry into your network.
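To get a feel for what monitoring denied traffic can tell you, here is a Python sketch that summarizes PIX-style deny messages by source and destination port (the log file name and exact format are assumptions):

    import re
    from collections import Counter

    # e.g. "%PIX-4-106023: Deny tcp src outside:203.0.113.9/4312 dst inside:10.0.0.5/445 ..."
    DENY = re.compile(r"Deny (\w+) src [\w-]+:([\d.]+)/\d+ dst [\w-]+:[\d.]+/(\d+)")

    blocked = Counter()
    with open("pix_syslog.log") as f:
        for line in f:
            m = DENY.search(line)
            if m:
                blocked[(m.group(2), m.group(3))] += 1

    # one source hammering many ports looks like a scan; many sources on one port looks like a worm
    for (src, port), hits in blocked.most_common(20):
        print(f"{hits:6d}  {src} -> port {port}")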

By Ananth

100 uses of Log Management – Series

Here at Prism we think logs are cool, and that log data can provide valuable intelligence on most aspects of your IT infrastructure – from identifying unusual patterns that indicate security threats, to alerting on changes in configuration data, to detecting potential system downtime issues, to monitoring user activity. Essentially, Log Management is like a Swiss Army knife or even duct tape — it has a thousand and one applications.

Over the next 100 days, as the new administration takes over here in Washington DC, Ananth, the CEO of Prism Microsystems, will present the 100 most critical use-cases of Log Management in a series of videos focusing on real-world scenarios.

Watch this space for more videos, and feel free to rank and comment on your favorite use-cases.

By Ananth

The IT Swiss army knife EventTracker 6.3 and more

Log Management can find answers to every IT-related problem

Why can I say that? Because I think most problems get handled the same way. The first stage is someone getting frustrated with a situation. They then use tools to analyze whatever data is accessible to them. From this analysis they draw some conclusions about the answer to the problem, and then they act. Basically, finding answers to problems requires the ability to generate intelligence and insight from raw data.

IT-related problems are no different. The only twist is that IT problems are growing in number, size and complexity at a faster rate than the budgets and resources targeted at them, even during good economic times. This means a lot of people (from CIOs to CFOs to security and operations managers) are frustrated with the situation. However, they lack a solution designed to analyze raw data and report the intelligence and insight needed to draw conclusions. What they need is a cost-effective way to find answers in the available data.

The case for log management
Given this backdrop, it is fairly straightforward to see the logic behind my article title:
Step 1: Logs are a source of raw data for IT
Step 2: Log management solutions can make it easier to extract intelligence from IT data
Step 3: IT managers can use extracted intelligence to find answers to problems

Logs are a record of what a system is doing minute by minute. Each system log by itself is only mildly interesting (usually only to a technician when troubleshooting a problem). However, the aggregate of all logs contains more treasure than a Nicolas Cage movie. With the right search, query and reporting tools this raw data can turn into detailed understanding of most aspects of your business, from how consumers use your systems to purchase goods, to how the company’s risk profile changes over time, to how bottlenecks slow automated workflows, to identifying unusual patterns that indicate security threats.

The raw data for all of this understanding is already there. It is distributed on every IT asset with a log file because log files often contain electronic traces of interactions between assets and between users and assets. By examining these traces you can see patterns, by understanding patterns you can draw conclusions and plan actions. That is what it means to be proactive. That is what it means to work smarter not harder.

However, turning gold ore (IT logs) into gold treasure (actionable answers) requires the ability to search, query, report on and analyze the vast and restless sea of data generated by the IT assets running your business’s operations. With that capability in place, it becomes a matter of applying it to the specific scenario at hand.

The gold coins for IT Operations include answers to questions such as:
• Have there been any unauthorized configuration changes? With this answer staff can act to prevent service outages, data leaks, SLA penalties and compliance issues.
• How many VMs are deployed right now and who owns them? With this answer staff can act to increase resource utilization and minimize capital costs.
• How is the new load-balancing policy actually allocating workloads? With this answer staff can act to ensure capacity is allocated according to business priorities.

For security teams, the treasure chest contains real-time gems and forensic jewels. Since enterprise environments are getting more complex and more dynamic, it is difficult to rapidly investigate cause and effect during a crisis without automated correlation of configuration changes and the events logged by systems, applications and network infrastructure. Forensic analysis of IT data allows staff to test potential answers to the “how do we prevent this from happening again” question, such as changing an operational policy, adding a new configuration check, or implementing a new correlation rule.

Compliance officers can swim away with multiple gold medals because most analysts believe more regulations are coming, even if their computing environment remains relatively unchanged over the next 18 months. These new regulations are likely to involve analyzing and reporting the same raw IT data different ways to answer questions about:
• The integrity of systems, applications and processes,
• The ability to differentiate between good and bad interactions between systems and between employees and systems,
• The process for preventing and mitigating unauthorized changes, etc.

The effort involved in answering those management, security and governance questions could be days’ worth of remotely accessing systems and copying data into spreadsheets, or it could be a mouse-click to view a dashboard or report generated by a log management solution. Similarly, each group could purchase a separate solution to generate its intelligence treasure, or all could use an enterprise-wide solution flexible enough to address their critical needs in each area. It’s up to the company to decide by focusing on its needs.

Get started by focusing on critical needs
Financial crises tend to cut through the hazy grind of daily business operations and to focus people on critical needs. This global credit crunch is no different. For business executives, the two critical needs are:

  • protecting what they have by keeping service performance stable while lowering operational costs; and
  • adapting to unexpected situations and problems by increasing business agility while lowering risk management costs.

For business technologists, the two critical needs are meeting those business demands and holding onto their jobs.

The margin for error is very slim. Businesses that allow service performance to disintegrate during tough times or take risky actions to deal with market fluctuations, unexpected service problems or malicious attacks rarely make it through economic downturns in any shape to compete effectively in the future. Typically, survivor companies do not cut costs blindly. Instead they use tough times as a mandate for projects that dramatically improve the competitive value of their staff’s daily activities.

There is only one way to do that when your business services and competitiveness are IT-dependent – skyrocket productivity with a proactive approach to managing, securing and governing technology assets delivering business services and agility. Since there can be hundreds of technology assets per business employee, the only way operations, security and compliance staff can become more proactive is to get better intelligence, knowledge and insight.

This brings us right back to where we started. Having better intelligence is a key part of dealing with every IT-related issue and every additional demand that business executives challenge IT to meet without increasing its staff. Therefore, it is time to get IT intelligence (aka log management) solutions off of the wish list and into the hands of the staff that need it.

Jasmine Noel is founder and partner of Ptak, Noel & Associates.  With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion.  Send any comments, questions or rants to jnoel@ptaknoelassociates.com

Industry News

Lock down that data
Another example of the insider threat to personally identifiable information has surfaced. In December, an employee in the human resources department of the Library of Congress was charged with conspiring to commit wire fraud for a scheme in which he stole information on at least 10 employees from library databases.

Did you know? EventTracker not only enables insider threat detection, but also provides a complete snapshot of a user’s activity, including application usage, printer activity, idle time, software installs/uninstalls, failed and successful interactive/non-interactive logins, changes in group policy, deleted files, websites visited, USB activity and more, to deter unauthorized access.

In the Vault
When it comes to protecting financial info, IT security professionals can never rest on their laurels. These organizations must adopt new technologies, ramp up online banking options, and deal with employee turnover. That’s why these firms continually need to review the security measures in place.

Did you know? EventTracker provides you with scheduled or on-demand reviews of security measures allowing you to proactively address potential weaknesses in security controls, while reacting to security incidents.

EventTracker melds Smart Search with Advanced SIEM capabilities
Best-of-both-worlds solution combines free-form, intuitive searching with intelligent analytics, correlation, mining and reporting in one turn-key package

What’s new in EventTracker 6.3 ? 
Free-form Google-like search, user profiling and more… Watch the video for detailed information.

Extreme logging or Too Much of a Good Thing

Strict interpretations of compliance policy standards can lead you up the creek without a paddle. Consider two examples:

  1. From PCI-DSS comes the prescription to “Track & monitor all access to network resources and cardholder data”. Extreme logging is when you decide this means a db audit log larger than the db itself plus a keylogger to log “all” access.
  2. From HIPAA 164.316(b)(2) comes the Security Rule prescription to “Retain … for 6 years from the date of its creation or the date when it last was in effect, whichever is later.” Sounds like a boon for disk vendors and a nightmare for providers.

Before you assault your hair follicles, consider:
1) In clarification, Visa explains “The intent of these logging requirements is twofold: a) logs, when properly implemented and reviewed, are a widely accepted control to detect unauthorized access, and b) adequate logs provide good forensic evidence in the event of a compromise. It is not necessary to log all application access to cardholder data if the following is true (and verified by assessors):
– Applications that provide access to cardholder data do so only after making sure the users are authorized
– Such access is authenticated via requirements 7.1 and 7.2, with user IDs set up in accordance with requirement 8, and
– Application logs exist to provide evidence in the event of a compromise.”

2) The Office of the Secretary of HHS waffles when asked about retaining system logs; this can reasonably be interpreted to mean that the six-year standard need not be taken literally for all system and network logs.

Ananth