Detecting Zeus, Logging for incident response, and more

Logging for Incident Response: Part 1 – Preparing the Infrastructure

Of all the uses for log data across the spectrum of security, compliance, and operations, using logs for incident response presents a truly universal scenario: you can be forced to use logs for incident response at any moment, whether you are prepared or not. An incident response (IR) situation is one where having as much log data as possible is critical. You might not use it all, and you might have to work hard to find the proverbial needle in the haystack of logs; still, having reliable log data from all systems, affected and unaffected, is indispensable in a hectic post-incident environment.

The security mantra “prevention-detection-response” still defines most of the activities of today’s security professionals. Each of these three components is known to be of crucial importance to the organization’s security posture. However, unlike detection and prevention, response is impossible to avoid. While it is not uncommon for an organization to have weak prevention and nearly non-existent detection capabilities, it will often be forced into response mode by attackers or their evil creations: malware. Even in cases where ignoring the incident might be the chosen option, the organization will implicitly follow a response plan, even if that plan is as ineffective as doing nothing.

In this paper, we will focus on how to “incident-response-proof” your logging: how to prepare your logging infrastructure for incident response. The previous six articles focused on specific regulatory issues, and it is not surprising that many organizations are doing log management just to satisfy compliance mandates. Still, technology and processes implemented for PCI DSS or other external mandates are incredibly useful for other purposes such as incident response. On top of this, many of the same regulations prescribe solid incident response practices (for additional discussion, see “Incident management in the age of compliance”).

Even though a majority of incidents are still discovered by third parties (see the Verizon Breach Report 2010 and other recent research), it is clear that organizations should still strive to detect incidents in order to limit the damage stemming from extensive, long-term compromises. On the other hand, even for incidents detected by third parties, the burden of investigation, and thus of using logs to figure out what happened, falls on the organization itself.

We have therefore identified two focal points for use of logs in incident response:

  • Detecting incidents
  • Investigating incidents

Sometimes the latter use case is called “forensics,” but we will stay away from that label since we would rather reserve the term “forensics” for legal processes.

Incident Response Model and Logs
While incidents and incident response will happen whether you want them to or not, a structured incident response process is an effective way to reduce the damage suffered by the organization. The industry-standard SANS incident response model organizes incident response into six distinct stages:

  • Preparation includes tasks that need to be done before the incident: from assembling the team, training people, and collecting and building tools, to deploying additional monitoring and creating processes and incident procedures
  • Identification starts when the signs of an incident are seen and then confirmed, so that an incident is declared
  • Containment is important for documenting what is going on, quarantining the affected systems, as well as possibly taking systems offline
  • Eradication is preparing to return to normal by evaluating the available backups, and preparing for either restoration or rebuilding of the systems
  • Recovery is where everything returns to normal operation
  • Follow-Up includes documenting and discussing lessons learned, and reporting the incident to management

Logs are extremely useful not just for identification and containment, as mentioned above, but for all phases of the incident response process. Specifically, here is how logs are used at each stage of the IR process:

  1. Preparation: logs help us verify controls (for example, review login success and failure history), collect normal usage data (learn what log messages show up during routine system activity), and establish a baseline (create log-based metrics that describe such normal activity)
  2. Identification: logs containing attack traces, other evidence of a successful attack, or insider abuse are pin-pointed, or alerts might be sent to notify about an emerging incident; also, a quick search and review of logs helps to confirm an incident, etc.
  3. Containment: logs help us scope the damage (for example, firewall logs show which other machines display the same scanning behavior in case of a worm or spyware infestation), and learn what else is lost by looking at logs from other systems that might contain traces similar to the one that is known to be compromised, etc.
  4. Eradication: while restoring from backups, we also need to make a backup of logs and other evidence: preserving logs for the future is required, especially if there is a risk of a lawsuit (even if you don’t plan to sue, the other side might)
  5. Recovery: logs are used for confirming the restoration and then measures are put in place to increase logging so that we have more data in case it happens again; incident response will be much easier next time
  6. Follow-Up: apart from summarizing logs for a final report, we might use the incident logs for peaceful purposes: training the new team members, etc.

As a result, the IT infrastructure has to be prepared for incident response logging way before the first signs of an incident are spotted.

Preparing the Infrastructure
In light of predominantly third-party incident discovery, the incident response process might need to be activated at any moment, whenever notification of a possible incident arrives. From this point onward, the security team will try to contain the damage and investigate the cause of the attack or abuse based on initial clues. Having logs will allow an organization to respond better and faster!

What logs need to be collected for effective IR? This is very simple: any and all logs from networks, hosts, applications, and other information systems can be useful during response to an incident. The same applies to context data: information about users, assets, and vulnerabilities will come in handy during the panic of incident response. As noted above, having as much log data as possible will allow your organization to effectively investigate what happened and have a chance of preventing its recurrence in the future.

Specifically, make sure that the following log sources have logs enabled and centrally collected:

  • Network Devices – routers and switches
  • Firewalls – including firewall modules in other Network Devices
  • IDS, IPS and UTM devices – while firewalls are ubiquitous and can create useful logs, IDS/IPS alerts add a useful dimension to the IR process
  • Servers running Windows, Unix and other common operating systems; logging should cover all server components, such as web servers, email servers, and DNS servers
  • VPN logs are often key to IR since they can reveal who was accessing your systems from remote locations
  • Web proxies – these logs are extremely useful for tracking “drive-by downloads” and other web malware attacks
  • Database – logs from RDBMS systems contain records indicating access to data as well as changes to database systems
  • Applications ranging from large enterprise applications such as SAP to custom and vertical applications specific to a company

Detailed discussion of logging settings on all those systems goes beyond the scope of this paper and might justify not just reading a document, but engaging specialty consultants focused on logging and log management.

Tuning Log Settings for Incident Response
What logs should be enabled on the systems covered above? While “log everything” makes for a good slogan, it also makes log analysis a nightmare by mixing together more relevant log messages with debugging logs which are used much less often, if at all. Still, many logging defaults should be changed as described below.

A typical Unix (Solaris, AIX, etc.) or Linux system will log the following into syslog: various system status and error messages, local and remote login/logout, some program failures, and system start/stop/restart messages. What will not be found are logs tracking access to files, running processes, and configuration changes. For example, to log file access on Linux, one needs to use the kernel audit facility, not plain default syslog.
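
As an illustrative sketch (the watched paths and key names are examples only, not a recommendation for any particular environment), file-access logging via the Linux kernel audit facility is configured with audit rules such as:

```shell
# Example auditd rules (e.g., appended to /etc/audit/audit.rules)
# Watch a sensitive file for reads, writes, and attribute changes:
-w /etc/passwd -p rwa -k passwd_access
# Watch a configuration directory for writes and attribute changes:
-w /etc/ -p wa -k config_change
```

Matching events can later be retrieved with the audit search tools (for example, filtering by the `-k` key), which provides exactly the kind of file-access trail that default syslog does not.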

Similarly, on Windows systems the Event Log will contain a plethora of system status and error messages, login/logout records, account changes, as well as system and component failures. To have more useful data for incident response, one needs to modify the audit policy to start logging access to files and other objects.
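
As a hedged example (the exact subcategory names vary by Windows version), object-access auditing can be enabled from an elevated command prompt with the built-in auditpol utility:

```shell
:: Example only: enable success and failure auditing of file system objects
auditpol /set /subcategory:"File System" /success:enable /failure:enable
```

Note that events are only produced for files that also carry an auditing entry (SACL) in the object’s security settings.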

Most web servers (such as Apache and Microsoft IIS) will record access to web resources located on the server, as well as access errors. Unlike the OS platforms, there is no pressing need for more logging, but one can modify the web server configuration (for example, httpd.conf for Apache) to log additional details such as referrer and browser type.
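
For instance, Apache’s standard “combined” format already captures the referrer and user-agent; a sketch of the relevant httpd.conf directives (the log path is illustrative) looks like:

```apache
# Define a format that records referrer and browser (user-agent) details
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
# Write access records using that format
CustomLog logs/access_log combined
```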

Databases such as Oracle and MS SQL Server log painfully little by default, even though the situation is improving in recent database versions such as Oracle 11g. With older databases, you have to assume you will have no database logs unless you enabled them during the incident preparation stage. A typical database will log only major errors, restarts, and some administrator access, but will not log data access or changes to data and database structures.

Firewalls typically log denied or blocked connections, but not allowed connections, by default; as our case study showed, logs of allowed connections are among the most indispensable for incident response. Follow the directions for your firewall to enable such logging.

VPN servers will log connections, user logins/logouts, and errors; default logging will generally be sufficient. Making sure that successful logins, not just failures, are logged is one of the important preparation tasks for VPN concentrators.

Network IDS and IPS will usually log their alerts, various failures, and user access to the sensor itself; the only additional type of “logging” available is recording full packet payloads.

Implementing Log Management
Log management tools that can collect massive volumes of diverse log data without issues are hugely valuable for incident response.  Having a single repository for all activity records, audit logs, alerts, and other log types allows incident responders to quickly assess what was going on during an incident, and what led to a compromise or insider abuse.

After logging is enabled and configured for additional details and additional logged events, the logs have to be collected and managed to be useful for incident response. Even if a periodic log review process is not in place, the logs have to be available for investigations. Following the maturity curve, even simply having logs is a huge step forward for many organizations.

When organizations start collecting and retaining logs, the question of retention policy comes to the forefront. Some regulations give specific answers: PCI DSS, for example, mandates storing logs for one year. However, determining proper log storage for incident response can be more difficult. One year is still a good rule of thumb for many organizations, since investigating incidents more than one year after they happened is likely to be relatively uncommon, but it is certainly possible, so longer retention periods such as three years may be useful.

In the next paper, we will address how to start reviewing logs for discovering incidents, and also how to review logs during incident response. At this point, we have made a huge step forward by making sure that logs will be around when we really need them!

Even though compliance might compel organizations to enable logging, deploy log management, and even start reviewing logs, incident response scenarios allow the value of logs to truly manifest itself.

However, in order to use logs for incident response, the IT environment has to be prepared; follow the guidance and tips from this paper in order to “IR-proof” your logging infrastructure. A useful resource to jumpstart your incident response log review is the “Critical Log Review Checklist for Security Incidents,” which is available in various formats.

About Author

Dr. Anton Chuvakin is a recognized security expert in the field of log management and PCI DSS compliance. He is an author of the books “Security Warrior” and “PCI Compliance” and a contributor to “Know Your Enemy II” and the “Information Security Management Handbook”; he is now working on a book about computer logs. Anton has published dozens of papers on log management, correlation, data analysis, PCI DSS, and security management. His blog is one of the most popular in the industry.

In addition, Anton teaches classes (including his own SANS class on log management) and presents at many security conferences across the world; he recently addressed audiences in the United States, UK, Singapore, Spain, Russia and other countries. He works on emerging security standards and serves on the advisory boards of several security start-ups.

Currently, Anton is building his security consulting practice, focusing on logging and PCI DSS compliance for security vendors and Fortune 500 organizations.  Dr. Anton Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously, Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the world about the importance of logging for security, compliance and operations. Before LogLogic, Anton was employed by a security vendor in a strategic product management role. Anton earned his Ph.D. degree from Stony Brook University.

 Previously on EventSource: Logging for FISMA Part 1 and Part 2

Logs vs Bots and Malware Today

Despite the fact that the security industry has been fighting malicious software (viruses, worms, spyware, bots and other malware) since the late 1980s, malware still represents one of the key threats for organizations today. While the silly viruses of the 1990s and the noisy worms (Blaster, Slammer, etc.) of the early 2000s have been replaced by commercial bots and so-called “advanced persistent threats,” the malware fight rages on.

In this month’s newsletter article, we take a look at using log data to understand and fight malicious software in your organization.

The first question we have to address is why we are even talking about using logs in this context when we have had dedicated “anti-virus” security software for nearly 30 years. One of the dirty secrets of the security industry is that the effectiveness of traditional anti-virus software has been dropping over the last few years. Estimates place anti-virus software effectiveness at 30 to 50 percent at best, which means that 50 to 70 percent of malicious software present on today’s computers is not detected automatically by leading anti-virus tools. These estimates are hotly debated, as there is no single consistent methodology for testing anti-virus software. Whatever the estimates, heavily customized malware will almost always be missed, and therefore needs to be detected using other means. Such malware has become much more common now that criminals have found a lucrative business in stealing bank credentials, card numbers, and other valuable information from consumers and businesses alike.

As a result, other technologies have to step in to help antivirus tools in their mission: stopping the spread of malicious software. Log data provides information about system and network activities that can be used to look for machines behaving “under the influence” of malicious software.

Logs to Fight Malware

So, how can we use logs to fight malicious software?

Let’s start with firewall logs. They can help reveal connectivity patterns from the network to the outside world, serving as proof that one system connected (in the case of a successful connection message) or tried to connect (in the case of a failed or blocked connection message) to another system. This is very useful for establishing the path of the malware within your organization’s infrastructure, from the initial infection to the subsequent spreading of that infection.

Along the same lines, firewall logs and network flow data can serve as proof of a lack of connectivity: firewall logs showing blocked connections, not followed by a successful attempt, prove that the malware was unable to connect to its “headquarters,” and that sensitive data was most likely not stolen after being acquired by the malware. These logs are vital and provide very useful information when assessing the cost and impact of a malware incident, assuming your firewall logs are being collected by your log management tool.

Logs can also help you detect malware-initiated scans: combining multiple hits on the firewall into a single pattern (a scan) gives us information about malware spread and reconnaissance activities. SIEM tools can create alerts upon seeing such a pattern in logs. Typically, if you see a scan by an internal system that hits (or tries to hit) a large number of external systems, you have an infected system inside your perimeter. On the other hand, spyware sometimes has its own log signatures, such as multiple attempts to connect to a small set of systems over port 80 or a high TCP port. In fact, one can match firewall logs against known “blacklists” of malware sites; please refer to the SANS Internet Storm Center and other sources for such lists.
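
The scan pattern described above can be sketched in a few lines of code; the record layout and threshold below are illustrative inventions, not taken from any particular firewall:

```python
from collections import defaultdict

# Hypothetical parsed firewall records: (source IP, destination IP)
events = [("10.0.0.5", "203.0.113.%d" % i) for i in range(1, 60)]
events.append(("10.0.0.9", "198.51.100.7"))

SCAN_THRESHOLD = 50  # distinct destinations before a source is flagged

# Group distinct destinations by source address
targets = defaultdict(set)
for src, dst in events:
    targets[src].add(dst)

# An internal source hitting many external systems suggests infection
suspected_scanners = sorted(s for s, d in targets.items() if len(d) >= SCAN_THRESHOLD)
print(suspected_scanners)
```

A SIEM tool would apply the same grouping continuously over incoming logs and raise an alert in near real time.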

Which Logs Are Best?

So, what types of logs are most useful for detecting and fighting malicious software?

As mentioned above, firewall logs are incredibly useful for malicious software tracking – but only as long as outbound connections (successful and blocked) are recorded in logs.

Since modern IDS and IPS devices have signatures for network malware detection, including worms, viruses, and spyware, their logs are useful for learning the impact on infected systems, as well as the number and nature of infection attempts in your environment.

Even looking at the logs from your anti-virus tool can be incredibly helpful for detecting situations where the tool detects the “evil presence” but fails to clean it automatically. A characteristic log message is generated by most major anti-virus vendors’ tools in such circumstances. This log may be your sole indication that the system is infected.

Anti-virus logs are also useful for detecting occurrences where the malware tries to damage the anti-virus tool or interfere with its update mechanism, thereby preventing up-to-date virus signatures from being delivered. Whenever an anti-virus software process dies, a log is created by the system, and reviewing such log records can serve as an early indication of a possible incident, as well as provide key evidence later in the investigation.

Additionally, modern anti-virus software will log when an update is applied, indicating both when an update fails (leaving the system unprotected) and when it succeeds. As a result, the log serves as evidence of the state of your protection. If the machine still has malware despite having updated anti-virus signatures, the malware specimen is probably too new for the AV tool to catch.

Web proxy logs can be used for detecting file uploads and other outbound information transfers via the web initiated by data-stealing malware. Looking for methods and content types in combination with either known suspicious URLs or the user-agent (i.e., web client type) can often reveal spyware infections that are actively collecting data and channeling it out of your environment. Admittedly, well-written spyware can certainly fake the user-agent field, but it is still useful to add to the query above. Proxy logs may also indicate a pattern of activity where a machine shows a set of connections and data uploads in rapid sequence, with attempts to many systems, suggesting malware may be the cause.
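
A toy version of such a proxy-log query might look like this (the records, the client allow-list, and the size threshold are all invented for illustration):

```python
# Hypothetical parsed proxy records: (method, url, user_agent, bytes_sent)
records = [
    ("GET",  "http://example.com/news",      "Mozilla/5.0", 512),
    ("POST", "http://203.0.113.10/up.php",   "wincfg32",    90000),
    ("POST", "http://intranet.example/form", "Mozilla/5.0", 300),
]

KNOWN_AGENTS = ("Mozilla", "Opera")  # coarse allow-list; trivially spoofable
UPLOAD_SIZE = 10000                  # bytes; tune per environment

def looks_like_exfiltration(method, url, agent, size):
    # A large outbound POST from an unfamiliar client string is worth a look
    return method == "POST" and size > UPLOAD_SIZE and not agent.startswith(KNOWN_AGENTS)

flagged = [r for r in records if looks_like_exfiltration(*r)]
print(flagged)
```

In practice the matching would run inside the log management or SIEM tool rather than in a standalone script, but the logic is the same.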

Operating system logs are also useful for malware tracking, since modern operating systems will record software updates and process terminations, both of which can be initiated by malicious software. Even simply logging application launches with process names allows us to match those names against known lists of malware applications, sometimes with surprising and scary results.

Quick Case Study

In one recent case, a regular desktop was seen scanning all over the internal network. This was discovered by analyzing firewall logs and uncovering a spike in volume after the scanning started en masse. The desktop was cut from the network soon after this discovery, and an incident was declared. When the system was investigated, an impressive array of malware was discovered, along with dead anti-virus software, killed by the malware. Logs also helped answer the question “Did it infect anybody else?!” For this purpose, the same firewall logs revealed that no other system manifested such scanning apart from the one investigated. So, it was determined that the scanning campaign did not lead to infections of other systems.


To conclude: nowadays, anti-virus solutions are much more likely to miss malware than they were a few years ago. Logs present a critical piece of information for detecting and investigating infections. Automatically collecting, baselining, and analyzing logs will sometimes result in faster detection than using anti-virus tools alone. By using a log management tool to collect and analyze firewall, IDS/IPS, server, and web proxy logs, you can quickly find evidence of malware activity across systems and networks.

Portable Drives and Working Remotely in Today’s IT Infrastructure

So, Wikileaks announced this week that its next release will be 7 times as large as the Iraq logs. The initial release brought to the top of the global stage a very common problem that organizations of all sizes face: anyone with a USB drive or writeable CD drive can download confidential information and walk right out the door. The reverse is also true: harmful malware, Trojans, and viruses can be placed onto the network, as seen with the Stuxnet virus. These pesky little portable media drives are more trouble than they are worth! OK, you’re right, let’s not cry “the sky is falling” just yet.

But, Wikileaks and the Stuxnet virus aside, how big is this threat?

  • A 2009 study revealed that 59% of former employees stole data from their employers before leaving
  • A recent study in the UK reveals that USB sticks (23%) and other portable storage devices (19%) are the most common devices for insider theft

Right now, there are two primary schools of thought to this significant problem. The first is to take an alarmist approach, and disable all drives, so that no one can steal this data, or infect the network. The other approach is to turn a blind eye, and have no controls in place.

But how does one know who is doing what, and which files are being downloaded or uploaded? The answer is in your device and application logs, of course. The first step is to define your organization’s security policy concerning USB and writeable CD drives:

1. Define the capabilities for each individual user as tied to their system login

  • Servers and folders they have permission to access
  • Allow/disallow USB and writeable CD drives
  • Create a record of the serial numbers of the CD drive and USB drive

2. Monitor log activity for USB drives and writeable CD drives to determine what information may have been taken, and by whom

Obviously, this is like closing the barn door after the horse has left. You will be able to know who did what, and when… but by then it may be too late to prevent any financial loss or harm to your customers.

The ideal solution is to enforce an organization-wide policy that defines the abilities of each individual user, determines who has permission to use the writeable capabilities of the CD drive or USB drive at the workstation, and monitors and controls serial numbers and information access from the server level. Without automation, combing through all of the logs to look for such an event and tracing what happened would seem almost impossible.

With a SIEM/log management solution, this process can be automated, and your organization can be alerted to any event that occurs where the transfer of data does not match the user profile/serial number combination. It is even possible to prevent that data from being transferred by automatically disabling the device. In other words, if someone with a sales ID attempts to copy a file from the accounting server onto a USB drive where the serial number does not match their profile, you can have the drive automatically disabled and issue an incident to investigate this activity. By the same token, if someone with the right user profile/serial number combination copies a file they are permitted to access – something that is a normal, everyday event in conducting business – they would be allowed to do so.
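
The profile/serial matching logic described above can be sketched as follows (the user IDs, serial numbers, and event layout are all hypothetical):

```python
# Hypothetical policy: user ID -> USB serial number assigned to that user
policy = {"jsmith-sales": "SN-1001", "akim-acct": "SN-2002"}

# Hypothetical removable-media events: (user, device serial, file copied)
events = [
    ("akim-acct",    "SN-2002", "q3-ledger.xlsx"),  # permitted combination
    ("jsmith-sales", "SN-9999", "q3-ledger.xlsx"),  # serial not assigned to user
]

def evaluate(user, serial, filename):
    # Mismatched profile/serial combination: disable the device, open an incident
    if policy.get(user) != serial:
        return ("ALERT", user, serial, filename)
    return ("OK", user, serial, filename)

results = [evaluate(*e) for e in events]
print(results)
```

A SIEM would evaluate each device event against the policy as it arrives and trigger the automated response (disable drive, raise incident) on an ALERT.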

This solution prevents many headaches, and will prevent your confidential data from making the headlines of the Los Angeles Times or the Washington Post.

To learn how EventTracker can automate this security initiative for you, click here.

-John Sennott

100 Log Management Uses #68: Secure Auditing on HP-UX

Today we continue our series on secure auditing with a look at HP-UX. I apologize for the brief hiatus; we will now be back on our regular schedule.

Lessons from Honeynet Challenge “Log Mysteries”

Ananth, from Prism Microsystems, provides an in-depth analysis of the Honeynet Challenge “Log Mysteries” and his thoughts on what it really means in the real world. EventTracker’s syslog monitoring capability protects your enterprise infrastructure from external threats.


Log Review for Incident Response: Part 2

Of all the uses for log data across security, compliance and operations (see, for example, LogTalk: 100 Uses for Log Management #67: Secure Auditing – Solaris), using logs for incident response presents a truly universal scenario: you can be forced to use logs for incident response at any moment, whether you are prepared or not. As we discussed in a previous newsletter edition, having as much log data as possible in an incident response (“IR”, or “IH” for incident handling) situation is critical. You might not use it all, and you might have to work hard to find the proverbial needle in the haystack, but having reliable log data from both affected and unaffected systems is indispensable in a hectic post-incident environment.

In the previous edition, we focused on how to “incident-response-proof” your logging, that is, how to prepare your logging infrastructure for incident response. In this article, we address how to start reviewing logs to discover incidents, and how to review logs during an incident response.

Logs play a role at all stages of incident response. They are reviewed under two very different circumstances during the incident response process:

  • Routine periodic log review – this is how an incident may be discovered;
  • Post-incident review – this may happen when initial suspicious activity signs are available, or during a full-blown incident investigation.

If you are looking for a quick answer to log review, the “Simple Incident Log Review Checklist” is available in various formats.

Periodic Log Review: Discover Incidents

The basic principle of periodic log review (referred to as “daily log review” even if it might not be performed daily) is to accomplish the following:

  • Detect ongoing intrusions and incidents (monitoring)
  • Look for suspicious signs that indicate an impending incident (proactive)
  • Find traces of past, unresolved incidents (reactive)

The daily log review is built around the concept of establishing a “baseline”, or learning and documenting the normal set of messages appearing in logs. Baselines are then followed by the process of finding “exceptions” from the normal routine and investigating them to assure that no breach of data has occurred or is imminent.

Build a baseline for each log source type (and sometimes even for individual log sources), because it is critical to become familiar with the normal activities logged on each system and application. Initial baselines can be quickly built using the process described below.

In addition to this “event type” review, it makes sense to perform a quick assessment of the overall log entry volume for the past day (the past 24-hour period). Significant differences in log volume should also be investigated using the procedures defined below. In particular, loss of logging (often recognized from a dramatic decrease in log entry volume) needs to be investigated and escalated as a security incident.
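
A minimal sketch of the volume check (both the counts and the 50% threshold are made-up figures; real thresholds should come from your own baseline):

```python
# Daily log entry counts, as reported by the log management tool (illustrative)
yesterday_count = 48000
today_count = 1200

DROP_RATIO = 0.5  # escalate if today's volume falls below half of yesterday's

# A dramatic drop often means loss of logging: escalate as a security incident
loss_of_logging_suspected = today_count < yesterday_count * DROP_RATIO
print(loss_of_logging_suspected)
```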

Building an Initial Baseline

To build a baseline using a log management tool do the following:

  1. Make sure that relevant logs are aggregated by the log management tool or a SIEM tool – also make sure that the tool can “understand” the logs
  2. Select a time period for an initial baseline ranging from one week at the low end to, ideally, 90 days
  3. Run a report that shows counts for each message type. This report indicates all the log types that are encountered over the baseline period of system operation
  4. Assuming that no breaches of data have been discovered, we can accept the above report as a baseline for “routine operation”

An additional step should be performed while creating a simple baseline: even though we assume that no compromise has taken place, there is a chance that some of the recorded log messages triggered some kind of action or remediation. Such messages are referred to as “known bad” and should be marked as such and not counted toward the normal baseline. System crashes, intrusion events, and unplanned maintenance are examples of such events.
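
The baseline-building steps above, including the exclusion of “known bad” messages, can be sketched as follows (the message type names are invented):

```python
from collections import Counter

# Hypothetical message types extracted from the baseline period's logs
observed = ["login_ok", "login_ok", "cron_run", "login_fail",
            "disk_err", "login_ok", "cron_run"]

# Types that triggered an action or remediation; excluded from "normal"
KNOWN_BAD = {"disk_err"}

# Count every remaining type: this report is the "routine operation" baseline
baseline = Counter(t for t in observed if t not in KNOWN_BAD)
print(sorted(baseline.items()))
```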

How does one actually compare today’s batch of logs to a baseline? Two methods are widely used for log review, and the selection can be made based on the available resources and tools used.

The first method considers only log types not observed before, and can be done manually as well as with tools. Despite its simplicity, it is extremely effective with many types of logs: simply noticing that a new log message type has appeared is typically very insightful for security, compliance, and operations.

For example, if log messages with IDs 1 through 7 are produced every day in large numbers, but a log message with ID 8 is never seen, then each occurrence of message 8 is a reason for an investigation. If it is confirmed that the message is benign and no action is triggered, it can later be added to the baseline.

So, the summary of comparison methods for daily log review is:

  • Basic method:
    • Log type not seen before (NEW log message type)
  • Advanced methods:
    • Log type not seen before (NEW log message type)
    • Log type seen more frequently than in baseline
    • Log type seen less frequently than in baseline
    • Log type not seen before (for particular user)
    • Log type not seen before (for particular application module)
    • Log type not seen before (on the weekend)
    • Log type not seen before (during work day)
    • New user activity noted (any log from a user not seen before on the system)
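Both the basic and the frequency-based comparisons above can be sketched in a few lines; `compare_to_baseline` below is a hypothetical helper that assumes logs have already been reduced to per-type daily counts:

```python
def compare_to_baseline(baseline, today, ratio=3.0):
    """Flag exceptions in today's per-type log counts versus a baseline.

    baseline, today: dicts mapping message type -> daily count.
    ratio: how many times above/below the baseline counts as anomalous.
    Returns a list of (message_type, reason) tuples.
    """
    exceptions = []
    for msg_type, count in today.items():
        base = baseline.get(msg_type)
        if base is None:
            exceptions.append((msg_type, "NEW log message type"))
        elif count > base * ratio:
            exceptions.append((msg_type, "seen more frequently than in baseline"))
        elif count < base / ratio:
            exceptions.append((msg_type, "seen less frequently than in baseline"))
    return exceptions

# IDs 1-7 are routine in this example; ID 8 has never been seen before
baseline = {"ID1": 500, "ID2": 300, "ID7": 40}
today = {"ID1": 510, "ID2": 2000, "ID8": 1}
for msg_type, reason in compare_to_baseline(baseline, today):
    print(msg_type, "->", reason)
```

The advanced per-user, per-module and per-weekday variants follow the same pattern, with counts keyed on (type, user) or (type, module) pairs instead of the type alone.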

While following the advanced method, other comparison algorithms can be used by the log management tools as well. After the message is flagged as an exception, we move to a different stage in our daily workflow – from daily review to investigation and analysis.

Exception Investigation and Analysis: Incident Response

A message that does not fit the profile of a normal log is flagged as an “exception.” It is important to note that an exception is not the same as a security incident, but it might be an early indication that one is taking place. Incident response might start at this stage, or things may simply return to normal.

At this stage, we have an individual log message that is outside of routine/normal operation. The following high-level investigative process is used on each “exception” entry:

  1. Look at log entries that occurred at the same time: this technique involves looking at an increasing range of time periods around the log message that is being investigated. Most log management products allow you to review or search all logs within a specific time frame. For example:
    • Look at other log messages triggered 1 minute before and 1 minute after the “suspicious” log message
    • Now look at other log messages triggered 10 minutes before and 10 minutes after the “suspicious” log message
    • Finally look at other log messages triggered 1 hour before and 1 hour after the “suspicious” log message (if needed – the volume of log messages can be significant)
  2. Look at other entries from the same user: this technique includes looking for other log entries produced by the activities of the same user. It often happens that a particular logged event of user activity can only be interpreted in the context of other activities by the same user. Most log management products allow you to “drill down into” or search for a specific user within a specific time frame.
  3. Look at the same type of entry on other systems: this method covers looking for other log messages of the same type, but on different systems, in order to determine its impact. Learning when the same message was produced on other systems may hold clues to understanding the impact of this log message.
  4. Look at entries from the same source (if applicable): this method involves reviewing all other log messages from the network source address (where relevant).
  5. Look at entries from the same app module (if applicable): this method involves reviewing all other log messages from the same application module or components. While other messages in the same time frame (see item 1. above) may be significant, reviewing all recent logs from the same components typically helps to reveal what is going on.
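Step 1 of the process can be sketched as follows, assuming each log entry has already been parsed into a (timestamp, message) pair; the expanding windows mirror the 1/10/60 minute progression above:

```python
from datetime import datetime, timedelta

def entries_around(entries, suspicious_time, window_minutes):
    """Return the log entries within +/- window_minutes of the suspicious entry."""
    delta = timedelta(minutes=window_minutes)
    return [(t, msg) for t, msg in entries if abs(t - suspicious_time) <= delta]

# A made-up, already-parsed log excerpt
logs = [
    (datetime(2010, 9, 1, 10, 0), "sshd: Accepted password for root"),
    (datetime(2010, 9, 1, 10, 5), "sshd: session opened for user root"),
    (datetime(2010, 9, 1, 12, 0), "cron: nightly job started"),
]
suspicious = datetime(2010, 9, 1, 10, 0)

# Widen the window in steps, stopping when enough context is gathered
for window in (1, 10, 60):
    context = entries_around(logs, suspicious, window)
    print(window, "min window:", len(context), "entries")
```

Steps 2 through 5 are the same filter applied to a different field (user, system, source address, application module) instead of the timestamp.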

After following this process, the impact of the logged event on the organization should become clearer, and further incident response steps can be taken. A detailed discussion of incident response practices is outside the scope of this newsletter.


Even though compliance might compel organizations to enable logging, deploy log management, and even start reviewing logs, an incident response scenario allows the value of logs to truly manifest itself. However, in order to use logs for incident response the organization needs to be prepared – follow the above guidelines and “IR-proof” your logging infrastructure.
Log review, however, needs to happen on an ongoing basis. Build the baseline and then compare events to the baseline in order to detect exceptions. Investigate those exceptions in order to qualify them as incidents, as well as to assess their impact on the organization.

About Author

Dr. Anton Chuvakin is a recognized security expert in the field of log management and PCI DSS compliance.  He is an author of the books “Security Warrior” and “PCI Compliance” and a contributor to “Know Your Enemy II” and the “Information Security Management Handbook”; he is now working on a book about computer logs.  Anton has published dozens of papers on log management, correlation, data analysis, PCI DSS, and security management.  His blog is one of the most popular in the industry.

In addition, Anton teaches classes (including his own SANS class on log management) and presents at many security conferences across the world; he recently addressed audiences in the United States, UK, Singapore, Spain, Russia and other countries.  He works on emerging security standards and serves on the advisory boards of several security start-ups.

Currently, Anton is building his security consulting practice, focusing on logging and PCI DSS compliance for security vendors and Fortune 500 organizations.  Dr. Anton Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously, Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the world about the importance of logging for security, compliance and operations. Before LogLogic, Anton was employed by a security vendor in a strategic product management role. Anton earned his Ph.D. degree from Stony Brook University.


In this fifth installment of the Honeynet Challenge of 2010, EventTracker 7.0 was put to the test and was used by 4 of the top 6 contestants to perform the forensic analysis for this competition. The challenge required participants to discern what had transpired on a virtual server, utilizing all of the logs from this potentially compromised UNIX server.

This challenge was created by the Honeynet Project, an international non-profit organization dedicated to raising awareness of the vulnerabilities and threats that exist on the vast expanse of the worldwide web. This challenge, called “Log Mysteries”, was opened for competition on September 1, 2010, with finalists announced October 26, 2010. Contestants were provided the complete logs from a virtual server, and asked to determine the following:

  1. Was the system compromised and when? How do you know that for sure?
  2. If the (server) was compromised, what was the method used?
  3. Can you locate how many attackers failed? If some succeeded, how many of them were there? How many stopped attacking after the first success?
  4. What happened after the brute force attack?
  5. Locate the authentication logs. Was a brute force attack performed? If yes, how many?
  6. What is the timeline of significant events? How certain are you of the timing?
  7. Anything else that looks suspicious in the logs? Any misconfigurations? Other issues?
  8. Was an automatic tool used to perform the attack? If yes which one?
  9. What can you say about the attacker’s goals and methods?
  10. Bonus. What would you have done to avoid this attack?

Working independently, 4 of the top 6 came to the same correct answers by importing the logs into EventTracker and utilizing its standard functionality to quickly and accurately unravel this mystery. EventTracker accurately discovered invalid log-in attempts from existing users, and most importantly, exposed a brute-force attack on the server.

“Security challenges abound regardless of the network architecture. While the vulnerabilities and attack techniques are platform dependent, EventTracker excels as a platform for log analysis. The ability to have user-defined output and indexed search are especially useful features in such situations,” said A.N. Ananth, CEO, Prism Microsystems.

In an EventTracker-protected environment, administrators would have been notified of this attack, or EventTracker could have been configured by the organization to take remedial action and protect the IT infrastructure from harm.

In the answer to the final question, one contestant utilizing EventTracker 7.0 for their submission provided the following nine steps for companies to avoid brute force attacks:

  1. Hide systems running services such as SSH behind a firewall
  2. Use strong passwords or public-key authentication
  3. Configure SSH servers to use a non-standard port
  4. Restrict access to SSH servers
  5. Utilize Intrusion Detection/Intrusion Prevention (in conjunction with EventTracker 7.0)
  6. Disable Root Access
  7. Use ‘iptables’ to block attacks
  8. Use tcp_wrappers to block attacks
  9. Use EventTracker 7.0 Reports
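Several of these defenses can be complemented by simple log analysis of the kind the contestants performed. The sketch below, using a made-up auth-log excerpt in the usual sshd format, counts failed SSH logins per source address to surface a likely brute-force source:

```python
import re
from collections import Counter

# Hypothetical auth-log excerpt (sshd-style lines, invented for illustration)
AUTH_LOG = """\
Apr 19 05:41:44 app-1 sshd[9941]: Failed password for root from 10.0.0.5 port 4229 ssh2
Apr 19 05:41:47 app-1 sshd[9943]: Failed password for root from 10.0.0.5 port 4231 ssh2
Apr 19 05:41:50 app-1 sshd[9945]: Failed password for invalid user admin from 10.0.0.5 port 4233 ssh2
Apr 19 05:42:02 app-1 sshd[9950]: Accepted password for root from 10.0.0.5 port 4240 ssh2
Apr 19 06:10:11 app-1 sshd[9990]: Failed password for root from 172.16.2.9 port 5001 ssh2
"""

FAILED = re.compile(r"Failed password for .* from (\S+) port")

def brute_force_sources(log_text, threshold=3):
    """Count failed SSH logins per source IP; flag IPs at or above threshold."""
    failures = Counter(FAILED.search(line).group(1)
                       for line in log_text.splitlines()
                       if FAILED.search(line))
    return {ip: n for ip, n in failures.items() if n >= threshold}

print(brute_force_sources(AUTH_LOG))  # -> {'10.0.0.5': 3}
```

Note that 10.0.0.5 is flagged and also appears in an “Accepted password” line right after its failures, the classic signature of a successful brute-force attack.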

EventTracker 7 is here; Detailed FISMA guidance and more

Logging for FISMA Part 2: Detailed FISMA logging guidance in NIST 800-92 and SANS CSC20

The Federal Information Security Management Act of 2002 (FISMA) “requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.”

While some criticize FISMA for being ‘all documentation and no action’, the law emphasizes the need for each “Federal agency to develop, document, and implement an organization-wide program to secure the information systems that support its operations and assets.”  At the very least, the intention of the law is noble indeed.

As we mentioned in the previous newsletter, the law itself does not prescribe any logging, log management or security monitoring since it stays on a high level of policy, planning and risk to federal systems.  In accordance with the law, detailed guidance has been developed by NIST to cover the specifics of FISMA compliance.  The main source for detailed guidance is NIST Special Publication 800-53 “Recommended Security Controls for Federal Information Systems”, now in revision 3, which we covered in the previous issue.  Among other things, the document describes log management controls including the generation, review, protection, and retention of audit records, and steps to take in the event of audit failure.  On top of this, NIST has created a dedicated “Guide to Computer Security Log Management” (NIST 800-92).  The guide states: “Implementing the following recommendations should assist in facilitating more efficient and effective log management for Federal departments and agencies.”

In this newsletter we will offer practical tips on using NIST 800-92 for building a log management program for FISMA compliance and beyond. In addition, we will cover other sources of federal information security guidance related to logs, in particular “Twenty Critical Security Controls for Effective Cyber Defense: Consensus Audit Guidelines” by SANS. One of the top controls, “Critical Control 6: Maintenance, Monitoring, and Analysis of Audit Logs”, is about logging and log management. It maps to NIST 800-53 AU and other controls – in particular “AC-17 (1), AC-19, AU-2 (4), AU-3 (1,2), AU-4, AU-5, AU-6 (a, 1, 5), AU-8, AU-9 (1, 2), AU-12 (2), SI-4 (8)” – that we also covered previously. Unlike the end-to-end coverage of logging in NIST 800-92, which can be overwhelming for the casual reader, the SANS document contains “quick wins” that agencies can follow immediately to ramp up their FISMA efforts.

How can you use the 800-92 document in your organization? First, let’s become familiar with what is inside this 80-page document. The guide starts with an introduction to computer security log management and follows with three main sections:

  • Log Management Infrastructure
  • Log Management Planning
  • Log Management Operational Processes

Indeed, this is the right way to think about any log management project, since organizations usually face challenges first with planning, then with building the logging architecture, and then with ongoing operation – which has to be maintained for as long as the organization exists.

The guide defines log management as “the process for generating, transmitting, storing, analyzing, and disposing of computer security log data.” By the way, keep in mind that security log management must cover both “Logs from Security Applications” (e.g. IPS alerts) and “Security Logs from Applications” (e.g. user authentication decisions from a business application). Focusing on just one is a mistake.

In the area of log management infrastructure, the guide defines three tiers of log management architecture:

  • Log Generation
  • Log Analysis and Storage
  • Log Monitoring

Following this order is simply common sense; but many organizations, unfortunately, start by purchasing an expensive tool without thinking about their logging policy and their uses for logs. Thinking about the needs (what do you want to get from logs?) and the logs (what logs can help you get there?) before thinking about boxes and software will save your organization many headaches.

Log management project planning starts from focusing on a key item – organizational roles. Log management is inherently “horizontal” and touches many areas of an organization. NIST suggests that system and network administrators, security administrators, incident responders, CSOs, auditors and – yes! – even internal application developers be invited to the party. This will help the organization choose and implement the right answer to their log management question.

Next, the real work starts: the creation of a logging policy. It is a common theme that security starts from policy; this strongly applies to logging. According to the guide, such policies need to cover:

  • Log generation: what events are logged, with what level of detail
  • Log transmission: how logs are collected and centralized across the entire environment
  • Log storage and disposal: how and where the logs are retained and then disposed of
  • Log analysis: how the logged events are interpreted and what actions are taken as a result
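As an illustration only (the area names below are ours, not from NIST 800-92), capturing such a policy as structured data makes it easy to verify programmatically that all four areas are actually covered before any tool is configured:

```python
# The four policy areas from the list above, as hypothetical field names
REQUIRED_AREAS = ("generation", "transmission", "storage_disposal", "analysis")

logging_policy = {
    "generation":       "Log authentication, access and change events in detail",
    "transmission":     "Forward all logs to the central collector via syslog",
    "storage_disposal": "Retain 1 year online, archive after; dispose after 3 years",
    # "analysis" intentionally missing, to show the gap check at work
}

def policy_gaps(policy):
    """Return the required policy areas not yet covered by the policy."""
    return [area for area in REQUIRED_AREAS if area not in policy]

print(policy_gaps(logging_policy))  # -> ['analysis']
```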

What does this mean in practical terms? It means that configuring tools should happen only after the policy covering what will be done is created. Goals first, infrastructure choices second! If privacy and other regulations apply on top of FISMA, the legal department should also have its say, however unpalatable that may be to the security team.

After defining the policy and building the systems to implement it, as well as configuring the log sources, the hard work of building a lasting, ongoing program begins. The core of such a program is performing periodic analysis of log data and taking appropriate responses to identified exceptions. Obviously, no external guide can define what is most important to your organization – but hopefully, using this newsletter, NIST, and other guidance, you already have some idea about which logs you care about the most.

On a less frequent basis, the agency will perform tasks related to long-term management of log data. This is a surprisingly hard problem if your log data volume runs into terabytes or more. NIST 800-92 suggests first choosing “a log format for the data to be archived” – original, parsed, etc. It also contains guidance on storing log data securely and with integrity verification, just like PCI DSS.

So, how does NIST 800-92 help you with your FISMA effort?

First, it gives a solid foundation to build a log management program – a lot of other mandates focus on tools, but this contains hugely useful program management tips, all the way down to how to avoid log analyst burnout from looking at too many logs.

Second, you can use the guide to learn about commonly overlooked aspects of log management: log protection, storage management, etc. For example, it contains a few useful tips on how to prioritize log records for review.

Third, it provides you with a way to justify your decisions in the area of log management – even if you don’t work for a  government agency.

At the same time, the guide is mostly about process, and less about bits and bytes. It won’t tell you which tool is the best for you.

In fact, even though NIST 800-92 is not binding guidance outside of the federal government, commercial organizations can profit from it as well. For example, one retail organization built its log management program based on 800-92 even though complying with PCI DSS was their primary goal. They used the NIST guide for tool selection, as a source of template policies and even to assure ongoing operational success for their log management project.

Other technical log management guidance for agencies subject to FISMA is published by SANS in the form of their “Twenty Critical Security Controls for Effective Cyber Defense” or CSC20. If you are looking for quick, actionable tips (called “Quick Wins” in the document), this is the resource for you. For example:

  • “Validate audit log settings for each hardware device and the software installed on it, ensuring that logs include a date, timestamp, source addresses, destination addresses, and various other useful elements of each packet and/or transaction. Systems should record logs in a standardized format such as syslog entries or those outlined by the Common Event Expression (CEE) initiative.”
  • “System administrators and security personnel should devise profiles of common events from given systems, so that they can tune detection to focus on unusual activity… “
  • “All remote access to an internal network, whether through VPN, dial-up, or other mechanism, should be logged verbosely.”
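The first quick win, validating that every record carries the expected fields, is straightforward to automate once logs are parsed; the flat record layout below is hypothetical (a real deployment would validate against syslog or CEE field definitions):

```python
# Fields every record must carry, per the quick win above (illustrative subset)
REQUIRED_FIELDS = ("timestamp", "source", "destination", "event")

def validate_record(record):
    """Return the required fields missing or empty in a parsed log record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

good = {"timestamp": "2010-09-01T10:00:00Z", "source": "10.0.0.5",
        "destination": "192.168.1.2", "event": "connection accepted"}
bad = {"timestamp": "2010-09-01T10:00:01Z", "event": "connection dropped"}

print(validate_record(good))  # -> []
print(validate_record(bad))   # -> ['source', 'destination']
```

Running such a check across a sample of each device’s output quickly shows which log sources need their audit settings fixed.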

It is recommended to use all of NIST 800-53, NIST 800-92 and SANS CSC20 to optimize your FISMA compliance logging project.


To conclude, NIST 800-92 and SANS CSC20 teach us to do the following, whether for FISMA compliance alone, for multi-regulation programs, or simply to improve security and operations:

  • Find the critical systems where logging is essential
  • Enable logging – and make sure that logs satisfy the “good log” criteria mentioned in the standards
  • Involve different teams in logging initiatives – logging cuts horizontally across the agency
  • Look at your logs! You’d be happy you started now and not tomorrow
  • Automate log management, where possible, and have solid repeatable process in all areas

On top of this, NIST 800-92 brings log management to the attention of people who thought “Logs? Let them rot.” Its process guidance is more extensive than its technical guidance, which makes it very useful for IT management and not just for “in the trenches” people, who might already know that there is gold in the logs…

Did you know? In addition to automating compliance with FISMA requirements, EventTracker is the only SCAP-validated SIEM/Log Management solution to automate configuration assessment against the FDCC standard for comprehensive compliance.

FISMA How To; Preview EventTracker 7 and more


The Federal Information Security Management Act of 2002 (FISMA) “requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.”

While some criticize FISMA for being ‘all documentation and no action’, the law emphasizes the need for each Federal agency to develop, document, and implement an organization-wide program to secure the information systems that support its operations and assets.

The law itself does not prescribe any logging, log management or security monitoring since it stays on a high level of policy, planning and risk to federal systems.  In accordance with the law, detailed guidance has been developed by NIST to cover the specifics of FISMA compliance.  For example, the following umbrella page covers how to plan a FISMA project at a federal agency.  In addition to NIST, OMB was tasked with collecting agency reports on compliance – FISMA periodic validation regime.

The main source for detailed guidance is NIST Special Publication 800-53 “Recommended Security Controls for Federal Information Systems”, now in revision 3.  Among other things, the document describes log management controls including the generation, review, protection, and retention of audit records, and steps to take in the event of audit failure.

Let’s review the guidance in detail.

NIST 800-53 Logging Guidance

The “AUDIT AND ACCOUNTABILITY” family (AU controls) starts with AU-1 “AUDIT AND ACCOUNTABILITY POLICY AND PROCEDURES”, which covers “Formal, documented procedures to facilitate the implementation of the audit and accountability policy and associated audit and accountability controls.” This is indeed the right way to approach audit logging: start from the logging policy and the procedures for log collection and review. While audit controls in FISMA go beyond logging, the above guidance is very true for log management.

AU-2 “AUDITABLE EVENTS” refers to NIST 800-92, covered in the next part of the series.  As expected, risk assessment as well as the logging needs of other organizational units need to be considered when creating a list of auditable events. Events that are only audited under “special circumstances”, such as after an incident, are also defined here.

Logically, after the list of events to audit is established, AU-3 “CONTENT OF AUDIT RECORDS” clarifies the level of detail recorded for each event.  Examples that should be in every good log record are provided, such as “time stamps, source and destination addresses, user/process identifiers, event descriptions, success/fail indications, filenames involved, and access control or flow control rules invoked.” Refer to the CEE standard work for further discussion of high-quality logging.
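To make AU-3 concrete, here is a minimal sketch of an application-side helper that emits records carrying the details listed above; the function name and JSON layout are illustrative, not prescribed by NIST:

```python
import json
from datetime import datetime, timezone

def audit_record(event, user, src, dst, outcome, **extra):
    """Build an AU-3 style record: time stamp, source and destination
    addresses, user identifier, event description, success/fail indication."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "user": user,
        "source": src,
        "destination": dst,
        "outcome": outcome,          # "success" or "failure"
    }
    record.update(extra)             # e.g. filenames involved, rules invoked
    return json.dumps(record)

line = audit_record("file_read", "alice", "10.0.0.5", "fileserver-1",
                    "success", filename="/etc/passwd")
print(line)
```

Emitting a structured record like this from every application makes the later AU-6 review and AU-7 report generation far easier than parsing free-form text.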

AU-4 “AUDIT STORAGE CAPACITY” and AU-11 “AUDIT RECORD RETENTION” cover a subject critical for many organizations – log retention.  Unlike PCI DSS, NIST guidance only offers tips for selecting the right retention period rather than a simple answer (like the 1 year prescribed in PCI DSS).

AU-5 “RESPONSE TO AUDIT PROCESSING FAILURES” mandates an important but commonly overlooked aspect of logging and log analysis – you have to act when logging fails. Examples that require action include “software/hardware errors, failures in the audit capturing mechanisms, and audit storage capacity being reached or exceeded” as well as other issues affecting logging.

AU-6 “AUDIT REVIEW, ANALYSIS, AND REPORTING” is about what happens with collected log data.  Specifically, it prescribes that the organization “reviews and analyzes information system audit records for indications of inappropriate or unusual activity” at an “organization-defined frequency.” Again, NIST/FISMA guidance stays away from giving a simple answer (like the daily log reviews in PCI DSS).

AU-7 “AUDIT REDUCTION AND REPORT GENERATION” deals with reporting and summarization, the most common way to review log data.

AU-8 “TIME STAMPS” and AU-9 “PROTECTION OF AUDIT INFORMATION” as well as AU-10 “NON-REPUDIATION” address log reliability for investigative and monitoring purposes. Logs must be accurately timed and stored in a manner preventing changes. One mentioned choice is “hardware-enforced, write-once media.” The use of cryptography is another mentioned method.
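One way to approximate the cryptographic protection mentioned here is a hash chain over log records: each digest covers the record plus the previous digest, so any later modification breaks every subsequent link. A minimal sketch (real deployments would also sign or externally anchor the chain):

```python
import hashlib

START = b"\x00" * 32   # fixed, publicly known starting value

def chain_logs(entries):
    """Hash-chain log entries; returns (entry, hex_digest) pairs."""
    digest = START
    chained = []
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
        chained.append((entry, digest.hex()))
    return chained

def verify_chain(chained):
    """Recompute the chain and compare digests; True if untampered."""
    digest = START
    for entry, expected in chained:
        digest = hashlib.sha256(digest + entry.encode()).digest()
        if digest.hex() != expected:
            return False
    return True

logs = chain_logs(["alice logged in", "config changed", "alice logged out"])
print(verify_chain(logs))                     # -> True
logs[1] = ("config unchanged", logs[1][1])    # tamper with one entry
print(verify_chain(logs))                     # -> False
```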

AU-12 “AUDIT GENERATION” essentially makes sure that the organization “generates audit records for the list of audited events defined in AU-2 with the content as defined in AU-3.”

Next, the logging guidance ends and the security monitoring part begins: AU-13 “MONITORING FOR INFORMATION DISCLOSURE” focuses on information theft (“exfiltration” of sensitive data) and AU-14 “SESSION AUDIT” covers recording and analysis of user activity (“session data”). I often say that logging is largely futile without exception handling and response procedures.

Overall, here is what is likely needed for a successful FISMA-driven log management implementation. The way to address the requirements can vary across organization types, as is the case for all log management projects.

Approach to FISMA Logging

What do you actually need to do? The following distills FISMA/NIST guidance into actionable items that can be implemented and maintained, for as long as FISMA compliance is desired or mandated.

  • Logging policy comes first. But it means nothing without operational procedures, which are developed based on the policy and then put into practice (AU-1)
  • This will likely require configuration changes to multiple types of systems; updates to configuration standards prescribed elsewhere in the document are in order
  • Based on the policy, define which events will be logged (AU-2) and what details will be generated and recorded for each event (AU-3). Start the logging as per AU-12
  • Consider logging all outbound connectivity to detect exfiltration of data (as per AU-13) and make sure that user access sessions are recorded (AU-14)
  • Define log storage methods and retention times (AU-4 and AU-11) and retain the generated logs
  • Protect logs from changes and keep time accurate to preserve the evidentiary power of logs (AU-8, AU-9, AU-10)
  • Also according to policy, implement log review procedures and report generation (AU-6, AU-7). Distribute reports to the parties that should see the information (also as per the policy created in item 1)

At this point, your organization should be prepared for FISMA compliance on both the policy level and the technical level. It is now up to you to maintain that awareness for as long as needed.  A dangerous mistake that some organizations make is to stay at the policy and procedure level and never configure actual systems for logging.  Remember – documents don’t stop malicious hackers, policies don’t help investigate incidents when log data is missing, and talking about “alignment of compliance strategy” does not make you secure – or even compliant…


While FISMA might soon be updated with a new law (“FISMA 2.0”) prescribing continuous monitoring, it is highly likely that logging, log analysis and reporting will still be included. NIST 800-53 audit controls AU-1 to AU-14, as well as other references to audit logging in the document, call for a comprehensive log management and log review program, linked to data leakage detection and security monitoring. We have distilled the FISMA logging requirements into a clear strategy that can be implemented using any log management tool. It is also important to note that while FISMA logging starts from the logging policy and procedures, it must not stop there. Procedures don’t make you secure – diligently following them does! It is also useful to remember that FISMA is not about getting and storing logs, but about getting useful and actionable insight (the AU-6 requirement). In the next part, we will dig deeper into how such a logging program can be defined and implemented, and also how to tie it to the SANS 20 Critical Security Controls for picking the priority items to implement first.

Industry News

NIST releases guide to Security Automation Protocol
NIST has published guidelines for using SCAP to check and validate security settings on IT systems. SCAP is widely used partly because the Office of Management and Budget requires agencies to use SCAP-validated products for checking compliance with Federal Desktop Core Configuration settings.

Did you know? EventTracker is to be certified for NIST SCAP FDCC certification.

The scary side of virtualization
After pushing forward with server virtualization, some IT executives are rethinking the security implications.

Did you know? Learn more about the current opinions and trends on the strategies for securing virtual environments in Prism’s 2010 state of virtualization security survey

10 tactics for securing enterprise data
The 2010 Data Breach Investigations Report reveals that companies are facing threats to their corporate data from more sources than ever before. This article highlights actions you can take starting today to protect your organization from a damaging and costly data breach.

Did you know? Log Management is a best practice for protecting your critical data from both internal and external threats. Learn how you can detect five of the most significant indications that a security breach is being attempted or is under way.

Customer review: MSSP uses EventTracker to monitor disparate environments
Read how JC Hanlon Consulting uses EventTracker to monitor multiple customer environments in real time to detect signs of intrusions and suspicious activity before costly damage is caused.

SC Magazine gives EventTracker 5 stars in product review
“EventTracker can not only provide SIEM functions, such as log monitoring, collection and analysis, but also USB device monitoring, system change management and automatic remediation by taking action to shutdown or restart systems or services based on policy.”

100 Log Management uses #67 Secure Auditing & Solaris

Today we continue our series on Secure Auditing with a look at Solaris and the C2 or BSM (Basic Security Module) option.

Logging for HIPAA Part 2; Secure auditing in Linux

HIPAA Logging HOWTO, Part 2

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery” (HIPAA Act of 1996). A recent enhancement to HIPAA is the Health Information Technology for Economic and Clinical Health Act, or HITECH Act. The act seeks to “promote the adoption and meaningful use of health information technology” and “addresses the privacy and security concerns associated with the electronic transmission of health information” (HITECH Act of 2009).

As we mentioned before (June 2010 EventSource Newsletters), HIPAA itself does not descend to the level of security controls and technologies to implement.  This requires the organizations affected by HIPAA – also known as “covered entities” – to try to follow the spirit of the regulation as opposed to its letter.  What is also interesting to note is that insurance companies and many hospitals that accept payment cards are subject to both HIPAA and PCI DSS (covered in our previous newsletters).  Understandably, the scope of their applicability across the organization might be different since payment processing systems should not store patient health information and vice versa.  Still, considering the same technical and administrative controls for both regulations is prudent and will save money in both the short term and the long term.

The previous newsletter focused on general HIPAA logging and log review processes and platform logging. This newsletter installment covers application logging issues specific to medical applications.

While platform-level logging is useful for protecting sensitive health information, the majority of health information is stored in databases and processed by healthcare-specific applications.  Such applications are either procured from specialty vendors or developed internally, or via outsourced developers.

The HIPAA audit controls, mentioned in Section 164.312(b), apply to application logging as much as or more than to platform logging. This means that custom applications need to be engineered to have adequate logging.  Existing application logging needs to be assessed for adequacy – it should be noted that many legacy applications will often not record sufficient details for events and might even skip logging some events altogether. Thus, before embarking on this project, it makes sense to determine which applications within your organization contain Protected Health Information (PHI) and what their existing levels and methods of logging are.

Let’s define some of the guidance for what to log to satisfy the spirit and letter of the HIPAA Security Rule as well as the NIST 800-66 HIPAA clarifications.

Application Logging Guidance

Before we can define good HIPAA logging, let’s consider typical security uses for log data.

At a high level, the best audit logs tell you exactly what happened – when, where and how – as well as who was involved. Such logs are suitable for manual, semi-automated and automated analysis. Ideally, they can be analyzed without having the application that produced them at hand – and definitely without having the application developer on call.  In the case of healthcare applications, such a developer might not be available at all, and the security team will have to proceed on their own. From the log management point of view, the logs can be centralized for analysis and retention.  Finally, they should not slow the system down and should be provably reliable, if used as forensic evidence.

Two primary things need to be defined.

  • First, there are types of activities or events that always need to be recorded.  For example, authentication decisions, health information access and system changes should always appear in logs.
  • Second, for each type of recorded event there are particular details that are mandatory for its interpretation, whether by a human or by an automated system.  For example, every log should have a reliable time stamp and every log related to user activity should contain the username of that user.

It should also be noted that certain details should never be logged.  One example is obvious: application or system passwords should never appear in logs (this, sadly, still happens with web applications sometimes).  Just as obviously, the health information itself should be kept out of logs.
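As a sketch of how the “never log this” rule can be enforced in code – the sensitive key list and the name=value record format below are illustrative assumptions, not part of any standard – a simple redaction filter can scrub sensitive fields before a record is written:

```python
# Sketch of a log-redaction filter: mask the values of sensitive
# name=value pairs before the record reaches the log. The key list
# and record format are illustrative assumptions.
import re

SENSITIVE_KEYS = {"password", "passwd", "ssn", "diagnosis"}

def redact(record: str) -> str:
    """Replace the values of sensitive name=value pairs with [REDACTED]."""
    def _mask(match):
        key = match.group(1)
        if key.lower() in SENSITIVE_KEYS:
            return f"{key}=[REDACTED]"
        return match.group(0)  # leave non-sensitive pairs untouched
    return re.sub(r'(\w+)=("[^"]*"|\S+)', _mask, record)

print(redact('user=anton password=s3cret action=login'))
# user=anton password=[REDACTED] action=login
```

Applying the filter at the logging chokepoint, rather than trusting every call site, keeps a single mistake from leaking a credential into the audit trail.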

What events to log?

What is the overall theme for selecting which events to log?

Clearly, we need to know who accesses any health information, when, and why.  We also need to know who adds, changes or deletes it.  But this is not all – we also need to note who tries but fails to read, change or delete information.  If we are unable to record access to each piece of data, we need to carefully record all access to the application itself.

Next, we need to know who performs other actions on systems that process health information, as such activities might affect future access to healthcare data.  For example, we need to record if somebody turns logging off or adds a new component to the system which might enable unfettered access to data.  In addition, we need to record other critical events occurring on health information systems, as such events might present circumstantial evidence for unauthorized access.

The following list presents a structured view of the above criteria:

  • Authentication, Authorization, Access
    • Authentication/authorization decisions, successful and failed (see “status” below) – and especially privileged authentication
    • Recording user logoffs is also important for knowing when a user no longer had access to the application
    • Switching from one user account to another
    • System access, data access, application component access
    • Network access to the application, including remote access from one application component to another in a distributed environment
  • Changes
    • System/application changes (especially privilege changes)
    • Data change (creation and destruction are changes too)
    • Application and component installation and updates as well as removals
    • Sensitive data changes, additions and deletions
  • Availability Issues
    • Startups and shutdowns of systems, applications and application modules/components
    • Faults and errors, especially those errors that affect the availability of the application
    • Backup successes and failures (these affect availability)
  •  “Badness” / Threats
    • Invalid inputs and other likely application abuses
    • Malicious software detection events
    • Attempts – successful and failed – to disrupt or disable security controls or logging
    • Logging termination events and possibly attempts to modify or delete the logs
    • Other security issues that are known to affect the application

While creating a comprehensive “what to log” list for every healthcare application in existence is probably impossible, the above list should give you a useful starting point for your relevant applications.  It can be converted into your application logging policy without much extra work.  Please refer to the previous newsletter for guidance on setting up a log monitoring and review process.
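To make the list above concrete, here is a minimal sketch of an application audit helper that forces every recorded event into one of the categories above; the function name, field set and name=value output format are illustrative assumptions, not a prescribed design:

```python
# Minimal audit-event helper; the categories mirror the list above.
# Field names and the name=value output format are illustrative
# assumptions, not a standard.
from datetime import datetime, timezone

CATEGORIES = {"access", "change", "availability", "threat"}

def audit_event(category, action, user, obj, status, **extra):
    """Render one audit record; reject events outside the known categories."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown audit category: {category}")
    fields = {
        "time": datetime.now(timezone.utc).isoformat(),  # reliable timestamp
        "category": category,
        "action": action,
        "user": user,
        "object": obj,
        "status": status,
    }
    fields.update(extra)  # e.g. reason=... for failures
    return " ".join(f"{k}={v}" for k, v in fields.items())

# Example: a failed attempt to read a PHI record
print(audit_event("access", "read", "anton", "patient_record_42",
                  "failed", reason="not_authorized"))
```

Rejecting unknown categories at the call site is one way to keep the logging policy enforceable rather than advisory.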

What details to log?

Next, what data should you log for each event, and at what level of detail should you log it?  The overall theme we use here is the following:

Who was involved?

What happened?

Where did it happen?

When did it happen?

Why did it happen?

How did it happen?

The list below gives you a starting point based on that theme:

Timestamp + time zone: this helps to answer the “when” question; the time zone is essential for distributed applications

System, application or component: this helps to answer the “where” question and needs to provide relevant application context as well

Source: for messages related to network connectivity or distributed application operation, logs also need to answer the “where from” question by providing a network source

Username: this helps to answer the “who” question – for those events that are relevant to user or administrator activities

Action: this helps to answer the “what” question by providing the nature of the event that is recorded in the log

Object: this also helps to answer the “what” question by identifying which system component or other object (such as a user account) has been affected

Status: this also helps to answer the “what” question by explaining whether the action aimed at the object succeeded or failed (other types of status are also possible, such as “deferred”)

Priority: last but not least, every logged event should have an indication of how important it is.  Creating a uniform scale for ranking events by importance is impossible, since different organizations will have different priorities (for example, events affecting availability vs. confidentiality of information might be rated differently), so each organization should define its own scale.

Thus, a useful application audit log message might look like this:

2010/12/31 10:00:01AM GMT+7 priority=3, system=mainserver, module=authentication, source=, user=anton, action=login, object=PHI_database, status=failed, reason=“password incorrect”

By the way, notice that another field is added to the above example log message to explain the reason for the failure. Also notice that the above example is not in XML – as we mentioned above, human readability is a useful property for logs, and computers can deal with name=value pairs just as well as with XML. XML-based Health Level Seven (HL7) messages can easily be converted to text, for those applications that can log in HL7.
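Continuing with that example, here is a rough sketch of how such name=value records can be parsed without the producing application at hand; the quoting convention (values containing spaces are quoted) is an assumption carried over from the sample message:

```python
# Sketch of a parser for the name=value audit format shown above; it
# assumes values with spaces are quoted, as in reason="..." below.
import shlex

def parse_audit(line: str) -> dict:
    """Split a 'timestamp k=v k=v ...' record into a field dictionary."""
    fields = {}
    for token in shlex.split(line):  # shlex keeps quoted values intact
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value.rstrip(",")  # drop trailing separators
    return fields

rec = parse_audit('2010/12/31 10:00:01AM GMT+7 priority=3, system=mainserver, '
                  'module=authentication, user=anton, action=login, '
                  'object=PHI_database, status=failed, reason="password incorrect"')
print(rec["status"], rec["reason"])  # failed password incorrect
```

The point of the exercise: a human can read the record as-is, and a few lines of code can turn it into structured data for automated review.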

We also mentioned above that being able to easily centralize logs is essential for distributed log analysis, either across multiple systems or across multiple components of a distributed application.  While syslog has been the king of log centralization due to its easy UDP delivery, modern cross-platform application frameworks call for a publish/subscribe model for log delivery, similar to the one used in modern Windows versions.  In this case a security monitoring tool can request a subscription for a particular type of logged event – and receive all relevant logs in near real-time, if needed.
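A toy version of that publish/subscribe idea might look like the following; the class and method names are illustrative and do not correspond to any particular framework’s API:

```python
# Toy publish/subscribe log bus: a subscriber registers for an event
# type and receives matching records as they are published.
# All names here are illustrative assumptions.
from collections import defaultdict

class LogBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        """Register a callback for one type of logged event."""
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, record):
        """Deliver a record to every subscriber of its event type."""
        for callback in self._subscribers[event_type]:
            callback(record)

bus = LogBus()
received = []
bus.subscribe("authentication", received.append)  # the monitoring tool's subscription
bus.publish("authentication", "user=anton action=login status=failed")
bus.publish("availability", "module=backup status=ok")  # no subscriber, not delivered
print(received)  # ['user=anton action=login status=failed']
```

Unlike blind UDP syslog forwarding, the subscriber only receives the event types it asked for, which is the property that makes near real-time monitoring practical.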


In addition to that very basic conclusion – you must log access to sensitive healthcare data – we have to remind our readers that the importance of logging will only grow, along with growing application complexity.  In particular, the need to analyze application behavior and the movement of sensitive information across distributed and cloud-based applications calls for us to finally get application logging under control.

Software architects and implementers need to “get” logging – there is NO other way, since infrastructure logging from network devices and operating systems won’t do for detecting and investigating application-level threats to ePHI. The security team – ultimately responsible for log review – will need to guide developers towards useful and effective logging that can be used for both monitoring and investigative activities.

Certainly, logging standards such as MITRE CEE will help – but it might take a few years before they are developed and their adoption increases. Pending a global standard, an organization should quickly build and then use its own application logging standard. HIPAA compliance presents a great motivation for creating and adopting such logging standards.

Next Month: Stay tuned for the first part of a 2-article series on Logging for FISMA by Dr. Chuvakin. Previous articles in the compliance series include Logging for HIPAA Part 1, Logging for PCI Part 1 and Part 2.

Industry News

Pirate Bay hack exposes user booty
Security weaknesses in the hugely popular file-sharing website have exposed the user names, e-mail and Internet addresses of more than 4 million Pirate Bay users… An Argentinian hacker named Ch Russo said he and two of his associates discovered multiple SQL injection vulnerabilities that let them into the user database for the site.

Zeus is back with terrorism-themed spam run
Trojan-laden emails claiming to offer official terrorism information have been hitting inboxes… The emails are spoofed to look like they originate from the U.S. Department of Homeland Security, Pentagon or Transportation Security Administration. Users are encouraged to click on two links, supposedly leading to reports, but which are actually ZIP files containing the insidious Zeus, or Zbot, trojan.

Related Resource: Webinar – Learn how implementing change control in your enterprise can help you handle critical security challenges including BOTnet and zero-day attacks.

Database admin gets 12 months for hacking employer
A former database administrator for Houston’s GEXA Energy was sentenced to 12 months in prison and fined $100,000 for hacking into his former employer’s network. He remotely accessed the GEXA Energy network without authorization, impaired the availability of data and copied a database file containing personal information. GEXA Energy estimates that Kim’s actions resulted in a loss of at least $100,000.

Did you know?  Security violations by insiders are often the hardest to discover, but cause the greatest damage and cost the most to repair. EventTracker helps by monitoring all user and admin activity, automatically detecting policy violations and out-of-the-ordinary or suspicious behavior.

100 Log Management uses #66 Secure Auditing – LAuS

Today we continue our series on secure auditing with a look at LAuS, the Linux Audit-Subsystem secure auditing implementation. Red Hat and openSUSE both have supported implementations, but LAuS is available in the generic Linux kernel as well.

[See post to watch Flash video] -Ananth

HIPAA Logging Howto; New attack bypasses AV protection

HIPAA Logging HOWTO, Part 1

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery” (HIPAA Act of 1996).

In particular, Title II of the law, “Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform”, contains the Security Rule (section 2.3) that covers Electronic Protected Health Information (EPHI) and the Privacy Rule (section 2.1) that covers all Protected Health Information (PHI).

A recent enhancement to HIPAA is called the Health Information Technology for Economic and Clinical Health Act, or HITECH Act. The act seeks to “promote the adoption and meaningful use of health information technology” and “addresses the privacy and security concerns associated with the electronic transmission of health information, in part, through several provisions that strengthen the civil and criminal enforcement of the HIPAA rules” (HITECH Act of 2009).

Unlike PCI DSS, which we covered in previous newsletters, HIPAA itself does not descend to the level of security controls and technologies to implement.  This requires the organizations affected by HIPAA – also known as “covered entities” – to try to follow the spirit of the regulation as opposed to its letter.  What is also interesting to note is that insurance companies and many hospitals that accept payment cards are subject to both HIPAA and PCI DSS. Understandably, the scope of each regulation’s applicability across the organization might be different, since payment processing systems should not store patient health information and vice versa.  Still, considering the same technical and administrative controls for both regulations is prudent and will save money in both the short term and the long term.

The following HIPAA requirements are broadly applicable to logging, log review and security monitoring.

  • Section 164.308(a)(5)(ii)(C) “Log-in Monitoring”  calls for monitoring the systems touching patient information for login and access.  The requirement applies to “login attempts” which implies both failed and successful logins.
  • Section 164.312(b) “Audit Controls” broadly covers audit logging and other audit trails on systems that deal with sensitive health information.  Review of such audit logs seems to be implied by this requirement.
  • Section 164.308(a)(1)(ii)(D) “Information System Activity Review” prescribes review of various records of IT activities such as logs, system utilization reports, incident reports and other indications of security-relevant activities.
  • Other requirements in HIPAA might potentially affect logging as well.

The above reveals that, compared to PCI DSS, the logging and monitoring requirements inside HIPAA itself do not really help companies answer the key questions needed to deploy and operationalize logging and log management – from both a technical and a policy/procedure point of view.

In particular, the following questions are left unanswered:

  • What information should be logged by “audit controls”? What activities and events? What details for each activity or event?
  • Should the log records be centrally collected?
  • For how long should the records be retained?
  • What particular “activities” should be reviewed? How often?
  • How should security monitoring and “log-in monitoring” be performed?
  • How should audit records be protected?

In light of this, it is often observed that HIPAA log collection and review seem to be a perpetual stumbling point for organizations of all sizes. Log requirements can be difficult for some companies, such as organizations with complex systems in place, or small shops that lack the time, money and expertise. And vague guidance does not help the organization get motivated to do logging and log review. On top of this, logging and log review complexity rises dramatically when custom applications – not simply Windows servers or Cisco firewalls – are in scope. Despite the movement away from legacy and custom applications, a lot of medical data still sits inside home-grown applications where logging can be a nightmare to configure.

In addition to the above questions, another issue is unclear: do these controls apply to the actual application that handles sensitive health data, or do they apply to the underlying platform as well?  The next newsletter installment will cover application logging issues specific to medical applications.

Fortunately, some additional details for HIPAA Security Rule implementation are covered in NIST Publication 800-66, “An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule”.

The NIST SP 800-66 guide details log management requirements for securing electronic protected health information, based on the HIPAA Security Rule.

Section 4.1 of NIST 800-66 describes the need for regular review of information system activity, such as audit logs, information and system access reports and security incident tracking reports. The section asks questions (“How often will reviews take place?” and “Where will audit information reside (e.g., separate server)?”) rather than providing answers.

Section 4.15 attempts to provide additional guidance on “audit controls.”  While striving to provide the methodology and questions that implementers need to be asking (such as “What activities will be monitored (e.g., creation, reading, updating, and/or deleting of files or records containing EPHI)?” and “What should the audit record include (e.g., user ID, event type/date/time)?”), the document does not really address key implementation concerns – in other words, it does not tell covered entities what they must do to be compliant.

Also, Section 4.22 specifies that documentation of actions and activities needs to be retained for at least six years – and leaves the discussion of whether security activity records such as logs are considered “documentation” to implementers.

In light of the above ambiguous guidance, what are typical organization actions in response to HIPAA requirements?

A recommended strategy suggests that the company start with an information security activity review policy and process.  Using the guiding questions from NIST 800-66, one can formulate what such a policy should cover: requirement applicability, recorded activities, recorded details, review procedures, exception monitoring process, etc.

Quoting from NIST 800-66:

  • “Who is responsible for the overall process and results?
  • How often will reviews take place?
  • How often will review results be analyzed?
  • What is the organization’s sanction policy for employee violations?
  • Where will audit information reside (e.g., separate server)?”

Next, the organization has to actually implement the above process for both logging and log review.  This would make sure that log records are created on covered systems and have sufficient details (logging). By the way, such details can be borrowed from the corresponding PCI DSS guidance.  Also, it will create the procedures to “regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports” (log review). While daily log reviews are not required, if they are performed for PCI DSS, they can be expanded to cover HIPAA systems as well.

On this, NIST 800-66 advises:

  • “Develop Appropriate Standard Operating Procedures
  • Determine the types of audit trail data and monitoring procedures that will be needed to derive exception reports.
  • How will exception reports or logs be reviewed?
  • Where will monitoring reports be filed and maintained?”

Only then is the organization ready to proceed to the next step and initiate logging and then start ongoing log reviews.

To conclude, even though HIPAA does not provide detailed step-by-step guidance on logging and log management, it gives companies an opportunity to follow the spirit of the regulation and not simply the letter.  Understandably, a few organizations might be waiting for fines and enforcement activity to start before taking any action.  Such a shortsighted approach to logging simply plays into the hands of the “bad guys” – allowing cyber-criminals to steal the most sensitive data any of us will ever have…

The next newsletter will cover how to actually approach medical application logging for HIPAA, including custom and vertical applications.

Related resource: Learn how EventTracker helps you achieve compliance with multiple HIPAA requirements.

Next Month: Stay tuned for the second part of the 2-article series on Logging for HIPAA by Dr. Chuvakin. Previous articles in the compliance series include Logging for PCI, Part 1 and Part 2.

Is correlation killing the SIEM market?

Correlation – what’s it good for? Absolutely nothing!*

* Thank you Edwin Starr.

Ok, that might be a little harsh, but hear me out.

The grand vision of Security Information and Event Management is that it will tell you when you are in danger, and the means to deliver this is through sifting mountains of log files looking for trouble signs. I like to think of that as big-C correlation. Big-C correlation is an admirable concept of associating events with importance. But whenever a discussion occurs about correlation or for that matter SIEM – it quickly becomes a discussion about what I call little-c correlation – that is rules-based multi-event pattern matching.

To its proponents, correlation can detect patterns of behavior so subtle that it would be impossible for an unaided human to do the same. It can deliver the promise of SIEM – telling you what is wrong in a sea of data. Heady stuff indeed, and partially true. But the naysayers have numerous good arguments against it as well; in no particular order, some of the more common ones:

• Rules are too hard to write
• The rule builders supplied by the vendors are not powerful enough
• Users don’t understand the use cases (that is usually a vendor rebuttal argument for the above).
• Rules are not “set and forget” and require constant tuning
• Correlation can’t tell you anything you don’t already know (you have to know the condition to write the rule)
• Too many false positives

The proponents reply that this is a technical challenge, and the tools will get better and the problem will be conquered. I have a broader concern about correlation (little c), however, and that is just how useful it is to the majority of customer use cases. And if it is not useful, is SIEM, with a correlation focus, really viable?

The guys over at Securosis have been running a series defining SIEM that is really worth a read. Now, the method they recommend for approaching correlation is that you look at your last 4-5 incidents when it comes to rule-authoring. Their basic point is that if the goals are modest, you can be modestly successful. OK, I agree, but then how many of the big security problems today are really the ones best served by correlation? Heck, it seems the big problems are people being tricked into downloading and running malware, and correlation is not going to help with that. Education and Change Detection are both better ways to avoid those types of threats. Nor will correlation help with SQL injection. Most of the classic scenarios for correlation are successful perimeter breaches, but with a SQL attack you are already within the perimeter. It seems to me correlation is potentially solving yesterday’s problems – and doing it, because of technical challenges, poorly.

So to break down my fundamental issue with correlation – how many incidents are 1) serious 2) have occurred 3) cannot be mitigated in some other more reasonable fashion and 4) the future discovery is best done by detecting a complex pattern?

Not many, I reckon.

No wonder SIEM gets a bad rap on occasion. SIEM will make a user safer but the means to the end is focused on a flawed concept.

That is not to say correlation does not have its uses – certainly the bigger and more complex the environment the more likely you are going to have cases where correlation could and does help. In F500 the very complexity of the environment can mean other mitigation approaches are less achievable. The classic correlation focused SEM market started in large enterprise but is it a viable approach?

Let’s use Prism as an example, as I can speak to the experiences of our customers. We have about 900 customers that have deployed EventTracker, our SIEM solution. These customers are mostly smaller enterprises, what Gartner defines as SME; however, they still purchased predominantly for the classic Gartner use case – the budget came from a compliance drive, but they wanted to use SIEM as a means of improving overall IT security and sometimes operations.

In the case of EventTracker the product is a single integrated solution so the rule-based correlation engine is simply part of the package. It is real-time, extensible and ships with a bunch of predefined rules.

But only a handful of our customers actually use it, and even those who do, don’t do much.

Interestingly enough, most of the customers looked at correlation during evaluation, but when the product went into production only a handful actually ended up writing correlation rules. So the reality was, although they thought they were going to use the capability, few did. A larger number, but still a distinct minority, are using some of the preconfigured correlations, as there are some use cases (such as failed logins on multiple machines from a single IP) that a simple correlation rule makes good sense for. Even with the packaged rules, however, customers tended to use only a handful, and regardless these are not the classic “if you see this on a firewall, and this on a server, and this in AD, followed by outbound ftp traffic, you are in trouble” complex correlation examples people are fond of using.
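For illustration, the failed-logins use case mentioned above can be sketched in a few lines of code; the threshold and the event field names are assumptions for the sketch, not how any particular SIEM implements its rules:

```python
# Sketch of the "failed logins on multiple machines from a single IP"
# correlation rule. The min_hosts threshold and the event field names
# are illustrative assumptions.
from collections import defaultdict

def flag_sources(events, min_hosts=3):
    """Return source IPs with failed logins on at least min_hosts machines."""
    hosts_per_source = defaultdict(set)
    for ev in events:
        if ev["action"] == "login" and ev["status"] == "failed":
            hosts_per_source[ev["source"]].add(ev["host"])
    return {src for src, hosts in hosts_per_source.items()
            if len(hosts) >= min_hosts}

# One source failing against three hosts, plus an unrelated successful login
events = [
    {"source": "10.0.0.9", "host": h, "action": "login", "status": "failed"}
    for h in ("web1", "web2", "db1")
] + [{"source": "10.0.0.5", "host": "web1", "action": "login", "status": "ok"}]

print(flag_sources(events))  # {'10.0.0.9'}
```

That is the whole rule – which is rather the point: the simple correlations customers actually use are nothing like the multi-device scenarios the demos are built around.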

Our natural reaction was that there was something wrong with the correlation feature, so we went back to the installed base and started nosing about. The common response was: no, nothing wrong, just never got to it. On further questioning we surfaced the fact that, for most of the problems they were facing, rules were simply not the best approach.

So we have an industry that, if you agree with my premise, is talking about core value that is impractical to all but a small minority. We are, as vendors, selling snake oil.

So what does that mean?

Are prospects overweighting correlation capability in their evaluations to the detriment of other features that they will actually use later? Are they setting themselves up to fail with false expectations into what SIEM can deliver?

From a vendor standpoint are we all spending R&D dollars on capability that is really simply demoware? Case in point is correlation GUIs. Lots of R&D $$ go into correlation GUIs because writing rules is too hard and customers are going to write a lot of rules. But the compelling value promised for correlation is the ability to look for highly complex conditions. Inevitably when you make a development tool simpler you compromise the power in favor of speed of development. In truth you have not only made it simpler, but also stupider, and less capable. And if you are seldom writing rules, per the Securosis approach, does it need to be fast and easy at all?

That is not to say SIEM is valueless – SIEM is extremely valuable – but we are focusing on its most difficult and least valuable component, which is really pretty darn strange. There was an interesting and amusing exchange a week or so ago when LogLogic lowered the price of their SEM capability. This, as you might imagine, raised the hackles of the SEM apologists. Rocky uses Arcsight as an example of successful SIEM (although he conveniently talks about SEM as SIEM, and the SIEM use case is broader now than SEM) – but how much ESM is Arcsight selling down-market? I tend to agree with Rocky on large enterprise, but using that as an indicator of the broad market is dangerous. Plus, the example of our customers I gave above would lead one to believe people bought for one reason but are using the products in an entirely different way.

So hopefully this will spark some discussion. This is not, and should not be, a slag between Log Management, SEM or SIM, because it seems to me the only real difference between SEM and LM these days is in the amount of lip service paid to real-time rules.

So let’s talk about correlation – what is it good for?

-Steve Lafferty

100 Log Management uses #65 Secure Auditing – Introduction

This post introduces the concepts behind secure auditing. In subsequent posts we will look at secure auditing implementations in several of the Unix (Solaris, AIX, HP-UX) and Linux distributions. My apologies that this intro is a bit long at about 10 minutes but I think the foundation is worthwhile.

SIEM or Log Management?

Mike Rothman of Securosis has a thread titled Understanding and Selecting SIEM/Log Management. He suggests both disciplines have fused and defines the holy grail of security practitioners as “one alert telling exactly what is broken”. In the ensuing discussion, there is a suggestion that SIEM and Log Mgt have not fused and there are vendors that do one but not the other.

After a number of years in the industry, I find myself uncomfortable with either term (SIEM or Log Mgt) as it relates to the problem the technology can solve, especially for the mid-market, our focus.

The SIEM term suggests it’s only about security, and while that is certainly a significant use case, it’s hardly the only use for the technology. That said, if a user wishes to use the technology for only the security use case, fine, but that is not a reflection of the technology. Oh, by the way, Security Information Management would perforce include other items such as change audit and configuration assessment data, which is outside the scope of “Log Management”.

The trouble with the term Log Management is that it is not tied to any particular use case and that makes it difficult to sell (not to mention boring). Why would you want to manage logs anyway? Users only care about solutions to real problems they have; not generic “best practice” because Mr. Pundit says so.

SIEM makes sense as “the” use case for this technology as you go to large (Fortune 2000) enterprises, and here SIEM is often a synonym for correlation. But to do this in any useful way, you will need not just the box (real or virtual) but especially the expert analyst team to drive it, keep it updated and ticking. What is this analyst team busy with? Updating the rules to accommodate constantly changing elements (threats, business rules, IT components) to get that “one alert”. This is not like AntiVirus, where rule updates can happen directly from the vendor with no intervention from the admin/user. This is a model only large enterprises can afford.

Some vendors suggest that you can reduce this to an analyst-in-a-box for the small enterprise, i.e., just buy my box, enable these default rules with minimal intervention and bingo, you will be safe. All too common results are either irrelevant alerts or the magic box acting as the dog in the night-time – a major reason for “pissed-off SIEM users”. And of course a dedicated analyst (much less a team) is simply not available.

This is not to say that the technology is useless absent the dedicated analyst, or that SIEM is a lost cause, but rather to paint a realistic picture: any “box” can only go so far by itself, and given the more-with-less needs of this mid-market, obsessing over SIEM features obscures the greater value offered by this technology.

Most medium enterprise networks are “organically grown architectures” – a response to business needs – and there is rarely an overarching security model that covers the assets. Point solutions dominate, based on incidents or perceived threats or in response to specific compliance mandates. See the results of our virtualization survey for example. Given the resource constraints, the technology must have broad features beyond the (essential) security ones. The smarter the solution, the less smart the analyst needs to be – so really it’s a box-for-an-analyst (and of course all boxes now ought to be virtual).

It makes sense to ask what problem is solved, as this is the universe customers live in. Mike identifies reacting faster, security efficiency and compliance automation, to which I would add operations support and cost reduction. More specifically, across the board: show what is happening (track users, monitor critical systems/applications/firewalls, USB activity, database activity, hypervisor changes, physical equipment, etc.), show what has happened (forensics, reports, etc.) and show what is different (change audit).

So back to the question, what would you call such a solution? SIEM has been pounded by Gartner et al into the budget line items of large enterprises so it becomes easier to be recognized as a need. However it is a limiting description. If I had only these two choices, I would have to favor Log Management where one (essential) application is SIEM.


PCI HOWTO Part 2; Revised NIST guidelines

PCI Logging HOWTO, Part 2

The Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS has continued to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide. Despite its focus on reducing payment card transaction risk, PCI DSS also makes an impact on broader data security as well as network and application security. Therefore, it can be considered the most influential security standard today.

Among other things, PCI DSS mandates producing and reviewing logs from systems within PCI compliance scope – the systems that process card data, the systems directly connected to them, and the ones facing the hostile Internet. One should always remember that log collection and review are also critical for good security operations and incident response. In this article, we will continue to focus on operational aspects of logging and log management for PCI compliance. In part 1, we looked at the central logging requirements from PCI DSS Requirement 10. However, logging as a means of IT accountability is a perfect compliance technology; that is why it is implied in all twelve PCI DSS requirements! Let’s review where else logging is mandated or prescribed and what you should do about it.

On a high-level, logging and log management are used for two purposes in PCI DSS:

  • To directly satisfy a requirement – namely, the requirements for logging (Requirement 10.2 Implement automated audit trails for all system components, and others) and log management (for example, Requirement 10.6 Review logs for all system components at least daily). These logging requirements usually get all of the attention of implementers and pundits.
  • To substantiate and enable other PCI DSS requirements such as user credential management (Requirement 8 Assign a unique ID to each person with computer access) or firewall rules (Requirement 1 Install and maintain a firewall configuration to protect cardholder data). These indirect requirements are no less important and no less mandatory than the other DSS guidelines.

Now, let’s dive deeper into the role of logs in order to further explain that logs are not only about Requirement 10, which we covered in the previous paper. Just about every control that is deployed to satisfy the PCI DSS requirements, such as data encryption or anti-virus updates, can use log files to substantiate its validity.

Starting from Requirement 1 “Install and maintain a firewall configuration to protect cardholder data,” we see that it mentions that organizations must have “a formal process for approving and testing all external network connections and changes to the firewall configuration.” However, after such a process is established, one needs to validate that firewall configuration changes do happen in accordance with documented change management procedures and do not put the firewall configuration out of sync with DSS guidance. That is where logging becomes extremely handy, since it shows you what actually happened and not just what was supposed to happen according to a policy or process.

Other log-related areas within Requirement 1 include section 1.1.6 “Justification and documentation for any available protocols besides Hypertext Transfer Protocol (HTTP), SSL, Secure Shell (SSH), and VPN,” where logs should be used to watch for all events triggered by such communication. Seeing a “connection allowed” log entry for port 23 (telnet) should call for some attention!

Also relevant is section 1.1.7 “Justification and documentation for any risky protocols allowed” (such as TFTP, or even FTP, which exposes plain-text passwords as well as transferred data to sniffers), which includes the reason for use of the protocol and the security features implemented. Here logs help to review and monitor the use of “risky” protocols. This especially applies to cases where the use of risky protocols is not “official.” What is worse is when payment data is actually exchanged using such protocols. While logs will not show you what data was transferred, the use of the protocols themselves will be apparent in the logs.

Further, Requirement 1.3 contains guidance on firewall configuration, with specific statements about inbound and outbound connectivity. One must use firewall logs to verify this; a mere infrequent review of firewall configuration is not sufficient, since only logs show “how it really happened” and not just “how it was configured.” To substantiate this requirement, one can use firewall system logging, such as records of configuration pushes and updates, as well as summaries of allowed connections to and from the “in-scope” environment.

In order to address these sections of Requirement 1, make sure that firewalls that protect the cardholder environment log their configuration changes and user modifications, as well as inbound and outbound connections to and from the environment.
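As an illustration of such monitoring, allowed connections to risky-protocol ports can be flagged mechanically. The sketch below assumes a simplified, hypothetical firewall log format; real products (Cisco ASA, iptables, and so on) each use their own formats, so the regular expression and the risky-port list would need adjusting to your environment.

```python
import re

# Ports for protocols commonly treated as risky under PCI DSS 1.1.7;
# an illustrative mapping, not an official list.
RISKY_PORTS = {21: "FTP", 23: "telnet", 69: "TFTP"}

# Assumed log format: "... ALLOW TCP 10.0.0.5:51234 -> 192.168.1.10:23 ..."
CONN_RE = re.compile(r"ALLOW\s+(?:TCP|UDP)\s+\S+\s+->\s+\S+:(\d+)")

def flag_risky_connections(lines):
    """Return (line, protocol) pairs for allowed connections to risky ports."""
    hits = []
    for line in lines:
        m = CONN_RE.search(line)
        if m and int(m.group(1)) in RISKY_PORTS:
            hits.append((line, RISKY_PORTS[int(m.group(1))]))
    return hits
```

Lines matched here would go to the top of the daily review pile, since they show a "risky" protocol actually in use rather than merely permitted by configuration.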

Similarly, Requirement 2 “Do not use vendor-supplied defaults for system passwords and other security parameters” talks about password management “best practices” as well as general security hardening, such as not running unneeded services. Logs can show when such previously disabled services are being started, either by misinformed system administrators or by attackers after they compromise a system. For example, if the Apache web server is disabled on a mail server system (due to PCI DSS Requirement 2.2.1 “One primary function per server”), a message indicating its startup should be investigated, since the service should not be starting or restarting.

In order to address this section of Requirement 2, make sure that system status and authentication events are logged and can be used to review such activities.
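To make the service-startup check concrete, here is a minimal sketch that scans syslog lines for startup messages from services that should stay disabled on a given host. The service names and the message wording are illustrative assumptions; real syslog messages vary by init system and daemon.

```python
# Services that should stay disabled on this (hypothetical) mail server,
# per the "one primary function per server" rule.
FORBIDDEN_SERVICES = {"apache2", "httpd", "vsftpd"}

def unexpected_starts(syslog_lines):
    """Flag syslog lines announcing the start of a service that should be off."""
    alerts = []
    for line in syslog_lines:
        lowered = line.lower()
        if any(svc in lowered for svc in FORBIDDEN_SERVICES) and \
           any(word in lowered for word in ("started", "starting")):
            alerts.append(line)
    return alerts
```

Each host role would get its own forbidden-service list, derived from the hardening standard that Requirement 2 asks you to document.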

Further, Requirement 3 “Protect stored cardholder data”, which deals with data encryption, has direct and unambiguous links to logging. For example, the entire subsection 3.6, which deals with encryption key generation and storage, periodic key changes, etc., implies having logs to verify that such activities actually take place. Specifically, key generation, distribution, and revocation are logged by most encryption systems, and such logs are critical for satisfying this requirement. Requirement 4 “Encrypt transmission of cardholder data across open, public networks”, which also deals with encryption, has logging implications for similar reasons.

In order to address these sections of Requirements 3 and 4, make sure that key management and other encryption system operations are recorded in logs and later reviewed.

Requirement 5 “Use and regularly update anti-virus software or programs” refers to defenses against malicious software. Of course, in order to satisfy section 5.2, which requires that you “Ensure that all anti-virus mechanisms are current, actively running, and capable of generating audit logs,” one needs to see such logs. This requirement is directly satisfied by producing and collecting anti-virus logs based on PCI DSS log management guidelines. By the way, PCI DSS assessment guidelines for QSAs state the following with regard to this requirement: “For a sample of system components, verify that antivirus software log generation is enabled and that such logs are retained in accordance with PCI DSS Requirement 10.7.”

So, even the requirement to “use and regularly update anti-virus software” will likely generate requests for log data during the assessment, since the information is present in anti-virus logs. It is also well known that failed anti-virus updates, also reflected in logs, expose the company to malware risks, since anti-virus without the latest signature updates only creates a false sense of security and undermines the compliance effort.

In order to address this section of Requirement 5, make sure that anti-malware software produces logs and that such logs are managed and reviewed in accordance with Requirement 10.

Also, PCI DSS Requirement 6 “Develop and maintain secure systems and applications” is in the same league: secure systems and applications are unthinkable without strong application logging functions and application security monitoring. This is an area where a security team must work with internal or outsourced development teams to ensure that the applications they develop can be used in PCI DSS compliance environments. Specifically, if your organization makes the unfortunate choice of using homegrown payment applications, logging the operations involving PANs and other cardholder data becomes an absolute imperative. A useful discussion of how to create security application logs can be found here.

In order to address this section of Requirement 6, make sure that newly created and deployed applications – especially applications directly handling card data – have extensive and useful security logs that at least satisfy DSS Requirement 10.2.

Further, Requirement 7, “Restrict access to cardholder data by business need-to-know,” requires logs to validate who actually had access to said data. If users who should be prevented from seeing the data appear in the log files as accessing such data, remediation is needed.

In order to address this section of Requirement 7, one needs to make sure that access to such data, whether in databases or files, is logged and that such logs are managed and reviewed as prescribed in Requirement 10.

In general, assigning a unique ID to each user accessing the system fits with other basic security “best practices.” In PCI DSS, it is not just a “best practice”; it is a specific requirement (Requirement 8 “Assign a unique ID to each person with computer access”). Obviously, one needs to “Control addition, deletion, and modification of user IDs, credentials, and other identifier objects” (section 8.5.1), and most systems log such activities, which allows you to assure that they are indeed taking place. In addition, Section 8.5.9, “Change user passwords at least every 90 days,” can also be verified by reviewing log files from the server to assure that all accounts have their passwords changed at least every 90 days. Sadly, some systems do not log password changes – in this case, other means of verifying such actions need to be used.

In order to address these sections of Requirement 8, one needs to record key system actions with user names. Using automated tools to detect when accounts are shared is a useful trick as well.
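For the 90-day password-change check, a small script can compare each account's last password change against the limit. This is a sketch assuming you have already extracted last-change dates (for example, from logs or a directory export) into a simple mapping; the function and field names are hypothetical.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # PCI DSS 8.5.9 limit

def stale_passwords(last_changed, today=None):
    """Return accounts whose password is older than 90 days.

    `last_changed` maps account name -> date of last password change
    (assumed to be parsed from logs or a directory export beforehand).
    """
    today = today or date.today()
    return [acct for acct, changed in last_changed.items()
            if today - changed > MAX_AGE]
```

For systems that do not log password changes at all, this check cannot be log-driven, and the other verification means mentioned above apply.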

Requirement 9 “Restrict physical access to cardholder data” presents a new realm of security—physical access control. At first glance, it might seem that the area of locks, video cameras, and armed guards has nothing to do with syslog and computer audit trails. In reality, logs make their appearance here as well: section 9.4, which covers maintaining a visitor log, is connected to log management if such a visitor log is electronic (managing handwritten logs falls outside of the scope of this article). Also, there are separate data retention requirements for such logs: “Use a visitor log to maintain a physical audit trail of visitor activity. Retain this log for a minimum of three months, unless otherwise restricted by law.”

In order to address this section of Requirement 9, physical access should be logged and the resulting logs managed in accordance with Requirement 10. It is curious that physical access logs don’t carry an explicit daily review requirement.

Requirement 11 “Regularly test security systems and processes” addresses the need to scan (automatically test) the in-scope systems for vulnerabilities. However, it also calls for the use of intrusion detection or prevention systems (IDS or IPS) in Section 11.4: “Use network intrusion detection systems, host-based intrusion detection systems, and intrusion prevention systems to monitor all network traffic and alert personnel to suspected compromises. Keep all intrusion detection and prevention engines up-to-date.” Intrusion detection is only useful if monitored, and IDS alerts and logs are how such monitoring is performed. Here is an example IDS log entry that should be reviewed along with other logs, as prescribed in Requirement 10, discussed in our previous article:

Jan 23 16:38:16 bastion snort[1131]: [1:3813:2] WEB-CGI configdir command execution attempt [Classification: Attempted User Privilege Gain] [Priority: 1]: {TCP} ->

In order to address this section of Requirement 11, IDS and IPS logs need to be reviewed for threats, and incident response procedures should be followed when called for.
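Alert lines in this syslog-style Snort format can be parsed into fields for automated review. The regular expression below is a sketch tailored to the sample entry above; other Snort output modes format alerts differently, so treat it as illustrative rather than general.

```python
import re

# Matches the syslog-style alert format shown in the sample above.
SNORT_RE = re.compile(
    r"snort\[\d+\]: \[(?P<sid>[\d:]+)\] (?P<msg>.*?) "
    r"\[Classification: (?P<cls>[^\]]+)\] \[Priority: (?P<prio>\d+)\]"
)

def parse_snort(line):
    """Extract signature ID, message, classification, and priority, or None."""
    m = SNORT_RE.search(line)
    return m.groupdict() if m else None
```

With the fields extracted, Priority 1 alerts can be routed straight into the exception investigation workflow described later in this article.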

Requirement 12 “Maintain a policy that addresses information security for employees and contractors” covers security issues at a higher level—that of security policy as well as security standards and daily operational procedures (e.g., the previously discussed procedure for daily log review mandated by Requirement 10 should be reflected here). However, it also has logging implications, since application logging should be a part of every security policy. In addition, incident response requirements are also tied to logging: “Establish, document, and distribute security incident response and escalation procedures to ensure timely and effective handling of all situations” cannot be satisfied without effective collection and timely review of log data.

In order to address this section of Requirement 12, explicit logging policies and operational procedures need to be created. The guidance from part 1 of this article can be used to jumpstart these efforts.

Thus, event logging and security monitoring in a PCI DSS program go well beyond Requirement 10. Only careful log data collection and review can help companies meet the broad requirements of PCI DSS.


To conclude, PCI security guidance mandates not only the creation, retention, and review of logs. Logs feature in PCI DSS at a higher level: as a means to substantiate other PCI DSS requirements. Thus, your logging policy needs to focus higher and broader than a mere Requirement 10. Similarly, deploying a log management system allows you not just to “make Requirement 10 go away” but also to create proof of following many of the other twelve requirements. Following the principles and examples in this article, an organization can utilize logging to the fullest extent for PCI DSS compliance, security, and operational efficiency.

Industry News

NIST guide: The imperative of real-time risk management
Revised guidelines for assessing security controls for government IT systems reflect a shifting emphasis toward continuously monitoring systems and making real-time risk assessments.

Did you know? Risk assessment begins with deep real-time visibility into log data produced by perimeter defense systems and IT assets inside the perimeter. Then, the application of powerful correlation and analytics point you to the things that matter for defense in depth.

Cyber security bill would penalize agencies for non-compliance
A House bill would give the government more authority to enforce cyber security measures in federal agencies… The bill directs civilian agencies to show they have complied with FISMA when submitting annual budgets. Under the bill, the cyberspace director can recommend the president withhold awards and bonuses for agencies that fail to prove they have secured networks.

Related resource Whitepaper: Meeting FISMA compliance with EventTracker

Virtualization security falls short among enterprises
Enterprise adoption of virtualization is still going strong, but security for those environments is not. A survey by Prism Microsystems shows many organizations are failing to enforce a separation of duties and deploy technologies to protect the hypervisor.

Related Resource: The complete results of the 2010 survey on virtualization security are now available. Download the PDF here

100 Log Management uses #64: Tracking user activity, Part III

Continuing our series on user activity monitoring, today we look at something that is very hard to do in Vista and later, and impossible in XP and earlier — reporting on system idle time. The only way to accomplish this in Windows is to set up a domain policy to lock the screen after a certain amount of time and then calculate from the time the screen saver is invoked to when it is cleared. In XP and prior, however, the invocation of the screensaver does not generate an event, so you are out of luck. In Vista and later, an event is triggered, so it is slightly better, but even there the information generated should only be viewed as an estimate, as the method is not fool-proof. We’ll look at the pros (few) and cons (many). Enjoy.

Logging for PCI HOWTO; New Trojan masquerades as Adobe update


Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands – Visa, MasterCard, American Express, JCB and Discover – and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS has continued to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide. Despite its focus on reducing payment card transaction risk, PCI DSS also makes an impact on broader data security as well as network and application security. It can therefore be considered the most influential security standard today.

Among other things, PCI DSS mandates producing and reviewing logs from systems in scope for PCI compliance. One should always remember that log collection and review are also critical for good security operations and incident response. In this article, we will focus on operational aspects of logging and log management for PCI compliance.

Recent surveys indicate that logging and monitoring are the most challenging aspects of PCI DSS compliance. One reason is that, unlike other prescribed controls and tasks, which are annual or quarterly, log review activities are explicitly prescribed to be done every single day. Another reason is that DSS guidance prescribes “log review” but does not explain what specific logs need to be looked at and how exactly such review should be undertaken.

If that’s not sufficient reason to do it right, logging is a perfect compliance technology that is implied in all twelve PCI DSS requirements (as well as in most other regulations), and specifically mandated in Requirement 10. Under this requirement, logs for all in-scope systems and all system components must be collected and reviewed at least daily. (An in-scope system is one that is used to process or store credit card information, one that passes unencrypted card data, or one directly connected to such systems without a firewall in between.) Furthermore, PCI DSS states that the organization must ensure log integrity by implementing file integrity monitoring so that existing log data cannot be changed without notice. Time synchronization across log sources is mandated in Requirement 10.4 (“Synchronize all critical system clocks and times.”) It also prescribes that logs be stored for one year.

To summarize, for PCI DSS you:

  • Must have good logs
  • Must collect logs
  • Must store logs for at least 1 year
  • Must protect logs
  • Must synchronize time
  • Must review logs daily (using an automated system)

Despite fairly prescriptive DSS logging guidance, people continue to ask for even more details, down to “what configuration settings should we change on XYZ system?”, “what events to log?”, “what details to log for each event?”, “what logs to retain?”, “what logs to review?”, “how exactly to review the logs?”, etc. Such guidance must cover both the PCI logging requirements needed to achieve and stay compliant and those needed to get compliance validated by an on-site assessment or self-assessment.

Here is a recommended process to follow to get your logging and log management in shape for PCI compliance while getting other business benefits for security and operations.

Logging Process for PCI

First, determine the scope for PCI compliance: the systems and databases that store the data, the applications that process the data, the network equipment that transmits unencrypted card data, as well as any system that is not separated from the above by a firewall. In the case of a so-called “flat network,” the entire IT environment is in scope and thus must be made compliant by implementing DSS controls.

Second, identify system components that touch the data: apart from operating systems (Windows or Linux, for example), databases and web servers need to be considered, as well as payment applications, middleware, Point-of-Sale (PoS) devices, etc.

Next, a logging policy must be created. A PCI-derived logging policy should at least cover the logged event types and logged details for every application and system in scope for PCI DSS. For custom applications, this logging policy needs to be communicated to internal or outsourced developers.

After logs are created, they need to be centralized. Centralization ensures that logs are retained in a controlled environment and not simply “left to rot” wherever they are produced across the cardholder data environment.

Next comes log retention. PCI DSS has an easy answer for your log retention policy: logs must be stored for one year, with a three-month supply available in easily accessible storage (not tape!).

Log protection and security are also prescribed in PCI DSS. It mandates limiting access to logs and employing technology to detect any changes to stored logs. It comes as no surprise that access to logs must itself be logged! Many automated tools will create an audit trail of log review activities, which can be used to satisfy the PCI requirement and prove to a QSA that the requirements are indeed being followed.
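One common technique for detecting changes to stored logs is a hash chain: each line's digest incorporates the previous digest, so any later edit invalidates every subsequent value. The sketch below illustrates the principle; it is not the mechanism of any particular file integrity monitoring product.

```python
import hashlib

def chain_digests(log_lines, seed=b"log-chain-seed"):
    """Compute a running SHA-256 hash chain over log lines.

    Store the digests separately from the logs themselves; an edit to
    any line changes every digest from that point on, exposing tampering.
    """
    digest = seed
    chain = []
    for line in log_lines:
        digest = hashlib.sha256(digest + line.encode()).digest()
        chain.append(digest.hex())
    return chain
```

Verification is simply recomputing the chain over the stored logs and comparing it to the saved digests; the first mismatch marks where tampering began.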

Finally – the hard part! Daily log review procedures and tasks need to be created, communicated to those who will be performing them, and then operationalized. This requirement is by far the most onerous for most organizations, especially smaller ones. However, it does not mean that every single log must be read by a human being. Automated tools can and must be used for log review, given that log volume might well run into gigabytes a day.

Log Review

Let’s now focus on log review in depth. PCI DSS states that one must “Review logs for all system components at least daily. Log reviews must also include those servers that perform security functions like intrusion-detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers,” as well as VPN servers that grant access to the in-scope environment. It then adds that “Log harvesting, parsing, and alerting tools may be used to meet compliance.”

Further, PCI DSS testing and validation procedures for log review direct that a QSA should “obtain and examine security policies and procedures to verify that they include procedures to review security logs at least daily and that follow-up to exceptions is required.” The QSA must also, “through observation and interviews, verify that regular log reviews are performed for all system components.”

To satisfy those requirements, it is recommended that an organization create “PCI System Log Review Procedures” and related workflows that cover:

  • Log review practices, patterns and tasks
  • Exception investigation and analysis
  • Validation of these procedures and management reporting.

The procedures can be performed using automated log management tools, or manually when tools are not available or are not compatible with the log formats produced by the payment applications.


In other words, periodic log review practices are performed every day (or less frequently, if daily review is impossible), and any discovered exceptions are escalated to exception investigation and analysis. The basic principle of PCI DSS periodic log review (further referred to as “daily log review,” even if it might not be performed daily for all applications) is to accomplish the following:

  • Assure that card holder data has not been compromised by the attackers
  • Detect possible risks to cardholder data, as early as possible
  • Satisfy the explicit PCI DSS requirement for log review.

Even though PCI DSS is the motivation for daily log review, other goals are accomplished by performing it:

  • Assure that systems that process cardholder data are operating securely and efficiently
  • Reconcile all possible anomalies observed in logs with other systems activities (such as application code changes or patch deployments)

In addition, it makes sense to also perform a quick assessment of the log entry volume for the past day (the past 24-hour period). Significant differences in log volume should also be investigated using the procedures. In particular, loss of logging (often recognized from a dramatic decrease in log entry volume) needs to be investigated and escalated as a security – and compliance! – incident.
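Such a volume check can be sketched as follows; the 50% drop threshold is an illustrative assumption, not PCI guidance, and should be tuned to your environment's normal day-to-day variation.

```python
def volume_alert(today_count, baseline_counts, drop_ratio=0.5):
    """Flag a dramatic decrease in daily log volume (possible loss of logging).

    `baseline_counts` is a list of daily totals from recent history;
    `drop_ratio` is the fraction of the baseline below which we alert.
    """
    if not baseline_counts:
        return False  # no history yet, nothing to compare against
    baseline = sum(baseline_counts) / len(baseline_counts)
    return today_count < baseline * drop_ratio
```

A triggered alert here should be escalated through the same exception investigation workflow as any other security incident, as noted above.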

What to Look For?

The following are some rough guidelines for marking some messages as being of interest for PCI log review. If found, these messages will be looked at first during the daily review process.

  • Login and other “access granted” log messages occurring at an unusual hour
  • Credential and access modification log messages occurring outside of a change window
  • Any log messages produced by expired user accounts
  • Reboot/restart messages outside of a maintenance window (if defined)
  • Backup/export of data outside of backup windows (if defined)
  • Log data deletion
  • Logging termination on a system or application
  • Any change to logging configuration on a system or application
  • Any log message that has triggered an action in the past: system configuration, investigation, etc.
  • Other logs clearly associated with security policy violations.

As we can see, this list is also very useful for creating a “what to monitor in near-real-time?” policy, not just for logging. Over time, this list should be expanded based on knowledge of local application logs and past investigations.
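A first cut at such a "review first" filter can be expressed as simple rules over normalized events. Everything here (the business-hours window, the account names, the action labels) is a hypothetical assumption for illustration, standing in for values your own parsing pipeline and policy would supply.

```python
from datetime import datetime, time

BUSINESS_HOURS = (time(7, 0), time(19, 0))       # assumed local policy
EXPIRED_ACCOUNTS = {"jsmith", "contractor42"}    # hypothetical account names

def needs_first_look(event):
    """Decide whether a normalized log event matches the 'review first' list.

    `event` is an assumed dict with keys 'user', 'action', and
    'timestamp' (a datetime), produced by earlier parsing stages.
    """
    t = event["timestamp"].time()
    off_hours = not (BUSINESS_HOURS[0] <= t <= BUSINESS_HOURS[1])
    if event["action"] == "login" and off_hours:
        return True                        # access granted at an unusual hour
    if event["user"] in EXPIRED_ACCOUNTS:
        return True                        # activity by an expired account
    if event["action"] in ("log_delete", "logging_stopped",
                           "audit_config_change"):
        return True                        # tampering with logging itself
    return False
```

The same rule set, fed by a real-time collector instead of a daily batch, becomes the near-real-time monitoring policy mentioned above.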

Frequency of periodic log review

PCI DSS Requirement 10.6 explicitly states: “Review logs for all system components at least daily.” It is assumed that daily log review procedures will be followed every day. Only your QSA may approve less frequent log reviews, based on the same principle that QSAs use for compensating controls. The list below contains some of the reasons why log review may be performed less frequently than every day:

  • Application or system does not produce logs every day. If log records are not added every day, then daily log review is unlikely to be needed
  • Log review is performed using a log management system that collects logs in batch mode, and batches of logs arrive less frequently than once a day
  • Application does not handle or store credit card data; it is only in scope since it is directly connected to such systems.

Remember that only your QSA’s opinion on this is binding – nobody else’s!

PCI Logging Mistakes

Finally, let’s review some common mistakes that were observed by the author in his recent consulting projects related to logging and PCI DSS.

  • Despite clear evidence – in the DSS itself! – PCI compliance DOES NOT equal simply collecting logs. If you collect logs, you are making a good step towards compliance, but you are nowhere near done. Daily log review is explicitly mentioned!
  • On the other hand, the belief that you need to read every log every day – manually – is just as misguided. Automated tools that parse and summarize the logs are perfectly adequate for compliance, security, and operational log review.
  • Logging, and especially log review, is hard to do – thus some people hope that they will get “a free pass” and not have to do it. Such hopes typically result in lack of compliance, conflict with PCI assessors and the acquiring bank, and ultimately data breaches that persist for months without detection.

Following the guidelines in this article will help you avoid these and other mistakes and develop a secure and compliant environment.


To conclude, PCI security guidance mandates not only the creation and retention of logs, but also their review. It is essential that your logging policy and procedures cover such daily review tasks, whether performed using log management tools or manually. This will allow you to get compliant, validate your compliance, and stay compliant and secure on an ongoing basis. The major effect the age of compliance has had on log management is to turn it into a requirement rather than just a recommendation, and this change is certainly to the advantage of any enterprise subject to one of those regulations. With a bit of guidance, such as this article, logging can be made understandable and reasonable, not onerous.

Related Resource: This document identifies PCI-DSS requirements covering network security, data protection, vulnerability management, access control, monitoring and testing, and information security – and presents the log management solution for each.

Industry News

New Trojan masquerades as Adobe update
This malware bears icons and version details identical to an Adobe update, which enables it to bypass antivirus software and system analysts and to trick users into believing that it is legitimate. The Trojan drops other malware and contacts a remote server for orders. It can be controlled remotely by cybercriminals to steal data from the victim without their knowledge.

Did you know? Malware installed unknowingly by employees can lead to theft of sensitive corporate data, and anti-virus and firewalls are generally insufficient for detection since malware signatures are changing constantly. 

Data theft affected 24,000 HSBC accounts
HSBC, Europe’s biggest bank, said a theft of data by a former bank employee affected up to 24,000 client accounts, dealing a hefty blow to the reputation of its private bank. The bank had previously said “less than 10 clients” were affected – this number has now been revised to 24,000.

Did you know? Theft by malicious insiders is only growing in number, as seen by the wave of recent media coverage. Protecting data from theft by insiders is often as simple as monitoring access to critical resources by users and admins.

Prism Microsystems receives Network Products Guide 2010 Product Innovation Award 
Network Products Guide, a leading information technology research guide, has named Prism’s EventTracker solution a winner of the 2010 Product Innovation Award in the SIEM category. This annual award recognizes and honors companies from all over the world with innovative and ground-breaking products.

100 Log Management uses #63 Tracking user activity, Part II

Today we continue our series on user activity monitoring using event logs. Any analysis of user activity starts with the system logon. We will take a look at some sample events and describe the types of useful information that can be pulled from the log. While we are on user logons, we will also take a short diversion into failed logons. While perhaps not directly useful for activity monitoring, paying attention to failed logon attempts is also critical.

100 Log Management uses #62 Tracking user activity

Today we begin a new miniseries – looking at and reporting on user activities. Most enterprises restrict what users are able to do — such as playing computer games during work hours. This can be done through software that restricts access, but often it is simply enforced on the honor system. Regardless of which approach a company takes, analyzing logs presents a pretty good idea of what users are up to. In the next few sessions we will take a look at the various logs that get generated and what can be done with them.

100 Log Management uses #61: Static IP address conflicts

Today we look at an interesting operational use case for logs that we learned about through painful experience — static IP address conflicts. We have a pretty large number of static IP addresses assigned to our server machines. Typical of a smaller company, we assigned IP addresses and recorded them in a spreadsheet. Well, one of our network guys made a mistake and we ended up having problems with duplicate addresses. The gremlins came out in full force and nothing seemed to be working right! We used logs to quickly diagnose the problem. Although I mention a Windows pop-up as a possible means of being alerted to the problem, I can safely say we did not see it, or if we did, we missed it.

– By Ananth

Anomaly detection and log management; State of virtualization security and more

Anomaly Detection and Log Management: What we Can (and Can’t) Learn from the Financial Fraud Space

Have you ever been in a store with an important purchase, rolled up to the cash register and handed over your card only to have it denied? You scramble to think why: “Has my identity been stolen?” “Is there something wrong with the purchase approval network?” “Did I forget to pay my bill?” While all of the above are possible explanations – there’s a very common one you may not think of immediately: anomaly detection. Specifically, if the purchase you have in your hand doesn’t match up with your buying history, your bank might think it’s fraud and refuse the transaction. Even small changes in buying habits can trigger an alert. For example, credit card holders traveling outside the US for the first time may find their card declined in Paris on a European vacation. Buyers that rarely charge items over a couple of hundred dollars in value could find their first large ticket item (like a couch or a piece of jewelry) purchase blocked, at least temporarily.

While moderately annoying, this kind of card block can usually be cleared up quickly with a call to the card company’s customer service department. And over the years, the sensitivity of these alerting mechanisms has become so accurate that a card may be simultaneously used for legitimate purchases while fraudulent ones are being denied. Anomaly-based fraud detection for online banking has also made significant advances in the past few years. Using information like originating IP (Internet Protocol) address and time of login, many banks can now flag suspicious activity and block fraudulent transactions before they occur.

Anomaly Detection Complexities 
One of the early promises from anomaly detection solution providers was that misuse on the corporate network could be flagged as quickly as credit card companies call out financial fraud. While this is a very appealing notion for IT managers, the reality is significantly more complex. Why? Well, in large part because our IT network use is more complex than our financial activities. An average corporate IT user may access 10 or even 100 different systems, applications, and services during the course of a standard business week. And to achieve highly accurate anomaly awareness, solutions must analyze activity on each of those systems with the same level of contextual awareness as a single credit card or bank account.

On the surface, the IT network complexity argument may sound a little specious. Aren’t these all simply transactions? Can’t the same metrics and algorithms be applied? Yes and no. Interdependency and rapidly shifting responsibilities in an organization do impact the correlative ability and anomaly detection capabilities. Consider a retail company that is going through a merger and is simultaneously moving messaging hygiene, some storage, and CRM (customer relationship management) to a cloud computing model. Roles and access from the company being merged aren’t normalized to the existing organization’s roles and new rules for DLP (data leak prevention) and management of customer PII (personally identifiable information) must be applied to maintain compliance in the cloud.

Trying to learn what’s “normal” for all of these new models may take months, or even years, from an anomaly detection standpoint, while “traditional” access control methods (rules and policy controls) can be implemented immediately based on corporate requirements. As use of the new systems normalizes, and review of the anomaly monitoring is tuned to decrease false positives, the company will probably be able to achieve higher accuracy with the anomaly detection assessment of the log data.

What can we learn? 
Financial services anomaly awareness for fraud detection teaches us that when the scope and dependencies of the data being analyzed are kept to a relatively narrow space, high accuracy rates can be achieved. Contributing to the accuracy is the history of usage data that’s available to card companies and banks to make what’s “normal” and what’s “abnormal” decisions.

To leverage this success, use anomaly detection for well-defined and well-known purposes. An activity that changes frequently or has unpredictable types of access is probably a poor fit; going too broad with assessment criteria will lead to too many false positives. For example, alerting on any access to a critical server’s configuration file is too loosely defined – if there is a business need to update that file frequently, this would result in significant false positive alert activity. Tuning the anomaly trigger to a more tightly defined set of criteria would bring the alerts down to a manageable number. In this case, setting rules for approved roles (administrator access only), devices (only approved IP addresses) and time (only during normal business hours) will enable the anomaly detection engine to be far more accurate.
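As a minimal sketch of this kind of tuning (the field names, subnet, and hours below are invented for illustration, not taken from any particular product), an anomaly trigger restricted to approved roles, source addresses, and business hours might look like:

```python
from datetime import datetime
from ipaddress import ip_address, ip_network

# Hypothetical whitelist criteria for changes to a critical config file.
APPROVED_ROLES = {"administrator"}
APPROVED_NETWORK = ip_network("10.0.5.0/24")  # management subnet
BUSINESS_HOURS = range(8, 18)                 # 08:00-17:59 local time

def is_anomalous(event):
    """Flag a config-file access that falls outside any approved criterion."""
    return not (
        event["role"] in APPROVED_ROLES
        and ip_address(event["src_ip"]) in APPROVED_NETWORK
        and event["time"].hour in BUSINESS_HOURS
    )

# An admin edit from the management subnet during the day is normal...
ok = {"role": "administrator", "src_ip": "10.0.5.12",
      "time": datetime(2010, 1, 15, 10, 30)}
# ...the same edit at 3 a.m. from an unknown address is not.
bad = {"role": "administrator", "src_ip": "203.0.113.9",
       "time": datetime(2010, 1, 15, 3, 5)}
assert not is_anomalous(ok) and is_anomalous(bad)
```

Because all three criteria must hold at once, routine business updates pass silently while the same action out of role, network, or hours raises an alert.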

Another key area where anomaly detection can be of use in risk assessment is in the realm of device and system activity – independent of user or service activity. A DDoS (distributed denial of service) attack will usually show up clearly in the log files and can be keyed in to alerts generated. While a SYN flood as recorded in the log files of a web server may not indicate a user is attempting to perpetrate fraud, it is anomalous and, in most instances, indicates an attack or other unwanted activity in process.
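As a crude illustration of device-level anomaly detection (the log tuples and threshold here are made up), flagging a SYN flood can be as simple as counting SYN entries per source over a window and alerting when the count far exceeds the baseline:

```python
from collections import Counter

# Hypothetical parsed firewall log entries: (timestamp_sec, src_ip, tcp_flag)
events = [(t, "198.51.100.7", "SYN") for t in range(0, 60)] * 20  # 1200 SYNs/min
events += [(5, "192.0.2.10", "SYN"), (6, "192.0.2.10", "ACK")]   # normal handshake

SYN_THRESHOLD = 500  # SYNs per minute per source; tune to your own baseline

# Count SYN packets by source address over the one-minute window.
syn_counts = Counter(src for _, src, flag in events if flag == "SYN")
alerts = [src for src, n in syn_counts.items() if n > SYN_THRESHOLD]
assert alerts == ["198.51.100.7"]
```

A real engine would slide the window continuously and learn the threshold from history, but the principle is the same: machine behavior, unlike user behavior, has a narrow enough definition of “ordinary” to make this kind of trigger accurate.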

Behavior that’s out of the ordinary is often a red flag that fraud or misuse is occurring. Analyzing a single user’s credit card transaction history for fraud requires understanding of what’s “ordinary” in a relatively narrow assessment space. Understanding how a user interacts with a complex corporate IT ecosystem by parsing the log data from multiple services and systems is far less narrow. This doesn’t mean anomaly detection can’t be accomplished with intelligent log management detection solutions, but it does mean that in order to be successful with anomaly detection on corporate IT systems, companies must be able to narrow down the scope of the rule sets and look for flags in areas where “ordinary” can be determined with a level of certainty. Other assessment criteria such as rule and role based access control settings, policy enforcement, and compliance requirements are necessary components of a comprehensive log and event management solution. Carefully tuned anomaly detection, however, is also a critical part of the misuse puzzle.

Related resource: EventTracker’s Anomaly Detection module detects new and significant variations from normal operations, a baseline that can be configured by users to match activity patterns specific to their organizations.

Next month: Stay tuned for a new series on Log Management and Compliance by Anton Chuvakin

Industry News

Military still gives thumbs down to thumb drives
Despite relaxing the ban on using portable storage devices on Defense Department computer systems, it appears thumb drives will not return to the military services anytime soon.

Did you know? As the article states, “The problem with bans is that employees find ways around them resulting in an even worse cybersecurity posture”. A more effective way to prevent loss/theft of data via USB devices is with the Trust and Verify approach that EventTracker provides.

Hacking human gullibility with social penetration – we don’t need no stinking exploits

Security penetration testers rely plenty on attacks that exploit weaknesses in websites and servers, but their approach is better summed up by the famous phrase “There’s a sucker born every minute”.

Related resource: Protecting your critical information assets from gullible employees is as important as defending against disgruntled, malicious employees. While the avenues of data loss via insiders are plenty, knowing what to monitor can often prevent accidents from taking place – Read about the top 10 insider threats and how you can defend against them.

Prism Microsystems named finalist in Info Security Products Guide Global Excellence Awards 2010

Info Security Products Guide, a leading publication on security related products and technologies has named EventTracker a finalist in the 2010 Global Product Excellence Awards under the Security Information and Event Management (SIEM) category. These awards were launched to recognize cutting-edge, advanced IT security products that have the highest level of customer trust worldwide.

100 Log Management uses #60 The top 10 workstation reports that must be reviewed to improve security and prevent outages

In the conclusion of our three part series on monitoring workstations we look at the 10 reports that you should run and review to increase your overall security and prevent outages.

100 Log Management uses #59 – 6 items to monitor on workstations

In part 2 of our series on workstation monitoring we look at the 6 things that are in your best interest to monitor — the types of things that, if you proactively monitor them, will save you money by preventing operational and security problems. I would be very interested if any of you monitor other things that you feel would be more valuable. Hope you enjoy it.

100 Log Management uses #58 The why, how and what of monitoring logs on workstations

Today we are going to start a short series on the value of monitoring logs on Windows workstations. It is commonly agreed that log monitoring on servers is a best practice, but until recently the complexity and expense of log management on workstations made most people shy away. Log monitoring on the workstation is valuable, however, and easy as well, if you know what to look for. These next 3 blogs will tell you the why, how and what.

SQL injection leaves databases exposed; zero-day flaw responsible for Google hack

Turning log information into business intelligence with relationship mapping

Now that we’re past January, most of us have received all of our W2 and 1099 tax forms. We all know that it’s important to keep these forms until we’ve filed our taxes and most of us also keep the forms for seven years after filing in case there is a problem with a previous year’s filing. But how many of us keep those records past the seven year mark? Keeping too much data can be as problematic as not keeping records at all. One of the biggest problems with retention of too much information is that storage needs increase and it becomes difficult to parse through the existing data to find what’s most important.

The challenge of balancing information with intelligence is often referred to as a “signal to noise ratio” problem. When there is too much noise, the signal gets lost. Without proper management, log data collection can quickly turn into a classic “white noise” scenario. Worst case, everything is stored, there is little organization, and the utility of the business intelligence is lost in terabytes of unsorted log entries.

Finding the Signal in the Noise 
In order for log data collection to be of highest value to an organization, it needs to be filtered and parsed for business intelligence purposes. Consider a log aggregation system that captures failed logins. The raw log data shows the logins that have failed, but without contextual awareness and business intelligence filtering, the effectiveness of this information is impacted. Are the logins due to sleepy Monday morning fat fingers? Or are the failed logins an indication that an account, and possibly the organization, is under attack?

Without applying rules and intelligence to the analysis of the log data, it would be hard to determine the underlying cause and assess the potential risk to the organization. However, if additional information is parsed using correlation rules, the picture of what’s really going on becomes clearer. For example, did the login come from the user’s PC on the internal corporate network during normal business hours? Did the user access approved systems within their business role? If so, this probably indicates the failed login was user error rather than an attack. But if the login came from outside the company on the remote access system at 3 a.m. on a Saturday, it’s more likely to be a true attack scenario.
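A toy version of such a correlation rule (the field names and values are hypothetical, chosen only to mirror the scenario above) might classify a failed login by combining source zone, time, and role context:

```python
def classify_failed_login(event):
    """Illustrative correlation rule: likely user error vs. possible attack.
    Field names are invented for this sketch, not from any product."""
    internal = event["src_zone"] == "corp_lan"
    business_hours = 8 <= event["hour"] < 18 and event["weekday"] < 5
    in_role = event["target"] in event["approved_systems"]
    if internal and business_hours and in_role:
        return "likely user error"
    return "possible attack"

# Monday-morning typo on the internal network, within the user's role...
monday_typo = {"src_zone": "corp_lan", "hour": 9, "weekday": 0,
               "target": "crm", "approved_systems": {"crm", "mail"}}
# ...versus a remote-access failure at 3 a.m. on a Saturday.
weekend_vpn = {"src_zone": "remote_access", "hour": 3, "weekday": 5,
               "target": "crm", "approved_systems": {"crm", "mail"}}
assert classify_failed_login(monday_typo) == "likely user error"
assert classify_failed_login(weekend_vpn) == "possible attack"
```

The value comes from the combination: no single field settles the question, but together they approximate the analyst’s judgment.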

This is why the first step in turning mountains of log data into readable and useful business intelligence is to understand the business and to baseline normal activity. Keep in mind that what’s normal will vary from business to business and even from user role to user role. In the above login scenario, the same failed login looks far less suspicious if it comes from a remote worker on the weekend shift – in a different time zone.

Map out user flows and business processes before trying to implement correlation rules into the log management or SIEM. Understand which services and devices user roles require access to and any time or location related information associated with their required business activities. To ensure that the log data and event information is focused into an accurate picture of business risk and activity, write rules that can correlate flows between devices and users. Make sure that you are able to track what a user does from the point of login entry through the network as zones and servers are accessed, and even what activities are completed within applications and services.

Single sign-on and other identity aggregation solutions are useful for generating a picture of a specific user’s activity throughout a day or week. Although many services and servers require multiple logins, it’s possible to bring the disparate user IDs back together with rules in the log management or SIEM console.

Relationship Mapping 
Once the baseline of business flow and user activity has been determined, it’s possible to turn up the signal even louder by diagramming and understanding the unique relationships between devices, systems, users, and applications. This data can feed risk posture awareness reporting and link back into compliance assessments and analysis.

Consider an unpatched operating system that is running in a low security zone and houses PDF scans of publically available marketing brochures. This server may be considered a low security priority with limited monitoring in effect. What if a member of the development team, needing some extra processing power to test a new internal HR module, puts a VM on the server and links it to the development network? Now a high priority service is linked to the low security server and the relationship and risk posture has been altered in a way that may put the organization at risk.

Although it’s true that strong change management procedures may have prevented the unauthorized installation, it’s also true that when changes slip through the cracks, an intelligent log management or SIEM system can catch those changes quickly.

Questions to ask when implementing relationship mapping:

  •  What systems support what applications?
  •  What systems contain what data?
  •  What compliance mandates govern this data?
  •  Which systems and services are connected?
       –  Directly via APIs or other connectors
       –  Topographically
  •  Who needs to be informed of what (prioritization and remediation)?
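One minimal way to capture the answers to these questions (all system and team names below are invented for the example) is a simple relationship map that can be queried when assessing risk or compliance scope:

```python
# Illustrative relationship map; names are hypothetical.
relationship_map = {
    "web-01": {"applications": ["marketing-site"],
               "data": ["public brochures"],
               "mandates": [],
               "connected_to": ["dmz-switch"],
               "notify": ["web-ops"]},
    "hr-vm-01": {"applications": ["hr-module-test"],
                 "data": ["employee PII"],
                 "mandates": ["privacy"],
                 "connected_to": ["dev-network", "web-01"],
                 "notify": ["security", "hr-ops"]},
}

def hosts_touching(mandate):
    """Which systems inherit a compliance mandate, directly or via a link?"""
    tagged = {h for h, m in relationship_map.items() if mandate in m["mandates"]}
    linked = {peer for h in tagged
              for peer in relationship_map[h]["connected_to"]
              if peer in relationship_map}
    return tagged | linked

# The low-security web server now inherits privacy exposure via the HR VM.
assert hosts_touching("privacy") == {"hr-vm-01", "web-01"}
```

This mirrors the unpatched-server scenario above: the moment the development VM links to the marketing server, a mandate query shows the altered risk posture.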

Business Intelligence Use Case Walkthrough

Bringing the general concepts down to a very specific use case will help to illustrate what is meant by translating log data into business intelligence. In this scenario, adapted from a real-world example, we follow a celebrity as she checks into the hospital.

1. Celebrity actress/singer Lady Jen checks into a hospital in Los Angeles, CA

2. The hospital is a covered entity under HIPAA and is required to ensure that only approved, authorized staff view patient records

3. For HIPAA purposes, the hospital must also ensure patient records are not duplicated or distributed without proper approvals

4. Raw log data is aggregated and normalized – but without relationship mapping. This does not prove to the HIPAA auditor that only approved, authorized staff are accessing the patient records

5. For high security patients, access to their records is restricted to only certain terminals and user IDs

6. The tabloids get wind of Lady Jen’s hospitalization and offer staffers cash to report on her health status

7. One unscrupulous staffer accepts the offer and attempts to access Lady Jen’s record from a shared terminal

8. The staffer logs in with stolen credentials from Lady Jen’s attending physician

9. The log management system flags the login – although it came from a user approved for access, it did not come from a server/terminal approved for access

10. The log management system sends an alert to the security team, the unscrupulous staffer is located and fired, and Lady Jen’s privacy is protected
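The rule that catches the staffer in step 9 can be sketched in a few lines; the patient, user, and terminal names below are of course hypothetical:

```python
# Illustrative policy: access to a high-security patient's record is allowed
# only when BOTH the user and the terminal appear on the approved lists.
APPROVED = {"lady_jen": {"users": {"dr_smith"}, "terminals": {"ward7-term1"}}}

def check_access(patient, user, terminal):
    policy = APPROVED[patient]
    if user in policy["users"] and terminal in policy["terminals"]:
        return "allow"
    return "alert"  # notify the security team

# The attending physician at an approved terminal is fine...
assert check_access("lady_jen", "dr_smith", "ward7-term1") == "allow"
# ...but stolen physician credentials from a shared terminal trip the rule.
assert check_access("lady_jen", "dr_smith", "shared-lobby-term") == "alert"
```

Note that the user check alone would have passed the stolen credentials; it is the relationship between user and terminal that makes the detection work.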

Millions of unsorted log events can be too much of a good thing. Cut down on the noise and turn up the signal intelligence on log data by using relationship mapping, usage baselines and carefully written correlation rules.

Next Month: Why Anomaly Detection in Financial Fraud doesn’t work for IT/Log Mgmt Fraud – And what we can learn from it

Industry News

Critical infrastructure security a mixed bag, report finds
A new report from the Center for Strategic and International Studies highlights the financial damage of cyber-attacks on critical infrastructure, but also paints a picture of IT security that is in turns good and bad. Among the findings is that only one-third of executives reported their organization had policies restricting or prohibiting the use of USB sticks or removable media, which has become a popular attack vector for malware.

Did you know? Prohibiting the use of USB devices can lead to unhappy employees and slash productivity. There is now another effective way to prevent costly damage from data stolen on USB devices without taking drastic measures.

Data breaches get costlier
The average total cost of a data breach rose from $6.65 million in 2008 to $6.75 million in 2009. Ponemon Institute conducted the study and said that 2009 brought “more sophisticated criminal attacks that didn’t show up on our radar screen” the previous year. These malicious attacks often involved botnets and were carried out for reasons of financial gain.

Did you know? Traditional perimeter defense systems do not present a comprehensive defense, especially against the more sophisticated, targeted attacks that are currently being witnessed. A comprehensive SIEM solution can address a number of security concerns including insider theft, website attacks, brute-force attacks, external hacking, spyware, botnets and zero-day attacks.

Strong demand for full-featured SIEM drives 3rd consecutive year of double-digit growth for Prism Microsystems 
Despite a sluggish worldwide economy, Prism charted double-digit gains in annual sales, driven by strong demand across major verticals, increased government spending for IT security and compliance initiatives, and stiffer non-compliance penalties of the HITECH act for healthcare organizations. The company closed 2009 with over 120 new customers including Nintendo, the Salvation Army, the US Senate, MITRE and NASA.

Sustainable vs. Situational Values

I am often asked: if log management is so important to the modern IT department, how come more than 80% of the market that “should” have adopted it has not done so?

The cynic says that unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.

One reason, I think, is that earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome, even in this day and age, after endless punditry and episode after episode have clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times which quotes Dov Seidman, the C.E.O. of LRN who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back-in-the-day there were no power tools and you dug ditches with your bare hands outside when it was 40 below and tweets were for the birds…but…that was then and this is now.

Get Log Management, it’s a sustainable value.


100 Log Management uses #57 PCI Requirement XII

Today we conclude our journey through the PCI Standard with a quick look at Requirement 12. Requirement 12 documents the necessity to setup and maintain a policy for Information Security for employees and contractors. While this is mostly a documentation exercise it does have requirements for monitoring and alerting that log management can certainly help with.

5 cyber security myths, the importance of time synchronization, and more

Time won’t give me time: The importance of time synchronization for Log Management

Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take. Could it really be 3:24:57 PM in Sydney, 1:36:02 PM in Tokyo, and 11:30:18 PM in New York? Of course not; time zones are separated by full hours – not minutes and seconds. The clocks have become de-synchronized and are showing incorrect readings.

But while de-synchronized clocks at a hotel are a minor nuisance, de-synchronized clocks across distributed servers in a corporate network are a serious and sometimes risky headache. This is all the more apparent when log aggregation and SIEM tools are in use to visualize and correlate activities across geographically distributed networks. Without an accurate timestamp on the log files, these solutions are unable to re-create accurate sequencing patterns for proactive alerting and post-incident forensic purposes.

Think a few minutes or even seconds of log time isn’t important? Consider the famous hacking case recounted by Clifford Stoll in his 1990 real-life thriller, The Cuckoo’s Egg. Using log information, a 75-cent (USD) accounting error was traced back to 9 seconds of unaccounted computer usage. Log data and a series of impressive forensic and tracking techniques enabled Stoll to trace the attack back to Markus Hess in Hanover, Germany. Hess had been collecting information from US computers and selling it to the Soviet KGB. A remarkable take-down that started with a mere 9 seconds of unaccounted log data.

Needless to say, accurate synchronization of log file timestamps is a critical lynchpin in an effective log management and SIEM program. But how can organizations improve their time synchronization efforts?

Know what you have

If you don’t know what you’re tracking, it will be impossible to ensure all the log information on the targets is synchronized. First things first: start with a comprehensive inventory of systems, services, and applications in the log management/SIEM environment. Some devices and operating systems use a form of standardized time stamping format: for example, the popular syslog protocol, which is used by many Unix systems, routers, and firewalls, is an in-process IETF standard. The latest version of the protocol includes parameters that indicate if the log and system is time synchronized (isSynced) to a reliable external time source and if the synchronization is accurate (synAccuracy).

Other parameters to check for that can impact the accuracy of the synchronization process include the time zone of the device or system and the log time representation (24-hour clock or AM/PM format). Since not all logs follow the same exact format, it’s also important that the log parsing engine in use for aggregation and management is capable of identifying where in the log file the timestamp is recorded. Some engines have templates or connectors that automatically parse the file to locate the timestamp, and may also provide customizable scripts or graphical wizards where administrators can enter the parameters to pinpoint the correct location for timestamps in the log. This function is particularly useful when log management systems are collecting log data from applications and custom services which may not be using a standard log format.


Normalize the time

Once you know where the timestamp information is coming from (geography, time zone, system, application, and/or service) it’s time to employ normalization techniques within the log management system itself. If a log is being consumed from a device that is known to have a highly accurate and trustworthy external time source, the original timestamp in the log may be deemed acceptable. Keep in mind, however, that the log management engine may still need to normalize the time information to recreate a single meta-time for all the devices so that correlation rules can run effectively.

For example, consider a company with firewalls in their London, New York City, and San Jose offices. The log data from the firewalls is parsed by the engine, which alerts that a denial of service was detected at 6:45 pm, 1:45 pm, and 10:45 am local time on January 15th, 2010. For their local zones, these are the correct timestamps, but if the log management engine normalizes the geographic time into a single meta-time, or Coordinated Universal Time (UTC), it’s clear that all three firewalls were under attack at the same time. Another approach is to tune the time reporting in the devices’ log files to reflect the desired universal time at the correlation engine rather than the correct local time.
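A small sketch of this normalization, using fixed January (standard-time) offsets rather than a full time zone database, shows the three local alert times collapsing to one UTC instant:

```python
from datetime import datetime, timezone, timedelta

# Fixed standard-time offsets for mid-January; a production system would use
# a full tz database so daylight saving is handled correctly.
OFFSETS = {"London": 0, "New York": -5, "San Jose": -8}

def to_utc(local_dt, office):
    """Attach the office's offset to a naive local timestamp, convert to UTC."""
    tz = timezone(timedelta(hours=OFFSETS[office]))
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc)

alerts = [
    (datetime(2010, 1, 15, 18, 45), "London"),
    (datetime(2010, 1, 15, 13, 45), "New York"),
    (datetime(2010, 1, 15, 10, 45), "San Jose"),
]
meta_times = {to_utc(dt, office) for dt, office in alerts}
# Normalized to UTC, all three firewalls report one simultaneous attack.
assert len(meta_times) == 1
```

Three apparently unrelated local events become a single coordinated attack once they share a meta-time.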

For devices and logs that are not accurately synchronized with external time sources, the log management engine could provide its own normalization by tracking the time the log file information was received and time stamping it with an internal time value. This approach guarantees a single time source for the stamping, but accuracy can be impeded by delays in log transfer times and would be ineffective for organizations that batch transfer log information only a few times a day.

Trust the Source

Regardless of which kinds of normalization are used, reliability of the time source matters. During a criminal or forensic examination, the timestamps on your organization’s network may be compared to devices outside it. Because of this, you want to make sure the source you are using is as accurate as possible. One of the most common protocols in use for time synchronization is NTP (Network Time Protocol)3, which provides time information in UTC. Microsoft Windows systems implement NTP as WTS (Windows Time Service), and some atomic clocks provide data to the Internet for NTP synchronization. One example of this is the NIST Internet Time Service4.

There are some security concerns with NTP because it uses a stateless protocol for transport and is not authenticated. Also, there have been some incidents of denial of service attacks against NTP servers making them temporarily unavailable to supply time information. What can we do about that? Not much – despite the minor security concerns, NTP is the most widely used (and widely supported) protocol for network device time synchronization, so we can do our best to work around these issues. Consider adding extra monitoring and network segregation to authoritative time sources where possible.

All Together Now

When it comes to log management and alerting, the correct time is a must. Determine which devices and systems your log management system is getting inputs from, make sure the time information is accurate by synchronizing via NTP, and perform some kind of normalization on the information – either on the targets or within the log management engine itself. It’s a little tricky to make sure all log information has the correct and accurate time information, but the effort is time well spent.


1 The Cuckoo’s Egg, by Cliff Stoll, 1990, Pocket Books, ISBN-13: 978-1416507789

2 IETF, RFC 5424

3 The Network Time Protocol project and IETF, RFC 1305

4 NIST Internet Time Service (ITS)

Next Month: Turning Log Management into Business Intelligence with Relationship Mapping, by Diana Kelley

Industry News

Tech insight: Learn to love log analysis
Log analysis and log management are often considered dirty words to enterprises, unless they’re forced to adopt them for compliance reasons. It’s not that log analysis and management have a negative impact on the security posture of an organization — just the opposite. But their uses and technologies are regularly misunderstood, leading to the potential for security breaches going unnoticed for days, weeks, and sometimes months.

Heartland pays Amex $3.6M over 2008 data breach
Heartland Payment Systems will pay American Express $3.6 million to settle charges relating to the 2008 hacking of its payment system network. This is the first settlement Heartland has reached with a card brand since disclosing the incident in January of 2009.

Did you know? A security breach can result not just in substantial clean-up costs, but also in long-term damage to corporate reputation, sales, revenue, business relationships and partnerships. Read how log management solutions not only significantly reduce the risks associated with security breaches through proactive detection and remediation, but also generate significant business value.

Five myths about cyber security
While many understand the opportunities created through this shared global infrastructure, known as cyberspace, few Americans understand the threats presented in cyberspace, which regularly arise at individual, organizational and state (or societal) levels.  And these are not small threats…

Did you know? From sophisticated, targeted cyber attacks aimed at penetrating a company’s specific defenses to insider theft, Log Management solutions like EventTracker help detect and deter costly security breaches.

Ovum/Butler Group tech audit of EventTracker
In this 8-page technology audit, a leading analyst firm analyses EventTracker’s product offering, with a focus on functionality, operation, architecture and deployment.

Jack Stose joins Prism Microsystems as VP of Sales
Jack Stose joins Prism’s senior leadership team as VP of sales to help Prism take advantage of the tremendous opportunity presented by the growing adoption of virtualization and cloud computing, and the resultant demand for security solutions that can span both physical and virtual environments.

100 Log Management uses #56 PCI Requirements X and XI

Today we look at the grand-daddy of all logging requirements in PCI — Section 10 (specifically, Section 10.5) and Section 11. As with most of PCI, the requirements are fairly clear and it is hard to understand how someone could accomplish them without log management.