Time won’t give me time: The importance of time synchronization for Log Management
Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take. Could it really be 3:24:57 PM in Sydney, 1:36:02 PM in Tokyo, and 11:30:18 PM in New York? Of course not; time zones differ by whole hours (or, in a few regions, half or quarter hours), not by arbitrary minutes and seconds. The clocks have become de-synchronized and are showing incorrect readings.
But while de-synchronized clocks at a hotel are a minor nuisance, de-synchronized clocks across distributed servers in a corporate network are a serious, and sometimes costly, problem. This is all the more apparent when log aggregation and SIEM tools are used to visualize and correlate activities across geographically distributed networks. Without an accurate timestamp on the log files, these solutions are unable to re-create accurate sequencing patterns for proactive alerting and post-incident forensic purposes.
Think a few minutes or even seconds of log time isn’t important? Consider the famous hacking case recounted by Clifford Stoll in his 1990 real-life thriller, The Cuckoo’s Egg.1 Using log information, a 75 cent (USD) accounting error was traced back to 9 seconds of unaccounted computer usage. Log data and a series of impressive forensic and tracking techniques enabled Stoll to trace the attack back to Markus Hess in Hanover, Germany. Hess had been collecting information from US computers and selling the information to the Soviet KGB. A remarkable take-down that started with a mere 9 seconds of lost log data.
Needless to say, accurate synchronization of log file timestamps is a critical lynchpin in an effective log management and SIEM program. But how can organizations improve their time synchronization efforts?
Know what you have
If you don’t know what you’re tracking, it will be impossible to ensure all the log information on the targets is synchronized. First things first: start with a comprehensive inventory of systems, services, and applications in the log management/SIEM environment. Some devices and operating systems use a standardized time stamping format: for example, the popular syslog protocol, used by many Unix systems, routers, and firewalls, is defined by an IETF standards-track specification.2 The latest version of the protocol includes parameters that indicate whether the log and system are time synchronized (isSynced) to a reliable external time source and how accurate that synchronization is (syncAccuracy).
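As a sketch of what checking these parameters might look like, the snippet below pulls the timeQuality structured-data element out of an RFC 5424 syslog message and reports the sender’s synchronization claims. The sample message itself is hypothetical.

```python
import re

def time_quality(message: str) -> dict:
    """Return the timeQuality SD-PARAMs (tzKnown, isSynced, syncAccuracy)
    from an RFC 5424 syslog message, or an empty dict if absent."""
    sd = re.search(r'\[timeQuality ([^\]]*)\]', message)
    if not sd:
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', sd.group(1)))

# Hypothetical syslog message from a firewall:
msg = ('<165>1 2010-01-15T18:45:00.003Z fw01.example.com fwd - ID47 '
       '[timeQuality tzKnown="1" isSynced="1" syncAccuracy="60000"] '
       'denial of service detected')

q = time_quality(msg)
print(q["isSynced"])       # "1" means the sender claims synchronization
print(q["syncAccuracy"])   # worst-case clock error, in microseconds
```

A collector could use these fields to decide whether to trust the original timestamp or fall back to its own clock.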
Other parameters to check that can impact the accuracy of the synchronization process include the time zone of the device or system and the log’s time representation: 24-hour clock or AM/PM format. Since not all logs follow the same format, it’s also important that the log parsing engine used for aggregation and management can identify where in the log file the timestamp is recorded. Some engines have templates or connectors that automatically parse the file to locate the timestamp, and may also provide customizable scripts or graphical wizards where administrators can enter parameters to pinpoint the correct location of timestamps in the log. This function is particularly useful when log management systems are collecting log data from applications and custom services that may not use a standard log format.
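A minimal sketch of such a connector table: each custom log source is mapped to a regex that pinpoints the timestamp and a format string that parses it. The source names, patterns, and sample lines are made up for illustration.

```python
import re
from datetime import datetime

# Hypothetical per-source "connectors": (regex locating the timestamp,
# strptime format for parsing it). Note one source uses AM/PM, the other
# a 24-hour ISO-style clock.
CONNECTORS = {
    "legacy_app": (r'^(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2} [AP]M)',
                   '%m/%d/%Y %I:%M:%S %p'),
    "custom_svc": (r'ts=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})',
                   '%Y-%m-%dT%H:%M:%S'),
}

def extract_timestamp(source: str, line: str) -> datetime:
    """Locate and parse the timestamp in a log line from a known source."""
    pattern, fmt = CONNECTORS[source]
    match = re.search(pattern, line)
    if not match:
        raise ValueError(f"no timestamp found in line from {source}")
    return datetime.strptime(match.group(1), fmt)

print(extract_timestamp("legacy_app", "01/15/2010 01:45:00 PM INFO started"))
print(extract_timestamp("custom_svc", "ts=2010-01-15T18:45:00 level=warn"))
```

Real engines wrap this idea in wizards and templates, but the underlying job is the same: tell the parser where the timestamp lives and how to read it.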
Once you know where the timestamp information is coming from (geography, time zone, system, application, and/or service), it’s time to employ normalization techniques within the log management system itself. If a log is being consumed from a device that is known to have a highly accurate and trustworthy external time source, the original timestamp in the log may be deemed acceptable. Keep in mind, however, that the log management engine may still need to normalize the time information to create a single meta-time for all the devices so that correlation rules can run effectively.
For example, consider a company with firewalls in their London, New York City, and San Jose offices. The log data from the firewalls is parsed by the engine, which alerts that a denial of service was detected at 6:45 pm, 1:45 pm, and 10:45 am local time on January 15th, 2010. For their local zones, these are the correct timestamps, but if the log management engine normalizes the geographic time into a single meta-time, or Coordinated Universal Time (UTC), it’s clear that all three firewalls were under attack at the same time. Another approach is to tune the time reporting in the devices’ log files to reflect the desired universal time at the correlation engine rather than the correct local time.
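The normalization step above can be sketched as follows: convert each site’s local timestamp to UTC and the three events collapse to one instant. Fixed January offsets are assumed here (no DST handling): London UTC+0, New York UTC-5, San Jose UTC-8.

```python
from datetime import datetime, timedelta, timezone

# Assumed fixed mid-January UTC offsets for the three hypothetical sites.
OFFSETS = {"london": 0, "new_york": -5, "san_jose": -8}

def to_utc(site: str, local: datetime) -> datetime:
    """Normalize a naive local timestamp to UTC using the site's offset."""
    tz = timezone(timedelta(hours=OFFSETS[site]))
    return local.replace(tzinfo=tz).astimezone(timezone.utc)

events = [
    ("london",   datetime(2010, 1, 15, 18, 45)),  # 6:45 pm local
    ("new_york", datetime(2010, 1, 15, 13, 45)),  # 1:45 pm local
    ("san_jose", datetime(2010, 1, 15, 10, 45)),  # 10:45 am local
]

normalized = {to_utc(site, t) for site, t in events}
print(normalized)  # one element: all three attacks hit at 18:45 UTC
```

A production system would use a proper time zone database rather than hard-coded offsets, but the principle of a single meta-time is the same.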
For devices and logs that are not accurately synchronized with external time sources, the log management engine could provide its own normalization by tracking the time the log file information was received and time stamping it with an internal time value. This approach guarantees a single time source for the stamping, but accuracy can be impeded by delays in log transfer times and would be ineffective for organizations that batch transfer log information only a few times a day.
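A minimal sketch of that receive-time approach: the collector stamps each incoming line with its own UTC clock. The record layout is illustrative only, and as noted above, the stamp reflects arrival time, not event time.

```python
from datetime import datetime, timezone

def stamp_on_receipt(raw_line: str) -> dict:
    """Tag an incoming log line with the collector's own UTC receive time.
    Useful when the source's clock is untrusted; inaccurate when logs are
    batch-transferred long after the events occurred."""
    return {
        "received_utc": datetime.now(timezone.utc).isoformat(),
        "raw": raw_line,
    }

record = stamp_on_receipt("firewall: denial of service detected")
print(record["received_utc"])
```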
Trust the Source
Regardless of which kinds of normalization are used, reliability of the time source matters. During a criminal or forensic examination, the timestamps on your organization’s network may be compared to devices outside. Because of this, you want to make sure the source you are using is as accurate as possible. One of the most common protocols in use for time synchronization is NTP (Network Time Protocol)3, which provides time information in UTC. Microsoft Windows systems implement NTP in the Windows Time Service (W32Time), and some atomic clocks provide data to the Internet for NTP synchronization. One example of this is the NIST Internet Time Service4.
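One detail worth knowing when working with NTP data: NTP carries time as seconds since 1900-01-01 UTC, while Unix systems count from 1970-01-01 UTC, so converting between the two is a fixed 2,208,988,800-second shift. A small sketch:

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01):
# 70 years x 365 days + 17 leap days = 25,567 days x 86,400 s = 2,208,988,800 s.
NTP_TO_UNIX = 2208988800

def ntp_to_unix(ntp_seconds: float) -> float:
    """Convert an NTP timestamp (seconds since 1900 UTC) to Unix time."""
    return ntp_seconds - NTP_TO_UNIX

# An NTP timestamp of 3,471,292,800 is 2010-01-01 00:00:00 UTC.
print(ntp_to_unix(3471292800))  # 1262304000.0
```

A real NTP client would also parse the packet’s 64-bit fixed-point format and apply round-trip delay compensation; this shows only the epoch arithmetic.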
There are some security concerns with NTP: it runs over UDP, a connectionless transport, and is often deployed without authentication. Also, there have been some incidents of denial of service attacks against NTP servers, making them temporarily unavailable to supply time information. What can we do about that? Not much – despite the minor security concerns, NTP is the most widely used (and widely supported) protocol for network device time synchronization, so we can do our best to work around these issues. Consider adding extra monitoring and network segregation to authoritative time sources where possible.
All Together Now
When it comes to log management and alerting, the correct time is a must. Determine which devices and systems your log management system is getting inputs from, make sure the time information is accurate by synchronizing via NTP, and perform some kind of normalization on the information – either on the targets or within the log management engine itself. It’s a little tricky to make sure all log information has the correct and accurate time information, but the effort is time well spent.
1 The Cuckoo’s Egg, by Cliff Stoll, 1990, Pocket Books, ISBN-13: 978-1416507789 and http://en.wikipedia.org/wiki/The_Cuckoo%27s_Egg_(book)
2 IETF, RFC 5424, http://tools.ietf.org/html/rfc5424
3 The Network Time Protocol project at http://www.ntp.org/ and IETF, RFC 1305, http://www.ietf.org/rfc/rfc1305.txt
4 NIST Internet Time Service (ITS), http://tf.nist.gov/timefreq/service/its.htm
Next Month: Turning Log Management into Business Intelligence with Relationship Mapping, by Diana Kelley
Tech insight: Learn to love log analysis
Log analysis and log management are often considered dirty words to enterprises, unless they’re forced to adopt them for compliance reasons. It’s not that log analysis and management have a negative impact on the security posture of an organization — just the opposite. But their uses and technologies are regularly misunderstood, leading to the potential for security breaches going unnoticed for days, weeks, and sometimes months.
Heartland pays Amex $3.6M over 2008 data breach
Heartland Payment Systems will pay American Express $3.6 million to settle charges relating to the 2008 hacking of its payment system network. This is the first settlement Heartland has reached with a card brand since disclosing the incident in January of 2009.
Did you know? A security breach can result not only in substantial clean-up costs, but also in long-term damage to corporate reputation, sales, revenue, business relationships, and partnerships. Read how log management solutions not only significantly reduce the risks associated with security breaches through proactive detection and remediation, but also generate significant business value.
Five myths about cyber security
While many understand the opportunities created through this shared global infrastructure, known as cyberspace, few Americans understand the threats presented in cyberspace, which regularly arise at individual, organizational and state (or societal) levels. And these are not small threats…
Did you know? From sophisticated, targeted cyber attacks aimed at penetrating a company’s specific defenses to insider theft, Log Management solutions like EventTracker help detect and deter costly security breaches.
Ovum/Butler Group tech audit of EventTracker
In this 8-page technology audit, a leading analyst firm analyses EventTracker’s product offering, with a focus on functionality, operation, architecture and deployment.
Jack Stose joins Prism Microsystems as VP of Sales
Jack Stose joins Prism’s senior leadership team as VP of sales to help Prism take advantage of the tremendous opportunity presented by the growing adoption of virtualization and cloud computing, and the resultant demand for security solutions that can span both physical and virtual environments.