How to recession-proof IT; Get hard-dollar savings today

Performing well during a security crisis

“Every crisis offers you extra desired power.” – William Moulton Marston
Jasmine’s corollary: “Only if you perform well during that crisis.”

Crises will happen no matter how many precautions we take. The need to blame someone is a human desire, and it is easy to focus that blame on the crisis response team, because they are visible. Yet when teams perform well during a crisis they don’t merely avoid blame; they gain the potential to become powerful advisors or outright leaders. It’s even better if you can also demonstrate that lessons learned from past crises are making the current environment more secure. After all, the Justice League members wouldn’t be heroes if no one knew about their actions. But what does it mean to perform well in a crisis?

Not so long ago, performing well during an IT security crisis was about how rapidly the security administrator could shore up firewall breaches or deliver anti-virus patches. But times have changed; now performing well in a security crisis is a team effort – security, network, system, application and desktop folks are all involved. Team performance, however, is not simply the sum of the individual talents of team members – just ask the 2004 US Olympic basketball team, or the current Cincinnati Bengals for that matter.

Joking aside, I’m sure that if you look at every large-scale disaster you will find dozens, if not hundreds, of competent people working extremely hard to deal with the situation. Yet their individual efforts are often overwhelmed by the complexity of the situation and the lack of coordination (the broad brush of 20-20 hindsight doesn’t help either). IT security situations are no different. A diverse team of people must perform well during the crisis to protect not only corporate infrastructure and business intelligence, but the “digital lives” of their customers as well. Which raises the question: how can IT increase its odds of performing well in these stressful situations? As far as I can tell, the basics involve:

1. Understanding what is happening

This starts with real-time collection and correlation of subtle configuration changes or seemingly disconnected events that span systems, applications, and network infrastructure. It’s likely that the next big security crisis will be a multi-stage attack designed by organizations employing well-trained programmers (see the discussion in the Symantec Internet Security Threat Report, published in April 2008). Since enterprise environments are getting more complex and more dynamic, it is difficult to rapidly investigate cause and effect during the crisis without some level of automated analysis. The automation must sift through large volumes of semi-structured IT data and produce customized reporting that allows each team member to understand the significance of the situation so they can act effectively.
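
The correlation idea above can be sketched in a few lines. This is a minimal illustration, not any product’s actual engine; the event records and thresholds are made up for the example. The heuristic: a multi-stage attack often shows up as otherwise unremarkable events from several different sources clustering on one host inside a short window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event records: (timestamp, source, host, event_type).
EVENTS = [
    (datetime(2008, 11, 3, 2, 14), "firewall", "web01", "port_scan"),
    (datetime(2008, 11, 3, 2, 16), "system",   "web01", "config_change"),
    (datetime(2008, 11, 3, 2, 19), "app",      "web01", "failed_logins"),
    (datetime(2008, 11, 3, 9, 30), "system",   "db02",  "config_change"),
]

def correlate(events, window=timedelta(minutes=10), min_sources=3):
    """Flag hosts where events from several distinct sources
    cluster inside a single time window."""
    by_host = defaultdict(list)
    for ts, source, host, etype in events:
        by_host[host].append((ts, source, etype))
    alerts = []
    for host, evts in by_host.items():
        evts.sort()
        for i, (ts, _, _) in enumerate(evts):
            in_window = [e for e in evts[i:] if e[0] - ts <= window]
            sources = {e[1] for e in in_window}
            if len(sources) >= min_sources:
                alerts.append((host, ts, sorted(sources)))
                break
    return alerts

print(correlate(EVENTS))
# web01 is flagged (three sources within ten minutes); db02 is not.
```

A real implementation would of course stream events rather than batch them, but the grouping-by-host-and-window logic is the core of the “seemingly disconnected events” analysis.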

2. Having well known contingency configurations and plans

You can work with various experts to develop responses to different scenarios (rerouting traffic, isolating systems, disabling accounts, etc.). Luckily, computing contingency plans are more readily automated than any other type of disaster planning. Automation means that the plans can be executed the same way every time a particular situation occurs. However, this automation can’t be the ‘set it and forget it’ type. Enterprise computing environments and IT staff change too frequently. The automation itself needs to be reviewed and updated regularly to accommodate infrastructure, application, and regulatory changes. The last thing you need is for the automation to violate a compliance policy. New IT employees also need education about these automated responses. The second-to-last thing you need is a clueless admin mistaking the automated response for the attack itself.
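
A contingency playbook with a policy check and an audit trail might be sketched like this. Everything here is illustrative – the scenario names, step names, and deny-list are stand-ins, not any real product’s configuration – but it shows the two points above: the automation consults policy before acting, and it logs what it did so staff can tell the response from the attack.

```python
# Hypothetical playbook: scenarios map to ordered response steps.
PLAYBOOK = {
    "worm_outbreak": ["isolate_segment", "disable_accounts", "snapshot_logs"],
    "data_exfil":    ["block_egress", "snapshot_logs"],
}

# Steps currently forbidden by compliance policy (e.g. a regulation
# requires those accounts to stay enabled). Names are made up.
POLICY_DENY = {"disable_accounts"}

def run_playbook(scenario, audit_trail):
    """Execute a scenario's steps, skipping policy-denied ones,
    and record every decision in the audit trail."""
    for step in PLAYBOOK.get(scenario, []):
        if step in POLICY_DENY:
            audit_trail.append(("SKIPPED_BY_POLICY", step))
            continue
        # ... the actual remediation call would go here ...
        audit_trail.append(("EXECUTED", step))
    return audit_trail

trail = run_playbook("worm_outbreak", [])
print(trail)
```

The review-and-update requirement in the text maps directly onto keeping `PLAYBOOK` and `POLICY_DENY` current as the environment and regulations change.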

Contingency planning is not only about to-do lists. It is also about decision-making and responsibilities. There are lots of people who can make good decisions under pressure. But a worse disaster will ensue if every one of them goes off and does their own thing, in their own way, without telling anyone. This will happen every time if the crisis management team is poorly defined and no one has established:

  • who on that team is responsible for specific duties and decisions,
  • how people on that team interact with each other and with related organizations,
  • and, most importantly, how information flows into, within, and out of that team.

If critical information doesn’t reach the right people, in the right way, at the right time, then you are in for many, many sleepless nights of preventable remediation work. It pays to clearly define the team, their responsibilities and information needs first – and then set up the emergency information consoles, reports, etc. that each team member needs.

3. Practicing

While I think the various YouTube creations based on Allen Iverson’s practice rant are hilarious, I also know that practicing for a crisis is important. First, when people don’t know what they are supposed to do, they waste a lot of time figuring out what they should be doing. They are usually doing this with inaccurate or incomplete information, which means they will get it right only if they are very, very lucky.

Second, practice helps everyone understand that the crisis response plan is not a blame game in disguise. Instead, it is an opportunity to get people to trust the plan and the people involved. This is particularly important in large enterprises because there are more people involved, and those people are often not in the habit of collaborating. It is hard to work with someone new under stressful conditions because no one knows what they’ll do. Practice overcomes that.

4. Auditing everything and then some

You can never go wrong documenting everything: what is part of the plan, what shows the ongoing efforts to comply with any related regulations, what happens during practices, and what happens during the actual crisis. Remember, you’ll still need to demonstrate that your crisis efforts are compliant with various regulations. Auditors will want some visibility into what, where, why, and how financial systems or private information were handled. They’ll also take a fine-toothed comb to your compliance documentation. Lack of evidence (or the inability to find it in a sea of poorly archived log data) is the quickest path to nasty fines.

5. Dealing with the aftermath

Most technical folks assume this is mostly about in-depth forensic analysis: determining how to undo any damage that occurred, and determining whether your strategic security plan needs tweaking or whether a tactical prevention (such as changing an operational policy, adding a new configuration check, or implementing a new event analysis rule) will do. While all of this is absolutely necessary, it is only part of the story.

The other part of the aftermath is dealing with the flood of misinformation that will be disseminated about the situation. Blogs, posted comments, and poorly worded customer notifications can add up to chaos. And good luck if you find yourself setting up a customer call center without a pre-negotiated contract; or if you set up a ‘crisis info’ website that promptly crashes from zillions of hits; or if you are dragged to a press conference without being able to explain everything, from why it happened to the extent of the damage, in non-technical terms.

But really, things don’t have to go this way. That’s what crisis planning, solutions and practice are for. Real IT executives have lived through these things and still have their jobs. Hopefully we can all be as effective.

Jasmine Noel is founder and partner of Ptak, Noel & Associates. With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion. Send any comments, questions or rants to

Industry News

Looking for hard dollar savings today? Consider SIEM technology. It not only reduces the risk of costly breaches and non-compliance, but provides tangible cost savings

Credit-card security standard issued after much debate
The Payment Card Industry Security Standards Council, the organization that sets technical requirements for processing credit and debit-cards, has issued revised security rules. The council also indicated that next year it will focus on new guidelines for end-to-end encryption, payment machines and virtualization.

Did you know? EventTracker enables compliance with PCI sections 10 and 11 with its integrated Log Management and Change Monitoring solution

Data breaches reach record high
The hits keep coming when it comes to U.S. data breaches. The Identity Theft Resource Center reports data breaches in 2008 have already exceeded the record breaches of 2007. Enterprise breaches continue to lead the pack with breaches tied to mobile data topping the incident reports.

Did you know? EventTracker helps safeguard critical data, whether at rest, in use or in motion

Cool Tools and Tips

Understanding Change Management
Understand how Change Management can help you:

  • Analyze change data to quickly identify and back-out faulty changes.
  • Identify new viruses before your Anti-Virus provider comes up with a patch.
  • Have insurance when installing new software or making major configuration changes.
  • Enhance security by having detailed information about all changes and accesses.
  • Reduce dependence on human input to diagnose and resolve system/application problems.

MSSP / SaaS / Cloud Computing – Confused? I know I am

There is a lot of discussion around Security MSSPs, SaaS (Security as a Service) and Cloud Computing these days. I always felt I had a pretty good handle on MSSPs and SaaS. The way I look at it, you tend to outsource the entire task to Security MSSPs. If you outsource your firewall security, for instance, you generally have no one on staff that worries about firewall logs and you count on your MSSP partner to keep you secure – at least with regards to the firewall. The MSSP collects, stores and reviews the logs. With SaaS, using the same firewall example above, you outsource the delivery of the capability — the mechanics of the collection and storage tasks and the software and hardware that enable it, but you still have IT personnel on staff that are responsible for the firewall security. These guys review the logs, run the reports etc. This general definition is the same for any security task, whether it is email security, firewall or SIEM.

OK, so far, so good. This is all pretty simple.

Then you add Cloud Computing and everything gets a little, well, cloudy. People start to interchange concepts freely, and in fact when you talk to somebody about cloud computing and what it means to them, it is often completely different than what you thought cloud computing to be. I always try to ask – Do you mean security IN the cloud, i.e. using an external provider to manage some part of the collection, storage and analysis of your security data (If so go to SaaS or MSSP)? Or do you mean security OF the cloud — the collection/management of security information from corporate applications that are delivered via SaaS (Software as a Service, think Salesforce)?

The latter case really has nothing to do with either Security SaaS or MSSP, since you could be collecting the data from applications such as Salesforce into a security solution you own and host. The problem is an entirely different one. Think about how to collect and correlate data from applications you have no control over, or how these outsourced applications affect your compliance requirements. Most often, compliance regulations require you to review access to certain types of critical data. How do you do that when the assets are not under your control? Do you simply trust that the service provider is doing it right? And what will your auditor do when they show up to do an audit? How do you guarantee chain of custody of the log data when you have no control over how, when, and where it was created? Suddenly a whole lot of questions pop up for which there appear to be no easy answers.
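
The chain-of-custody question at least has a well-known technical ingredient: hash chaining, where each stored log record carries a digest of the previous record, so any after-the-fact edit breaks the chain. The sketch below illustrates the idea only; a real deployment would also need trusted timestamps and secure key handling, and none of this by itself answers the question of records created outside your control.

```python
import hashlib

def append_record(chain, message):
    """Append a log record whose digest covers the previous digest,
    linking every record to all records before it."""
    prev = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append((message, digest))
    return chain

def verify(chain):
    """Recompute the chain; any tampered record breaks verification."""
    prev = "0" * 64
    for message, digest in chain:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

chain = []
for line in ["user alice login", "table customers read", "user alice logout"]:
    append_record(chain, line)
assert verify(chain)

# Tampering with a stored record is now detectable:
chain[1] = ("table customers dropped", chain[1][1])
assert not verify(chain)
```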

So here are a few observations:

  • Most compliance standards do not envision compliance in a world of cloud computing.
  • Security OF the cloud is undefined.
  • Compliance standards are reaching further down into more modest-sized companies, and SaaS for enterprise applications is becoming more appealing to enterprises of all sizes.
  • When people think about cloud computing, they tend to equate “it is in the cloud” with “I have no responsibility”, and when critical data and apps migrate to the cloud that is not going to be acceptable.

The combination of the above is very likely going to become a bigger and bigger issue, and if not addressed will prevent the adoption of cloud computing.

Steve Lafferty

Outsource? Build? Buy?

So you decided that it’s time to manage your security information. Your trigger was probably one of: a) you got handed a directive from on high (“The company shall be fully compliant with applicable regulation [insert one]: PCI/HIPAA/SOX/GLBA/FISMA/Basel/…”), or b) you had a security incident and realized, OMG, we really need to keep those logs.

Choice: Build
Upside: It’ll be perfect, it’ll be cheap, it’ll be fun
Downside: Who will maintain, extend, support (me?), how will it scale?

Choice: Outsource
Upside: Don’t need the hardware or staff, pay-go, someone else will deal with the issues
Downside: Really? Someone else will deal with the issues? How do you get access to your info? What is the SLA?

Choice: Buy
Upside: Get a solution now, upgrades happen, you have someone to blame
Downside: You still have to learn/use it, is the vendor stable?

What is the best choice?
Well, how generic are your requirements?
What sort of resources can you apply to this task?
How comfortable are you with IT? [From ‘necessary evil’…to… ‘We are IT!’]
What sort of log volume and sources do you have?

Outsource if you have – generic requirements, limited sources/volume and low IT skills

Build if you have – programming skills, fixed requirements, limited sources/volume

Buy if you have – varied (but standard) sources, good IT skills, moderate-high volume

As Pat Riley says, “Look for your choices, pick the best one, then go with it.”


Data leakage and the end of the world

Most of the time when IT folk talk about data leakage they mean employees emailing sensitive documents to Gmail accounts, exposing the company through peer-to-peer networks, or the burgeoning use of social networking services. CNet News reports that “Nearly 40 percent of IT staff at mid to large companies in North America said they believed that unintentional leaks by employees are a bigger threat to the security of their data than spyware or malicious software…” A Government Technology article quotes: “According to research, 70 percent of businesses are concerned about sensitive material falling into the wrong hands as a result of data leakage via e-mail.”

These concerns are serious and far reaching in impact.  Consider all the firms that had to notify clients that names, birth dates, and social security numbers were potentially exposed in one way or another.  In a recent article, ComputerWorld reported “the emergence of several data aggregators whose sole purpose seems to be collecting information on P2P networks for their own illegal uses or to resell to other miscreants.”  As a consumer of multiple online services, I’m not exactly encouraged by these stories, but I’ll sign up for credit monitoring and hope for the best.

As an industry analyst in the IT management space, I have a sneaking suspicion that the data leakage situation is on the verge of becoming a much broader-based IT issue. The reason is that IT’s logging, auditing and reporting processes create the potential for data leakage as well.

Think about all the sensitive technical information that is routinely captured in computing logs. Database passwords, data schemas, and system configuration information are the most obvious – and the easiest to leverage in malicious ways by those with the technical know-how, be they external criminals or unstable employees. Yet most non-technical people with ‘incidental’ access to this information, probably buried in the details of automatically generated audit reports, would have no clue that this information is either useful or sensitive. I can just see this information flitting about an enterprise because people need to see a summary chart but it’s easier to just forward the original email with the whole file. Ninety-nine percent of the time nothing comes of this, but then there is that one bad apple that takes advantage of weak controls over good processes.

Another thing that bothers me is that the public disclosure involved when our judicial system tries and punishes these bad apples can actually add to the problem. Consider the current case against Terry Childs, who locked up the San Francisco networks. According to an InfoWorld article, a list of VPN group names, passwords, and associated subnets was entered into evidence without editing or redaction. Having this information is the first step toward gaining illicit network access. While the network may not be in immediate danger, the disclosure will create a ton of extra work for the remaining network administrators, as they will have to reconfigure their VPN clients. While this reconfiguration is happening, all VPN access will probably be suspended, which means thousands of people will not be able to work from home, which means thousands more cars on the road every day, which means longer traffic jams, which means more greenhouse gases released into the air, which will hasten global warming and bring about the destruction of life on earth!!!!!

Obviously I’m diving into the deep end of the crazy pool to make a point. Let me get back to reality.

The reality is that non-technical people with the best intentions can open gaping security and privacy holes by releasing technical data discovered through investigative auditing into the public domain.  Those holes have consequences that businesses, organizations and individuals would be best off avoiding.  Avoiding these consequences after this technical data is public generates a lot of extra work for IT staff, who are already overworked and worried about looming budget cuts.

We are just at the beginning of this trend. A wider range of non-technical people are going to make use of IT data for a variety of tasks beyond compliance auditing and legal investigation. As more business services and business models include online strategies, more IT data will be used for decision making, product development, marketing and so on. For example, there are business intelligence analysis solutions that directly interface with and leverage IT log data and other IT-based data sources. The users of these solutions are marketing analysts and business managers who wouldn’t intentionally put the company in harm’s way, but could if their laptops are lost or stolen.

As the use of IT-based data becomes more pervasive it becomes important to think about how to proactively prevent IT data leakage.  What you can do now is make sure to cover some basics, starting with your auditing processes. For example, if IT data (logs, events, analysis, etc) is continually collected in a central repository for an extended period of time for compliance reasons, then it must also be securely stored for that time.  This means the data must be protected with layered administrative controls and encryption.  The next step could be integration with external authentication systems that allow centralized management of users and privileges according to corporate policies from outside of the compliance tool.  Similarly, if IT data will be leveraged by multiple business analysis tools, then log scraping capabilities may be essential.  Log scraping would automatically identify sensitive content, such as system passwords that are output to logs during audited provisioning and patching jobs, and remove the content from centrally-stored logs.  Auditors would still be able to identify who did what and when to the target systems, however, there will be no leaking of plain-text system passwords or database schemas into the public domain.
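
The log-scraping idea above reduces, at its simplest, to pattern-based redaction before logs reach the central store. Here is a minimal sketch; the patterns and the sample log line are illustrative only, and real provisioning logs would need patterns tuned to the specific tools in use.

```python
import re

# Patterns that suggest a secret follows (illustrative, not exhaustive).
SENSITIVE = [
    re.compile(r"(password\s*[=:]\s*)\S+", re.IGNORECASE),
    re.compile(r"(pwd\s*[=:]\s*)\S+", re.IGNORECASE),
]

def scrub(line):
    """Replace likely secrets with a marker, keeping the rest of the
    record intact so auditors can still see who did what and when."""
    for pattern in SENSITIVE:
        line = pattern.sub(r"\1[REDACTED]", line)
    return line

raw = "2008-09-12 03:11 provision db07 password=s3cr3t! by jdoe"
print(scrub(raw))
# The who/what/when survives for auditors; the secret does not.
```

The design point is exactly the one in the paragraph above: auditability is preserved (actor, action, target, timestamp) while the plain-text credential never lands in centrally-stored logs.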

And while you’re at it, consider using your analysis solution to set up an automated action that finds those pesky P2P file-sharing programs, uninstalls them, and emails the installer a P2P data-leakage horror story. After all, isn’t teaching by repetition a time-honored practice?

Jasmine Noel is founder and partner of Ptak, Noel & Associates.  With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion.  Send any comments, questions or rants to

Industry News

Wider implications of the Red Hat breach
Reports of data losses and system breaches are almost becoming passé but from time to time events happen that take on a life of their own and have effects far beyond what the initial breach would normally represent. Late last week there was an announcement that key servers belonging to both the Fedora and Red Hat Linux distributions were compromised. With this breach they join the ranks of Ubuntu, Debian and Gentoo as Linux distributions that have suffered severe server breaches.

Infamous Phishing gang joins stealthy botnet
The infamous Rock Phish gang appears to have moved its operations to a notoriously stealthy botnet in an effort to more aggressively spread and expand its phishing attacks.

Did you know? EventTracker helps companies change their security strategy from reactive to proactive to withstand the explosion of emerging threats and new attack vectors.

Sound compliance policies, practices reduce legal costs
How much you spend on legal costs does not depend so much on the size of your organization, but, rather, on the policies, processes and practices you have in place, according to results of a survey of 235 U.S. firms released today by the IT Policy Compliance Group.

Did you know? SIEM (Security Information and Event Log Management) is a best practice for satisfying multiple regulatory standards while improving security.

Cool Tools and Tips

How to detect the 5 code-red security threats to Windows Servers
This document identifies and describes the 5 most significant security threats to Windows servers, so they can be addressed and corrected by IT personnel in the most efficient manner. Critical alert notifications and an effective resolution strategy will reduce IT costs, while increasing service availability and enhancing the security of your enterprise.

Featured Whitepaper

Managing USB Mass Storage Devices – Best Practices
In the last few years, portable, high-capacity USB storage devices like thumb/flash drives have become increasingly prevalent in corporations, and devices such as cell phones, PDAs, and iPods can all serve as USB storage devices. These devices are incredible productivity aids – large files can be moved from computer to computer without the need to maintain shared drives, or even worry about file sizes preventing email delivery. Personnel are also able to take files home to work on home computers off hours. The issue is that all these advantages introduce significant security vulnerabilities at the same time.

This White Paper discusses how you can take advantage of the power of these devices without leaving your operation wide open to the misappropriation of critical company information. Until now the choice has been to either shut down USB devices – either in Active Directory or through more extreme methods (the “glue in the USB port” trick comes to mind) – or simply trust every user to do the right thing. This paper introduces a third way that Prism Microsystems calls “Trust but Verify,” which is made possible by EventTracker’s advanced USB monitoring capability.


Watch this webinar that demonstrates how EventTracker provides advanced monitoring and analysis of the usage of USB devices including:

  • Track insert/removal
  • Record all activity (files written to the device)
  • Disable according to predefined policy

Compliance: Did you get the (Pinto) Memo?

The Ford Pinto was a subcompact manufactured by Ford (introduced on 9/11/70 — another infamous coincidence?). It became a focus of a major scandal when it was alleged that the car’s design allowed its fuel tank to be easily damaged in the event of a rear-end collision, which sometimes resulted in deadly fires and explosions. Ford was aware of this design flaw but allegedly refused to pay what was characterized as the minimal expense of a redesign. Instead, it was argued, Ford decided it would be cheaper to pay off possible lawsuits for resulting deaths. The resulting liability case produced a judicial opinion that is a staple of remedy courses in American law schools.

What brought this on? Well, a recent conversation with a healthcare institution went something like this:

Us: Are you required to comply with HIPAA?

Them: Well, I suppose…yes

Us: So how do you demonstrate compliance?

Them: Well, we’ve never been audited and don’t know anyone that has

Us: So you don’t have a solution in place for this?

Them: Not really…but if they ever come knocking, I’ll pull some reports and wiggle out of it

Us: But there is a better, much better way with all sorts of upside

Them: Yeah, yeah whatever…how much did you say this “better” way costs?

Us: Paltry sum

Them: Well why should I bother? A) I don’t know anyone that has been audited. B) I’ve got better uses for the money in these tough times. C) If they come knocking, I’ll plead ignorance and ask for “reasonable time” to demonstrate compliance. D) In any case, if I wait long enough Microsoft and Cisco will probably solve this for me in the next release.

Us: Heavy sigh

Sadly… none of this is true, and there is overwhelming evidence of that.

Regulations are not intended to be punitive, of course, and in reality implementing log management provides positive ROI.

– Ananth

Hot virtualization and cold compliance; New EventTracker 6.2 and more

Hot server virtualization and cold compliance

Without a doubt, server virtualization is a hot technology.  NetworkWorld reported: “More than 40% of respondents listed consolidation as a high priority for the next year, and just under 40% said virtualization is more directly on their radar.”  They also reported that server virtualization remains one of IT’s top initiatives even as IT executives are bracing themselves for potential spending cuts.  Another survey of 100 US companies shows 60% of the respondents are currently using virtualization in production to support non-mission-critical business services.  In other words, they are using it in a “production sandbox” before deploying it on a large scale.

Server virtualization is hot because surveys such as the one above report cost reduction, improved disaster recovery, faster provisioning, and business flexibility benefits from virtualization projects.  These benefits are not surprising because server virtualization gives system administrators enormous flexibility in deploying server stacks (see note below) at will.  This is great for putting multiple server stacks on a single physical server.  Consolidating servers lowers costs because it drives higher resource utilization, which means less capacity twiddling its thumbs while waiting for a large workload to show up.  It is also great for copying production servers to disaster recovery facilities and for adding capacity for seasonal demand or demand driven by splashy marketing events.  IT can provision/copy/reconfigure a server stack in 30 minutes or less, instead of the weeks previously required.

All of this is great stuff…but there is a compliance catch.

Datacenter changes have long been the enemy of configuration control, security, and compliance reporting.  The more things change, the more difficult it is to manually manage, track and report those changes.  Since server virtualization greatly simplifies the adding, moving and changing of server stacks, it is only a matter of time before governance and compliance issues arise.  It is unlikely that enterprises will dodge these issues as the research also shows that 58% of the respondents plan to use virtualization to support their accounting and finance business services.

Early adopters of virtualization have already identified problems with server sprawl (i.e. server stacks are so easy to deploy that no one de-provisions them, leaving hundreds of unused virtual servers lounging around in the datacenter) and difficulty getting a consistent view of server performance and utilization. Virtualization will also fundamentally change basic system management tasks such as patching and identifying malware signatures, since there is no longer a direct link between the virtual application stacks and the physical hardware/OS on which they run.  All of this demands meticulous configuration control, auditing and reporting processes and solutions.  Yet many enterprises are giving compliance concerns the cold shoulder.  Only 24% of the survey respondents listed governance as a top challenge to virtualization success.  It seems that many enterprises are poised to fall into the mode of “waiting until I’ve been shot at before I’ll wear my Kevlar vest.”

So how do you make sure that your virtualization projects will be different?

The short answer is: ensure that your control and compliance processes and solutions are able to keep up with the deluge of manual, semi-automated and automated changes that server virtualization will unleash.

The long answer includes:

1) Simplifying communication between different groups.  Applications, systems, network, and security managers need to know what is happening so they can do their jobs effectively.  For example, virtualization topped the list of emerging technologies creating monitoring challenges for network engineers attending InterOp.  The larger your enterprise, the bigger that group of managers becomes, and the more important it becomes to have audited data readily available for that diverse group of IT managers.  For example, if a server stack is automatically deployed (in response to predefined application performance conditions) the other management solutions should know about it instantly.

2) Implementing de-provisioning policies with security and auditing components.  The first step to implementing de-provisioning policies is to simply create de-provisioning policies as part of the initial provisioning process.  By doing so, you can head off the virtualization sprawl before it happens.  These policies can be as simple as “check with Bob the Business Manager every 90 days” or as complex as “de-provision a server if cluster utilization falls to 70% during peak power cost periods.”   The second step is to create automated auditing and reporting to check for obvious issues.  For example, if a server stack is no longer supposed to exist but someone has logged on to the system, then you know you have a problem.
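
The “step two” audit can be expressed very simply: a logon recorded after a server’s scheduled de-provisioning date means something is wrong. The sketch below uses made-up data shapes purely for illustration; in practice the retirement dates would come from the provisioning system and the logons from the central log store.

```python
from datetime import date

# Hypothetical inventory: host -> date by which it should be gone.
RETIRE_BY = {
    "vm-web-17": date(2008, 6, 30),
    "vm-rpt-03": date(2009, 1, 31),
}
# Hypothetical logon records from the log store: (host, date, user).
LOGONS = [
    ("vm-web-17", date(2008, 9, 2), "jsmith"),   # after retirement date
    ("vm-rpt-03", date(2008, 9, 2), "akumar"),   # still legitimate
]

def audit(retire_by, logons):
    """Flag logons to servers that should no longer exist."""
    issues = []
    for host, when, user in logons:
        deadline = retire_by.get(host)
        if deadline and when > deadline:
            issues.append(f"{user} logged on to {host} on {when}, "
                          f"which was due for de-provisioning by {deadline}")
    return issues

for issue in audit(RETIRE_BY, LOGONS):
    print(issue)
```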

3) Start thinking about how to create and integrate a “capacity timeline” into compliance reports or as a response to auditor requests.  For example, if someone asks IT for “all compliance events related to application X during June 2007,” and application X is deployed as a cluster of virtual systems, how easy will it be to report that from June 1-15 the cluster had three servers with these logs/events, and that from June 16-30 the cluster had five servers with those logs/events?  While I’ve yet to see a specific case requiring that type of reporting, my fevered imagination can see a future smart-aleck lawyer with a class-action identity theft case asking an enterprise IT organization to prove a negative with their log data and compliance reports.  The last thing you’ll want is to pull all your IT staff to manually jigger something together in the week the judge gives you to hand over the documents.
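
A capacity timeline is straightforward to reconstruct if provisioning events are already in the log store. The sketch below uses hypothetical event records shaped to match the June 2007 example above; the reconstruction logic (replay provision/de-provision events up to a given day) is the point.

```python
from datetime import date

# Hypothetical audited events: (date, action, host).
EVENTS = [
    (date(2007, 5, 20), "provision",   "appx-vm1"),
    (date(2007, 5, 20), "provision",   "appx-vm2"),
    (date(2007, 5, 20), "provision",   "appx-vm3"),
    (date(2007, 6, 16), "provision",   "appx-vm4"),
    (date(2007, 6, 16), "provision",   "appx-vm5"),
    (date(2007, 7, 2),  "deprovision", "appx-vm4"),
]

def members_on(events, day):
    """Replay events in date order to find which servers were in the
    cluster on a given day."""
    active = set()
    for when, action, host in sorted(events):
        if when > day:
            break
        if action == "provision":
            active.add(host)
        else:
            active.discard(host)
    return sorted(active)

print(len(members_on(EVENTS, date(2007, 6, 10))))   # three servers
print(len(members_on(EVENTS, date(2007, 6, 20))))   # five servers
```

Generating the June 1-15 versus June 16-30 breakdown the auditor wants is then just a matter of calling this per day (or per event boundary) and attaching the relevant logs to each membership set.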

4) Automatically finding and remediating configuration drift.  Configuration drift typically happens because of mistakes – for example, when administrators perform the same tasks a little differently each time because there are no standardized best practices, or when they are under pressure to do something quickly and, in an effort to save time, don’t completely follow their own best practices.  Creating automated checks (such as a pre-deployment check against current policies related to patch levels, software updates, and configuration tweaks) can prevent many compliance issues altogether.
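
A pre-deployment drift check of this kind can be sketched in a few lines. The policy keys and values below are invented for illustration, not a real product's schema:

```python
# Hypothetical pre-deployment check: diff a server's actual configuration
# against the current policy baseline before it goes live.
policy_baseline = {
    "patch_level": "SP2-2008-05",
    "ssh_root_login": "disabled",
    "av_signatures": "current",
}

def drift_report(actual: dict) -> dict:
    """Return {setting: (expected, actual)} for every out-of-policy value."""
    return {
        key: (expected, actual.get(key, "<missing>"))
        for key, expected in policy_baseline.items()
        if actual.get(key) != expected
    }

server_config = {"patch_level": "SP2-2008-03", "ssh_root_login": "disabled"}
print(drift_report(server_config))
# flags the stale patch level and the missing AV signature setting
```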

Basically, the best attitude to take is that auditing and compliance reporting are not the enemy.  They are mechanisms to make sure that you know what you think you know about your virtualized environments.  The companies that put those controls in place in an automated way will not only dodge auditor and security bullets more nimbly but also keep virtual sprawl from spreading like dandelions in your lawn.

Jasmine Noel is founder and partner of Ptak, Noel & Associates.  With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion.  Send any comments, questions or rants to

Note – I prefer the phrase “server stacks” to “virtual machines” because “virtual machines” implies a specific type of virtualization; however, there are several different ways one can virtualize a datacenter’s physical resources.

Industry News

Are SIEM and log management the same thing?

Like many things in the IT industry, there’s a lot of market positioning and buzz tossed around regarding how the original term SIM (Security Information Management), the subsequent marketing term SEM (Security Event Management), and the newer combined term SIEM (Security Information and Event Management) relate to the long-standing practice of log management.

Did you know? – EventTracker combines both Log Management and SIEM functionalities including real-time collection, consolidation, correlation, analysis, alerting and reporting. Find out more here

Researchers Raise Alarm Over New Iteration of Coreflood Botnet

Password-stealing Trojan is spreading like a worm – and targeted directly at the enterprise

Did you know? EventTracker can detect zero-day attacks with its powerful change monitoring feature. Find out more here

Prism Microsystems releases EventTracker v6.2; offers advanced USB tracking for protection from inside theft

  • Read press release
  • Get more information on new features

EventTracker wins Network Products Guide 2008 Readers Trust Award

EventTracker wins in two categories – Event Management and Computer Forensics. Thanks to those who voted for us. We appreciate your support.

Let he who is without SIM cast the first stone

In a recent post Raffael Marty points out the shortcomings of a “classic” SIM solution, including high cost due in part to a clumsy, expensive tuning process.

More importantly, he points out that SIMs were designed for network-based attacks, and these are on the wane, replaced by host-based attacks.

At Prism, we’ve long argued that a host-based system is more appropriate and effective. This is further borne out by the appearance of polymorphic strains such as Nugache that now dominate Threatscape 2008.

However, is “IT Search” the complete answer? Not quite. In fact, no such “silver bullet” has ever worked out. The fact is, users (especially in the mid-tier) are driven by security concerns, so proactive correlation is useful (in moderation), compliance remains a major driver, and event reduction with active alerting is absolutely essential for the overworked admin. That said, “IT Search” is a useful and powerful tool in the arsenal of the modern, knowledgeable Security Warrior.

A “Complete SIM” solution is more appropriate for the enterprise. Such a solution blends the “classic” approach, based on log consolidation and multi-event correlation from host and network devices, PLUS a white/greylist scanner PLUS the Log Search function. Long-term storage and flexible reporting/forensic tools round out the ideal feature set. Such a solution has better potential to satisfy the different user profiles. These include Auditors, Managers and Security Staff, many of whom are less comfortable with query construction.

One-dimensional approaches such as “IT Search” or “Network Behavior Anomaly Detection” or “Network Packet Correlation,” while undeniably useful, are in themselves limited.

Complete SIM, IT Search included, that’s the ticket.


Fear, boredom and the pursuit of compliance

When it comes right down to it, we try to comply with regulations and policies because we are afraid of the penalties. Penalties such as corporate fines and jail time may be for the executive club, but everyone is affected when the U.S. Federal Trade Commission starts directly overseeing your security audits and risk assessment programs for 20 years. Just ask the IT folks at TJX Cos Inc. Then there are the hits to the top line as customers get shy about using their credit cards with you, and the press has fun dragging you through the mud. Not to mention your sneaking suspicion that all the checked boxes on the regulatory forms are not really making you more secure. With all of that, there is a lot of fear associated with compliance.

On the other hand, compliance is difficult because it requires consistency, diligence, and close attention to detail – three things that are extremely tedious and boring. Most human beings simply do not behave that way for long periods of time. It is more compelling for us to react to an event (such as a crash diet to fit into a wedding dress) than it is for us to eat healthy and exercise every day. The situation is also complicated by the fact that enterprises do not like the price tag of having highly skilled technologists manually collect data and run compliance reports. Yes, it is insurance against bad things, but who really likes paying for it? So what happens is that IT managers rarely have the time or resources to manually troll through logs looking for compliance issues on a daily basis, in spite of the fact that doing so is a basic good practice.

What’s the result? People find creative ways to avoid having to comply or avoid the axe when it falls.

Yet auditing and compliance are not the enemy. They are mechanisms to make sure that you know what you think you know. When done consistently, with minute-by-minute diligence, IT’s control over the whole environment improves. So a better way would be to let technology do the basic work for us. Computing is great at tedium and terrible at creative thinking. The trick is getting technology with the right combination of attributes (notice I didn’t say features):

  • non-intrusive
  • pervasive
  • adaptable to different data collection situations
  • adaptable to different data analysis situations

Auditing and compliance works best when it is non-intrusive. Well-meaning people can unintentionally do terrible things to their systems – making a configuration change to improve network performance that leaves a security hole wide enough to drive a truck through. Unscrupulous people behave differently when they know they are being watched. The combination of these can be terrifying – just ask Hannaford. IT managers rarely have the time or resources to manually troll through logs looking for compliance issues on a daily basis, in spite of the fact that doing so is a basic good practice. So it’s a win-win-win situation when the auditing solution is non-intrusive enough to collect data, conduct routine analysis and report results without additional IT effort.

Pervasiveness of a compliance solution is growing in importance because, quite simply, there are no more disconnected systems. Consider how auditors try to determine if an IT system is relevant to SOX. They ask if the system is directly related to the timely production of financial reports, or if the application is characterized by high-value and/or high-volume transactions with straight-through processing, and whether the application is shared by many business units across the enterprise.

These questions are almost nonsensical from a technical perspective, particularly as economic and business reality has forced the continued development of modular, distributed computing environments that can and will change at increasingly rapid rates. With loosely coupled architectures such as SOA, a single ordering application is now a composite of multiple services developed by different groups in different business units. These services will also be reused in other business processes. New service components can be added at any time. Today, composite applications typically have only one or two connections; however, the benefits of SOA are so compelling that over time the number of connections per application will explode. But wait – there’s more. Virtualization and automated provisioning mean that new application servers, storage devices, or networking equipment can be deployed or reallocated within minutes. The entire datacenter could be reconfigured in six months. Well, maybe that is a little extreme, but you get my point.

In this situation the concept of auditing only the servers that support the ordering application makes little sense; instead, there needs to be an assumption that every system will interact with others and that those interactions will change over time. What is important is differentiating between good and bad, or authorized and unauthorized, interactions even as the environment is constantly evolving. The last thing you need is a meeting where you are asked why the ordering application is transacting with the employee database every two minutes and your only response is – “but that’s not how that app is supposed to work.”

And yet the raw data needed to make those differentiations is available. Traces of new transaction paths, new service connections, configuration changes, resource reallocations and so on are typically logged by the infrastructure itself. The question is whether the auditing solution is pervasive enough to capture the full picture of what is happening.

Adaptability of the data collection process is also very important for handling the constant changes in how businesses use technology. For example, consider some of the short-lived analytical applications currently developed by financial analysts to meet immediate needs of specific customer transactions. Often these applications are a unique combination of desktop productivity tools connected to a variety of corporate databases and applications. The applications also exist only as long as the customer requires them, typically a few weeks to a few months (hopefully we won’t get to 24-hour application life-spans any time soon). These short-lived applications are a great competitive advantage, but present unique difficulties for auditors and risk managers. An enterprise has to prove that specific transactions under audit were completed by an application, and that the application and the virtual computing environment in which it operated (both of which no longer exist) were compliant with regulatory and corporate risk management policies. In other words, prove that the application had integrity during its short life-span.

Today, what some companies are doing is making their IT folks do a manual inventory of these applications (from desktop to everything it touches) every day. Imagine how tedious that is – the burnout rate for those admins must be incredible. Why do it this way? Because their existing tools were not extensible enough to cover this new use case.

OK, we’ve covered technology change and business change, but we are not done yet, because the regulations and policies themselves change – and this is where adaptability to different data analysis situations comes in. Not only do auditors become more sophisticated in what they are asking for, but the minute you expand your company internationally you have to deal with a slew of new regulations and policies. Much of the time these new regulations are simply about analyzing the same raw data in a different way. But producing six slightly different reports based on the same data should not be a manual effort. You should be able to hit a ‘print reports’ button and let the pdf-ing software take care of the rest.

Besides, even auditors are human. Surely they prefer the more creative and investigational aspects of their work over the tedium of generating multiple versions of the same quarterly compliance reports.

Jasmine Noel is founder and partner of Ptak, Noel & Associates. With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion. Send any comments, questions or rants to

Industry News

Products to help detect insider threats

While insider threats aren’t as prevalent as attacks from outside a network, insiders’ malicious activity tends to have far greater consequences. Insiders know precisely where to go to access the most sensitive information, and they often have ready means to carry out malicious actions. One way to detect and protect against such threats is to log, monitor and audit employee online actions. Today we’ll look at three products that are well suited to detecting insider threats.

Featured Case Study

LeHigh Valley Hospital uses EventTracker to comply with HIPAA and improve IT Security

Architectural Chokepoints

I have been thinking a bit about scalability lately – and I thought it might be an interesting exercise to examine a couple of the obvious places in a SIEM solution where scalability problems can be exposed. In a previous post I talked about scalability and EPS. The fact is there are multiple areas in a SIEM solution where the system may not scale, and anyone considering a SIEM procurement should think of scalability as a multi-dimensional beast.

First, all the logs you care about need to be dependably collected. Collection is where many vendors build EPS benchmarks – but generally the number of events per second is based on a small normalized packet. Event size varies widely depending on source, so understand your typical log size and calculate accordingly. The general mitigation strategies for collection are faster collection hardware (collection is usually a CPU-intensive task), a distributed collection architecture, and log filtering.

One thing to think of — log generation is often quite “bursty” in nature. You will, for instance, get a slew of logs generated on Monday mornings when staff arrive at work and start logging onto system resources. You should evaluate what happens if the system gets overloaded – do events get lost, does the system crash?

As a mitigation strategy, event filtering is sometimes pooh-poohed; however, the reality is that 90% of the traffic generated by most devices consists of completely useless (from a security perspective) status information. Volume also varies widely depending on audit settings. A company generating 600,000 events per day on a Windows network can easily generate ten times as much by increasing its audit settings slightly. If you need the audit levels high, filtering is the easiest way to ease pressure on the entire down-stream log system.
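
A filtering step like this can be sketched very simply. The event IDs treated as noise below are illustrative assumptions; in practice the list comes from your own audit policy:

```python
# A minimal filtering sketch: drop security-irrelevant status noise before it
# hits the down-stream pipeline. The "noise" set is invented for illustration.
NOISE_EVENT_IDS = {538, 562, 577}    # e.g. routine logoff/handle-close chatter

def filter_stream(events):
    """Yield only events worth correlating, storing, and reporting on."""
    for event in events:
        if event["id"] not in NOISE_EVENT_IDS:
            yield event

stream = [{"id": 538}, {"id": 560}, {"id": 577}, {"id": 567}]
kept = list(filter_stream(stream))
print(f"kept {len(kept)} of {len(stream)} events")
```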

Collection is also a multi-step process; simply receiving an event is too simplistic a view. Resources are expended running policy and rules against the event stream – the more processing, the more system resources consumed. The data must also be committed to the event store at some point, so it needs to get written to disk. It is highly advisable to look at these as three separate activities and validate that the solution can handle your volume in each.

A note on log storage for those who are considering buying an appliance with a fixed amount of onboard storage – be sure it is enough, and be sure to check out how easy it is to move off, retrieve and process records that have been moved to offline storage media. If your event volume eats up your disk you will likely be doing a lot of the moving off, moving back on activity. Also, some of the compliance standards like PCI require that logs must be stored online a certain amount of time. Here at Prism we solved that problem by allowing events to be stored anywhere on the file system, but most appliances do not afford you that luxury.

Now let’s flip our attention to the analytics and reporting activities. This is yet another important aspect of scalability that is often ignored. If a system can process 10 million events per minute but takes 10 hours to run a simple query, you are probably going to have upset users and a non-viable solution. And what happens to the collection throughput above when a bunch of people are running reports? Often a single user running ad-hoc reports is just fine; add a couple more and you are in trouble.

A remediation strategy here is to look for a solution that can offload the reporting and analytics to another machine so as not to impact the aggregation, correlation and storage steps. If you don’t have that capability, absolutely press the vendor for performance metrics when reports and collection are done on the same hardware.

– Steve Lafferty

The EPS Myth

Often when I engage with a prospect, their first question is “How many events per second (EPS) can EventTracker handle?” People tend to confuse EPS with scalability, so by simply giving back an enormous-enough number (usually larger than the one the previous vendor quoted), a vendor convinces them its product is, indeed, scalable. The fact is scalability and events per second (EPS) are not the same, and many vendors dodge the real scalability issue by intentionally using the two interchangeably. A high EPS rating does not guarantee a scalable solution. If the only measure of scalability available is an EPS rating, you as a prospect should be asking yourself a simple question: what is the vendor’s definition of EPS? You will generally find that the answer is different with each vendor.

  • Is it number of events scanned/second?
  • Is it number of events received/second?
  • Is it number of events processed/second?
  • Is it number of events inserted in the event store/second?
  • Is it a real time count or a batch transfer count?
  • What is the size of these events? Is it some small non-representative size, for instance, 100 bytes per event or is it a real event like a windows event which may vary from 1000 to 6,000 bytes?
  • Are you receiving these events in UDP mode or TCP mode?
  • Are they measuring running correlation rules against the event stream? How many rules are being run?
  • And let’s not even talk about how fast the reporting function runs – EPS does not measure that at all.

At the end of the day, an EPS measure is generally a measure of a small, non-typical normalized event received. Nothing is measured about actually doing something useful with the event, which makes the number pretty much useless.

With no standard definition of what an event actually is, EPS is also a terrible comparative measure. You cannot assume that one vendor claiming 12,000 EPS is faster than another claiming 10,000 EPS, as they are often measuring very different things. A good analogy would be asking someone how far away an object is and getting the reply “100.” For all the usefulness of the EPS measure, the unit could be inches or miles.

EPS is even worse for ascertaining true solution capability. Some vendors market appliances that promise 2,000 EPS and 150 GB of disk space for log storage. They also promise to archive security events for multiple years to meet compliance requirements. For the sake of argument, let’s assume the system is receiving, processing and storing 1,000 Windows events/sec with an average 1 KB event size (a common size for a Windows event). In 24 hours you will receive over 86 million events, or roughly 86 GB. Compressed at 90% this consumes 8.6 GB – almost 6% of your storage in a single day. Even with heavy compression the appliance can handle only a few weeks of data under this kind of load. Think of buying a car with an engine that can race to 200 MPH and a set of tires and suspension that cannot go faster than 75 MPH. The engine can go 200, but the car can’t. A SIEM solution is the car in this example, not the engine, and having the engine does not do you any good at all.
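
The arithmetic above can be checked directly (a sketch using the same assumptions: 1,000 events/sec, ~1 KB per event, 90% compression, 150 GB of disk):

```python
# Working the storage arithmetic from the paragraph above.
events_per_sec = 1_000
event_size_kb = 1
seconds_per_day = 86_400

raw_gb_per_day = events_per_sec * event_size_kb * seconds_per_day / 1_000_000
compressed_gb_per_day = raw_gb_per_day * 0.10          # 90% compression

disk_gb = 150
days_until_full = disk_gb / compressed_gb_per_day
print(f"{compressed_gb_per_day:.1f} GB/day -> full in {days_until_full:.0f} days")
```

At roughly 8.6 GB a day, the 150 GB appliance fills in under three weeks, which is the mismatch the car analogy is getting at.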

So when asked about EPS, I sigh, say it depends, and try to explain all this. Sometimes it sinks in, sometimes not. All in all, don’t pay a lot of attention to EPS – it is largely an empty measure until the unit of measure is standardized, and even then it will be only a small part of overall system capability.

Steve Lafferty

EventTracker review; Zero-day attack protection and more

Creating lasting change from security management

Over the past year, I’ve dealt with how to implement a Pragmatic approach to security management and then dug deeper into the specifics of how to implement a security management environment successfully. Think of those previous tips as your high-school-level education in security management.

Now it’s time to kiss the parents, hug the dog, and head off to the great unknown that represents college, university or some other secondary education. The tools are in place and you have a quick win to celebrate, but the reality is these are still just band-aids. The next level of your education is about creating lasting change that results in constant improvement of your security posture. Creating this kind of change means that your security management platform needs to:

  • Make you better – If there isn’t a noticeable difference in your ability to do your job, then the security management platform wasn’t worth the time or the effort to set it up. Everybody loses in that situation. You should be able to pinpoint issues faster and figure out what to investigate more accurately. These may sound like no-brainers, but many organizations spend big money to implement technology that doesn’t show any operational value.
  • Save you time – The reality is, as interesting as reports are for compliance, if using your platform doesn’t help you do your job faster, then you won’t use it. No one has discretionary time to waste doing things less efficiently. Thus, you need to be able to utilize your dashboard daily to investigate issues quickly and ensure you can isolate problems without having to gather data from a variety of places. Those penalties in time can make the difference between nipping a problem in the bud or cleaning up a major data breach.

I know those two objectives may seem a long way off when you are just starting the process, but let’s take a structured approach to refining our environment and before you know it, your security management environment will be a well-oiled machine, and dare I say it, you will be the closest thing to a hero on the security team.

Step 1: Revisit the metrics

Keep in mind that in the initial implementation (and while searching for the quick win), you gathered some data and started pulling reports on it to identify the low-hanging fruit that needed to be fixed right now. This is a good time to make sure you are gathering enough data to draw broader conclusions. Remember that we are looking mostly for anomalies. Since we defined normal for your environment during the initial implementation, now we need to focus on what is “not normal.” Here are a couple of areas to focus on:

  • Networks – This is the easiest of the data to gather because you are probably already monitoring much of it. Yes, the data coming out of your firewalls, IPS devices, and content gateways (web filtering and anti-spam) should already be pumped into the system.
  • Data center – Many attacks now target databases and servers because that’s where the “money” is. Thus pulling log feeds from databases and server operating systems is another set of data sources that should be leveraged. Again, once you gather the baseline, you are in good shape to start focusing on behavior that is not “normal.”
  • Endpoints – Depending on the size of your organization, this may not be feasible, but another area of frequent compromise is end-user devices. Maybe users are copying data to a USB thumb drive or installing unauthorized applications. Periodically gathering system log information and analyzing it can also yield a treasure of information.
  • Applications – Finally, you can also gather data directly from the application logs. Who is accessing the application, and what transactions are they performing? You can look for patterns, which in many cases could indicate a situation that needs to be investigated.

Step 2: Refine the Thresholds 

Remember the REACT FASTER doctrine? That’s all about making sure you learn about an issue as quickly as possible and act decisively to head off any real damage. Since you are gathering a very comprehensive set of data now (from Step 1), the key to being able to wade through all that data and make sense of it is thresholds.

To be clear, initially your thresholds will be wrong and the system will tend to be a bit noisy. You’ll get notified about too much stuff, because you are better off setting loose thresholds initially than missing the iceberg (yes, it’s a Titanic reference). But over time (and time can be measured in weeks, not months), you can and should be tightening those thresholds to really narrow in on the “right” time to be alerted to an issue.

The point is all about automation. You’d rather not have your nose buried in log data all day or watching the packets fly by, so you need to learn to trust your thresholds. Once you have them in a comfortable place (like the Three Bears: not too many false positives, but not too few either), you can start to spot-check some of the devices, just to make sure. Constant improvement is all about finding the right mix of data sources and monitoring thresholds to make an impact. And don’t think you are done tuning the system – EVER. What’s right today is probably wrong tomorrow, given the dynamic nature of IT infrastructure and the attack space.
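
The loose-then-tight threshold idea can be illustrated with a toy example (the counts and thresholds below are invented; a real baseline comes from your own data):

```python
# Toy illustration of threshold-driven alerting: start loose, then tighten
# as you learn what "normal" looks like.
failed_logins_per_hour = [3, 2, 4, 55, 3, 2]   # one obvious spike

def alerts(series, threshold):
    """Return the indexes of every interval that would fire an alert."""
    return [i for i, count in enumerate(series) if count > threshold]

loose = alerts(failed_logins_per_hour, threshold=1)    # noisy: fires constantly
tight = alerts(failed_logins_per_hour, threshold=20)   # fires only on the spike
print(loose, tight)
```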

Step 3: Document thyself

Finally, once your system is operating well, it’s time to revisit all of those reports you generate. Look from a number of different perspectives:

  • Operational reporting – You probably want to be getting daily (weekly at a minimum) reports, which pinpoint things like attacks dropped at the perimeter, login failures, and other operational data. Make sure by looking at the ops reports you get a great feel for what is going on within your networks, data centers and applications. Remember that security professionals hate surprises. These reports help to eliminate surprises.
  • Compliance reporting – The reports that help you run your security operation are not necessarily what an auditor is going to want to see. Many of the security platforms have pre-built reports for regulations like PCI and HIPAA. Use these templates as a starting point and work with your auditor or assessor and make sure your reports are tuned to what they both expect and need. The less time you spend generating compliance reports, the more time you are spending fixing issues and building a security strategy.

Congratulations, you are ready for your diploma. If you generally follow some of the tips and utilize many of the resources built into your security management platform, you can make a huge impact in how you run your security environment. I won’t be so bold as to say you can “get ahead of the threat,” because you can’t. But you can certainly REACT FASTER and more effectively.

Good luck on your journey, and you can always find me at

Industry News

Adobe zero day flaw being actively exploited in wild

The widely used Adobe Flash Player has a zero day flaw that is being targeted by a number of attackers who set up more than 200,000 Web pages to exploit the flaw.

Exploiting Security Holes Automatically

Software patches, which are sent over the Internet to protect computers from newly discovered security holes, could help the bad guys as well as the good guys, according to research recently presented at the IEEE Symposium on Security and Privacy. The research shows that attackers could use patches to automatically generate software to attack vulnerable computers, employing a process that can take as little as 30 seconds.

Learn how you can protect your IT systems from zero-day attacks

There is always a lag between the time a new virus hits the web and the time a patch is created and antivirus definitions updated, which often gives the virus several hours to proliferate across thousands of machines (The Adobe flaw is a perfect case in point). In addition, virus signatures are changing constantly and often the same virus can come back with a slight variation that is enough to elude antivirus systems.

Auditing Drive Mappings – TECH TIP

Windows does not track drive mappings for auditing out of the box. To audit drive mappings you will need to do the following steps:

  1. Turn on Object Access Auditing via Group Policy on the system(s) in question. Then perform the following steps on each system where you want to track drive mappings:
  2. Open the registry and drill down to HKEY_CURRENT_USER\Network
  3. Right-click on Network and choose Permissions (if you click on the plus sign you will see each of your mapped drives listed)
  4. Click on the Advanced button
  5. Click on the Auditing tab, then click on the Add button
  6. In the Select User or Group box, type in Everyone
  7. This will open the Auditing dialog box
  8. Select the settings that you want to audit for; stay away from the Full Control and Read Control options. I recommend the following settings: Create Subkey, Create Link and Delete.

Windows will now generate event IDs 560, 567 and 564 when drive mappings are added or deleted: 564 is generated when a mapping is deleted, 567 when a mapping is added or deleted, and 560 is generated in both cases as well. Event IDs 567 and 564 will not give you the full information you are looking for; they tell you what was done to the mappings, but not WHICH mapping. To determine which mapping, you will need the Handle ID code found in the event description of the 564/567 events. The Handle ID lets you track back to the 560 event, which gives you the mapping that was added or deleted. Note that event ID 567 is only generated on Windows XP and Windows 2003 systems; Windows 2000 will not generate 567.
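
The Handle ID correlation described above can be sketched as follows. The field names mimic exported event-log rows but are assumptions, not the exact Windows schema:

```python
# Hypothetical sketch: tie a 564/567 event back to its 560 event via the
# shared Handle ID to recover WHICH drive mapping changed.
events = [
    {"id": 560, "handle_id": "0x3f8", "object": r"HKCU\Network\Z"},
    {"id": 564, "handle_id": "0x3f8"},              # delete, but no object name
]

def resolve_mapping(change_event, all_events):
    """Find the 560 open event that names the object for this Handle ID."""
    for e in all_events:
        if e["id"] == 560 and e["handle_id"] == change_event["handle_id"]:
            return e["object"]
    return None

deleted = resolve_mapping(events[1], events)
print(f"drive mapping removed: {deleted}")
```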

– Isaac

Ten reasons you will be unhappy with your SIM solution – and how to avoid them

As the market matures we are increasingly being contacted by prospects that are looking not to implement SIM technology, but instead are looking to replace existing SIM technology. These are people that purchased a couple of budget cycles ago, struggled with their selection and are now throwing up their hands in frustration and moving on. From a high level, the reason for this adoption failure was not that SIM was bad or unnecessary. These people were, and remain, convinced of the benefits of a SIM solution, but at the time of their initial purchase many did not have a detailed understanding of both their business and technical requirements, nor a clear understanding of the actual investment in time and effort necessary to make their SIM implementation a success.

For new prospects just getting into SIM, is there a lesson to be learned from these people? The answer is a resounding “yes,” and it is worthwhile digging a little deeper than a generic “understand your requirements before you buy” (really good advice, but a bit obvious!) to share some of the more common themes we hear.

Just as a bit of stage setting, the majority of the customers Prism serves tend to be what are classically called SMEs (Small and Medium Enterprises). Although a company might be considered an SME, it is not uncommon today for even smaller enterprises to have well in excess of a thousand systems and devices that need to be managed. Implementing Security Information Management (SIM) poses a special challenge for SMEs, as events and security data from even a thousand devices can be completely overwhelming. SMEs are “tweeners”: they have a relatively big problem (like large enterprises), but less flexibility (in terms of money, time and people) to solve it. SMEs are also pulled by vendors from all ends of the spectrum – you have low-end vendors coming in and positioning very inexpensive point solutions, and very high-end vendors pitching their wares, sometimes in a special package for you. So the gamut of options is very, very broad.

So here they are, the top 10, in no particular order as we hear these all too frequently:

  1. We did not evaluate the product we selected. The demo looked really good and all the reviews we read were good, but it just did not really fit our needs when we actually got it in house.
  2. Quite frankly, during the demo we were so impressed by the sizzle that we never really looked at our core requirement in any depth. The vendor we chose said everyone did that and not to worry — and when we came to implement, we found that the core capability was not nearly as good as the sizzle we saw. We ended up never using the sizzle and struggling with the capability we really should have looked at.
  3. We evaluated the product in a small test network, and were impressed with results. However, deployment on a slightly larger scale was disappointing and way more complicated.
  4. A brand name or expensive solution did not mean a complete solution.
  5. We did not know our security data profile or event volumes in advance. This prevented us from running load or stress tests, but the vendor response was “don’t worry”. We should have. The solution scoped did not scale to our requirements, and the add-ons killed us as we were out of the cheap entry level bundles.
  6. Excessive false positives were generated by the threat analysis module of the SIM solution. It was too complicated to tune, and we failed to detect a real threat when it happened.
  7. Deployment was/is a nightmare, and we lacked the tools necessary to manage and maintain the solution. Professional Services was not in the budget.
  8. Once we gained experience with the product and decided to tune it to meet evolving business objectives, we found the architecture inflexible and ran into licensing limitations on the number of consoles and the separation of duties. Meeting our new business requirements was just too expensive, and we were annoyed at the vendor, as this was not what they had represented to us at all.
  9. We bought on the cheap, and realized the solution does not scale up. It works very well in a small domain with limited requirements but beyond that the solution is wanting.
  10. We bought the expensive solution because it had a lot of cool stuff we thought we would use, but the things we really needed were hard to use, and the stuff we really didn’t need, but impressed the heck out of us at demo time, we are never going to get to.


Is it better to leave some logs behind?

Log management has emerged in the past few years as a must-do discipline in IT for complying with regulatory standards, and protecting the integrity of critical IT assets. However, with millions of logs being spit out on a daily basis by firewalls, routers, servers, workstations, applications and other sources across a network, enterprises are deluged with log data and there is no stemming the tide. In fact, the tide is just beginning to come in. With always-on high-speed internet connectivity and an increasing number of servers and devices that an IT department has to manage, the task of collecting, storing and making sense of all this data is no mean feat. Adding to the confusion are non-specific regulatory requirements relating to logging and archiving that are entirely vague on what an IT department must do, coupled with the increasing pressure for data privacy. It is not surprising then that for many companies the default plan to keep the auditors happy is to simply collect and retain everything from every source. However, collecting and retaining every single log ever generated is often unnecessary from both a regulatory and forensic standpoint, and the retention of the data can often represent a security or liability risk itself.

This confusion in the log management space is further compounded by vocal proponents in the vendor community of the “collect everything” approach as necessary for being compliant and secure. My experience is that the world is not a black and white place but a myriad of grays. If you dig a little deeper you might find a reason for the extreme position: it turns out that some vendors sell capacity for storing logs, others have license fees tied to log volume, and yet others have no ability to enforce central configuration of filters across a large installation.

OK, putting aside cynicism, are they actually right? Is this one of those rare cases where the broad statement is simply the correct statement (“don’t smoke” immediately springs to mind)? Let’s explore this in some more detail.

Industry News

The essential guide to security audits

The security audit is a practice that could best be filed under the “necessary evil” category. While no business owner, executive or IT manager relishes the thought of enduring an end-to-end security examination, it’s generally understood that an audit is the best and only way to fully ensure that all of a business’s security technologies and practices are performing in accordance with established specifications and requirements.

PCI – Smart or Stupid?

There is something odd about the Payment Card Industry (PCI) standard. It’s one of the best things to happen to the security of consumer data, yet many think it is as complex as rocket science.

Prism Microsystems and Finally Software bring SIEM to the EMEA market

Prism Microsystems, a leading provider of integrated SIEM (Security Information and Event Management) and Change Management solutions, recently announced a reseller agreement with Finally Software, a UK-based provider of security software solutions, to market and support Prism’s SIEM solution, EventTracker, in the EMEA region.

Featured Webinar

Beyond Traditional Security. Blending Proactive and Reactive Security to Protect the Enterprise

Traditional firewalls and Intrusion Detection Systems leave your organization unprotected from most zero-day and internal attacks. You need a combination of both event and change management practices to protect your organization. This webinar will discuss how to use a combination of proactive and reactive security strategies to shield your organization from these dangerous threats.

When you can’t work harder, work smarter

In life and business, the smart approach is to make the most of what you have. You can work 8 hours, then 10, then 12 hours a day, and still hit your performance limit. How do you get more out of your work? By working smarter, not harder: get others on board, delegate, communicate. Nowhere is this truer than with computer hardware. Poorly written software makes increasing demands on resources but cannot deliver quantum jumps in performance.

As we evaluated earlier versions of EventTracker, it became clear that we were reaching the physical limits of the underlying hardware, and that the way to faster reports was not to work harder (optimize code) but to work smarter (plan up-front, divide and conquer, avoid searching through irrelevant data).

This is realized in the Virtual Collection Point architecture that is available in version 6. By segregating log sources up front into virtual groups and stacking software processes from reception to archiving, improvement in performance is possible FOR THE SAME HARDWARE!

When comparing SIEM solutions for scalability, remember that if the only path is to add more hardware, it’s a weaker approach than making the best of what you already have.
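The divide-and-conquer idea is easy to illustrate. This sketch routes each log source to its own partition so that later archiving, searches and reports touch only the relevant slice; the group names and record fields are invented for illustration and are not EventTracker's actual API:

```python
# Divide-and-conquer sketch: route each log source to one of several
# independent pipelines ("virtual collection points") so that downstream
# processing never scans irrelevant data. Names are illustrative.

from collections import defaultdict

GROUPS = {"firewalls": {"fw1", "fw2"}, "servers": {"web1", "db1"}}

def route(record, groups=GROUPS):
    """Return the partition a log record belongs to."""
    for name, sources in groups.items():
        if record["source"] in sources:
            return name
    return "default"

def partition(records):
    """Split incoming records into per-group buckets, each processed independently."""
    buckets = defaultdict(list)
    for r in records:
        buckets[route(r)].append(r)
    return buckets

logs = [{"source": "fw1", "msg": "deny tcp"}, {"source": "web1", "msg": "GET /"}]
print(sorted(partition(logs)))  # ['firewalls', 'servers']
```

A report over firewall activity then only has to read the "firewalls" bucket, which is where the same-hardware speedup comes from.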

– Ananth

The Weakest Link in Security

The three basic ingredients of any business are technology, processes and people. From an IT security standpoint, which of these is the weakest link in your organization? Whichever it is, it is likely to be the focus of attack.

Organizations around the globe routinely employ the use of powerful firewalls, anti-virus software and sophisticated intrusion-detection systems to guard precious information assets. Year in and year out, polls show the weakest link to be processes and the people behind them. In the SIEM world, the absence of a process to examine exception reports to detect non-obvious problems is one manifestation of process weakness.
The reality is that not all threats are obvious and detected/blocked by automation. You must apply the human element appropriately.

Another is to audit user activity, especially privileged user activity. It must match approved requests and pass a reasonableness test (e.g., was it performed during business hours?).
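As a toy illustration of such a reasonableness test (the record format and the 09:00-18:00 weekday window are assumptions, not a standard), flagging privileged activity outside business hours takes only a few lines:

```python
from datetime import datetime

def outside_business_hours(ts, start=9, end=18):
    """Flag activity on weekends or outside 09:00-18:00 (thresholds are illustrative)."""
    return ts.weekday() >= 5 or not (start <= ts.hour < end)

activity = [
    {"user": "admin", "time": datetime(2008, 2, 4, 23, 15)},  # Monday, 23:15
    {"user": "admin", "time": datetime(2008, 2, 5, 10, 0)},   # Tuesday, 10:00
]
flagged = [a for a in activity if outside_business_hours(a["time"])]
print(len(flagged))  # 1
```

The point is not the code but the process: someone still has to review what gets flagged and compare it against approved change requests.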

Earlier this decade, the focus of security was the perimeter and the internal network. Technologies such as firewalls and network based intrusion detection were all the rage. While these are necessary, vital even, defense in depth dictates that you look carefully at hosts and user activity.

– Ananth

Know your requirements

The Gartner Group has long produced its Hype Cycle for IT technologies to show when technologies begin to offer practical benefits and become widely accepted. In 2006, Security Information and Event Management (SIEM) was located in the ‘Trough of Disillusionment’. This segment of the curve represents a technology that has failed to meet expectations, become unfashionable and is therefore receiving less coverage in the press. Gartner predicted emergence into the ‘Slope of Enlightenment’ in 2-5 years.

What can you do to avoid disillusionment?
Three words — Know your requirements
The lack of this is the single largest reason for failure of IT projects.

Basic advice, you say? Amazing how basic advice is the hardest to follow.
Watch the ball, mind your footwork. That sort of thing.

The market is awash with product offerings, each with similar claims but different heritages and usually optimized for different use-cases. Selection criteria should be your own needs. Mature vendors dislike failed projects as much as the sponsors because of the negative energy generated by the failure. Sharing your requirements sets expectations more correctly and therefore reduces the chances of energy sapping failures.

Aside from the maturation of the technology itself, the other reason for the ‘trough’ is customer expectation and implementation methodology, which are usually outside vendor control. As SIEM comes into the mainstream, the basics apply more than ever. A mature customer with robust practices will get better results with new technology than those with poor habits get from well-established technologies.

As Sun Tzu said, “he who knows neither himself nor his enemy can never win, he who knows himself but does not know his enemy will sometimes win and sometimes lose, but he who knows himself and his enemy will never lose.”

– Ananth

The 5 W’s of Security Management

I’ve seen it happen about a thousand times if I’ve seen it once. A high profile project ends up in a ditch because there wasn’t a proper plan defined AHEAD of time. I see this more often in “squishy” projects like security management because success isn’t easily defined. It’s not like installing a web application firewall, which will be deemed a success if it blocks web attacks.

Security management needs a different set of drivers and a more specific and focused discussion of what is “success,” before solutions are evaluated. Before vendors are consulted. Before you do anything. I know it’s hard, but I want you to take a deep breath. If you can’t answer the following questions about your project, then you have a lot of work to do before you are ready to start thinking about specific solutions.

First and foremost, you need to have a clear understanding of your goals and your budget and make sure to line up your executive support. Ultimately someone is going to have to pay the check for whatever it is you want to buy. So you will be a lot better off if you take a bit of time up front and answer all these sticky questions.

A favorite tactic of mine is to ask the 5 W’s. You remember those, right? It was a grade school thing. Who, what, where, when and why? Pretty much anything you need to do can be clarified and distilled by isolating the issues into the 5 W’s. I’m going to kick start your efforts a bit and walk you through the process I take with clients as they are trying to structure their security management initiative.

The first thing to understand is WHY you are thinking about security management? What is the driver for the project? Are important things falling through the cracks and impacting your operation efficiency? Did an incident show a distinct lack of data that hindered the investigation? Maybe an auditor mandated a more structured approach to security management? Each of these (and a ton of other reasons) is a legitimate driver for a security management project and will have a serious impact on what the project needs to be and accomplish.

Once you have a clear understanding of why, you need to line up the forces for the battle. That means making sure you understand who has the money to pay for the project and who has final approval. If you don’t understand these things, it’s very unlikely you’ll drive the project through.

After you have a clear idea of which forces will be at your disposal, you can determine the WHO, or which folks need to be part of the project team. Do the network folks need to be involved, the data center folks and/or the application folks? Maybe it’s all of the above, although I’d push you to focus your efforts up front. You don’t want to be in a position where you are trying to boil the ocean. You want to be focused and you want to have the right people on the team to make sure you can achieve what you set out to achieve. Which brings us to the next question…

Next comes the WHAT. This gets down to managing expectations, which is a blind spot for pretty much every security professional I know. Let me broaden that: it’s an issue for everyone I know, regardless of what they do for a living. If you aren’t clear, and thus your senior team isn’t clear, about what this project is supposed to achieve, it’s going to be difficult to achieve it.

Any organization looking at security management needs to crisply define what the outcomes are going to be and design some success metrics to highlight those outcomes. If it’s about operations, how much more quickly will issues be pinpointed? What additional information can be gathered to assist in investigations, etc? This is really about making sure the project has a chance of success because the senior team (the ones paying the bill) knows where it’s going ahead of time.

This question, the WHERE, is all about scope. Believe me, defining the scope effectively is perhaps the most critical thing you can do. Get it wrong on the low side and you have budget issues, meaning you don’t have nearly enough money to do what your senior team thinks is going to get done. Budget too high and you may have an issue pushing the project through or getting approval in the first place.

Budgeting is much more of an art than a science. You need to understand how your organization gets things done to understand how you can finesse the economic discussion. A couple of questions to ask: Is this an enterprise deployment? Departmental? Regional? Most importantly, is everyone on board with that potential scope?

The last W is about understanding the timeline. What can/should be done first? This is where the concept of phases comes into play, especially if your budget is tight. How do you chunk up the project into smaller pieces that can be budgeted for separately? That usually makes a big number go down a bit easier.

The key is to make sure you have a firm understanding of the end goal, which is presumably an enterprise-wide deployment of a security management platform. You can get there in an infinite number of ways, depending on the project drivers, the budget, and the skill set you have at your disposal.

But you certainly can’t get there if you don’t ask these questions ahead of time and determine a logical strategic plan to get to where you need to be. Many projects fail from a lack of planning rather than a lack of execution. As long as all of your ducks are in a row when you start the process, you have a much better chance to get to the end of the process.

Or you can hope for a good outcome. I heard that’s a pretty dependable means of getting things done.

Industry News

Cyber-crime bigger threat than cyber-terror

Although the threat of cyber-terrorism exists, the greatest risk to Internet communication, commerce and security is from cyber-crime motivated by profit. Attacks have evolved from cracking passwords into vast coordinated attacks from thousands of hijacked computers for blackmail and theft.

SIEM in the days of recession

In October 2007 Gartner published a paper titled “Clients Should Prepare a ‘Recession Budget’ for 2008”. It suggested that IT organizations should be prepared to respond if a recession forces budget constraints in 2008. It’s still early in 2008, but the Fed appears to agree and has acted strongly by dropping key interest rates fast and hard.

Will this crimp your ability to secure funding for security initiatives? Vendor FUD tactics have been a bellwether, but fear-factor funding is waning for various reasons. These include:

* crying wolf
* the perceived small impact of breaches (as opposed to the dire predictions)
* the absence of a widespread, debilitating (9/11 style) malware attack
* the realization that most regulations (e.g., HIPAA) have weak enforcement

As an InfoSec professional, how should you react?

For one thing, understand what drives your business and align with it, as opposed to retreating into techno-speak. Accept that the company you work for is not in the business of being compliant or secure. Learn to have a business conversation about InfoSec with business people. These are people who care about terms such as ROI, profit, shareholder value, labor, assets, expenses and so on. Recognize that their vision of regulatory compliance is driven mainly by the bottom line. In a recession year, these things are more important than ever before.

For another thing, expect a cut in IT costs (it is after all most often viewed as a “cost-center”). This means staff, budgets and projects may be lost.

So how does a SIEM vendor respond? In a business-like way, of course: by pointing out that one major reason for deploying such solutions is to “do more with less”; to automate the mundane, thereby increasing productivity; to retain company-critical knowledge in policy so that you are less vulnerable to a RIF; and to avoid downtime, which hurts the bottom line.

And as Gabriel García Márquez observed, maybe it is possible to have Love in the Time of Cholera.

– Ananth

The role of host-based security

In the beginning, there was the Internet.
And it was good (especially for businesses).
It allowed processes to become web enabled.
It enabled efficiencies in both customer facing and supplier facing chains.

Then came the security attacks.
(Toto, I’ve got a feeling we’re not in Kansas any more).
And they were bad (especially for businesses).

So we firewalled.
And we patched.
And we implemented AV and NIDS.

Are we done then?
Not really.

According to a leading analyst firm, an estimated 70% of security breaches are committed from inside a network’s perimeter, and insider activity is responsible for more than 95% of intrusions that result in significant financial losses. As a result, nearly every industry is now subject to compliance regulations that can only be fully addressed by applying host-based security methods.

Security Information and Event Management systems (SIEM) can be of immense value here.

An effective SIEM solution centralizes event log information from various hosts and applies correlation rules to highlight (and ideally thwart) intrusions. In the case of the “insider” threat, exception reports and review of privileged user activity are critical.

If your IT Security efforts are totally focused on the perimeter and the internal network, you are likely to be missing a large and increasingly critical “brick in the wall”.

-Posted by Ananth

Are you worth your weight in gold?

Interesting article by Russell Olsen on Windows Change Management on a Budget

He says: “An effective Windows change management process can be the difference between life and death in any organization. Windows managers who understand that are worth their weight in gold…knowing what changed and when it changed makes a big difference especially when something goes wrong. If this is so clear, why do so many of us struggle to implement or maintain an adequate change control process?”

Olsen correctly diagnoses the problem as one of discipline and commitment. Like exercising regularly, it’s hard… but there is overwhelming evidence of the benefits.

The EventTracker SIM Edition makes it a little easier by automatically taking system (file and registry) snapshots of Windows machines at periodic intervals for comparison either over time or against a golden baseline.
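The underlying technique (take a snapshot, then diff it against a golden baseline) is generic and easy to sketch. This illustrates the idea, not EventTracker's implementation:

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to the SHA-256 of its contents."""
    result = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                result[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return result

def diff(baseline, current):
    """Report files that were added, removed, or changed since the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(baseline) & set(current) if baseline[p] != current[p]}
    return added, removed, changed

# Hashes shortened for readability; in practice these come from snapshot().
base = {"app.dll": "aaa", "conf.ini": "bbb"}
now = {"app.dll": "aaa", "conf.ini": "ccc", "new.exe": "ddd"}
print(diff(base, now))  # ({'new.exe'}, set(), {'conf.ini'})
```

Run snapshot() once to establish the golden baseline, store it, and diff each later snapshot against it; the registry variant is the same idea over keys and values instead of files.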

Given the gap between outbreak and vaccine for malware and attacks, as well as the potential for inadvertent human error when dealing with complex machinery, the audit function makes it all worthwhile. The CSI 2007 survey puts the average annual loss from such incidents at over $350,000.

Avoiding such losses (and regular exercise) will make you worth your weight in gold.

– Ananth

Threatscape 2008 Computer security survey results

Understanding where SIM ends and log management begins

In my travels, I tend to run into two types of security practitioners. The first I’ll call the “sailor.” These folks are basically adrift in the lake in a boat with many holes. They’ve got a little cup and they work hard every day trying to make sure the water doesn’t overcome the little ship and sink their craft.

The others I’ll call the “builders,” and these folks have gotten past the sailor phase, gotten their ship to port and are trying to build a life in their new surroundings. Thus, they are trying to lay the foundation for a strong home that can withstand whatever the elements have to offer.

Yes, there is a point to these crazy analogies. When you are talking about security management, the sailors don’t have a lot of time to worry about anything. They do the least amount necessary to keep whatever limited security defenses they have up and running. The idea of security information management, log management, configuration management or pretty much [anything] followed by the word management, just isn’t in their vernacular.

In this piece I’m going to focus on the builders. These folks are looking for something a bit more strategic now and they are asking questions like, “do I need SIM?” and “what about log management?” If you are in that camp, consider yourself lucky because many practitioners don’t get there.

To be clear, the title is a little bit disingenuous. I don’t really think that SIM ends and log management begins anywhere. All of these disciplines are coming together into a next generation security management PLATFORM, and based on these platforms I see a lot of security professionals finally starting to make some inroads. You know, more effectively managing their environments.

I don’t have the space to tell the full history of security management, so in a nutshell: the discipline has evolved from stand-alone consoles built specifically to manage a class of device (firewall, VPN, IPS, etc.) to a central console mentality. This has mapped cleanly to the evolution of most network security vendors’ product lines. They started as specialists focusing on one discipline (firewall, IPS, etc.) and have now broadened their offerings into integrated devices that offer multiple functions. Their management consoles reflect that.

But that doesn’t really solve most customers’ problem, which is that they’ve got a heterogeneous set of security devices and it’s neither time- nor resource-efficient to manage those devices separately. So an overlay management console, dubbed SIM (security information management), was built to integrate the data coming from these devices, correlate it, and then tell administrators what they need to focus on.

This was a bit better (although first-generation SIMs cost too much and took too long to deliver value), but it still didn’t address an emerging problem: the need for forensically clean information that could be used for compliance and incident investigations. Thus, a few years ago, the log management business was born.

Now many practitioners want the best of both worlds. The nerve of you folks! Basically, you want to be able to correlate operational data so you can react faster to imminent attacks, but also make sure the data is gathered and stored in a way that ensures it’s useful for investigations and compliance reporting.

The good news is that this isn’t too much to ask for, and a number of vendors are now bringing these next-generation security management platforms to market. What are some of the characteristics of these new offerings? Basically, I believe the PLATFORM must be built on a log management foundation.

Why? Because data integrity is paramount to ensuring the information will stand up in a court of law. So that means the log records (or any other gathered info like Netflow data or transactions) must be cryptographically signed and sequenced. This ensures the data hasn’t been tampered with and creates evidence that cannot be questioned, even by the savviest of vultures – I mean, defense attorneys.

You also want to make sure the data isn’t reduced. With first-generation SIMs, the vendors had no choice but to use data reduction techniques to get on top of the sheer volume of information. That is no longer necessary, thanks to the constant march of Moore’s Law on the technology industry. Now ALL of the data can be stored, and it should be – at least for a certain amount of time.

Finally you want to make sure the security management platform’s management environment will fit into your own personal workflow. That’s absolutely critical because you’ll have to live in this tool a large portion of every working day. Does it provide you with the ability to customize the environment and provide the information YOU need, not what the vendor thinks you need?

Sounds like a cool vision, no? It is, but it’s usually a pretty big project to get there. So I advocate a phased approach that allows you to focus on the problem you need to solve TODAY while building towards the future. It’s kind of like building a house: you may not need a pool today, but if that’s something you think you’d like, you’d better make sure there is space in the back yard to accommodate those plans.

That’s why I take a platform approach to building your security management environment. Take an application-centric approach, built on top of a common foundation (that’s the platform). SIM is an application. So is network behavior analysis and configuration management. These applications can be driven by the data stored in the platform and the platform can be extended to meet all of your requirements over time.

Industry News

2007 CSI computer security survey shows average loss shot up to over $350,000 due to security incidents

Other key findings:

– Financial fraud overtook virus attacks as the source of the greatest financial losses.
– Another significant cause of loss was system penetration by outsiders.
– Insider abuse of network access or e-mail edged out virus incidents as the most prevalent security problem, with 59 and 52 percent of respondents reporting each respectively.

Societe Generale: A cautionary tale of insider threats

The $7.2 billion in fraud against French banking giant Societe Generale wasn’t your garden variety cyber attack, but it illustrates an insider threat that gives IT pros nightmares.

FERC approves cyber security standard for power grid

Developed by the North American Electric Reliability Corp in 2006, the standard emphasizes log retention and review in sections R5.1.2, 6.4 and 6.5. Access a copy of the Cyber Security Standard for Systems Security Management here.

Protect your network from zero-day attacks

Selection criteria for pragmatic Log Management

As we wrap up our 6-month tour of Pragmatic Log Management, let’s focus on what are some of the important buying criteria that you should consider when looking at log management offerings. Ultimately, a lot of the vendors in the space have done a good job of making all the products sound the same. So really deciphering what differentiates one product versus another is an art form.

As you remember from the piece on Buying Log Management offerings, the objective is not necessarily to find the product that knocks all the other competitors out with a first-round TKO. If that happens, all the better, but in order to get maximum negotiating leverage, as well as to optimize your selection, you want to have a couple of companies on the long list. You also want to evaluate a handful of offerings, and ultimately start to negotiate with at least two.

I’m going to present a set of fairly generic criteria for your purchase. Ultimately these are just guidelines because what is important to you will likely be different. Or you will weigh some of the criteria differently. After studying this market from a number of different perspectives, I’ve simplified pages and pages of features into a couple of buckets that you absolutely need to be focused on:

  1. Connectivity/Integration
  2. Performance
  3. Data retrieval and analysis
  4. Reporting


Connectivity/Integration

Log management clearly has limited value if you are only able to log data from a subset of your networks, systems and/or applications. So first and foremost, your log management platform needs to easily support the data sources you need to log. Here are some of the considerations, in no specific order:

  • Syslog is the lowest common denominator of log formats. Basically almost everything I’ve come across supports some flavor of syslog-based logging. So at a minimum, you will look for your platform to be able to take syslog records.
  • Connectors – There are some products where you need to get a more granular level of information that can’t be provided by generic syslog records. You certainly don’t want to build (or support) these integrations yourself. So you should expect your log management vendor to provide a handful of the connectors that are most important to you.
  • Agents – Finally, when it’s impractical to build a connector, it may make more sense to have a little software that runs on the target system to pull the log information and transform it into a format that your log management platform is going to understand.

What’s going to differentiate solutions relative to connectivity/integration? It’s basically ease of integration and the breadth of support for the specific target systems that you need. How easy is it for you to add a new data source? How flexible is the data mapping, so you can put the right information in the right place? Don’t be fooled by a vendor crowing about having 10,000 connectors because that doesn’t matter if all of the leaders support the 25 targets that you really need.
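For a sense of what the lowest common denominator looks like on the wire, here is a minimal UDP syslog handler. The PRI parsing follows RFC 3164's facility*8 + severity encoding; everything else (the port choice, the lack of storage) is deliberately simplified:

```python
import re
import socketserver

PRI_RE = re.compile(r"^<(\d{1,3})>")

def parse_priority(message):
    """Split the syslog PRI field into (facility, severity); None if absent."""
    m = PRI_RE.match(message)
    if not m:
        return None
    pri = int(m.group(1))
    return pri // 8, pri % 8  # RFC 3164: PRI = facility * 8 + severity

class SyslogHandler(socketserver.BaseRequestHandler):
    """Print each datagram; a real collector would normalize and store it."""
    def handle(self):
        text = self.request[0].decode(errors="replace").strip()
        print(self.client_address[0], parse_priority(text), text)

# To run (5514 avoids the privileged port 514):
#     socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler).serve_forever()

print(parse_priority("<34>Oct 11 22:14:15 host su: auth failure"))  # (4, 2)
```

Real syslog traffic is messier than this (inconsistent timestamps, missing PRI fields, multi-line payloads), which is exactly why vendor-supplied connectors and agents earn their keep for the sources that matter to you.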


Performance

The secret to effective log management is to gather ALL of the data. The aspects of log management (LM) that add value to investigations and compliance are seriously diminished if the integrity or completeness of the data can be questioned. Thus, it’s pretty important that your log management platform NOT drop data all over the floor. So you need a solution that will scale to your requirements. But here is the trick: you need to understand what your requirements REALLY are.

Everyone wants to think they need 100,000 rps (records per second). But do you? Really? The way to figure this out is through the evaluation. Maybe put up a syslog server and start directing traffic there to get a feel for the data flow and quantity. Of course, you’ll be taking a rough guess, and that’s OK. The point of the eval is to make sure the guess isn’t too rough and that you don’t end up with an environment that cannot scale to peak load.
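As a sketch of that sizing exercise, assuming you have captured records from a test syslog server, you could tally records per second and compare the average against the peak. The timestamp slicing below assumes RFC 3164-style records and is purely illustrative.

```python
from collections import Counter

# Rough sizing sketch: tally records-per-second from captured syslog lines
# and report the average versus the peak. Peak load, not average load, is
# what your platform must sustain without dropping data.
def rps_profile(lines):
    per_second = Counter()
    for line in lines:
        # assume an RFC 3164 timestamp like "Jan 12 06:30:00" after the <PRI>
        ts = line.split(">", 1)[-1][:15]
        per_second[ts] += 1
    counts = list(per_second.values())
    return sum(counts) / len(counts), max(counts)

sample = [
    "<34>Jan 12 06:30:00 fw01 deny tcp ...",
    "<34>Jan 12 06:30:00 fw01 deny tcp ...",
    "<34>Jan 12 06:30:01 web01 GET /index ...",
]
avg, peak = rps_profile(sample)
```

Even on a toy sample the point is visible: the peak is higher than the average, and it is the peak number you should test vendors against.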

The other thing to be wary of is that most vendors are a little optimistic in terms of their real throughput numbers. DO NOT make buying decisions based on the scalability numbers written on marketing materials. You need to try it for yourself, or suffer the consequences when you need double the number of logging devices to meet your needs.

Data Retrieval and Analysis

Data analysis is where some of the vendors start to really differentiate themselves. Basically, if you think back to the 6 months of the Pragmatic Log Management Series, you know there are 3 different aspects of log management: Operations, Investigations and Compliance.

In terms of operations, you want to make sure you can take the data from the LM system and gather some operational data from it. During the eval, make sure you understand how you’d pinpoint an issue (REACT FASTER) within each vendor’s interface and figure out how to find the areas for deeper investigation to isolate the problem.

You want to see the pre-built dashboards and play around a bit with the custom views that can be built within the interface. Make sure you are comfortable with the customization process and ensure it’ll be easy for you to do, since once you buy the product – don’t expect the vendor to spend a lot of time customizing it for you (unless you bring your checkbook for lots of additional consulting).

Since investigations are also a key part of LM, you want to make sure the system provides adequate filtering, correlation and drill down capabilities on historical data. It’s important to store the data in a forensically sound format, so scrutinize the approach the vendor has for that and make sure that it’ll stand up in court. The idea of being able to “play back” a scenario based on log data is a very nice feature that can help during an active investigation.
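As an illustration of the kind of drill-down query any candidate system should make easy, here is a toy filter over historical records. The record shape and field names are assumptions for the example, not any vendor's schema.

```python
# Drill-down sketch: filter historical records by host and keyword, the kind
# of query you should be able to run quickly in any candidate's interface.
def drill_down(records, host=None, keyword=None):
    hits = []
    for r in records:
        if host and r["host"] != host:
            continue
        if keyword and keyword not in r["message"]:
            continue
        hits.append(r)
    return hits

history = [
    {"host": "web01", "message": "login failure for admin"},
    {"host": "web01", "message": "GET /index.html 200"},
    {"host": "db01",  "message": "login failure for sa"},
]
suspects = drill_down(history, host="web01", keyword="login failure")
```

The evaluation question is not whether a product can do this, but how many clicks (and how much waiting) it takes against months of stored data.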

This is also when you want to check with references about REAL live instances of a customer using the LM system to isolate and find an attack or incident. You also want to talk to a customer that has used the data in both an audit and a legal proceeding. If the vendor hasn’t done that (or can’t provide it for you), then they probably aren’t the right vendor.


Reporting

Finally, the last aspect of LM selection is making sure you have the reports you need, for both operations and compliance. What does that mean? In a nutshell, you want to be able to clearly show how deployed controls meet the requirements of certain standards. This is most easily facilitated if the reports are grouped by functional category and the data is then mapped to those categories. To manage expectations: no product is going to pump out a generic report that exactly meets your needs. But you want to have 80% of those requirements met, as opposed to 20%.

It’s also very helpful for the vendors to take those categories (mentioned above) and associate them with specific regulations. For example, you should have a PCI oriented report that distinctly shows how the firewall records substantiate that you have a firewall in place to protect access to cardholder data.

Another key requirement from the reporting engine is trending analysis. From an operational standpoint, you want to be able to see your baseline behavior and then compare different timeframes to that baseline. This will help you to REACT FASTER to emerging issues.
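A minimal sketch of that baseline comparison follows; the 2x deviation threshold and the sample counts are illustrative assumptions, and a real trending engine would use something more statistical than a fixed multiplier.

```python
# Trending sketch: flag a timeframe whose event count strays too far from
# the baseline average, so you can react faster to an emerging issue.
def deviates_from_baseline(baseline_counts, current_count, factor=2.0):
    baseline_avg = sum(baseline_counts) / len(baseline_counts)
    return current_count > baseline_avg * factor

weekly_failures = [12, 9, 14, 11]   # counts from baseline weeks
spike = deviates_from_baseline(weekly_failures, 40)     # well above baseline
normal = deviates_from_baseline(weekly_failures, 15)    # within normal range
```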

Finally, you will likely need to customize the reports. So similar to getting comfortable with the process to customize the interface, you need to understand and like the process to tune your reports. The vendor will provide a set of pre-built reports and that’s a start, but if you need a PhD to generate a new report, it may not be the right product for you.

So that’s it, I think those are the 4 key selection criteria for a log management platform. To reiterate, your specific requirements will vary a bit, but if you pay attention to the Big 4, you’ll likely make a log management decision that you’ll be happy with for many years.

Industry News

Data loss prevention trends to watch in 2008
No doubt about it, 2007 was the year that high profile data breaches splashed across the front pages with as much sensation as paint on a Jackson Pollock canvas. Experts say this is just the tip of the iceberg…

Nugache worm kicking up a storm 

Although the infamous Storm worm enters 2008 with a reputation as the world’s most dangerous botnet, security experts say there’s an up-and-comer called Nugache that could give it a run for its money.

Protect your network from zero-day attacks

EventTracker offers a unique combination of event log analysis and change detection for a comprehensive SIEM solution that is capable of detecting all forms of cyber attacks including:
• Those that are readily recognized (100 login failures between 2-3 am)
• Those that are known but not easily recognized or obvious (http traffic from a server that should not have any)
• Zero-day attacks that are so new that threat profiles have not yet been created (such as the Storm and Nugache worms doing the rounds these days)

Why words matter…or do they?

Well, this is starting to turn into a bit of a bun fight, which was not my intent, as I was merely attempting to clarify some incorrect claims in the Splunk post. Now Anton has weighed in with his perspective:

“I think this debate is mostly about two approaches to logs: collect and parse some logs (typical SIEM approach) vs collect and index all logs (like, ahem, “IT search”).”

Yes, he is right in a sense. It is a great, concise statement, but it deserves a closer look, as there are some nuances here that need to be understood.

Just a bit of level-setting before going to work on the meat of the statement.

Most SIEM solutions today have a real-time component (typically a correlation engine) and some kind of analytics capability. Depending on the vendor, some do one or the other better (and of course we all package and price them differently).

Most of the “older” vendors started out as correlation vendors targeting the F2000, enabling real-time threat detection in the SOC. The analytics piece was a bit of a secondary requirement, and secure, long term storage not so much at all. The Gartner guys called these vendors SEM, or Security Event Management, providers, which is instructive – event to me implies a fairly short-term context. Since 2000, the analytics and reporting capability has become increasingly important as compliance has become the big driver. Many of the newer vendors in the SIEM market focused on solving the compliance use-case, and these solutions typically featured secure, long term storage, compliance packs, good reporting, etc. These newer vendors were sometimes referred to as SIM, or Security Information Management, providers. They fit a nice gap left in the capabilities of the correlation vendors. Some, like Loglogic, made a nice business focusing on selling log collection solutions to large enterprises – typically as an augmentation to an existing SIM. Others, like Prism, focused on the mid tier and provided lower-cost, easy to deploy solutions that did compliance as well as providing real-time capabilities to companies that did not have the money or the people to afford the enterprise correlation guys. These companies had a compliance requirement and wanted to get some security improvements as well.

But really, all of us – SIM/SEM, enterprise, mid-tier, Splunk – were/are collecting the same darn logs; we were just doing slightly different things with them. So of course the correlation guys have released log aggregators (like Arcsight Logger), and the Log Management vendors have added (or always had) real-time capability. And at the end of the day we ended up getting lumped into the SIEM bucket, and here we are.

For anyone with a SIEM requirement: understand what your business requirements are, and then look long and hard at each vendor’s capability – preferably by getting them in house to do an evaluation in your own environment. Buying according to which one claims to do the most events per second, or supports the most devices, or even which one has the most mindshare in the market, is really short sighted. Nothing beats using the solution in action for a few weeks; this is a classic case of “the devil is in the details.”

So, back to Anton’s statement (finally!). When Anton refers to “collect and parse some logs” that is the typical simplification of the real-time security use case – you are looking for patterns of behavior and only certain logs are important because you are looking for attack patterns in specific event types.
The “collect and index all the logs” is the typical compliance use case. The indexing is simply the method of storing for efficient retrieval during analysis – again a typical analytics requirement.
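As a toy illustration of the “collect and index” approach, an inverted index maps each token to the records that contain it, so retrieval during analysis does not require re-parsing the raw data. This is a sketch of the general technique, not any vendor’s actual storage scheme.

```python
from collections import defaultdict

# "Collect and index" sketch: map each token to the set of record numbers
# that contain it, so a search becomes a set intersection instead of a scan.
def build_index(records):
    index = defaultdict(set)
    for i, rec in enumerate(records):
        for token in rec.lower().split():
            index[token].add(i)
    return index

logs = [
    "Jan 12 fw01 deny tcp 10.1.1.5",
    "Jan 12 web01 login failure admin",
    "Jan 13 web01 login success admin",
]
idx = build_index(logs)
matches = idx["login"] & idx["failure"]   # records containing both tokens
```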

Another side note: whether to collect all the logs is a risk assessment that the end user should make. Many people tend to collect “all” the logs because they don’t know what is important, and it is deemed the easiest and safest approach. The biggest beneficiaries of that approach are the SIEM appliance vendors, as they get to sell another proprietary box when the event volume goes through the roof – and, of course, those individuals who hold stock in EMC. Despite compression, a lot of logs is still a lot of logs!

Increasingly, customers I talk to are making a conscious decision not to collect or retain all the logs, since there is overhead, and a security risk, in storing what they consider sensitive data. Quite frankly, you should look for a vendor that allows you to collect all the data but also provides fairly robust filtering capability in case you don’t want or need to. This is a topic for another day, however.

So when Anton claims that you need to do both – if you want to do real-time analysis as well as forensics and compliance – then yes, I agree. But when he claims that “collect and parse” is the typical SIEM approach, that is an overgeneralization, which really was the purpose of my post to begin with. I tend not to favor overgeneralizations, as they simply misinform the reader.

– Steve Lafferty

More thoughts on SIEM vs. IT Search

I posted a commentary a while ago on a post by Raffy, who discussed the differences between IT Search (or Splunk, as they are the only folks I know who are trying to make IT Search a distinct product category) and SIEM. Raffy posted a clarification in response to my commentary. What I was pointing out in my original post was that all vendors, SIEM or Splunk, are loading the same standard formats – and what needed to be maintained was, in fact, not the basic loader, but the knowledge (the prioritization, the reports, alerts, etc.) of what to do with all that data. And the knowledge is a core part of the value that SIEM solutions provide. On that we seem to agree. And as Raffy points out, the Splunk guys are busily beavering away producing knowledge as well. Although be careful — you may wake up one morning and find that you have turned into a SIEM solution!

Sadly, the concept of the bad “parser” or loader continues to creep in – Splunk does not need one, which is good; SIEM systems do, which is bad.

I am reasonably familiar with quite a few of the offerings out there for doing SIEM/log management, and quite frankly, outside of perhaps Arcsight (I am giving Raffy the benefit of the doubt here as he used to work at Arcsight, so he would know better than I), I can’t think of a vendor that writes proprietary connectors or parsers to simply load raw data. We (EventTracker) certainly don’t. From an engineering standpoint, when there are standard formats like Windows EVT, Syslog and SNMP it would be pretty silly to create something else. Why would you? You write them only when there is a proprietary API or data format like Checkpoint where you absolutely have to. No difference here. I don’t see how this parser argument is in any way, shape or form indicative of a core difference.

I am waiting on Raffy’s promised follow-on post with some anticipation – he states that he will explain the many other differences between IT Search and SIEM, although he prefaced some of it with “Splunk is Google-like, and Google is God, ergo…”

Google was/is a game-changing application, and there are a number of things that made it unique – easy to use, fast, and able to return valuable information. But what made Google a gazillion dollar corporation is not the natural language search – I mean, that is nice, but simple “and” “or” “not” is really not a breakthrough in the grand scheme of things. Now, the speed of the Google search is pretty impressive – but that is due to enormous server farms, so that is mechanical. Most of the other early internet search vendors had both of these capabilities. My early personal favorite was AltaVista, but I switched a long time ago to Google.

Why? What absolutely blew my socks off, and continues to do so to this day, is Google’s ability to figure out which of the 10 million entries for my arbitrary search string are the ones I care about, and to provide them, or some of them, to me in the first hundred entries. They find the needle in the proverbial haystack. Now that is spectacular (and highly proprietary), and the ranking algorithm is a closely guarded secret, I hear. Someone once told me that a lot of it is done around ranking from the millions of people doing similar searches – it is the sheer quantity of search users on the internet. The more searches they conduct, the better they become. I can believe that. Google works because of the quantity of data and because the community is so large – and they have figured out a way to put the two together.

I wonder how an approach like that would work however, when you have a few admins searching a few dozen times a week. Not sure how that will translate, but I am looking forward to finding out!

– Steve Lafferty

Security or compliance?

Mid-size organizations continue to be tossed on the horns of the Security/Compliance dilemma. Is it reasonable to consider regulatory compliance a natural benefit of a security focused approach?

Consider why regulatory standards came into being in the first place. Some like PCI-DSS, FISMA and DCID/6 are largely driven by security concerns and the potential for loss of high value data. Others like Sarbanes-Oxley seek to establish responsibility for changes and are an incentive to blunt the insider threat. Vendor provided Best Practices have come about because of concerns about “attack surface” and “vulnerability”. Clearly security issues.

While large organizations can establish dedicated “compliance teams”, the high cost of such an approach precludes it as an option for mid tier organizations. If you could only have one team and effort and had to choose, it’s a no-brainer: security wins. Accordingly, such organizations naturally fold compliance efforts into the security teams and budgets.

While this is a reasonable approach, recognize that some compliance regulations are more auditor and governance related, and a strict security view is a misfit. An adaptation is to transition the ownership of tools and their use from the security team to the operational team.

The classic approach for mid-size organizations to the dilemma — start as a security focused initiative, transition to the operations team.

– Ananth 

Did you know? PCI-DSS forbids storage of CVV

A recent Ecommerce Checkout Report stated that “55% of the Top 100 retailers require shoppers to give a CVV2, CID, or CVC number during the checkout process.” That’s great for anti-fraud and customer verification purposes, but it also creates a high level of risk around inappropriate information storage.

To clarify, the CVV (Card Verification Value) is actually part of the magnetic track data on the card itself. The CVV2/CVC2/CID is the 3 or 4 digit code on the back of the signature strip of a credit or debit card (or on the front of American Express cards).

The Payment Card Industry Data Security Standard (PCI DSS) clearly states that there are three pieces of data that may not be stored after authorization is complete (regardless of whether you are handling card-present or card-not-present transactions):

  1. Magnetic stripe data (Track 1 or 2)
  2. PIN block data (and, yes, this means ‘encrypted PIN block’ too)
  3. CVV2/CVC2/CID
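One way to sketch a pre-storage check against those three items is to refuse to persist any record carrying them. The field names here are illustrative assumptions for the example, not PCI-mandated names.

```python
# Sketch of a pre-storage guard: refuse to persist a transaction record that
# contains any of the three forbidden data elements. Field names are
# illustrative assumptions, not names defined by PCI DSS.
FORBIDDEN_FIELDS = {"track_data", "pin_block", "cvv2"}

def safe_to_store(transaction_record):
    # disjoint means none of the forbidden fields appear in the record
    return FORBIDDEN_FIELDS.isdisjoint(transaction_record)

ok = safe_to_store({"pan_truncated": "411111******1111", "auth_code": "123456"})
bad = safe_to_store({"pan_truncated": "411111******1111", "cvv2": "987"})
```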

– Ananth

Difference between IT search and Log Management

Came across an interesting blog entry by Raffy at Splunk. As a marketing guy, I am jealous, as they are generating a lot of buzz about “IT Search”. Splunk has led a lot of knowledgeable people to wonder how this is different from what all the log management vendors have been providing.

Still, while Raffy touched on one of the real differences between IT Search and Log Management, he left a few salient points out of the discussion of a “connector”: how a connector puts you at the mercy of the vendor to produce it, and what happens when the log data format changes.

Let’s step back — at the most basic level in log management (or IT Search, for that matter) you have to do 2 fundamental things: 1) help people collect logs from a mess of different sources, and 2) help them do interesting things with those logs. The “do interesting things” includes the usual stuff like correlation, reporting, analytics, secure storage, etc.

You can debate fiercely the relative robustness of collection architectures – and there are a number of differences you should look at if you are evaluating vendors. For the sake of this discussion, however, most any log management system worth its salt will have a collection mechanism for all the basic methods – if you handle (in no particular order) ODBC, Syslog, the Windows event format, maybe SNMP, and throw in a file reader for custom applications, well, you have collection pretty much covered.

The reality is, as Raffy points out, there are a few totally proprietary access methods for getting logs, like Checkpoint’s. But it is far easier for a system or application vendor to implement one of the standard methods, so getting access to the raw logs in some way, shape or form is straightforward.

So here is where the real difference between IT search and Log Management begins.

Raffy mentions a small change in the syslog format causing the connector to break. Well, syslog is a standard, so if the change would not break a standard syslog receiver, what it actually means is that the syslog format has not changed but the content of the message has.

Log Management vendors provide “knowledge” about the logs beyond simple collection.

Let’s make an analogy – IT Search is like the NSA collecting all of the radio transmissions in all of the languages in the entire world. Pretty useful. However, if you want to make sense of the Russian ones, you hire your Russian expert; for Swahili, your Swahili expert; and so on. You get the picture.

Logs are like languages — and the fact of the matter is the only thing that is the same about logs is that the content is all different. If you happen to be an uber-log weenie and you understand the format of 20 different logs, simple IT Search is really powerful. If you are only concerned about a single log format like Windows (although Windows by itself is pretty darn arcane), IT Search can be a powerful tool. If you are like the rest of us, whose entire lives are not spent understanding multiple log formats, or who get really rusty because we often don’t get exposed to certain formats, well, it gets a little harder. What Log Management vendors do is help you (as the user) out with the knowledge – rules that categorize important event logs from unimportant ones, alerts, and reports that are configured to look for key words in the different log streams. How this is done differs from vendor to vendor – some normalize, i.e. translate logs into a standard canonical format; others don’t. And this knowledge is what can conceivably get out of date.
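A toy sketch of that normalization idea: two native formats that describe the same kind of event get translated into one canonical record. The regex patterns and field names are illustrative assumptions, not any vendor's actual rules.

```python
import re

# Normalization sketch: translate different native log formats into one
# canonical record so downstream rules and reports work on a single shape.
# Patterns and field names below are illustrative assumptions.
def normalize(line):
    m = re.match(r"sshd\[\d+\]: Failed password for (\S+)", line)
    if m:
        return {"category": "login_failure", "user": m.group(1), "source": "unix"}
    m = re.match(r"EventID=529 .*Account Name: (\S+)", line)
    if m:
        return {"category": "login_failure", "user": m.group(1), "source": "windows"}
    return {"category": "unknown", "raw": line}

a = normalize("sshd[812]: Failed password for root from 10.1.1.5")
b = normalize("EventID=529 Logon Failure Account Name: root")
```

Two very different-looking records end up in the same category with the same user field, which is exactly the "knowledge" that has to be maintained as formats evolve.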

In IT Search, there is no possibility for anything to get out of date mainly because there is no knowledge, only the ability to search the log in its native format. Finally, if a Log Management vendor is storing the original log and you can search on it, your Log Management application gives you all the capability of IT Search.

Seems to me IT Search is much ado about nothing…

– Steve Lafferty

Defining SIM/SEM Requirements

The rational approach to pretty much any IT project is the same: define the requirements, evaluate solutions, do a pilot project, implement/refine, and operationalize.

Often you win or lose early at requirements gathering time.

So what should you keep in mind while defining requirements for a Security Information and Event Management (SIEM) project?

Look at it in two ways:

  1. What are the trends that you (and your peers) have seen and experienced?
  2. What are the experts saying?

Well, for ourselves, we see a clear increase in attacks from the outside. These are increasingly sophisticated (which is expected, I guess, since it’s an arms race) and disturbingly indiscriminate. Attacks seem to be launched merely because we exist on the Internet and have connectivity, and disconnecting from the Internet is not an option.

We see attacks that we recognize immediately (100 login failures between 2-3 AM). We see attacks that are not so obvious (http traffic from a server that should not have any). And we see the almost unrecognizable zero-day attacks. These appear to work their way through our defenses and manifest as subtle configuration changes.

Of the expert prognosticators, we (like many others) find that the PCI-DSS standard is a good middle ground between loosely defined guidelines (HIPAA anyone?) and vendor “Best Practices”.

The interesting thing is that PCI-DSS requirements seem to match what we see. Section 10 speaks to weaponry that can detect (and ideally remediate) the attacks and Section 11.5 speaks to the ability to detect configuration changes.

It’s all SIEM, in the end.

So what are the requirements for SIEM?

  1. Gather logs from a variety of sources in real-time
  2. The ability to detect (and ideally remediate) well recognized attacks in real-time
  3. The ability (and more importantly habit) to extract value from raw logs for the non-obvious attacks
  4. The ability to detect configuration changes to the file and registry level for those zero-day attacks
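The second requirement above, detecting a well-recognized attack like 100 login failures between 2 and 3 AM, can be sketched as a simple threshold rule. The event shape, window, and threshold here are illustrative assumptions, not any product's implementation.

```python
from datetime import datetime

# Threshold-rule sketch: count login failures inside a time window and alert
# when the count crosses a threshold. Event shape is assumed for illustration.
def login_failure_alert(events, start_hour=2, end_hour=3, threshold=100):
    hits = [
        e for e in events
        if e["type"] == "login_failure" and start_hour <= e["time"].hour < end_hour
    ]
    return len(hits) >= threshold

events = [{"type": "login_failure", "time": datetime(2008, 1, 5, 2, 15)}] * 120
alert = login_failure_alert(events)
```

The non-obvious and zero-day cases in requirements 3 and 4 need more than a counter, which is why raw log retention and configuration change detection sit alongside rules like this.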

As the saying goes — well begun is half done. Get your requirements correct and improve your odds of success.


Failed your security audit? Recover with a 5-step checklist

Buying a Pragmatic Log Management Solution

Over the past 4 months, we’ve discussed many of the reasons that log management is critical. To quickly review: log management can help you react faster from an operational aspect, so you can pinpoint an incident and remediate any issues well ahead of a significant loss. Secondly, log management helps in the event of an incident by providing rock-solid evidence to investigate a breach and hopefully bring the perpetrator to justice. Finally, log management also gathers data and can present it in a way that facilitates your compliance efforts.

That is all well and good, but what do you do when you decide it’s time to buy a solution? Do you just go down to your local computer superstore and pick up a log management platform off the shelf? Right, probably not. Moreover, you are the shepherd of corporate assets, so you need to buy in the most cost effective and efficient manner possible, while ensuring you meet the requirements of your company.

I’ve been working with organizations of all sizes for the better part of the past 15 years on more effectively buying products. I’ve distilled that knowledge into a specific buying process for all security products, and it definitely applies to log management as well. It’s really focused on making sure you are in control of the purchase process, ensuring that what you are buying will solve your BUSINESS problem. Here is the 8 Step Security Incite Buying Security Products (BSP) process:

  • Step 1: Clean Your Own House – It’s your responsibility, as the buyer, to know what you need to buy and why you are buying it. Vendors will try to create a buying catalyst when they contact you, but that is like pushing on a string. To buy something correctly, you’ve got to have a budget and an approved project AHEAD of time.
  • Step 2: Assemble the “Team” – If you are lucky enough to have resources, you want to assemble a team to drive the project. You’ll need a leader (someone who ultimately accepts accountability for the success of the project) and probably a technical team to do the actual evaluation.
  • Step 3: Educate – An educated buyer is the best buyer (whether the vendors admit this or not). So this step in the process is to give you (and maybe your project team) a broad understanding of the problem you are trying to solve and some best practices for how to solve it. The objective is not to learn 100% of what you need to know, which would take too long. It’s to get to maybe 75% knowledge and a pretty good understanding of what you don’t know.
  • Step 4: Engage – At this point, you know what you need to buy and you have a good understanding of the industry, so you can now approach vendors and/or resellers to start the actual procurement process. As we dive down into Step 4, a major topic will be developing the long list. This is where you also consider doing a formal RFI/RFP process, if your organization requires that kind of documentation.
  • Step 5: The Bake-off – Depending on the amount of lab resources (and the criticality of the project), you’ll want to test a few of the products on the long list. Probably not all of them, but more than two. I know, resources are precious, why test more than two? Well, you’ll have to wait for Step 5 to learn that.
  • Step 6: The Short-list – Most people think the short list is determined before the bake-off. Well, think again. Vendors make the short list if the lab evaluation shows that their product will meet your requirements and solve your business problem. Again, you want to have at least 2 vendors on the short list at this point, and then you can have some fun.
  • Step 7: Negotiation – Ah, my favorite part of the whole process. If you’ve done the job right, you have at least two vendors that can get the job done, so now you pit them against each other and watch the fireworks. Artfully done, you can save 50% off the initial bids because at this point, the vendors have invested enough in the deal that they don’t want to lose.
  • Step 8: Selection – As much fun as it is to see two (or more) vendors locked in a death struggle, eventually you’ll need to make a decision. With the correct process in place, the selection is easy. You’ll feel very good about one of the vendors and you’ll get the deal done. The other vendor(s) will be disappointed at the end of the process, but that’s life in the big city. As long as YOU feel good about the purchase, you’ve done your job.

So what is different for log management? Not much. You want to understand your problem and drivers. You want to learn about the market (which is probably why you are reading this in the first place). And then you want to figure out who can solve your problem. Those steps are pretty universal.

The reality is the log management market is very crowded, and it’s only going to get more crowded. I read about new vendors entering the space almost every week. But remember, you are buying quality, not quantity. Your objective is to find a number of providers that can meet your needs, then take a look and find out if the product/service will work in YOUR environment. That’s what the evaluation is for. Then you get to your short list and you start to negotiate. It’s pretty straightforward at that point. You know which products will meet the need, and then it’s about picking the best fit from a company and economic standpoint. Depending on your requirements, price may be a more significant driver, or maybe deployment services or flexibility. There is no generic “right” answer; it’s about meeting the needs of your organization.

A lot of folks let the procurement process get away from them. Using the BSP process you can stay in control and buy the best log management solution for the best price from a vendor that is going to keep you delighted. The process has been built to make sure that’s the case.

Featured Whitepaper

10 reasons why EventTracker is your best choice for an event log management solution

Industry News

2007 Security by the numbers

Phishing, spam, bot networks, trojans, adware, spyware, zero-day threats, data theft, identity theft, credit card fraud… cybercrime isn’t just becoming more prevalent, it’s getting more sophisticated and subtle every day. At least that’s the conclusion suggested by recent threat reports from major industry players and government organizations.

TJX settles with banks for $41 million

More than 100 million account records were breached, retail giant reveals. TJX Companies has reached an agreement with Visa USA by which it will establish a $40.9 million fund for banks whose credit cards were exposed in the retailer’s mammoth security breach earlier this year. The settlement is TJX’s second in a series of lawsuits arising from the breach, in which years of credit card records were exposed.

The human element in IT security

In the last six months in the U.S., nearly 40 percent of firms surveyed by the Computing Technology Industry Association reported a major IT security breach. How many of these could have been prevented by considering the human element in the workplace?

So you failed a security audit, now what?

Learn why you failed and how to recover with this 5-step checklist