New Bill Promises to Penalize Companies for Security Breaches

On September 22, the Senate Judiciary Committee approved and passed Sen. Richard Blumenthal’s (D, Conn.) bill, the “Personal Data Protection and Breach Accountability Act of 2011,” sending it to the Senate floor. The bill would penalize companies for online data breaches and was introduced on the heels of several high-profile security breaches and hacks that affected millions of consumers. These included the Sony breach, which compromised the data of 77 million customers, and the DigiNotar breach, which resulted in 300,000 Google Gmail account holders having their mail hacked and read. The measure addresses companies that hold the personal information of more than 10,000 customers and requires them to put privacy and security programs in place to protect the information, and to respond quickly in the event of a security failure.

The bill proposes that companies be fined $5,000 per day per violation, with a maximum of $20 million per infringement. Additionally, companies that fail to comply with the data protection law (if it is passed) may be required to pay for credit monitoring services and be subject to civil litigation by the affected consumers. The bill also aims to increase criminal penalties for identity theft, as well as for crimes such as installing a data collection program on someone’s computer and concealing a security breach in which personal data is compromised.

Key provisions in the bill include a process to help companies establish appropriate minimum security standards, notification requirements, information sharing after a breach, and company accountability.

While the intent of the bill is admirable, the problem is not a lack of laws to deter breaches, but the insufficient enforcement of these laws. Many of the requirements espoused in this new legislation already exist in many different forms.

SANS is the largest source for information security training and security certification, and its position is that we don’t need an extension to the Federal Information Security Management Act of 2002 (FISMA) or other compliance regulations, which have essentially encouraged a checkbox mentality: “I checked it off, so we are good.” This is the wrong approach to security, but companies get rewarded for checking off criteria lists. Compliance regulations do not drive improvement. Organizations need to focus on the actual costs of not being compliant:

  • Loss of consumer confidence: Consumers will think twice before they hand over their personal data to an organization perceived to be careless with that information, which can lead to a direct hit on sales.
  • Increased costs of doing business as with PCI-DSS: PCI-DSS is one example where enforcement is prevalent, and the penalties can be stringent. Merchants who do not maintain compliance are subject to higher rates charged by VISA, MasterCard, etc.
  • Negative press: One need only look at the recent data breaches to consider the continuing negative impact on the compromised company’s brand and reputation. In one case (DigiNotar), the company folded.

The gap does not exist in the laws, but rather in the enforcement of those laws. Until there is enforcement, any legislation or requirement is a hollow threat.

Top 10 Pitfalls of Implementing IT Projects

It’s a dirty secret: many IT projects fail, maybe even as many as 30% of all IT projects.

Amazing, given the time, money and mojo spent on them, and the seriously smart people working in IT.

As a vendor, it is painful to see this. We see it from time to time (often helplessly from the sidelines), we think about it a lot, and we’d like to see it eliminated along with malaria, cancer and other “nasties.”

They fail for a lot of reasons, many of them unrelated to software.

At EventTracker we’ve helped save a number of nearly-failed implementations, and we have noticed some consistency in why they fail.

From the home office in Columbia MD, here are the top 10 reasons IT projects fail:

10. “It has to be perfect”

This is the “if you don’t do it right, don’t do it at all” belief system. With this viewpoint, the project lead person believes that the solution must perfectly fit existing or new business processes. The result is a massive, overly complicated implementation that is extremely expensive. By the time it’s all done, the business environment has changed and an enormous investment is wasted.

Lesson: Value does not mean perfection. Make sure the solution delivers value early and often, and let perfection happen as it may.

9. Doesn’t integrate with other systems

In almost every IT shop, “seamless integration with everything” is the mantra. Vendors tout it, management believes it, and users demand it. In other words, to be all things to all people, an IT project cannot exist in isolation. Integration has become a key component of most IT projects; a solution can’t exist alone anymore.

Lesson: Examine your needs for integration before you start the project. Find out if there are pre-built tools to accomplish this. Plan accordingly if they aren’t.

8. No one is in charge, everyone is in charge

This is the classic “committee” problem. The CIO or IT Manager decides the company needs an IT solution, so they assign the task of getting it done to a group. No one is accountable, no one is in charge. So they deliberate and discuss forever. Nothing gets done, and when it does, no one makes sure it gets driven into the organization. Failure is imminent.

Lesson: Make sure someone is accountable in the organization for success. If you are using a contractor, give that contractor enough power to make it happen.

7. The person who championed the IT solution quits, goes on vacation, or loses interest

This is a tough problem to foresee because employees don’t usually broadcast their departure or disinterest before bailing. The bottom line is that if the project lead leaves, the project will suffer. It might kill the project if no one else is up to speed. It’s a risk that should be taken seriously.

Lesson: Make sure that more than just one person is involved, and keep an interim project manager shadowing the lead and up to date.

6. Drive-by management

IT projects are often as much about people and processes as they are about technology. If the project doesn’t have consistent management support, it will fail. After all, if no one knows how or why to use the solution, no one will.

Lesson: Make sure you and your team have allocated time to define, test, and use your new solution as it is rolled out.

5. No one thought it through

One day someone realized, “hey, we need a good solution to address the compliance regulations and these security gaps.” The next day someone started looking at packages, and a month later you bought one. Then you realized that there were a lot of things this solution affects, including core systems, routers, applications and operations processes. But you’re way too far down the road on a package and have spent too much money to switch to something else. So you keep investing until you realize you are dumping money down a hole. It’s a bad place to be.

Lesson: Make sure you think it all through before you buy. Get support. Get input. Then take the plunge. You’ll be glad you did.

4. Requirements are not defined

In this all-too-common example, halfway through a complex project, someone says “we actually want to rework our processes to fit X.” The project team looks at what they have done, realizes it won’t work, and completely redesigns the system. It takes three months. The project goes over budget. The key stakeholder says “hey, this project is expensive, and we’ve seen nothing of value.” The budget vanishes. The project ends.

Lesson: Make sure you know what you want before you start building it. If you don’t know, build the pieces you do, then build the rest later. Don’t build what you don’t understand.

3. Processes are not defined

This relates to #4 above. Sometimes requirements are defined, but they don’t match good processes, because these processes don’t exist. Or no one follows them. Or they are outdated. Or not well understood. The point is that the solution is computer software: it does exactly what you tell it, the same way every time, and it’s expensive to change. Sloppy processes are impossible to capture in software, making the solution more of a hindrance than a help.

Lesson: Only implement and automate processes that are well understood and followed. If they are not well understood, implement them in a minimal way and do not automate until they are well understood and followed.

2. People don’t buy in

Any solution with no users is a very lonely piece of software. It’s also a very expensive use of 500 MB on your server. Most IT projects fail because they just aren’t used by anyone. They become a giant database of old information and spotty data. That’s a failure.

Lesson: Focus on end user adoption. Buy training. Talk about the value that it brings your customers, your employees, and your shareholders. Make usage a part of your employee review process. Incentivize usage. Make it make sense to use it.

1. Key value is not defined

This is by far the most prevalent problem in implementing IT solutions: Businesses don’t take time to define what they want out of their implementation, so it doesn’t do what they want. This goes further than just defining requirements. It’s about defining what value the new software will deliver for the business. By focusing on the nuts and bolts, the business doesn’t figure out what they want from the system as a whole.

Lesson: Instead of starting with “hey I need something to accomplish X,” the organization should be asking “how can this software help us bring value to our security posture, to our internal costs, to our compliance requirements.”

This list is not exhaustive – there are many more ways to kill your implementation. However, if your organization is aware of the pitfalls listed above, you have a very high chance of success.

A.N. Ananth

Five Reasons for Log Apathy – and the Antidote

How many times have you heard people just don’t care about logs? That IT guys are selfish, stupid or lazy? That they would rather play with new toys than do serious work?
I argue that IT guys are amazing, smart and do care about the systems they curate, but native tools are such that log management is often like running into a brick wall — they encourage disengagement.

Here are five reasons for this perception and what can be done about them.

#1 Obscure descriptions: Ever see a raw log? A Cisco intrusion or a Windows failed object access attempt or a Solaris BSM record to mount a volume? Blech… it’s a description even the author would find hard to love. It’s not written to be easy to understand; its purpose is either debugging by the developer or satisfying certification requirements. This is not apathy, it’s intentional exclusion.

To make this relevant, you need a readable description that highlights the elements of value, enriches the information (e.g., looks up an IP address or event ID), and presents it in priority order of risk rather than simply spewing records in time sequence.
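
As a rough illustration of that idea, here is a minimal sketch in Python of enriching raw events and presenting them risk-first. The event IDs, lookup table and risk weights are illustrative assumptions, not any vendor’s actual schema or EventTracker’s implementation.

```python
# Minimal sketch: enrich raw log records and order them by risk.
# Event IDs, lookup tables, and risk weights below are illustrative
# assumptions, not any vendor's actual schema.
import socket

FRIENDLY_EVENTS = {
    4625: "Failed logon attempt",
    4663: "Object access attempt",
}
RISK_WEIGHTS = {4625: 7, 4663: 4}

def enrich(record):
    """Translate an event ID and look up the source host name."""
    event_id = record["event_id"]
    try:
        host = socket.gethostbyaddr(record["src_ip"])[0]
    except OSError:
        host = record["src_ip"]  # fall back to the raw address
    return {
        **record,
        "description": FRIENDLY_EVENTS.get(event_id, "Unclassified event"),
        "src_host": host,
        "risk": RISK_WEIGHTS.get(event_id, 1),
    }

def prioritize(records):
    """Present events in priority order of risk, not arrival order."""
    return sorted((enrich(r) for r in records), key=lambda r: r["risk"], reverse=True)

if __name__ == "__main__":
    raw = [
        {"event_id": 4663, "src_ip": "10.0.0.12", "user": "alice"},
        {"event_id": 4625, "src_ip": "203.0.113.9", "user": "admin"},
    ]
    for rec in prioritize(raw):
        print(rec["risk"], rec["description"], rec["src_host"], rec["user"])
```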

#2 Lack of access: What easier way to spur disengagement than by hiding the logs away in an obscure part of the file system, out of sight to all but the most determined? If they cannot see it, they won’t care about it.

The antidote is to centralize logging and put up an easy-to-understand display that presents relevant information, preferably risk-ordered.

#3 Unsexiness: All the security stories are about Wikileaks and credit card theft. Log review is considered dull and boring; it rarely makes it into the plot line of Hawaii Five-O.

Compare it to working out at the gym: it can be boring and there are 10 reasons why other things are more “fun,” but it’s good for you and pays handsomely in the long run.

#4 Unsung Heroes: Who is the Big Man on your Campus? Odds are, it’s the guys who make money for the enterprise (think sales guys or CEOs).

Rarely is it the folks who keep the railroad running or god forbid, reduce cost or prevent incidents.

However, they are the wind beneath the wings of the enterprise. The organization that recognizes and values the guys who show up for work everyday and do their job without fuss/drama is much more likely to succeed. Heroes are the ones who make a voluntary effort over a long period of time to accomplish serious goals, not chosen ones with marks on their forehead, destined from birth to save the day.

#5 Forced Compliance: As long as management looks at regulatory compliance as unwarranted interference, it will be resented, and IT is forced into a checkbox mentality that benefits nobody.

It’s the old question “What comes first? Compliance (chicken) or security (egg)?” We see compliance as a result of secure practices. By making it easy to crunch the data and present meaningful scores and alerts, there is less need to force this.

I’ll say it again, I know many IT guys and gals who are amazing, smart and care deeply about the systems they manage. To combat log apathy, make it easier to deal with them.

Tip of the hat to Dave Meslin, whose recent TEDx talk in Toronto spurred this blog entry.

A.N. Ananth

Personalization wins the day

Despite tough times for the corporate world in the past year, spending on IT security was a bright spot in an otherwise gloomy picture.

However if you’ve tried to convince a CFO to sign off on tools and software, you know just how difficult this can be. In fact, the most common way to get approval is to tie this request to an unrelenting compliance mandate. Sadly, a security incident can also help focus and trigger the approval of budget.

Vendors have tried hard to showcase their value by appealing to the preventive nature of their products. ROI calculations are usually provided to demonstrate quick payback, but these are often dismissed by the CFO as self-serving. Recognizing the difficulty of measuring ROI, an alternate model called ROSI has been proposed, but it has met with limited success.

So what is an effective way to educate and persuade the gnomes? Try an approach from a parallel field: the presentation of medical data. Your medical chart is hard to access, impossible to read, and full of information that could make you healthier if you just knew how to use it, pretty much like security information inside the enterprise. If you have ever seen lab results, you know that even motivated people find them hard to decipher and act on, much less the disinclined.

In a recent talk at TED, Thomas Goetz, the executive editor of Wired magazine, addressed this issue and proposed some simple ideas to make this data meaningful and actionable: the use of color, graphics and, most important, personalization of the information to drive action. We know from experience that merely posting the speed limit is less effective at getting motorists to comply than a radar speed sign that displays the posted limit framed by “Your speed is __”. It’s all about personalization.

To make security information meaningful to the CFO, a similar approach can be much more effective than bland “best practice” prescriptions or questionable ROI numbers. Gather data from your enterprise and present it with color and graphs tailored to the “patient”.

Personalize your presentation; get a more patient ear and much less resistance to your budget request.

A. N. Ananth

Best Practice v/s FUD

Have you observed how “best practice” recommendations are widely known but not widely followed? While it seems more the case in IT security, it holds true in every other sphere as well. For example, dentists repeatedly recommend brushing and flossing after each meal as best practice, but how many follow this advice? And then there is the clearly posted speed limit on the road; more often than not, motorists are speeding.

Now the downside to non-compliance is well known to all and for the most part well accepted – no real argument. In the dentist example, these include social hardships ranging from bad teeth and breath to health issues and the resulting expense. In the speeding example, there is potential physical harm and of course monetary fines. However, it would appear that neither the fear of “bad outcomes” nor “monetary fines” spurs widespread compliance. Indeed, one observes that the people who do comply appear to do so because they wish to; the fear and fine factors don’t play a major role for them.

In a recent experiment, people visiting the dentist were divided into two groups. Before the start, each patient was asked to indicate whether they classified themselves as someone who “generally listens to the doctor’s advice”. After the checkup, people from one group were given the advice to brush and floss regularly but then given a “fear” message on the consequences of non-compliance — bad teeth, social ostracism, high cost of dental procedures, etc. People from the other group got the same checkup and advice but were given a “positive” message on the benefits of compliance — nice smile, social popularity, less cost, etc. A follow-up was conducted to determine which of the two approaches was more effective in getting patients to comply.

Those of us in IT Security battling for budget from unresponsive upper management have been conditioned to think that the “fear” message would be more effective … but … surprise: neither approach was more effective than the other in getting patients to comply with “best practice.” Instead, those who classified themselves as people who “generally listen to the doctor’s advice” were the ones who did comply. The rest were equally impervious to either the negative or positive consequences, while not disputing them.

You could also point to the great reduction in the incidence of smoking, but this best practice has required more than three decades of education to achieve, and the habit still can’t be stamped out.

Lesson for IT Security — education takes time and behavior modification, even more so.

Subtraction, Multiplication, Division and Task Unification through SIEM and Log Management

When we originally conceived the idea of a SIEM and log management solution for IT managers many years ago, it was because of the problems they faced dealing with high volumes of cryptic audit logs from multiple sources. Searching, categorizing/analyzing, performing forensics and remediation for system security and operational challenges evidenced in disparate audit logs were time-consuming, tedious, inconsistent and unrewarding tasks. We wanted to provide technology that would make problem detection, understanding and, therefore, remediation faster and easier.

A recent article in Slate caught my eye; it was all about infomercials, that staple of late-night TV, and a pitch-a-thon conducted in Washington DC for new ideas. The question: just how would you know a “successful” idea if you heard it described?

By now, SIEM has “Crossed the Chasm”; indeed, the Gartner MQ puts it well into mainstream adoption. But in the early days, there was some question as to whether this was a real problem or whether, as is too often the case, SIEM and log management was a solution in search of a problem.

Back to the question — how does one determine the viability of an invention before it is released into the market?  Jacob Goldenberg, a professor of marketing at Hebrew University in Jerusalem and a visiting professor at Columbia University, has coded a kind of DNA for successful inventions. After studying a year’s worth of new product launches, Goldenberg developed a classification system to predict the potential success of a new product. He found the same patterns embedded in every watershed invention.

The first is subtraction—the removal of part of a previous invention.

For example, an ATM is a successful invention because it subtracts the bank teller.

Multiplication is the second pattern, and it describes an invention with a component copied to serve some alternate purpose.  Example: the digital camera’s additional flash to prevent “red-eye.”

A TV remote exemplifies the third pattern: division. It’s a product that has been physically divided, or separated, from the original; the remote was “divided” off of the TV.

The fourth pattern, task unification, involves saddling a product with an additional job unrelated to its original function. The iPhone is the quintessential task unifier.

SIEM and log management solutions subtract (liberate) embedded logs and log management functionality from source systems.

SIEM and log management solutions multiply (via aggregation) the problems that can be detected: correlation catches issues that would have gone unnoticed otherwise.

EventTracker also meets the last two criteria. Arguably, decent tools for managing logs ought to have been included by OS and platform vendors (Unix, Linux, Windows and Cisco all have very rudimentary tools for this, if anything), so one can say EventTracker provides something needed for operations (like the TV remote) but not included in the base product.

With the myriad features now available, such as configuration assessment, change audit, netflow monitoring and system status, the task unification criterion is also satisfied; you can now address a lot of security and operational requirements that are not strictly “log” related – “task unification”.

When President Obama praised innovation as a critical element in the recovery in his State of the Union, he may not have had “As Seen on TV” in mind but does SIEM fit the bill?

What’s the message supposed to be?  That SIEM and log management solutions are (now?) a good invention? SIEM has crossed the chasm!

SIEM meets Hawaii Five-O

In 2010, CBS rebooted the classic series Hawaii Five-O. It features a fictional state police unit run by Detective Steve McGarrett and named in honor of Hawaii’s status as the 50th state. The action centers on a special task force empowered by Hawaii’s governor to investigate serious crime.

The tech guru on the show is Detective Chin Ho Kelly (played by Daniel Dae Kim), who is shown to be adept at various forensic techniques, including… wait for it… SIEM (of all things).

In Season 1, Episode 15 (Kai e’ e) the island’s leading tsunami expert is kidnapped on the same day that ocean reports indicate that a huge tsunami is headed to Hawaii. However, Five-0 soon suspects that the report is a hoax and is related to the kidnapping.

During the investigation, Chin Ho uncovers two failed logins with the kidnapped expert’s username and a numeric password each time. This is followed by a successful login. This seems odd because the correct password is all alphabetical and totally unrelated to the numbers. Turns out the kidnapped person was trying to send a message to the cops, knowing the failed logins would get scrutiny. The clue is incomplete though, because the failed logins do not capture the originating IP address and so can’t be readily geolocated.
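
For the curious, a toy version of the rule a SIEM might use to surface exactly this pattern (repeated failures followed by a success on the same account) could look like the sketch below. The field names, time window and failure threshold are assumptions for illustration only.

```python
# Sketch of the pattern Chin Ho spotted: failed logons on an account
# followed shortly by a success. Field names and thresholds are
# illustrative assumptions, not a specific product's rule syntax.
from datetime import datetime, timedelta

def flag_fail_then_success(events, window=timedelta(minutes=10), min_failures=2):
    """Yield (user, failure_count, success_time) when failures precede a success."""
    failures = {}  # user -> list of failure timestamps
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["outcome"] == "failure":
            failures.setdefault(ev["user"], []).append(ev["time"])
        elif ev["outcome"] == "success":
            recent = [t for t in failures.get(ev["user"], [])
                      if ev["time"] - t <= window]
            if len(recent) >= min_failures:
                yield ev["user"], len(recent), ev["time"]
            failures.pop(ev["user"], None)

events = [
    {"user": "expert", "outcome": "failure", "time": datetime(2011, 1, 1, 9, 0)},
    {"user": "expert", "outcome": "failure", "time": datetime(2011, 1, 1, 9, 1)},
    {"user": "expert", "outcome": "success", "time": datetime(2011, 1, 1, 9, 2)},
]
for user, count, when in flag_fail_then_success(events):
    print(f"{user}: {count} failed logons before success at {when}")
```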

It’s great that SIEM is now firmly entrenched in the mainstream… it bodes well for our industry and for IT security.

When the bad guys attack your assets, use EventTracker to “book ‘em Danno”.

– A.N. Ananth

5 Myths about SIEM and Log Management

In the spirit of the Washington Post’s regular column, “5 Myths,” here is a challenge to everything you think you know about SIEM/Log Management.

Driven by compliance regulation and the unending stream of security issues, the IT community has, over the past few years, accepted SIEM and Log Management as must-have technology for the data center. The analyst community lumps a number of vendors together as SIEM, and marketing departments are always in overdrive to claim any or all possible benefits or budget. Consequently, some “truths” are bandied about. This misinformation affects the decision-making process, so let’s look at it.

1. Price is everything…all SIEM products are roughly equal in terms of features/functions. 

An August 2010 article in SC Magazine points out that “At first blush, these (SIEM solutions) looked like 11 cats in a bag,” quickly followed by “But a closer look shows interesting differences in focus.” Nice save, but the first thought was that the products were roughly equal, and for many that was a key take-away. As so many are influenced by the Gartner Magic Quadrant, the picture is taken to mean everything, separated from the detailed commentary, even though that commentary states quite explicitly to look closely at features.

Even better, look at where each vendor started. Very different places, it turns out; each then added the features and functionality to meet market (or marketing) needs. For example, NetForensics preaches that SIEM is really correlation; Logrhythm believes that focusing on your logs is the key to security; Tenable thinks vulnerability status is the key; Q1Labs offers network flow monitoring as the critical element; eIQ’s origins are as a firewall log analyzer. So, while each solution may claim “the same features,” under the hood they each started in a certain place and packed additional features and functionality around their core. They continue to focus on their core as their differentiator, adding functionality as the market demands.

Also, some SIEM vendors are software-based, while others are appliance-based, which in itself differentiates the players in the market.

All the same? Hardly.

2. Appliances are a better solution.
Can you spell groupthink? An appliance is one way to deliver the technology, neither better nor worse as a technical approach; perhaps easier for resellers to carry.

When does a software-based solution win?

– Sparing.  To protect your valuable IT infrastructure, you will need to calculate a 1xN relationship of live appliances to back-ups.  If your appliance breaks down and you don’t have a spare, you have to ship the appliance and wait for a replacement.  With software, if your device breaks down, you can simply install the software on existing capacity in your infrastructure, and be back up and running in minutes versus potentially days.

– Scalability.  With an appliance solution, your SIEM solution has a floor and a ceiling.  You need at least one device to get started, and it has a maximum capacity before you have to add another appliance at a high price.  With a software solution, you can scale incrementally… one IT infrastructure device at a time.
– Single Sign On. Integrates easily with Active Directory or LDAP; same username/password or smartcard authentication; very attractive.

– Storage. What retention period is best for your logs? Weeks? Months? Years? With appliances, it’s dictated by the disk size provided; with software you decide, or you can use network-based storage.

So appliances must be easier to install? Plug in the box, provide an IP and you are done? Not really – more than 99% of the configuration is local to the user.

3. Your log volumes don’t matter…disk space is cheap.

Sure… but as Defense Secretary Rumsfeld used to say, $10B here and $10B there and pretty soon you’re talking real money.

Logs are voluminous; a successful implementation leads to much higher log volume, and terabytes add up very quickly. Compression is essential, but the ability to access network-based storage is even more important. The ability to back up and restore archives easily and natively to nearline or offline storage is critical.
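
A quick back-of-the-envelope calculation shows how fast “terabytes add up.” Every figure below (device count, event rate, event size) is an assumption; substitute your own.

```python
# Back-of-the-envelope estimate of raw log growth. Every number here
# (device count, events per second, bytes per event) is an assumption;
# plug in your own figures.
devices = 200                  # servers, firewalls, workstations reporting in
events_per_sec_per_device = 15
bytes_per_event = 500          # average raw event size

daily_bytes = devices * events_per_sec_per_device * bytes_per_event * 86_400
yearly_tb = daily_bytes * 365 / 1024**4

print(f"Daily volume : {daily_bytes / 1024**3:.1f} GiB")
print(f"Yearly volume: {yearly_tb:.1f} TiB before compression")
```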

If you consider an appliance solution, it is inherently limited in the available disk.

4. The technology is like an antivirus… just install it and forget it, and if anything is wrong, it will tell you.

Ahh, the magic bullet!  Like the ad says, “Set it and forget it!”  If only this were true… wishing will not make it so.  There is not one single SIEM vendor that can justify saying “open the box, activate your SIEM solution in minutes, and you will be fine!”  To say so, or even worse, to believe it would just be irresponsible!

If you just open the box and install it, you will only have the protection offered by the default settings. With an antivirus solution, this is possible because you have all of the virus signatures to date; it automatically looks to the virus database to see if there are any updates, and it is constantly updated as signatures are added. Too bad it cannot recognize a “Zero Day” attack when it happens, but that, for now, is impossible.

With a SIEM solution, you need something you don’t need with an antivirus…  you need human interaction.  You need to tell the SIEM what your organization’s business rules are, define the roles and capabilities of the users, and have an expert analyst team monitor it, and adapt it to ever-changing conditions.  The IT infrastructure is constantly changing, and people are needed to adjust the SIEM to meet threats, business rules, and the addition or subtraction of IT components or users.

Some vendors imply that their SIEM solution is all that is needed, and you can just plug and play.  You know what the result is?  Unhappy SIEM users chasing down false positives or much worse false negatives.  All SIEM solutions require educated analysts to understand the information being provided, and turn it into actions.  These adjustments can be simplified, but again, it takes people.  If you are thinking about implementing a SIEM and forgetting about it…then fuhgeddaboutit!

5. Log Management is only meaningful if you have a compliance requirement.

Seen the recent headlines? From Stuxnet to Wikileaks to Heartland? There is a lot more to log management than merely satisfying compliance regulations. This myth exists because people are not aware of the internal and external threats that exist in this century!  SIEM/Log Management solutions provide some very important benefits to your organization beyond meeting a compliance requirement.

– Security.  SIEM/Log Management solutions can detect and alert you to a “Zero-Day” virus before the damage is done…something other components in your IT infrastructure can’t do.  They can also alert you to brute force attacks, malware, and trojans by determining what has changed in your environment…

– Improve Efficiency.  Face it!  There are too many devices transmitting too many logs, and the IT staff doesn’t have the time to comb through the logs and know if they are performing the most essential tasks in the proper order.  Many times the order is defined by who is screaming the loudest.  A SIEM/Log Management solution helps you learn of a potential problem sooner, can automate the log analysis, and can prioritize the order in which issues are addressed, improving the overall efficiency of the IT team!  It also makes it much more efficient to perform forensic analysis to determine the cause and effect of an incident.

– Improve Network Performance.  Are the servers not working properly?  Are the applications going slowly?  The answer is in the logs, and with a SIEM/Log Management solution, you can quickly locate the problem and fix it.

– Reduce costs.  Implementing a SIEM enables organizations to reduce the number of threats both internal and external, and reduce the operating cost per device.   A SIEM can dramatically reduce the number of incidents that occur within your organization, which eliminates the cost it would take to figure out what actually happened.  Should an event occur, the amount of time it takes to perform the forensic analysis and fix the problem can be greatly shortened, reducing the total loss per incident.

– Ananth

Portable Drives and Working Remotely in Today’s IT Infrastructure

So, Wikileaks announced this week that its next release will be 7 times as large as the Iraq logs. The initial release brought a very common problem that organizations of all sizes face to the top of the global stage – anyone with a USB drive or writeable CD drive can download confidential information and walk right out the door. The reverse is also true: harmful malware, Trojans, and viruses can be placed onto the network, as seen with the Stuxnet virus. These pesky little portable media drives are more trouble than they are worth! OK, you’re right, let’s not cry “The sky is falling” just yet.

But, Wikileaks and Stuxnet aside, how big is this threat?

  • A 2009 study revealed that 59% of former employees stole data from their employers prior to leaving learn more
  • A new study in the UK reveals USB sticks (23%) and other portable storage devices (19%) are the most common devices for insider theft learn more

Right now, there are two primary schools of thought to this significant problem. The first is to take an alarmist approach, and disable all drives, so that no one can steal this data, or infect the network. The other approach is to turn a blind eye, and have no controls in place.

But how does one know who is doing what, and which files are being downloaded or uploaded? The answer is in your device and application logs, of course. The first step is to define your organization’s security policy concerning USB and writeable CD drives:

1. Define the capabilities for each individual user as tied to their system login

  • Servers and folders they have permission to access
  • Allow/disallow USB and writeable CD drives
  • Create a record of the serial numbers of the CD drive and USB drive

2. Monitor log activity for USB drives and writeable CD drives to determine what information may have been taken, and by whom

Obviously, this is like closing the barn door after the horse has left. You will be able to know who did what, and when… but by then it may be too late to prevent any financial loss or harm to your customers.

The ideal solution is to support this organization-wide policy that defines the abilities of each individual user, determines who has permission to use the writeable capabilities of the CD drive or USB drive at the workstation, and monitors and controls serial numbers and information access from the server level with automation. Without automation, combing through all of the logs to look for this event and tracing what happened would seem almost impossible.

With a SIEM/log management solution, this process can be automated, and your organization can be alerted to any event that occurs where the transfer of data does not match the user profile/serial number combination. It is even possible to prevent that data from being transferred by automatically disabling the device. In other words, if someone with a sales ID attempts to copy a file from the accounting server onto a USB drive where the serial number does not match their profile, you can have the drive automatically disabled and issue an incident to investigate this activity. By the same token, if someone with the right user profile/serial number combination copies a file they are permitted to access – something that is a normal, everyday event in conducting business – they would be allowed to do so.
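
A minimal sketch of that profile/serial-number check follows, with a made-up policy table and event fields; a real deployment would pull these from the directory and the endpoint logs rather than hard-coding them.

```python
# Sketch of the user-profile / device-serial check described above.
# The policy table and event fields are illustrative assumptions; a real
# deployment would pull these from the directory and the endpoint logs.
ALLOWED_DEVICES = {
    "jsmith": {"USB-4F2A91"},        # sales: one registered thumb drive
    "alee":   {"USB-77C310", "CD-0042"},
}

def check_transfer(event):
    """Return an action for a removable-media write event."""
    user, serial, path = event["user"], event["device_serial"], event["path"]
    if serial in ALLOWED_DEVICES.get(user, set()):
        return "allow"               # normal, everyday business activity
    # Unknown device for this user: disable it and open an incident.
    return f"disable device {serial}; open incident for {user} copying {path}"

print(check_transfer({"user": "jsmith", "device_serial": "USB-9999ZZ",
                      "path": r"\\accounting\q3-forecast.xls"}))
```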

This solution prevents many headaches, and will prevent your confidential data from making the headlines of the Los Angeles Times or the Washington Post.

To learn how EventTracker can actually automate this security initiative for you, click here 

-John Sennott

Lessons from Honeynet Challenge “Log Mysteries”

Ananth, from Prism Microsystems, provides in-depth analysis on the Honeynet Challenge “Log Mysteries” and his thoughts on what it really means in the real world. EventTracker’s Syslog monitoring capability protects your enterprise infrastructure from external threats. “Syslog monitoring”

100 Log Management uses #66 Secure Auditing – LAuS

Today we continue our series on Secure Auditing with a look at LAuS, the Linux Audit-Subsystem Design secure auditing implementation in Linux. Red Hat and openSUSE both have supported implementations, but LAuS is available in the generic Linux kernel as well.

[See post to watch Flash video] -Ananth

Is correlation killing the SIEM market?

Correlation – what’s it good for? Absolutely nothing!*

* Thank you Edwin Starr.

Ok, that might be a little harsh, but hear me out.

The grand vision of Security Information and Event Management is that it will tell you when you are in danger, and the means to deliver this is through sifting mountains of log files looking for trouble signs. I like to think of that as big-C correlation. Big-C correlation is an admirable concept of associating events with importance. But whenever a discussion occurs about correlation or for that matter SIEM – it quickly becomes a discussion about what I call little-c correlation – that is rules-based multi-event pattern matching.

To proponents of correlation, correlation can detect patterns of behavior so subtle that it would be impossible for a human unaided to do the same. It can deliver the promise of SIEM – telling you what is wrong in a sea of data. Heady stuff indeed and partially true. But the naysayers have numerous good arguments against as well; in no particular order some of the more common ones:

• Rules are too hard to write
• The rule builders supplied by the vendors are not powerful enough
• Users don’t understand the use cases (that is usually a vendor rebuttal argument for the above).
• Rules are not “set and forget” and require constant tuning
• Correlation can’t tell you anything you don’t already know (you have to know the condition to write the rule)
• Too many false positives

The proponents reply that this is a technical challenge, and that the tools will get better and the problem will be conquered. I have a broader concern about correlation (little c), however, and that is just how useful it is to the majority of customer use cases. And if it is not useful, is SIEM, with a correlation focus, really viable?

The guys over at Securosis have been running a series defining SIEM that is really worth a read. Now the method they recommend for approaching correlation is to look at your last 4-5 incidents when it comes to rule-authoring. Their basic point is that if the goals are modest, you can be modestly successful. OK, I agree, but then how many of the big security problems today are really the ones best served by correlation? Heck, it seems the big problems are people being tricked into downloading and running malware, and correlation is not going to help that. Education and Change Detection are both better ways to avoid those types of threats. Nor will correlation help with SQL injection. Most of the classic scenarios for correlation are successful perimeter breaches, but with a SQL attack you are already within the perimeter. It seems to me correlation is potentially solving yesterday’s problems – and doing it, because of technical challenges, poorly.

So to break down my fundamental issue with correlation – how many incidents are 1) serious 2) have occurred 3) cannot be mitigated in some other more reasonable fashion and 4) the future discovery is best done by detecting a complex pattern?

Not many, I reckon.

No wonder SIEM gets a bad rap on occasion. SIEM will make a user safer but the means to the end is focused on a flawed concept.

That is not to say correlation does not have its uses – certainly the bigger and more complex the environment, the more likely you are to have cases where correlation could and does help. In the F500, the very complexity of the environment can mean other mitigation approaches are less achievable. The classic correlation-focused SEM market started in the large enterprise, but is it a viable approach?

Let’s use Prism as an example, as I can speak for the experiences of our customers. We have about 900 customers that have deployed EventTracker, our SIEM solution. These customers are mostly smaller enterprises, what Gartner defines as SME; however, they still purchased predominantly for the classic Gartner use case – the budget came from a compliance drive, but they wanted to use SIEM as a means of improving overall IT security and sometimes operations.

In the case of EventTracker the product is a single integrated solution so the rule-based correlation engine is simply part of the package. It is real-time, extensible and ships with a bunch of predefined rules.

But only a handful of our customers actually use it, and even those who do, don’t do much.

Interestingly enough, most of the customers looked at correlation during evaluation, but when the product went into production only a handful actually ended up writing correlation rules. So the reality was, although they thought they were going to use the capability, few did. A larger number, but still a distinct minority, are using some of the preconfigured correlations, as there are some use cases (such as failed logins on multiple machines from a single IP, sketched below) for which a simple correlation rule makes good sense. Even with the packaged rules, however, customers tended to use only a handful, and regardless, these are not the classic “if you see this on a firewall, and this on a server, and this in AD, followed by outbound ftp traffic, you are in trouble” complex correlation examples people are fond of using.
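
For the simple use case mentioned above (failed logins on multiple machines from a single IP), a correlation rule reduces to something like the following sketch. The field names, time window and host threshold are illustrative assumptions, not EventTracker’s rule syntax.

```python
# Sketch of the simple correlation mentioned above: failed logons on
# multiple machines from a single source IP inside a time window.
# Thresholds and field names are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_failed_logons(events, window=timedelta(minutes=5), min_hosts=3):
    """Yield (src_ip, hosts) when one IP fails logons on >= min_hosts machines."""
    by_ip = defaultdict(list)   # src_ip -> [(time, host), ...]
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["event"] != "failed_logon":
            continue
        by_ip[ev["src_ip"]].append((ev["time"], ev["host"]))
        recent_hosts = {h for t, h in by_ip[ev["src_ip"]]
                        if ev["time"] - t <= window}
        if len(recent_hosts) >= min_hosts:
            yield ev["src_ip"], sorted(recent_hosts)

sample = [
    {"event": "failed_logon", "src_ip": "198.51.100.7", "host": h,
     "time": datetime(2011, 1, 1, 3, m)}
    for m, h in enumerate(["web01", "db01", "dc01"])
]
for ip, hosts in correlate_failed_logons(sample):
    print(f"ALERT: {ip} failed logons on {', '.join(hosts)}")
```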

Our natural reaction was that there was something wrong with the correlation feature, so we went back to the installed base and started nosing about. The common response was: no, nothing wrong, just never got to it. On further questioning we surfaced the fact that, for most of the problems they were facing, rules were simply not the best approach to solving the problem.

So we have an industry that, if you agree with my premise, is talking about core value that is impractical to all but a small minority. We are, as vendors, selling snake oil.

So what does that mean?

Are prospects overweighting correlation capability in their evaluations to the detriment of other features that they will actually use later? Are they setting themselves up to fail with false expectations into what SIEM can deliver?

From a vendor standpoint are we all spending R&D dollars on capability that is really simply demoware? Case in point is correlation GUIs. Lots of R&D $$ go into correlation GUIs because writing rules is too hard and customers are going to write a lot of rules. But the compelling value promised for correlation is the ability to look for highly complex conditions. Inevitably when you make a development tool simpler you compromise the power in favor of speed of development. In truth you have not only made it simpler, but also stupider, and less capable. And if you are seldom writing rules, per the Securosis approach, does it need to be fast and easy at all?

That is not to say SIEM is valueless; SIEM is extremely valuable, but we are focusing on its most difficult and least valuable component, which is really pretty darn strange. There was an interesting and amusing exchange a week or so ago when LogLogic lowered the price of their SEM capability. This, as you might imagine, raised the hackles of the SEM apologists. Rocky uses Arcsight as an example of successful SIEM (although he conveniently talks about SEM as SIEM, and the SIEM use case is broader now than SEM) – but how much ESM is Arcsight selling down-market? I tend to agree with Rocky in the large enterprise, but using that as an indicator of the broad market is dangerous. Plus, the example of our customers I gave above would lead one to believe people bought for one reason but are using the products in an entirely different way.

So hopefully this will spark some discussion. This is not, and should not be, a slag between Log Management, SEM or SIM, because it seems to me the only real differences between SEM and LM these days are in the amount of lip service paid to real-time rules.

So let’s talk about correlation – what is it good for?

-Steve Lafferty

SIEM or Log Management?

Mike Rothman of Securosis has a thread titled Understanding and Selecting SIEM/Log Management. He suggests both disciplines have fused and defines the holy grail of security practitioners as “one alert telling exactly what is broken”. In the ensuing discussion, there is a suggestion that SIEM and Log Mgt have not fused and there are vendors that do one but not the other.

After a number of years in the industry, I find myself uncomfortable with either term (SIEM or Log Mgt) as it relates to the problem the technology can solve, especially for the mid-market, our focus.

The SIEM term suggests it’s only about security, and while that is certainly a significant use case, it’s hardly the only use for the technology. That said, if a user wishes to use the technology for only the security use case, fine, but that is not a reflection of the technology. Oh, and by the way, Security Information Management would perforce include other items, such as change audit and configuration assessment data, which are outside the scope of “Log Management”.

The trouble with the term Log Management is that it is not tied to any particular use case, and that makes it difficult to sell (not to mention boring). Why would you want to manage logs anyway? Users only care about solutions to real problems they have, not generic “best practice” because Mr. Pundit says so.

SIEM makes sense as “the” use case for this technology as you go to large (Fortune 2000) enterprises and here SIEM is often a synonym for correlation.
But to do this in any useful way, you will need not just the box (real or virtual) but especially the expert analyst team to drive it, keep it updated and ticking. What is this analyst team busy with? Updating the rules to accommodate constantly changing elements (threats, business rules, IT components) to get that “one alert”. This is not like AntiVirus where rule updates can happen directly from the vendor with no intervention from the admin/user. This is a model only large enterprises can afford.

Some vendors suggest that you can reduce this to an analyst-in-a-box for small enterprise i.e., just buy my box, enable these default rules, minimal intervention and bingo you will be safe. All too common results are either irrelevant alerts or the magic box acts as the dog in the night time. A major reason for “pissed-off SIEM users”. And of course a dedicated analyst (much less a team) is simply not available.

This is not to say that the technology is useless absent the dedicated analyst, or that SIEM is a lost cause, but rather to paint a realistic picture: any “box” can only go so far by itself, and given the more-with-less needs of the mid-market, obsessing over SIEM features obscures the greater value offered by this technology.

Most medium enterprise networks are “organically grown architectures,” a response to business needs — there is rarely an overarching security model that covers the assets. Point solutions dominate, based on incidents or perceived threats or in response to specific compliance mandates. See the results of our virtualization survey for example. Given the resource constraints, the technology must have broad features beyond the (essential) security ones. The smarter the solution, the less smart the analyst needs to be — so really it’s a box-for-an-analyst (and of course all boxes now ought to be virtual).

It makes sense to ask what problem is solved, as this is the universe customers live in. Mike identifies reacting faster, security efficiency and compliance automation, to which I would add operations support and cost reduction. More specifically, across the board: show what is happening (track users; monitor critical systems/applications/firewalls, USB activity, database activity, hypervisor changes, physical equipment, etc.), show what has happened (forensics, reports, etc.) and show what is different (change audit).

So back to the question: what would you call such a solution? SIEM has been pounded by Gartner et al. into the budget line items of large enterprises, so it becomes easier to be recognized as a need. However, it is a limiting description. If I had only these two choices, I would have to favor Log Management, where one (essential) application is SIEM.

-Ananth

100 Log Management uses #64: Tracking user activity, Part III

Continuing our series on user activity monitoring, today we look at something that is very hard to do in Vista and later, and impossible in XP and earlier — reporting on system idle time. The only way to accomplish this in Windows is to set up a domain policy to lock the screen after a certain amount of time and then calculate from the time the screen saver is invoked to when it is cleared. In XP and prior, however, the invocation of the screensaver does not generate an event, so you are out of luck. In Vista and later, an event is triggered, so it is slightly better, but even there the information generated should only be viewed as an estimate, as the method is not fool-proof. We’ll look at the pros (few) and cons (many). Enjoy.
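
To make the estimate concrete, here is a hedged sketch of the calculation, assuming the Vista-and-later security events 4802 (screen saver invoked) and 4803 (screen saver dismissed); as noted above, treat the output as an estimate only.

```python
# Rough idle-time estimate per the approach above. Assumes the Vista-or-later
# security events 4802 (screen saver invoked) and 4803 (screen saver dismissed);
# the result is only an estimate, as noted in the post.
from datetime import datetime, timedelta

def idle_time(events):
    """Sum the intervals between screen-saver invoke (4802) and dismiss (4803)."""
    totals, started = {}, {}
    for ev in sorted(events, key=lambda e: e["time"]):
        user = ev["user"]
        if ev["event_id"] == 4802:
            started[user] = ev["time"]
        elif ev["event_id"] == 4803 and user in started:
            delta = ev["time"] - started.pop(user)
            totals[user] = totals.get(user, timedelta(0)) + delta
    return totals

sample = [
    {"user": "bob", "event_id": 4802, "time": datetime(2011, 1, 1, 12, 0)},
    {"user": "bob", "event_id": 4803, "time": datetime(2011, 1, 1, 12, 47)},
]
for user, total in idle_time(sample).items():
    print(f"{user}: approximately {total} of idle time")
```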

100 Log Management uses #63 Tracking user activity, Part II

Today we continue our series on user activity monitoring using event logs. The beginning of any analysis of user activity starts with the system logon. We will take a look at some sample events and describe the types of useful information that can be pulled from the log. While we are covering user logons, we will also take a short diversion into failed user logons. While perhaps not directly useful for activity monitoring, paying attention to logon attempts is also critical.

100 Log Management uses #62 Tracking user activity

Today we begin a new miniseries – looking at and reporting on user activities. Most enterprises restrict what users are able to do — such as playing computer games during work hours. This can be done through software that restricts access, but often it is simply enforced on the honor system. Regardless of which approach a company takes, analyzing logs presents a pretty good idea of what users are up to. In the next few sessions we will take a look at the various logs that get generated and what can be done with them.

100 Log Management uses #61: Static IP address conflicts

Today we look at an interesting operational use case of logs that we learned about by painful experience — static IP address conflicts. We have a pretty large number of static IP addresses assigned to our server machines. Typical of a smaller company, we assigned IP addresses and recorded them in a spreadsheet. Well, one of our network guys made a mistake and we ended up having problems with duplicate addresses. The gremlins came out in full force and nothing seemed to be working right! We used logs to quickly diagnose the problem. Although I mention a Windows pop-up as a possible means of being alerted to the problem, I can safely say we did not see it, or if we did, we missed it.
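
A rough sketch of scanning exported System-log text for address-conflict entries follows. The check against Tcpip event ID 4199 is an assumption based on common Windows behavior; verify the exact source and ID in your own environment before relying on it.

```python
# Sketch: scan exported System-log lines for address-conflict entries.
# The Tcpip event ID 4199 check is an assumption based on common Windows
# behavior; verify the ID in your environment before relying on it.
import re

CONFLICT = re.compile(r"address conflict for IP address (\S+)", re.IGNORECASE)

def find_conflicts(lines):
    """Yield IP addresses mentioned in address-conflict log lines."""
    for line in lines:
        m = CONFLICT.search(line)
        if m and ("4199" in line or "Tcpip" in line):
            yield m.group(1)

log = ["Jan 12 Tcpip 4199: The system detected an address conflict "
       "for IP address 10.1.1.20 with the system having hardware address ..."]
print(list(find_conflicts(log)))
```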

– By Ananth

100 Log Management uses #59 – 6 items to monitor on workstations

In part 2 of our series on workstation monitoring we look at the 6 things that are in your best interest to monitor — the types of things that if you proactively monitor will save you money by preventing operational and security problems. I would be very interested if any of you monitor other things that you feel would be more valuable. Hope you enjoy it.

100 Log Management uses #58 The why, how and what of monitoring logs on workstations

Today we are going to start a short series on the value of monitoring logs on Windows workstations. It is commonly agreed that log monitoring on servers is a best practice, but until recently the complexity and expense of log management on workstations made most people shy away. Yet log monitoring on the workstation is valuable, and easy as well, if you know what to look for. These next 3 blogs will tell you the why, how and what.

Sustainable vs. Situational Values

I am often asked: if Log Management is so important to the modern IT department, then how come more than 80% of the market that “should” have adopted it has not done so?

The cynic says that unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.

One reason I think this is so: earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome, even in this day and age, after endless punditry and episode after episode have clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times, which quotes Dov Seidman, the C.E.O. of LRN, who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back-in-the-day there were no power tools and you dug ditches with your bare hands outside when it was 40 below and tweets were for the birds…but…that was then and this is now.

Get Log Management, it’s a sustainable value.

Ananth

100 Log Management uses #57 PCI Requirement XII

Today we conclude our journey through the PCI Standard with a quick look at Requirement 12. Requirement 12 documents the necessity to set up and maintain an Information Security policy for employees and contractors. While this is mostly a documentation exercise, it does have requirements for monitoring and alerting that log management can certainly help with.

100 Log Management uses #55 PCI Requirements VII, VIII & IX

Today we look at PCI-DSS Requirements 7, 8 and 9. In general these are not quite as applicable as the audit requirements in Requirement 10 which we will be looking at next time, but still log management is useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

Panning for gold in event logs

Ananth, the CEO of Prism, is fond of remarking “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the pencil-necked guy with 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold – squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt and I feel glad I am not a prospector. At Prism we are in the business of gold extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is they are not quite sure what the gold looks like. When you are panning for gold and you are not sure that glinty thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves “what can we do for the person that does not know enough to really look or ask the right questions?” A lot of log management is looking for the out-of-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and we enable you to analyze them and make a decision. Little things that are interesting – like a new IP address coming into your enterprise 5000 times, or a user who generally performs 1000 activities in a day suddenly performing 10,000, or even something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely, but a lot of the manual log review being performed by the guys with the alphabets after their names is really simply manually chasing trends – this enables you to stop wasting significant time in detecting the trend — all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
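
To give a feel for the idea (and only the idea; this is not EventTracker’s actual algorithm), a toy anomaly check on a per-user activity baseline might look like this:

```python
# A toy version of the "simply unusual" check described above: flag a day
# when a user's activity count is far outside their historical baseline.
# This is not EventTracker's algorithm, just an illustration of the idea.
from statistics import mean, stdev

def is_unusual(history, today, threshold=3.0):
    """Flag today's count if it sits more than `threshold` std devs from the mean."""
    if len(history) < 5:
        return False               # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

baseline = [980, 1020, 1005, 995, 1010, 990]   # typical ~1000 events/day
print(is_unusual(baseline, 10_000))             # True: worth a look
print(is_unusual(baseline, 1015))               # False: normal variation
```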

The response from the Beta customers indicates that we are onto something. After all, anything that can make our (hopefully more) customers’ lives less tedious and their backs hurt less is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI

Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 talks about using AV software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about building and maintaining a secure network, for which log management is a great aid.

-By Ananth