100 Log Management uses – #4 Solaris BSM SU access failure


Today is a change of platform: we are going to look at how to identify Super User (SU) access failures on Solaris BSM systems. It is critical to watch for failed SU login attempts, since once an attacker is in at the SU or root level, the keys to the kingdom are in their pocket.
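For readers who want to poke at this by hand, here is a minimal sketch, assuming a Solaris host with BSM auditing enabled and the standard auditreduce/praudit utilities on the PATH; it pulls su events from the audit trail and prints the failures:

    #!/usr/bin/env python
    # Minimal sketch: list failed su attempts from the Solaris BSM audit trail.
    # Assumes BSM auditing is on and auditreduce/praudit are available.
    import subprocess

    # auditreduce -m AUE_su selects su audit records; praudit -l renders
    # each record as a single human-readable line.
    trail = subprocess.run(
        "auditreduce -m AUE_su /var/audit/* | praudit -l",
        shell=True, capture_output=True, text=True,
    ).stdout

    for line in trail.splitlines():
        # Failed attempts carry a failure return token in praudit output.
        if "failure" in line.lower():
            print(line)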

-By Ananth

100 Log Management uses – #3 Antivirus update


Today we are going to look at how you can use logs to ensure that everyone in the enterprise has gotten their automatic Antivirus update. One of the biggest security holes in an enterprise is individuals who don’t keep their machines updated, or who turn auto-update off. In this video we will look at how you can quickly identify machines that are not updated to the latest AV definitions.
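As a rough illustration of the idea (not how EventTracker does it internally), suppose your AV console can export update events to a CSV with hypothetical host and definition_version columns; flagging the stragglers is then a small script:

    # Sketch: flag machines whose AV definitions lag the newest version seen.
    # Hypothetical input: updates.csv with columns host,definition_version;
    # assumes version strings sort lexically (true for date-based versions).
    import csv

    latest = {}
    with open("updates.csv") as f:
        for row in csv.DictReader(f):
            host, version = row["host"], row["definition_version"]
            # Keep the highest definition version each host has reported.
            if host not in latest or version > latest[host]:
                latest[host] = version

    newest = max(latest.values())
    stale = sorted(h for h, v in latest.items() if v != newest)
    print("Machines not on definition", newest, ":", stale)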

-By Ananth

100 Log Management uses – #2 Active Directory login failures


Yesterday we looked at firewalls, today we are shifting gears and looking at leveraging those logs from Active Directory. Hope you enjoy it.

– By Ananth

100 Log Management uses – #1 Firewall blocks


…and we’re back, with use-case #1 – Firewall Blocks. In this video, I will talk about why it’s important not just to block undesirable connections but also to monitor traffic that has been denied entry into your network.
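To make that concrete, here is a minimal sketch, assuming your firewall writes syslog lines containing "DENY" and an iptables-style "SRC=<ip>" field (the exact format varies by vendor); it surfaces the noisiest blocked sources:

    # Sketch: count denied connections per source IP in a firewall syslog export.
    # Assumes lines contain "DENY" and "SRC=<ip>"; adjust for your vendor's format.
    import re
    from collections import Counter

    denies = Counter()
    with open("firewall.log") as f:
        for line in f:
            if "DENY" in line:
                m = re.search(r"SRC=(\d+\.\d+\.\d+\.\d+)", line)
                if m:
                    denies[m.group(1)] += 1

    # The top talkers among blocked sources are often scanners probing your perimeter.
    for ip, count in denies.most_common(10):
        print(ip, count)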

By Ananth

100 uses of Log Management – Series


Here at Prism we think logs are cool, and that log data can provide valuable intelligence on most aspects of your IT infrastructure – from identifying unusual patterns that indicate security threats, to alerting on changes in configuration data, to detecting potential system downtime issues, to monitoring user activity. Essentially, Log Management is like a Swiss Army knife or even duct tape — it has a thousand and one applications.

Over the next 100 days, as the new administration takes over here in Washington DC, Ananth, the CEO of Prism Microsystems, will present the 100 most critical use-cases of Log Management in a series of videos focusing on real-world scenarios.

Watch this space for more videos, and feel free to rank and comment on your favorite use-cases.

By Ananth

The IT Swiss army knife; EventTracker 6.3 and more


Log Management can find answers to every IT-related problem. Why can I say that? Because I think most problems get handled the same way. The first stage is someone getting frustrated with the situation. They then use tools to analyze whatever data is accessible to them. From this analysis, they draw some conclusions about the answer to the problem, and then they act. Basically, finding answers to problems requires the ability to generate intelligence and insight from raw data.

Extreme logging or Too Much of a Good Thing


Strict interpretations of compliance policy standards can lead you up the creek without a paddle. Consider two examples:

  1. From PCI-DSS comes the prescription to “Track & monitor all access to network resources and cardholder data”. Extreme logging is when you decide this means a db audit log larger than the db itself plus a keylogger to log “all” access.
  2. From HIPAA 164.316(b)(2) comes the Security Rule prescription to “Retain … for 6 years from the date of its creation or the date when it last was in effect, whichever is later.” Sounds like a boon for disk vendors and a nightmare for providers.

Before you assault your hair follicles, consider:
1) In clarification, Visa explains “The intent of these logging requirements is twofold: a) logs, when properly implemented and reviewed, are a widely accepted control to detect unauthorized access, and b) adequate logs provide good forensic evidence in the event of a compromise. It is not necessary to log all application access to cardholder data if the following is true (and verified by assessors):
– Applications that provide access to cardholder data do so only after making sure the users are authorized
– Such access is authenticated via requirements 7.1 and 7.2, with user IDs set up in accordance with requirement 8, and
– Application logs exist to provide evidence in the event of a compromise.

2) The Office of the Secretary of HHS waffles when asked about retaining system logs – this can be reasonably interpreted to mean the six-year standard need not be taken literally for all system and network logs.

Ananth

Security – A casualty in the Sovereignty vs Efficiency tradeoff


Cloud computing has been described as a trade-off between sovereignty and efficiency. Where is security (aka Risk Transfer) in this debate?

In a recent post, Chris Hoff notes that yesterday’s SaaS providers (Monster, Salesforce) are now styled as cloud computing providers.

CIOs, under increasing cost pressure, may begin to accept the efficiency argument that cloud vendors have economies of scale in both the acquisition and operations of the data center.

But hold up…

To what extent is the risk transferred when you move data to the cloud? To a very limited extent, at most to the SLA. This is similar to the debate where one claims compliance (Hannaford, NYC and now sadly Mumbai) but attacks take place anyway, causing great damage. Would an SLA save the Manager in such cases? Unlikely.

In any case, the generic cloud vendor does not understand your assets or your business. At most, they can understand threats in general terms. They will no doubt commit to the SLA, but these usually refer to availability, not security.

Thus far, general purpose, low cost utility or “cloud” infrastructure (such as Azure or EC2), or SaaS vendors (salesforce.com) do not have very sophisticated security features built in.

So as you ponder the Sovereignty v/s Efficiency tradeoff, spare a thought for security.

– Ananth

Auditing web 2.0; 2009 security predictions and more


Don’t look now, but the Web 2.0 wave is crashing onto corporate beaches everywhere. Startups, software vendors, and search engine powerhouses are all providing online accounts and services for users to create wikis, blogs, etc. for collaborating and sharing corporate data, often without the knowledge or involvement of IT or in-house legal counsel.

SIEM: What are you searching for?


Search engines are now well established as a vital feature of IT and applications continue to evolve in breadth and depth at dizzying rates.  It is tempting to try and reduce any and all problems to one of query construction against an index. Can Security Information and Event Management or SIEM be (force) fitted into the search paradigm?

The answer depends on what you are looking to do and your skill with query construction.

If you are an expert with detailed knowledge of log formats and content, you may find it easy to construct a suitable query. When launched against a suitably indexed log collection, results can be gratifyingly fast and accurate. This is however a limited use-case in the SIEM universe of use-cases. This model usually applies when Administrators are seeking to resolve Operational problems.

Security analysts, however, are usually searching for behavior, not simple text. While this is the holy grail of search engines, attempts from Excite (1996) to Accoona (RIP Oct 2008) never made the cut. In the SIEM world, the context problem is compounded by myriad formats and the lack of any standard to assign meaning to logs, even within one vendor’s products and versions of a product.

All is not lost: SIEM vendors do offer solutions by way of pre-packaged reports, and the best ones offer users the ability to perform analysis of behavior within a certain context (as opposed to simple text search). By way of example – show me all failed logins after 6PM; from this set, show only those that failed on SERVER57; from this set show me those for User4; now go back and show me all User4 activity after 6PM on all machines.
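In code terms, the difference is that each step filters a structured result set instead of re-running a text query. A sketch over hypothetical normalized records (the field names are illustrative, not any vendor's schema):

    # Sketch: contextual drill-down over normalized log records.
    # Hypothetical fields: user, server, event, hour (24h clock).
    records = [
        {"user": "User4", "server": "SERVER57", "event": "login_failure", "hour": 19},
        {"user": "Admin", "server": "SERVER12", "event": "login_success", "hour": 9},
        {"user": "User4", "server": "SERVER12", "event": "file_read", "hour": 22},
    ]

    failed = [r for r in records if r["event"] == "login_failure"]
    after_6pm = [r for r in failed if r["hour"] >= 18]                 # failed logins after 6PM
    on_server57 = [r for r in after_6pm if r["server"] == "SERVER57"]  # ...on SERVER57
    user4 = [r for r in on_server57 if r["user"] == "User4"]           # ...by User4

    # ...and pivot: all User4 activity after 6PM on every machine.
    user4_all = [r for r in records if r["user"] == "User4" and r["hour"] >= 18]
    print(len(user4), len(user4_all))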

Don’t try this with a “simple” text search engine… or, like John Wayne in The Searchers, you may become bitter and middle-aged.

– Ananth

Cutting through SIEM/Log Management vendor hype


While there is little doubt that SIEM solutions are critical for compliance, security monitoring or IT optimization, it is getting harder for buyers to find the right product for their needs. The reason for this is twofold; firstly, there are a number of products available and vendors have done a great job of making their products sound roughly the same in core features such as correlation, reporting, collection, etc.

Will SIEM and Log Management usage change with the economic slowdown?


When Wall Street really began to implode a couple of weeks ago, one of the remarkable side-effects of the plunge was a huge increase in download activity for all items related to ROI on the Prism website. A sign of the times, as ROI always becomes more important in times of tight budgets, and our prospects were seeing the lean times coming. So what does the likelihood of budget freezes or worse mean for how SIEM/Log Management is used or how it is justified in the enterprise?

Compliance is and will remain the great budget enabler of SIEM and Log Management but often a compliance project can be done in a far more minimal deployment and still meet the requirement. There is, however, enormous tangible and measurable benefit in Log Management beyond the compliance use case that has been largely ignored.

SIEM/Log Management has for the most part been seen (and positioned by us vendors) as a compliance solution with security benefits, or in some cases a security solution that does compliance. Both of these have an ROI that is hard to measure, as it is based on a company’s tolerance for risk. A lot of SIEM functionality, and the log management areas in particular, is also enormously effective in increasing operational efficiencies – and provides clear, measurable, fast and hard ROI. Very simply, compliance will keep you out of jail and security reduces risk, but by using SIEM products for operations you will save hard dollars on administrator costs and reduce system downtime, which in turn increases productivity that directly hits the bottom line. Plus you still effectively get the compliance and security for free. A year ago, when we used to show these operational features to prospects (mostly security personnel), they were greeted 9 times out of 10 with a polite yawn. Not anymore.

We believe this new cost-conscious buying behavior will also drive broader rather than deeper requirements in many mid-tier businesses. It is the “can I get 90% of my requirements, and 100% of the mandatory ones, in several areas, and is that better than 110% in a single area?” discussion. Recently Prism added some enhanced USB device monitoring capability to EventTracker. While it is beyond what typical SIEM vendors provide, in that we track files written and deleted on the USB drive in real-time, I would not consider it to be as good as a best-of-breed DLP provider. But for most people it gets them where they need to be, and it is included in EventTracker at no additional cost. It is amazing the level of interest this functionality receives today from prospects, while at the same time there is correspondingly less interest in features with a dubious ROI, like many correlation use cases. Interesting times.

-Posted by Steve Lafferty

The cloud is clear as mud


The Economist opines that the world is flirting with recession and IT may suffer; which in turn will hasten the move to “cloud computing”, which in a pithy distillation is described as “a trade-off between sovereignty and efficiency”.

Computing as a borderless utility? Whereas most privacy laws assume data resides in one place… the cloud makes data seem present everywhere and nowhere.

In a recent post Steve differentiated between security OF the cloud and security IN the cloud. This led us to an analysis of cloud computing as it is currently offered by Amazon AWS, Google Apps and Zoho.

From a risk perspective, security of content IN the cloud is essentially considered your problem by Amazon whereas Google and Zoho say “trust in me, just in me”. When pressed, Google says “we do not recommend Google Apps for content subject to compliance regulations” but is apparently working to assuage concerns about access control.

However, moving your data to the cloud does not absolve you of responsibility for who accessed it and for what purpose — the main concern of auditors everywhere.

How now?

At the present time, neither Google nor Zoho make any audit trail available to subscribers while at Amazon it’s your problem. We think widespread adoption by the business community (and what of the federal government?) will require significant transparency to provide visibility. This is also true for popular hosted applications like Intuit Quickbooks and Salesforce.

As Alex notes “…in order to gain that visibility, our insight into Cloud Risk Management must include significant provisions for understanding a joint ability to Prevent/Detect/Respond as well as provisions for managing the risk that one of the participants won’t provide that visibility or ability via SLA’s and penalties.”

Clear as mud.

– Ananth

Some Ruminations on the Security Impact of Software As A Service


In a recent post I talked a little about the security and compliance issues facing companies that adopt cloud-based SaaS for any mission-critical function. I referred to this as security OF the cloud to differentiate it from a cloud-based security offering, or security IN the cloud. This is going to mean a major change in the security industry if SaaS takes off.

Take a typical small business, “SmallCo”, as an example. They depend on a combination of Quickbooks and an accounting firm for their financial processes. For all practical purposes SmallCo outsources the entire accounting function. They use a hosting company to host Quickbooks for a monthly fee, and their external CPA, internal management and accounts staff access the application for data processing. Very easy to manage, no upfront investment, no servers to maintain – all the usual reasons why a SaaS model is so appealing.

One can easily argue that the crown jewels of SmallCo’s entire business are resident in that hosted solution. SmallCo began to question whether this critical data was secure from being hacked or stolen. Would SmallCo be compliant if they were obligated to follow a compliance standard? Is it the role of the hosting provider to ensure security and compliance? To all of those questions there was, and is, no clear-cut answer. SmallCo is provided access to the application and can use whatever audit capability is supported in the Quickbooks product (which is not a great deal), and there is no way to collect that audit and usage data other than to manually run a report. At the time SmallCo began, this did not seem important, but as SmallCo grew, so did their exposure.

Salesforce, another poster child for SaaS, is much the same. I read a while back that they were going to put the ability to monitor changes in some of their database fields into their Winter 2008 release. But there appears to be nothing for user-level auditing or even admin auditing (of your staff, much less theirs). A trusted user can steal an entire customer list and not even have to be in the office to do it. The best DLP technology will not help you, as the data can be accessed and exported through any web browser on any machine. Having used Salesforce in previous companies I can personally attest, however, that it is a fine CRM system – cost-effective, powerful and well-designed. But you have to maintain a completely separate access control list, and you have no real ability to monitor what is accessed by whom for audit purposes. For a prospect with privacy concerns, is it really a viable, secure solution?

Cloud-based computing changes the entire paradigm of security. Perimeter defense is the first step of a defense in depth to protect service availability and corporate data, but what happens when there is no data resident to be defended? In fact, when there are a number of services in the cloud, is event management going to be viable? Will the rules be the same when you are correlating events from different services in the cloud?

So here is the challenge, I believe: as more and more mission-critical processes are moved to the cloud, SaaS suppliers are going to have to provide log data in a real-time, straightforward manner, probably for their admins as well as their customers’ personnel. In fact, since there is only a browser and a login – no firewall, network or operating system level security to breach – auditing would have to be very, very robust. With all these cloud services, is it feasible that an auditor will accept 50 reports from 50 providers and pass the company under audit? Maybe, but someone – either the end user or an MSSP – has to be responsible for monitoring for security and compliance, and unless the application and data are under the control of end users, they will be unable to do so.

So if I were an application provider like Salesforce, I would be thinking really hard about being a good citizen in a cloud-based world. That means providing real-time audit records for at least user log-on and log-off, log-on failures and a complete audit record of all data extracts as a first step, as well as a method to push the events out in real-time. I would likely do that before I worried too much about auditing fields in the database.
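To be clear, no vendor offers this today as far as I know; purely as a sketch of what such a push feed might look like (every name below is hypothetical, no real vendor API is implied), the records could be as simple as JSON documents POSTed to a collector the customer registers:

    # Purely hypothetical sketch of a SaaS real-time audit feed.
    # Names and fields are illustrative only.
    import json

    def emit_audit_record(event_type, user, detail):
        record = {
            "event": event_type,   # e.g. "logon", "logon_failure", "data_export"
            "user": user,
            "detail": detail,
            "source": "crm.example.com",
        }
        # In a real feed this would be an HTTPS POST to the customer's collector.
        print(json.dumps(record))

    emit_audit_record("data_export", "jsmith", {"object": "contacts", "rows": 25000})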

Interesting times.

Steve Lafferty

How to recession-proof IT; Get hard dollar savings today


Performing well during a security crisis

“Every crisis offers you extra desired power.” – William Moulton Marston. Jasmine’s corollary: “Only if you perform well during that crisis.” Crises will happen no matter how many precautions we take. The need to blame someone is a human desire, and it is easy to focus that on the crisis response team, because they are visible. Yet when teams perform well during the crisis they don’t merely avoid blame. They do garner the potential to become powerful advisors or outright leaders. It’s even better if you can also demonstrate that lessons learned from past crises are making the current environment more secure. After all, the Justice League members wouldn’t be heroes if no one knew about their actions. But what does it mean to perform well in a crisis?

MSSP /SaaS /Cloud Computing – Confused? I know I am


There is a lot of discussion around Security MSSPs, SaaS (Security as a Service) and Cloud Computing these days. I always felt I had a pretty good handle on MSSPs and SaaS. The way I look at it, you tend to outsource the entire task to Security MSSPs. If you outsource your firewall security, for instance, you generally have no one on staff that worries about firewall logs and you count on your MSSP partner to keep you secure – at least with regards to the firewall. The MSSP collects, stores and reviews the logs. With SaaS, using the same firewall example above, you outsource the delivery of the capability — the mechanics of the collection and storage tasks and the software and hardware that enable it, but you still have IT personnel on staff that are responsible for the firewall security. These guys review the logs, run the reports etc. This general definition is the same for any security task, whether it is email security, firewall or SIEM.

OK, so far, so good. This is all pretty simple.

Then you add Cloud Computing and everything gets a little, well, cloudy. People start to interchange concepts freely, and in fact when you talk to somebody about cloud computing and what it means to them, it is often completely different from what you thought cloud computing to be. I always try to ask – do you mean security IN the cloud, i.e. using an external provider to manage some part of the collection, storage and analysis of your security data (if so, go to SaaS or MSSP)? Or do you mean security OF the cloud — the collection/management of security information from corporate applications that are delivered via SaaS (Software as a Service, think Salesforce)?

The latter case really has nothing to do with either Security SaaS or MSSP, since you could be collecting the data from applications such as Salesforce into a security solution you own and host. The problem is an entirely different one. Think about how to collect and correlate data from applications you have no control over, or how these outsourced applications affect your compliance requirements. Most often compliance regulations require you to review access to certain types of critical data. How do you do that when the assets are not under your control? Do you simply trust that the service provider is doing it right? And what will your auditor do when they show up to do an audit? How do you guarantee chain of custody of the log data when you have no control over how, when, and where it was created? Suddenly a whole lot of questions pop up to which there appear to be no easy answers.

So here are a few observations:

  • Most compliance standards do not envision compliance in a world of cloud computing.
  • Security OF the cloud is undefined.
  • Compliance standards are reaching further down into more modest-sized companies, and SaaS for enterprise applications is becoming more appealing to enterprises of all sizes.
  • When people think about cloud computing, they tend to equate “it is in the cloud” to “I have no responsibility”, and when critical data and apps migrate to the cloud that is not going to be acceptable.

The combination of the above is very likely going to become a bigger and bigger issue, and if not addressed will prevent the adoption of cloud computing.

Steve Lafferty

Outsource? Build? Buy?


So you decided that it’s time to manage your security information. Your trigger was probably one of: a) you got handed a directive from up high, “The company shall be fully compliant with applicable regulation [insert one] PCI/HIPAA/SOX/GLBA/FISMA/Basel/…”, or b) you had a security incident and realized, OMG, we really need to keep those logs.

Choice: Build
Upside: It’ll be perfect, it’ll be cheap, it’ll be fun
Downside: Who will maintain, extend, support (me?), how will it scale?

Choice: Outsource
Upside: Don’t need the hardware or staff, pay-go, someone else will deal with the issues
Downside: Really? Someone else will deal with the issues? How do you get access to your info? What is the SLA?

Choice: Buy
Upside: Get a solution now, upgrades happen, you have someone to blame
Downside: You still have to learn/use it, is the vendor stable?

What is the best choice?
Well, how generic are your requirements?
What sort of resources can you apply to this task?
How comfortable are you with IT? [From ‘necessary evil’…to… ‘We are IT!’]
What sort of log volume and sources do you have?

Outsource if you have – generic requirements, limited sources/volume and low IT skills

Build if you have – programming skills, fixed requirements, limited sources/volume

Buy if you have – varied (but standard) sources, good IT skills, moderate-high volume
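Treat those rules of thumb as a starting point, not an algorithm; still, purely as illustration, they can be encoded as a toy decision function:

    # Illustration only: the rules of thumb above as a toy decision function.
    def siem_approach(generic_reqs, can_program, it_skills, volume):
        if generic_reqs and it_skills == "low" and volume == "low":
            return "outsource"
        if can_program and volume == "low":
            return "build"
        if it_skills == "good" and volume in ("moderate", "high"):
            return "buy"
        return "revisit your requirements"

    print(siem_approach(generic_reqs=True, can_program=False,
                        it_skills="low", volume="low"))   # -> outsource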

As Pat Riley says “Look for your choices, pick the best one, then go with it.”

Ananth

Data leakage and the end of the world


Most of the time when IT folk talk about data leakage they mean employees emailing sensitive documents to Gmail accounts, or exposing the company through peer-to-peer networks or the burgeoning use of social networking services.

Compliance: Did you get the (Pinto) Memo?


The Ford Pinto was a subcompact manufactured by Ford (introduced on 9/11/70 — another infamous coincidence?). It became a focus of a major scandal when it was alleged that the car’s design allowed its fuel tank to be easily damaged in the event of a rear-end collision, which sometimes resulted in deadly fires and explosions. Ford was aware of this design flaw but allegedly refused to pay what was characterized as the minimal expense of a redesign. Instead, it was argued, Ford decided it would be cheaper to pay off possible lawsuits for resulting deaths. The resulting liability case produced a judicial opinion that is a staple of remedy courses in American law schools.

What brought this on? Well, a recent conversation with a healthcare institution went something like this:

Us: Are you required to comply with HIPAA?

Them: Well, I suppose…yes

Us: So how do you demonstrate compliance?

Them: Well, we’ve never been audited and don’t know anyone that has

Us: So you don’t have a solution in place for this?

Them: Not really…but if they ever come knocking, I’ll pull some reports and wiggle out of it

Us: But there is a better, much better way with all sorts of upside

Them: Yeah, yeah whatever…how much did you say this “better” way costs?

Us: Paltry sum

Them: Well why should I bother? A) I don’t know anyone that has been audited. B) I’ve got better uses for the money in these tough times. C) If they come knocking, I’ll plead ignorance and ask for “reasonable time” to demonstrate compliance. D) In any case, if I wait long enough Microsoft and Cisco will probably solve this for me in the next release.

Us: Heavy sigh

Sadly… none of this is true, and there is overwhelming evidence of that.

Regulations are not intended to be punitive, of course, and implementing log management in reality provides positive ROI.

– Ananth

Hot virtualization and cold compliance; New EventTracker 6.2 and more


Hot server virtualization and cold compliance

Without a doubt, server virtualization is a hot technology. NetworkWorld reported: “More than 40% of respondents listed consolidation as a high priority for the next year, and just under 40% said virtualization is more directly on their radar.” They also reported that server virtualization remains one of IT’s top initiatives even as IT executives are bracing themselves for potential spending cuts. Another survey of 100 US companies shows 60% of the respondents are currently using virtualization in production to support non-mission-critical business services. In other words, they are using it in a “production sandbox” before deploying it on a large scale.

Let he who is without SIM cast the first stone


In a recent post Raffael Marty points out the shortcomings of a “classic” SIM solution, including high cost, due in part to a clumsy, expensive tuning process.

More importantly, he points out that SIMs were designed for network-based attacks, and these are on the wane, replaced by host-based attacks.

At Prism, we’ve long argued that a host-based system is more appropriate and effective. This is further borne out by the appearance of polymorphic strains such as Nugache that now dominate Threatscape 2008.

However, is “IT Search” the complete answer? Not quite. As a matter of fact, no such “silver bullet” has ever worked out. Fact is, users (especially in the mid-tier) are driven by security concerns, so proactive correlation is useful (in moderation), compliance remains a major driver, and event reduction with active alerting is absolutely essential for the overworked admin. That said, “IT Search” is a useful and powerful tool in the arsenal of the modern, knowledgeable Security Warrior.

A “Complete SIM” solution is more appropriate for the enterprise. Such a solution blends the “classic” approach, which is based on log consolidation and multi-event correlation from host and network devices, PLUS a white/greylist scanner PLUS the Log Search function. Long-term storage and flexible reporting/forensic tools round out the ideal feature set. Such a solution has better potential to satisfy the different user profiles. These include Auditors, Managers and Security Staff, many of whom are less comfortable with query construction.

One dimensional approaches such as “IT Search” or “Network Behavior Anomaly Detection” or “Network Packet Correlation” while undeniably useful are in themselves limited.

Complete SIM, IT Search included, that’s the ticket.

– Ananth

Fear, boredom and the pursuit of compliance


When it comes right down to it, we try to comply with regulations and policies because we are afraid of the penalties. Penalties such as corporate fines and jail time may be for the executive club, but everyone is affected when the U.S. Federal Trade Commission starts directly overseeing your security audits and risk assessment programs for 20 years. Just ask the IT folks at TJX Cos Inc. Then there are the hits to the top line as customers get shy about using their credit cards with you, and the press has fun raking you through the mud.

Architectural Chokepoints


I have been thinking a bit about scalability lately – and I thought it might be an interesting exercise to examine a couple of the obvious places in a SIEM solution where scalability problems can be exposed. In a previous post I talked about scalability and EPS. The fact is there are multiple areas in a SIEM solution where the system may not scale, and anyone thinking of a SIEM procurement should be thinking of scalability as a multi-dimensional beast.

First, all the logs you care about need to be dependably collected. Collection is where many vendors build EPS benchmarks – but generally the number of events per second is based on a small normalized packet. Event size varies widely depending on source, so understand your typical log size and calculate accordingly. The general mitigation strategies for collection are faster collection hardware (collection is usually a CPU-intensive task), distributed collection architecture, and log filtering.

One thing to think about — log generation is often quite “bursty” in nature. You will, for instance, get a slew of logs generated on Monday mornings when staff arrive at work and start logging onto system resources. You should evaluate what happens if the system gets overloaded – do the events get lost, does the system crash?

As a mitigation strategy, event filtering is sometimes pooh-poohed; however, the reality is that 90% of the traffic generated by most devices consists of completely useless (from a security perspective) status information. Volume varies widely depending on audit settings as well. A company generating 600,000 events per day on a Windows network can easily generate 10 times as much by increasing their audit settings slightly. If you need the audit levels high, filtering is the easiest way to ease pressure on the entire downstream log system.
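As a sketch of what a filter stage looks like (the event IDs below are placeholders, not a recommended list; derive your own from your audit policy), the idea is simply to discard known noise before rules, storage and reporting ever see it:

    # Sketch: drop known-noise events up front to relieve the downstream pipeline.
    # The IDs here are placeholders; build your own list from your audit policy.
    NOISE_EVENT_IDS = {538, 540, 562}   # e.g. routine logoff/logon/handle-close chatter

    def filter_stream(events):
        for ev in events:
            if ev["event_id"] in NOISE_EVENT_IDS:
                continue   # discarded before rules, archiving and reporting
            yield ev

    sample = [{"event_id": 540, "msg": "network logon"},
              {"event_id": 529, "msg": "logon failure"}]
    print(list(filter_stream(sample)))   # only the logon failure survives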

Collection is a multi-step process also. Simply receiving an event is too simplistic a view. Resources are expended running policy and rules against the event stream. The more processing, the more system resources consumed. The data must be committed to the event store at some point, so it needs to get written to disk. It is highly advisable to look at these as three separate activities – reception, rule processing and committing to the event store – and validate that the solution can handle your volume at each step.

A note on log storage for those who are considering buying an appliance with a fixed amount of onboard storage – be sure it is enough, and be sure to check out how easy it is to move off, retrieve and process records that have been moved to offline storage media. If your event volume eats up your disk you will likely be doing a lot of the moving off, moving back on activity. Also, some of the compliance standards like PCI require that logs must be stored online a certain amount of time. Here at Prism we solved that problem by allowing events to be stored anywhere on the file system, but most appliances do not afford you that luxury.

Now let’s flip our attention over to the analytics and reporting activities. This is yet another important aspect of scalability that is often ignored. If a system can process 10 million events per minute but takes 10 hours to run a simple query, you probably are going to have upset users and a non-viable solution. And what happens to the collection throughput above when a bunch of people are running reports? Often a single user running ad-hoc reports is just fine; put a couple on and you are in trouble.

A remediation strategy here is to look for a solution that can offload the reporting and analytics to another machine so as not to impact the aggregation, correlation and storage steps. If you don’t have that capability, absolutely press the vendor for performance metrics when reports and collection are done on the same hardware.

– Steve Lafferty

The EPS Myth


Often when I engage with a prospect, their first question is “How many events per second (EPS) can EventTracker handle?” People tend to confuse EPS with scalability, so by simply giving back an enormous-enough number (usually larger than the previous vendor they spoke with), it convinces them your product is, indeed, scalable. The fact is scalability and events per second (EPS) are not the same, and many vendors sidestep the real scalability issue by intentionally using the two interchangeably. A high EPS rating does not guarantee a scalable solution. If the only measure of scalability available is an EPS rating, you as a prospect should be asking yourself a simple question: what is the vendor’s definition of EPS? You will generally find that the answer is different with each vendor.

  • Is it number of events scanned/second?
  • Is it number of events received/second?
  • Is it number of events processed/second?
  • Is it number of events inserted in the event store/second?
  • Is it a real time count or a batch transfer count?
  • What is the size of these events? Is it some small, non-representative size, for instance 100 bytes per event, or is it a real event like a Windows event, which may vary from 1,000 to 6,000 bytes?
  • Are you receiving these events in UDP mode or TCP mode?
  • Are they measuring while running correlation rules against the event stream? How many rules are being run?
  • And let’s not even talk about how fast the reporting function runs, EPS does not measure that at all.

At the end of the day, an EPS measure is generally a measure of a small, non-typical normalized event received. Nothing is measured about actually doing something useful with the event, which makes the number pretty much useless.

With no definition of what an event actually is, EPS is also a terrible comparative measure. You cannot assume that one vendor claiming 12,000 EPS is faster than another claiming 10,000 EPS, as they are often measuring very different things. A good analogy would be if you asked someone how far away an object was, and they replied 100. For all the usefulness of the EPS measure, the unit could be inches or miles.

EPS is even worse for ascertaining true solution capability. Some vendors market appliances that promise 2,000 EPS and 150 GB disk space for log storage. They also promise to archive security events for multiple years to meet compliance. For the sake of argument, let’s assume the system is receiving, processing and storing 1,000 Windows events/sec with an average 1K event size (a common size for a Windows event). In 24 hours you will receive 86 million events. Compressed at 90%, this consumes 8.6GB, or almost 7% of your storage, in a single day. Even with heavy compression it can handle only a few weeks of data with this kind of load. Think of buying a car with an engine that can race to 200MPH and a set of tires and suspension that cannot go faster than 75MPH. The car can’t go 200; the engine can, but the car can’t. A SIEM solution is the car in this example, not the engine. Having the engine does not do you any good at all.
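This back-of-envelope arithmetic is worth redoing with your own numbers before any appliance purchase; the sketch below just reproduces the example above:

    # Back-of-envelope storage check, reproducing the example in the text.
    eps         = 1000           # Windows events/sec actually sustained
    event_size  = 1024           # bytes per event (typical Windows event)
    compression = 0.90           # 90% compression on archive
    disk_bytes  = 150 * 10**9    # appliance storage: 150 GB

    daily_events = eps * 86400                                   # ~86 million/day
    daily_bytes  = daily_events * event_size * (1 - compression)
    print("GB per day: %.1f" % (daily_bytes / 10**9))            # ~8.8 GB/day
    print("days until full: %.0f" % (disk_bytes / daily_bytes))  # a few weeks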

So when asked about EPS, I sigh, and say it depends, and try to explain all this. Sometimes it sinks in, sometimes not. All in all don’t pay a lot of attention to EPS – it is largely an empty measure until the unit of measure is standardized, and even then it will only be a small part of overall system capability.

Steve Lafferty

EventTracker review; Zero-day attack protection and more


Creating lasting change from security management

Over the past year, I’ve dealt with how to implement a Pragmatic approach to security management and then dug deeper into the specifics of how to successfully implement a security management environment. Think of those previous tips as your high school level education in security management.

Auditing Drive Mappings – TECH TIP


Windows does not track drive mappings for auditing out of the box. To audit drive mappings you will need to do the following steps:

  1. Turn on Object Access Auditing via Group Policy on the system(s) in question. You will need to perform the following steps on each system where you want to track the drive mappings.
  2. Open the registry and drill down to HKEY_CURRENT_USER\Network
  3. Right click on Network and choose Permissions (if you click on the plus sign you will see each of your mapped drives listed)
  4. Click on the Advanced button
  5. Click on the Auditing tab then click on the Add button
  6. In the Select User or Group box type in Everyone
  7. This will open the Auditing dialog box
  8. Select the settings that you want to audit for; stay away from the Full Control option and Read Control. I recommend the following settings: Create Subkey, Create Link and Delete.

Windows will now generate event IDs 560, 567 and 564 when drive mappings are added or deleted. 564 will be generated when a mapping is deleted, 567 will be generated when a mapping is deleted or added, and 560 will be generated both times as well. Event IDs 567 and 564 will not give you the full information that you are looking for; they will tell you what was done to the mappings but not WHICH mapping. To determine which mapping, you will need the Handle ID code found in the event description of the 564/567 events. The Handle ID will allow you to track back to the 560 event, which will give you the mapping that is being added/deleted. Event ID 567 will only be generated on Windows XP or Windows 2003 systems; Windows 2000 will not generate 567.
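If you are scripting against exported logs rather than using a SIEM, the correlation step might look like this sketch (it assumes you have already parsed each event's ID, Handle ID and object name into dicts; the sample data is made up):

    # Sketch: tie 564/567 (mapping deleted/changed) back to the 560 open event
    # that names the actual mapped drive, via the shared Handle ID.
    parsed_events = [   # made-up sample; normally parsed from the security log
        {"id": 560, "handle": "0x3f8", "object": r"HKCU\Network\Z"},
        {"id": 564, "handle": "0x3f8", "object": None},
    ]

    opens = {}   # Handle ID -> object name (the registry key for the mapping)
    for ev in parsed_events:
        if ev["id"] == 560:
            opens[ev["handle"]] = ev["object"]
        elif ev["id"] in (564, 567):
            mapping = opens.get(ev["handle"], "<unknown handle>")
            print("Event %d on mapping %s" % (ev["id"], mapping))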

– Isaac

Ten reasons you will be unhappy with your SIM solution – and how to avoid them


As the market matures we are increasingly being contacted by prospects that are looking not to implement SIM technology, but instead are looking to replace existing SIM technology. These are people that purchased a couple of budget cycles ago, struggled with their selection and are now throwing up their hands in frustration and moving on. From a high level, the reason for this adoption failure was not that SIM was bad or unnecessary. These people were, and remain, convinced of the benefits of a SIM solution, but at the time of their initial purchase many did not have a detailed understanding of both their business and technical requirements, nor a clear understanding of the actual investment in time and effort necessary to make their SIM implementation a success.

For new prospects just getting into SIM – is there a lesson to be learned from these people? The answer to that is a resounding “yes”, and it is worthwhile digging a little deeper than a generic “understand your requirements before you buy” (really good advice, but a bit obvious!) and letting you hear some of the more common themes we encounter.

Just as a bit of stage setting, the majority of the customers Prism serves tend to be what are classically called SMEs (Small and Medium Enterprises). Although a company might be considered an SME, it is not uncommon today for even smaller enterprises to have well in excess of a thousand systems and devices that need to be managed. Implementing Security Information Management (SIM) poses a special challenge for SMEs, as events and security data from even a thousand devices can be completely overwhelming. SMEs are “tweeners” – they have a relatively big problem (like large enterprises), but less flexibility (in terms of money, time and people) to solve it. SMEs are also pulled by vendors from all ends of the spectrum – you have low-end vendors coming in and positioning very inexpensive point solutions, and very high-end vendors pitching their wares, sometimes in a special package for you. So the gamut of options is very, very broad.

So here they are, the top 10, in no particular order as we hear these all too frequently:

  1. We did not evaluate the product we selected. The demo looked really good and all the reviews we read were good, but it just did not really fit our needs when we actually got it in house.
  2. Quite frankly, during the demo we were so impressed by the sizzle that we never really looked at our core requirement in any depth. The vendor we chose said everyone did that and not to worry — and when we came to implement, we found that the core capability was nowhere near as good as the sizzle we saw. We ended up never using the sizzle and struggling with the capability we really should have looked at.
  3. We evaluated the product in a small test network, and were impressed with results. However, deployment on a slightly larger scale was disappointing and way more complicated.
  4. A brand name or expensive solution did not mean a complete solution.
  5. We did not know our security data profile or event volumes in advance. This prevented us from running load or stress tests, but the vendor response was “don’t worry”. We should have. The solution scoped did not scale to our requirements, and the add-ons killed us as we were out of the cheap entry level bundles.
  6. Excessive false positives were generated by the threat analysis module of the SIM solution. It was too complicated to tune, and we failed to detect a real threat when it happened.
  7. Deployment was/is a nightmare, and we lacked the tools necessary to manage and maintain the solution. Professional Services was not in the budget.
  8. Once we gained experience with the product and decided to tune it to meet evolving business objectives, we found the architecture inflexible, with licensing limitations in terms of the number of consoles and the separation of duties. Meeting our new business requirements was just too expensive, and we were annoyed at the vendor, as this was not what they represented to us at all.
  9. We bought on the cheap, and realized the solution does not scale up. It works very well in a small domain with limited requirements but beyond that the solution is wanting.
  10. We bought the expensive solution because it had a lot of cool stuff we thought we would use, but the things we really needed were hard to use, and the stuff we really didn’t need, but impressed the heck out of us at demo time, we are never going to get to.

-Jagat

Is it better to leave some logs behind?


Log management has emerged in the past few years as a must-do discipline in IT for complying with regulatory standards and protecting the integrity of critical IT assets. However, with millions of logs being spit out on a daily basis by firewalls, routers, servers, workstations, applications and other sources across a network, enterprises are deluged with log data and there is no stemming the tide.

When you can’t work harder, work smarter


In life and business, the smart approach is to make the most of what you have. You can work for 8 hours and 10 hours and then 12 hours a day and hit your performance limit. How do you get more out of your work? By working smarter, not harder – Get others on board, delegate, communicate. Nowhere is this truer than with computer hardware. Poorly written software makes increasing demands on resources but cannot deliver quantum jumps in performance.

As we evaluated earlier versions of EventTracker, it became clear that we were soon reaching the physical limits of the underlying hardware, and the choke point to getting faster reports was not to work harder (optimize code) but to work smarter (plan up-front, divide and conquer, avoid searching through irrelevant data).

This is realized in the Virtual Collection Point architecture that is available in version 6. By segregating log sources up front into virtual groups and stacking software processes from reception to archiving, improvement in performance is possible FOR THE SAME HARDWARE!
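Conceptually (this is an illustration of the divide-and-conquer idea, not a description of EventTracker's internals), it is the difference between one queue for everything and a pipeline per source group, so one group's backlog never stalls another's:

    # Conceptual sketch: a processing pipeline per virtual group of log sources.
    import queue, threading, time

    GROUPS = {name: queue.Queue() for name in ("firewalls", "servers", "desktops")}

    def pipeline(name, q):
        while True:
            ev = q.get()
            # ...rules, archiving, indexing confined to this group's data only
            print(name, "processed:", ev)

    for name, q in GROUPS.items():
        threading.Thread(target=pipeline, args=(name, q), daemon=True).start()

    GROUPS["firewalls"].put("deny tcp 10.0.0.5 -> 192.168.1.1")
    time.sleep(0.1)   # let the worker drain the queue before exit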

When comparing SIEM solutions for scalability, remember that if the only path is to add more hardware, it’s a weaker approach than making the best of what you already have.

– Ananth

The Weakest Link in Security


The three basic ingredients of any business are technology, processes and people. From an IT security standpoint, which of these is the weakest link in your organization? Whichever it is, it is likely to be the focus of attack.

Organizations around the globe routinely employ the use of powerful firewalls, anti-virus software and sophisticated intrusion-detection systems to guard precious information assets. Year in and year out, polls show the weakest link to be processes and the people behind them. In the SIEM world, the absence of a process to examine exception reports to detect non-obvious problems is one manifestation of process weakness.
The reality is that not all threats are obvious and detected/blocked by automation. You must apply the human element appropriately.

Another is to audit user activity, especially privileged user activity. It must match approved requests and pass the reasonableness test (e.g., performed during business hours).

Earlier this decade, the focus of security was the perimeter and the internal network. Technologies such as firewalls and network based intrusion detection were all the rage. While these are necessary, vital even, defense in depth dictates that you look carefully at hosts and user activity.

– Ananth