Archive

100 Log Management uses #55 PCI Requirements VII, VIII & IX


Today we look at PCI-DSS Requirements 7, 8 and 9. In general these are not quite as applicable as the audit requirements in Requirement 10, which we will look at next time, but log management is still useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

New EventTracker 6.4; 15 reasons why your business may be insecure


Tuning Log Management and SIEM for Compliance Reporting

The winter holidays are quickly approaching, and one thing that could probably make most IT Security wish lists is a way to produce automated compliance reports that make auditors say “Wow!” In last month’s newsletter, we took a look at ways to work better with auditors. This month, we’re going to do a deeper dive into tuning of log management and SIEM for more effective compliance reporting.

Panning for gold in event logs


Ananth, the CEO of Prism, is fond of remarking “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the pencil-necked guy with 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold – squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt and I feel glad I am not a prospector. At Prism we are in the business of gold extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is that they are not quite sure what the gold looks like. When you are panning for gold and you are not sure that glinting thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold, we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves, “what can we do for the person who does not know enough to know where to look or what questions to ask?” A lot of log management is looking for the out-of-the-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and enable you to analyze them and make a decision. Little things that are interesting – like a new IP address coming into your enterprise 5,000 times, or a user who generally performs 1,000 activities in a day suddenly doing 10,000, or something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely. But a lot of the manual log review being performed by the guys with the alphabets after their names is really just manually chasing trends – this lets you stop wasting significant time detecting the trend in the first place — all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
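For the curious, here is a rough sketch in Python of the kind of baseline-versus-today comparison described above. This illustrates the general approach only, not EventTracker’s actual implementation; the entity names, counts and thresholds are made up:

```python
from statistics import mean, pstdev

def find_unusual_activity(history, today, min_events=50, sigma=3.0):
    """Flag entities whose activity today deviates sharply from their baseline.

    history: dict mapping entity -> list of daily event counts (past days)
    today:   dict mapping entity -> event count for the current day
    """
    alerts = []
    for entity, count in today.items():
        past = history.get(entity)
        if not past:
            # Never seen before -- a new IP, user, or executable is itself notable
            if count >= min_events:
                alerts.append((entity, count, "new entity"))
            continue
        mu, sd = mean(past), pstdev(past)
        # Treat near-constant baselines as having a small floor deviation
        if count > mu + sigma * max(sd, 1.0) and count >= min_events:
            alerts.append((entity, count, f"baseline {mu:.0f} +/- {sd:.0f}"))
    return alerts

# Example: a user who normally does ~1,000 activities suddenly does 10,000
history = {"alice": [980, 1020, 1005, 995], "10.1.2.3": [12, 9, 15]}
today = {"alice": 10000, "10.1.2.3": 11, "198.51.100.7": 5000}
for entity, count, why in find_unusual_activity(history, today):
    print(f"UNUSUAL: {entity} generated {count} events ({why})")
```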

The response from the Beta customers indicates that we are onto something. After all, anything that can make our (hopefully more) customers’ lives less tedious and their backs hurt less is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI


Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 talks about using AV software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about developing and maintaining secure systems and applications, for which log management is a great aid.

-By Ananth

100 Log Management uses #53 PCI Requirements III & IV


Today we continue our journey through the Payment Card Industry Data Security Standard (PCI-DSS). We left off last time with Requirement 2, so today we look at Requirements 3 and 4, and how log management can be used to help ensure compliance.

-By Ananth

Tips for working well with auditors; Inside the Walmart breach


Working Well with Auditors

For some IT professionals, the mere mention of an audit conjures painful images of being trussed and stuffed like a Thanksgiving turkey. If you’ve ever been through an audit that you weren’t prepared for, you may harbor your own unpleasant images of an audit process gone wrong. As recently as 10-15 years ago, many auditors were just learning their way around the “new world” of IT, while just as many computer and network professionals were beginning to learn their way around the audit world.

PCI-DSS under the gun


Have you noticed how some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, yet the PCI guys are still claiming they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you certainly have an ineffective real-world standard, or a problematic certification process at the very least.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and were deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet, at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, although perhaps this was just a bizarre Catch-22 in play – you are compliant until you are hacked, but when you are hacked, the success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland:

1) The audit was inadequate or the results incorrect, leading to a faulty certification.
2) Heartland, in the intervening month, made a material change to its infrastructure that threw it out of compliance.
3) The hack was accomplished in an area outside the purview of the DSS, or
4) Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate litigation-averse response to a problem. Anytime something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one or another (preferably larger) target, because when the manure hits the fan, the person in the very front is going to get covered. Unfortunately this behavior does not really foster solving the problem, as everyone has their lawyers and no one is talking.

Regardless, maybe the PCI Council should not be saying things like “no compliant entity has ever been breached” and should instead be asking “perhaps we have a certification issue here?”, or “how do we reach continuous compliance?”, or even “what are we missing here?”

-Steve Lafferty

100 Log Management uses #52 PCI Requirements I & II – Building and maintaining a secure network


Today’s blog looks at Requirements 1 and 2 of the PCI Data Security Standard, which are about building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring logs coming from security systems.

-By Ananth

100 Log Management uses #51 Complying with PCI-DSS


Today we are going to start a new series on how logs help you meet PCI DSS. PCI DSS is one of those rare compliance standards that call out specific requirements to collect and review logs. So in the coming weeks, we’ll look at the various sections of the standard and how logs supply the information you need to become compliant. This is the introductory video. As always, comments are welcome.

– By Ananth

Lessons from the Heartland – What is the industry standard for security?


I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read, as is the actual complaint document (remarkably readable for legalese, but I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 of the complaint. I quote:

56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)

Despite not being a mathematician, I do know that the lowest common denominator does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.

It is increasingly popular to bash compliance regulations in the security industry these days, and often with good reason. We have heard and made the arguments many times before: compliant does not equal secure, and further, don’t embrace the letter of the standard, embrace its spirit or intent. But to be honest, the PCI DSS is far from minimal, especially by comparison to most other compliance regulations.

The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks from sophisticated hackers? Well, no, but there is no single regulation, standard or practice out there that will. You can make it hard or harder to get attacked, and PCI-DSS does make it harder, but impossible, no.

Is the Data Security Standard perfect? No. Is the industry safer with it than without it? I would venture that in the case of PCI DSS it is, in fact. The significant groaning, and the amount of work the industry had to do to implement the standard, would lead one to believe that companies were not doing these things before – and that the DSS is not full of worthless requirements. PCI DSS makes a company take positive steps like running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards were not doing these things prior to the standard, imagine what security was like back then.

The second claim is where the real absurdity lies — the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about, exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in terms of budget and resources – which is as little as possible (remember, this is capitalism – profits do matter). On that basis, the actual standard for security is to do as little as possible, for the least amount of money, to avoid being successfully sued, having your executives put in jail, or losing business. Indeed, PCI DSS forced companies to do more – but emphasis on the forced. (So, come to think of it, maybe Heartland did not meet the industry standard, since they are getting sued, but let’s wait on that outcome!)

Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what other people in the industry were doing – the “industry standard”. The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known the limitations, and that no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were Heartland’s customers had as well – and even if they did not, Mr. Carr went so far as to announce it in the call noted above. If this statement was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?

If Heartland was not compliant, I have little sympathy for them, and if it can be proved they were negligent, well, have at them. But the banks here took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – knowing that the risk of a data breach and its follow-on expense existed. I thought the nature of risk was that you occasionally lose, and in business, risk impacts your profits. This lawsuit seems like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or get a bailout!

-Steve Lafferty

100 Log Management Uses #50 Data loss prevention (CAG 15)


Today we wrap up our series on the Consensus Audit Guidelines. Over the last couple of months we have looked at the 15 CAG controls that can be automated, and we have examined how log management and log management solutions such as EventTracker can help meet the Guidelines. Today we look at CAG 15 — data loss prevention and examine the many ways logs help in preventing data leakage.

By Ananth

Leverage the audit organization for better security; Bankers gone bad and more


Log Management in virtualized environments

Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time.

IT: Appliance sprawl – Where is the concern?


Over the past few years you have seen an increasing drumbeat in the IT community for server consolidation through virtualization, with all the trumpeted promises of cheaper, greener, more flexible, customer-focused data centers with never a wasted CPU cycle. It is a siren song to all IT personnel and, quite frankly, it actually looks like it delivers on a great many of the promises.

Interestingly enough, while reduced CPU wastage, increased flexibility and fewer vendors are all being trumpeted for servers, there continues to be little thought given to the willy-nilly purchase of hardware appliances. Hardware appliances started out as specialized devices built or configured in a certain way to maximize performance – a SAN device is a good example: you might want high-speed dual-port Ethernet and huge disk capacity with very little requirement for a beefy CPU or memory. These make sense as appliances. Increasingly, however, an appliance is a standard Dell or HP rack-mounted system with an application installed on it, usually on a special Linux distribution. The advantages to the appliance vendor are many and obvious — a single configuration to test, increased customer lock-in, and a tidy upsell potential as the customer finds their event volume growing. From the customer perspective it suffers all the downsides that IT has been trying to get away from – specialized hardware that cannot be re-purposed, more locked-in hardware vendors, excess capacity or not enough, wasted power from all the appliances running; the list goes on and on and contains the very things that have driven the move to virtualization. And the major benefit of appliances? Easy installation seems to be the big one. Provisioning a new machine and installing software might take an hour or so – that is what the end user saves, and the downstream cost of maintaining a different machine type eats up that saving in short order.

Shortsighted IT managers still manage to believe that, even as they move aggressively to consolidate servers, it is permissible to buy an appliance even if it is nothing but a thinly veiled Dell or HP server. This appliance sprawl represents the next clean-up job for IT managers, or it will simply eat all the savings they have realized from server consolidation. Instead of 500 servers you have 1 server and 1000 hardware appliances – what have you really achieved? You have replaced relationships with multiple hardware vendors with relationships with multiple appliance vendors, and worse: when a server blew up, at least it was a Windows/Intel configuration, so in general you could keep the applications up and running. Good luck doing that with a proprietary appliance. This duality in IT organizations reminds me somewhat of people who go to the salad bar and load up on the cheese, nuts, bacon bits and marinated vegetables, then act vaguely surprised when the salad bar regimen has no positive effect.

-Steve Lafferty

100 Log Management Uses #49 Wireless device control (CAG control 14)


We now arrive at CAG Control 14 – Wireless Device Control. For this control, specialty WIDS scanning tools are the primary defense – that, and a lot of configuration policy. This control is primarily a configuration problem, not a log problem. Log management helps in all the standard ways — collecting and correlating data, monitoring for signs of attack, etc. Using EventTracker’s Change component, configuration data in the registry and file system of the client devices can also be collected and alerted on. Generally, depending on how one sets the configuration policy, when a change is made it will generate either a log entry or a change in the registry or file system. In this way EventTracker provides a valuable means of enforcement.
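As an illustration of the snapshot-and-compare idea behind this kind of change monitoring (this is not the actual Change component, and the monitored paths are hypothetical), a minimal sketch might look like this:

```python
import hashlib

def snapshot(paths):
    """Hash each monitored file so later runs can detect any change."""
    state = {}
    for path in paths:
        try:
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
        except OSError:
            state[path] = None          # missing or unreadable counts as a state too
    return state

def diff(old, new):
    """Return the paths whose content changed, appeared, or disappeared."""
    return [p for p in set(old) | set(new) if old.get(p) != new.get(p)]

# Hypothetical wireless-related configuration exports to watch
MONITORED = [r"C:\exports\wlan_profiles.xml", r"C:\exports\wzc_registry.reg"]

baseline = snapshot(MONITORED)          # taken when the device is known-good
# ... later, on the next scheduled check ...
current = snapshot(MONITORED)
for path in diff(baseline, current):
    print(f"ALERT: configuration artifact changed or disappeared: {path}")
```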

By Ananth

Can you count on dark matter?


Eric Knorr, the Editor in Chief over at InfoWorld, has been writing about “IT Dark Matter”, which he defines as system, device and application logs. It turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors and hopefully for SIEM vendors like us! He described these logs, or dark matter, as “widely distributed and hidden”, which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims, otherwise posts end up being on-line books. The good thing about that simplification, however, is that it often gives a good opportunity to point out other topics of discussion.

There are two great challenges in log management – the first is being able to provide the tools and knowledge to make the log data readily available and useful, which leads to Eric’s comment on how Dark Matter is “Hidden” as it is simply too hard to mine without some advanced equipment. The second challenge, however, is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed” and there is an implied assumption that this Dark Matter is just there to be mined – that the Dark Matter will and does exist and even more so, it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – this fatally weakens their usefulness.

Let’s use a simple illustration we all know well in computer security — almost the first thing a hacker will do once they penetrate a system is shut down logging, or, as soon as they finish whatever they are doing, delete or alter the logs. Consider the analogy of video surveillance at your local 7/11. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not real useful, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be compromised or simply deleted.

This is not to imply that there are no useful nuggets to be mined even if the records are distributed. But without an effort to secure and preserve them, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened simply because 1) it was trash and had no value and 2) no one worried that someone 1000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of the record is 1) a matter of luck that the record was preserved and 2) inversely proportional to the interest of the creating parties in modifying it.

 Steve Lafferty

Security threats from well-meaning employees, new HIPAA requirements; SMB flaw


The threat within: Protecting information assets from well-meaning employees

Most information security experts will agree that employees form the weakest link when it comes to corporate information security. Malicious insiders aside, well-intentioned employees bear responsibility for a large number of breaches today. Whether it’s a phishing scam, a lost USB or mobile device that bears sensitive data, a social engineering attack or downloading unauthorized software, unsophisticated but otherwise well-meaning insiders have the potential of unknowingly opening company networks to costly attacks.

100 Log Management Uses #48 Control of ports, protocols and services (CAG control 13)


Today we look at CAG Control 13 – limitation and control of ports, protocols and services. Hackers search for these kinds of things — software installs, for example, may turn on services the installer never imagined might be vulnerable — and it is critical to limit new ports being opened or services installed. It is also a good idea to monitor for abnormal or new behavior that indicates something has escaped internal controls: a system suddenly broadcasting or receiving network traffic on a new port is suspicious and should be investigated, and new installs or new services being run are also worth investigating. We will take a look at how log management can help you monitor for such occurrences.
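A minimal sketch of the idea — comparing today’s observed (host, port) pairs against an approved baseline — might look like the following; the baseline and observed sets here are illustrative, and in practice would be extracted from firewall or flow logs:

```python
def new_ports(baseline, observed):
    """Return (host, port) pairs seen in today's traffic that are not in the
    approved baseline -- candidates for 'something escaped internal controls'."""
    return sorted(set(observed) - set(baseline))

# Ports each host is expected to use (an assumption for illustration)
baseline = {("web01", 80), ("web01", 443), ("db01", 1433)}

# Pairs extracted from today's firewall or netflow logs
observed = {("web01", 80), ("web01", 443), ("db01", 1433), ("web01", 6667)}

for host, port in new_ports(baseline, observed):
    print(f"Investigate: {host} is now sending/receiving traffic on port {port}")
```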

By Ananth

Doing the obvious – Why efforts like the Consensus Audit Guidelines are valuable


I came across this interesting (and scary if you are a business person) article in the Washington Post. In a nutshell pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium sized companies. They do a targeted email attack on the finance guys and get them to click on a bogus attachment – when they do so, key logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.

The article is definitely worth a read for a number of reasons, but what I found surprising was, first, that businesses do not have the same protection from electronic fraud as consumers do, so the banks don’t monitor commercial account activity as closely; and second, just how often this type of attack is happening. It turns out businesses have only 2 days to report fraudulent activity instead of a consumer’s 60 days, so businesses that suffer a loss usually don’t recover their money.

My first reaction was to ring up our finance guys and tell them about the article. Luckily their overall feel was that since Marketing spent the money as quickly as the Company made it, we were really not too susceptible to this type of attack as we had no money to steal – an unanticipated benefit of a robust (and well paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.

My other thought was how this demonstrated the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking-and-tackling activity. CAG Control 12 talks about monitoring for active and updated anti-virus and anti-spyware on all systems. Basic, but it really helps – remember, a business does not have 60 days but 2. You cannot afford to notice the malware a week after the signatures finally get updated.

There are a number of other capabilities in advanced tools such as EventTracker that can also really help prevent these attacks – change monitoring, tracking first-time executable launches, monitoring that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day. If you can’t do it all, at least start with the obvious – you might not be safe, but you will be safer.

Steve Lafferty

100 Log Management Uses #47 Malware defense (CAG control 12)


Today we continue our journey through the Consensus Audit Guidelines with a look at CAG 12 — Malware Defense. When people think about the pointy end of the stick for Malware prevention they typically think anti-virus, but log management can certainly improve your chances by adding defense in depth. We also examine some of the additional benefits log management provides.

By Ananth

Managing the virtualized enterprise; historic NIST recommendations and more


Every drop in the business cycle brings out the ‘get more value for your money’ strategies. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with fast initial payback that can be used to solve a wide range of other problems. This series looks at how different log management tasks can be applied to solve a wider range of problems beyond the traditional compliance and security drivers, so that companies can get more value for their IT money.

100 Log Management Uses #46 Account Monitoring (CAG control 11)


Today’s Consensus Audit Guideline control is a good one for logs — account monitoring. Account monitoring should go well beyond simply having a process to get rid of invalid accounts. Today we look at tips and tricks on things to look for in your logs, such as excessive failed access to folders or machines, inactive accounts becoming active, and other outliers that are indicative of an account being hijacked.

By Ananth

100 Log Management Uses #45 Continuous vulnerability testing and remediation (CAG control 10)


Today we look at CAG Control 10 — continuous vulnerability testing and remediation. For this control, vulnerability scanning tools like Rapid7 or Tenable are the primary solutions, so how do logs help here? The reality is that most enterprises can’t patch critical infrastructure on a constant basis. There is often a fairly lengthy gap between when you have a known vulnerability and when the fix is applied and so it becomes even more important to monitor logs for system access, anti-virus status, changes in configuration and more.

By Ananth

100 Log Management Uses #44 Data access (CAG control 9)


We continue our journey through the Consensus Audit Guidelines and today look at Control 9 – data access on a need-to-know basis. Logs help with monitoring the enforcement of these policies, and user activities such as file and folder access, along with trends in that activity, should all be watched closely.

By Ananth

100 Log Management Uses #43 Administrator privileges and activities (CAG control 8)


Today’s CAG control is a good one for logs – monitoring administrator privileges and activities. As you can imagine, when an Admin account is hacked or an Admin goes rogue, the power of the account means the impact of the breach can be devastating. Luckily most Admin activity is logged, so by analyzing the logs you can do a pretty good job of detecting problems.

By Ananth

100 Log Management Uses #41 Application Security (CAG control 7)


Today we move on to the Consensus Audit Guidelines’ Control #7 on application security. The best approach to application security is to design it in from the start, but web applications are vulnerable in several fairly common ways, many of which can lead to attacks that can be detected by analyzing web server logs.

By Ananth

100 Log Management Uses #40 Monitoring Audit Logs (CAG control 6)


Today on CAG we look at a dead obvious one for logging — monitoring audit logs! It is nice to see that the CAG authors put real value on reviewing audit logs. We certainly believe it is a valuable exercise.

– By Ananth

100 Log Management Uses #39 Boundary defense (CAG control 5)


Today, after a brief holiday (it is summer, after all), we continue our look at the SANS Consensus Audit Guidelines (CAG). This time we look at something very well suited to logs — boundary defense. Hope you enjoy it.

– By Ananth

EventTracker 6.3 review; Getting more from Log Management; Correlation techniques and more


Smart Value: Getting more from Log Management

Every dip in the business cycle brings out the ‘get more value for your money’ strategies, and our current “Kingda Ka style” economic drop only increases the urgency of implementing them. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with fast initial payback that can be used to solve a wide range of other problems.

100 Log Management Uses #38 Meeting CAG controls 3 & 4


Today we continue our look at the Consensus Audit Guidelines, in this case CAG Controls 3 and 4 for maintaining secure configurations on system and network devices. We take a look at how log and configuration monitoring can ensure that configurations remain secure by detecting changes in the secured state.

By Ananth

100 Log Management Uses #37 Consensus Audit Guidelines (CAG) controls 1 and 2


Today we start in earnest on our Consensus Audit Guidelines (CAG) series by taking a look at CAG 1 and 2. Not hugely interesting from a log standpoint but there are some things that log management solutions like EventTracker can help you with.

By Ananth

100 Log Management uses #36 Meeting the Consensus Audit Guidelines (CAG)


Today we are going to begin another series on a standard that leverages logs. The Consensus Audit Guidelines, or CAG for short, is a joint initiative of SANS and a number of federal CIOs and CISOs to put in place some lower-level guidelines for FISMA. One of the criticisms of FISMA is that it is very vague and implementation can be very different from agency to agency. The CAG is a series of recommendations that make it easier for IT to make measurable improvements in security by knocking off some low-hanging targets. There are 20 CAG recommended controls and 15 of them can be automated. Over the next few weeks we will look at each one. Hope you enjoy it.

By Ananth

New NIST recommendations; Using Log Management to detect web vulnerabilities and more


Log and security event management tame the wild west environment of a university network

Being a network administrator in a university environment is no easy task. Unlike the corporate world, a university network typically has few restrictions over who can gain access; what type or brand of equipment people use at the endpoint; how those endpoint devices are configured and managed; and what users do once they are on the network.

100 Log Management uses #35 OWASP web vulnerabilities wrap-up


We have been talking a lot recently about web vulnerabilities, specifically the OWASP Top 10 list. We have covered how logs can help detect signs of web attacks in OWASP A1 through A6. A7 – A10 cannot be detected by logging, but in this wrap-up of the OWASP series we’ll take a look at them.

-By Ananth

100 Log Management uses #34 Error handling in the web server


Today we conclude our series on OWASP vulnerabilities with a look at A6 — error handling in the web server. Careless configuration — or no configuration at all — of error handling in a web server gives a hacker quite a lot of useful information about the structure of your web application. While careful configuration can take care of many issues, hackers will still probe your application, deliberately triggering error conditions to see what information is there to be had. In this video we look at how you can use web server logs to detect whether you are being probed by a potential hacker.
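As a rough illustration of the technique (not the report shown in the video), the sketch below counts 4xx/5xx responses per client IP in a Common Log Format access log; the file name and alert threshold are assumptions:

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "request" status bytes
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+')

def error_probers(log_lines, threshold=20):
    """Count 4xx/5xx responses per client IP; a large number of deliberately
    triggered error conditions from one address often indicates probing."""
    errors = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(2)[0] in "45":
            errors[m.group(1)] += 1
    return [(ip, n) for ip, n in errors.most_common() if n >= threshold]

with open("access.log") as f:          # path is illustrative
    for ip, count in error_probers(f):
        print(f"{ip} triggered {count} error responses -- possible probing")
```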

-By Ananth

100 Log Management uses #33 Detecting and preventing cross site request forgery attacks


Today’s video blog continues our series on web vulnerabilities. We look at OWASP A5 — cross site request forgery hacks and we discuss ways that Admins can help both prevent these attacks and detect them when they do occur.

-By Ananth

100 Log Management uses #32 Detecting insecure object references


Continuing on our OWASP series, today we look at Vulnerability A4, using object references to grab important information, and how logs can be used by Admins to detect signs of these attacks. We also look at some best practices you can employ on your servers to make these attacks more difficult.

By Ananth

100 Log Management uses #31 Detecting malicious file execution in the web server


Today’s video continues our series on web vulnerabilities. We look at OWASP A3 — malicious code execution attacks in the web server — and discuss ways that Admins can help both prevent these attacks and detect them when they do occur.

-By Ananth

Compromise to discovery


The Verizon Business Risk Team publishes a useful Data Breach Investigations Report drawn from over 500 forensic engagements over a four-year period.

The report describes a “Time Span of Breach” event broken into four stages of an attack. These are:

– Pre-Attack Research
– Point of Entry to Compromise
– Compromise to Discovery
– Discovery to Containment

The top two are under the control of the attacker, but the rest are under the control of the defender. Where log management is particularly useful is in discovery. So what does the 2008 version of the DBIR show about the time between Compromise and Discovery? Months. Sigh. Worse yet, in 70% of the cases, Discovery was the victim being notified by someone else.

Conclusion? Most victims do not have sufficient visibility into their own networks and equipment.

It’s not hard but it is tedious. The tedium can be relieved, for the most part, by a one-time setup and configuration of a log management system. Perhaps not the most exciting project you can think of but hard to beat for effectiveness and return on investment.

Ananth

100 Log Management uses #30 Detecting Web Injection Attacks


Today’s Log Management use case continues our look at web vulnerabilities from the OWASP website. We will look at vulnerability A2, and how injection techniques, particularly SQL injection, can be detected by analyzing web server log files.
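A bare-bones illustration of the pattern-matching approach is shown below; the signatures and file name are examples only, and real detection needs a much richer signature set than this:

```python
import re
from urllib.parse import unquote_plus

# Signatures commonly seen in SQL injection probes (illustrative, not exhaustive)
SQLI_PATTERNS = re.compile(
    r"(\bunion\b.+\bselect\b|\bor\b\s+1\s*=\s*1|--|xp_cmdshell|\bdrop\b\s+\btable\b)",
    re.IGNORECASE,
)

def suspicious_requests(log_lines):
    """Yield decoded request lines whose query strings look like SQL injection."""
    for line in log_lines:
        decoded = unquote_plus(line)     # attackers usually URL-encode payloads
        if SQLI_PATTERNS.search(decoded):
            yield decoded.strip()

with open("access.log") as f:            # path is illustrative
    for hit in suspicious_requests(f):
        print("Possible SQL injection attempt:", hit)
```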

By Ananth

100 Log Management uses #29 Detecting XSS attacks


Today we begin our series on web vulnerabilities. The number 1 vulnerability on the OWASP list is cross-site scripting, or XSS. XSS seems to have replaced SQL injection as the new favorite of web attackers. We look at using web server logs to detect signs of these XSS attacks.

-Ananth

EventTracker gets 5 star review; 100 Log Management uses and more


Have your cake and eat it too – improve IT security and comply with multiple regulations while reducing operational costs and saving money

Headlines don’t lie. The number and severity of security breaches suffered by companies has consistently increased over the past couple of years, and statistics show that 9 out of 10 businesses will suffer an attack on their corporate network in 2009.

100 Log Management uses #28 Web application vulnerabilities


During my recent restful vacation down in Cancun I was able to reflect a bit on a pretty atypical use of logs. This actually turned into a series of 5 entries that look at using logs to trace web application vulnerabilities, using the OWASP Top 10 Vulnerabilities as a base. Logs may not catch all of the OWASP Top 10, but there are 5 that you can use logs to look for — and by periodic review ensure that your web applications are not being hacked. This is the intro. Hope you enjoy them.

-Ananth

100 Log Management Uses #27 Printer logs


Back from my vacation and back to logs and log use cases! Here is a fairly obvious one — using logs to manage printers. In this video, we look at the various events generated on Windows and what you can do with them.

-Ananth

Logs and forensics, a lesson in compliance and more


How logs support data forensics investigations

Novak and his team have been involved in hundreds of investigations employing data forensics. He says log data is a vital resource in discovering the existence, extent and source of any security breach. “Computer logs are central and pivotal components to any forensic investigation,” according to Novak. “They are a ‘fingerprint’ that provides a record of computer and system activities that may demonstrate a data leak or security breach.” The incriminating activities might include failed login attempts…

Some thoughts on SAAS


A few months ago I wrote some thoughts on cloud security and compliance. The other day I came across this interesting article in Network World about SaaS security, and it got me thinking on the subject again. The Burton analyst quoted, Eric Maiwald, made some interesting and salient points about the challenges of SaaS security, but he stopped short of explicitly addressing compliance issues. If you have a SaaS service and you are subject to any one of the myriad compliance regulations, how will you demonstrate compliance if the SaaS app is processing critical data subject to the standard? And is the vendor passing a SAS-70 audit going to satisfy your auditors and free you of any compliance requirement?

Mr. Maiwald makes a valid point that you have to take care in thinking through the security requirements and put them in the contract with the SaaS vendor. The same holds true for any compliance requirement. But he raises an even more critical point when he states that SaaS vendors want to offer a one-size-fits-all offering (rightly so, or else I would put forward we would see a lot of belly-up SaaS vendors). My question then becomes: how can an SME that is subject to compliance mandates, but lacks the purchasing power to negotiate a cost-effective agreement with a SaaS vendor, take advantage of the benefits such services provide? Are we looking at one of those chicken-and-egg situations where the SaaS vendors don’t see the demand because the very customers they would serve are unable to use their service without this enabling capability? At the very least I would think that SaaS vendors would benefit from building in the same audit capability that other enterprise application vendors are adding, and making it available (maybe for a small additional fee) to their customers. Perhaps it could be as simple as user and admin activity auditing, but it seems to me a no-brainer – if a prospect is going to let critical data and services go outside their control, they are going to want the same visibility they had when it all resided internally, or else it becomes a non-starter until the price is driven so far down that reward trumps risk. Considering we will likely see more regulation, not less, in the future, that price may well be pretty close to zero.

– Steve Lafferty

Log Monitoring – real time or bust?


As a vendor of a log management solution, we come across prospects with a variety of requirements — consistent with a variety of needs and views of approaching problems.

Recently, one prospect was very insistent on “real-time” processing. This is perfectly reasonable, but as with anything, when taken to an extreme it can become meaningless. In this instance, the “typical” use case (indeed the defining one) for the log management implementation was: “a virus is making its way across the enterprise; I don’t have time to search or refine or indeed take any (slow) user action; I need instant notification and the ability to sort data on a variety of indexes instantly”.

As vendors we are conditioned to think “the customer is always right” but I wonder if the requirement is reasonable or even possible. Given specifics of a scenario, I am sure many vendors can meet the requirement — but in general? Not knowing which OS, which attack pattern, how logs are generated/transmitted?

I was reminded again of this blog by Bejtlich, in which he explains that “If you only rely on your security products to produce alerts of any type, or blocks of any type, you will consistently be “protected” from only the most basic threats.”

While real-time processing of logs is a perfectly reasonable requirement, retrospective security analysis is the only way to get a clue as to attack patterns and therefore to mount a defense.

 Ananth

100 Log Management uses #26 MS debug logs-Part II


Today we continue our earlier look at Microsoft debug logs, this time examining logs from the Time and Task Scheduler services.

-By Ananth

100 Log Management uses #25 MS debug logs


MS debug logs are pretty arcane stuff, but sysadmins occasionally need to get deep into OS services such as Group Policy to debug problems in the OS. Logging for most of these services has to be turned on in the registry, as there is generally a performance penalty. We are going to look at a few examples over the next couple of days. Today we look at logs that are important on some older operating systems, while next time we look at services such as Time and Task Scheduler that are most useful in the later Microsoft versions.

-By Ananth

100 Log Management uses #24 404 errors


Today’s log tip is a case of a non-obvious, but valuable, use of log collection. Web server logs provide lots of good information for web developers; today we look at some of the interesting information contained in 404 errors.

-By Ananth

The blind spot of mobile computing; detecting a hack attempt and more


Overcoming the blind spot of mobile computing

For many organizations, mobile computing has become a strategic approach to improve productivity for sales professionals, knowledge workers and field personnel. As a result, the Internet has become an extension of the corporate network. Mobile and remote workers use the Internet as the means to access applications and resources that previously were only available to “in-house” users – those who are directly connected to the corporate network.

100 Log Management uses #23 Server shutdown


Today we look at monitoring server shutdowns. Typically I would recommend that you set up an alert from your log management solution that immediately notifies you if any critical production server is shut down or restarted, but even for non-critical servers it is wise to check on occasion what is going on. I do it on a weekly basis — servers shutting down can happen normally (Windows Update, maintenance, etc.), but it can also indicate crashes and instability in the machine, or someone simply screwing around; and by eyeballing a short report (it should be short) you will be able to quickly spot any odd patterns.

100 Log Management uses #22 After hours login


Today we use logs to do a relatively easy check for unusual activity – in this case, after-hours log-ons. If your organization is mostly day shift, for example, your typical users will not be logging in after hours, and if they are, that is something worth checking out. This kind of simple analysis is a quick and easy way to look for unusual patterns of activity that could indicate a security problem.
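A simple sketch of the check, assuming the logon events have already been exported to a CSV with timestamp, user and workstation columns (the column names and business hours are assumptions, not a fixed format):

```python
import csv
from datetime import datetime

BUSINESS_START, BUSINESS_END = 7, 19        # 07:00-19:00, Monday-Friday (assumption)

def after_hours_logons(csv_path):
    """Read an exported list of logon events (timestamp, user, workstation)
    and return those that happened outside normal working hours."""
    flagged = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
            if ts.weekday() >= 5 or not (BUSINESS_START <= ts.hour < BUSINESS_END):
                flagged.append((ts, row["user"], row["workstation"]))
    return flagged

for ts, user, host in after_hours_logons("logons.csv"):   # file layout is hypothetical
    print(f"{ts}: {user} logged on to {host} outside business hours")
```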

-By Ananth

100 Log Management uses #21 File deletes


Today’s use case is a good one. Windows makes it hard and resource-expensive to track file deletes, but there are certain directories (in our case, our price and sales quote folders) that files should not be deleted from. By making use of Object Access Auditing and a good log analysis solution, you can pull a lot of valuable information from the logs that indicates unwarranted file deletions.
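To make the idea concrete, here is a small sketch that scans exported object-access audit events for deletes under protected folders; the folder paths, CSV layout and event IDs (4660/4663 on recent Windows versions) are assumptions and should be adjusted to what your environment actually logs:

```python
import csv

PROTECTED_FOLDERS = (r"\\fileserver\quotes", r"\\fileserver\pricing")   # assumption

def unwarranted_deletes(csv_path):
    """Scan exported object-access audit events for deletes under protected folders.

    Assumes each row has: event_id, object_name, user, timestamp.
    On recent Windows versions delete activity surfaces as IDs 4660/4663;
    adjust for the IDs your OS actually logs.
    """
    prefixes = tuple(p.lower() for p in PROTECTED_FOLDERS)
    hits = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event_id"] in ("4660", "4663") and \
               row["object_name"].lower().startswith(prefixes):
                hits.append((row["timestamp"], row["user"], row["object_name"]))
    return hits

for when, who, what in unwarranted_deletes("object_access.csv"):
    print(f"{when}: {who} deleted or attempted to delete {what}")
```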

– By Ananth

Famous Logs


The Merriam Webster dictionary defines a log as “a record of performance, events, or day-to-day activities”. Though we think of logs in the IT context, over the years many famous logs have been written. Here are some of my favorites:

Dr Watson who logged the cases of Sherlock Holmes

The Journals of Lewis and Clark, one of the greatest voyages of discovery in human history.

The Motorcycle Diaries: Notes on a Latin American Journey

Fictional Prof. Pierre Aronnax chronicled the fantastic travels of Capt. Nemo in Jules Verne’s 20,000 Leagues Under the Sea

Diary of a Young Girl by Anne Frank, a vivid, insightful journal and one of the most moving and eloquent documents of the Holocaust.

Personal logs from captains of the Enterprise (Kirk, Picard, Janeway).

Samuel Pepys, the renowned 17th century diarist who lived in London, England.

The record by Charles Darwin, of his trip on the HMS Beagle

Bridget Jones’s Diary by Helen Fielding

Ananth

100 Log Management uses #20 Solaris BSM system boots


Today is another Solaris BSM example. The Basic Security Module of Solaris audits all system boots, and it is good practice to have checks in place to ensure that these critical systems are only being restarted at the correct times. Any unexpected activity is something that should be investigated.

– By Ananth

100 Log Management uses #19 Account Management


Today’s look at logs illustrates a typical use case of using logs to review for unexpected behavior. Within Active Directory you have users and groups that are created, deleted and modified. It is always a good idea to go in and review the activities of your domain admins, just to be sure they match what you feel should be occurring. If they differ, that is something to investigate further.

– By Ananth

100 Log Management uses #18 Account unlock by admin


Today we look at something a little different – reviewing admin activity for unlocking accounts. Sometimes a lockout occurs simply because a user has fat fingers, but often accounts are locked on purpose, and unlocking one of these should be reviewed to see why it was done.

100 Log Management uses #17 Monitoring Solaris processes


The Solaris operating system has some interesting daemons that warrant paying attention to. Today’s log use case examines monitoring processes like sendmail, auditd and sadm, to name a few.

Security threats rise in recession; Comply, secure and save with Log Management


How LM / SIEM plays a critical role in the integrated system of internal controls

Many public companies are still grappling with the demands of complying with the Sarbanes-Oxley Act of 2002 (SOX). SOX Section 404 dictates that audit functions are ultimately responsible for ensuring that financial data is accurate. One key aspect of proof is the absolute verification that sufficient control has been exercised over the corporate network where financial transactions are processed and records are held.

100 Log Management uses #16 Patch updates


I recorded this on Wednesday — the day after Patch Tuesday — so, fittingly, we are going to look at using logs to monitor Windows Updates. Not being up to date on the latest patches leaves security holes, but with so many machines and so many patches it is often difficult to keep up with them all. Using logs helps.

100 Log Management uses #15 Pink slip null


Today is a depressing log discussion but certainly a sign of the times. When companies are going through reductions in force, IT is called upon to ensure that the company’s IP is protected. This means that personnel no longer with the company should no longer have access to corporate assets. Today we look at using logs to monitor for any improper access.

-Ananth

100 Log Management uses #14 SQL login failure


Until now, we have been looking mostly at system, network and security logs. Today, we shift gears and look at database logs, more specifically user access logs in SQL Server.

-By Ananth

100 Log Management uses #13 Firewall traffic analysis


Today, we stay on the subject of Firewalls and Cisco PIX devices in particular. We’ll look at using logs to analyze trends in your firewall activity to quickly spot anomalies.

-By Ananth

100 Log Management uses #12 Firewall management


Today’s and tomorrow’s posts look at your firewall. There should be few changes to your firewall and even fewer people making those changes. Changing firewall permissions is likely the easiest way to open up the most glaring security hole in your enterprise. It pays to closely monitor who makes changes and what the changes are, and today we’ll show you how to do that.

-By Ananth

100 Log Management uses #11 Bad disk blocks


I often get the feeling that one of these days I am going to fall victim to disk failure. Sure, most times it is backed up, but what a pain. And it always seems as though the backup was done right before you made those modifications yesterday. Monitoring bad disk blocks on devices is an easy way to get an indication that you have a potential problem. Today’s use case looks at this activity.

– By Ananth

100 Log Management uses #10 Failed access attempts


Today we are going to look at a good security use case for logs – reviewing failed attempts to access shares. Sometimes failed attempts to access directories or shares are simply clumsy typing, but often they are an attempt by internal users or hackers to snoop in places they have no need to be.

100 Log Management uses #9 Email trends


Email has become one of the most important communication methods for businesses — for better or worse! Today we look at using logs from an ISP mail service to get a quick idea of overall trends and availability. Hope you enjoy it.

-By Ananth

100 Log Management uses #8 Windows disk space monitoring


Today’s tip looks at using logs for monitoring disk usage and trends. Many Windows programs (like SQL Server, for example) count on certain amounts of free space to operate correctly, and in general when a Windows machine runs out of disk space it handles the condition in a less than elegant manner. In this example we will see how reporting on free disk space and its trend gives you a quick and easy early warning system to keep you out of trouble.
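A crude sketch of the trend calculation — a linear projection over recent daily free-space readings — is shown below; the sample numbers and warning window are illustrative:

```python
def days_until_full(samples, free_now, warn_days=14):
    """Crude linear trend: given recent daily free-space readings (GB), estimate
    how many days remain before the disk fills, and warn if under warn_days."""
    if len(samples) < 2:
        return None
    daily_change = (samples[-1] - samples[0]) / (len(samples) - 1)
    if daily_change >= 0:
        return None                      # not shrinking, nothing to project
    days_left = free_now / -daily_change
    return days_left if days_left <= warn_days else None

# Free space readings (GB) collected from daily log reports for one volume
readings = [120, 112, 101, 95, 88, 80, 71]
estimate = days_until_full(readings, free_now=readings[-1])
if estimate is not None:
    print(f"WARNING: volume projected to fill in about {estimate:.0f} days")
```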

100 Log Management uses #7 Windows lockout


A couple of days ago we looked at password resets; today we are going to look at something related – account lockouts. This is something that is relatively easy to check – you’ll see many caused by fat fingers, but when you start seeing lots of lockouts, especially admin lockouts, it is something you need to be concerned about.

-Ananth

Learning from Walmart


H. Lee Scott, Jr. is the current CEO of WalMart. On Jan 14, 2009, he reflected on his 9-year tenure as CEO as a guest on the Charlie Rose show.

Certain basic truths, that we all know but bear repeating, were once again emphasized. Here are my top takeaways from that interview:

1) Listen to your customers, listen harder to your critics/opponents, and get external points of view. WalMart gets a lot of negative press, and new store locations often generate bitter opposition from some locals. However, the majority (who vote with their dollars) would appear to favor the store. WalMart’s top management team, who consider themselves decent and fair business people, with an offering that the majority clearly prefers, were unable to understand the opposition. Each side retreated to their trenches and dismissed the other. Scott described how members of the board, with external experience, were able to get Wal-Mart management to listen carefully to what the opposition was saying and, with dialog, help mitigate the situation.

2) Focus like a laser on your core competency. Walmart excels at logistics, distribution and store management — the core business of retailing. It is, however, a low-margin business. With its enormous cash reserves, should Wal-Mart go into other areas, e.g. product development, where margins are much higher? While it’s tempting, remember “Jack of all trades, master of none”? 111th Congress?

3) Customers will educate themselves before shopping. In the Internet age, expect everybody to be better educated about their choices. This means, if you are fuzzy on your own value proposition and cannot articulate it well on your own product website, then expect to do poorly.

4) In business – get the 80% stuff done quickly. We all know that the first 80% goes quickly; it’s the remaining 20% that is hard and gets progressively harder (Zeno’s Paradox). After all, more than 80% of code consists of error handling. While that 20% is critical for product development, it’s the big 80% done quickly that counts in business (and in government/policy).

The fundamentals are always hard to ingrain – eat in moderation, exercise regularly and all that. Worth reminding ourselves in different settings on a regular basis.

Ananth

100 Log Management uses #6 Password reset


Today we look at password reset logs. Generally the first thing a hacker does when hijacking an account is to reset the password. Any reset is therefore worth investigating, and multiple password resets on one account even more so.

-By Ananth

100 Log Management uses #5 Outbound Firewall traffic


A couple of days ago we looked at monitoring incoming firewall traffic. In many cases outbound traffic is as much of a risk as incoming. Once hackers penetrate your network they will try to gather information through spyware and attempt to get that information out. Also, outbound connections often chew up bandwidth — file sharing is a great example of this. We had a customer that could not figure out why his network performance was so degraded — it turned out to be an internal machine acting as a file-sharing server. Looking at the logs uncovered this.
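The analysis itself is simple once the firewall logs are parsed; here is a sketch that sums outbound bytes per internal host and lists the top talkers (the flow records below are made up for illustration):

```python
from collections import Counter

def top_talkers(flows, top_n=5):
    """Sum outbound bytes per internal source address from firewall/flow logs.

    flows: iterable of (src_ip, dst_ip, bytes_out) tuples already parsed from logs.
    """
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(top_n)

# Illustrative parsed records: the file-sharing box stands out immediately
flows = [
    ("10.0.0.15", "203.0.113.9", 48_000_000_000),   # suspiciously large
    ("10.0.0.21", "198.51.100.4", 150_000_000),
    ("10.0.0.33", "192.0.2.77", 90_000_000),
]
for host, total in top_talkers(flows):
    print(f"{host}: {total / 1e9:.1f} GB outbound")
```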

By Ananth

100 Log Management uses #4 Solaris BSM SU access failure


Today is a change of platform — we are going to look at how to identify Super User access failures on Solaris BSM systems. It is critical to watch for SU login attempts, since once someone is in at the SU or root level, the keys to the kingdom are in their pocket.

-By Ananth

100 Log Management uses – #3 Antivirus update


Today we are going to look at how you can use logs to ensure that everyone in the enterprise has gotten their automatic antivirus update. One of the biggest security holes in an enterprise is individuals who don’t keep their machines updated, or who turn auto-update off. In this video we will look at how you can quickly identify machines that are not updated to the latest AV definitions.
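One way to approach it, sketched below, is to harvest the date of the last successful definition update per machine from the AV logs and flag anything stale or silent; the host names, dates and staleness threshold are assumptions:

```python
from datetime import date

MAX_AGE_DAYS = 3          # how stale a definition set may be before we flag it

def stale_av(machines, today=None):
    """machines: dict of hostname -> date of last successful AV definition update,
    harvested from the AV product's update logs. Returns hosts that are out of date
    or have auto-update apparently switched off (no update recorded)."""
    today = today or date.today()
    flagged = []
    for host, last_update in machines.items():
        if last_update is None or (today - last_update).days > MAX_AGE_DAYS:
            flagged.append(host)
    return sorted(flagged)

inventory = {
    "ws-accounting-01": date(2009, 7, 30),
    "ws-sales-07": date(2009, 7, 12),     # weeks behind
    "laptop-exec-02": None,               # never reported an update
}
print("Machines needing attention:", stale_av(inventory, today=date(2009, 8, 1)))
```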

-By Ananth

100 Log Management uses – #2 Active Directory login failures


Yesterday we looked at firewalls; today we are shifting gears and looking at leveraging logs from Active Directory. Hope you enjoy it.

– By Ananth

100 Log Management uses – #1 Firewall blocks


…and we’re back, with use-case #1 – Firewall Blocks. In this video, I will talk about why it’s important to not just block undesirable connections but also monitor traffic that has been denied entry into your network.

By Ananth

100 uses of Log Management – Series


Here at Prism we think logs are cool, and that log data can provide valuable intelligence on most aspects of your IT infrastructure – from identifying unusual patterns that indicate security threats, to alerting on changes in configuration data, to detecting potential system downtime issues, to monitoring user activity. Essentially, Log Management is like a Swiss Army knife or even duct tape — it has a thousand and one applications.

Over the next 100 days, as the new administration takes over here in Washington DC, Ananth, the CEO of Prism Microsystems, will present the 100 most critical use-cases of Log Management in a series of videos focusing on real-world scenarios.

Watch this space for more videos, and feel free to rank and comment on your favorite use-cases.

By Ananth

The IT Swiss army knife; EventTracker 6.3 and more


Log Management can find answers to every IT-related problem

Why can I say that? Because I think most problems get handled the same way. The first stage is someone getting frustrated with the situation. They then use tools to analyze whatever data is accessible to them. From this analysis, they draw some conclusions about the problem’s answer, and then they act. Basically, finding answers to problems requires the ability to generate intelligence and insight from raw data.

Extreme logging or Too Much of a Good Thing


Strict interpretations of compliance policy standards can lead you up the creek without a paddle. Consider two examples:

  1. From PCI-DSS comes the prescription to “Track & monitor all access to network resources and cardholder data”. Extreme logging is when you decide this means a db audit log larger than the db itself plus a keylogger to log “all” access.
  2. From HIPAA 164.316(b)(2) comes the Security Rule prescription to “Retain … for 6 years from the date of its creation or the date when it last was in effect, whichever is later.” Sounds like a boon for disk vendors and a nightmare for providers.

Before you assault your hair follicles, consider:
1) In clarification, Visa explains “The intent of these logging requirements is twofold: a) logs, when properly implemented and reviewed, are a widely accepted control to detect unauthorized access, and b) adequate logs provide good forensic evidence in the event of a compromise. It is not necessary to log all application access to cardholder data if the following is true (and verified by assessors):
– Applications that provide access to cardholder data do so only after making sure the users are authorized
– Such access is authenticated via requirements 7.1 and 7.2, with user IDs set up in accordance with requirement 8, and
– Application logs exist to provide evidence in the event of a compromise.”

2) The Office of the Secretary of HHS waffles when asked about retaining system logs – this can be reasonably interpreted to mean the six-year standard need not be taken literally for all system and network logs.

Ananth
