100 Log Management uses #18 Account unlock by admin

Today we look at something a little different – reviewing admin activity for unlocking accounts. Sometimes a lockout occurs simply because a user has fat fingers, but accounts are often locked on purpose, and unlocking one of these should be reviewed to see why.
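
The review described above can be sketched in a few lines of Python. The record layout and the use of Windows event ID 4767 (account unlocked) are illustrative assumptions, not a specific product's output format:

```python
# Hypothetical sketch: pull account-unlock events out of exported log
# records so each one can be reviewed. The event ID and record layout
# are assumptions for illustration.
UNLOCK_EVENT_ID = 4767

def unlocks_to_review(events):
    """Return (admin, target) pairs for every account-unlock event."""
    return [(e["admin"], e["target"]) for e in events
            if e["event_id"] == UNLOCK_EVENT_ID]

events = [
    {"event_id": 4767, "admin": "jsmith", "target": "svc_payroll"},
    {"event_id": 4624, "admin": "-", "target": "bob"},  # ordinary logon
]

for admin, target in unlocks_to_review(events):
    print(f"review: {admin} unlocked {target}")
```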

100 Log Management uses #17 Monitoring Solaris processes

The Solaris operating system has some interesting daemons that warrant paying attention to. Today's log use case examines monitoring processes such as sendmail, auditd and sadm, to name a few.

Security threats rise in recession: Comply, secure and save with Log Management

How LM / SIEM plays a critical role in the integrated system of internal controls

Many public companies are still grappling with the demands of complying with the Sarbanes-Oxley Act of 2002 (SOX). SOX Section 404 dictates that audit functions are ultimately responsible for ensuring that financial data is accurate. One key aspect of proof is the absolute verification that sufficient control has been exercised over the corporate network where financial transactions are processed and records are held.

Where do auditors find that proof? In the data points logged by today’s SIEM tools, of course.

The logged data is a pure treasure trove of information that provides insight into every aspect of an organization’s information technology (IT) operations. As a compensating / detective control, the data is an integral part of an organization’s overall system of internal controls. Moreover, depending on the tools being utilized, the data also can be the starting point of a preventative control.

The proper distillation of critical log data is a bit like looking at a very large haystack and helping the auditor determine if a needle (i.e., a violation of a control) is buried within. A perspective of what guides the audit function as it pertains to SOX will help to explain the search for the elusive needle, if it even exists.

The COSO control framework guides the SOX audit function

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) is a U.S. private-sector initiative whose major objective is to identify the factors that cause fraudulent financial reporting and to make recommendations to reduce its incidence. In 1992, COSO established a common definition of internal controls, standards and criteria against which companies and organizations can assess their control systems. This widely used framework provides a corporate governance model, a risk model and control components that together form the blueprint for establishing internal controls that minimize risk, help ensure the reliability of financial statements, and comply with various laws and regulations.

COSO is a general framework that is not specific to the IT area of a company — or to any other functional area, for that matter. However, the COSO framework can be, and often is, applied specifically to IT processes and controls that are governed by SOX Section 404 compliance, the Assessment of Internal Control for all controls related to financial data and reporting.

According to the COSO framework, internal controls consist of five interrelated components. These components are derived from the way management runs a business and are integrated with the organization’s management processes. The components are: the Control Environment, Risk Assessment, Control Activities, Information and Communication, and Monitoring. And, as described below, log management has a crucial role in each of them.

  • The Control Environment – Coming from the Board of Directors and the executive management, a company’s control environment sets the tone of how the organization will conduct its business, thereby influencing the control consciousness of the entire workforce. The control environment provides discipline and structure, and includes factors such as corporate integrity, ethical values, management’s operating style, delegation of authority systems, and the processes for managing and developing people in the organization.

Log management aids corporate management in designing, implementing, and refining controls via its ability to establish a baseline, or snapshot, of an organization’s IT infrastructure and its activities; for example, knowing what devices exist, what applications are running on them, and who is accessing the applications.

  • Risk Assessment – Every organization has business objectives; for example, to produce a product or provide a service. Likewise, every organization faces a variety of risks to meeting its objectives. The risks, which come from both internal and external sources, must be identified and assessed. This risk assessment process is a prerequisite for determining how the risks should be managed.

Log data/management is a starting point of the iterative IT risk management process by providing baseline and near real-time insight into the condition of an organization’s infrastructure. This helps the company identify and assess the risks that may threaten the business objectives and provides the opportunity for the revision of an organization’s acceptable risk posture. And then with a continual feed, log data can be used to ascertain current conditions and to alert someone to the need for appropriate corrective action to mitigate a risk if one arises.

  • Control Activities – Control activities are the policies and procedures that help ensure management directives are carried out and that necessary actions are taken to address the risks to achieving the organization’s objectives. Control activities occur throughout the organization, at all levels and in all functions. Numerous control activities are utilized in the IT area, including access control, change control and configuration control, to name a few.

Log management provides automated event correlation/consolidation and reporting, thereby providing assurance that log data entries are presented to control stakeholders accurately and in a timely fashion. This reporting allows management to take corrective action if needed, as well as measure the effectiveness of designed processes and controls.

  • Information and Communication – Information systems play a key role in internal control systems as they produce reports including operational, financial and compliance-related information that make it possible to run and control the business. An effective communication system ensures that useful information is promptly distributed to the people who need it – outside as well as inside the organization – so they can carry out their responsibilities.

Within log management, this takes the form of automated generation and delivery of detail and summary reports and alerts of key events for appropriate management review and/or action.

  • Monitoring – Internal control systems need to be monitored – a process that assesses the quality of the system’s performance over time. This is accomplished through ongoing monitoring activities, separate evaluations or a combination of the two.

From a log manager’s view, “monitoring” is what he is doing on a daily basis – i.e., performing a “control activity.” From the COSO view, “monitoring” is the assessment of how well the control activities are performing. In other words, the latter is looking over the shoulder of the former to make sure the control activities are effective.

Once an organization has established its control structure(s), an auditor is charged with the independent review of the controls that have been implemented. He is ultimately responsible for assessing the effectiveness of the controls, including those IT controls designed to protect the accuracy and reliability of financial data. This is the heart of SOX Section 404.

A unified and comprehensive log management approach will continue to be the cornerstone of an IT organization’s control processes. It is the best way to get timely insight into all activities on the network that have a material impact on all systems, including financial systems.

Brian Musthaler, CISA – is a Principal Consultant with Essential Solutions Corp. A former audit and information systems manager, he directs the firm’s evaluations and analysis of enterprise applications, with a particular interest in security and compliance tools.

Industry News

PCI costs slow compliance projects in down economy
The economic recession is making it difficult for some information security pros in financial services to get the funding they need to accomplish their goals. A good example of a project that can help both the bottom line and PCI compliance is automated log management.

Security threats rise in recession
Threats to data and network security increase during tough times, even as scarce resources make companies more vulnerable to attack.

Did you know? EventTracker allows you to meet a large number of requirements while helping you cut costs and boost productivity. Comply with standards such as PCI-DSS, secure critical servers, protect against insider theft and optimize IT operations while saving money at the same time! Need hard numbers? Take a look at our ROI calculator.

Feds allege plot to destroy Fannie Mae data
A fired Fannie Mae contract worker pleaded not guilty Friday to a federal charge he planted a virus designed to destroy all the data on the mortgage giant’s 4,000 computer servers nationwide.

Did you know? Employees, especially disgruntled ones, can significantly increase the risk exposure of a company. EventTracker helps companies minimize this risk by tracking and alerting on all unusual/unauthorized user activity.

Prism Microsystems continues record revenue into 4th quarter
We had a great 4th quarter – get a recap of our performance and key product innovations in 2008

100 Log Management uses #16 Patch updates

I recorded this Wednesday — the day after Patch Tuesday — so, fittingly, we are going to look at using logs to monitor Windows Updates. Not being up to date on the latest patches leaves security holes, but with so many machines and so many patches it is often difficult to keep up with them all. Using logs helps.
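
One way to act on update logs is to diff the fleet against the machines that have logged a successful install of the latest patch. A minimal sketch, with fabricated KB numbers and hostnames:

```python
# Hedged sketch: given per-machine update-success events pulled from
# logs, list machines that never logged the latest patch. The KB number
# and event shape are invented for illustration.
LATEST_KB = "KB958644"

def missing_patch(update_events, hosts):
    patched = {e["host"] for e in update_events if e["kb"] == LATEST_KB}
    return sorted(set(hosts) - patched)

events = [{"host": "ws01", "kb": "KB958644"},
          {"host": "ws02", "kb": "KB951746"}]
print(missing_patch(events, ["ws01", "ws02", "ws03"]))  # ['ws02', 'ws03']
```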

100 Log Management uses #15 Pink slip null

Today is a depressing log discussion but certainly a sign of the times. When companies are going through reductions in force, IT is called upon to ensure that the company's IP is protected. This means that personnel no longer with the company should no longer have access to corporate assets. Today we look at using logs to monitor for any improper access.


100 Log Management uses #14 SQL login failure

Until now, we have been looking mostly at system, network and security logs. Today, we shift gears and look at database logs – more specifically, user access logs in SQL Server.
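
A tally of login failures per user is the natural first report here. The sketch below parses error-log text resembling SQL Server's "Login failed for user" messages; the sample lines are fabricated approximations, not captured output:

```python
import re

# Minimal sketch: count SQL Server login failures per user from
# error-log text. Sample lines approximate the real message wording
# but are fabricated for illustration.
FAIL_RE = re.compile(r"Login failed for user '([^']+)'")

def failures_by_user(lines):
    counts = {}
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

log = [
    "2009-02-10 09:12:01.53 Logon  Login failed for user 'sa'. Reason: ...",
    "2009-02-10 09:12:04.10 Logon  Login failed for user 'sa'. Reason: ...",
    "2009-02-10 09:13:22.71 Logon  Login succeeded for user 'app'.",
]
print(failures_by_user(log))  # {'sa': 2}
```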

-By Ananth

100 Log Management uses #13 Firewall traffic analysis

Today, we stay on the subject of Firewalls and Cisco PIX devices in particular. We’ll look at using logs to analyze trends in your firewall activity to quickly spot anomalies.
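
Spotting anomalies usually starts with a count of denies per source address. The sketch below parses lines modeled loosely on Cisco PIX %PIX-4-106023 deny messages; the format and addresses are approximations for illustration:

```python
import re

# Hedged sketch: count firewall deny messages per source IP so a spike
# from one address stands out. The line format approximates PIX syslog
# output; addresses are fabricated.
DENY_RE = re.compile(r"Deny \w+ src \w+:([\d.]+)")

def denies_by_source(lines):
    counts = {}
    for line in lines:
        m = DENY_RE.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

lines = [
    "%PIX-4-106023: Deny tcp src outside:10.1.1.5/4312 dst inside:192.168.1.10/80",
    "%PIX-4-106023: Deny tcp src outside:10.1.1.5/4313 dst inside:192.168.1.10/443",
    "%PIX-4-106023: Deny udp src outside:10.9.9.9/53 dst inside:192.168.1.20/53",
]
print(denies_by_source(lines))  # {'10.1.1.5': 2, '10.9.9.9': 1}
```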

-By Ananth

100 Log Management uses #12 Firewall management

Today’s and tomorrow’s posts look at your firewall. There should be few changes to your firewall and even fewer people making those changes. Changing firewall permissions is likely the easiest way to open up the most glaring security hole in your enterprise. It pays to closely monitor who makes changes and what the changes are, and today we’ll show you how to do that.
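
Since only a handful of people should ever touch the firewall, change monitoring can be as simple as comparing the user on each change-audit event against that short list. A sketch, with invented usernames and commands:

```python
# Illustrative sketch: flag firewall configuration changes made by
# anyone outside the authorized list. The authorized set, usernames
# and commands are assumptions for illustration.
AUTHORIZED = {"fwadmin1", "fwadmin2"}

def unauthorized_changes(change_events):
    return [e for e in change_events if e["user"] not in AUTHORIZED]

changes = [
    {"user": "fwadmin1", "cmd": "access-list outside_in permit tcp any host 10.0.0.5 eq 443"},
    {"user": "jdoe",     "cmd": "no access-list outside_in deny ip any any"},
]
print([e["user"] for e in unauthorized_changes(changes)])  # ['jdoe']
```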

-By Ananth

100 Log Management uses #11 Bad disk blocks

I often get the feeling that one of these days I am going to fall victim to disk failure. Sure, most times it is backed up, but what a pain. And it always seems as though the backup was done right before you made those modifications yesterday. Monitoring bad disk blocks on devices is an easy way to get an indication that you have a potential problem. Today's use case looks at this activity.
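
A rising bad-block count on one device is the early-warning signal. The sketch below tallies syslog-style messages per device; the message wording is representative, not an exact kernel format:

```python
# Hedged sketch: count bad-block messages per device from syslog-style
# lines. Assumes the device name is the last token on the line; the
# sample messages are fabricated.
def bad_blocks_by_device(lines):
    counts = {}
    for line in lines:
        if "bad block" in line.lower():
            dev = line.split()[-1]
            counts[dev] = counts.get(dev, 0) + 1
    return counts

lines = [
    "Feb 10 03:11:02 host1 kernel: Bad block detected on sda1",
    "Feb 10 03:11:09 host1 kernel: Bad block detected on sda1",
    "Feb 10 03:12:44 host1 kernel: usb device connected on port1",
]
print(bad_blocks_by_device(lines))  # {'sda1': 2}
```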

– By Ananth

100 Log Management uses #10 Failed access attempts

Today we are going to look at a good security use case for logs – reviewing failed attempts to access shares. Sometimes an attempt to access directories or shares is simply clumsy typing, but often it is an attempt by internal users or hackers to snoop in places they have no need to be.
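
Separating clumsy typing from snooping is mostly a matter of repetition: one denied access is noise, the same user denied again and again is worth a look. A sketch with an assumed event shape and an arbitrary threshold:

```python
# Illustrative sketch: surface users with repeated failed share-access
# attempts. The event fields and the threshold of 3 are assumptions.
def repeat_offenders(events, threshold=3):
    counts = {}
    for e in events:
        if e["status"] == "DENIED":
            counts[e["user"]] = counts.get(e["user"], 0) + 1
    return sorted(u for u, n in counts.items() if n >= threshold)

events = (
    [{"user": "alice", "share": r"\\fs1\payroll", "status": "DENIED"}] * 4
    + [{"user": "bob", "share": r"\\fs1\public", "status": "DENIED"}]
    + [{"user": "bob", "share": r"\\fs1\public", "status": "GRANTED"}]
)
print(repeat_offenders(events))  # ['alice']
```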

100 Log Management uses #9 Email trends

Email has become one of the most important communication methods for businesses — for better or worse! Today we look at using logs from an ISP mail service to get a quick idea of overall trends and availability. Hope you enjoy it.

-By Ananth

100 Log Management uses #8 Windows disk space monitoring

Today's tip looks at using logs for monitoring disk usage and trends. Many Windows programs (like SQL Server, for example) count on certain amounts of free space to operate correctly, and in general, when a Windows machine runs out of disk space it often handles the condition in a less than elegant manner. In this example we will see how reporting on free disk space and trends gives a quick and easy early-warning system to keep you out of trouble.
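
The trend part can be a naive linear projection over logged free-space readings: at the current burn rate, how many days until a volume hits its warning floor? A minimal sketch with fabricated numbers:

```python
# Minimal sketch: project days until free space crosses a warning
# floor, assuming one reading per day and a linear trend. All numbers
# are fabricated sample data.
def days_until_floor(free_gb_history, floor_gb=10.0):
    """Return days until the floor is reached, or None if not shrinking."""
    per_day = (free_gb_history[-1] - free_gb_history[0]) / (len(free_gb_history) - 1)
    if per_day >= 0:
        return None
    return max(0.0, (free_gb_history[-1] - floor_gb) / -per_day)

history = [40.0, 37.0, 34.0, 31.0]   # losing 3 GB/day
print(days_until_floor(history))     # 7.0 days until 10 GB remain
```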

100 Log Management uses #7 Windows lockout

A couple of days ago we looked at password resets; today we are going to look at something related – account lockouts. This is something that is relatively easy to check – you'll see many caused by fat fingers, but when you start seeing lots of lockouts, especially admin lockouts, it is something you need to be concerned about.
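
The check described above boils down to counting lockouts per account and flagging any that land on a privileged account. A sketch, where the admin list and event shape are assumptions:

```python
# Hedged sketch: count lockouts per account and flag lockouts on
# accounts treated as admin accounts. The admin set and event layout
# are illustrative assumptions.
ADMIN_ACCOUNTS = {"administrator", "domadmin"}

def lockout_report(events):
    counts, admin_hits = {}, []
    for e in events:
        if e["type"] == "lockout":
            acct = e["account"].lower()
            counts[acct] = counts.get(acct, 0) + 1
            if acct in ADMIN_ACCOUNTS:
                admin_hits.append(acct)
    return counts, admin_hits

events = [
    {"type": "lockout", "account": "jdoe"},
    {"type": "lockout", "account": "Administrator"},
]
counts, admin_hits = lockout_report(events)
print(counts, admin_hits)
```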

[See post to watch Flash video] -Ananth

Learning from Walmart

H. Lee Scott, Jr. is the current CEO of WalMart. On Jan 14, 2009, he reflected on his 9 year tenure as CEO as a guest on the Charlie Rose show.

Certain basic truths, that we all know but bear repeating, were once again emphasized. Here are my top takeaways from that interview:

1) Listen to your customers, listen harder to your critics/opponents, and get external points of view. WalMart gets a lot of negative press and new store locations often generate bitter opposition from some locals. However the majority (who vote with their dollars) would appear to favor the store. WalMart’s top management team who consider themselves decent and fair business people, with an offering that the majority clearly prefers, were unable to understand the opposition. Each side retreated to their trenches and dismissed the other. Scott described how members of the board, with external experience, were able to get Wal-Mart management to listen carefully to what the opposition was saying and with dialog, help mitigate the situation.

2) Focus like a laser on your core competency. Walmart excels at logistics, distribution, store management — the core business of retailing. It is, however, a low-margin business. With its enormous cash reserves, should Wal-Mart go into other areas, e.g. product development, where margins are much higher? While it's tempting, remember "Jack of all trades, master of none"? 111th Congress?

3) Customers will educate themselves before shopping. In the Internet age, expect everybody to be better educated about their choices. This means, if you are fuzzy on your own value proposition and cannot articulate it well on your own product website, then expect to do poorly.

4) In business – get the 80% stuff done quickly. We all know that the first 80% goes quickly; it's the remaining 20% that is hard and gets progressively harder (Zeno's Paradox). After all, more than 80% of code consists of error handling. While that 20% is critical for product development, it's the big 80% done quickly that counts in business (and in government/policy).

The fundamentals are always hard to ingrain – eat in moderation, exercise regularly and all that. Worth reminding ourselves in different settings on a regular basis.


100 Log Management uses #6 Password reset

Today we look at password reset logs. Generally the first thing a hacker does when hijacking an account is to reset the password. Any reset is therefore worth investigating – more so multiple password resets on one account.
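
The "multiple resets on one account" pattern is easy to pull out once the reset events are in hand. A sketch with fabricated sample events:

```python
# Illustrative sketch: list accounts with more than one password reset
# in the examined window, the more suspicious pattern. Sample events
# are fabricated.
def multi_reset_accounts(reset_events):
    counts = {}
    for e in reset_events:
        counts[e["account"]] = counts.get(e["account"], 0) + 1
    return sorted(a for a, n in counts.items() if n > 1)

resets = [{"account": "ceo"}, {"account": "ceo"}, {"account": "intern"}]
print(multi_reset_accounts(resets))  # ['ceo']
```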

-By Ananth

100 Log Management uses #5 Outbound Firewall traffic

A couple of days ago we looked at monitoring incoming firewall traffic. In many cases outbound traffic is as much a risk as incoming. Once hackers penetrate your network they will try to obtain information through spyware and attempt to get this information out. Also, outbound connections often chew up bandwidth — file sharing is a great example of this. We had a customer who could not figure out why his network performance was so degraded — it turned out to be an internal machine acting as a file-sharing server. Looking at the logs uncovered this.
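
Finding that file-sharing machine amounts to summing outbound bytes per internal host and looking at the top talker. A sketch, with an invented session-record layout and numbers:

```python
# Hedged sketch: total outbound bytes per internal host from firewall
# session records to spot a machine moving unusual volumes. Record
# layout and byte counts are fabricated.
def outbound_bytes_by_host(sessions):
    totals = {}
    for s in sessions:
        if s["direction"] == "out":
            totals[s["src"]] = totals.get(s["src"], 0) + s["bytes"]
    return totals

sessions = [
    {"src": "192.168.1.15", "direction": "out", "bytes": 9_500_000},
    {"src": "192.168.1.15", "direction": "out", "bytes": 8_200_000},
    {"src": "192.168.1.22", "direction": "out", "bytes": 40_000},
    {"src": "192.168.1.22", "direction": "in",  "bytes": 900_000},
]
top = max(outbound_bytes_by_host(sessions).items(), key=lambda kv: kv[1])
print(top)  # ('192.168.1.15', 17700000)
```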

By Ananth

100 Log Management uses #4 Solaris BSM SU access failure

Today is a change of platform — we are going to look at how to identify Super User access failures on Solaris BSM systems. It is critical to watch for SU login attempts, since once someone is in at the SU or root level, the keys to the kingdom are in their pocket.
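
On Solaris, su attempts also land in sulog-style records where a '-' marks a failure. The sketch below parses that shape; the field layout is approximated from memory and the entries are fabricated, so treat it as an assumption to verify against your own logs:

```python
# Hedged sketch: extract failed su attempts from sulog-style lines,
# where '-' marks failure. Field positions are an approximation of
# the format, and the sample entries are fabricated.
def failed_su(lines):
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 6 and parts[0] == "SU" and parts[3] == "-":
            out.append(parts[5])   # "fromuser-touser"
    return out

lines = [
    "SU 02/10 14:12 - pts/1 jdoe-root",
    "SU 02/10 14:15 + pts/1 admin-root",
]
print(failed_su(lines))  # ['jdoe-root']
```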

-By Ananth

100 Log Management uses – #3 Antivirus update

Today we are going to look at how you can use logs to ensure that everyone in the enterprise has gotten their automatic Antivirus update. One of the biggest security holes in an enterprise is individuals that don’t keep their machines updated, or turn auto-update off. In this video we will look at how you can quickly identify machines that are not updated to the latest AV definitions.
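
Once the last-reported definition version per machine is extracted from the update logs, the stale machines fall out of a simple comparison. A sketch with made-up hostnames and version strings:

```python
# Hypothetical sketch: compare each machine's last-logged AV definition
# version against the current one. Hostnames and version strings are
# invented for illustration.
CURRENT_DEFS = "5424"

def stale_machines(last_defs):
    return sorted(h for h, v in last_defs.items() if v != CURRENT_DEFS)

last_defs = {"ws01": "5424", "ws02": "5419", "laptop7": "5424"}
print(stale_machines(last_defs))  # ['ws02']
```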

-By Ananth

100 Log Management uses – #2 Active Directory login failures

Yesterday we looked at firewalls, today we are shifting gears and looking at leveraging those logs from Active Directory. Hope you enjoy it.

– By Ananth

100 Log Management uses – #1 Firewall blocks

…and we’re back, with use-case# 1 – Firewall Blocks. In this video, I will talk about why it’s important to not just block undesirable connections but also monitor traffic that has been denied entry into your network.

By Ananth

100 uses of Log Management – Series

Here at Prism we think logs are cool, and that log data can provide valuable intelligence on most aspects of your IT infrastructure – from identifying unusual patterns that indicate security threats, to alerting on changes in configuration data, to detecting potential system downtime issues, to monitoring user activity. Essentially, Log Management is like a Swiss Army knife or even duct tape — it has a thousand and one applications.

Over the next 100 days, as the new administration takes over here in Washington DC, Ananth, the CEO of Prism Microsystems, will present the 100 most critical use-cases of Log Management in a series of videos focusing on real-world scenarios.

Watch this space for more videos, and feel free to rank and comment on your favorite use-cases.

By Ananth

The IT Swiss army knife EventTracker 6.3 and more

Log Management can find answers to every IT-related problem

Why can I say that? Because I think most problems get handled the same way. The first stage is someone getting frustrated with the situation. They then use tools to analyze whatever data is accessible to them. From this analysis, they draw some conclusions about the problem’s answer, and then they act. Basically, finding answers to problems requires the ability to generate intelligence and insight from raw data.

IT-related problems are no different. The only twist is that IT problems are growing in number, size and complexity at a faster rate than the budgets and resources targeted at those problems, even during good economic times. This means a lot of people (from CIOs to CFOs to security and operations managers) are frustrated with this situation. However, they lack a solution designed to analyze raw data and report the intelligence and insight needed to draw conclusions. What they need is a cost-effective way to find answers from the available data.

The case for log management
Given this backdrop, it is fairly straightforward to see the logic behind my article title:
Step 1: Logs are a source of raw data for IT
Step 2: Log management solutions can make it easier to extract intelligence from IT data
Step 3: IT managers can use extracted intelligence to find answers to problems

Logs are a record of what a system is doing minute by minute. Each system log by itself is only mildly interesting (usually only to a technician when troubleshooting a problem). However, the aggregate of all logs contains more treasure than a Nicolas Cage movie. With the right search, query and reporting tools this raw data can turn into detailed understanding of most aspects of your business, from how consumers use your systems to purchase goods, to how the company’s risk profile changes over time, to how bottlenecks slow automated workflows, to identifying unusual patterns that indicate security threats.

The raw data for all of this understanding is already there. It is distributed on every IT asset with a log file because log files often contain electronic traces of interactions between assets and between users and assets. By examining these traces you can see patterns, by understanding patterns you can draw conclusions and plan actions. That is what it means to be proactive. That is what it means to work smarter not harder.

However, turning gold ore (IT logs) into gold treasure (actionable answers) requires the ability to search, query, report on and analyze the vast and restless sea of data generated by the IT assets running your business's operations. With that solution in place, it becomes a matter of applying that ability to generate intelligence to the specific scenario.

The gold coins for IT Operations include answers to questions such as:
• Have there been any unauthorized configuration changes? With this answer staff can act to prevent service outages, data leaks, SLA penalties and compliance issues.
• How many VMs are deployed right now and who owns them? With this answer staff can act to increase resource utilization and minimize capital costs.
• How is the new load-balancing policy actually allocating workloads? With this answer staff can act to ensure capacity is allocated according to business priorities.

For security teams, the treasure chest contains real-time gems and forensic jewels. Since enterprise environments are getting more complex and more dynamic, it is difficult to rapidly investigate cause and effect during a crisis without automated correlation of configuration changes and events that are logged by systems, applications, and network infrastructure. Forensic analysis of IT data allows staff to test potential answers (such as changing an operational policy, adding a new configuration check, or implementing a new correlation rule) to the "how do we prevent this from happening again" question.

Compliance officers can swim away with multiple gold medals because most analysts believe more regulations are coming, even if their computing environment remains relatively unchanged over the next 18 months. These new regulations are likely to involve analyzing and reporting the same raw IT data different ways to answer questions about:
• The integrity of systems, applications and processes,
• The ability to differentiate between good and bad interactions between systems and between employees and systems,
• The process for preventing and mitigating unauthorized changes, etc.

The effort involved in answering those management, security and governance questions could be days' worth of remotely accessing systems and copying data into spreadsheets – or it could be a mouse-click to view a dashboard or report generated by a log management solution. Similarly, each group could purchase separate solutions to generate their intelligence treasure – or could use an enterprise-wide solution flexible enough to address their critical needs in each area. It's up to the company to decide by focusing on their needs.

Get started by focusing on critical needs
Financial crises tend to cut through the hazy grind of daily business operations and to focus people on critical needs. This global credit crunch is no different. For business executives, the two critical needs are:

  • protecting what they have by keeping service performance stable while lowering operational costs; and
  • adapting to unexpected situations and problems by increasing business agility while lowering risk management costs.

For business technologists, the two critical needs are meeting those business demands and holding onto their jobs.

The margin for error is very slim. Businesses that allow service performance to disintegrate during tough times or take risky actions to deal with market fluctuations, unexpected service problems or malicious attacks rarely make it through economic downturns in any shape to compete effectively in the future. Typically, survivor companies do not cut costs blindly. Instead they use tough times as a mandate for projects that dramatically improve the competitive value of their staff’s daily activities.

There is only one way to do that when your business services and competitiveness are IT-dependent – skyrocket productivity with a proactive approach to managing, securing and governing technology assets delivering business services and agility. Since there can be hundreds of technology assets per business employee, the only way operations, security and compliance staff can become more proactive is to get better intelligence, knowledge and insight.

This brings us right back to where we started. Having better intelligence is a key part of dealing with every IT-related issue and every additional demand that business executives challenge IT to meet without increasing its staff. Therefore, it is time to get IT intelligence (aka log management) solutions off of the wish list and into the hands of the staff that need it.

Jasmine Noel is founder and partner of Ptak, Noel & Associates.  With more than 10 years experience in helping clients understand how adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion.  Send any comments, questions or rants to jnoel@ptaknoelassociates.com

Industry News

Lock down that data
Another example of the insider threat to personally identifiable information has surfaced. In December, an employee in the human resources department of the Library of Congress was charged with conspiring to commit wire fraud for a scheme in which he stole information on at least 10 employees from library databases.

Did you know? EventTracker not only enables insider threat detection, but also provides a complete snapshot of a user's activity including application usage, printer activity, idle time, software install/uninstall, failed and successful interactive/non-interactive logins, changes in group policy, deleted files, websites visited, USB activity and more to deter unauthorized access.

In the Vault
When it comes to protecting financial info, IT security professionals can never rest on their laurels. These organizations must adopt new technologies, ramp up online banking options, and deal with employee turnover. That’s why these firms continually need to review the security measures in place.

Did you know? EventTracker provides you with scheduled or on-demand reviews of security measures allowing you to proactively address potential weaknesses in security controls, while reacting to security incidents.

EventTracker melds Smart Search with Advanced SIEM capabilities
Best-of-both-worlds solution combines free-form, intuitive searching with intelligent analytics, correlation, mining and reporting in one turn-key package

What's new in EventTracker 6.3?
Free form Google-like search, user profiling and more… Watch video for detailed information.

Extreme logging or Too Much of a Good Thing

Strict interpretations of compliance policy standards can lead you up the creek without a paddle. Consider two examples:

  1. From PCI-DSS comes the prescription to “Track & monitor all access to network resources and cardholder data”. Extreme logging is when you decide this means a db audit log larger than the db itself plus a keylogger to log “all” access.
  2. From HIPAA 164.316(b)(2) comes the Security Rule prescription to “Retain … for 6 years from the date of its creation or the date when it last was in effect, whichever is later.” Sounds like a boon for disk vendors and a nightmare for providers.

Before you assault your hair follicles, consider:
1) In clarification, Visa explains “The intent of these logging requirements is twofold: a) logs, when properly implemented and reviewed, are a widely accepted control to detect unauthorized access, and b) adequate logs provide good forensic evidence in the event of a compromise. It is not necessary to log all application access to cardholder data if the following is true (and verified by assessors):
– Applications that provide access to cardholder data do so only after making sure the users are authorized
– Such access is authenticated via requirements 7.1 and 7.2, with user IDs set up in accordance with requirement 8, and
– Application logs exist to provide evidence in the event of a compromise.

2) The Office of the Secretary of HHS waffles when asked about retaining system logs – this can reasonably be interpreted to mean the six-year standard need not be taken literally for all system and network logs.


Security – A casualty in the Sovereignty vs. Efficiency tradeoff

Cloud computing has been described as a trade off between sovereignty and efficiency. Where is security (aka Risk Transfer) in this debate?

Chris Hoff notes in his post that yesterday's SaaS providers (Monster, Salesforce) are now styled as cloud computing providers.

CIOs, under increasing cost pressure, may begin to accept the efficiency argument that cloud vendors have economies of scale in both the acquisition and operations of the data center.

But hold up…

To what extent is the risk transferred when you move data to the cloud? To a very limited extent, at most to the SLA. This is similar to the debate where one claims compliance (Hannaford, NYC and now sadly Mumbai) but attacks take place anyway, causing great damage. Would an SLA save the Manager in such cases? Unlikely.

In any case, the generic cloud vendor does not understand your assets or your business. At most, they can understand threats in general terms. They will no doubt commit to an SLA, but SLAs usually refer to availability, not security.

Thus far, general purpose, low cost utility or “cloud” infrastructure (such as Azure or EC2), or SaaS vendors (salesforce.com) do not have very sophisticated security features built in.

So as you ponder the Sovereignty v/s Efficiency tradeoff, spare a thought for security.

– Ananth

Auditing web 2.0; 2009 security predictions and more

Auditing Web 2.0

Don’t look now, but the Web 2.0 wave is crashing onto corporate beaches everywhere.  Startups, software vendors, and search engine powerhouses are all providing online accounts and services for users to create wikis, blogs, etc. for collaborating and sharing corporate data, often without the knowledge or involvement of IT or in-house legal counsel.  User adoption is growing in leaps and bounds because it is infinitely easier to fill out an online form than it is for IT operations to purchase and install corporate solutions like SharePoint.

What is interesting about these online Web 2.0 services (I guess the hot new name for this is Cloud Computing) is the level of blind faith users have that these solutions can ward off attacks and that their use of these solutions is therefore secure by extension.  Somehow people believe that because the solution provider has some security features, how they use the solution doesn't matter – it will be safe.

This is worrying for several reasons.  Folks that implement security solutions for a living know that shoring up vulnerabilities is a task that is never done (kind of like renovating my house, but that is another story). For example “Web 2.0 – A Playground for the Good Old Mistakes” makes the point that “security is thought to be ‘built-in’ which is only partially true” and “the good old mistakes are still there, just playing in a bigger playground with new toys.”  In other words, it takes a lot of complex technology working together to deliver collaboration that is both universally available and universally easy to use, and it is hard to completely bake security into complex, interacting technologies. This means that much of the discussion about auditing Web 2.0 centers on solution-level security vulnerabilities. Obviously, vulnerabilities such as cross-site scripting have to be addressed by the solution providers, because users want to focus on using not on securing the platform.  However, this is not the whole scary story.

Another part of it is that lots of people unwittingly use secured systems in ways that jeopardize sensitive information. A product development wiki for employees is great; it is not so great when someone can still access that wiki a year after being fired. It jeopardizes the company’s competitive future because its employee community is using the Web 2.0 solution in an insecure way.

I’m not alone in thinking about this. Steve Lafferty, Prism Microsystems’ VP, recently blogged: “When people think about cloud computing, they tend to equate ‘it is in the cloud’ to ‘I have no responsibility’, and when critical data and apps migrate to the cloud that is not going to be acceptable.” The potential for exposure of sensitive information or theft of intellectual property runs high when people abdicate responsibility.

But Jasmine, you’re an analyst who covers IT operations and management, not security and information lifecycle management, so why do you care? Well, I care because IT operations is sitting on a gold mine of log data that can let people collaborate while unobtrusively ensuring that corporate policies are upheld. What I’m interested in is making sure that IT operations gets the tools they need to dig the gold out of the mountains of data (without killing the rainforest, or spotted owls, or polar bears, or whatever else can be endangered by strip-mining :-D). It seems to me that the best way to do that is to get smarter about what operational data should be collected and which log analyses are completed automatically.

Now, I’m not really a big fan of President Ronald Reagan, but there are a few things he said that I agree with 150 percent, and one of them is “Trust, but verify.” I think that IT operations can be instrumental in making the verify part less intrusive to users – remember, users want to focus on using, not on security and policy management. But this means that IT needs to get involved with the users who want to set up these cloudy Web 2.0 collaboration accounts, for example:

  • Get corporate accounts with some of the more popular service providers and demand that they link them to IT’s existing corporate authentication systems (many of them do this already) so that you don’t have to maintain user IDs in multiple places. Operations will also have to work with corporate information lifecycle people to put policies in place so that IT can streamline user provisioning and deprovisioning from these services. Remember, users will do their own thing if it takes weeks to set up or take down a community.
  • Educate users that corporate accounts exist. This is really important for large companies, where many times people start using private accounts because they don’t know a corporate account exists. Users also must be educated about why corporate policies are important. This means explaining through examples why the policies you’re setting up are really about covering users’ behinds. For example, if an employee uses a personal account to set up a wiki, that person has complete and total control over the wiki’s users, their permissions and all the information put into the wiki. That person retains complete control even after they leave the company – why? Because it is a personal account. Examples like this help corporate users understand why it’s best to use the corporate accounts for work-related collaboration.
  • Collect and analyze usage information. Collect not only login/logout info but also when documents are created and edited, and by whom. This builds the data mountain for mining. Automated log analysis that combs through the usage information helps IT operations spot how behaviors are changing. Those changes are the early warning signs that something may not be right. For example, the automated analysis will tell you that people stopped using a wiki: maybe there is a technical problem, maybe the project is over, maybe the leader left the company, or maybe some other bad thing is happening. IT operations’ job is to eliminate the technical-problem option and pass the alert on to others to determine which of the other maybes is right and what to do about it.
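To make the last bullet concrete, here is a minimal sketch of such an automated drop-off analysis in Python. The event records, wiki name and field layout are invented for illustration; a real deployment would read them from the collected usage logs.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical usage events collected from a wiki: (timestamp, wiki, user, action)
events = [
    (datetime(2009, 1, 5), "product-dev", "alice", "edit"),
    (datetime(2009, 1, 6), "product-dev", "bob", "edit"),
    (datetime(2009, 1, 7), "product-dev", "alice", "edit"),
    (datetime(2009, 2, 2), "product-dev", "alice", "login"),
]

def weekly_activity(events, wiki):
    """Count events per ISO (year, week) for one wiki."""
    counts = defaultdict(int)
    for ts, w, user, action in events:
        if w == wiki:
            counts[ts.isocalendar()[:2]] += 1
    return counts

def flag_dropoff(counts, threshold=0.5):
    """Flag the wiki if the most recent week's activity fell below
    `threshold` times the average of the earlier weeks."""
    weeks = sorted(counts)
    if len(weeks) < 2:
        return False
    earlier = [counts[w] for w in weeks[:-1]]
    baseline = sum(earlier) / len(earlier)
    return counts[weeks[-1]] < threshold * baseline

counts = weekly_activity(events, "product-dev")
print(flag_dropoff(counts))  # → True: activity fell from 3 events/week to 1
```

The flagged wiki becomes the alert that IT operations triages (technical problem?) before passing it on for the business "maybes".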

I think the key thing to keep in mind is that most people don’t mind having a safety net, so long as it doesn’t get in the way of their high-flying acrobatics. Some well-designed log analytics can help companies deliver a safety net while letting their employees perform dazzling feats of coordination that would make the Cirque du Soleil people jealous.

Jasmine Noel is founder and partner of Ptak, Noel & Associates. With more than 10 years’ experience in helping clients understand how the adoption of new technologies affects IT management, she tries to bring pragmatism (and hopefully some humor) to the business-IT alignment discussion. Send any comments, questions or rants to jnoel@ptaknoelassociates.com

Industry News

Pentagon bans Thumb Drives

Did you know? Instead of banning USB drives, EventTracker provides a better alternative for managing external storage devices

IT Security – Expect more misery in 2009

Did you know? EventTracker protects your data where it resides, instead of just monitoring the perimeter, to ensure defense in-depth from all kinds of attacks, emerging or traditional.

Prism Microsystems named Finalist in SC Magazine Award program 2009

SIEM: What are you searching for?

Search engines are now well established as a vital feature of IT, and applications continue to evolve in breadth and depth at dizzying rates. It is tempting to try to reduce any and all problems to one of query construction against an index. Can Security Information and Event Management (SIEM) be force-fitted into the search paradigm?

The answer depends on what you are looking to do and your skill with query construction.

If you are an expert with detailed knowledge of log formats and content, you may find it easy to construct a suitable query. When launched against a suitably indexed log collection, results can be gratifyingly fast and accurate. This, however, is a limited use case in the SIEM universe of use cases. This model usually applies when administrators are seeking to resolve operational problems.

Security analysts, however, are usually searching for behavior, not simple text matches. While this is the holy grail of search engines, attempts from Excite (1996) to Accoona (RIP Oct 2008) never made the cut. In the SIEM world, the context problem is compounded by myriad formats and the lack of any standard to assign meaning to logs, even within one vendor’s products and versions of a product.

All is not lost: SIEM vendors do offer solutions by way of pre-packaged reports, and the best ones offer users the ability to analyze behavior within a certain context (as opposed to performing a simple text search). By way of example – show me all failed logins after 6PM; from this set, show only those that failed on SERVER57; from this set, show me those for User4; now go back and show me all User4 activity after 6PM on all machines.
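That drill-down is really successive filtering over structured, normalized events rather than one free-text query. A minimal Python sketch of the idea, using invented records and field names (no particular SIEM’s schema):

```python
from datetime import time

# Hypothetical normalized login records - a SIEM assigns meaning to raw
# logs so fields like these exist regardless of the source format.
logins = [
    {"user": "User4", "host": "SERVER57", "when": time(18, 30), "result": "failed"},
    {"user": "User2", "host": "SERVER57", "when": time(19, 5),  "result": "failed"},
    {"user": "User4", "host": "SERVER12", "when": time(20, 0),  "result": "success"},
    {"user": "User4", "host": "SERVER57", "when": time(9, 0),   "result": "failed"},
]

# Step 1: all activity after 6 PM
after_6pm = [e for e in logins if e["when"] >= time(18, 0)]
# Step 2: of those, only the failures
failed = [e for e in after_6pm if e["result"] == "failed"]
# Step 3: of those, only SERVER57
on_server57 = [e for e in failed if e["host"] == "SERVER57"]
# Step 4: of those, only User4
user4 = [e for e in on_server57 if e["user"] == "User4"]

# Pivot: go back and show all User4 activity after 6 PM on all machines
user4_evening = [e for e in after_6pm if e["user"] == "User4"]

print(len(user4), len(user4_evening))  # → 1 2
```

Each step narrows or pivots on the previous result set, which is precisely what a flat text index cannot express in one query.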

Don’t try this with a “simple” text search engine… or, like John Wayne in The Searchers, you may become bitter and middle-aged.

– Ananth

Cutting through SIEM/Log Management vendor hype

While there is little doubt that SIEM solutions are critical for compliance, security monitoring or IT optimization, it is getting harder for buyers to find the right product for their needs. The reason for this is twofold: first, there are a number of products available, and vendors have done a great job of making their products sound roughly the same in core features such as correlation, reporting, collection, etc.; and second, vendors are too busy differentiating on shiny features that in many cases have little or nothing to do with core functionality. This is not surprising. It is easier to spin a shiny feature than to slug it out on whose product actually meets core requirements.

SIEM solutions, in reality, are optimized for different use-cases and one size never fits all. The good news is that with the number of potential solutions to choose from, if you do your homework, you will find a product that meets your requirements. So how do you cut through all the vendor claims and hype and select the right solution for your environment and needs?

Read the full article for the 7 steps for cutting through vendor hype

Industry News

The lowdown on zero-day attacks 
By definition, zero-day attacks always beat anti-virus vigilantes to the punch. That’s because these destructive viruses are able to exploit unknown, undisclosed or newly discovered computer application vulnerabilities before a software developer is able to release a patch to the public — which can render anti-virus programs practically ineffective.

Did you know? EventTracker detects zero-day attacks with its integrated Change Management module

Extortionists target major pharmacy processor
One of the nation’s largest processors of pharmacy prescriptions said that extortionists are threatening to disclose personal and medical information on millions of Americans if the company fails to meet payment demands.

Did you know? EventTracker safeguards your critical data whether it is at rest, in motion or in use and protects you from costly and embarrassing breaches.

3 reasons why employees don’t follow security rules
A recent survey finds employees continue to ignore security policies. (Surprise, surprise.) Here’s a reminder about what often is missing in organizations that tempts workers to walk the wrong side of security law.

Did you know? EventTracker tracks all employee activity, including user rights and activities, file and object access, and logons/logoffs, to ensure that corporate and security policies are being followed

Will SIEM and Log Management usage change with the economic slowdown?

When Wall Street really began to implode a couple of weeks ago, one of the remarkable side effects of the plunge was a huge increase of download activity in all items related to ROI on the Prism website. A sign of the times: ROI always becomes more important in times of tight budgets, and our prospects were seeing the lean times coming. So what does the likelihood of budget freezes or worse mean for how SIEM/Log Management is used, or how it is justified in the enterprise?

Compliance is and will remain the great budget enabler of SIEM and Log Management, but often a compliance project can be done with a far more minimal deployment and still meet the requirement. There is, however, enormous tangible and measurable benefit in Log Management beyond the compliance use case that has been largely ignored.

SIEM/Log Management has, for the most part, been seen (and positioned by us vendors) as a compliance solution with security benefits, or in some cases a security solution that does compliance. Both of these have an ROI that is hard to measure, as it is based on a company’s tolerance for risk. A lot of SIEM functionality, and the log management areas in particular, is also enormously effective at increasing operational efficiencies – and provides clear, measurable, fast and hard ROI. Very simply, compliance will keep you out of jail and security reduces risk, but by using SIEM products for operations you will save hard dollars on administrator costs and reduce system downtime, which in turn increases productivity that directly hits the bottom line. Plus, you still get the compliance and security effectively for free. A year ago, when we used to show these operational features to prospects (mostly security personnel), they were greeted 9 times out of 10 with a polite yawn. Not anymore.

We believe this new cost-conscious buying behavior will also drive broader rather than deeper requirements in many mid-tier businesses. It is the “can I get 90% of my requirements, and 100% of the mandatory ones, in several areas – and is that better than 110% in a single area?” discussion. Recently Prism added some enhanced USB device monitoring capability to EventTracker. While it goes beyond what typical SIEM vendors provide, in that we track files written to and deleted from the USB drive in real-time, I would not consider it to be as good as a best-of-breed DLP provider. But for most people it gets them where they need to be, and it is included in EventTracker at no additional cost. It is amazing the level of interest this functionality receives today from prospects, while features with a dubious ROI, like many correlation use cases, draw correspondingly less interest. Interesting times.

-Posted by Steve Lafferty

The cloud is clear as mud

The Economist opines that the world is flirting with recession and IT may suffer, which in turn will hasten the move to “cloud computing” – pithily described as “a trade-off between sovereignty and efficiency”.

Computing as a borderless utility? Whereas most privacy laws assume data resides in one place, the cloud makes data seem present everywhere and nowhere.

In a recent post Steve differentiated between security OF the cloud and security IN the cloud. This led us to an analysis of cloud computing as it is currently offered by Amazon AWS, Google Apps and Zoho.

From a risk perspective, security of content IN the cloud is essentially considered your problem by Amazon whereas Google and Zoho say “trust in me, just in me”. When pressed, Google says “we do not recommend Google Apps for content subject to compliance regulations” but is apparently working to assuage concerns about access control.

However, moving your data to the cloud does not absolve you of responsibility for who accessed it and for what purpose — the main concern of auditors everywhere.

How now?

At the present time, neither Google nor Zoho makes any audit trail available to subscribers, while at Amazon it’s your problem. We think widespread adoption by the business community (and what of the federal government?) will require significant transparency to provide visibility. This is also true for popular hosted applications like Intuit Quickbooks and Salesforce.

As Alex notes “…in order to gain that visibility, our insight into Cloud Risk Management must include significant provisions for understanding a joint ability to Prevent/Detect/Respond as well as provisions for managing the risk that one of the participants won’t provide that visibility or ability via SLA’s and penalties.”

Clear as mud.


Some Ruminations on the Security Impact of Software As A Service

In a recent post I talked a little about the security and compliance issues facing companies that adopt cloud-based SaaS for any mission-critical function. I referred to this as security OF the cloud to differentiate it from a cloud-based security offering or security IN the cloud. This is going to pose a major change in the security industry if it takes off.

Take a typical small business, “SmallCo”, as an example: they depend on a combination of Quickbooks and an accounting firm for their financial processes. For all practical purposes, SmallCo outsources the entire accounting function. They use a hosting company to host Quickbooks for a monthly fee, and their external CPA, internal management and accounting staff access the application for data processing. Very easy to manage, no upfront investment, no servers to maintain – all the usual reasons why a SaaS model is so appealing.

One can easily argue that the crown jewels of SmallCo’s entire business are resident in that hosted solution. SmallCo began to question whether this critical data was secure from being hacked or stolen. Would SmallCo be compliant if they were obligated to follow a compliance standard? Is it the role of the hosting provider to ensure security and compliance? To all of those questions there was, and is, no clear-cut answer. SmallCo has access to the application and to whatever audit capability is supported in the Quickbooks product (which is not a great deal), and there is no way to collect that audit and usage data other than to manually run a report. When SmallCo began, this did not seem important, but as SmallCo grew, so did their exposure.

Salesforce, another poster child for SaaS, is much the same. I read a while back that they were going to add the ability to monitor changes in some of their database fields in their Winter 2008 release. But there appears to be nothing for user-level auditing, or even admin auditing (of your staff, much less theirs). A trusted user can steal an entire customer list and not even have to be in the office to do it. The best DLP technology will not help you, as the data can be accessed and exported through any web browser on any machine. Having used Salesforce in previous companies, I can personally attest that it is a fine CRM system – cost-effective, powerful and well-designed. But you have to maintain a completely separate access control list, and you have no real ability to monitor what is accessed by whom for audit purposes. For a prospect with privacy concerns, is it really a viable, secure solution?

Cloud-based computing changes the entire security paradigm. Perimeter defense is the first step of a defense in depth to protect service availability and corporate data, but what happens when there is no resident data to defend? In fact, when a number of services live in the cloud, is event management going to be viable? Will the rules be the same when you are correlating events from different services in the cloud?

So here is the challenge, I believe: as more and more mission-critical processes move to the cloud, SaaS suppliers are going to have to provide log data in a real-time, straightforward manner, probably for their admins as well as their customers’ personnel. In fact, since there is only a browser and a login – no firewall, network or operating-system-level security to breach – auditing would have to be very, very robust. With all these cloud services, is it feasible that an auditor will accept 50 reports from 50 providers and pass the company under audit? Maybe, but someone – either the end user or an MSSP – has to be responsible for monitoring for security and compliance, and unless the application and data are under the control of end users, they will be unable to do so.

So if I were an application provider like Salesforce, I would be thinking really hard about being a good citizen in a cloud-based world. That means providing, as a first step, real-time audit records for at least user log-on and log-off, log-on failures, and a complete audit record of all data extracts, as well as a method to push the events out in real-time. I would likely do that before I worried too much about auditing fields in the database.
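As a sketch of what such good citizenship might look like, here is a minimal audit record and push mechanism in Python. The field names, action values and functions are hypothetical illustrations, not any vendor’s actual API:

```python
import json
from datetime import datetime, timezone

def audit_event(user, action, detail=None):
    """Build a minimal audit record a SaaS provider could emit in
    real-time for each log-on, log-off, failure, or data extract.
    The schema here is illustrative, not any vendor's format."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,   # e.g. "logon", "logoff", "logon_failed", "export"
        "detail": detail or {},
    }

def push(event, emit=print):
    """Push the event toward the customer's log collector; here we just
    serialize to JSON and hand it to a pluggable emitter function."""
    emit(json.dumps(event))

# A data extract - exactly the kind of event a customer's SIEM would
# want in real-time rather than in a manually run monthly report.
push(audit_event("alice@smallco.example", "export",
                 {"object": "customer_list", "rows": 1842}))
```

The point of the pluggable emitter is that the same record could go to a customer’s syslog collector, a message queue, or an HTTPS endpoint without the provider caring which.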

Interesting times.

Steve Lafferty