Archive

Compliance: Happy Holidays from EventTracker

Looking Back on the Forecast of IT Trends and Comments for 2012

“The beginning of a new year marks a time of reflection on the past and anticipation of the future. The result for analysts, pundits and authors is a near irresistible urge to identify important trends in their areas of expertise…”  (from our January newsletter)

We made a lot of predictions this past year and now it’s time to review them and assess our accuracy.

Prediction: Customers buy solutions, not technologies. The best and most successful vendors recognize and respond to this demand for comprehensive solutions that meet their customers' expectations. The emergence of affordable, fully integrated, modular and comprehensive solutions that address identifiable business and operational problems out-of-the-box will continue, and the market will become more competitive as more intelligence and power are embedded in IT solutions.

Year End Result: More and more vendors with rising revenues are creating and presenting their products as integrated solutions to recognizable organizational and even market-segment problems. This holds true for business as well as IT-specific target customers, and it applies to vendors ranging from giants such as IBM to firms such as EventTracker.

Prediction: Private, public and hybrid Clouds continue to grow in number and application, spreading across all market segments. Consumers of Cloud services will continue to be even more selective and careful as they choose their providers/suppliers/partners. High on their list will be concerns for stability, security and interoperability… However, a combination of improved architectures and customer interest in achieving very real Cloud financial, operational and competitive benefits will maintain adoption rates.

Year End Result: Interest in the cloud is and remains high – so high that many are "cloud-wrapping," i.e. hyping and adding the name 'cloud' to clearly non-cloud products. Despite some well-publicized failures, public clouds remain popular as interest grows and the cost of cloud implementation decreases. HOWEVER, the cloud is no panacea, and the vision and expertise of your cloud partner is critical to realizing the payback – choose carefully!

Prediction:  Standards and reference architectures will become more important as Clouds (public, private, and hybrid) proliferate. As business and IT consumers pursue the potential benefits of Cloud/IaaS/ PaaS/ SaaS, etc. it is becoming increasingly obvious that the link between applications/services and the underlying infrastructure must be broken. OASIS [1]-sponsored Topology and Orchestration Specification for Cloud Applications (TOSCA) will do just that.

Year End Result: Clouds are one of the reasons that consumer interest in standards groups is surging – the Cloud Standards Customer Council [2] now counts over 378 customer members. This is motivating vendors to become more active in the technology standards groups, where membership and delivery are accelerating (OpenStack has over 850 members). One more measure of the success is that more vendors have joined IBM in aggressively integrating and marketing messages about their activities in standards groups!

Prediction: Use of sophisticated analytics as a business and competitive tool spreads far and wide. The application of analytics to data to solve tough business and operational problems will accelerate as vendors compete to make sophisticated analytics engines easier to access and use, more flexible in application, and the results easier to understand and implement. IT has provided mountains of data as well as the ability to collect and process big streams of live data; combined with concentrated efforts by vendors to wrap accessible user interfaces around the analytics, this will provide access to these tools to a much wider audience.

Year End Result: The hype continues. Delivery to market of easily accessible and usable analytic solutions isn't what I expected, but things are improving as solution providers and consumers recognize the power of dynamic visualization of data to communicate an impactful message.

Prediction: Increasingly integrated, intelligent, real-time end-to-end management solutions enable high-end, high-value services. End-to-end monitoring and management support proactive action to assure a consistent, high-quality end-user experience… The primary goal is prediction to avoid problems. Identifying correlated events can be as effective as, or even more effective than, recognizing cause in providing an early warning. The fact is that while knowledge of causation is necessary for repair, both correlation and causation work for predictive problem avoidance.

Year End Result: Okay, so it's taking a bit longer for this to become attention grabbing; or, such capabilities are being embedded in, and overshadowed by, the arrival of intelligent, highly integrated systems that are workload aware and managed to optimize delivery.

Prediction: APM (Application Performance Management) converges on BPM (Business Process Management). The definition of APM is expanding to include a focus on end-user-to-infrastructure performance optimization as a prime motivator for corrective action. Business managers care about infrastructure performance only to the extent it negatively impacts the service experience… Enhanced real-time predictive analytics are specifically used to improve the user's interactive experience by more quickly alerting IT staff to infrastructure behaviors that can disrupt service delivery.

Year End Result: With the emphasis on optimization of service performance, service delivery and dynamic infrastructure – as well as the increasing number of performance management solutions that allow business policies to be integrated for control – GOOD CALL.

Prediction: The impact of the consumerization of IT will continue to become more significant. Consumers of services are increasingly intolerant of making any concessions to the idiosyncrasies of their access devices (iPad, iPod, Smartphone, Nook, etc.)… Technology will increasingly and automatically detect, adapt to and serve the user.

Year End Result: The consumer ethos and experience dominate not only product design, development and delivery but are fundamentally altering the work environment – at least that's what we're told repeatedly by pundits, marketing executives, vendors, etc., and it is actually happening.

Prediction: Virtualization acts as a 'gateway' step to the Cloud and a fully 'service' infrastructure. Virtualization will continue to be subsumed by Cloud. Virtualization is now recognized as an enabling technology and necessary building block for Cloud implementations. It is simply the first step toward achieving a truly adaptive infrastructure that operates with the flexibility, reliability and robustness to respond to the evolving and changing needs of the business and the consumer of IT services.

Year End Result: Smart (and successful) vendors have jumped on this opportunity to provide solutions that accelerate and facilitate the movement from virtualized to Cloud environments – systems, services and software have been retooled, created and delivered in support of these efforts. The caveat, which we've discussed here and on our website [3], is the need to know your supplier AND understand your needs.

Have a Happy Holiday Season!

[1] www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca

Five Leadership Lessons from Simpson-Bowles

In January 2010 the U.S. Senate was locked in a sharp debate about the country's debt and deficit crisis. Unable to agree on a course of action, some Senators proposed the creation of a fiscal commission that would send Congress a proposal to address the problem with no possibility of amendments. It was chaired by former Senator Alan Simpson and former White House chief of staff Erskine Bowles.

Darrell West and Ashley Gabriele of Brookings examined the leadership lessons in this article. I was struck by how some of the lessons apply to the SIEM problem.

1) Stop Fantasizing About Easy Fixes

Cutting waste and fraud is not sufficient to address long-term debt and deficit issues. To think that we can avoid difficult policy choices simply by getting rid of wasteful spending is a fantasy.   It’s also tempting to think that the next Cisco firewall, Microsoft OS or magic box will solve all security issues; that the hard work of reviewing logs, changes and assessing configuration will not be needed. It’s high time to stop fantasizing about such things.

2) Facts Are Informative

Senator Daniel Patrick Moynihan famously remarked that “everyone is entitled to his own opinion, but not to his own facts.” This insight often is lost in Washington D.C. where leaders invoke “facts” on a selective or misleading basis. The Verizon Data Breach report has repeatedly shown that attacks are not highly difficult, that most breaches took weeks or more to be discovered and that almost all were avoidable through simple controls.   We can’t get away from it — looking at logs is basic and effective.

3) Compromise Is Not a Dirty Word

One of the most challenging aspects of the contemporary political situation is how bargaining, compromise, and negotiation have become dirty words. Do you have this problem in your Enterprise? Between the Security and Compliance teams? Between the Windows and Unix teams? Between the Network and Host teams? Is it preventing you from evaluating and agreeing on a common solution? If yes, this lesson is for you — compromise is not a dirty word.

4) Security and Compliance Have Credibility in Different Areas

On fiscal issues, Democrats have credibility on entitlement reform because of their party’s longstanding advocacy on behalf of Social Security, Medicare, and Medicaid. Meanwhile, Republicans have credibility on defense issues and revenue enhancement because of their party’s history of defending the military and fighting revenue increases. In our world, the Compliance team has credibility on regular log review and coverage of critical systems, while the Security team has credibility on identifying obvious and subtle threats (out-of-ordinary behavior). Different areas, all good.

5) It’s Relationships, Stupid!

Commission leaders found that private and confidential discussions and trust-building exercises were important to achieving the final result. They felt that while public access and a free press were essential to openness and transparency, some meetings and most discussions had to be held behind closed doors. Empower the evaluation team to have frank and open discussion with all stakeholders — including those from Security, Compliance, Operations and Management. Such a consensus built in advance leads to a successful IT project.

Top 5 Security Threats of All Time

The newspapers are full of stories of the latest attack. Vendors then rush to put out marketing statements glorifying themselves for having had a solution to the problem all along (if only you had their product/service), and the beat goes on.

Pause for a moment and compare this to health scares. The top 10 scares according to ABC News include Swine Flu (H1N1), BPA, Lead paint on toys from China, Bird Flu (H5N1) and so on.   They are, no doubt, scary monsters but did you know that the common cold causes 22 million school days to be lost in the USA alone?

In other words, you are better off enforcing basic discipline to prevent days lost from common infections than stockpiling exotic vaccines. The same is true in IT security. Here then, are the top 5 attack vectors of all time. Needless to say these are not particularly hard to execute, and are most often successful simply because basic precautions are not in place or enforced. The Verizon Data Breach Report demonstrates this year in and year out.

1. Information theft and leakage

Personally Identifiable Information (PII) data stolen from unsecured storage is rampant. The Federal Trade Commission says 21% of complaints are related to identity theft, accounting for 1.3M cases in 2009/10 in the USA. The 2012 Verizon DBIR shows 855 incidents and 174M compromised records.

Lesson learned: Implement recommendations like SANS CAG or PCI-DSS.

2. Brute force attack

Hackers leverage cheap computing power and pervasive broadband connectivity to breach security. This is a low-cost, low-tech attack that can be automated remotely. It can be easily detected and defended against, but that requires monitoring and eyes on the logs. It tends to be successful because monitoring is absent.

Lesson learned: Monitor logs from firewalls and network devices in real time. Set up alerts that are reviewed by staff and acted upon as needed. If this is too time consuming, then consider a service like SIEM Simplified.
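
For illustration only, and not a description of any particular product's detection logic: a minimal sketch of the threshold alert this lesson implies, assuming failed-login events have already been parsed out of firewall or authentication logs into (timestamp, source IP) pairs. The window and threshold values are placeholders.

  # Minimal sketch: flag source IPs that generate many failed logins
  # within a short window. Assumes events are (timestamp, source_ip)
  # tuples already parsed from firewall or authentication logs.
  from collections import defaultdict, deque
  from datetime import datetime, timedelta

  WINDOW = timedelta(minutes=5)   # assumed review window
  THRESHOLD = 20                  # assumed failed-login limit

  def detect_brute_force(failed_logins):
      """Yield (source_ip, count) whenever an IP reaches THRESHOLD
      failed logins inside any WINDOW-long span."""
      recent = defaultdict(deque)  # source_ip -> timestamps still in window
      for ts, ip in sorted(failed_logins):
          q = recent[ip]
          q.append(ts)
          while q and ts - q[0] > WINDOW:   # drop events older than WINDOW
              q.popleft()
          if len(q) >= THRESHOLD:
              yield ip, len(q)

  # Example with fabricated sample data (203.0.113.0/24 is a documentation range):
  now = datetime.now()
  sample = [(now + timedelta(seconds=i), "203.0.113.7") for i in range(25)]
  for ip, count in detect_brute_force(sample):
      print(f"ALERT: {count} failed logins from {ip} within 5 minutes")
      break

In practice the alert would feed a console or ticketing system rather than print to the screen, and the events would come from the log collector; the point is simply that the rule itself is not complicated, and the discipline of watching its output is what matters.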

3. Insider breach

Staff on the inside are often privy to a large amount of data and can cause much greater damage. The Wikileaks case is the poster child for this type of attack.

4. Process and Procedure failures

It is often the case that in the normal course of business, established processes and procedures are ignored, and unfortunate coincidences then cause problems. Examples include e-mailing interim work products to personal accounts, taking work home on USB sticks that are then lost, and mailing CD-ROMs with source code that are then lost, etc.

Lesson learned: Reinforce policies and procedures for all employees on a regular basis. Many US Government agencies require annual completion of a Computer Security and Assessment Test.   Many commercial banks remind users via message boxes in the login screen.

5. Operating failures

This includes oops moments, such as backing up data to the wrong server and sending backup data off-site where it can be restored by unauthorized persons.

Lesson learned: Review procedures and policies for gaps. An external auditor can be helpful in identifying such gaps and recommending compensating controls to cover them.

Big Data, Old News. Got Humans?

Did you know that big data is old news in the area of financial derivatives?   O’Connor & Associates  was founded in 1977 by mathematician Michael Greenbaum, who had run risk management for Ed & Bill O’Connor’s options trading firm. What made O’Connor and Associates successful was the understanding that expertise is far more important than any tool or algorithm. After all, absent expertise, any tool can only generate gibberish; perfectly processed and completely logical, of course, but still gibberish.

Which brings us back to the critical role played by the driver of today’s enterprise tools. These tools are all full featured and automate the work of crushing an entire hillside of dirt to locate tiny grams of gold — but “got human”? It comes back to the skilled operator who knows how and when to push all those fancy buttons. Of course deciding which hillside to crush is another problem altogether.

This is a particularly difficult challenge for midsize enterprises that struggle with SIEM data: billions of logs, plus change and configuration data, all now available thanks to that shiny SIEM you just installed. What does it mean? What are you supposed to do next? Large enterprises can afford a small army of experts to extract value, and the small business can ignore the problem completely, but for the midsize enterprise it's the worst of all worlds: compliance regulations, tight budgets, lean staff and the demand for results.

This is why our SIEM Simplified offering was created: to allow customers to outsource the heavy lifting while maintaining control over the critical and sensitive decision making. At the EventTracker Control Center (ECC), our expert staff watches your incidents, reviews log reports daily, and alerts you to those few truly critical conditions that warrant your attention. This frees up your staff to take care of things that cannot be outsourced. In addition, since the ECC enjoys economies of scale, this can be done at a lower cost than do-it-yourself. This has the advantage of inserting the critical human component back into the equation, but at a price point that is affordable.

As Grady Booch observed “A fool with a tool is still a fool.”


Choosing The Solution That Works For You

Troubleshooting problems with enterprise applications and services is often an exercise in frustration for IT and business staff. The reasons are well documented: complex architectures; disparate, unintegrated monitoring solutions; and minimal coordination between technology and product experts as they attempt to pinpoint and resolve problems under the pressure of the escalating negative impact of delays and downtime on revenues, customer satisfaction and service delivery.

Simplifying infrastructure and application performance analysis across multiple technologies, and coordinating efforts between all staff involved in the problem resolution process, is a top priority for IT and operations staff, whether the consumer is in research, government, financial services, education or the commercial enterprise. Minimizing downtime risks and lowering the cost of reliable services is the goal.

SIEM solutions are increasingly proving their worth and vital role in addressing the challenge. Such efforts do pay off: one company saved $1 million in one month after it implemented an integrated incident and problem management workflow. However, success isn't automatic. It is important to have a structured process to evaluate potential solutions. This holds true whether the search is for a SIEM solution or a comprehensive infrastructure performance management one. Consider some of the following criteria and attributes that should be included in the solution evaluation process.

Solution Evaluation and Selection Checklist

1.     Policy-based:

  1. Rules must be easy to create and maintain.
  2. Rule evaluation and execution must be based on real-time data, collected and interpreted within an operational context that reflects business demands as well as process and infrastructure reality.
  3. The time period within which rules are evaluated and responses automatically initiated must be configurable and highly agile – seconds (or less) matter.
  4. Policies must be applied with a level of granularity that is both application and business process specific.

2.     Non-invasive operation:

  1. The solution must leverage existing processes, application operation and infrastructure realities.
  2. It must not require application modifications or extensive, proprietary modifications to the operational infrastructure to be effective.
  3. It must automatically integrate and adapt to infrastructure and business process changes.

3.     Platform agnostic:

  1. The solution must accommodate a heterogeneous IT environment. There should be no “designed in” dependencies on hardware or software features.
  2. It must be built on open standards and interfaces.

4.     Extensible:

  1. The solution must be able to expand and scale with the operational environment.
  2. The solution must be modular in design so that functionality can be extended as needed.
  3. The solution should have the ability to utilize both in- and out-of-band operational metrics.
  4. There must be no architectural, structural or operational bottlenecks that will prevent the solution from operating as the infrastructure and business environment grows.
  5. It must interoperate and integrate with existing on-site commercial /proprietary tools.

5.     Automated:

  1. The solution must provide comprehensive automated monitoring, management, and control of operations.
  2. The solution must be able to monitor, manage (as appropriate) and report on performance and events across all desired infrastructure and devices.

6.     Distributed, fault tolerant operation:

  1. If the solution supports business critical operations it must operate in a fault tolerant manner with no single point of failure.
  2. Data collection, data analysis, operational intelligence, policy-definition and implementation must be distributed to assure reliable operation even if part of the environment fails.

7.     Operational and state reports that are easy to use and understand:

  1. The solution must provide clear reporting that aids in designing and defining policies for remedial, repair or avoidance responses as appropriate.
  2. Both the user interface and reporting must be designed to present an easily understood and consumable holistic view of infrastructure data and business information as desired and needed.

8.     Process-based:

  1. Hard-won experience in automating business processes leads us to conclude that model-based, policy-driven automation helps to relieve the burden and risk associated with manual processes.
  2. In addition, such a solution provides flexibility, adaptability and scalability, delivered within a timeframe and with an ease of implementation that exceeds alternative approaches.

This is not an exhaustive list. These important requirements must be supplemented with ones that are unique to the specific situation and organization. Each organization has operational idiosyncrasies that affect the selection of an appropriate solution approach to its problems. These can be process, technology, procedural or even politically based. In any case, they need to be identified and considered when defining solution requirements.
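
As one hedged illustration of turning the checklist above into a structured evaluation, a simple weighted scoring matrix makes trade-offs explicit. The weights, candidate names and ratings below are placeholders, not recommendations.

  # Hypothetical weighted scoring of candidate solutions against the checklist.
  # Weights sum to 1.0; ratings are on a 1-5 scale. All values are illustrative.
  CRITERIA_WEIGHTS = {
      "policy_based": 0.20, "non_invasive": 0.10, "platform_agnostic": 0.10,
      "extensible": 0.15, "automated": 0.15, "fault_tolerant": 0.15,
      "reporting": 0.10, "process_based": 0.05,
  }

  def weighted_score(ratings):
      """ratings: criterion -> 1-5 rating for one candidate solution."""
      return sum(w * ratings.get(c, 0) for c, w in CRITERIA_WEIGHTS.items())

  candidates = {
      "Solution A": {"policy_based": 4, "non_invasive": 3, "platform_agnostic": 5,
                     "extensible": 4, "automated": 4, "fault_tolerant": 3,
                     "reporting": 4, "process_based": 4},
      "Solution B": {"policy_based": 5, "non_invasive": 4, "platform_agnostic": 3,
                     "extensible": 3, "automated": 5, "fault_tolerant": 4,
                     "reporting": 3, "process_based": 3},
  }

  for name in sorted(candidates, key=lambda n: weighted_score(candidates[n]), reverse=True):
      print(f"{name}: {weighted_score(candidates[name]):.2f}")

The organization-specific requirements mentioned above would simply be added as further criteria with their own weights.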

Leveraging The User To Improve IT Solutions

I've spent the last 20 years analyzing the Information Technology market. My work with vendors has ranged from developing business strategies and honing messaging to defining product requirements and identifying significant trends. My work with IT enterprise decision-makers has been to help define requirements, identify and evaluate alternatives, recommend solutions, and so on. We've always worked closely with our clients to understand first what they are trying to accomplish, and then to provide the advice, support and services that we believe will be most effective in achieving those goals.

Over the years I've noticed there are recurring cycles in how solutions are developed and sold. Certain themes reappear: centralization of resources vs. decentralization; work from customer wish-lists but don't miss the next 'big thing'. My personal favorite of the recurring themes is 'listen to and understand the customer'. This is actually really good advice, but it must be executed correctly. It requires active listening, interaction, and learning in depth how the customer uses the product as well as what is not being used. Clearly its time in the cycle has come around again.

Over the last 18 to 24 months, I’ve seen more of our clients spending significant time making a concerted effort to work with and gain feedback from customers. They’re not shy about discussing the depth and breadth of their efforts, and the rapid, recursive changes to the product that we find interesting and highly beneficial.

So, what's the big deal about listening to clients? There are a variety of ways to acquire and process data from customers. The issue isn't a failure in these, but the changes taking place in how the data is collected, the focus of the inquiry, and how the data is used and applied.

Data Collection: Social media, agile development and consumerization change it all! What was once a structured and prolonged process of meetings, discussions and eventual integration into a development plan, has become a looser, more interactive and faster track to development and integration of new features and capabilities. Social media facilitates and speeds communications between the vendor and the user. Agile development allows user comments and requests to more directly influence the product and feature development process and workflow. One client described how data from users were streamed into their process of continuous development. Teams were able to make incremental adjustments during the development process. Customers provided ongoing feedback about what worked, didn’t work and what almost worked based on evaluating snapshots and prototypes. Today’s technologies and workflows allow for a continuous input to improve the product during the development cycle. It turns out that it can be economically more efficient to facilitate potentially disruptive communication that allows those adjustments to meet customer needs, than discovering a major gap between delivered functionality and need at the end of the development cycle.

Focus of the Inquiry: Speeds 'n' feeds don't cut it anymore! It used to be about shaping tools, adding features and functionality. Today, changes in technology, capacity expansion and the changed operations environment shift the focus to simplicity of use, integration of capabilities and consistency across platforms. Users have more responsibility and, frequently, less training. Over-specialization makes less sense from an economic perspective with the advent of virtualization. IT staff need management and administrative tools that integrate tasks and functions and leverage the capabilities of the technology to optimize the delivery of reliable services. They need solutions that will help them anticipate and identify the source of disruptions wherever they occur and help them provide the best user experience. This means vendors must work more closely with their users to acquire a deep understanding of how their products are used and how to improve them.

Applying the data: Integration, intelligence and leveraging what is known! Features and functionality enhancements remain important, but now the focus is on making them easier to use and to apply to changing circumstances. IT staff need to be able to use and integrate what is available in terms of data and tools, as well as usage patterns, customer behavior, and knowledge about change. The focus is on moving beyond simply collecting, aggregating and reporting to using patterns, analytics and acquired expertise to make the user more effective and proactive (when appropriate) in the use of the tool. The extension can be as straightforward as building the ability to group, search, correlate and manipulate massive amounts of data or feedback with a new user interface. Or, it can involve the automated correlation and analysis of data to pinpoint a potential failure during a simulated service delivery, then using simulation to suggest repair or work-around alternatives to avoid the problem.

The point of all this is that vendors and users are becoming more closely linked. The consumerization of IT and penetration of social media provide more opportunities for interaction and collaboration that benefits both. The partnership between vendors, channel and technology partners and the consumer provide a rich ground for cooperative efforts that will benefit all three. In our practice, we are seeing more and more of such collaboration. It is both highly welcome and much needed. The results that we have seen so far lead us to believe this will continue.

Five myths about PCI-DSS

In the spirit of the Washington Post's regular column, "5 Myths," here we "challenge everything you think you know" about PCI-DSS compliance.

1. One vendor and product will make us compliant

While many vendors offer an array of services and software which target PCI-DSS, no single vendor or product fully addresses all 12 of the PCI-DSS v2.0 requirements. Marketing departments often position offerings in such a manner as to give the impression of a “silver bullet.”   The PCI Security Standards Council warns against reliance on a single product or vendor and urges a security strategy that focuses on the big picture.

2. Outsourcing card processing makes us compliant

Outsourcing may simplify payment card processing but does not provide automatic compliance. PCI-DSS also calls for policies and procedures to safeguard cardholder transactions and data when you handle them, for example chargebacks or refunds. You should request an annual certificate of compliance from the vendor to ensure that their applications and terminals are compliant.

3. PCI is too hard, requires too much effort

The 12 requirements can seem difficult to understand and implement for merchants without a dedicated IT department; however, these requirements are basic steps for good security. The standard offers the alternative of compensating controls, if needed. The market is awash with products and services to help merchants achieve compliance. Also consider that the cost of non-compliance can often be higher, including fines, legal fees, lost business and reputation damage.

4. PCI requires us to hire a Qualified Security Assessor (QSA)

PCI-DSS offers the option of doing a self-assessment with officer sign-off if your merchant bank agrees. Most large retailers prefer to hire a QSA because they have complex environments, and QSAs provide valuable expertise including the use of compensating controls.

5. PCI compliance will make us more secure

Security exploits are non-stop, an ever-escalating war between the bad guys and the good guys. Achieving PCI-DSS compliance, while certainly a "brick in the wall" of your security posture, is only a snapshot in time. "Eternal vigilance is the price of liberty," said Wendell Phillips.

Does Big Data = Better Results? It depends…

If you could offer your IT Security team 100 times more data than they currently collect – every last log, every configuration, every single change made to every device in the entire enterprise, at zero cost – would they be better off? Would your enterprise be more secure? Completely compliant? You already know the answer: not really, no. In fact, some compliance-focused customers tell us they would be worse off because of liability concerns ("you had the data all along but neglected to use it to safeguard my privacy"), and some security-focused customers say it would actually make things worse because they have no processes to effectively manage such archives.

As Michael Schrage noted, big data doesn't inherently lead to better results. Organizations must grasp that being "big data-driven requires more qualified human judgment than cloud-based machine learning." For big data to be meaningful, it has to be linked to a desirable business outcome, or else executives are just being impressed or intimidated by the bigness of the data set. For example, IBM's DeepQA project stores petabytes of data and was demonstrated by Watson, the successful Jeopardy-playing machine – that is big data linked clearly to a desirable outcome.

In our neck of the woods, the desirable business outcomes are well understood. We want to keep the bad guys out (malware, hackers), learn about the guys inside who have gone bad (insider threats), demonstrate continuous compliance, and of course do all this on a leaner, meaner budget.

Big data can be an embarrassment of riches if linked to such outcomes. But note the emphasis on "qualified human judgment." Absent this, big data may be just an embarrassment. This point underlines the core problem with SIEM: we can collect everything, but who has the time or rule-set to make the valuable stuff jump out? If you agree, consider a managed service. It's a cost-effective way to put big data to work in your enterprise today, clearly linked to a set of desirable outcomes.

Are you a Data Scientist?

The advent of the big data era means that analyzing large, messy, unstructured data will increasingly form part of everyone’s work. Managers and business analysts will often be called upon to conduct data-driven experiments, to interpret data, and to create innovative data-based products and services. To thrive in this world, many will require additional skills. In a new Avanade survey, more than 60 percent of respondents said their employees need to develop new skills to translate big data into insights and business value.

Are you:

Ready and willing to experiment with your log and SIEM data? Managers and security analysts must be able to apply the principles of scientific experimentation to their log and SIEM data. They must know how to construct intelligent hypotheses. They also need to understand the principles of experimental testing and design, including population selection and sampling, in order to evaluate the validity of data analyses. As randomized testing and experimentation become more commonplace, a background in scientific experimental design will be particularly valued.

Adept at mathematical reasoning? How many of your IT staff today are really “numerate” — competent in the interpretation and use of numeric data? It’s a skill that’s going to become increasingly critical. IT Staff members don’t need to be statisticians, but they need to understand the proper usage of statistical methods. They should understand how to interpret data, metrics and the results of statistical models.

Able to see the big (data) picture? You might call this “data literacy,” or competence in finding, manipulating, managing, and interpreting data, including not just numbers but also text and images. Data literacy skills should be widespread within the IT function, and become an integral aspect of every function and activity.
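
By way of a hedged illustration of the first two skills, experimenting with log data and numeracy, the sketch below compares failed-logon rates from two sampled weeks with a two-proportion z-test. The scenario and every number in it are invented for the example.

  # Illustrative experiment: did a password-policy change alter the rate of
  # failed logons? Compare two sampled weeks with a two-proportion z-test.
  # All counts below are invented. Requires Python 3.8+ for NormalDist.
  from math import sqrt
  from statistics import NormalDist

  def two_proportion_z(x1, n1, x2, n2):
      """Return (z, two-sided p-value) for H0: the two proportions are equal."""
      p1, p2 = x1 / n1, x2 / n2
      p_pool = (x1 + x2) / (n1 + n2)
      se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
      z = (p1 - p2) / se
      p_value = 2 * (1 - NormalDist().cdf(abs(z)))
      return z, p_value

  # Week before the change: 1,200 failures in 50,000 sampled logon attempts.
  # Week after the change:    900 failures in 48,000 sampled logon attempts.
  z, p = two_proportion_z(1200, 50_000, 900, 48_000)
  print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real change, not noise

The statistics are elementary; the harder parts, exactly as described above, are framing the hypothesis and sampling the logon population fairly.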

Jeanne Harris blogging in the Harvard Business Review writes, “Tomorrow’s leaders need to ensure that their people have these skills, along with the culture, support and accountability to go with it. In addition, they must be comfortable leading organizations in which many employees, not just a handful of IT professionals and PhDs in statistics, are up to their necks in the complexities of analyzing large, unstructured and messy data.

“Ensuring that big data creates big value calls for a reskilling effort that is at least as much about fostering a data-driven mindset and analytical culture as it is about adopting new technology. Companies leading the revolution already have an experiment-focused, numerate, data-literate workforce.”

If this presents a challenge, then co-sourcing the function may be an option. The EventTracker Control Center here at Prism offers SIEM Simplified, a service where trained, expert IT staff perform the heavy lifting associated with big data analysis as it relates to SIEM data. By removing the outliers and bringing patterns to your attention with greater efficiency (because of scale, focus and expertise), the service lets you concentrate on interpretation and the associated actions.

Seven deadly sins of SIEM

1) Lust: Be not easily lured by the fun, sexy demo. It always looks fantastic when the sales guy is driving. How does it work when you drive? Better yet, on your data?

2) Gluttony: Know thy log volume. When thee consumeth mucho more raw logs than thou expected, thou shall pay and pay dearly. More SIEM budgets die from log gluttony than starvation.

3) Greed: Pure pursuit of perfect rules is perilous. Pick a problem you’re passionate about, craft monitoring, and only after it is clearly understood do you automate remediation.

4) Sloth: The lazy shall languish in obscurity. Toilers triumph. Use thy SIEM every day, acknowledge the incidents, review the log reports. Too hard? No time, you say? Consider SIEM Simplified.

5) Wrath: Don’t get angry with the naysayers. Attack the problem instead. Remember “those who can, do; those who cannot, criticize.” Democrats: Yes we can v2.0.

6) Envy: Do not copy others blindly out of envy for their strategy. Account for your differences (but do emulate best practices).

7) Pride: Hubris kills. Humility has a power all its own. Don't claim 100% compliance or security. Rather, say you have 80% coverage at 20% of the cost and are refining to get the rest. Republicans: So sayeth Ronald Reagan.

Trending Behavior – The Fastest Way to Value

Our SIEM Simplified offering is manned by a dedicated staff overseeing the EventTracker Control Center (ECC). When a new customer comes aboard, the ECC staff is tasked with getting to know the new environment: identifying which systems are critical, which applications need watching, what access controls are in place, and so on. In theory, the customer would bring the ECC staff up to speed (this is their network, after all) and keep them up to date as the environment changes. Reality bites, and this is rarely the case. More commonly, the customer is unable to provide the ECC with anything other than the most basic information.

How then can the ECC “learn” and why is this problem interesting to SIEM users at large?

Let's tackle the latter question first. A problem facing new users of a SIEM installation is that they get buried in learning the baseline pattern of the enterprise (the very same problem the ECC faces). See this article from a practitioner.

So it’s the same problem. How does the ECC respond?

Short answer: By looking at behavior trends and spotting the anomalies.

Long answer: The ECC first discovers the network and learns the various device types (OS, application, network devices, etc.). This is readily automated by the StatusTracker module. If we are lucky, we get to ask the customer specific questions to bolster our understanding. Next, based on this information and the available knowledge packs within EventTracker, we schedule suitable daily and weekly reports and configure alerts. So far, so good, but really no cigar. The real magic lies in taking these reports and creating flex reports where we control the output format to focus on parameters of value that are embedded within the description portion of the log messages (this is true for syslog-formatted messages as well as Windows-style events). When these parameters are trended in a graph, all sorts of interesting information emerges.

In one case, we saw that a particular group of users was putting their passwords in the username field far more often than usual: you see a failed login followed by a successful one; combine the two and you have both the username and the password. In another case, we saw repeated failed logons after hours from a critical IBM i-Series machine and hit the panic button. It turned out someone had left a book on the keyboard.
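
For illustration only, and a simplification of the flex-report idea rather than the actual EventTracker mechanism: once a parameter such as failed-logon count per user per day has been extracted from the description portion of the logs, flagging departures from each user's own baseline can be as simple as the sketch below. The minimum history and the three-sigma threshold are assumptions.

  # Simplified sketch: trend a parameter extracted from log descriptions
  # (failed-logon counts per user per day) and flag days that sit far
  # above that user's own baseline. Thresholds are illustrative.
  from collections import defaultdict
  from statistics import mean, pstdev

  def daily_counts(events):
      """events: iterable of (date, user) pairs for failed logons."""
      counts = defaultdict(lambda: defaultdict(int))  # user -> date -> count
      for day, user in events:
          counts[user][day] += 1
      return counts

  def anomalies(counts, min_days=7, sigmas=3.0):
      """Yield (user, day, count) where the latest day's count exceeds the
      user's historical mean by more than `sigmas` standard deviations."""
      for user, per_day in counts.items():
          days = sorted(per_day)
          if len(days) < min_days:
              continue                      # not enough history to trend
          history = [per_day[d] for d in days[:-1]]
          mu, sd = mean(history), pstdev(history) or 1.0
          latest = days[-1]
          if per_day[latest] > mu + sigmas * sd:
              yield user, latest, per_day[latest]

Plotted over a few weeks, the same counts produce exactly the kind of trend graph described above, where an after-hours spike from a host like the i-Series machine stands out immediately.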

Takeaway: Want to get useful value from your SIEM but don't have gobs of time to configure or tune the thing for months on end? Think trending behavior, preferably auto-learned. It's what sets EventTracker apart from the search-engine-based SIEMs and from the rules-based products that need an expen$ive human analyst chained to the product for months on end. Better yet, let the ECC do the heavy lifting for you. SIEM Simplified, indeed.

Compliance Challenge Continues

Despite its significant costs and a mixed record of success, the compliance-related load imposed on today’s enterprise has yet to decrease. Current trends driven by government legislative efforts, and adopted at the executive level, favor the continuing proliferation of monitoring and reporting in operations, decision-making and service delivery. Even if existing legislation is repealed, it is not certain that compliance edicts will cease.

The response to and responsibility for monitoring, recording, analyzing and reporting on compliance efforts will continue to heavily impact IT operations. Data is where it all starts: IT remains the main repository of enterprise information and data, including the responsibility for maintaining and operating the network links between all parts of the organization. Therefore, IT will bear the bulk of the operational load.

Enterprise compliance activities break down into three steps:

  1. Assessment – the effort undertaken by the enterprise to determine the operational differences between current operational procedures and those required to comply with legislated mandates. This can include defining activities to eliminate the gap.
  2. Implementation – the effort to design the required solution, acquire infrastructure, processes and products to implement the solution, and, finally, the actual implementation effort.
  3. Review, analysis, and reporting – this is the cycle of activities to get actionable information from the data collected, the reporting on the day-to-day state of compliance, warning when noncompliance threatens, and progress towards achieving compliance.

The first two tasks have benefited from the interest and efforts of a range of aggressive solution providers. The third continues to get an increasing amount of attention as experience demonstrates its criticality to assuring compliance in an evolving climate of control. The enterprise must be able to demonstrate not only that it has policies and procedures in place, but also that it monitors to assure these are followed (and initiates corrective action when they are not). Enterprise executives also become liable if abuses or weaknesses creep into their systems as things change over time as the result of growth (organic, acquisition, etc.) or consolidation.

For all but the smallest enterprise, the task of monitoring activities, collecting data, analyzing, and reporting on the data is far too complex and time-consuming for manual completion. Complicating matters is the tendency for mandates to include demands for 'timely' reporting and 'prompt' corrective action. In an environment with little tolerance for slow responses, few enterprises can afford to run the risk of being perceived as non-compliant, with the attendant legal and financial penalties.

Today’s enterprise operates in an environment of growing complexity, escalating competition and, in the case of compliance regulations, increasing ambiguity. Ambiguity means that different groups can, and will interpret performance and operational actions in differing ways. This increases the risk of non-compliance. It also means that the parameters and requirements of reporting can and will change. IT must be able to quickly and reliably adapt its processes to comply with these changes.

Finally, in addition to externally mandated procedures there are those required by the enterprise itself. Any solution must be able to monitor and prove compliance with these custom procedures. The answer is to automate the effort to track, manage and report on the compliance process.

The primary responsibility for implementing the policies and reporting on these efforts falls on enterprise IT. IT must deal with these as well as expanded demands for service, with staffs stretched to the limit by the technical and operational demands of increasingly complex day-to-day activities. It is not possible to meet compliance monitoring and reporting demands with manual effort. Such approaches are too slow, too inconsistent in application, and unable to stay current with the pace of change in today's dynamic enterprise.

Based on hard won experience in automating business processes, enterprises have embraced policy-driven automation to relieve the burden and risk associated with manual processes. This delivers flexibility, adaptability, and scalability with a timeframe and ease of implementation that meets compliance and operational needs. Hence, the popularity of SIEM and log management solutions and the popularity of integrated solutions that allow seamless growth and application across the enterprise environment.

It is important to know the specific requirements and results needed by the enterprise, to avoid selecting a SIEM solution that is too complicated to use or too feature-weak to meet enterprise needs. For most, a modular, automated, integrated, policy-driven and process-oriented solution will prove to be the most effective and flexible choice.

SIEM Fevers and the Antidote

SIEM Fever is a condition that robs otherwise rational people of common sense in regard to adopting and applying Security Information and Event Management (SIEM) technology for their IT Security and Compliance needs. The consequences of SIEM Fever have contributed to misapplication, misuse, and misunderstanding of SIEM with costly impact. For example, some organizations have adopted SIEM in contexts where there is no hope of a return on investment. Others have invested in training and reorganization but use or abuse the technology with new terminology taken from the vendor dictionary.   Alex Bell of Boeing first described these conditions.

Before you get your knickers in a twist due to a belief that it is an attack on SIEM and must be avenged with flaming commentary against its author, fear not. There are real IT Security and Compliance efforts wasting real money, and wasting real time by misusing SIEM in a number of common forms. Let’s review these types of SIEM Fevers, so they can be recognized and treated.

Lemming Fever: A person with Lemming Fever knows about SIEM simply based upon what he or she has been told (be it true or false), without any first-hand experience or knowledge of it themselves. The consequences of Lemming Fever can be very dangerous if infectees have any kind of decision making responsibility for an enterprise’s SIEM adoption trajectory. The danger tends to increase as a function of an afflictee’s seniority in the program organization due to the greater consequences of bad decision making and the ability to dismiss underling guidance. Lemming Fever is one of the most dangerous SIEM Fevers as it is usually a precondition to many of the following fevers.

Easy Button Fever: This person believes that adopting SIEM is as simple as pressing Staples' Easy Button, at which point their program magically and immediately begins reaping the benefits of SIEM as imagined during the Lemming Fever stage of infection. Depending on the Security Operations Center (SOC) methodology, however, the deployment of SIEM could mean significant change. Typically, these people have little to no idea about the features which are necessary for delivering SIEM's productivity improvements, or about the possible inapplicability of those features to their environment.

One Size Fits All Fever: Victims of One Size Fits All Fever believe that the same SIEM model is applicable to any and all environments with a return on investment being implicit in adoption. While tailoring is an important part of SIEM adoption, the extent to which SIEM must be tailored for a specific environment’s context is an important barometer of its appropriateness. One Size Fits All Fever is a mental mindset that may stand alone from other Fevers that are typically associated with the tactical misuse of SIEM.

Simon Says Fever: Afflictees of Simon Says Fever are recognized by their participation in SIEM-related activities without the slightest idea as to why those activities are being conducted or why they are important, other than because they are included in some "checklist". The most common cause of this Fever is failing to tie all log and incident review activities to adding value, and falling into a comfortable, robotic regimen that is merely an illusion of progress.

One-Eyed King Fever: This Fever has the potential to severely impact the successful adoption of SIEM and occurs when the SIEM blind are coached by people with only a slightly better understanding of SIEM. The most common symptom occurring in the presence of One-Eyed King Fever is failure to tailor the SIEM implementation to its specific context, or the failure of a coach to recognize and act on a low probability of return on investment as it pertains to an enterprise's adoption.

The Antidote: SIEM doesn't cause the Fevers previously described; people do. Whether these people are well intentioned, have studied at the finest schools, or have high IQs, they are typically ignorant of SIEM in many dimensions. They have little idea about the qualities of SIEM which are the bases of its advertised productivity-improving features, they believe that those improvements are guaranteed by merely adopting SIEM, or they have little idea that the extent of SIEM's ability to deliver benefit is highly dependent upon program-specific context.

The antidote for the many forms of SIEM Fever is education. Unfortunately, many of those who are prone to the aforementioned SIEM infections and most desperately in need of such education are often unaware of what they don't know about SIEM, unreceptive to learning about what they don't know, or convinced that those trying to educate them are simply village idiots who have not yet seen the brightly burning SIEM light.

While I’m being entirely tongue-in-cheek, the previously described examples of SIEM misuse and misapplication are real and occurring on a daily basis.   These are not cases of industrial sabotage caused by rogue employees planted by a competitor, but are instead self-inflicted and frequently continue even amidst the availability of experts who are capable of rectifying them.

Interested in getting help? Consider SIEM Simplified.

SIEM: Security, Incident AND Event MANAGEMENT, not Monitoring!

Unfortunately, IT is not perfect; nothing in our world can be. Compounding the inevitable failures and weaknesses in any system designed by fallible beings are those with malicious or larcenous intent who search for exploitable system weaknesses. As a result, IT and the businesses, enterprises and users depending upon reliable operations are no strangers to disruptions, problems, and embarrassing, even ruinous, releases of data and information. The recent exposures of the passwords of hundreds of thousands of Yahoo! and Formspring [1] users are only two of the most recent public occurrences that remind us of the risks and weaknesses that remain in the systems of even the most sophisticated service providers.

The wise, or more correctly, experienced solution or system designer recognizes the risk of attempts at unauthorized access to files and data. To frustrate such attacks and minimize their impact, they will design and apply various fail-safe strategies and tactical protective mechanisms as part of good design. Issues of security are of prime concern and a barrier to the use of many technologies (cloud in all its models is a prime example) and implementation strategies, such as outsourcing services.

One of the reasons that solutions such as SIEM products exist is as part of the operational response to the risks of failure in data and information protection. In the best implementations, they are used as part of a closed-loop process. A basic process would include monitoring to detect suspect or anomalous behaviors which mark intrusion attempts and reveal suspect procedural patterns (e.g. repeated password failures). Upon identification and verification, these typically trigger an event alarm and report to a responsible party. The notified individual may accept and act on the notice. Alternatively, they may perform their own check to assure the alarm is valid. They will then determine what corrective action, if any, is needed and initiate it. Some installations are set up to automatically trigger corrective action (such as isolating the system or port) in parallel with the notification. But experience has shown that such a process definition alone is not a guarantee of protection and risk reduction.

In fact, a review of a failed process that resulted in a major data leak at a service provider gives an indication of how the best-designed system can fail. The company had in place a process which included all the proper activities, as well as a reasonable sequence of review and actions to take in response to an alert of an attempted intrusion or attack. Unfortunately, it had no process to oversee the process and assure that someone reviewed the notification of a suspicious event once it was sent. If the notification arrived out of hours, or if it was lost, there was no verification of receipt or provision to check on follow-up. The resulting debacle was all but inevitable.
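
The gap described above, a notification that is sent but never verifiably reviewed, can be closed in principle with an acknowledgment-and-escalation loop. The sketch below is a hedged illustration, not any particular provider's workflow; the escalation chain, deadline and notification mechanism are placeholders.

  # Illustrative closed-loop notification: every alert must be acknowledged
  # within a deadline or it escalates to the next contact in the chain.
  # Contacts and deadlines are placeholders.
  from dataclasses import dataclass, field
  from datetime import datetime, timedelta
  from typing import List, Optional

  ESCALATION_CHAIN = ["on-call-analyst", "soc-lead", "security-manager"]
  ACK_DEADLINE = timedelta(minutes=30)

  @dataclass
  class Alert:
      alert_id: str
      raised_at: datetime
      notified: List[str] = field(default_factory=list)
      acknowledged_by: Optional[str] = None

  def notify(alert: Alert, contact: str) -> None:
      # Placeholder for the e-mail/SMS/ticket integration.
      alert.notified.append(contact)
      print(f"{alert.alert_id}: notified {contact}")

  def check_and_escalate(alert: Alert, now: datetime) -> None:
      """Called periodically by a scheduler: notify the first contact, then
      escalate an unacknowledged alert once the current deadline has passed."""
      if alert.acknowledged_by:
          return                                    # loop closed, nothing to do
      overdue = now - alert.raised_at > ACK_DEADLINE * len(alert.notified)
      if overdue and len(alert.notified) < len(ESCALATION_CHAIN):
          notify(alert, ESCALATION_CHAIN[len(alert.notified)])

Acknowledgment itself should be logged so that the review step can be audited later; that audit trail is exactly what was missing in the failure described above.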

Keep in mind when evaluating internal, as well as external data services that contractual guarantees, compliance audits, code testing and reviews have failed to be 100% effective to prevent data exposure, intrusion or leaks. There are no 100% fail-safe solutions; a workable solution should be viewed as one that reduces risk to an acceptable level.

An effective solution must include an on-going process of maintenance, review and validation testing to assure that it is working correctly, remains relevant and focused on the appropriate issues. Assumptions have to be documented, reviewed and tested to assure they match reality. Boundaries, trip-points and threshold limits need to be reviewed. This holds true even and especially for analyses designed to adjust automatically to circumstances to assure they do not ‘drift’ away from critical values.

SIEM solutions are available in a wide variety of service combinations. The typical solution includes the functionality needed for event management, information management and network behavior analytics. This allows them to build a comprehensive view of what is happening based on a combination of real-time data and event log information. Many additional options exist for those with more comprehensive concerns and management needs. Frequently desired additional functionality includes risk analysis, vulnerability management, and security controls such as integration with identity and access management. Best practices in corporate governance have made compliance monitoring and management capabilities, including the ability to assess and build compliance reports, a critical extension.

Finally, any production process requires periodic maintenance and review to remain effective. Communication and reporting flows have to be verified to assure not only that the information and alert arrives, but that it is monitored and reviewed in a regular and timely manner. The temptation exists to assume that a solution would be complete based on functionality alone. It should now be clear that a successful system of data protection requires a combination of solution functionality, process and management that effectively reduces and maintains the risk of a breach to a level acceptable to the service and enterprise needs.

[1] http://www.bbc.co.uk/news/technology-18811300

Surfing the Hype Cycle for SIEM

The Gartner hype cycle is a graphic "source of insight to manage technology deployment within the context of your specific business goals." If you have already adopted Security Information and Event Management (SIEM) (aka log management) technology in your organization, how is that working for you? As a candidate, Reagan famously asked, "Are you better off than you were four years ago?"

Sadly, many buyers of this technology are wallowing in the "trough of disillusionment." The implementation has been harder than expected, the technology more complex than demonstrated, the discipline required to use and tune the product is lacking, resources are constrained, hiring is frozen, and the list goes on.

What next? Here are some choices to consider.

Do nothing: Perhaps the compliance check box has been checked off; auditors can be shown the SIEM deployment and sent on their way; the senior staff on to the next big thing; the junior staff have their hands full anyway; leave well enough alone.
Upside: No new costs, no disturbance in the status quo.
Downside: No improvements in security or operations; attackers count on the fact that even if you do collect log SIEM data, you will never really look at it.

Abandon ship: Give up on the whole SIEM concept as yet another failed IT project; the technology was immature; the vendor support was poor; we did not get resources to do the job and so on.
Upside: No new costs, in fact perhaps some cost savings from the annual maintenance, one less technology to deal with.
Downside: Naked in the face of attack or an auditor visit; expect an OMG crisis situation soon.

Try managed service: Managing a SIEM is 99% perspiration and 1% inspiration; offload the perspiration to a team that does this for a living; they can do it with discipline (their livelihood depends on it) and probably cheaper too (passing on savings to you); you deal with the inspiration.
Upside: Security usually improves; compliance is not a nightmare; frees up senior staff to do other pressing/interesting tasks; cost savings.
Downside: Some loss of control.

Interested? We call it SIEM Simplified™.

Big Data Gotcha’s

Jill Dyche, writing in the Harvard Business Review, suggests that "the question on many business leaders' minds is this: Does the potential for accelerating existing business processes warrant the enormous cost associated with technology adoption, project ramp up, and staff hiring and training that accompany Big Data efforts?"

A typical log management implementation, even in a medium enterprise, is usually a big data endeavor. Surprised? You should not be. A relatively small network of a dozen log sources easily generates a million log messages per day, and volumes in the 50-100 million per day range are commonplace. With compliance and security guidelines requiring that logs be retained for 12 months or more, pretty soon you have big data.
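To make “pretty soon you have big data” concrete, here is a minimal back-of-the-envelope sketch in Python. The daily event rate matches the mid-range figure above; the average event size and the compression ratio are assumptions chosen purely for illustration.

# Back-of-the-envelope retention math for a log archive.
# The average event size and compression ratio are illustrative
# assumptions, not measurements from any particular product.
EVENTS_PER_DAY = 50_000_000   # mid-range of the 50-100 million cited above
AVG_EVENT_BYTES = 500         # assumed average size of a raw log record
RETENTION_DAYS = 365          # 12-month retention requirement

raw_bytes = EVENTS_PER_DAY * AVG_EVENT_BYTES * RETENTION_DAYS
print(f"Raw archive size: {raw_bytes / 1e12:.1f} TB")           # roughly 9.1 TB uncompressed
print(f"With 10:1 compression: {raw_bytes / 10 / 1e12:.2f} TB")  # still around a terabyte

Even under these conservative assumptions, a single year of retained logs lands in terabyte territory, which is exactly the point above.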

So let’s answer the question raised in the article:

Q1: What can’t we do today that Big Data could help us do?   If you can’t define the goal of a Big Data effort, don’t pursue it.

A1: Comply with regulations such as PCI-DSS, SOX 404 and HIPAA; be alerted to security problems in the enterprise; control data leakage via insecure endpoints; improve operational efficiency.

Q2: What skills, technologies, and existing data development practices do we have in place that could help kick-start a Big Data effort? If your company doesn’t have an effective data management organization in place, adoption of Big Data technology will be a huge challenge.

A2: Absent a trained and motivated user of the power tool that is the modern SIEM, an organization that acquires such technology is consigning it to shelfware. Recognizing this as a significant adoption challenge in our industry, we offer Monitored SIEM as a service; the best way to describe this is SIEM simplified! We do the heavy lifting so you can focus on leveraging the value.

Q3: What would a proof-of-concept look like, and what are some reasonable boundaries to ensure its quick deployment? As with many other proofs-of-concept the “don’t boil the ocean” rule applies to Big Data.

A3: The advantage of a software-only solution like EventTracker is that an on-premises trial is easy to set up. A virtual appliance with everything you need is provided; set it up as a VMware or Hyper-V virtual machine within minutes. Want something even faster? See it live online.

Q4: What determines whether we green light Big Data investment? Know what success looks like, and put the measures in place.

A4: Excellent point; success may mean continuous compliance;   a 75% reduction in cost of compliance; one security incident averted per quarter; delegation of log review to a junior admin.

Q5: Can we manage the changes brought by Big Data? With the regular communication of tangible results, the payoff of Big Data can be very big indeed.

A5: EventTracker includes more than 2,000 pre-built reports designed to deliver value to every interested stakeholder in the enterprise ranging from dashboards for management, to alerts for Help Desk staff, to risk prioritized incident reports for the security team, to system uptime and performance results for the operations folk and detailed cost savings reports for the CFO.

The old adage “If you fail to prepare, then prepare to fail” applies. Armed with these questions and answers, you are closer to gaining real value with Big Data.

Sun Tzu would have loved Flame

All warfare is based on deception, says Sun Tzu. To quote:

“Hence, when able to attack, we must seem unable; 
When using our forces, we must seem inactive; 
When we are near, we must make the enemy believe we are far away;  
When far away, we must make him believe we are near.”

With the new era of cyberweapons, Sun Tzu’s blueprint can be followed almost exactly: a nation can attack when it seems unable to. When conducting cyber-attacks, a nation will seem inactive. When a nation is physically far away, the threat will appear very, very near.

Amidst all the controversy and mystery surrounding attacks like Stuxnet and Flame, it is becoming increasingly clear that the wars of tomorrow will most likely be fought by young kids at computer screens rather than by young kids on the battlefield with guns.

In the area of technology, what is invented for use by the military or for space eventually finds its way to the commercial arena. It is therefore only a matter of time before the techniques used by Flame or Stuxnet become part of the arsenal of the average cyber thief.

Ready for the brave new world?

Do Smart Systems mark the end of SIEM?

IBM recently introduced the IBM PureSystems line of intelligent expert integrated systems. Available in a number of versions, they are pre-configured with various levels of embedded automation and intelligence depending upon whether the customer wants these capabilities implemented with a focus on infrastructure, platform or application levels. Depending on what is purchased, IBM PureSystems can include server, network, storage and management capabilities. These are ‘smart’ systems that include the ability to monitor and adapt automatically to optimize performance and resource allocation based on pre-defined criteria.

These systems will significantly impact IT operations and staff in multiple ways, and they raise the question of whether automated, integrated intelligence in monitoring and management threatens the future of SIEM.

The evolution and pace of change in services, and the variability in user demand for those services, strain IT staff resources as they monitor, manage and adjust infrastructure allocation. SIEM complements and improves overall systems management. IT staff must have an integrated view of operations, business needs and service delivery. They need significant help to provision, configure, adapt and allocate available computing infrastructure assets (servers, network, storage and applications) to meet the changing needs of the workload and the business environment. IT cannot succeed if it must rely on manual methods to apply policy-based expertise to change, release, provisioning, configuration and event management.

IT’s success depends upon its ability to be freed from a focus on the idiosyncrasies of the physical infrastructure. IT staff need integrated, automated management solutions that allow them to concentrate on how to create new services and extract value from that infrastructure. Their responsibility is to get the best out of the infrastructure to address the problem at hand or to create a new service. One of the long-term arguments for the benefit of computerized systems is the promise of automating and consolidating operations, management and maintenance functions where it makes sense and is feasible. Typically, this has been done by in-house projects, scripts, and manual instructions and directions that have been refined and passed along from expert to expert.

What’s new now is that vendors have taken on the task of integrating and embedding management and operational intelligence, based on experience, best practices and expertise, into pre-configured systems. The idea leverages a far broader and deeper knowledge base. At the same time, policies, technologies, processes and operating conditions are neither consistent nor identical for all users, so customers have to be able to easily change and modify the embedded expertise over time and to make use of the wisdom of local staff. We therefore believe these systems increase the value to be derived from SIEM solutions and enable IT staff to more effectively leverage the data and insight they obtain from SIEM solutions.

Looking Forward

Let’s consider what this means. We have been approaching the limits of exploiting the speed and computation of hardware. With the emergence and embrace of virtualization, we’ve seen how manipulation by software can improve the utilization and performance of infrastructures. There is a lot more to be gained as we get smarter about such manipulations. The announcement of intelligent, integrated expert systems is a significant step forward toward eliminating the hard line that divides hardware and software as independent entities. We also know that it isn’t the technology and infrastructure that is the most critical for enterprise operations – it’s the workload or service to the user that is most important.

The implementation of intelligent systems, along with the evolution of cloud architecture and efforts directed at defining interoperability standards for applications, moves us along the path to an operating environment where the service or workload interacts with the infrastructure to adapt automatically and meet the delivery goals of the enterprise that is providing and/or consuming the service.

Finally, it is the ingenuity and knowledge acquired from data available to the expert user that translates into the wisdom of successful operations. It all starts with the data, and it is only with and through that data that successful management is possible.

Learning from JPMorgan

The single most revealing moment in the coverage of JPMorgan’s multibillion dollar debacle can be found in this take-your-breath-away passage from The Wall Street Journal: On April 30, associates who were gathered in a conference room handed Mr. Dimon summaries and analyses of the losses. But there were no details about the trades themselves. “I want to see the positions!” he barked, throwing down the papers, according to attendees. “Now! I want to see everything!”

When Mr. Dimon saw the numbers, these people say, he couldn’t breathe.

Only when he saw the actual trades — the raw data — did Mr. Dimon realize the full magnitude of his company’s situation. The horrible irony: The very detail-oriented systems (and people) Dimon had put in place had obscured rather than surfaced his bank’s horrible hedge.

This underscores the new trust versus due diligence dilemma outlined by Michael Schrage. Raw data can have enormous impact on executive perceptions that pre-chewed analytics lack.   This is not to minimize or marginalize the importance of analysis and interpretation; but nothing creates situational awareness faster than seeing with your own eyes what your experts are trying to synthesize and summarize.

There’s a reason why great chefs visit the farms and markets that source their restaurants:   the raw ingredients are critical to success — or failure.

We have spent a lot of energy in building dashboards for critical log data and recognize the value of these summaries; but while we should trust our data, we also need to do the due diligence.

IT Data and Analytics don’t have to be ‘BIG’

Previously, we discussed looking for opportunities to apply analytics to the data in your own backyard. The focus on ‘Big Data’ and sophisticated analytics tends to cause business and IT staff to overlook the in-house data already abundantly present and available for analysis. As the cost of data acquisition and storage has dropped along with the cost of computing, the amount of data available, as well as the opportunity and ability to extensively analyze it, has exploded. The task is to discover and unlock the information that is hidden in all the available data.

Data is collected as part of every process, operation and action in the data center, and throughout the enterprise or organization. Here are five steps you can take to get more information from that storehouse of data.

1.  Understand

As we’ve pointed out before, the object of IT in an organization is to directly support the achievement of organizational goals and objectives. The organization can be a for-profit business, a medical and health service provider, a non-profit charity, a government or military operation, etc. Each is different in some way, but each has its own reason to exist and sets goals and objectives to achieve those ends.

IT exists within that organization solely to contribute to the achievement of these goals. With all of the alternative delivery models that exist for IT services, it is more and more critical that IT understands, plans and executes with that role and responsibility in mind. Beyond the application of the technology, IT has to be focused on how it can creatively exploit organizational assets and resources to more effectively support the organization. Increasingly, that task is facilitated through creative use of data.

2.  Inventory

You need to know what data is available to work with and what the capabilities for analysis are. Both can change over time, adding dependencies as the environment grows and evolves. A data inventory will reveal what kinds of data are being collected and from what sources. Defining known dependencies, relationships and data flows can provide insight into what data can be used, and how, to yield more information. Find out whether operational data (machine status, scheduling, etc.), as well as functional data (accounting, order cycles, pricing, etc.), is being collected. The focus of this inventory is on identifying and categorizing all available data, not just what is currently being collected and analyzed. Unutilized or under-utilized data will be examined more closely in later steps.

Second, prepare an inventory of the available collection, computational and analytic capabilities. What tools for analysis and reporting are available? Are all capabilities of the SIEM suite being fully put to use? Are there functions, reports or analytic capabilities that are not being utilized or exploited to their fullest? Are there capabilities being offered by contracted outsourcing (SaaS or Cloud) services that can be used to provide more data or extend current analysis/reporting?
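One lightweight way to start is a simple structured catalog of sources and capabilities. The sketch below is illustrative only; every source name, category and field shown is hypothetical rather than drawn from any particular product or site.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSource:
    """One entry in a data inventory (all example values are hypothetical)."""
    name: str                 # e.g. "Domain controller security log"
    category: str             # "operational" or "functional"
    collected: bool           # is it currently being gathered at all?
    analyzed: bool            # is anything actually being done with it?
    dependencies: List[str] = field(default_factory=list)

inventory = [
    DataSource("Domain controller security log", "operational", True,  True,  ["Active Directory"]),
    DataSource("Firewall connection log",         "operational", True,  False, ["Perimeter network"]),
    DataSource("Order-cycle timing data",         "functional",  False, False, ["ERP system"]),
]

# Collected-but-never-analyzed (or not collected at all) is exactly the
# untapped material that the later steps examine more closely.
untapped = [s.name for s in inventory if not s.analyzed]
print("Untapped sources:", untapped)

Even a flat list like this makes the gap between what is collected and what is actually used visible at a glance.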

3.  Review

Review what data is being analyzed and how it is being used, so that it yields as much information as possible. Consider how else it could be used. With today’s level of complex, multiple dependencies and interactions, it is well worth the effort to explore and investigate for unsuspected interactions. This exploration is practical because of the low cost of data collection, the minimal effort required to make data accessible, and the power of available computation and analytics.

4.  Reach out

With some understanding and insight into what data is being used and how it is being applied, you can move to the next step to see what isn’t being exploited and determine its potential usefulness. The object is not simply to amass a large volume of data, but to identify how combinations of new data can be used to help IT contribute to the organization’s success.

The key here is to take a fresh look at how data from one part of the organization can be used to benefit or inform another. IT does not do this alone, nor should it; it requires cooperation and communication with non-IT business and functional staff to creatively apply technology. IT understands the power of technology, as well as how to focus that power, and must proactively inform and engage with other, non-IT staff within the organization.

5.  Inform

The ability to leverage data continues to expand at an accelerating pace. The variety of ways to manipulate data to get better information continues to grow, and the costs decrease as vendors [1] seek to ease access to and expand the use of analytics. Research big data efforts underway in areas related to your own organization and industry to get ideas for additional analysis.

[1] For example, Amazon Web Services data analysis – http://aws.amazon.com/

Big Data – Does insight equal decision?

In information technology, big data consists of data sets that grow so large that they become awkward to work with using whatever database management tools are on hand. For that matter, how big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 GB, in others, it may be 100 TB. So, following up on our earlier post about big data and insight, there is one more important consideration:

Does insight equal decision?

The foregone conclusion from big data proponents is that each nugget of “insight” uncovered by data mining will somehow be implicitly actionable and the end user (or management) will gush with excitement and praise.

The first problem is how can you assume that “insight” is actionable? It very well may not be, so what do you do then? The next problem is how can you convince the decision maker that the evidence constitutes an imperative to act? Absent action, the “insight” remains simply a nugget of information.

Note that management typically responds to “insight” with skepticism, seeing the message bearer as yet another purveyor of information (“insight”) insisting that this new method is the silver bullet, and thereby adding to the workload.

Being in management myself, my team often comes to me with their little nuggets … some are gold, but some are chicken.   Rather than purvey insight, think about a recommendation backed up by evidence.

Big Data, does more data mean more insight?

In information technology, big data consists of data sets that grow so large they become unwieldy to work with using available database management tools. How big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 Gigabytes, in others, as great as 100 Terabytes.

Does more data necessarily mean more insight?

The pro-argument is that larger data sets allow more patterns, facts and insights to emerge. Moreover, with enough data, you can discover trends using simple counting that are otherwise undiscoverable in small data even with sophisticated statistical methods.

On the other hand, while this is perfectly valid in theory, for many businesses the key barrier is not the ability to draw insights from large volumes of data; it is asking the right questions for which insight is needed.

The ability to provide answers does depend on the question being asked and the relevance of the big-data set to that question. How can one generalize to an assumption that more data will always mean more insight? It isn’t always the answer that’s important; the questions are what matter.

Silly human – logs are for machines (too)

Here is an anecdote from a recent interaction with an enterprise application in the electric power industry:

1. Dave the developer logs all kinds of events. Since he is the primary consumer of the log, the format is optimized for human-readability. For example:

02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59

Apparently this makes perfect sense to Dave:   each line includes a timestamp and some text.

2. Sam from the Security team needs to determine the number of daily unique users. Dave quickly writes a parser script for the log and schedules it. He also builds a little Web interface so that Sam can query the parsed data on his own. Peace reigns.

3. A few weeks later, Sam complains that the web interface is broken. Dave takes a look at the logs, only to realize that someone else has added an extra field to each line, breaking his custom parser. He patches the parser, pushes the fix and tells Sam that everything is okay again. Instead of writing a new feature, Dave has to go back and fill in the missing data.

4. Every three weeks or so, repeat Step 3 as others add to the logs. The sketch below shows why a parser like Dave’s is so fragile.
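Here is a minimal Python sketch of the problem, assuming a position-based parser like Dave’s; the extra field, the sample lines and the pattern used are hypothetical and purely illustrative.

import re

# A line in the human-oriented format from the anecdote.
OLD_LINE = "02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59"
# The same event after a colleague inserts a (hypothetical) extra field.
NEW_LINE = "02-APR-2012 01:34:03 NODE07 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59"

def parse_user_by_position(line):
    """Dave-style parsing: assume the user ID is always the third token."""
    return line.split()[2]

print(parse_user_by_position(OLD_LINE))  # USER49
print(parse_user_by_position(NEW_LINE))  # NODE07 -- silently wrong

def parse_user_by_pattern(line):
    """Slightly sturdier: look for a USERnn token anywhere in the line."""
    match = re.search(r"\bUSER\d+\b", line)
    return match.group(0) if match else None

print(parse_user_by_pattern(NEW_LINE))   # USER49, even after the format change

A log format designed with machine consumers in mind (key=value pairs or another structured layout) avoids this whole cycle, which is the point of the title.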

Finding an Application of Analytics to ‘Big Data’ in your own backyard

Back in January, I said that the use of sophisticated analytics as a business and competitive tool would become widespread. Since then, the number of articles, blogs and announcements relating to analytics has increased dramatically:  an internet search for the term ‘Business Analytics’ using Bing yields over 47 million hits. Smart Analytics (an IBM term) shrinks that number to approximately 12.3 million hits. If we change the search term to ‘Applied Analytics,’ the number decreases to a little less than 7 million hits.

Analytics has certainly captured the attention of government [1], business [2], the industry press and management. The question, though, is whether it’s being put to use in the trenches. Are CIOs and IT staff searching out, acquiring and applying these tools to address their problems? After all, it isn’t enough to have access to analytic tools and services; you have to understand how to use and apply them to real problems. How many users are actually prepared to move forward into the big world of applied analytics to solve pressing business problems? Where does one go to begin to use these tools? Are analytics only of use for working with ‘Big Data’? What is ‘Big Data’?

There are a lot of questions there; too many to exhaustively address in this blog, and some that can’t be resolved without some detailed research. We’ll provide answers based on our own experiences in interacting with clients, research and informed opinion.

First, let’s agree on a few definitions. It often seems ‘Big Data’ is defined in as many ways as there are vendors offering solutions. For our purpose, we’ll use a fairly loose definition based on the Volume, Velocity, Variety and Veracity of the data. Big Data comes in volumes large enough that it requires special software and hardware to process in a reasonable time (terabytes, petabytes and beyond!). At least some of the data, and perhaps all of it, is ‘in motion’, coming in, moving out and changing very quickly. The source and form of the data are highly variable; it comes in different varieties, data types, structures and formats – audio, visual, media, structured and unstructured, from different sources, etc. The fourth characteristic is the question of data veracity, i.e., uncertainty over the accuracy of the data, including questions of confidence in the source.

Second, analytics can cover a lot of ground, from manual number crunching to giant, specialty processors designed specifically to do real-time analysis in exploration for oil deposits. What we’re interested in, however, is the application of software-based analytics to collect, analyze and report on data collected in our IT and business environment.

Big Data and analytics are frequently paired; however, the relationship is far from exclusive. Analytics can be profitably applied to smaller data sets. The benefit comes from using analytics to gain actionable information and insight across multiple business functions. This can be the application of an investment analysis program to determine the potential profitability of a product development project by tracking development, packaging, marketing and delivery costs against forecasts of revenue expected from sales and support under alternative market growth patterns. But it can also be correlating event log data on application access, network traffic, file access and the routing of confidential files, and then initiating action to prevent those files from being published around the world.
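As a rough illustration of that second example, the toy Python sketch below correlates events from three hypothetical log streams by user and time window; every event name, field and threshold is invented for the illustration and is not a production detection rule.

from datetime import datetime, timedelta

# Hypothetical, simplified event records drawn from three different logs.
events = [
    {"time": datetime(2012, 4, 2, 1, 30), "source": "app",  "user": "jdoe", "action": "open_confidential"},
    {"time": datetime(2012, 4, 2, 1, 33), "source": "file", "user": "jdoe", "action": "copy_to_usb"},
    {"time": datetime(2012, 4, 2, 1, 35), "source": "net",  "user": "jdoe", "action": "large_outbound_transfer"},
]

WINDOW = timedelta(minutes=10)

def correlate(events):
    """Flag a user who opens confidential data and then moves data off-host
    within the window -- a toy cross-log correlation, nothing more."""
    opens = [e for e in events if e["action"] == "open_confidential"]
    exfil = [e for e in events if e["action"] in ("copy_to_usb", "large_outbound_transfer")]
    alerts = []
    for o in opens:
        for x in exfil:
            if o["user"] == x["user"] and timedelta(0) <= x["time"] - o["time"] <= WINDOW:
                alerts.append((o["user"], o["action"], x["action"]))
    return alerts

print(correlate(events))  # two correlated pairs for user 'jdoe'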

It can also take the form of a manager of software development recognizing that his department has the potential to directly impact revenue. He has an idea that a regularly used, revenue-generating asset is not being scheduled in a way that realizes its full revenue potential, and he is convinced that better scheduling and management would change that. He knows the data to prove it is collected in logs and data files, but he needs to pull it all together. With some work, he can bootstrap a basic analysis from available tools to make his case to management for more detailed, integrated tools.

Those are typical examples of analytics in action today, and they are being done within the budgets of mid- to large-scale enterprises and without mathematical wizards on the payroll. Our discussions and experiences have uncovered far more talk about Big Data and analytics in the executive suite and among business and IT staff than before. There is a lot more planning and speculating about use going on among potential users in enterprises and businesses of all sizes. But all too often, this isn’t translating into action.

The path to more effective use and application of analytics begins by using what you have today to its maximum advantage. Most businesses have a log management solution with at least some analytic capabilities. Start using the analytics if you aren’t already. Push your boundaries and use your imagination to identify new ways to use them. Look at adding new data that can be correlated or plotted together to uncover new relationships. Extend the data view to adjacent, interacting and interdependent functions. The software development manager mentioned earlier looked into the relationship of revenue generated with usage and scheduling to identify potentially profitable idle time. Look for a potential application and develop the case by using what you have to get to where you want to be.

Don’t be afraid to see what vendors are doing and offering to promote their own analytic solutions. You can get ideas about where to look and what problems are being solved by understanding what others have done. Look at vendor announcements to see how analytics are being promoted, then look for the opportunity in your own environment.

[1] Big Data Big Deal

[2] Big Data The Next Frontier for Innovation

SIEM in the Cloud

Prism Microsystems’ founders decided early on that their goal and the reason for the company’s existence was to design, develop and deliver SIEM services. As executives with a successful history in entrepreneurship, product development and enterprise management, they knew the risk and seductive promise of distracting diversification in pursuit of expanded revenues. They committed to concentrating specifically on the SIEM functions of monitoring, discovery and warning about threats to security, compliance (in its multiple modes) and operational commitments.

Early on, their experience and careful listening to customers allowed them to align their message and product with market needs. SIEM was and is a specialized, dynamic and evolving task. In 2005, the most frequent question from potential customers was “Why do I need SIEM?” Many companies operated, more or less successfully, with in-place efforts built on manual, home-grown and commercial solutions adapted from other functions. In actuality, such ‘solutions’ were time consuming, diverted scarce talent and yielded results that all too frequently fell far short of justifying the effort applied. EventTracker, among others, entered the market with an integrated solution designed specifically to do SIEM. With EventTracker, IT staff had in hand a collection of streamlined processes that reliably developed accurate, actionable information from log data. The benefits obtained from using these bespoke tools won over skeptics; as a result, the SIEM solutions market grew. Enterprises became enthusiastic users of on-premise SIEM solutions.

By 2009, the increasing interdependencies resulting from infrastructure interactions and complex, dynamic service delivery elevated the need for fast, accurate analysis of vast amounts of data. SIEM solutions continued to evolve to meet the challenge of more adaptive and dynamic operating environments. Emerging trends in the application of IT technology led to increased integration of the infrastructure. IT responded to business and customer demand for more reliable, faster delivery of high-performance services. New services were created by assembling components. The result was increasingly dynamic operation and increasing interaction of distributed components and infrastructure, all of which had to be closely monitored to avoid problems with security, reliability, etc. This meant that SIEM was becoming both a more critical and a more specialized effort. Enterprises began looking for external expertise as the range of knowledge needed for SIEM management expanded with an escalation in the depth and breadth of responsibility.

One example was the demand for organizational accountability resulting from well-publicized failures to protect private records. A wave of regulatory, governmental and enterprise operational mandates was put in place, along with a sea change in accountability. Executive managers were to be held accountable for the effective implementation of controls to assure compliance with an increasing number of continually evolving mandates covering security procedures, access control, performance, etc., as applied to a growing number of business functions. Just keeping current with external mandates was causing major headaches.

This focus on governance and responsibility irrevocably and dramatically changed the relationship between IT and business managers. IT operations and staff had long been intimately involved with and responsible for all aspects of data handling, process implementation, workflows, etc. necessary for compliance. Now, management and IT had to become partners in assuring effective compliance.

The result was increased complexity in maintaining effective monitoring and compliance mechanisms. SIEM had become an operationally critical issue and responsibility. As experience with compliance challenges accumulated and customer sophistication in SIEM matters increased, the demand was for more options. At all levels, including large enterprise, as well as mid-range (100 to 500 systems) and smaller businesses (below 100 systems), the demand was for more flexibility in selecting the range of available services, features and sophistication in analysis. They also were demanding pricing that allowed them to add functionality as their needs changed and the available budget grew.

The explosion of the Whatever-you-want-as-a-Service (XaaS) market also influenced customer demands and expectations. Companies were recognizing and accepting the fact that it was often to their advantage, both operationally and financially, to selectively outsource some IT services. XaaS offerings allowed customers to match the consumption of services to demand, spread payments over a longer period of time, use only the services they needed and avoid responsibility for maintenance, support, updates, etc.

By the end of 2010, it became clear that the interest in and demand for SIEM as a Cloud-based service was no flash-in-the-pan. Enterprise customers saw it as an increasingly attractive way to outsource services that required expertise and effort not part of their essential business competency and focus. SIEM as a managed service provided a way for the enterprise to free-up scarce IT resources to concentrate on improving competitive positioning, developing new services devoted to increasing revenues, lowering costs and improving performance to increase customer satisfaction.

The need for, and development of, hosted enterprise-class SIEM and “Security monitoring as a Service” (SecaaS) became the next logical step in the evolution of SIEM solutions.

There are several models for SecaaS. There is the Shared Cloud for small and medium-sized businesses. Data is collected locally, compressed, encrypted and sent to a central location for processing. Cost is kept low because companies ‘share’ the infrastructure. They avoid the costs of dedicated SIEM infrastructure and support staff, but are guaranteed notification of disruptive events and activities. The companies and their respective data are isolated and protected from each other.
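As an illustration only of the collect-compress-encrypt-ship idea, and not of the actual EventTracker transport, key handling or wire format, a collector might prepare a batch roughly like the Python sketch below (the key management shown is deliberately simplified).

import zlib
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# Hypothetical per-customer key; in practice it would be provisioned and
# protected by the provider, never generated and held like this.
key = Fernet.generate_key()
cipher = Fernet(key)

def prepare_batch(log_lines):
    """Compress, then encrypt a batch of log lines for transport to the
    shared collection point. Illustrative sketch only."""
    raw = "\n".join(log_lines).encode("utf-8")
    return cipher.encrypt(zlib.compress(raw, 9))

def unpack_batch(payload):
    """Provider side: decrypt with this customer's key, then decompress."""
    return zlib.decompress(cipher.decrypt(payload)).decode("utf-8").splitlines()

batch = prepare_batch(["event one", "event two"])
assert unpack_batch(batch) == ["event one", "event two"]

Because each customer’s payloads are encrypted under that customer’s own key, tenants sharing the collection infrastructure remain isolated from one another, which is the property the shared-cloud model depends on.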

Then there is the virtual private cloud deployment, for larger enterprises. Each company has its own private virtualized SIEM and data storage environment within the virtual private cloud, which isolates its data from other customers. The architecture can handle hundreds of millions of events per day for each customer. Again, the customer saves by not having to purchase and maintain SIEM-specific infrastructure and support staff.

Finally, there is the Managed SIEM Service for those with a SIEM implemented on-site on their own infrastructure. The enterprise either lacks the manpower to monitor the infrastructure or wishes to free its staff from doing so. The service provides 24/7 monitoring and guarantees notification of any incidents or threats affecting managed services, key alerts and operating conditions.

At this point, we have to mention that today’s conventional wisdom consistently trumpets the superiority and lower cost of XaaS and Cloud solutions. Recently, however, this assumption has been challenged [1]. It is my belief that a cost-benefit comparison is a necessary best practice as part of any project analysis to determine which is the right way to go. But, that’s a topic for another column.

[1] http://tinyurl.com/743kdev

What is your maximum NPH?

In The Information Diet, Clay Johnson wrote, “The modern human animal spends upwards of 11 hours out of every 24 in a state of constant consumption. Not eating, but gorging on information … We’re all battling a storm of distractions, buffeted with notifications and tempted by tasty tidbits of information. And just as too much junk food can lead to obesity, too much junk information can lead to cluelessness.”

Audit yourself and you may be surprised to find that you get more than 10 notifications per hour; they can be disruptive to your attention. I find myself trying hard (and often failing) to ignore the smartphone as it beeps softly to indicate a new distraction. I struggle to remain focused on the person in my office as the desktop tinkles for attention.

Should you kill off notifications though? Clay argues that you should and offers tools to help.

When designing EventTracker v7, minimizing notifications was a major goal. On Christmas Day in 2008, nobody was stirring, but the “alerts” console rang up over 180 items demanding review. It was obvious these were not “alerts.” This led to the “risk” score, which dramatically reduces notifications.

We know that not all “alerts” are equal: some merit attention before going to lunch, some before the end of the day, and some by the end of the quarter, budget permitting. There are a very rare few that require us to drop the coffee mug and attend instantly. Accordingly, a properly configured EventTracker installation will rarely “notify” you; but when you need to know, that alert will come screaming for your attention.
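Here is a toy Python sketch of the risk-score idea. The weights, asset classes and threshold are invented for illustration; this is not EventTracker’s actual scoring algorithm.

# Toy risk scoring: notify immediately only when
# (asset criticality x event severity) crosses a threshold.
ASSET_CRITICALITY = {"domain_controller": 9, "web_server": 6, "test_vm": 2}
NOTIFY_THRESHOLD = 40   # below this, queue the alert for periodic review

def risk_score(alert):
    return ASSET_CRITICALITY.get(alert["asset"], 1) * alert["severity"]

alerts = [
    {"asset": "test_vm",           "severity": 5, "msg": "service restarted"},
    {"asset": "domain_controller", "severity": 8, "msg": "repeated failed admin logons"},
]

for a in alerts:
    if risk_score(a) >= NOTIFY_THRESHOLD:
        print("NOTIFY NOW:", a["msg"])        # the rare drop-the-coffee-mug case
    else:
        print("queue for review:", a["msg"])  # keeps notifications per hour down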

I am frequently asked about the maximum events per second that can be managed. I think I’ll begin asking how many notifications per hour (NPH) the questioner can handle. I think Clay Johnson would approve.

Data, data everywhere but not a drop of value

The sailor in The Rime of the Ancient Mariner relates his experiences after a long sea voyage when his ship is blown off course:

“Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.”

An albatross appears and leads them out, but is shot by the Mariner and the ship winds up in unknown waters.  His shipmates blame the Mariner and force him to wear the dead albatross around his neck.

Replace water with data, boards with disk space, and drink with value and the lament would apply to the modern IT infrastructure. We are all drowning in data, but not so much in value. “Big data” are datasets that grow so large that managing them with on-hand tools is awkward. They are seen as the next frontier in innovation, competition, and productivity.

Log management is not immune to this trend. As the basic log collection problem (different sources, different protocols and different formats) has been resolved, we’re now collecting even larger datasets of logs. Many years ago we refuted the argument that log data belonged in an RDBMS, precisely because we saw the side problem of efficient data archival begin to overwhelm the true problem of extracting value from the data. As log data volumes continue to explode, that decision continues to be validated.

However, while storing raw logs in a database was not sensible, the power of relational databases for extracting patterns and value from data is well established. Recognizing this, EventVault Explorer was released in 2011. Users can extract selected datasets to their choice of external RDBMS (a datamart) for fuzzy searching, pivot tables, etc. As was noted here, the key to managing big data is to personalize the results for maximum impact.
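As a rough sketch of the datamart idea, and not of the actual EventVault Explorer mechanism, a selected, already-parsed slice of log data can be loaded into a small relational store for ad hoc queries; the table layout and sample rows below are hypothetical.

import sqlite3

# Hypothetical slice of parsed log data bound for a datamart.
rows = [
    ("2012-04-02", "USER49", "logon_failure"),
    ("2012-04-02", "USER49", "logon_failure"),
    ("2012-04-03", "USER12", "logon_success"),
]

conn = sqlite3.connect(":memory:")          # stand-in for the external RDBMS
conn.execute("CREATE TABLE events (day TEXT, user TEXT, action TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# A pivot-style summary: logon failures per user per day.
for day, user, failures in conn.execute(
    "SELECT day, user, COUNT(*) FROM events "
    "WHERE action = 'logon_failure' GROUP BY day, user"
):
    print(day, user, failures)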

As you look under the covers of SIEM technology, pay attention to that albatross called log archives. It can lead you out of trouble, but you don’t want it around your neck.

Top 5 Compliance Mistakes

5.   Overdoing compensating controls

When a legitimate technological or documented business constraint prevents you from satisfying a requirement, a compensating control can be the answer after a risk analysis is performed. Compensating controls are not specifically defined inside PCI, but are instead defined by you (as a self-certifying merchant) or your QSA. They are specifically not an excuse to push PCI compliance initiatives through to completion at minimal cost to your company. In reality, most compensating controls are harder to implement and cost more money in the long run than simply fixing or addressing the original issue or vulnerability. See this article for a clear picture of the topic.

4. Separation of duty

Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost/effort required.   Both PCI DSS Requirements 3.4.1 and 3.5 mention separation of duties as an obligation for organizations, and yet many still do not do it right, usually because they lack staff.

3. Principle of Least privilege

PCI DSS Requirement 2.2.3 says organizations should “configure system security parameters to prevent misuse.” This requires organizations to drill down into user roles to ensure they’re following the rule of least privilege wherever PCI regulations apply. This is easier said than done; more often it’s “easier” to grant all possible privileges than to determine and assign just the correct set. Convenience is the enemy of security.

2. Fixating on excluding systems from scope

When you make the process of getting things out of scope a higher priority than addressing real risk, you get in trouble. Risk mitigation must come first and foremost. In far too many cases, out-of-scope becomes out-of-mind. This may make your CFO happy, but a hacker will get past weak security and not care if the system is in scope or not.

And drum roll …

1. Ignoring virtualization

Many organizations have embraced virtualization wholeheartedly, given its efficiency gains. In some cases, virtualized machines are now off-premises and co-located at a service provider like Rackspace; this is a trend at federal government facilities. However, “off-premises” does not mean “off-your-list”. Regardless of the location of the cardholder data, such systems are within scope, as is the hypervisor. In fact, PCI DSS 2.0 says that if cardholder data is present on even one VM, then the entire VM infrastructure is “in scope.”

IT Operations and SIEM Management Drive Business Success

While there are still some who question the ‘relevance’ of IT to the enterprise, and others who question the ‘future’ of IT, those involved in day-to-day business activities recognize and acknowledge that IT operations is integral to business success, and this is unlikely to change in the immediate future. Today’s IT staffer with security incident and event management (SIEM) responsibility must be able not only to detect, identify and respond to anomalies in infrastructure performance and operations, but also to build processes, make decisions and take action based on the business impact of the incidents and events recorded in ubiquitous logs.

Since the earliest incarnations of IT infrastructure management, a lot of ingenuity and effort has been applied to detecting, identifying and notifying a responsible party to take action when something occurs that signals a potential problem. Competition, combined with creativity, led to a proliferation of tools able to monitor and alert to problematic events.

Consider how far we’ve come from the old days of manual tracking and analysis. Isn’t the whole process, from detection through analysis to notification and even resolution, now fully automated for nearly all installations? Aren’t we long past the days when we had to worry about (and avoid) automated management solutions because they likely introduced more problems than they solved? Now, even compliance-related monitoring has been automated. And with the advent of Cloud computing, along with SaaS, PaaS and IaaS, hasn’t the user been isolated from the infrastructure underlying service delivery? Doesn’t IT have bigger, more pressing problems to concentrate on than SIEM?

Simply, “no.” While it is true that SIEM has evolved considerably over time, the fact remains that even with more sophisticated, intelligent and automated solutions, IT staff still need to mine data logs for more information and insight into infrastructure operations, and to understand the impact of that interaction on the delivery of business services and the experiences of the user. IT must be able to identify and inform business staff about risks to SLA and performance commitments. IT must also be able to contribute to defining and taking the actions needed to eliminate or reduce those risks.

The increasing complexity of IT operations, resulting from the expanding diversity and distribution of infrastructure, combines with the evolving, dynamic integration and interaction involved in delivering business services to make service disruptions more likely, so avoiding such disruptions is of increasing importance. The link between the service user and the performance of the infrastructure has become more critical. The need for SIEM, and for IT staff to obtain actionable information from these management solutions, is driving convergence of the management disciplines associated with Application Performance Management (APM), Business Process Management (BPM) and Business Service Management (BSM). Once considered and treated as separate areas of expertise, their overlapping interests and interdependencies have become apparent. They cannot succeed if treated as organizational silos. Integrated SIEM, with its detailed end-to-end data collection and analysis, helps to end siloed operations.

For example, BPM assures that processes execute precisely and consistently to complete a specific task with all intervening steps. BSM tracks the proper functioning of the involved infrastructure at all stages of service delivery and the business process to assure a satisfactory end-user experience. APM optimizes infrastructure utilization and performance. IT needs to understand the involvement and impact of infrastructure on service delivery, hence it needs data from all three functions. This involves monitoring, analyzing and reporting on a staggering number of incidents and events to identify what is significant enough to initiate appropriate action. This is the environment in which today’s best SIEM solutions demonstrate their value.

Free, entry-level SIEM solutions (such as EventTracker Pulse) provide basic functionality to begin data gathering, analysis and reporting from multiple different sources. Such solutions eliminate error-prone and tedious manual efforts. They can also provide basic application and service management. More feature-rich products such as EventTracker Enterprise offer sophisticated functionality like complex analysis and custom reporting of potentially problematic behavior.

High-functionality SIEM solutions provide a significant opportunity for IT to exercise and demonstrate its ability to contribute to business success. The ability to document that contribution becomes even more necessary as virtualized infrastructures and the Cloud proliferate as de facto operating models.

The idea that IT operates to support the overall success of the business, not simply to manage infrastructure, is no longer a matter of contention. Today, IT is also under increasing pressure to document and demonstrate its contributions to business success. The difficulty, in an environment filled with competing solutions, comes in deciding just how to do this most effectively without breaking the budget. The answer is found in leveraging cost-effective SIEM solutions.

The 5 Most Annoying Terms of 2011

Since every cause needs “Awareness,” here are my picks for management speak to camouflage the bloody obvious:

  5. Events per second

Log Management vendors are still trying to “differentiate” with this tired and meaningless metric as we pointed out in The EPS Myth.

  4. Thought leadership

Mitch McCrimmon describes it best.

  3. Cloud

Now here is a term that means all things to all people.

  2. Does that make sense?

The new “to be honest.” Jerry Weismann discusses it in the Harvard Business Review.

  1. Nerd

During the recent SOPA debate, so many self-described “country boys” wanted to get the “nerds” to explain the issue to them; as Jon Stewart pointed out, the word they were looking for was “expert.”

SIEM and the Appalachian Trail

The Appalachian Trail is a marked hiking trail in the eastern United States extending between Georgia and Maine. It is approximately 2,181 miles long and takes about six months to complete. It is not a particularly difficult journey from start to finish; yet even so, completing the trail requires more from the hiker than just enthusiasm, endurance and will.

Likewise, SIEM implementation can take from one to six months to complete (depending on the level of customization) and, like the Trail, appears deceptively simple. It, too, can be filled with challenges that reduce even the most experienced IT manager to despair, and there is no shortage of implementations that have been abandoned or left uncompleted. As with the Trail, SIEM implementation requires thoughtful consideration.

1) The Reasons Why

It doesn’t take too many nights scurrying to find shelter in a lightning storm, or days walking in adverse conditions before a hiker wonders: Why am I doing this again? Similarly, when implementing any IT project, SIEM included, it doesn’t take too many inter-departmental meetings, technical gotchas, or budget discussions before this same question presents itself: Why are we doing this again?

All too often, we don’t have a compelling answer, or we have forgotten it. If you are considering a half-year-long backpacking trip through the woods, presumably there is a really good reason for it. In the same way, one embarks on a SIEM project with specific goals, such as regulatory compliance, IT security improvement or controlling operating costs. Define the answer to this question before you begin the project and refer to it when the implementation appears to be derailing. This is the compass that should guide your way. Make adjustments as necessary.

2) The Virginia Blues

Daily trials can include anything from broken bones to homesickness. The “Virginia Blues” typically set in about four to eight weeks into the journey, within the state lines of Virginia. Getting through requires not just perseverance but also an ability to adapt.

For a SIEM project, staff turnover, false positives, misconfigurations or unplanned explosions of data can potentially derail the project. But pushing harder in the face of distress is a recipe for failure. Step back, remind yourself of the reasons why this project is underway, and look at the problems from a fresh perspective. Can you be flexible? Can you find new avenues around the problems?

3) A Fresh Perspective

In the beginning, every day is chock-full of excitement; every summit view or wild animal encounter is thrilling. But life in the woods eventually becomes routine, and exhilaration fades into frustration.

In  much the same way, after the initial thrill of installation and its challenges, the SIEM project devolves into a routine of discipline and daily observation across the infrastructure for signs of something amiss.

This is where boredom can set in, and the best defense against the lull that comes with the end of the implementation is to expect it. The journey is going to end, but completing it does not occur when the project is implemented. Rather, when the installation is done, the real journey and the hard work begin.

Humans in the loop – failsafe or liability?

Among InfoSec and IT staff, there is a lot of behind-the-scenes hand-wringing that users are the weakest link. But are InfoSec staff that much stronger?

While automation does have a place, Dan Geer, of CIA-backed venture fund In-Q-Tel, properly notes that “…humans can build structures more complex” than they can operate, and asks: “…Are humans in the loop a failsafe or a liability? Is fully automated security to be desired or to be feared?”

We’ve considered this question before at Prism, when “automated remediation” was being heavily touted as a solution for mid-market enterprises, where IT staff is not abundant. We’ve found that human intervention is not just a fail-safe, but a necessity. The interdependencies, even in medium-sized networks, are far too complex to automate fully. We introduced the feature a couple of years back and, in reviewing the usage, concluded that such “automated remediation” does have a role to play in the modern enterprise. Use cases include changes to group membership in Active Directory, unrecognized processes, account creation where the naming convention is not followed, or honeypot access. In other words, when the condition can be well defined and narrowly focused, humans in the loop only slow things down. However, for every such “rule” there are hundreds of conditions that will be obvious to a human but missed by a narrow rule.

So are humans in the loop a failsafe or a liability? It depends on the scenario.

What’s your thought?

Will the cloud take my job?

Nearly every analyst has made aggressive predictions that outsourcing to the cloud will continue to grow rapidly. It’s clear that servers and applications are migrating to the cloud as fast as possible, but according to an article in The Economist, the tradeoff is efficiency vs. sovereignty.   The White House announced that the federal government will shut down 178 duplicative data centers in 2012, adding to the 195 that will be closed by the end of this year.

Businesses need motivation and capability to recognize business problems, solutions that can improve the enterprise, and ways to implement those solutions.   There is clearly a role for outsourced solutions and it is one that enterprises are embracing.

For an engineer, however, the response to outsourcing can be one of frustration, and concerns about short-sighted decisions by management that focus on short term gains at the risk of long term security. But there is also an argument why in-sourcing isn’t necessarily the better business decision:   a recent Gartner report noted that IT departments often center too much of their attention on technology and not enough on business needs, resulting in a “veritable Tower of Babel, where the language between the IT organization and the business has been confounded, and they no longer understand each other.”

Despite increased migration to cloud services, it does not appear that there is an immediate impact on InfoSec-related jobs. Among the 12 computer-related job classifications tracked by the Department of Labor’s Bureau of Labor Statistics (BLS), information security analysts, along with computer and information research scientists, were among those reporting no unemployment during the first two quarters of 2011.

John Reed, executive director at IT staffing firm Robert Half Technology, attributes the high growth to the increasing organizational awareness of the need for security and hands-on IT security teams to ensure appropriate security controls are in place to safeguard digital files and vital electronic infrastructure, as well as respond to computer security breaches and viruses.

Simply put: the facility of using cloud services does not replace the skills needed to analyze and interpret the data to protect the enterprise. Outsourcing to a cloud may provide immediate efficiencies, but it is the IT security staff delivering business value who ensure long-term security.

Threatscape 2012 – Prevent, Detect, Correct

The past year has been a hair-raising series of IT security breakdowns and headline events, reaching as high as RSA itself falling victim to a phishing attack. And as 2011 drew to a close, the hacker group Anonymous remained busy, providing a sobering reminder that IT security can never rest.

It turned out that attackers sent two different targeted phishing e-mails to four workers at RSA’s parent company, EMC. The e-mails contained a malicious attachment, identified in the subject line as “2011 Recruitment plan.xls”, which was the point of attack.

Back to Basics:

Prevent:

Using administrative controls such as security awareness training, and technical controls such as firewalls, anti-virus and IPS, to stop attacks from penetrating the network. Most industry and government experts agree that security configuration management, along with automated patch management and up-to-date anti-virus software, is probably the most effective way to ensure the best security configuration allowable.

Detect:

Employing a blend of technical controls such as anti-virus, IPS, intrusion detection systems (IDS), system monitoring, file integrity monitoring, change control, log management and incident alerting   can help to track how and when system intrusions are being attempted.

Correct:

Applying operating system upgrades, backup data restores, vulnerability mitigation and other controls to make sure systems are configured correctly and to prevent the irretrievable loss of data.

IT Trends and Comments for 2012

The beginning of a new year marks a time of reflection on the past and anticipation of the future. The result for analysts, pundits and authors is a near irresistible urge to identify important trends in their areas of expertise (real or imagined). I am no exception, so here are my thoughts on what we’ll see in the next year in the areas of application and evolution of Information Technology.

The past few years have been marked by a significant maturing in the understanding of the capabilities, demands and expectations of educated consumers applying IT in their business and personal lives. The evolution in capability, and the ease and ubiquity of availability and access, accelerated dramatically. This resulted from the combination of past trends, industry economics and general IT maturation driving IT’s application into new areas while speeding and facilitating benefit realization.

These effects will continue into 2012 as a result of the following trends:

  1.  Customers buy solutions, not technologies. IT solution providers, regardless of size or product form (hardware, software, services) have become more sensitive and responsive to the needs of their target markets. Business buyers want immediate solutions to their problem with minimal complexity in its application. They do not want ‘tool kits’ or 75 per cent-complete products.  The best and most successful recognize and respond to this demand for comprehensive solutions to their customers’ expectations and demands. The emergence of affordable, fully integrated, modular and comprehensive solutions that address identifiable business and operational problems out-of-the-box will continue and become more competitive as more intelligence and power are embedded in IT solutions. The Prism Microsystems EventTracker product family provides a good example of how vendors are creating solutions in this model. It is true that some solutions will stand and operate on their own. However, an increasingly complex and evolving environment requires that solutions be able to co-exist and interoperate with data, products and services from many sources.
  2. Private, public and hybrid Clouds continue to grow in number and application, spreading across all market segments. Service providers and vendors are in a race to make Clouds more accessible, secure and functional. Consumers of Cloud services will continue to be even more selective and careful as they choose their providers/suppliers/partners. High on their list will be concerns for stability, security and interoperability. The issue of stability tips the preference toward private and hybrid solutions. (We have already seen very public and dramatic failures from big vendor Cloud suppliers; there will be more.) However, a combination of improved architectures and customer interest in achieving very real Cloud/IaaS/PaaS/SaaS financial, operational and competitive benefits will maintain adoption rates. These also drive the following trend.
  3.  Standards and reference architectures will become more important as Clouds (public, private, and hybrid) proliferate. As business and IT consumers pursue the potential benefits of Cloud/IaaS/ PaaS/ SaaS, etc. it is becoming increasingly obvious that the link between applications/services and the underlying infrastructure must be broken. The big advantage, as well as the fundamental challenge is how to assure easy portability and access to any and all Cloud services.  But, this must be done in a way that allows Cloud solution systems to interoperate and co-exist with traditional structures.  You must provide a structure that allows for the creation, publication, access, use and release of assets in all environments. Vendors must cooperate to create multi-vendor standards and architectures to meet these expectations. This is a natural evolution of the pursuit of standards and techniques that disconnect the implementation of a service from its operational underpinnings. The effort goes back to the earliest days of creating machine independent languages (Cobol, Fortran, etc.) and all Open Systems and architectures (e.g. Unix). This new degree of structural dependence is just implemented at a higher level of abstraction.  The Cloud Standards Customer Council acts as an advocacy group for end-users interested in accelerating successful Clouds. They are addressing the standards, security and interoperability issues surrounding the transition to a Cloud operating environment. One example of a service implementation architecture we see as being particularly worthy of note is the OASIS-sponsored Topology and Orchestration Specification for Cloud Applications(TOSCA).
  4. Use of sophisticated analytics as a business and competitive tool spreads far and wide. The application of analytics to data to solve tough business and operational problems will accelerate as vendors compete to make sophisticated analytics engines easier to access and use, more flexible in application, and the results easier to understand and implement. IT has provided the rest of the enterprise with mountains of data; the challenge has been getting useful information and insight out of it. Operations Research, simulation and analytics have been around and in use for decades (even centuries), yet their use has been limited to very large companies. Today’s more powerful computers, the ability to collect and process big streams of live data, and concentrated efforts by vendors to wrap accessible user interfaces around the analytics will put these tools in the hands of a much wider audience. The power of IT servers shields the user from the underlying complexities, and will do so even more over time.
  5. Increasingly integrated, intelligent, real-time end-to-end management solutions enable high-end, high-value services. Think of Cisco Prime™ Collaboration Manager, which provides proactive monitoring and corrective action based on the potential impact on the end-user. Predictive analysis is applied down to the event level (data logs provide significant insight), with analytics used to identify problem correlation and/or causation. The primary goal is prediction, to avoid problems before they occur. Identifying correlated events can be as effective as, or even more effective than, recognizing cause in providing an early warning (see the sketch following this list). The fact is that while knowledge of causation is necessary for repair, both correlation and causation work for predictive problem avoidance.
  6. APM (Application Performance Management) converges on BPM (Business Process Management). The definition of APM is expanding to include a focus on end-user-to-infrastructure performance optimization as a prime motivator for corrective action. Business managers care about infrastructure performance only to the extent that it negatively impacts the service experience; they want high-quality services, guaranteed. BPM focuses on getting processes right so things are done correctly and efficiently. IT cares about infrastructure, so traditionally this is where APM has focused. The emphasis will continue shifting toward the consumer, blurring the lines between APM and BPM. BMC provides one example of the impact, adding analytic and functional capabilities to Application Performance Management to speed root-cause as well as impact analysis. Enhanced real-time predictive analytics are used specifically to improve the user’s interactive experience by alerting IT staff more quickly to infrastructure behaviors that can disrupt service delivery.
  7. The impact of the consumerization of IT will continue to become more significant. Consumers of services are increasingly intolerant of making any concessions to the idiosyncrasies of their access devices (iPad, iPod, smartphone, etc.). They expect a consistent experience regardless of what is used to access data. Such expectations increase the pressure for service, software and platform standards, as well as drive the evolution of device capabilities and design. Previous efforts generally focused on the ‘mechanics’ and ‘ergonomics’ of the interface; today the focus is increasingly on consistency of access, ‘look-and-feel’ and performance. One example is the growing interest in, and ability to deliver, what AppSense is calling ‘user-centric IT’, where the user has consistent access to all of their desktop resources wherever they are and on whatever device or platform they use. Technology will increasingly and automatically detect, adapt to and serve the user. This goes beyond the existing concept of ‘application-aware’ devices to one that associates and binds the user with a consistent, cross-platform experience.
  8.  Virtualization acts as a ‘gateway’ step to the Cloud and a fully service-based infrastructure. Virtualization will continue to be subsumed by the Cloud. It is now recognized as an enabling technology and a necessary building block for Cloud implementations. It is the first step toward achieving a truly adaptive infrastructure that operates with the flexibility, reliability and robustness to respond to the evolving and changing needs of the business and of the consumer of IT services. Storage, servers and networks have been virtualized; the focus is shifting to providing applications and services as fully virtualized resources. The increasingly complex nature of ever more sophisticated services acts to accelerate and reinforce this trend.
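
To make the point in trend 5 concrete, here is a rough, hypothetical sketch of correlation-based early warning in Python. The log records, window size and threshold are invented for illustration and do not reflect any particular product’s implementation; the idea is simply that event types which repeatedly show up in the same time window become candidates for predictive alerting, even before anyone has established which one causes the other.

from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical (timestamp_seconds, event_type) log records; not a real product feed.
events = [
    (0, "disk_latency_high"), (5, "db_slow_query"), (62, "disk_latency_high"),
    (66, "db_slow_query"), (120, "login_ok"), (185, "disk_latency_high"),
    (190, "db_slow_query"), (240, "service_outage"),
]

WINDOW = 30            # seconds per correlation window (assumed value)
MIN_CO_OCCURRENCE = 3  # pairs seen together this often are flagged (assumed value)

# Bucket events into fixed time windows, then count which event types co-occur.
windows = defaultdict(set)
for ts, etype in events:
    windows[ts // WINDOW].add(etype)

pair_counts = Counter()
for etypes in windows.values():
    for pair in combinations(sorted(etypes), 2):
        pair_counts[pair] += 1

# Pairs that keep showing up together are early-warning candidates,
# regardless of which one (if either) is the root cause.
for pair, count in pair_counts.items():
    if count >= MIN_CO_OCCURRENCE:
        print(f"correlated: {pair} seen together in {count} windows")

Repeated co-occurrence is cheap to detect and is often enough to act on for problem avoidance; establishing causation, which is what the eventual repair requires, typically takes deeper analysis.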

There you have it: eight trends and influences IT will have to deal with in 2012. I expect to be commenting more on these efforts this year. Your comments, questions and discussion around any of these are welcome. I can be reached at rlptak@ptaknoel.com.

Echo Chamber

In the InfoSec industry, there is an abundance of familiar flaws and copycat theories. We repeat ourselves and recommend the same approaches year after year. But what has really changed in the last year?

The emergence of hacking groups like Anonymous, LulzSec, and TeaMp0isoN.

In 2011, these groups brought the fight to corporate America, crippling firms both small (HBGary Federal) and large (Stratfor, Sony). As the year drew to a close, these groups shifted from prank-oriented hacks for laughs (or “lulz”) to aligning themselves with political movements like Occupy Wall Street, and to hacking firms like Stratfor, an Austin, Texas-based security “think tank” that releases a daily newsletter on security and intelligence matters all over the world. After HBGary Federal CEO Aaron Barr publicly bragged that he was going to identify some members of Anonymous during a talk in San Francisco during RSA Conference week, Anonymous members responded by dumping a huge cache of his personal emails, and those of other HBGary Federal executives, online, eventually leading to Barr’s resignation. Anonymous and LulzSec then spent several months targeting various retailers, public figures and members of the security community. Their Operation AntiSec aimed to expose alleged hypocrisies and sins by members of the security community. They targeted a number of federal contractors, including IRC Federal and Booz Allen Hamilton, exposing personal data in the process. Congress got involved in July, when Sen. John McCain urged Senate leaders to form a select committee to address the threat posed by Anonymous, LulzSec and WikiLeaks.

The attack on RSA SecurID was another watershed event. The first public news of the compromise came from RSA itself, when it published a blog post explaining that an attacker had been able to gain access to the company’s network through a “sophisticated” attack. Officials said the attacker had compromised some resources related to the RSA SecurID product, which set off major alarm bells throughout the industry. SecurID is used for two-factor authentication by a huge number of large enterprises, including banks, financial services companies, government agencies and defense contractors. Within months of the RSA attack, there were attacks on SecurID customers, including Lockheed Martin. The current working theory espoused by experts is that the still-unidentified attackers were interested in Lockheed Martin and other RSA customers all along and, having run into trouble compromising them directly, went after the SecurID technology to loop back to the customers.

The specifics of the attack were depressingly mundane (targeted phishing email with a malicious Excel file attached).
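
RSA has never published the details of the SecurID algorithm, but the general principle behind token-based two-factor codes can be illustrated with the openly specified TOTP scheme (RFC 6238): the one-time code is derived entirely from a shared secret seed and the current time. The minimal sketch below, using a made-up seed, shows why stolen seed material is so damaging; anyone holding the seed can compute the same codes as the legitimate token.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Made-up seed for illustration only. Possession of the seed is possession
# of the second factor: whoever holds it can generate valid codes at will.
print(totp("JBSWY3DPEHPK3PXP"))

Again, this is the standardized analogue rather than RSA’s proprietary algorithm; the point is only that once the seed database is stolen, the “something you have” factor collapses.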

Then too, several certificate authorities were compromised throughout the year. Comodo was the first to fall, when it was revealed in March that an attacker (apparently an Iranian national) had been able to compromise the CA infrastructure and issue himself a pile of valid certificates for domains belonging to Google, Yahoo, Skype and others. The attacker bragged about his accomplishments in Pastebin posts and later posted evidence of his forged certificate for Mozilla. Later in the year, the same person targeted the Dutch CA DigiNotar. The details of the attack were slightly different, but the end result was the same: he was able to issue himself several hundred valid certificates, this time going after domains owned by, among others, the Central Intelligence Agency. In the end, all of the major browser manufacturers had to revoke trust in the DigiNotar root CA. The damage to the company was so bad that the Dutch government eventually took it over, and the company was later declared bankrupt. Staggering, isn’t it? A lone attacker not only forced Microsoft, Apple and Mozilla to yank a root CA from their lists of trusted roots, but was also responsible for forcing a certificate authority out of business.
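
“Revoking trust” in a root CA ultimately comes down to refusing to accept any certificate that chains to it. Here is a minimal sketch of that idea, assuming you already have a server’s chain as PEM text and a hypothetical blocklist of SHA-256 certificate fingerprints (the value below is a placeholder, not DigiNotar’s actual fingerprint). Real browsers do considerably more, removing the root from their trust stores and shipping explicit distrust entries.

import base64, hashlib, re

# Hypothetical blocklist of distrusted CA certificate fingerprints (SHA-256 of DER).
DISTRUSTED_ROOTS = {
    "0" * 64,  # placeholder; a real list would hold actual fingerprint values
}

PEM_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----(.+?)-----END CERTIFICATE-----", re.S
)

def chain_is_trusted(chain_pem: str) -> bool:
    """Reject the chain if any certificate in it matches a distrusted fingerprint."""
    for block in PEM_RE.findall(chain_pem):
        der = base64.b64decode("".join(block.split()))
        fingerprint = hashlib.sha256(der).hexdigest()
        if fingerprint in DISTRUSTED_ROOTS:
            return False
    return True

# Usage: pass the PEM-encoded chain captured from a TLS handshake, e.g.
# chain_is_trusted(open("server_chain.pem").read())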

What has changed in our industry? Nothing, really. It’s not a question of “if” but “when” the attack will arrive at your assets.

Plus ça change, plus c’est la même chose (the more things change, the more they stay the same), I suppose.