A smartphone named Desire

Is this true for you: has your smartphone merged your private and work lives? Smartphones now contain—by accident or by design—a wealth of information about the businesses we work for.

If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?

Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on it, along with the capability to remotely monitor what happened to it once it was found. They were left in high-traffic public places such as elevators, malls, food courts, and public transit stops.

Key findings:

  • 96% of lost smartphones were accessed by their finders
  • 89% of devices were accessed for personal apps and information
  • 83% of devices were accessed for corporate apps and information
  • 70% of devices were accessed for both business and personal apps and information
  • 50% of smartphone finders contacted the owner and provided contact information

The corporate related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?

  • Take inventory of the mobile devices connecting to your company’s networks; you can’t protect and manage what you don’t know about.
  • Track resource access by mobile devices. For example, if you are using MS Exchange, ActiveSync logs can tell you a great deal about such access (see the sketch after this list).
  • See our white paper on the subject
  • Track all remote login to critical servers
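
To make the ActiveSync idea concrete, here is a minimal Python sketch: on an Exchange Client Access Server, ActiveSync requests show up in the IIS logs with the device details in the query string. The file name and field positions below are assumptions; check the #Fields: header of your own logs before relying on it.

    from collections import defaultdict

    devices_by_user = defaultdict(set)

    with open("u_ex120115.log") as f:              # hypothetical IIS log file
        for line in f:
            if "Microsoft-Server-ActiveSync" not in line:
                continue
            fields = line.split()
            if len(fields) < 8:
                continue
            query, user = fields[5], fields[7]     # assumed cs-uri-query / cs-username slots
            params = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
            if "DeviceId" in params:
                devices_by_user[user].add((params["DeviceId"], params.get("DeviceType", "?")))

    for user, devices in sorted(devices_by_user.items()):
        print(user, "->", ", ".join(f"{d} ({t})" for d, t in sorted(devices)))

Even this crude an inventory answers the first question above: which devices, belonging to whom, are touching your network.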

See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’

Should You Disable Java?

The headlines are ablaze with the news of a new zero-day vulnerability in Java which could expose you to a remote attacker.

The Department of Homeland Security recommends disabling Java completely, and many experts are apparently concurring. Crisis communications 101 says maintain high-volume, multi-channel communications, but there is a strange silence from Oracle, aside from the announcement of a patch for said vulnerability.

Allowing your opponents to define you is a bad mistake, as any political consultant will tell you. Today it’s Java; tomorrow, some other widely used component. The shrillness of the calls also makes me wonder: why the hullabaloo? Upset by Oracle’s stewardship of Java, perhaps?

So what should you make of the “disable Java” calls echoing across Cyberia? Personally, I think it’s bad advice, assuming you can even take the advice in the first place. Java is widespread in server-side applications (usually enterprise software) and embedded devices. There is probably no easy way to “upgrade” a heart pump, an elevator control or a POS system. On the server side this may be easier, but spare a thought for backward compatibility and business applications that are “certified” only on older browsers. And pause a moment: the vulnerability is exposed when you visit a malicious website, which can then take advantage of the flaw and get onto your machine.

Instead of disabling Java and thereby possibly breaking critical functionality, why not limit access to outside websites instead? This is easily done by configuring proxy servers (good for desktops or mobile situations) or by limiting devices to a subnet that only has access to trusted internal hosts (this can work for bar code scanners or manufacturing equipment). This limits your exposure. Proxy server filtering at the Internet perimeter is done by matching the user agent string. It is also a good way to keep those older, insecure browsers that must be present for internal applications from accessing the outside and becoming yet another source of infection in the enterprise.
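
For illustration, the filtering logic amounts to something like the following Python sketch. The patterns are assumptions to adapt for your environment; Java’s built-in HTTP client typically announces itself with a “Java/1.x” user agent.

    import re

    BLOCKED_UA = [
        re.compile(r"^Java/"),        # Java runtime fetching URLs directly
        re.compile(r"MSIE [456]\."),  # legacy browsers kept around for internal apps
    ]

    def allow_request(user_agent):
        """Return True if the request may leave the perimeter."""
        return not any(p.search(user_agent) for p in BLOCKED_UA)

    print(allow_request("Java/1.6.0_29"))                            # False: blocked
    print(allow_request("Mozilla/5.0 (Windows NT 6.1; rv:9.0.1)"))   # True: allowed

A real proxy (Squid, Blue Coat, etc.) would apply the equivalent rule in its own ACL syntax; the point is that the decision keys off the user agent, not off disabling Java on every endpoint.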

This is a serious issue that merits a thoughtful response, not a panicked rush to comply and cripple your enterprise.

2013 Security Resolutions

A New Year’s resolution is a commitment that a person makes to one or more personal goals, projects, or the reforming of a habit.

  • The ancient Babylonians made promises to their gods at the start of each year that they would return borrowed objects and pay their debts.
  • The Romans began each year by making promises to the god Janus, for whom the month of January is named.
  • In the Medieval era, the knights took the “peacock vow” at the end of the Christmas season each year to re-affirm their commitment to chivalry.

Here are mine:

1) Shed those extra pounds of logs

Log retention is always a challenge — how much to keep, and for how long? Keep them too long and they just eat away at storage space. Pitch them mercilessly and you are left wondering if you will need them. For guidance, look to any regulation that may apply: PCI-DSS says 365 days, for example; NIST 800-92 unhelpfully says “This should be driven primarily by organizational policies” and then goes on to classify logs into system, infrastructure and application levels. Bottom line: use your judgment, because you know your environment best.
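
As a minimal sketch, a retention sweep can be as simple as the following; the directory layout is an assumption, and RETENTION_DAYS is whatever your regulation or judgment dictates.

    import os, time

    ARCHIVE_DIR = "/var/log/archive"   # hypothetical archive location, one file per day
    RETENTION_DAYS = 365               # e.g., the PCI-DSS retention requirement

    cutoff = time.time() - RETENTION_DAYS * 86400
    for name in os.listdir(ARCHIVE_DIR):
        path = os.path.join(ARCHIVE_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)            # pitched, mercilessly but on schedule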

2) Exercise your log analysis muscles regularly

As the Verizon Data Breach report says year in and year out, the bad guys are hoping that you are not collecting logs, and if you are, that you are not reviewing them. More than 96% of all attacks were not highly difficult and were avoidable (at least in hindsight) without difficult or expensive countermeasures. Easier said than done, isn’t it? Consider co-sourcing the effort.

3) Play with existing toys before buying new ones

Know what configuration assessment is? It’s applying secure configurations to existing equipment. Organizations such as NIST, CIS and DISA provide detailed guidelines, and vendors such as Microsoft provide hardening guides. It’s a question of applying them to your existing hardware. This reduces the attack surface and contributes greatly to a more secure posture. You already have the equipment; just apply the secure configuration. EventTracker can help measure the results.
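
As a rough sketch of what a single configuration assessment check looks like, here are two CIS-style expectations for sshd_config expressed in Python; real benchmarks contain hundreds of such items, and the expected values here are illustrative assumptions.

    # Two CIS-style expectations; real benchmarks have hundreds of items.
    EXPECTED = {"PermitRootLogin": "no", "Protocol": "2"}

    settings = {}
    with open("/etc/ssh/sshd_config") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition(" ")
                settings[key] = value.strip()

    for key, want in EXPECTED.items():
        got = settings.get(key, "<unset>")
        print(("PASS" if got == want else "FAIL"), f"{key} = {got} (expected {want})")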

Happy New Year.

Five Leadership Lessons from Simpson-Bowles

In January 2010 the U.S. Senate was locked in a sharp debate about the country’s debt and deficit crisis. Unable to agree on a course of action, some Senators proposed the creation of a fiscal commission that would send Congress a proposal to address the problem with no possibility of amendments. It was chaired by former Senator Alan Simpson and former White House chief of staff Erskine Bowles.

Darrell West and Ashley Gabriele of Brookings examined the leadership lessons in this article. I was struck by how some of these lessons apply to the SIEM problem.

1) Stop Fantasizing About Easy Fixes

Cutting waste and fraud is not sufficient to address long-term debt and deficit issues. To think that we can avoid difficult policy choices simply by getting rid of wasteful spending is a fantasy. It’s equally tempting to think that the next Cisco firewall, Microsoft OS or magic box will solve all security issues, and that the hard work of reviewing logs, reviewing changes and assessing configuration will not be needed. It’s high time to stop fantasizing about such things.

2) Facts Are Informative

Senator Daniel Patrick Moynihan famously remarked that “everyone is entitled to his own opinion, but not to his own facts.” This insight often is lost in Washington D.C. where leaders invoke “facts” on a selective or misleading basis. The Verizon Data Breach report has repeatedly shown that attacks are not highly difficult, that most breaches took weeks or more to be discovered and that almost all were avoidable through simple controls.   We can’t get away from it — looking at logs is basic and effective.

3) Compromise Is Not a Dirty Word

One of the most challenging aspects of the contemporary political situation is how bargaining, compromise, and negotiation have become dirty words. Do you have this problem in your Enterprise? Between the Security and Compliance teams? Between the Windows and Unix teams? Between the Network and Host teams? Is it preventing you from evaluating and agreeing on a common solution? If yes, this lesson is for you — compromise is not a dirty word.

4) Security and Compliance Have Credibility in Different Areas

On fiscal issues, Democrats have credibility on entitlement reform because of their party’s longstanding advocacy on behalf of Social Security, Medicare, and Medicaid. Meanwhile, Republicans have credibility on defense issues and revenue enhancement because of their party’s history of defending the military and fighting revenue increases. In our world, the Compliance team has credibility on regular log review and coverage of critical systems, while the Security team has credibility on identifying obvious and subtle threats (out-of-ordinary behavior). Different areas, all good.

5) It’s Relationships, Stupid!

Commission leaders found that private and confidential discussions and trust-building exercises were important to achieving the final result. They felt that while public access and a free press were essential to openness and transparency, some meetings and most discussions had to be held behind closed doors. Empower the evaluation team to have frank and open discussion with all stakeholders — including those from Security, Compliance, Operations and Management. Such a consensus built in advance leads to a successful IT project.

Top 5 Security Threats of All Time

The newspapers are full of stories of the latest attack. Vendors then rush to put out marketing statements glorifying themselves for having already had a solution to the problem (if only you had their product or service), and the beat goes on.

Pause for a moment and compare this to health scares. The top 10 scares according to ABC News include Swine Flu (H1N1), BPA, lead paint on toys from China, Bird Flu (H5N1) and so on. They are, no doubt, scary monsters, but did you know that the common cold causes 22 million school days to be lost in the USA alone?

In other words, you are better off enforcing basic discipline to prevent days lost from common infections than stockpiling exotic vaccines. The same is true in IT security. Here then, are the top 5 attack vectors of all time. Needless to say these are not particularly hard to execute, and are most often successful simply because basic precautions are not in place or enforced. The Verizon Data Breach Report demonstrates this year in and year out.

1. Information theft and leakage

Personally Identifiable Information (PII) stolen from unsecured storage is rampant. The Federal Trade Commission says 21% of complaints are related to identity theft, accounting for 1.3 million cases in 2009-10 in the USA. The 2012 Verizon DBIR shows 855 incidents and 174 million compromised records.

Lesson learned: Implement recommendations like SANS CAG or PCI-DSS.

2. Brute force attack

Hackers leverage cheap computing power and pervasive broadband connectivity to breach security. This is a low-cost, low-tech attack that can be automated remotely. It can be easily detected and defended against, but that requires monitoring and eyes on the logs. It tends to be successful simply because monitoring is absent.

Lesson learned: Monitor logs from firewalls and network devices in real time. Set up alerts which are reviewed by staff and acted upon as needed. If this is too time consuming, then consider a service like SIEM Simplified.
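
The detection logic itself is not complicated. Here is a minimal sketch; the threshold, window and input format are assumptions to tune for your environment.

    from collections import defaultdict, deque

    WINDOW_SECS, THRESHOLD = 300, 20   # assumed: 20 failures in 5 minutes
    recent = defaultdict(deque)        # source IP -> timestamps of recent failures

    def on_failed_logon(src_ip, ts):
        q = recent[src_ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECS:
            q.popleft()                # slide the window forward
        if len(q) >= THRESHOLD:
            print(f"ALERT: {len(q)} failed logons from {src_ip} in {WINDOW_SECS}s")

    # fed, for example, by tailing the firewall or authentication log:
    on_failed_logon("203.0.113.7", 1000.0)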

3. Insider breach

Staff on the inside are often privy to large amounts of data and can cause much greater damage. The WikiLeaks case is the poster child for this type of attack.

4. Process and Procedure failures

It is often the case that in the normal course of business, established processes and procedures are ignored, and unfortunate coincidences cause problems. Examples include e-mailing interim work products to personal accounts, taking work home on USB sticks and then losing them, and sending CD-ROMs with source code by mail that are then lost.

Lesson learned: Reinforce policies and procedures for all employees on a regular basis. Many US Government agencies require annual completion of a Computer Security and Assessment Test.   Many commercial banks remind users via message boxes in the login screen.

5. Operating failures

This includes oops moments, such as backing up data to the wrong server and sending backup data off-site where it can be restored by unauthorized persons.

Lesson learned: Review procedures and policies for gaps. An external auditor can be helpful in identifying such gaps and recommending compensating controls to cover them.

Big Data, Old News. Got Humans?

Did you know that big data is old news in the area of financial derivatives? O’Connor & Associates was founded in 1977 by mathematician Michael Greenbaum, who had run risk management for Ed & Bill O’Connor’s options trading firm. What made O’Connor & Associates successful was the understanding that expertise is far more important than any tool or algorithm. After all, absent expertise, any tool can only generate gibberish; perfectly processed and completely logical, of course, but still gibberish.

Which brings us back to the critical role played by the driver of today’s enterprise tools. These tools are all full-featured and automate the work of crushing an entire hillside of dirt to locate a few grams of gold — but “got human”? It comes back to the skilled operator who knows how and when to push all those fancy buttons. Of course, deciding which hillside to crush is another problem altogether.

This is a particularly difficult challenge for midsize enterprises, which struggle with SIEM data: billions of logs, plus change and configuration data, all now available thanks to that shiny SIEM you just installed. What does it all mean? What are you supposed to do next? Large enterprises can afford a small army of experts to extract value, and the small business can ignore the problem completely, but for the midsize enterprise it’s the worst of all worlds: compliance regulations, tight budgets, lean staff and the demand for results.

This is why our SIEM Simplified offering was created: to allow customers to outsource the heavy lifting part of the problem while maintaining control over the critical and sensitive decision-making parts. At the EventTracker Control Center (ECC), our expert staff watches your incidents, reviews log reports daily, and alerts you to those few truly critical conditions that warrant your attention. This frees up your staff to take care of things that cannot be outsourced. In addition, since the ECC enjoys economies of scale, this can be done at a lower cost than do-it-yourself. This has the advantage of inserting the critical human component back into the equation, but at a price point that is affordable.

As Grady Booch observed “A fool with a tool is still a fool.”


Five myths about PCI-DSS

In the spirit of the Washington Post’s regular column, “5 Myths,” here we “challenge everything you think you know” about PCI-DSS compliance.

1. One vendor and product will make us compliant

While many vendors offer an array of services and software which target PCI-DSS, no single vendor or product fully addresses all 12 of the PCI-DSS v2.0 requirements. Marketing departments often position offerings in such a manner as to give the impression of a “silver bullet.”   The PCI Security Standards Council warns against reliance on a single product or vendor and urges a security strategy that focuses on the big picture.

2. Outsourcing card processing makes us compliant

Outsourcing may simplify payment card processing but does not provide automatic compliance. PCI-DSS also calls for policies and procedures to safeguard cardholder transactions and data processing when you receive them — for example, chargebacks or refunds. You should request an annual certificate of compliance from the vendor to ensure that their applications and terminals are compliant.

3. PCI is too hard, requires too much effort

The 12 requirements can seem difficult to understand and implement for merchants without a dedicated IT department; however, these requirements are basic steps toward good security. The standard offers the alternative of compensating controls, if needed. The market is awash with products and services to help merchants achieve compliance. Also consider that the cost of non-compliance can often be higher, including fines, legal fees, lost business and reputation.

4. PCI requires us to hire a Qualified Security Assessor (QSA)

PCI-DSS offers the option of doing a self-assessment with officer sign-off if your merchant bank agrees. Most large retailers prefer to hire a QSA because they have complex environments, and QSAs provide valuable expertise including the use of compensating controls.

5. PCI compliance will make us more secure

Security exploits never stop; it is an ever-escalating war between the bad guys and the good guys. Achieving PCI-DSS compliance, while certainly a “brick in the wall” of your security posture, is only a snapshot in time. “Eternal vigilance is the price of liberty,” said Wendell Phillips.

Does Big Data = Better Results? It depends…

If you could offer your IT Security team 100 times more data than they currently collect – every last log, every configuration, every single change made to every device in the entire enterprise at zero cost – would they be better off? Would your enterprise be more secure? Completely compliant? You already know the answer – not really, no. In fact, some compliance-focused customers tell us they would be worse off because of liability concerns (you had the data all along but neglected to use it to safeguard my privacy), and some security-focused customers say it would actually make things worse, because they have no processes to effectively manage such archives.

As Michael Schrage noted, big data doesn’t inherently lead to better results. Organizations must grasp that being “big data-driven requires more qualified human judgment than cloud-based machine learning.” For big data to be meaningful, it has to be linked to a desirable business outcome, or else executives are just being impressed or intimidated by the bigness of the data set. For example, IBM’s DeepQA project stores petabytes of data and was demonstrated by Watson, the successful Jeopardy-playing machine – that is big data linked clearly to a desirable outcome.

In our corner of the woods, the desirable business outcomes are well understood. We want to keep the bad guys out (malware, hackers), learn about the guys inside who have gone bad (insider threats), demonstrate continuous compliance, and of course do all this on a leaner, meaner budget.

Big data can be an embarrassment of riches if linked to such outcomes. But note the emphasis on “qualified human judgment.” Absent this, big data may be just an embarrassment. This point underlines the core problem with SIEM – we can collect everything, but who has the time or rule-set to make the valuable stuff jump out? If you agree, consider a managed service. It’s a cost-effective way to put big data to work in your enterprise today – clearly linked to a set of desirable outcomes.

Are you a Data Scientist?

The advent of the big data era means that analyzing large, messy, unstructured data will increasingly form part of everyone’s work. Managers and business analysts will often be called upon to conduct data-driven experiments, to interpret data, and to create innovative data-based products and services. To thrive in this world, many will require additional skills. In a new Avanade survey, more than 60 percent of respondents said their employees need to develop new skills to translate big data into insights and business value.

Are you:

Ready and willing to experiment with your log and SIEM data? Managers and security analysts must be able to apply the principles of scientific experimentation to their log and SIEM data. They must know how to construct intelligent hypotheses. They also need to understand the principles of experimental testing and design, including population selection and sampling, in order to evaluate the validity of data analyses. As randomized testing and experimentation become more commonplace, a background in scientific experimental design will be particularly valued.

Adept at mathematical reasoning? How many of your IT staff today are really “numerate” — competent in the interpretation and use of numeric data? It’s a skill that’s going to become increasingly critical. IT Staff members don’t need to be statisticians, but they need to understand the proper usage of statistical methods. They should understand how to interpret data, metrics and the results of statistical models.

Able to see the big (data) picture? You might call this “data literacy,” or competence in finding, manipulating, managing, and interpreting data, including not just numbers but also text and images. Data literacy skills should be widespread within the IT function, and become an integral aspect of every function and activity.

Jeanne Harris blogging in the Harvard Business Review writes, “Tomorrow’s leaders need to ensure that their people have these skills, along with the culture, support and accountability to go with it. In addition, they must be comfortable leading organizations in which many employees, not just a handful of IT professionals and PhDs in statistics, are up to their necks in the complexities of analyzing large, unstructured and messy data.

“Ensuring that big data creates big value calls for a reskilling effort that is at least as much about fostering a data-driven mindset and analytical culture as it is about adopting new technology. Companies leading the revolution already have an experiment-focused, numerate, data-literate workforce.”

If this presents a challenge, then co-sourcing the function may be an option. The EventTracker Control Center here at Prism offers SIEM Simplified, a service where trained, expert IT staff perform the heavy lifting associated with big data analysis as it relates to SIEM data. By removing the outliers and bringing patterns to your attention with the greater efficiency that comes from scale, focus and expertise, the service lets you concentrate on interpretation and the associated actions.

Seven deadly sins of SIEM

1) Lust: Be not easily lured by the fun, sexy demo. It always looks fantastic when the sales guy is driving. How does it work when you drive? Better yet, on your data?

2) Gluttony: Know thy log volume. When thou consumeth mucho more raw logs than thou expected, thou shalt pay, and pay dearly. More SIEM budgets die from log gluttony than starvation.

3) Greed: Pure pursuit of perfect rules is perilous. Pick a problem you’re passionate about, craft monitoring, and only after it is clearly understood do you automate remediation.

4) Sloth: The lazy shall languish in obscurity; toilers triumph. Use thy SIEM every day: acknowledge the incidents, review the log reports. Too hard? No time, you say? Consider SIEM Simplified.

5) Wrath: Don’t get angry with the naysayers. Attack the problem instead. Remember “those who can, do; those who cannot, criticize.” Democrats: Yes we can v2.0.

6) Envy: Do not copy others blindly out of envy for their strategy. Account for your differences (but do emulate best practices).

7) Pride: Hubris kills; humility has a power all its own. Don’t claim 100% compliance or security. Rather, say you have 80% coverage at 20% of the cost, and are refining to get the rest. Republicans: So sayeth Ronald Reagan.

Trending Behavior – The Fastest Way to Value

Our SIEM Simplified offering is manned by a dedicated staff overseeing the EventTracker Control Center (ECC). When a new customer comes aboard, the ECC staff is tasked with getting to know the new environment: identifying which systems are critical, which applications need watching, what access controls are in place, etc. In theory, the customer would bring the ECC staff up to speed (this is their network, after all) and keep them up to date as the environment changes. Reality bites, and this is rarely the case. More commonly, the customer is unable to provide the ECC with anything other than the most basic of information.

How then can the ECC “learn” and why is this problem interesting to SIEM users at large?

Let’s tackle the latter question first. A problem facing new users of a SIEM installation is that they get buried in getting to know the baseline patterns of the enterprise (the very same problem the ECC faces). See this article from a practitioner.

So it’s the same problem. How does the ECC respond?

Short answer: By looking at behavior trends and spotting the anomalies.

Long answer: The ECC first discovers the network and learns the various device types (OS, application, network devices, etc.). This is readily automated by the StatusTracker module. If we are lucky, we get to ask the customer specific questions to bolster our understanding. Next, based on this information and the available knowledge packs within EventTracker, we schedule suitable daily and weekly reports and configure alerts. So far so good, but really no cigar. The real magic lies in taking these reports and creating flex reports, where we control the output format to focus on parameters of value that are embedded within the description portion of the log messages (this is always true for syslog-formatted messages, but also for Windows-style events). When these parameters are trended in a graph, all sorts of interesting information emerges.
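
As a toy illustration of the trending idea, consider flagging a user whose daily failed-logon count jumps well above his own history. The input structure and threshold here are assumptions for the sketch, not the actual EventTracker mechanism.

    from collections import defaultdict
    from statistics import mean, stdev

    daily_counts = defaultdict(list)   # user -> [(day, failed-logon count), ...]
    # ... filled in by the flex-report extraction step; one toy series here:
    daily_counts["jsmith"] = [("Mon", 2), ("Tue", 3), ("Wed", 2), ("Thu", 21)]

    for user, series in daily_counts.items():
        history = [c for _, c in series[:-1]]
        if len(history) < 3:
            continue                   # not enough history to establish a norm
        mu, sigma = mean(history), stdev(history)
        day, latest = series[-1]
        if latest > mu + 3 * max(sigma, 1):
            print(f"anomaly: {user} had {latest} failed logons on {day} (norm ~{mu:.1f})")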

In one case, we saw that a particular group of users was putting their passwords in the username field and then logging in, much more often than usual — in the logs you see a failed logon followed by a successful one; combine the two and you have both the username and the password. In another case, we saw repeated failed logons after hours from a critical IBM i-Series machine and hit the panic button. It turned out someone had left a book on the keyboard.

Takeaway: Want to get useful value from your SIEM but don’t have gobs of time to configure or tune the thing for months on end? Think trending behavior, preferably auto-learned. This is what sets EventTracker apart from the search-engine-based SIEMs or the rules-based products that need an expen$ive human analyst chained to the product. Better yet, let the ECC do the heavy lifting for you. SIEM Simplified, indeed.

SIEM Fevers and the Antidote

SIEM Fever is a condition that robs otherwise rational people of common sense in regard to adopting and applying Security Information and Event Management (SIEM) technology for their IT Security and Compliance needs. The consequences of SIEM Fever have contributed to misapplication, misuse, and misunderstanding of SIEM with costly impact. For example, some organizations have adopted SIEM in contexts where there is no hope of a return on investment. Others have invested in training and reorganization but use or abuse the technology with new terminology taken from the vendor dictionary.   Alex Bell of Boeing first described these conditions.

Before you get your knickers in a twist believing that this is an attack on SIEM that must be avenged with flaming commentary against its author, fear not. There are real IT Security and Compliance efforts wasting real money and real time by misusing SIEM in a number of common ways. Let’s review these types of SIEM Fever so they can be recognized and treated.

Lemming Fever: A person with Lemming Fever knows about SIEM simply based upon what he or she has been told (be it true or false), without any first-hand experience or knowledge of it themselves. The consequences of Lemming Fever can be very dangerous if infectees have any kind of decision making responsibility for an enterprise’s SIEM adoption trajectory. The danger tends to increase as a function of an afflictee’s seniority in the program organization due to the greater consequences of bad decision making and the ability to dismiss underling guidance. Lemming Fever is one of the most dangerous SIEM Fevers as it is usually a precondition to many of the following fevers.

Easy Button Fever: This person believes that adopting SIEM is as simple as pressing Staples’ Easy Button, at which point their program magically and immediately begins reaping the benefits of SIEM as imagined during the Lemming Fever stage of infection. Depending on the Security Operations Center (SOC) methodology, however, the deployment of SIEM could mean significant change. Typically, these people have little to no idea about the features which are necessary for delivering SIEM’s productivity improvements, or the possible inapplicability of those features to their environment.

One Size Fits All Fever: Victims of One Size Fits All Fever believe that the same SIEM model is applicable to any and all environments, with a return on investment being implicit in adoption. While tailoring is an important part of SIEM adoption, the extent to which SIEM must be tailored for a specific environment’s context is an important barometer of its appropriateness. One Size Fits All Fever is a mindset that may stand apart from the other Fevers, which are typically associated with the tactical misuse of SIEM.

Simon Says Fever: Afflictees of Simon Says Fever are recognized by their participation in SIEM-related activities without the slightest idea as to why those activities are being conducted or why they are important, other than because they are included in some “checklist.” The most common cause of this Fever is failing to tie all log and incident review activities to adding value, and falling into a comfortable, robotic regimen that is merely an illusion of progress.

One-Eyed King Fever: This Fever has the potential to severely impact the successful adoption of SIEM and occurs when the SIEM blind are coached by people with only a slightly better understanding of SIEM. The most common symptom of One-Eyed King Fever is the failure to tailor the SIEM implementation to its specific context, or the failure of a coach to recognize and act on a low probability of return on investment as it pertains to an enterprise’s adoption.

The Antidote: SIEM doesn’t cause the Fevers previously described; people do. Whether these people are well intentioned, have studied at the finest schools, or have high IQs, they are typically ignorant of SIEM in many dimensions. They have little idea about the qualities of SIEM which are the bases of its advertised productivity-improving features, they believe that those improvements are guaranteed by merely adopting SIEM, or they have little idea that the extent of SIEM’s ability to deliver benefit is highly dependent upon program-specific context.

The antidote for the many forms of SIEM Fever is education. Unfortunately, many of those who are prone to the aforementioned SIEM infections are most desperately in need of such education, are often unaware of what they don’t know about SIEM, are unreceptive to learning about what they don’t know, or believe that those trying to educate them are simply village idiots who have not yet seen the brightly burning SIEM light.

While I’m being entirely tongue-in-cheek, the previously described examples of SIEM misuse and misapplication are real and occur on a daily basis. These are not cases of industrial sabotage caused by rogue employees planted by a competitor; they are self-inflicted, and they frequently continue even amidst the availability of experts who are capable of rectifying them.

Interested in getting help? Consider SIEM Simplified.

Surfing the Hype Cycle for SIEM

The Gartner hype cycle is a graphic “source of insight to manage technology deployment within the context of your specific business goals.” If you have already adopted Security Information and Event Management (SIEM, aka log management) technology in your organization, how is that working for you? As a candidate, Reagan famously asked, “Are you better off than you were four years ago?”

Sadly, many buyers of this technology are wallowing in the “trough of disillusionment”: the implementation has been harder than expected, the technology more complex than demonstrated, the discipline required to use and tune the product is lacking; add resource constraints and hiring freezes, and the list goes on.

What next? Here are some choices to consider.

Do nothing: Perhaps the compliance check box has been checked off; auditors can be shown the SIEM deployment and sent on their way; the senior staff are on to the next big thing; the junior staff have their hands full anyway; leave well enough alone.
Upside: No new costs, no disturbance in the status quo.
Downside: No improvements in security or operations; attackers count on the fact that even if you do collect SIEM log data, you will never really look at it.

Abandon ship: Give up on the whole SIEM concept as yet another failed IT project; the technology was immature; the vendor support was poor; we did not get resources to do the job and so on.
Upside: No new costs, in fact perhaps some cost savings from the annual maintenance, one less technology to deal with.
Downside: Naked in the face of attack or an auditor visit; expect an OMG crisis situation soon.

Try managed service: Managing a SIEM is 99% perspiration and 1% inspiration; offload the perspiration to a team that does this for a living; they can do it with discipline (their livelihood depends on it) and probably cheaper too (passing on savings to you); you deal with the inspiration.
Upside: Security usually improves; compliance is not a nightmare; frees up senior staff to do other pressing/interesting tasks; cost savings.
Downside: Some loss of control.

Interested? We call it SIEM Simplified™.

Big Data Gotchas

Jill Dyche, writing in the Harvard Business Review, suggests that “the question on many business leaders’ minds is this: Does the potential for accelerating existing business processes warrant the enormous cost associated with technology adoption, project ramp up, and staff hiring and training that accompany Big Data efforts?”

A typical log management implementation, even in a medium enterprise, is usually a big data endeavor. Surprised? You should not be. A relatively small network of a dozen log sources easily generates a million log messages per day, with volumes of 50-100 million per day being commonplace. With compliance and security guidelines requiring that logs be retained for 12 months or more, pretty soon you have big data.
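
The back-of-the-envelope arithmetic makes the point; the average message size below is an assumption, since raw log lines vary widely.

    msgs_per_day = 50_000_000      # mid-range of the volumes quoted above
    bytes_per_msg = 300            # assumed average size of a raw log line
    retention_days = 365
    total_bytes = msgs_per_day * bytes_per_msg * retention_days
    print(f"{total_bytes / 1e12:.1f} TB of raw logs per year")   # ~5.5 TB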

So let’s answer the question raised in the article:

Q1: What can’t we do today that Big Data could help us do?   If you can’t define the goal of a Big Data effort, don’t pursue it.

A1: Comply with regulations like PCI-DSS, SOX 404, HIPAA, etc.; be alerted to security problems in the enterprise; control data leakage via insecure endpoints; improve operational efficiency.

Q2: What skills, technologies, and existing data development practices do we have in place that could help kick-start a Big Data effort? If your company doesn’t have an effective data management organization in place, adoption of Big Data technology will be a huge challenge.

A2: Absent a trained and motivated user of the power tool that is the modern SIEM, an organization that acquires such technology is consigning it to shelfware. Recognizing this as a significant adoption challenge in our industry, we offer Monitored SIEM as a service; the best way to describe it is SIEM simplified! We do the heavy lifting so you can focus on leveraging the value.

Q3: What would a proof-of-concept look like, and what are some reasonable boundaries to ensure its quick deployment? As with many other proofs-of-concept the “don’t boil the ocean” rule applies to Big Data.

A3: The advantage of a software-only solution like EventTracker is that an on-premises trial is easy to set up. A virtual appliance with everything you need is provided; set it up as a VMware or Hyper-V virtual machine within minutes. Want something even faster? See it live online.

Q4: What determines whether we green light Big Data investment? Know what success looks like, and put the measures in place.

A4: Excellent point; success may mean continuous compliance; a 75% reduction in the cost of compliance; one security incident averted per quarter; delegation of log review to a junior admin.

Q5: Can we manage the changes brought by Big Data? With the regular communication of tangible results, the payoff of Big Data can be very big indeed.

A5: EventTracker includes more than 2,000 pre-built reports designed to deliver value to every interested stakeholder in the enterprise ranging from dashboards for management, to alerts for Help Desk staff, to risk prioritized incident reports for the security team, to system uptime and performance results for the operations folk and detailed cost savings reports for the CFO.

The old adage “If you fail to prepare, then prepare to fail” applies. Armed with these questions and answers, you are closer to gaining real value with Big Data.

Sun Tzu would have loved Flame

All warfare is based on deception, says Sun Tzu. To quote:

“Hence, when able to attack, we must seem unable; 
When using our forces, we must seem inactive; 
When we are near, we must make the enemy believe we are far away;  
When far away, we must make him believe we are near.”

With the new era of cyberweapons, Sun Tzu’s blueprint can be followed almost exactly: a nation can attack when it seems unable to. When conducting cyber-attacks, a nation will seem inactive. When a nation is physically far away, the threat will appear very, very near.

Amidst all the controversy and mystery surrounding attacks like Stuxnet and Flame, it is becoming increasingly clear that the wars of tomorrow will most likely be fought by young kids at computer screens rather than by young kids on the battlefield with guns.

In the area of technology, what is invented for use by the military or for space eventually finds its way to the commercial arena. It is therefore only a matter of time before the techniques used by Flame or Stuxnet become part of the arsenal of the average cyber thief.

Ready for the brave new world?

Learning from JPMorgan

The single most revealing moment in the coverage of JPMorgan’s multibillion dollar debacle can be found in this take-your-breath-away passage from The Wall Street Journal: On April 30, associates who were gathered in a conference room handed Mr. Dimon summaries and analyses of the losses. But there were no details about the trades themselves. “I want to see the positions!” he barked, throwing down the papers, according to attendees. “Now! I want to see everything!”

When Mr. Dimon saw the numbers, these people say, he couldn’t breathe.

Only when he saw the actual trades — the raw data — did Mr. Dimon realize the full magnitude of his company’s situation. The horrible irony: The very detail-oriented systems (and people) Dimon had put in place had obscured rather than surfaced his bank’s horrible hedge.

This underscores the new trust-versus-due-diligence dilemma outlined by Michael Schrage. Raw data can have an enormous impact on executive perceptions that pre-chewed analytics lack. This is not to minimize or marginalize the importance of analysis and interpretation; but nothing creates situational awareness faster than seeing with your own eyes what your experts are trying to synthesize and summarize.

There’s a reason why great chefs visit the farms and markets that source their restaurants:   the raw ingredients are critical to success — or failure.

We have spent a lot of energy in building dashboards for critical log data and recognize the value of these summaries; but while we should trust our data, we also need to do the due diligence.

Big Data – Does insight equal decision?

In information technology, big data consists of data sets that grow so large that they become awkward to work with using whatever database management tools are on hand. For that matter, how big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 GB; in others, 100 TB. So, following up on our earlier post about big data and insight, there is one more important consideration:

Does insight equal decision?

The foregone conclusion from big data proponents is that each nugget of “insight” uncovered by data mining will somehow be implicitly actionable, and that the end user (or management) will gush with excitement and praise.

The first problem is how can you assume that “insight” is actionable? It very well may not be, so what do you do then? The next problem is how can you convince the decision maker that the evidence constitutes an imperative to act? Absent action, the “insight” remains simply a nugget of information.

Note that management typically responds to “insight” with skepticism, seeing the message bearer as yet another purveyor of information (“insight”) insisting that his new method is the silver bullet, thereby adding to the workload.

Being in management myself, my team often comes to me with their little nuggets … some are gold, but some are chicken.   Rather than purvey insight, think about a recommendation backed up by evidence.

Big Data, does more data mean more insight?

In information technology, big data consists of data sets that grow so large they become unwieldy to work with using available database management tools. How big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 Gigabytes, in others, as great as 100 Terabytes.

Does more data necessarily mean more insight?

The pro argument is that larger data sets allow for greater incidences of patterns, facts, and insights. Moreover, with enough data, you can discover trends using simple counting that are otherwise undiscoverable in small data even with sophisticated statistical methods.

On the other hand, while this is perfectly valid in theory, for many businesses the key barrier is not the ability to draw insights from large volumes of data; it is asking the right questions for which insight is needed.

The ability to provide answers depends on the question being asked and the relevance of the big-data set to that question. How can one generalize to an assumption that more data will always mean more insight? It isn’t always the answer that’s important; it is the questions that are key.

Silly human – logs are for machines (too)

Here is an anecdote from a recent interaction with an enterprise application in the electric power industry:

1. Dave the developer logs all kinds of events. Since he is the primary consumer of the log, the format is optimized for human readability. For example:

02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59

Apparently this makes perfect sense to Dave:   each line includes a timestamp and some text.

2. Sam from the Security team needs to determine the number of daily unique users. Dave quickly writes a parser script for the log and schedules it. He also builds a little Web interface so that Sam can query the parsed data on his own. Peace reigns.

3. A few weeks later, Sam complains that the web interface is broken. Dave takes a look at the logs, only to realize that someone else has added an extra field in each line, breaking his custom parser. He pushes the change and tells Sam that everything is okay again. Instead of writing a new feature, Dave has to go back and fill in the missing data.

4. Every 3 weeks or so, repeat Step 3 as others add logs.
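
The moral is in the title: logs are for machines, too. As a minimal sketch of the machine-friendly alternative, if Dave had emitted each event as named fields (JSON, say, or key=value pairs), consumers would parse by field name and an extra field would break nothing.

    import json, time

    def log_event(**fields):
        fields.setdefault("ts", time.strftime("%Y-%m-%dT%H:%M:%S"))
        print(json.dumps(fields))      # one self-describing event per line

    log_event(user="USER49", module="MOD0053", error_from="MOD0052", retcode=59)
    # {"user": "USER49", "module": "MOD0053", "error_from": "MOD0052",
    #  "retcode": 59, "ts": "2012-04-02T01:34:03"}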

What is your maximum NPH?

In The Information Diet, Clay Johnson wrote, “The modern human animal spends upwards of 11 hours out of every 24 in a state of constant consumption. Not eating, but gorging on information … We’re all battling a storm of distractions, buffeted with notifications and tempted by tasty tidbits of information. And just as too much junk food can lead to obesity, too much junk information can lead to cluelessness.”

Audit yourself and you may be surprised to find that you get more than 10 notifications per hour; they can be disruptive to your attention. I find myself trying hard (and often failing) to ignore the smartphone as it beeps softly to indicate a new distraction. I struggle to remain focused on the person in my office as the desktop tinkles for attention.

Should you kill off notifications though? Clay argues that you should and offers tools to help.

When designing EventTracker v7, minimizing notifications was a major goal. On Christmas Day 2008, nobody was stirring, but the “alerts” console rang up over 180 items demanding review. It was obvious these were not really “alerts.” This led to the “risk” score, which dramatically reduces notifications.

We know that all “alerts” are not equal: some merit attention before going to lunch, some before the end of the day, and some by the end of the quarter, budget permitting. A very rare few require us to drop the coffee mug and attend instantly. Accordingly, a properly configured EventTracker installation will rarely “notify” you; but when you need to know — that alert will come screaming for your attention.
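
To illustrate the idea (the actual EventTracker scoring is more involved; the weights and threshold here are assumptions), a risk score can be as simple as severity scaled by asset criticality, with only high scores producing an immediate notification.

    ASSET_CRITICALITY = {"dc01": 10, "webserver": 7, "lab-pc": 2}
    NOTIFY_THRESHOLD = 50              # below this, the daily digest will do

    def risk(severity, asset):
        """Event severity (1-10) scaled by how much the asset matters."""
        return severity * ASSET_CRITICALITY.get(asset, 1)

    for sev, asset in [(9, "dc01"), (9, "lab-pc"), (3, "webserver")]:
        score = risk(sev, asset)
        action = "notify now" if score >= NOTIFY_THRESHOLD else "daily digest"
        print(f"{asset}: severity {sev} -> risk {score} -> {action}")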

I am frequently asked about the maximum events per second that can be managed. I think I’ll begin to ask how many notifications per hour (NPH) the questioner can handle. I think Clay Johnson would approve.

Data, data everywhere but not a drop of value

The sailor in The Rime of the Ancient Mariner relates his experiences after a long sea voyage when his ship is blown off course:

“Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.”

An albatross appears and leads them out, but is shot by the Mariner and the ship winds up in unknown waters.  His shipmates blame the Mariner and force him to wear the dead albatross around his neck.

Replace water with data, boards with disk space, and drink with value, and the lament would apply to the modern IT infrastructure. We are all drowning in data, but not so much in value. “Big data” refers to datasets that have grown so large that managing them with on-hand tools is awkward. They are seen as the next frontier in innovation, competition, and productivity.

Log management is not immune to this trend. As the basic log collection problem (different sources, different protocols, different formats) has been resolved, we are now collecting even larger datasets of logs. Many years ago we refuted the argument that log data belonged in an RDBMS, precisely because we saw the side problem of efficient data archival begin to overwhelm the true problem of extracting value from the data. As log data volumes continue to explode, that decision continues to be validated.

However, while storing raw logs in a database was not sensible, the power of databases in extracting patterns and value from data is well established. Recognizing this, EventVault Explorer was released in 2011. Users can extract selected datasets to their choice of external RDBMS (a datamart) for fuzzy searching, pivot tables, etc. As was noted here, the key to managing big data is to personalize the results for maximum impact.

As you look under the covers of SIEM technology, pay attention to that albatross called log archives. It can lead you out of trouble, but you don’t want it around your neck.

Top 5 Compliance Mistakes

5. Overdoing compensating controls

When a legitimate technological or documented business constraint prevents you from satisfying a requirement, a compensating control can be the answer after a risk analysis is performed. Compensating controls are not specifically defined inside PCI; they are defined by you (as a self-certifying merchant) or your QSA. A compensating control is specifically not an excuse to push PCI compliance initiatives through to completion at minimal cost to your company. In reality, most compensating controls are actually harder to implement and cost more money in the long run than fixing or addressing the original issue or vulnerability. See this article for a clear picture on the topic.

4. Separation of duty

Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost and effort required. Both PCI DSS Requirements 3.4.1 and 3.5 mention separation of duties as an obligation for organizations, and yet many still do not do it right, usually because they lack staff.

3. Principle of least privilege

PCI 2.2.3 requires organizations to “configure system security parameters to prevent misuse.” This means drilling down into user roles to ensure they follow the rule of least privilege wherever PCI regulations apply. This is easier said than done; it is often “easier” to grant all possible privileges than to determine and assign just the correct set. Convenience is the enemy of security.

2. Fixating on excluding systems from scope

When you make the process of getting things out of scope a higher priority than addressing real risk, you get in trouble. Risk mitigation must come first and foremost. In far too many cases, out-of-scope becomes out-of-mind. This may make your CFO happy, but a hacker will get past weak security and not care if the system is in scope or not.

And drum roll …

1. Ignoring virtualization

Many organizations have embraced virtualization wholeheartedly, given its efficiency gains. In some cases, virtualized machines are now off-premises and co-located at a service provider like Rackspace; this is a trend at federal government facilities. However, “off-premises” does not mean “off your list.” Regardless of the location of the cardholder data, such systems are within scope, as is the hypervisor. In fact, PCI DSS 2.0 says that if cardholder data is present on even one VM, then the entire VM infrastructure is “in scope.”

The 5 Most Annoying Terms of 2011

Since every cause needs “Awareness,” here are my picks for management speak to camouflage the bloody obvious:

  5. Events per second

Log Management vendors are still trying to “differentiate” with this tired and meaningless metric as we pointed out in The EPS Myth.

  4. Thought leadership

Mitch McCrimmon describes it best.

  3. Cloud

Now here is a term that means all things to all people.

  2. Does that make sense?

The new “to be honest.” Jerry Weissman discusses it in the Harvard Business Review.

  1. Nerd

During the recent SOPA debate, so many self-described “country boys” wanted to get the “nerds” to explain the issue to them; as Jon Stewart pointed out, the word they were looking for was “expert.”

SIEM and the Appalachian Trail

The Appalachian Trail is a marked hiking trail in the eastern United States extending between Georgia and Maine. It is approximately 2,181 miles long and takes about six months to complete. It is not a particularly difficult journey from start to finish; yet even so, completing the trail requires more from the hiker than just enthusiasm, endurance and will.

Likewise, SIEM implementation can take from one to six months to complete (depending on the level of customization) and, like the Trail, appears deceptively simple. It, too, can be filled with challenges that reduce even the most experienced IT manager to despair, and there is no shortage of implementations that have been abandoned or left uncompleted. As with the Trail, SIEM implementation requires thoughtful consideration.

1) The Reasons Why

It doesn’t take too many nights scurrying to find shelter in a lightning storm, or days walking in adverse conditions before a hiker wonders: Why am I doing this again? Similarly, when implementing any IT project, SIEM included, it doesn’t take too many inter-departmental meetings, technical gotchas, or budget discussions before this same question presents itself: Why are we doing this again?

All too often, we don’t have a compelling answer, or we have forgotten it. If you are considering a half-year-long backpacking trip through the woods, there is presumably a really good reason for it. In the same way, one embarks on a SIEM project with specific goals, such as regulatory compliance, IT security improvement or controlling operating costs. Define the answer to this question before you begin the project and refer to it when the implementation appears to be derailing. This is the compass that should guide your way. Make adjustments as necessary.

2) The Virginia Blues

Daily trials can include anything from broken bones to homesickness, a circumstance that occurs on the Appalachian Trail about four to eight weeks into the journey, within the state lines of Virginia. Getting through requires not just perseverance but also an ability to adapt.

For a SIEM project, staff turnover, false positives, misconfigurations or unplanned explosions of data can potentially derail the project. But pushing harder in the face of distress is a recipe for failure. Step back, remind yourself of the reasons why this project is underway, and look at the problems from a fresh perspective. Can you be flexible? Can you find new avenues around the problems?

3) A Fresh Perspective

In the beginning, every day is chock full of excitement; every summit view or wild animal encounter is thrilling. But life in the woods becomes routine, and exhilaration eventually fades into frustration.

In much the same way, after the initial thrill of installation and its challenges, the SIEM project devolves into a routine of discipline and daily observation across the infrastructure for signs of something amiss.

This is where boredom can set in, but the best defense against the lull that comes with the end of the implementation is to expect it. The journey is going to end. The work, however, is not complete when the project is implemented. Rather, when the installation is done, the real journey and the hard work begin.

Humans in the loop – failsafe or liability?

Among InfoSec and IT staff, there is a lot of behind-the-scenes hand-wringing that users are the weakest link. But are InfoSec staff that much stronger?

While automation does have a place, Dan Geer of the CIA-backed venture fund In-Q-Tel properly notes that “humans can build structures more complex” than they can operate, and asks, “Are humans in the loop a failsafe or a liability? Is fully automated security to be desired or to be feared?”

We’ve considered this question before at Prism, when “automated remediation” was being heavily touted as a solution for mid-market enterprises, where IT staff is not abundant. We’ve found that human intervention is not just a fail-safe but a necessity: the interdependencies, even in medium-sized networks, are far too complex to automate away. We introduced the feature a couple of years back and, in reviewing its usage, concluded that “automated remediation” does have a role to play in the modern enterprise. Use cases include changes to group membership in Active Directory, unrecognized processes, account creation where the naming convention is not followed, or honeypot access. In other words, when the condition can be well defined and narrowly focused, humans in the loop will only slow things down. However, for every such “rule” there are hundreds more situations that will be obvious to a human but missed by the narrow rule.
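
A sketch of what such a narrowly focused rule might look like; the event shape, the naming convention and the disable_account() helper are all hypothetical.

    import re

    NAMING_CONVENTION = re.compile(r"^[a-z]{2,3}\d{4}$")   # e.g., "ab1234"

    def disable_account(name):
        print(f"disabled {name}; ticket opened for human review")   # stub

    def on_account_created(event):
        name = event["account_name"]
        if not NAMING_CONVENTION.match(name):
            disable_account(name)      # condition is well defined and narrow
        # anything subtler stays with the humans in the loop

    on_account_created({"account_name": "backdoor$"})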

So are humans in the loop a failsafe or a liability? It depends on the scenario.

What’s your thought?

Will the cloud take my job?

Nearly every analyst has made aggressive predictions that outsourcing to the cloud will continue to grow rapidly. It’s clear that servers and applications are migrating to the cloud as fast as possible, but according to an article in The Economist, the tradeoff is efficiency vs. sovereignty.   The White House announced that the federal government will shut down 178 duplicative data centers in 2012, adding to the 195 that will be closed by the end of this year.

Businesses need motivation and capability to recognize business problems, solutions that can improve the enterprise, and ways to implement those solutions.   There is clearly a role for outsourced solutions and it is one that enterprises are embracing.

For an engineer, however, the response to outsourcing can be one of frustration, and concerns about short-sighted decisions by management that focus on short term gains at the risk of long term security. But there is also an argument why in-sourcing isn’t necessarily the better business decision:   a recent Gartner report noted that IT departments often center too much of their attention on technology and not enough on business needs, resulting in a “veritable Tower of Babel, where the language between the IT organization and the business has been confounded, and they no longer understand each other.”

Despite increased migration to cloud services, there does not appear to be an immediate impact on InfoSec-related jobs. Among the 12 computer-related job classifications tracked by the Department of Labor’s Bureau of Labor Statistics (BLS), information security analysts, along with computer and information research scientists, reported no unemployment during the first two quarters of 2011.

John Reed, executive director at IT staffing firm Robert Half Technology, attributes the high growth to the increasing organizational awareness of the need for security and hands-on IT security teams to ensure appropriate security controls are in place to safeguard digital files and vital electronic infrastructure, as well as respond to computer security breaches and viruses.

Simply put: the facility of using cloud services does not replace the skills needed to analyze and interpret the data to protect the enterprise. Outsourcing to a cloud may provide immediate efficiencies, but it is the IT security staff who deliver the business value that ensures long-term security.

Threatscape 2012 – Prevent, Detect, Correct

The past year has been a hair-raising series of IT security breakdowns and headline events, reaching as high as RSA itself falling victim to a phishing attack. And as the sun set on 2011, the hacker group Anonymous remained busy, providing a sobering reminder that IT Security can never rest.

It turned out that attackers had sent two different targeted phishing e-mails to four workers at RSA’s parent company, EMC. The e-mails contained a malicious attachment, identified in the subject line as “2011 Recruitment plan.xls,” which was the point of attack.

Back to Basics:

Prevent:

Use administrative controls, such as security awareness training, and technical controls, such as firewalls, anti-virus and IPS, to stop attacks from penetrating the network. Most industry and government experts agree that security configuration management is probably the best way to ensure the best security configuration allowable, along with automated patch management and up-to-date anti-virus software.

Detect:

Employ a blend of technical controls such as anti-virus, IPS, intrusion detection systems (IDS), system monitoring, file integrity monitoring, change control, log management and incident alerting to help track how and when system intrusions are being attempted.
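
File integrity monitoring, one of the controls named above, reduces to a simple idea: hash the files you care about and compare against a stored baseline. A minimal sketch, with an illustrative watch list:

    import hashlib, json

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # illustrative watch list

    def snapshot(paths):
        return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

    # first run: json.dump(snapshot(WATCHED), open("baseline.json", "w"))
    baseline = json.load(open("baseline.json"))
    for path, digest in snapshot(WATCHED).items():
        if baseline.get(path) != digest:
            print("CHANGE DETECTED:", path)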

Correct:

Apply operating system upgrades, backup data restores, vulnerability mitigation and other controls to make sure systems are configured correctly and to prevent the irretrievable loss of data.

Echo Chamber

In the InfoSec industry, there is an abundance of familiar flaws, copycat theories and recycled approaches. We repeat ourselves and recommend the same things. But what has really changed in the last year?

The emergence of hacking groups like Anonymous, LulzSec, and TeaMp0isoN.

In 2011, these groups brought the fight to corporate America, crippling firms both small (HBGary Federal) and large (Stratfor, Sony). As the year drew to a close, these groups shifted from prank-oriented hacks for laughs (or “lulz”) to aligning themselves with political movements like Occupy Wall Street, and to hacking firms like Stratfor, an Austin, Tex.-based security “think tank” that releases a daily newsletter concerning security and intelligence matters all over the world.

After HBGary Federal CEO Aaron Barr publicly bragged that he was going to identify some members of the group during a talk at RSA Conference week in San Francisco, Anonymous members responded by dumping a huge cache of his personal emails and those of other HBGary Federal executives online, eventually leading to Barr’s resignation. Anonymous and LulzSec then spent several months targeting various retailers, public figures and members of the security community. Their Operation AntiSec aimed to expose alleged hypocrisies and sins by members of the security community. They targeted a number of federal contractors, including IRC Federal and Booz Allen Hamilton, exposing personal data in the process. Congress got involved in July when Sen. John McCain urged Senate leaders to form a select committee to address the threat posed by Anonymous/LulzSec/WikiLeaks.

The attack on RSA SecurID was another watershed event. The first public news of the compromise came from RSA itself, when it published a blog post explaining that an attacker had been able to gain access to the company’s network through a “sophisticated” attack. Officials said the attacker had compromised some resources related to the RSA SecurID product, which set off major alarm bells throughout the industry. SecurID is used for two-factor authentication by a huge number of large enterprises, including banks, financial services companies, government agencies and defense contractors. Within months of the RSA attack, there were attacks on SecurID customers, including Lockheed Martin, and the current working theory espoused by experts is that the still-unidentified attackers were interested in Lockheed Martin and the other RSA customers all along and, having run into trouble compromising them directly, went after the SecurID technology as a way to loop back to those customers.

The specifics of the attack were depressingly mundane (targeted phishing email with a malicious Excel file attached).

Then too, several certificate authorities were compromised throughout the year. Comodo was the first to fall when it was revealed in March that an attacker (apparently an Iranian national) had been able to compromise the CA infrastructure and issue himself a pile of valid certificates for domains belonging to Google, Yahoo, Skype and others. The attacker bragged about his accomplishments in Pastebin posts and later posted evidence of his forged certificate for Mozilla. Later in the year, the same person targeted the Dutch CA DigiNotar. The details of the attack were slightly different, but the end result was the same: he was able to issue himself several hundred valid certificates and this time went after domains owned by, among others, the Central Intelligence Agency. In the end, all of the major browser manufacturers had to revoke trust in the DigiNotar root CA.   The damage to the company was so bad that the Dutch government eventually took it over and later declared it bankrupt. Staggering, isn’t it? A lone attacker not only forced Microsoft, Apple and Mozilla to yank a root CA from their list of trusted roots, but he was also responsible for forcing a certificate authority out of business.

What has changed in our industry? Nothing, really. It’s not a question of “if” but “when” the attack will arrive on your assets.

Plus ça change, plus c’est la même chose, I suppose.