Did Big Data destroy the U.S. healthcare system?

The problem-plagued rollout of healthcare.gov has dominated the news in the USA. Proponents of the Affordable Care Act (ACA) argue that teething problems are inevitable and that's all these are. In fact, President Obama has been at pains to say the ACA is more than just a website. Opponents of the law see the website failures as one more indicator that it is unworkable.

The premise of the ACA is that young healthy persons will sign up in large numbers and help defray the costs expected from older persons, thus providing a good deal for all. It has also been argued that the ACA is a good deal for young healthies. The debate between proponents and opponents of the ACA hinges on this point. See, for example, the debate (shouting match?) between Dr. Zeke Emanuel and James Capretta on Fox News Sunday. In this segment, Capretta says the free market will solve the problem (but it hasn't so far, has it?) while Emanuel says it must be mandated.

So why then has the free market not solved the problem? Robert X. Cringely argues that big data is the culprit. Here's his argument:

– In the years before Big Data was available, actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale, so health insurance companies wanted as many policyholders as possible, making a profit on most of them.

– Enter Big Data. The cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis.

– Result? The health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t much need the healthcare system. The goal went from making a profit on most enrollees to making a profit on all enrollees.

The VAR's tale

The Canterbury Tales is a collection of stories written by Geoffrey Chaucer at the end of the 14th century. The tales were part of a storytelling contest between pilgrims going to Canterbury Cathedral, with the prize being a free meal on their return. While the original is in Middle English, here is the VAR's tale in modern-day English.

In the beginning, the Value Added Reseller (VAR) represented products to the channel and it was good. Software publishers of note always preferred the indirect sales model and took great pains to cultivate the VAR or channel, and it was good. The VAR maintained the relationship with the end user and understood the nuances of their needs. The VAR gained the trust of the end user by first understanding, then recommending and finally supporting their needs with quality, unbiased recommendations, and it was good. End users, in turn, trusted their VAR to look out for their needs and to present and recommend the most suitable products.

Then came the cloud, which appeared white and fluffy and unthreatening to the end user. But dark and foreboding to the VAR, the cloud was. It threatened to disrupt the established business model. It allowed the software publisher to sell product directly to the end user and bypass the VAR. And it was bad for the VAR. Google started it with Google Apps. Microsoft countered with Office 365. And it was bad for the VAR. And then McAfee did the same for their suite of security products. Now even the security-focused VARs took note. Woe is me, said the VAR. Now software publishers are selling directly to the end user and I am bypassed. Soon the day will come when cats and dogs are friends. What are we to do?

Enter Quentin Reynolds, who famously said, "If you can't lick 'em, join 'em." Can one roll back the cloud? No more than King Canute could stop the tide rolling in. This means what, then? It means a VAR must transition from being a reseller of product to a reseller of services or, better yet, a provider of services. In this way may the VAR regain relevance with the end user and cement the trust built up between them over the years.

Thus the VAR's tale may have a happy ending, wherein the end user has a more secure network, the auditor, being satisfied, returns to his keep, and the VAR is relevant again.

Which service would suit, you ask? Well, consider one that is not a commodity, one that requires expertise, one that is valued by the end user, one that is not set-and-forget. IT Security leaps to mind; it satisfies these criteria. Even more so within this field are SIEM, Log Management, Vulnerability scanning and Intrusion Detection, given their relevance to both security and regulatory compliance.

The air gap myth

As we work with various networks to implement IT Security in general and SIEM, Log Management and Vulnerability scanning in particular, we sometimes meet with teams that inform us that they have air gapped networks. An air gap is a network security measure that consists of ensuring physical isolation from unsecured networks (like the Internet, for example). The premise is that harmful packets cannot "leap" across the air gap. This type of measure is more often seen in utility and defense installations. Are air gaps really effective in improving security?

A study by the Idaho National Laboratory shows that in the utility industry, while an air gap may provide defense, there are many more points of vulnerability in older networks. Often, critical industrial equipment is of older vintage, from a time when insecure coding practices were the norm. Over the years, such systems have had web front ends grafted onto them to ease configuration and management. This makes them very vulnerable indeed. In addition, these older systems are often missing key controls such as encryption. When automation is added to such systems (to improve reliability or reduce operations cost), the potential for damage is quite high indeed.

In a recent interview, Eugene Kaspersky stated that the ultimate air gap had been compromised. The International Space Station, he said, suffered from virus epidemics. Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS). Kaspersky did not give details about when the infection took place, but it appears to have been prior to May of this year, when the United Space Alliance, the group which oversees the operation of the ISS, moved all systems entirely to Linux to make them more "stable and reliable."

Prior to this move the "dozens of laptops" used on board the space station had been running Windows XP. According to Kaspersky, the infections occurred on laptops used by scientists who used Windows as their main platform and carried USB sticks into space when visiting the ISS. A 2008 report on ExtremeTech said that a Windows XP laptop infected with the W32.Gammima.AG worm was brought onto the ISS by a Russian astronaut; the worm quickly spread to the other laptops on the station – all of which were running Windows XP.

If the Stuxnet infection from June 2010 wasn’t enough evidence, this should lay the air gap myth to rest.

End(er’s) game: Compliance or Security?

Whom do you fear more – The Auditor or The Attacker? The former plays by well-established rules, gives plenty of prior notice before arriving on your doorstep and is usually prepared to accept a Plan of Action with Milestones (POAM) in case of deficiencies. The latter gives no notice, never plays fair and will gleefully exploit any deficiencies. Notwithstanding this, most small enterprises actually fear the auditor more and will jump through hoops to minimize their interaction. It's ironic, because the auditor is really there to help; the attacker, obviously, is not.

While it is true that 100% compliance is not achievable (or for that matter desirable), it is also true that even the most basic steps toward compliance go a long way toward deterring attackers. The comparison to the merits of physical exercise is an easy one. How often have you heard it said that even mild physical exercise (taking the stairs instead of the elevator) gives you benefit? You don't have to be a gym rat, pumping iron for hours every day.

And so, to answer the question: What comes first, Compliance or Security? It’s Security really, because Compliance is a set of guidelines to help you get there with the help of an Auditor. Not convinced? The news is rife with accounts of exploits which in many cases are at organizations that have been certified compliant. Obviously there is no such thing as being completely secure, but will you allow the perfect to be the enemy of the good?

The National Institute of Standards and Technology (NIST) released Rev 4 of its seminal publication 800-53, one that applies to US Government IT systems. As budgets (time, money, people) are always limited, it all begins with risk classification: applying scarce resources in order of value. There are other guidelines, such as the SANS Institute Consensus Audit Guidelines, to help you make the most of limited resources.

You may not have trained like Ender Wiggin from a very young age through increasingly difficult games, but it doesn’t take a tactical genius to recognize “Buggers” as attackers and Auditors as the frenemies.

Looking for assistance with your IT Security needs? Click here for our newest publication and learn how you can simplify with services.

Three common SMB mistakes

Small and medium business (SMB) owners/managers understand that IT plays a vital role within their companies. However, many SMBs are still making simple mistakes with the management of their IT systems, which are costing them money.

1) Open Source Solutions In a bid to reduce overall costs, many SMBs look to open source applications and platforms. While such solutions appear attractive because of low or no license costs, the effort required for installation, configuration, operation, maintenance and ongoing upgrades should be factored in. The total cost of ownership of such systems is generally ignored or poorly understood. In many cases, they may require a more sophisticated (and therefore more expensive and harder to replace) user to drive them.

2) Migrating to the Cloud Cloud based services promise great savings, which is always music to an SMB manager/owner's ears, and the entire SaaS market has exploded in recent years. However, the cost savings are not always obvious or tangible. The Amazon EC2 service is often touted as an example of cost savings, but it very much depends on how you use the resource. See this blog for an example, and the back-of-the-envelope sketch after this list. More appropriate might be a hybrid system that keeps some of the data and services in-house, with others moving to the cloud.

3) The Knowledge Gap Simply buying technology, be it servers or software, does not provide any tangible benefit. You have to integrate it into the day-to-day business operation. This takes expertise both with the technology and your particular business.
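To make the EC2 point above concrete, here is a minimal cost sketch. Every price below is an assumption for illustration (not a current AWS or hardware quote); the takeaway is that duty cycle drives the answer.

```python
# Illustrative rent-vs-own comparison; all prices are assumed, not real rates.
EC2_PER_HOUR = 0.10        # assumed on-demand rate, $/hour
OWNED_3YR_COST = 3000.0    # assumed purchase + power + admin over 3 years

def cloud_cost_3yr(hours_per_day: float) -> float:
    """Three years of on-demand usage at the given daily duty cycle."""
    return hours_per_day * 365 * 3 * EC2_PER_HOUR

print(cloud_cost_3yr(24), OWNED_3YR_COST)  # 24/7: 2628.0 vs 3000.0 -- a wash
print(cloud_cost_3yr(4), OWNED_3YR_COST)   # bursty: 438.0 vs 3000.0 -- cloud wins
```

A steady 24/7 workload erodes most of the promised savings; a bursty one keeps them.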

In the SIEM space, these buying objections have often stymied SMBs from adopting the technology, despite its benefits and repeated advice from experts. To overcome them, we provide a managed SIEM service called SIEM Simplified.

The Holy Grail of SIEM

Merriam-Webster defines "holy grail" as "a goal that is sought after for its great significance". Mike Rothman of Securosis has described the "holy grail" for a security practitioner as twofold:

  1. A single alert specifying exactly what is broken, with relevant details and the ability to learn the extent of the damage
  2. A way to make the auditor go away, as quickly and painlessly as possible

How do you achieve the first goal? Here are the steps (sketched in code after the list):

  • Collect log information from every asset on the enterprise network
  • Filter it through vendor-provided intelligence on its significance
  • Filter it through local configuration to determine its significance
  • Gather and package related, relevant information – the so-called 5 Ws (Who, What, Where, When and Why)
  • Alert the appropriate person via the notification method they prefer (email, dashboard, ticket etc.)
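A minimal sketch of that pipeline follows. The event fields, severity tables and alerting behavior are invented for illustration; this is not the EventTracker API.

```python
# Toy pipeline: vendor intelligence, then local configuration, then package
# the 5 Ws and alert. All names and tables are illustrative.
from dataclasses import dataclass

VENDOR_SEVERITY = {"4625": "high", "4624": "info"}  # vendor-provided intelligence
LOCAL_OVERRIDES = {"4624": "ignore"}                # local configuration wins

@dataclass
class Event:
    event_id: str
    who: str; what: str; where: str; when: str; why: str  # the 5 Ws

def significance(evt: Event) -> str:
    vendor = VENDOR_SEVERITY.get(evt.event_id, "info")
    return LOCAL_OVERRIDES.get(evt.event_id, vendor)

def process(evt: Event) -> None:
    if significance(evt) == "high":
        # alert the appropriate person via their preferred channel
        print(f"ALERT: {evt.who} {evt.what} on {evt.where} at {evt.when} ({evt.why})")

process(Event("4625", "jdoe", "failed logon", "DC01", "02:13", "bad password"))
```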

This is a fundamental goal for SIEM systems like EventTracker, and over the ten-plus years of working on this problem, we've built a huge collection of intelligence to draw on to help configure and tune the system to your needs. Even so, there is an undefinable element of luck in having it all work out for you just when you need it. Murphy's Law says that luck is not on your side. So now what?

One answer we have found is Anomalous Behavior detection: learn "normal" behavior during a baseline period and draw the attention of a knowledgeable user to out-of-the-ordinary or new items. When you join these two systems, you get coverage for both known-knowns and unknown-unknowns.
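As a sketch of the idea (the pairing of user and process is an arbitrary illustration; the actual behavior engine considers many more attributes):

```python
# Learn "normal" (user, process) pairs during a baseline window, then flag
# never-before-seen pairs for review. Data and attributes are invented.
def learn_baseline(events):
    return {(e["user"], e["process"]) for e in events}

def anomalies(baseline, new_events):
    return [e for e in new_events
            if (e["user"], e["process"]) not in baseline]

baseline = learn_baseline([{"user": "alice", "process": "winword.exe"}])
print(anomalies(baseline, [{"user": "alice", "process": "mimikatz.exe"}]))
# -> the unfamiliar process is drawn to the attention of a knowledgeable user
```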

The second goal involves more discipline and less black magic. If you are familiar with the audit process, then you may know that it's all about preparation and presentation. The Duke of Wellington famously remarked that the "Battle of Waterloo was won on the playing fields of Eton," another testament to winning through preparation. Here again, to enable diligence, EventTracker Enterprise offers several features including report/alert annotation, a summary report on reports, incident acknowledgement and an electronic logbook to record mitigation and incident handling actions.

Of course, all this requires staff with the time and training to use the features. Lacking time and resources, you say? We've got you covered with SIEM Simplified, a co-sourcing option where we do the heavy lifting, leaving you to sip from the Cup of Jamshid.

Have neither the time, nor the tools, nor budget? Then the story might unfold like this.

SIEM vs Search Engine

The pervasiveness of Google in the tech world has placed the search function in a central locus of our daily routine. Indeed many of the most popular apps we use every day are specialized forms of search. For example:

  • E-mail is a search for incoming messages: by sender, by topic, by key phrase, by thread
  • Voice calling or texting is preceded by a search for a contact
  • Yelp is really a search for a restaurant
  • The browser address bar is in reality a search box

And the list goes on.

In the SIEM space, the rise of Splunk, especially when coupled with the promise of "big data", has led to speculation that SIEM is going to be eclipsed by the search function. Let's examine this a little more closely, especially from the viewpoint of an expert-constrained small and medium enterprise (SME), where data scientists are not idling aplenty.

Big data and accompanying technologies are, at present, developer-level elements that require assembly with application code or intricate setup and configuration before they can be used by typical system administrators, much less mid-level managers. To leverage the big-data value proposition of such platforms, the core skill required of such developers is thinking in terms of distributed computing, where the processing is performed in batches across multiple nodes. This is not a common skill set in the SME.
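For a flavor of that mindset, consider this toy example: instead of one loop over one file, the work is split into independent per-node passes whose partial results are merged. The log lines are invented.

```python
# Map/reduce-style count of failed logins -- the batch-across-nodes
# reframing that big-data platforms demand.
from collections import Counter
from functools import reduce

def map_partition(lines):           # runs independently on each node
    return Counter(line.split()[0] for line in lines if "FAILED" in line)

partitions = [                      # pretend each list lives on its own node
    ["10.0.0.5 FAILED login", "10.0.0.9 accepted login"],
    ["10.0.0.5 FAILED login"],
]
total = reduce(lambda a, b: a + b,  # merge the per-node partial counts
               (map_partition(p) for p in partitions))
print(total)                        # Counter({'10.0.0.5': 2})
```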

Assuming the assembly problem is somehow overcome, can you rejoice in your big-data-set and reduce the problems that SIEM solves to search queries? Well, maybe, if you are a data scientist and know how to use advanced analytics. However, SIEM functions include things like detecting cyber-attacks, insider threats and operational conditions such as app errors – all pesky real-time requirements, not well served by a search over yesterday's archived and indexed data. So now the data scientist must also have infosec skills and understand the IT infrastructure.

You can probably appreciate that decent infosec skills such as network security, host security, data protection, security event interpretation, and attack vectors do not abound in the SME. There is no reason to think that the shortage of cyber-security professionals and the ultra-shortage of data scientists and experienced Big Data programmers will disappear anytime soon.

So how can an SME leverage the promise of big-data now? Well, frankly, EventTracker has been grappling with the challenges of massive, diverse, fast data for many years, since before it became popularly known as Big Data. In testing on COTS hardware, our recent 7.4 release showed up to a 450% increase in receiver/archiver performance over the previous 7.3 release on the same hardware. This is not an accident. We have been thinking and working on this problem continuously for the last 10 years. It's what we do. This version also has advanced data-science methods built right into the EventVault Explorer, our data-mart engine, so that security analysts don't need to be data scientists. Our behavior module incorporates data visualization capabilities to help users recognize hidden patterns and relations in the security data – the so-called "Jeopardy" problem, wherein the answers are present in the data-set and the challenge is in asking the right questions.

Last but not least, we recognize that notwithstanding all the chest-thumping above, many (most?) SMEs are so resource constrained that a disciplined SOC-style approach to log review and incident handling is out of reach. Thus we offer SIEM Simplified, a service where we do the heavy lifting, leaving only the remediation to you.

Search engines are no doubt a remarkably useful innovation that has transformed our approach to many problems. However, SIEM satisfies specific needs in today’s threat, compliance and operations environment that cannot be satisfied effectively or efficiently with a raw big-data platform.

Resistance is futile

The Borg are a fictional alien race that serves as a terrifying antagonist in the Star Trek franchise. The phrase "Resistance is futile" is best delivered by Patrick Stewart in the episode The Best of Both Worlds.

When IBM demonstrated the power of Watson in 2011 by defeating two of the best humans to ever play Jeopardy, Ken Jennings, who had won 74 games in a row, admitted in defeat, "I, for one, welcome our new computer overlords."

As the Edward Snowden revelations about the collection of metadata for phone calls became known, the first thinking was that it would be technically impossible to store data for every single phone call – the cost would be prohibitive. Then Brewster Kahle, one of the engineers behind the Internet Archive, made this spreadsheet to calculate the storage cost to record and store one year's worth of all U.S. calls. He works the cost out to about $30M, which is non-trivial but by no means out of reach for a large US Gov't agency.
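The arithmetic is easy to reproduce. The numbers below are illustrative assumptions of mine, not Kahle's actual spreadsheet inputs, but they land in the same ballpark.

```python
# Back-of-the-envelope: store one year of all U.S. calls.
# Every input below is an assumption for illustration.
call_minutes_per_year = 2.0e12        # rough order of annual US call volume
bytes_per_minute = 8_000 / 8 * 60     # 8 kbps voice codec ~ 60 KB per minute
total_bytes = call_minutes_per_year * bytes_per_minute
cost_per_gb_year = 0.10               # assumed bulk storage cost, $/GB-year
cost = total_bytes / 1e9 * cost_per_gb_year
print(f"{total_bytes / 1e15:.0f} PB, ~${cost / 1e6:.0f}M per year")
# -> about 120 PB and roughly $12M/year, the same order as Kahle's estimate
```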

The next thought was – OK, so maybe it's technically feasible to record every phone call, but how could anyone possibly listen to every call? Well, obviously this is not possible, but can search terms be applied to locate "interesting" calls? Again, we didn't think so, until another N.S.A. document, cited by The Guardian, showed a "global heat map" that appeared to represent how much data the N.S.A. sweeps up around the world. If it is possible to efficiently mine metadata – data about who is calling or e-mailing – then the pressure for wiretapping and eavesdropping on communications becomes secondary.

This study in Nature shows that just four data points about the location and time of a mobile phone call make it possible to identify the caller 95 percent of the time.

IBM estimates that thanks to smartphones, tablets, social media sites, e-mail and other forms of digital communications, the world creates 2.5 quintillion bytes of new data daily. Searching through this archive of information is humanly impossible, but precisely what a Watson-like artificial intelligence is designed to do. Isn’t that exactly what was demonstrated in 2011 to win Jeopardy?

The Dark Side of Big Data

A study published in Nature looked at the phone records of some 1.5 million mobile phone users in an undisclosed small European country, and found it took only four different data points on the time and location of a call to identify 95% of the people. In the dataset, the location of an individual was specified hourly with a spatial resolution given by the carrier's antennas.
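The mechanics are simple enough to sketch. With a toy set of traces (entirely invented), four (time, cell) observations already narrow the field to a single subscriber:

```python
# Each trace is the set of (hour, cell tower) points for one subscriber.
traces = {
    "user_a": {("09h", "cell_12"), ("12h", "cell_40"),
               ("18h", "cell_07"), ("21h", "cell_12")},
    "user_b": {("09h", "cell_12"), ("12h", "cell_33"),
               ("18h", "cell_07"), ("21h", "cell_90")},
}
# Four outside observations of the target (say, from geotagged posts):
observed = {("09h", "cell_12"), ("12h", "cell_40"),
            ("18h", "cell_07"), ("21h", "cell_12")}
matches = [user for user, trace in traces.items() if observed <= trace]
print(matches)   # ['user_a'] -- the four points single out one subscriber
```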

Mobility data is among the most sensitive data currently being collected. It contains the approximate whereabouts of individuals and can be used to reconstruct individuals' movements across space and time. A simply anonymized dataset does not contain name, home address, phone number or other obvious identifiers. For example, the Netflix Challenge provided a training dataset of 100,480,507 movie ratings, each of the form <user, movie, date-of-grade, grade>, where the user was an integer ID.

Yet, if individual’s patterns are unique enough, outside information can be used to link the data back to an individual. For instance, in one study, a medical database was successfully combined with a voters list to extract the health record of the governor of Massachusetts. In the case of the Netflix data set, despite the attempt to protect customer privacy, it was shown possible to identify individual users by matching the data set with film ratings on the Internet Movie Database. Even coarse data sets provide little anonymity.

The issue is making sure the debate over big data and privacy keeps up with the science. Yves-Alexandre de Montjoye, one of the authors of the Nature article, says that the ability to cross-link data, such as matching the identity of someone reading a news article to posts that person makes on Twitter, fundamentally changes the idea of privacy and anonymity.

Where do you, and by extension your political representative, stand on this 21st Century issue?

The Intelligence Industrial Complex

If you are old enough to remember the 1988 U.S. presidential election, then the name Gary Hart may sound familiar. He was the clear frontrunner after completing his second Senate term from Colorado. He was caught in an extra-marital affair and dropped out of the race. He has since earned a doctorate in politics from Oxford and accepted an endowed professorship at the University of Colorado at Denver.

In this analysis, he quotes President Dwight Eisenhower, “…we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist.”

His point is that the US now has an intelligence-industrial complex composed of close to a dozen and a half federal intelligence agencies and services, many of which are duplicative, plus, in the last decade or two, a growing private-sector intelligence world. It is dangerous to have a technology-empowered government capable of amassing private data; it is even more dangerous to privatize this Big Brother world.

As has been extensively reported recently, the Foreign Intelligence Surveillance Act (FISA) courts are required to issue warrants, as the Fourth Amendment (against unreasonable search and seizure) requires, upon a showing that the national security is endangered. This was instituted in the early 1970s following findings of serious unconstitutional abuse of power. He asks, "Is the Surveillance State — the intelligence-industrial complex — out of the control of the elected officials responsible for holding it accountable to American citizens protected by the U.S. Constitution?"

We should not have to rely on whistle-blowers to protect our rights.

In a recent interview with Charlie Rose of PBS, President Obama said, "My concern has always been not that we shouldn't do intelligence gathering to prevent terrorism, but rather: Are we setting up a system of checks and balances?" Despite this, he avoided addressing the fact that no request to a FISA court has ever been rejected, or that companies that provide data on their customers are under a gag order that prevents them even from disclosing the requests.

Is the Intelligence-Industrial complex calling the shots? Does the President know a lot more than he can reveal? Clearly he is unwilling to even consider changing his predecessor's policies.

It would seem that Senator Hart has a valid point. If so, it's a lot more consequential than Monkey Business.

Introducing EventTracker Log Manager

The IT team of a Small Business has it the worst. Just 1-2 administrators to keep the entire operation running, which includes servers, workstations, patching, anti-virus, firewalls, applications, upgrades, password resets…the list goes on. It would be great to have 25 hours in a day and 4 hands per admin just to keep up. Adding security or compliance demands to the list just makes it that much harder.

The path to relief? Automation, in one word. Something that you can “fit-and-forget”.

You need a solution which gathers all security information from around the network – platforms, network devices, apps etc. – and knows what to do with it. One that retains it all efficiently and securely for later analysis if needed, displays it in a dashboard for you to examine at your convenience, alerts you via e-mail/SMS etc. if absolutely necessary, indexes it all for fast search, and finds new or out-of-ordinary patterns by itself.

And you need it all in a software-only package that is quickly installed on a workstation or server. That’s what I’m talking about. That’s EventTracker Log Manager.

Designed for the 1-2 sys admin team.
Designed to be easy to use, quick to install and deploy.
Based on the same award-winning technology that SC Magazine awarded a perfect 5-star rating to in 2013.

How do you spell relief? E-v-e-n-t-T-r-a-c-k-e-r  L-o-g  M-a-n-a-g-e-r.
Try it today.

Secure your electronic trash

At the typical office, computer equipment becomes obsolete or slow and periodically requires replacement or refresh. This includes workstations, servers, copy machines, printers etc. Users who get the upgrades are inevitably pleased; they carefully move their data to the new equipment and happily release the old. What happens after this? Does someone cart it off to the local recycling center? Do you call for a dumpster? This is likely the case for the Small Medium Enterprise, whereas large enterprises may hire an electronics recycler.

This blog by Kyle Marks appeared in the Harvard Business Review and reminds us that sensitive data can just as easily leak via decommissioned electronics.

A SIEM solution like EventTracker is effective when leakage occurs from connected equipment or even mobile laptops or those that connect infrequently. However, disconnected and decommissioned equipment is invisible to a SIEM solution.

If you are subject to regulatory compliance, leakage is leakage. Data security laws mandate that organizations implement "adequate safeguards" to ensure privacy protection of individuals. That applies equally when the leakage comes from your electronic trash; you are still bound to safeguard the data.

Marks points out that detailed tracking data, however, reveals a troubling fact: four out of five corporate IT asset disposal projects had at least one missing asset. More disturbing is the fact that 15% of these “untracked” assets are devices potentially bearing data such as laptops, computers, and servers.

Treating IT asset disposal as a "reverse procurement" process will deter insider theft. This is something that EventTracker cannot help with, but it is equally important in addressing compliance and security regulations.

You often see a gumshoe or Private Investigator in the movies conduct trash archaeology when looking for clues. Now you know why.

What did Ben Franklin really mean?

In the aftermath of the disclosure of the NSA program called PRISM by Edward Snowden to a reporter at The Guardian, commentators have gone into overdrive and the most iconic quote is one attributed to Benjamin Franklin “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.

It was amazing that something said over 250 years ago would be so apropos. Conservatives favor an originalist interpretation of documents such as the US Constitution (see Federalist Society) and so it seemed possible that very similar concerns existed at that time.

Trying to get to the bottom of this quote, Ben Wittes of Brookings wrote that it does not mean what it seems to say.

The words appear originally in a 1755 letter that Franklin is presumed to have written on behalf of the Pennsylvania Assembly to the colonial governor during the French and Indian War. The Assembly wished to tax the lands of the Penn family, which ruled Pennsylvania from afar, to raise money for defense against French and Indian attacks. The Penn family was unwilling to acknowledge the power of the Assembly to tax them, and the Governor, being an appointee of the Penn family, kept vetoing the Assembly's effort. The Penn family later offered cash to fund defense of the frontier – as long as the Assembly would acknowledge that it lacked the power to tax the family's lands.

Franklin was thus complaining of the choice facing the legislature between being able to make funds available for frontier defense versus maintaining its right of self-governance. He was criticizing the Governor for suggesting it should be willing to give up the latter to ensure the former.

The statement is typical of Franklin's style and rhetoric, which also includes "Sell not virtue to purchase wealth, nor Liberty to purchase power." While the circumstances were quite different, it seems the general principle he was stating is indeed relevant to the Snowden case.

What, me worry?

Alfred E. Neuman is the fictitious mascot and cover boy of Mad Magazine. Al Feldstein, who took over as editor in 1956, said, "I want him to have this devil-may-care attitude, someone who can maintain a sense of humor while the world is collapsing around him".

The #1 reason management doesn’t get security is the sense that “It can’t happen to me” or “What, me worry?” The general argument goes – we are not involved in financial services or national defense. Why would anyone care about what I have? And in any case, even if they hack me, what would they get? It’s not even worth the bother. Larry Ponemon writing in the Harvard Business Review captures this sentiment.

Attackers are increasingly targeting small companies, planting malware that not only steals customer data and contact lists but also makes its way into the computer systems of other companies, such as vendors. Hackers might also be more interested in your employees than you’d think. Are your workers relatively affluent? If so, chances are the hackers are way ahead of you and are either looking for a way into your company, or are already inside, stealing employee data and passwords which (as they well know) people tend to reuse for all their online accounts.

Ponemon says "It's literally true that no company is immune anymore. In a study we conducted in 2006, approximately 5% of all endpoints, such as desktops and laptops, were infected by previously undetected malware at any given time. In 2009-2010, the proportion was up to 35%. In a new study, it looks as though the figure is going to be close to 54%, and the array of infected devices is wider too, ranging from laptops to phones."

In the wake of the recent revelations by Edward Snowden, who blew the whistle on the NSA program called "Prism", many prominent voices have said they are OK with the program and have nothing to hide. This is another aspect of "What, me worry?" Benjamin Franklin had it right many years ago: "Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety."

Learning from LeBron

Thinking about implementing analytics? Before you do that, ask yourself “What answers do I want from the data?”

After the Miami Heat lost the 2011 NBA playoffs to the Dallas Mavericks, many armchair MVPs were only too happy to explain that LeBron was not a clutch player and didn't have what it takes to win championships in this league. Both LeBron and Coach Erik Spoelstra, however, were determined to convert that loss into a teaching moment.

Analytics was indicated. But what was the question?  According to Spoelstra, “It took the ultimate failure in the Finals to view LeBron and our offense with a different lens. He was the most versatile player in the league. We had to figure out a way to use him in the most versatile of ways — in unconventional ways.” In the last game of the 2011 Finals, James was almost listlessly loitering beyond the arc, hesitating, shying away, and failing to take advantage of his stature. His last shot of those Finals was symbolic: an ill-fated 25-foot jump shot from the outskirts of the right wing — his favorite 3-point shot location that season.

LeBron decided the correct answer was to work on the post-up game during the off season. He spent a week learning from the great Hakeem Olajuwon. He brought his own videographer to record the sessions for later review. LeBron arrived early for each session and was stretched and ready to go every time. He took the lessons to the gym for the rest of the off season. It worked. James emerged from that summer transformed. “When he returned after the lockout, he was a totally different player,” Spoelstra says. “It was as if he downloaded a program with all of Olajuwon’s and Ewing’s post-up moves. I don’t know if I’ve seen a player improve that much in a specific area in one offseason. His improvement in that area alone transformed our offense to a championship level in 2012.”

The true test of analytics isn't just how good they are, but how committed you are to using the data. At the 2012 NBA Finals, LeBron won the MVP title and Miami, the championship.

The lesson to learn here is to know what answers you are seeking from the data and to commit to going where the data takes you.

Two classes of cyber threat to critical infrastructure

John Villasenor describes two classes of cyber threat confronting critical infrastructure. Some, like the power grid, are viewed by everyone as critical, and the number of people who might credibly target them is correspondingly smaller. Others, like the internal networks in the Pentagon, are viewed as a target by a much larger number of people. Providing a high level of protection to those systems is extremely challenging, but feasible. Securing them completely is not.

While I would agree that fewer people are interested in or able to hack the power grid, it reminds me of the "insider threat" problem that enterprises face. When an empowered insider who has legitimate access goes rogue, the threat can be very hard to locate and the damage can be incredibly high. Most defense techniques for the insider threat depend on monitoring and behavior anomaly detection. Adding to the problem, systems like the power grid are harder to upgrade and harden. The basic methods to restrict access and enforce authentication and activity monitoring would be applicable. No doubt, this was all true for the Natanz enrichment plant in Iran, and it still got hacked by Stuxnet. That system was apparently infected by a USB device carried in by an external contractor, so it would seem that restricting access and activity monitoring may have helped detect it sooner.

In the second class of threat, exemplified by the internal networks at the Pentagon, one assumes that all classic protection methods are enforced. Situational awareness in such cases becomes important. A local administrator who relies entirely on some central IT team to patrol, detect and inform him in time is expecting too much. It is said that God helps those who help themselves.

Villasenor also says: “There is one number that matters most in cybersecurity. No, it’s not the amount of money you’ve spent beefing up your information technology systems. And no, it’s not the number of PowerPoint slides needed to describe the sophisticated security measures protecting those systems, or the length of the encryption keys used to encode the data they hold. It’s really much simpler than that. The most important number in cybersecurity is how many people are mad at you.”

Perhaps we should also consider those interested in cybercrime? The malware industrial complex is booming and the average price for renting botnets to launch DDoS attacks is plummeting.

The Post Breach Boom

A basic requirement for security is that systems be patched and the security products like antivirus be updated as frequently as possible. However, there are practical reasons which limit the application of updates to production systems. This is often the reason why the most active attacks are the ones which have been known for many months.

A new report from the Ponemon Institute polled 3,529 IT and IT security professionals in the U.S., Canada, UK, Australia, Brazil, Japan, Singapore and United Arab Emirates, to understand the steps they are taking in the aftermath of malicious and non-malicious data breaches. Here are some highlights:

On average, it is taking companies nearly three months (80 days) to discover a malicious breach and then more than four months (123 days) to resolve it.

    • One third of malicious breaches are not being caught by any of the companies’ defenses – they are instead discovered when companies are notified by a third party, either law enforcement, a partner, customer or other party – or discovered by accident. Meanwhile, more than one third of non-malicious breaches (34 percent) are discovered accidentally.
    • Nearly half of malicious breaches (42 percent) targeted applications and more than one third (36 percent) targeted user accounts.
    • On average, malicious breaches ($840,000) are significantly more costly than non-malicious data breaches ($470,000). For non-malicious breaches, lost reputation, brand value and image were reported as the most serious consequences by participants. For malicious breaches, organizations suffered lost time and productivity followed by loss of reputation.

Want an effective defense but wondering where to start? Consider SIEM Simplified.

Cyber Attacks: Why are they attacking us?

The news sites are abuzz with reports of Chinese cyber attacks on Washington, DC institutions, both government and NGOs. Are you a possible target? It depends. Attackers funded by nation states have specific objectives and will pursue them. So if you are a dissident or enabling one, or have secrets that the attacker wants, then you may be a target. A law firm with access to intellectual property may be a target, but an individual has much more reason to fear cyber criminals who seek credit card details than a Chinese attack.

As Sun Tzu noted in the Art of War, “Know your enemy and know yourself, find naught in fear for 100 battles.”

So what are the Chinese after? Ezra Klein has a great piece in the Washington Post. He outlines three reasons:

1) Asymmetric warfare – the US defense budget is larger than those of the next 13 countries combined and has been that way for a long, long time. In any conventional or atomic war, no conceivable adversary has any chance. An attack on critical infrastructure may help level the playing field. Operators of critical infrastructure and of course US DoD locations are at risk and should shore up defenses.

2) Intellectual property theft – China and Russia want to steal the intellectual property (IP) of American companies, and much of that property now lies in the cloud or on an employee's hard drive. Stealing those blueprints and plans and ideas is an easy way to cut the costs of product development. Law firms or employees with IP need protection.

3) Chinese intelligence services [are] eager to understand how Washington works. Hackers often are searching for the unseen forces that might explain how the administration approaches an issue, experts say, with many Chinese officials presuming that reports by think tanks or news organizations are secretly the work of government officials — much as they would be in Beijing. This is the most interesting explanation but the least relevant to the security practitioner.

If none of these apply to you, then you should be worried about cyber criminals who are out for financial gain. They are after classic money-making data like credit card or Social Security numbers, which are used to defraud Visa/Mastercard or perpetrate Medicare fraud. This is by far more widespread than any other type of hacking.

It turns out that many of the tools and tactics used by all these enemies are the same. Commodity attacks tend to be opportunistic and high volume, while persistent attacks tend to be low-and-slow. This in turn means the defenses for one would apply to the other, and often the most basic approaches are also the most effective. Effective approaches require discipline and dedication most of all. Sadly, this is the hardest commitment for the small and medium enterprises that are most vulnerable. If this is you, then consider a service like SIEM Simplified as an alternative to doing nothing.

Distinguished Warfare Medal for cyber warriors

In what was probably his last move as defense secretary, Leon E. Panetta announced on February 13, 2013 the creation of a new type of medal for troops engaged in cyber-operations and drone strikes, saying the move "recognizes the changing face of warfare." The official description said that it "may not be awarded for valor in combat under any circumstances," which is unique. The idea was to recognize accomplishments that are exceptional and outstanding, but not bounded in any geographic or chronological manner – that is, not taking place in the combat zone. This recognized that people can now do extraordinary things because of the new technologies used in war.

On April 16, 2013, barely two months later, incoming Defense Secretary Chuck Hagel withdrew the medal. The medal was the first combat-related award to be created since the Bronze Star in 1944.

Why was it thought to be necessary? Consider the mission that got Abu Musab al-Zarqawi, the leader of al-Qaida in Iraq, in June 2006. Reporting showed that U.S. warplanes dropped two 500-pound bombs on a house in which Zarqawi was meeting with other insurgent leaders. A U.S. military spokesman said coalition forces pinpointed Zarqawi's location after weeks of tracking the movements of his spiritual adviser, Sheik Abdul Rahman, who also was killed in the blast. A team of unmanned aerial system (drone) operators tracked him down; it took over 600 hours of mission operational work to finally pinpoint him. They put the laser target on the compound that this terrorist leader was in, and then an F-16 pilot flew six minutes, facing no enemy fire, and dropped the bombs – computer-guided of course – on that laser. The pilot was awarded the Distinguished Flying Cross.

The idea behind the medal was that drone operators can be recognized as well. The Distinguished Warfare Medal was to rank just below the Distinguished Flying Cross. It was to have precedence over — and be worn on a uniform above — the Bronze Star with “V” device, a medal awarded to troops for specific heroic acts performed under fire in combat. It was intended to recognize the magnitude of the achievement, not the personal risk taken by the recipient.

The decision to cancel the medal reflects uneasiness about the extent to which UAVs are being used in war rather than any question about the skill and dedication of the operators. In announcing the move, Secretary Hagel said a "device" will be affixed to existing medals to recognize those who fly and operate drones, whom he described as "critical to our military's mission of safeguarding the nation." It also did not help that the medal had a higher precedence than a Purple Heart or Bronze Star.

There is no getting away from it, warfare in the 21st Century is increasingly in the cyber domain.

Interpreting logs, the Tesla story

Did you see the NY Times review by John Broder, which was critical of the Tesla Model S? Tesla CEO Elon Musk was not pleased. The two are not arguing over interpretations or anecdotal recollections of experiences; instead they are arguing over basic facts — things that are supposed to be indisputable in an environment with cameras, sensors and instantly searchable logs.

The conflicting accounts — both described in detail — carry a lesson for those of us involved in log interpretation. Data is supposed to be the authoritative alternative to memory, which is selective in its recollection. As Bianca Bosker said, “In Tesla-gate, Big Data hasn’t made good on its promise to deliver a Big Truth. It’s only fueled a Big Fight.”

This is a familiar scenario if you have picked through logs as a forensic exercise. We can (within limitations) try to answer four of the five W questions – Who, What, When and Where – but the fifth one, Why, is elusive and brings the analyst into the realm of guesswork.

The Tesla story is interesting because interested observers are trying to deduce why the reporter was driving around the parking lot – to find the charger receptacle or to deliberately drain the battery and make for a bad review. Alas the data alone cannot answer this question.

In other words, relying on data alone, big data included, to plumb human intention is fraught with difficulty. An analyst needs context.

What is your risk appetite?

In Jacobellis v. Ohio (1964), Justice Potter Stewart famously said of hard-core pornography, "I know it when I see it." This is not unlike the position from which many business leaders confront the concept of "risk".

When a business leader can describe and identify the risk they are willing to accept, then the security team can put appropriate controls in place. Easy to say, but so very hard to do. That's because the quantification and definition of risk vary widely depending on the person, the business unit, the enterprise and also the vertical industry segment.

What is the downside of not being able to define risk? It leaves the security team guessing about what controls are appropriate. Inadequate controls expose the business to leakage and loss, whereas onerous controls are expen$ive and even offensive to users.

What do you do about it? Communication between the security team and business stakeholders is essential. We find that scenarios that demonstrate and personalize the impact of risk resonate best. It’s also useful to have a common vocabulary as the language divide between the security team and business stakeholders is a consistent problem. Where possible, use terminology that is already in use in the business instead of something from a standard or framework.

Five telltale signs that your data security is failing and what you can do about it


1) Security controls are not proportional to the business value of data

Protecting every bit of data as if it were gold bullion in Ft. Knox is not practical. Control complexity (and therefore cost) must be proportional to the value of the items under protection. Loose change belongs on the bedside table; the crown jewels belong in the Tower of London. If you haven't classified your data to know which is which, then the business stakeholders have no incentive to be involved in its protection.

2) Gaps between data owners and the security team

Data owners usually understand only business processes and activities and the related information – not the "data". Security teams, on the other hand, understand "data" but usually not its relation to the business, and therefore its criticality to the enterprise. Each needs to take a half step into the other's domain.

3) The company has never been penalized

Far too often, toothless regulation encourages a wait-and-see approach. Show me an organization that has failed an audit and I’ll show you one that is now motivated to make investments in security.

4) Stakeholders only see value in sharing, not the risk of leakage

Data owners get upset and push back against involving security teams in the setup of access management. Open access encourages sharing and improves productivity, they say. It’s my data, why are you placing obstacles in its usage? Can your security team effectively communicate the risk of leakage in terms that the data owner can understand?

5) Security is viewed as a hurdle to be overcome

How large is the gap between the business leaders and the security team?  The farther apart they are, the harder it is to get support for security initiatives. It helps to have a champion, but over-dependence on a single person is not sustainable. You need buy-in from senior leadership.

SIEM Simplified for the Security No Man’s Land

In this blog post, Mike Rothman described the quandary facing the midsize business. With a few hundred employees, they have information that hackers want and try to get, but not the budget or manpower to fund dedicated IT Security types, nor the volume of business to interest a large outsourcer. This puts them in no-man's land with a bull's-eye on their backs. Hackers are highly motivated to monetize their efforts and will therefore cheerfully pick the lowest-hanging fruit they can get. It's a wicked problem to be sure, and one that we've been focused on addressing in our corner of the IT Security universe for some years now.

Our solution to this quandary is called SIEM Simplified℠, and it stems from the acceptance that, as a vendor, we could keep adding all sorts of bells and whistles to our product offering only to see an ever-shrinking percentage of users actually use them in the manner they were designed. Why? Simply put, who has the time? Just as Mike says, our customers are people in mid-size businesses, wearing multiple hats, fighting fires and keeping things operational. SIEM Simplified is the addition of an expert crew at the EventTracker Control Center in Columbia, MD that does the basic blocking and tackling, which is the core ingredient if you want to put points on the board. Sharing the crew across multiple customers reduces the cost for each customer and increases the likelihood of finding the needle in the haystack. And because it's our bread and butter, we can't afford to get tired or take a vacation or fall sick and fall behind.

A decade-long focus on this problem as it relates to mid-size businesses has allowed us to tailor the solution to such needs. We use the behavior module to quickly spot new or out-of-ordinary patterns, and a wealth of existing reports and knowledge to do the routine but essential legwork of log review. Mike was correct in pointing out that "folks in security no-man's land need …. an advisor to guide them … They need someone to help them prioritize what they need to do right now." SIEM Simplified delivers. More information here.

EventTracker Recommendation Engine

Online shopping continues to bring more and more business to "e-tailers." Comscore says there was a 16% increase in holiday shopping this past season over the previous season. Some of this is attributed to the "recommendations" helpfully shown by the giants of the game, such as Amazon.

Here is how Amazon describes its recommendation algorithm: "We determine your interests by examining the items you've purchased, items you've told us you own, items you've rated, and items you've told us you like. We then compare your activity on our site with that of other customers, and using this comparison, are able to recommend other items that may interest you."

Did you know that EventTracker has its own recommendation engine? It's called Behavior Correlation and is part of EventTracker Enterprise. Just as Amazon learns about your browsing and buying habits and uses them to "suggest" other items, so EventTracker auto-learns what is "normal" in your enterprise during an adaptive learning period. This can be as short as 3 days or as long as 15 days, depending on the nature of your network. In this period, various items such as IP addresses, users, administrators, process names, machines, USB serial numbers etc. are learned. Once learning is complete, data from the most recent period is compared to the learned behavior to pinpoint both unusual activities and those never before seen. EventTracker then "recommends" that you review these to determine if they point to trouble.

Learning never ends, so the baseline is adaptive, refreshing itself continuously. User-defined rules can also be implemented, wherein the comparison periods are specified rather than learned, and comparisons are performed not once a day but as frequently as once a minute.
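In spirit, the adaptive baseline works something like this sketch; the items and the daily cadence are illustrative, not the actual Behavior Correlation internals.

```python
# Rolling baseline: compare the latest period against what has been learned,
# recommend review of new items, then fold them into the baseline.
known_items = {"10.1.1.1", "svchost.exe", "usb:A1B2"}   # learned over 3-15 days

def review_period(latest_items: set) -> None:
    new_items = latest_items - known_items
    if new_items:
        print("Recommend review of never-before-seen items:", new_items)
    known_items.update(latest_items)    # learning never ends

review_period({"10.1.1.1", "mimikatz.exe"})
# -> Recommend review of never-before-seen items: {'mimikatz.exe'}
```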

If you shop online and feel drawn to a “recommendation”, pause to reflect how this concept can also improve your IT security by looking at logs.

Cyber Security Executive Order

Based on early media reports, the Cyber Security executive order would seem to portend voluntary compliance on the part of U.S.-based companies with security standards developed in concert with the federal government.  Setting aside the irony of an executive order to voluntarily comply with standards that are yet to be developed, how should private and public sector organizations approach cyber security given today's exploding threatscape and limited information technology budgets?  How best to prepare for more bad guys, more threats, more imposed standards with less people, time and money?

Back to basics.  First let’s identify the broader challenges: of course you’re watching the perimeter with every flavor of firewall technology and multiple layers of IDS, IPS, AV and other security tools.  But don’t get too comfortable: every organization that has suffered a damaging breach had all those things too.  Since every IT asset is a potential target, every IT asset must be monitored.  Easy to describe, hard to implement. Why?

Challenge number one: massive volumes of log data.  Every organization running a network with more than 100 nodes is already generating millions of audit and event logs.  Those logs are generated by users, administrators, security systems, servers, network devices and other paraphernalia.  They generate the raw data that tracks everything going on from innocent to evil, without prejudice.

Challenge number two: unstructured data. Despite talk and movement toward audit log standards, log data remains widely variable with no common format across platforms, systems and applications, and no universal glossary to define tokens and values.  Even if every major IT player from Microsoft to Oracle (and HP and Cisco), along with several thousand other IT organizations were to adopt uniform, universal log standards today, we would still have another decade or two of the dreaded “legacy data” with which to contend.

Challenge number three: cryptic or non-human-readable logs. Unstructured data is difficult enough, but further adding to the complexity is that most log data content and structure are defined by developers for developers or administrators.  Don't assume that security officers and analysts, senior management, help desk personnel or even tenured system administrators can quickly and accurately glance at a log and immediately understand its relevance or, more importantly, what to do about it.
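To make challenges two and three concrete, here is a sketch that normalizes two unrelated raw formats into one structure with human-readable fields. The log lines and regular expressions are invented for illustration, not taken from any product.

```python
import re

# Two unrelated raw formats, one normalized, human-readable structure.
PATTERNS = [
    (re.compile(r"sshd\[\d+\]: Failed password for (\S+) from (\S+)"),
     "logon_failure"),                                   # syslog-style line
    (re.compile(r"EventID=4625 .*TargetUserName=(\S+) .*IpAddress=(\S+)"),
     "logon_failure"),                                   # Windows-style line
]

def normalize(line: str):
    for pattern, action in PATTERNS:
        m = pattern.search(line)
        if m:
            return {"action": action, "user": m.group(1), "src": m.group(2)}
    return None   # the dreaded "legacy data" we cannot parse yet

print(normalize("sshd[2212]: Failed password for root from 10.9.8.7"))
print(normalize("EventID=4625 TargetUserName=admin IpAddress=10.9.8.7"))
# both raw lines yield the same normalized record
```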

Solution?  Use what you already have more wisely.  Implement a log monitoring solution that will ingest all of the data you already generate (and largely ignore until after you discover there's a real problem), process it in real-time using built-in intelligence, and present the analysis immediately in the form of alerts, dashboards, reports and search capabilities.  Take a poorly designed and voluminous asset (audit logs) and turn it into actionable intelligence.  It isn't as difficult as it sounds, though it requires rigorous discipline and a serious time commitment.

Cyber criminals employ the digital equivalent of what our military refers to as an “asymmetrical tactic.” Consider a hostile emerging super power in Asia that directly or indirectly funds a million cyber warriors at the U.S. equivalent of $10 a day; cheap labor in a global economy.  No organization, not even the federal government, the world’s largest bank or a 10 location retailer, has unlimited people, time and money to defend against millions of bad guys attacking on a much lower (asymmetrical) operational budget.

SIEM in the Social Era

The value proposition of our SIEM Simplified offering is that you can leave the heavy lifting to us. What is undeniable is that getting value from SIEM solutions requires patiently sifting through millions of logs and dozens of reports and alerts to find the nuggets that matter. It's quite similar to detective work.

But does that not mean you are somehow giving up power? Letting someone else get a foothold in your domain?

A valid question, but consider this from Nilofer Merchant, who says: “In the Social Era, value will be (maybe even already is) no longer created primarily by people who work for you or your organization.”

Isn’t power about being the boss?
The Social Era has disrupted the traditional view of power, which has always been about title, span of control and budget. Look at Wikipedia or Kickstarter, where being powerful is about championing an idea. With SIEM Simplified, you remain in control, notified as necessary and in charge of any remediation.

Aren’t I paid to know the answer?
Not really. Being the keeper of all the answers has become less important with the rise of fantastic search tools and the ease of sharing, compared to, say, even 10 years ago. Merchant says: “When an organization crowns a few people as chiefs of answers, it forces ideas to move slowly up and down the hierarchy, which makes the organization resistant to change and less competitive. The Social Era raises the pressure on leaders to move from knowing everything to knowing what needs to be addressed and then engaging many people in solving that, together.” Our staff does this every day, across many different environments. This allows us to see the commonalities and bring issues to the fore.

Does it mean blame if there is failure and no praise if it works?
In a crowdsourced environment, there are many more hands in every pie. In practice, this leads to more ownership by more people, not less. Consider Wikipedia as an example. It does require different skills: collaborating instead of commanding, sharing power rather than hoarding it. After all, we are only successful if you are. Indeed, as a provider of the service, we are always mindful that this applies to us more than it does to you.

As a provider of services, we see clearly that the most effective engagements are the ones where we can avoid the classic us/them paradigm and instead act as a badgeless team. The Hubble Space Telescope is an excellent example of this type of effort.

It’s a Brave New World, and it’s coming at you, ready or not.

Big Data and Information Inequality

Mike Wu, writing in TechCrunch, observed that in all realistic data sets (especially big data), the amount of information one can extract from the data is always much less than the data volume (see figure below).

[Figure: information extracted versus data volume]

In his view, this gap makes it easy to dismiss the value of big data as hugely exaggerated. Yet he goes on to infer that it is actually a strong argument for why we need even bigger data: because the amount of valuable insight we can derive from big data is so very tiny, we need to collect even more data and use more powerful analytics to increase our chance of finding it.

Now machine data (aka log data) is certainly big data, and it is certainly true that obtaining insights from such datasets is a painstaking (and often thankless) job, but I wonder if this means we need even more data. Methinks we need to be able to better interpret the big data set and its relevance to “events”.
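There is a simple way to see the data-versus-information gap for yourself. The sketch below uses Shannon entropy over distinct lines as a crude proxy for information content (a rough intuition-builder, not a rigorous measure); a highly repetitive log carries a tiny fraction of the information its raw size suggests:

```python
import math
from collections import Counter

# A synthetic "log": 9,990 identical heartbeats plus 10 interesting lines.
lines = ["heartbeat OK"] * 9990 + [f"login failure from 10.0.0.{i}" for i in range(10)]

counts = Counter(lines)
total = len(lines)
# Shannon entropy of the line distribution, in bits per line.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

raw_bits = sum(len(line) for line in lines) * 8
print(f"raw size: {raw_bits:,} bits; information: ~{entropy:.3f} bits per line")
```

Collecting ten times more heartbeats would balloon the raw size while adding almost no information; more of the same noise does not raise the signal.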

Over the past two years, we have been deeply involved in “eating our own dog food,” as it were. At multiple EventTracker installations that are nationwide in scope and span thousands of log sources, we have been working to extract insights for presentation to the network owners. In some cases, this is done with a lot of cooperation from the network owner, and we have a good understanding of the IT assets and the actors who use or abuse them. We find that with such involvement we are better able to risk-prioritize what we observe in the data set and map it to business concerns. In other cases, where there is less interaction with the network owner and we know less about the actors or the relative criticality of assets, we fall back on past experience and/or vendor-provided information as to what constitutes an incident. It is the same dataset in both cases, but there is more value in one case than the other.

To say it another way: to get more information from the same data, we need other types of context to extract signal from noise. Enabling logging at a more granular level from the same devices, thereby generating an ever bigger dataset, won’t increase the signal level. EventTracker can merge change audit data, netflow information and vulnerability scan data to enable a greater signal-to-noise ratio. That is a big deal.
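A simplified illustration of what that merge buys you; the criticality ratings, vulnerability data and scoring rule here are all invented for the sketch, but the shape of the idea is the same:

```python
# Sketch: enrich a raw alert with asset and vulnerability context so the
# same event scores differently on different targets.
ASSET_CRITICALITY = {"payroll-db": 10, "dev-sandbox": 2}           # assumed ratings
OPEN_VULNS = {"payroll-db": ["CVE-2012-1675"], "dev-sandbox": []}  # from scan data

def risk_score(event):
    base = event["severity"]                              # e.g. 1-10 from the log source
    criticality = ASSET_CRITICALITY.get(event["host"], 5) # unknown hosts: middle value
    vuln_bonus = 3 if OPEN_VULNS.get(event["host"]) else 0
    return base * criticality + vuln_bonus

event = {"host": "payroll-db", "severity": 4, "type": "repeated_login_failure"}
print(risk_score(event))   # 43 -- the same event on dev-sandbox would score 8
```

Same log line, very different priority once context is attached; that is the extra information that does not come from collecting more data.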

Small Business: too small to care?

Small businesses around the world tend to be more innovative and cost-conscious. Most often, the owners tend to be younger and therefore more attuned to being online. The efficiencies that come from being computerized and connected are more obvious and attractive to them. But we know that if you are online, then you are vulnerable to attack. Are these small businesses too small for hackers to care?

Two recent reports say no.

In the UK, the Information Security Breaches Survey 2012, published by PwC, shows:

  • 76% of small businesses had a security breach
  • 15% of small businesses were hit by a denial-of-service attack
  • 20% of small businesses lost confidential data, and 80% of these breaches were serious
  • The average cost of a small business’s worst security breach was between £15,000 and £30,000
  • Only 8% of small businesses monitor what their staff post on social sites
  • 34% of small businesses allow smartphones and tablets to connect to their network but have done nothing to secure them
  • On average, IT security consumes 8% of IT spending, but 58% make no attempt to evaluate the effectiveness of the expenditure

From the US, the 2012 Verizon Data Breach Investigations Report shows:

  • Restaurants and POS systems are popular targets.
  • Companies with 11-100 employees, across 36 countries, had the maximum number of breaches.
  • The top threats to small businesses were external attacks against servers.
  • 83% of the theft was by professional cybercriminals, for profit.
  • Keyloggers designed to capture user input were present in 48% of breaches.
  • The most common malware injection vector was installation by a remote attacker.
  • Payment card information and authentication credentials were the most stolen data.
  • The initial compromise required only basic methods with no customization; automated scripts can do it.
  • More than 79% of attacks were opportunistic; large-scale automated attacks sweep up small to medium businesses, and POS systems frequently provide the opening.
  • In 72% of cases, it took only minutes from initial attack to compromise, but hours for data removal and days for detection.
  • More than 55% of breaches remained undiscovered for months.
  • More than 92% of the breaches were reported by an external party.
  • Only 11% were monitoring access, a control called out in Requirement 10 of PCI-DSS.

Lesson learned? Small may be beautiful, but in the interconnected world we live in, not too small to be hacked. Protect thyself – start simple by changing default remote access credentials, enabling a firewall, and monitoring and mining your logs (a minimal example follows below). ‘Nuff said.
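For the “mine your logs” part, even a few lines of script beat doing nothing. A minimal batch example, assuming an OpenSSH-style auth log; the path and message format are assumptions, so adapt them to whatever your systems emit:

```python
from collections import Counter

# Count failed remote-access attempts per account from an sshd auth log.
failures = Counter()
with open("auth.log") as f:                 # path is illustrative
    for line in f:
        if "Failed password for" in line:
            parts = line.split("Failed password for", 1)[1].split()
            # sshd writes "invalid user <name>" for unknown accounts
            user = parts[2] if parts[0] == "invalid" else parts[0]
            failures[user] += 1

print("Most-targeted accounts:")
for user, count in failures.most_common(5):
    print(f"  {user}: {count} failed attempts")
```

If “root” or “admin” tops the list with thousands of attempts, you have just seen the opportunistic, automated attacks the Verizon report describes.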

A smartphone named Desire

Is this true for you: has your smartphone merged your private and work lives? Smartphones now contain, by accident or by design, a wealth of information about the businesses we work for.

If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?

Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on them, along with the capability to remotely monitor what happened to them once they were found. They were left in high traffic public places such as elevators, malls, food courts, and public transit stops.

Key findings:

  • 96% of lost smartphones were accessed by the finders of the devices
  • 89% of devices were accessed for personal related apps and information
  • 83% of devices were accessed for corporate related apps and information
  • 70% of devices were accessed for both business and personal related apps and information
  • 50% of smartphone finders contacted the owner and provided contact information

The corporate related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?

  • Take inventory of the mobile devices connecting to your company’s networks; you can’t protect and manage what you don’t know about.
  • Track resource access by mobile devices. For example, if you are using MS Exchange, then ActiveSync logs can tell you a whole lot about such access (see the sketch after this list).
  • See our white paper on the subject
  • Track all remote login to critical servers
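As a concrete starting point for the ActiveSync suggestion above, here is a sketch that inventories device IDs from IIS logs. It assumes W3C-format logs with the cs-uri-stem and cs-uri-query fields enabled; the column positions and file name are assumptions, so match them to your #Fields: header:

```python
# Inventory Exchange ActiveSync devices from an IIS W3C-format log.
URI_STEM_COL, URI_QUERY_COL = 4, 5          # adjust to your #Fields: line

devices = set()
with open("u_ex130115.log") as f:           # file name is illustrative
    for line in f:
        if line.startswith("#"):            # skip W3C directive lines
            continue
        cols = line.split()
        if len(cols) > URI_QUERY_COL and "/Microsoft-Server-ActiveSync" in cols[URI_STEM_COL]:
            for param in cols[URI_QUERY_COL].split("&"):
                if param.startswith("DeviceId="):
                    devices.add(param.split("=", 1)[1])

print(f"{len(devices)} distinct ActiveSync devices seen")
```

Compare the resulting list against your approved-device inventory; anything unexpected is worth a conversation.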

See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’

Should You Disable Java?

The headlines are ablaze with the news of a new zero-day vulnerability in Java which could expose you to a remote attacker.

The Department of Homeland Security recommends disabling Java completely, and many experts are apparently concurring. Crisis communications 101 says maintain high-volume, multi-channel communications, but there is a strange silence from Oracle, aside from the announcement of a patch for said vulnerability.

Allowing your opponents to define you is a bad mistake, as any political consultant will tell you. Today it’s Java; tomorrow, some other widely used component. The shrillness of the calls also makes me wonder: why the hullabaloo? Upset by the Oracle stewardship of Java, perhaps?

So what should you make of the “disable Java” calls echoing across Cyberia? Personally, I think it’s bad advice, assuming you can even take the advice in the first place. Java is widespread in server-side applications (usually enterprise software) and embedded devices. There is probably no easy way to “upgrade” a heart pump, an elevator control or a POS system. On the server side, upgrading may be easier, but spare a thought for backward compatibility and for business applications that are “certified” only on older browsers. Pause a moment: the vulnerability is exposed when you visit a malicious website, which can then take advantage of the flaw and get onto your machine.

Instead of disabling Java and thereby possibly breaking critical functionality, why not limit access to outside websites instead? This is easily done by configuring proxy servers (good for desktop or mobile situations) or by restricting devices to a subnet that only has access to trusted internal hosts (this can work for bar code scanners or manufacturing equipment). Either way, you limit your exposure. Proxy server filtering at the internet perimeter is done by matching the user agent string. This is also a good way to keep those older, insecure browsers that must remain installed for internal applications from accessing the outside and becoming an equally potent source of infection in the enterprise.
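To make the user-agent idea concrete, here is the filtering logic reduced to a few lines; the substrings and policy are assumptions for illustration, and in practice you would express this as proxy ACL rules rather than application code:

```python
# Toy version of perimeter user-agent filtering: deny outbound web requests
# from Java clients and from legacy browsers kept around for internal apps.
BLOCKED_AGENT_SUBSTRINGS = [
    "Java/",        # default User-Agent prefix of the Java runtime
    "MSIE 6.0",     # legacy browser certified for internal apps only (assumed policy)
]

def allow_outbound(user_agent):
    return not any(s in user_agent for s in BLOCKED_AGENT_SUBSTRINGS)

print(allow_outbound("Java/1.7.0_10"))                          # False: blocked
print(allow_outbound("Mozilla/5.0 (Windows NT 6.1) Firefox"))   # True: allowed
```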

This is a serious issue that merits a thoughtful response, not a panicked rush to comply and cripple your enterprise.

2013 Security Resolutions

A New Year’s resolution is a commitment that a person makes to one or more personal goals, projects, or the reforming of a habit.

  • The ancient Babylonians made promises to their gods at the start of each year that they would return borrowed objects and pay their debts.
  • The Romans began each year by making promises to the god Janus, for whom the month of January is named.
  • In the Medieval era, the knights took the “peacock vow” at the end of the Christmas season each year to re-affirm their commitment to chivalry.

Here are mine:

1)      Shed those extra pounds of logs:

Log retention is always a challenge: how much to keep, and for how long? Keep logs too long and they just eat away at storage space. Pitch them mercilessly and you will keep wondering if you will need them. For guidance, look to any regulation that may apply. PCI-DSS says 365 days, for example; NIST 800-92 unhelpfully says “This should be driven primarily by organizational policies” and then goes on to classify logs into system, infrastructure and application levels. Bottom line: use your judgment, because you know your environment best.
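Once you settle on a number, enforcing it is mundane but worth automating. A minimal cleanup sketch, using the 365-day PCI-DSS figure from above; the archive path is illustrative, and you would want a dry run before trusting it:

```python
import os
import time

RETENTION_DAYS = 365                 # e.g. the PCI-DSS retention period
LOG_DIR = "/var/log/archive"         # illustrative archive location
DRY_RUN = True                       # flip to False once you trust the output

cutoff = time.time() - RETENTION_DAYS * 86400
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        print(f"expiring {path}")
        if not DRY_RUN:
            os.remove(path)
```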

2)      Exercise your log analysis muscles regularly

As the Verizon Data Breach report says year in and year out, the bad guys are hoping that you are not collecting logs, and if you are, that you are not reviewing them. More than 96% of all attacks were not highly difficult and were avoidable (at least in hindsight) without difficult or expensive countermeasures. Easier said than done, isn’t it? Consider co-sourcing the effort.

3)      Play with existing toys before buying new ones

Know what configuration assessment is? It’s applying secure configurations to existing equipment. Organizations such as NIST, CIS and DISA provide detailed guidelines, and vendors such as Microsoft publish hardening guides. It’s a question of applying them to the hardware you already own. This reduces the attack surface and contributes greatly to a more secure posture. You already have the equipment; just apply the secure configuration. EventTracker can help measure the results.
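For a flavor of what automated configuration assessment looks like, here is a toy check of a few SSH hardening settings. The expected values echo common CIS-style guidance but are examples only; a real benchmark covers hundreds of items per platform:

```python
# Toy configuration assessment: score sshd_config against a small baseline.
BASELINE = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "protocol": "2",
}

def assess(path="/etc/ssh/sshd_config"):
    settings = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and not line.lstrip().startswith("#"):
                settings[parts[0].lower()] = parts[1].lower()
    for key, expected in BASELINE.items():
        actual = settings.get(key, "<unset>")
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}  {key}: found {actual!r}, expected {expected!r}")

# assess()   # run on each host to measure drift from the baseline
```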

Happy New Year.