Security Signals Everywhere: Finding the Real Crisis in a World of Noise

Imagine dealing with a silent but mentally grating barrage of security alerts every day. The security analyst’s dilemma? Either cast the net wide enough to catch every potential security incident, or laser-focus on a few and risk missing an important attack.

A recent Cisco study covered in CSO found that 44 percent of security operations managers see more than 5,000 security alerts a day. As a consequence, they can investigate only half of the alerts they receive each day, and follow up on fewer than half of the alerts deemed legitimate. VentureBeat says the problem is far worse: just 5 percent of alerts are investigated, due to the time and complexity of completing preliminary investigations.

The CSO article recommends better filtering to reduce threat fatigue, while focusing efforts on the most important risks to a company’s industry and business. These are great suggestions. However, in a world of exploding risks, you need a dedicated team of experts on point 24/7, while deploying technology to stay ahead of the threat landscape.

This is all very cumbersome and expensive. Even the largest companies in the world may not have this level of resources. That is where a tailored, affordable managed threat detection and response service or co-managed SIEM comes into play. Here’s why co-managed SIEM beats a DIY approach in the digital transformation era:
 
  1. A dedicated SWAT team for security – You may have great analysts, but they’re stretched and may be tired. Expand their reach with a team of external experts who can partner on calibrating and monitoring security services, follow up on alerts, and augment your team when you need more resources due to business growth, staff departures, or an inability to hire enough experts.
  2. Continuous process improvement – It’s challenging to optimize processes when you’re constantly fighting fires. Leave that work to your partner. EventTracker’s Security Operations Center, for example, is ISO/IEC 27001-certified, and we work hard to maintain that certification by continually improving our information management systems for our clients.
  3. Lower cost and complexity – Self-managing a SIEM solution can be expensive and difficult. Co-management is on the rise and expected to grow five-fold by 2020. EventTracker’s SIEMphonic platform provides all the managed security services you need, including SIEM and log management, threat detection and response, vulnerability assessment, user behavior analysis, and compliance management. It collects and analyzes data from a variety of sources, including your platform, application, and network logs; alerts from intrusion detection systems; and vulnerability scans. In addition, our HoneyNet deception technology uses virtualized decoys throughout your network to lure bad actors and sniff out attacks.

If you’re concerned about the rise of risks, you should be. Your information security team has great expertise and skills – but it’s probably time to extend their reach.
 
Empower your company with co-managed SIEM and home in on the real crises, despite a world of noise. Get the SIEMphonic managed security service today.

EventTracker Statement on Meltdown and Spectre Vulnerability

On January 3, 2018, an industry-wide hardware-based security vulnerability was disclosed. CVE-2017-5753 and CVE-2017-5715 are the official references to Spectre, and CVE-2017-5754 is the official reference to Meltdown.

To exploit this vulnerability, specific code must be run on a CPU. The hosted EventTracker SIEMphonic service is provided from our own data center, and does not use compute-as-a-service from providers such as AWS EC2 or Azure who allow customers to run arbitrary code on the provided compute service.

Keeping our customers and their data secure is always our top priority. EventTracker continually tests and monitors our systems for vulnerabilities such as this, using our own products and services. The unknown process feature in EventTracker is expressly designed to detect and surface first-time-seen code execution. We have taken active steps to ensure that no EventTracker customer is exposed to these types of vulnerabilities. At the time of this posting, EventTracker has not received any information to indicate that these types of vulnerabilities have been used to attack the SIEMphonic infrastructure or in any way impact the integrity of customer data stored with the SIEMphonic service.
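
To illustrate the idea (this is a simplified sketch, not EventTracker’s actual implementation), first-time-seen detection boils down to checking each process execution against a baseline of everything previously observed; the event fields below are assumptions made for the sketch:

known_processes = set()  # in production, a persisted per-host baseline

def check_event(event):
    """Alert the first time a given binary (by hash) runs on a given host.

    `event` is an assumed shape: {"host": ..., "process": ..., "hash": ...}
    """
    key = (event["host"], event["hash"])
    if key in known_processes:
        return None  # seen before, nothing to surface
    known_processes.add(key)
    return f"ALERT: first-time-seen process {event['process']} on {event['host']}"

e = {"host": "SRV01", "process": r"C:\Tools\miner.exe", "hash": "ab12cd34"}
print(check_event(e))  # first sighting -> alert
print(check_event(e))  # second sighting -> None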

EventTracker does not use a third-party compute-as-a-service offering, so we don’t allow arbitrary code to be run on our servers. As such, security vulnerabilities that require specific code to be run on the same server as the exploited service pose less of a threat to EventTracker’s service and the data stored therein than those services and data stores utilizing shared servers at large cloud hosting facilities. With that said, EventTracker is constantly evaluating the server vendor patches that are relevant to server components used, and we will test and roll out these patches as they become available.

At our Security Operations Center we are patching all workstations to address the Meltdown and Spectre vulnerabilities. Specifically, we are:

  1. Updating anti-virus software to the latest version to make it compatible with the Microsoft patches. (Microsoft has identified a compatibility issue with a number of antivirus products.)
  2. Installing the Microsoft cumulative patch on all workstations.
  3. Installing the latest BIOS updates on the workstations.
  4. Updating Chrome and Firefox browsers to the latest versions.

We will post more updates here as they become available. Learn more about the Meltdown and Spectre vulnerabilities.

Believe it or not, compliance saves you money

We all hear it over and over again: complying with data protection requirements is expensive. But did you know that the financial consequences of non-compliance can be far more expensive?
 
The Ponemon Institute once again looked at the costs that organizations have incurred, or are incurring, in meeting mandated requirements such as the EU General Data Protection Regulation (GDPR), the Payment Card Industry Data Security Standard (PCI-DSS), and the Health Insurance Portability and Accountability Act (HIPAA). The results were compared with the findings from a 2011 Ponemon survey on the same topic. The differences were stark and telling.
 
Average costs of compliance have increased 43 percent, up from around $3.5 million in 2011 to just under $5.5 million this year, while non-compliance costs surged from $9.4 million to $14.8 million during the same period. On average, organizations found non-compliant with data protection obligations can now expect to pay at least 2.71 times more (roughly $14.8 million versus $5.5 million) than if they had been compliant in the first place.
 
For most enterprises, the cost associated with buying and deploying data security and incident response technologies accounts for the bulk of their compliance-related expenditure. On average, organizations in the Ponemon survey spent $2 million on security technologies to meet compliance objectives. The study found that businesses today are spending on average about 36% more on data security technologies and 64% more on incident response tools compared to 2011.
 
Financial companies tend to spend a lot more on compliance initiatives ($30.9 million annually) than entities in other sectors. Organizations in the industrial and energy/utilities sectors also have relatively high annual compliance-related expenses, at $29.4 million and $24.8 million respectively.
 
So, what is the hardest regulation to satisfy? GDPR. 90% of the participants in the Ponemon study pointed to GDPR as the most difficult regulation to meet.
 
Need to get off to a fast start? Thinking NIST 800-171 or PCI-DSS? Our SIEMphonic service, powered by EventTracker technology, was designed to do just that. Check out all the compliance regulations we support.
 
It's a paradox, but the less you might spend, the more you might pay.
 

Attribution of an attack - don’t waste time on empty calories

Empty calories are those derived from food containing no nutrients. When consumed in excess, they contribute to weight gain, especially if you're not burning them off in your daily activities. Why make more work for yourself?
 
When we are attacked, we feel a sense of outrage, and the natural tendency is to want to somehow punish the attacker. To do this, you must first identify the attacker, and identify them accurately, or risk punishing the wrong party. This is easier said than done, especially online.
 
Threat researchers have built an industry on identifying and profiling hacking groups in order to understand their methods, anticipate future moves, and develop methods for battling them. They often attribute attacks by “clustering” malicious files, IP addresses, and servers that get reused across hacking operations, knowing that threat actors use the same code and infrastructure repeatedly to save time and effort. So, when researchers see the same encryption algorithms and digital certificates reused in various attacks, for example, they tend to assume the attacks were perpetrated by the same group. 
 
The attacks last year on the Democratic National Committee, for example, were attributed to hacking groups associated with Russian intelligence based in part on analysis done by the private security firm CrowdStrike, which found that tools and techniques used in the DNC network matched those used in previous attacks attributed to Russian intelligence groups.
 
This, of course, is much harder for the average business, which cannot (and should not) spend scarce IT security budget on attribution of an attacker. It is also a lot harder than it would seem. This Virus Bulletin paper reviews cases in which researchers have seen hackers acting on behalf of nation-states stealing tools and hijacking infrastructure previously used by hackers of other nation-states. Investigators need to watch out for signs of this or risk tracing attacks to the wrong perpetrators. Which means that attribution of an attack is hard even for those agencies with limitless funds at their disposal.
 
The WannaCry ransomware outbreak is an obvious example of malware theft and reuse. Last year, a mysterious group known as the Shadow Brokers stole a cache of hacking tools that belonged to the National Security Agency and posted them online months later. One of the tools — a so-called zero-day exploit, targeting a previously unknown vulnerability — was repurposed by the hackers behind WannaCry to spread their attack. 
 
Even assuming you were somehow able to positively identify the attacker as "Peilin Gu" located at "He Nan Sheng Zheng Zhou Shi Nong Ke Lu 38hao Jin Cheng Guo Ji Guang Chang Wu Hao Lou Xi Dan Yuan 2206", then what? How would you inflict retribution on this attacker? As a private company without a presence in China, you likely couldn’t.
 
The rational course of action is instead to study the attack method and the target within your infrastructure, and use this information to shore up defenses. You can bet that if this attacker uncovered and exploited a vulnerability in your defenses, others of his “ilk” will follow suit imminently.
 
Are you finding it hard to keep up with all the threats? Co-managed SIEM services can help. Give us a chance to show you how you can avoid empty calories and in the process, breathe a little easier.
 
 

Can you outsource the risk? Five questions to ask a managed SIEM or SOC vendor.

Given the acute shortage of security skills, managed solutions like SIEM-as-a-Service and SOC-as-a-Service such as SIEMphonic have become more widely adopted. This approach has proven to be an excellent way to leverage outside expertise and reduce cost, which is a challenge for companies globally. Seem too good to be true? It is and it isn’t. Regardless of how much responsibility you delegate, accountability lies firmly on the shoulders of the organization doing the delegating.

What this means is that when you consider co-sourcing a critical function like security monitoring, it’s important to perform a vendor risk assessment. After all, if your vendor has a problem, then you have a problem. Their risk becomes your risk. So, what should a responsible CIO be doing? Frankly, the best time to enforce security at a service provider is before you sign the contract. Ask these questions:
  1. How seriously does the provider take security?
  2. What industry standard practices do they follow?
  3. How do they vet their staff?
  4. Are the data centers properly redundant and physically secure?
  5. Are they regularly audited by a competent external authority?
Some buyers who have a dim view of their internal commitment to managing the various forms of risk automatically assume that any firm providing these services for a living must inevitably have better processes and procedures than they themselves do. Careful, now. Proceed with caution; assumptions are risky too.

As part of our ongoing commitment to managing risk, our SIEMphonic solutions are certified as ISO 27001 compliant. We regularly audit and review our own performance and share the results with our customers every month to solicit feedback. As you think about enjoying the benefits of co-sourcing, remember: risk cannot be outsourced.

Going Mining for Bitcoin

While you’ve been busy defending against ransomware, the bad guys have been scheming about new ways to steal from you. Let’s review a tactic seen in the news called bitcoin mining.

Hackers broke into servers hosted at Amazon Web Services (AWS) that hold information from the multi-national, multi-billion-dollar companies Aviva and Gemalto. The criminals were using the computing power to mine the cryptocurrency bitcoin.

Though anyone can try to mine bitcoin on their own computers, the process is very energy intensive and could be costly in electricity expenses alone. But it’s worthwhile for many hackers because a successful attempt can be very lucrative, especially when the hardware and electricity belong to someone else.

To avoid the high cost of going it alone, most bitcoin miners join a pool of different computers that combine their computing power to solve the cryptographic proof-of-work puzzle. Successfully solving the puzzle generates a set number of new bitcoin, which are worth upwards of $4,300 each. Bitcoin can be mined only until a total of 21 million bitcoin exist.
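
To see why mining is so energy-hungry, here is a toy version of the brute-force search involved. (Real Bitcoin mining double-SHA-256-hashes 80-byte block headers against a far harder target; this sketch only conveys the flavor of the work.)

import hashlib
import itertools

def mine(block_data: str, difficulty: int = 5) -> int:
    """Find a nonce whose hash starts with `difficulty` zero hex digits.

    Each additional required zero multiplies the expected work by 16.
    """
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

print(mine("block payload"))  # takes noticeable CPU time even at difficulty 5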

How should you defend against this? Know your baseline and watch for anomalies. See how EventTracker caught a bitcoin miner hidden behind a rarely used server dedicated to key-fob provisioning.
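
A baseline check can be as simple as flagging readings that sit far outside a host’s history. A minimal sketch of the idea (the CPU-percent samples and the 3-sigma threshold are illustrative assumptions; a real SIEM baselines many more signals):

from statistics import mean, stdev

def is_anomalous(history, current, min_samples=30, z_threshold=3.0):
    """Flag a reading more than z_threshold standard deviations above baseline."""
    if len(history) < min_samples:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (current - mu) / sigma > z_threshold

# A quiet key-fob provisioning server suddenly pegged by a coin miner:
cpu_history = [3, 4, 2, 5, 3, 4] * 6   # weeks of low CPU-percent samples
print(is_anomalous(cpu_history, 96))   # -> True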

Prevention is Key in Cybersecurity

“You see, but you do not observe. The distinction is clear.” Sherlock Holmes said this to John Watson in “A Scandal in Bohemia.” Holmes was referring to the number of steps from the hall to the rooms upstairs. Watson, by his own admission, has mounted those steps hundreds of times, but could not say how many there were. The same can be said in the world of IT security. A lot of data, an overwhelming amount actually, is available from hundreds of sources, but rarely is it observed. Having something and getting value from it are entirely different.

This is also underlined in the story, “PeaceHealth employee accessed patient info unnecessarily.” On Aug. 9, the Vancouver medical center PeaceHealth discovered that an employee had accessed electronic files containing protected health information, including patient names, ages, medical records, account numbers, admission and discharge dates, progress notes, and diagnoses. An investigation revealed that the employee accessed patient information between November 2011 and July 2017.

What? This had been going on for almost six years and was just discovered? It would seem this is another case of “You see but do not observe,” and indeed the distinction is clear. Log data showing what this employee was doing had been accumulating and was faithfully archived, but it was never examined.

What was the impact? There was reputational damage, plus the costs incurred (letters, call center expenses, etc.), and possible fines by HHS for the HIPAA violation. Plus, there was disruption of regular tasks to investigate the extent and depth of this incident and related incidents that may have occurred.

Ben Franklin observed that an ounce of prevention is worth a pound of cure. The same is true in this case. We at EventTracker know that it’s hard to pay attention given the volume of security data that is emitted by the modern network. Therefore, we provide security monitoring as a service, so that you don’t just get more technology thrust your way, you gain the actual outcome you desire.

Contact us to start your free trial today.

What’s Next in 2018? Our Prediction: SIEM-as-a-Utility

The traditional enterprise network has seen a tectonic shift in recent years thanks to cloud, mobility and now IoT. Where once enterprise data was confined to the office network and data center, it’s now expanded past its traditional perimeter. For instance, in a hospital, traditionally data resided in the data center, laptops, and desktop machines. Now, data can be resident in the x-ray machines, PCs connected to blood test analyzers, HVAC chiller units, etc. In franchise restaurants, one sees the rapid advent of digital menus, self-serve kiosks, customer Wi-Fi, and more. These digital assets have come into the market and onto the network very quickly, so that businesses can keep pace and compete for customers.

Correspondingly, the threats have also migrated — hackers now attack that less secure digital drink dispenser to then go lateral to the POS network. Often in the rush to market, securing these new assets that are now on the network has been an afterthought.

The techniques to protect and monitor these new assets are not so different: secure the configuration, limit access, watch the logs for patterns. The ubiquity and scale of these assets, though, are tenfold, and so traditional SIEM technology struggles with deployment, cost, and scale. Traditional SIEM was designed for the large enterprise, with assumptions of plentiful bandwidth, CPU, and staff. Those assumptions break down in the brave new world where all three are in short supply.

Organizations now have a 10x increase in the number of devices on the network, but most of these devices are lower-value, simpler assets with fixed networks and a limited scope of attacks that they are susceptible to; they can be managed in a more automated fashion.

SIEM Will Evolve in Functionality and Ubiquity

The progression of today’s SIEM platform has seen dramatic changes. Mature platforms that have their roots in centralized log management have proven to be the species best suited to evolve, adapt, and match today’s advanced cybersecurity demands. We see this trend continuing. SIEM’s ability to centralize and aggregate billions of event logs from devices makes it a natural choice to house advanced threat lifecycle management capabilities. We’ve already seen the beginnings of SIEM taking on functionality that was originally viewed by some as a different animal—those being User and Entity Behavior Analytics (UEBA) and Security Orchestration and Automated Response (SOAR). After a quick rise in interest surrounding UEBA and SOAR solutions, these concepts have become rightly absorbed into SIEM platforms.

Evolution of SIEM

In terms of ubiquity, as the Internet of Things (IoT) explosion continues to unfold, right-sized SIEM functionality will be brought to these simpler, yet very numerous, devices. Case in point: in 2017, Netsurion brought SIEM to the point-of-sale (POS) market to answer the restaurant data breach epidemic. By folding the POS into the enterprise cybersecurity scope, the days of a data breach siphoning credit card data undetected for months are over.

By then coupling SIEM with IoT and branch location connectivity technology, like SD-WAN, the evolved capabilities of SIEM will be able to reach every edge of the highly-distributed enterprise.

Bringing It All Together

With SIEM platforms evolving to encompass machine learning concepts and orchestration capabilities, plus spreading to the furthest ends of the digital enterprise, we must also look at the most appropriate delivery model. By intertwining connectivity, threat, and compliance management, the delivery model that might work best for some organizations would be that the SIEM, or IT security, is delivered from an organization’s preferred ISP or managed IT service provider (MSP). The fully evolved SIEM platform will be able to deliver advanced functionality, wide integration, and lastly, MSP-friendly deliverability.

SIEM, UEBA, SOAR and Your Cybersecurity Arsenal

The evolution of Security Information and Event Management (SIEM) solutions has gone through a few key shifts. SIEM started as simply collecting and storing logs, then morphed into correlating information with rules and alerting a team when something suspicious was happening. And now, SIEM solutions are providing advanced analytics and response automation.

Today’s advanced SIEM solutions:

  1. Incorporate purpose-built sensors to continually collect digital forensics data across an organization.
  2. Leverage artificial intelligence and machine learning to identify out-of-the-ordinary network behavior that may indicate possible malware or a data breach.

Advanced SIEM requires continual tuning to learn what is deemed abnormal behavior for a given organization.

At EventTracker, this all happens through our ISO 27001-certified Security Operations Center (SOC), where expert analysts work with this intricate data to learn the customer network and the various device types (OS, application, network devices, etc.). Ideally, these experts work in tandem with the customers’ internal IT teams to understand their definition of normal network activity.

Next, based on this information and the available knowledge packs within EventTracker, we schedule suitable daily and weekly reports and configure alerts. The real magic happens when this data becomes “flex reports.” These reports focus on valuable information embedded within the description portion of the log messages. When these parameters are trended in a graph, all sorts of interesting, actionable information emerges, as the sketch below illustrates.
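
As a rough illustration of what a flex report does under the hood, consider pulling a numeric field out of free-text log descriptions and trending it per user. (The message format and field names here are invented for the sketch; real formats vary widely by device and vendor.)

import re
from collections import Counter

logs = [
    "Jul 01 VPN session opened user=alice bytes=1048576",
    "Jul 01 VPN session opened user=bob bytes=2097152",
    "Jul 02 VPN session opened user=alice bytes=943718400",
]

pattern = re.compile(r"user=(?P<user>\w+) bytes=(?P<bytes>\d+)")

totals = Counter()
for line in logs:
    match = pattern.search(line)
    if match:
        totals[match["user"]] += int(match["bytes"])

# Trending per-user transfer volume makes the outlier obvious at a glance.
for user, total in totals.most_common():
    print(f"{user}: {total / 2**20:.0f} MiB")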

User and Entity Behavior Analytics

In addition to noticing suspicious network behavior, SIEMs have evolved to include User Behavior Analytics (UBA), or User and Entity Behavior Analytics (UEBA). UBA/UEBA triggers an alert when unusual user or entity behavior occurs. This is an important feature now that compromised credentials make up 76% of all network intrusions.

When credentials are stolen, they tend to be used in unusual ways, at unusual places, and at unusual times. For instance, if a logon occurs outside the normal pattern, it is immediately flagged for investigation. If user “Susan” usually logs in to “Workstation5” but suddenly logs in to “Server3,” then this is out of the ordinary and may merit an investigation.
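
In its simplest form, this check is just membership in a learned set of (user, host) logon pairs. A toy sketch of the idea (real UEBA also weighs time of day, geography, peer groups, and more):

# Baseline learned from weeks of historical logon events.
baseline = {("susan", "Workstation5"), ("raj", "Server3")}

def check_logon(user, host):
    """Alert when a user logs on somewhere they never have before."""
    if (user, host) not in baseline:
        print(f"UEBA alert: first logon of {user} to {host}; investigate")
    baseline.add((user, host))  # in practice, analyst feedback gates this

check_logon("susan", "Workstation5")  # normal, stays silent
check_logon("susan", "Server3")       # out of the ordinary -> alert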

Security Orchestration Automation and Response (SOAR)

While alerts to suspicious behavior are necessary, the real goal is acting on the suspicious behavior as quickly and effectively as possible. That’s the next evolution of SIEM: Security Orchestration Automation and Response (SOAR).

While traditional SIEMs can “say” something, those that incorporate SOAR can “do” something.

SOARs consolidate data sources, use information provided by threat intelligence feeds, and automate responses to improve efficiency and effectiveness.

For example, with EventTracker, if an infected USB is plugged into a laptop, even if it’s off the network at the time, and malware begins to run, EventTracker will detect the insertion of the USB, as well as detect any suspicious communication to a low-reputation IP address. It will also catch any suspicious processes that begin to run. Once detected, EventTracker automatically stops the communication and the executable, preventing a potential data breach. Watch a short demo about advanced endpoint security now.
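
Conceptually, a SOAR playbook is detection conditions wired directly to response actions. The sketch below is a deliberately simplified illustration of that pattern, not EventTracker’s actual API; the event fields and the response function are placeholders:

watchlist = set()  # hosts with recently inserted removable media

def quarantine(event):
    """Placeholder response; a real platform calls EDR/firewall APIs here."""
    print(f"Killing {event['process']} and blocking {event['dest_ip']}")

def playbook(event):
    if event["type"] == "usb_inserted":
        watchlist.add(event["host"])
    elif (event["type"] == "net_conn"
          and event["host"] in watchlist
          and event["ip_reputation"] < 20):   # low-reputation destination
        quarantine(event)

playbook({"type": "usb_inserted", "host": "LAPTOP7"})
playbook({"type": "net_conn", "host": "LAPTOP7", "process": "evil.exe",
          "dest_ip": "203.0.113.9", "ip_reputation": 5})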

Get the Most Out of Your SIEM

As attacks continue to become more sophisticated and persistent, traditional security tools that just focus on protecting the perimeter will continue to be replaced by solutions that also have detection and response capabilities, in particular on the endpoint devices.

Learn more about the features of EventTracker’s SIEMphonic Enterprise, and sign up for a demo to learn more about our machine learning, UEBA and SOAR functionality.

You’re in the Cybersecurity Fight No Matter What: Are You Prepared?

“You’re in the fight, whether you thought you were or not,” said Gen. Mike Hayden, former Director of the CIA and NSA. It may appear at first to be a scare tactic or an attempt to sow fear, uncertainty, and doubt, but truly, what this means is that it’s time to adopt the Assume Breach paradigm.

Mr. Hayden also said, “You are almost certainly penetrated.” These words ring true and it’s time to acknowledge that a breach has either already occurred or that it’s only a matter of time until it will. It is more likely that an organization has already been compromised, but just hasn’t discovered it yet. Operating with this assumption will reshape detection and response strategies in a way that pushes the limits of any organization’s infrastructure, people, processes, and technologies.

Traditional security methodologies have largely been focused on prevention. It is a defensive strategy aimed at eliminating vulnerabilities and thereby mitigating security breaches before they happen. However, as the daily news headlines bear witness, perfect protection is not practical. So, monitoring is necessary.

Many businesses think of IT security as a nice-to-have option – just a second priority to be addressed, if IT budget dollars remain. However, compliance with regulations is seen as a must-have, mostly due to fear of the auditor and potential shame or penalty in the event of an audit failure. If this mindset prevails, then up to 70% of the budget under security and compliance will be allocated to the latter, with the rest “left over” for security. And as the total amount shrinks, this leads to the undesirable phenomenon known as checkbox compliance. Article after article explains why this is a bad mindset to have.

Remember, you’re in the fight, whether you knew it or not. Accept this and compliance becomes a result of good security practice. The same IT security budget can become more effective.

If you’re overwhelmed at the prospect of having to develop, staff, train, and manage security and compliance all by yourself, there are services like EventTracker’s SIEMphonic that will do the heavy lifting. See our “Catch of the Day” for examples of how this service has benefited our customers.

Avoid Three Common Active Directory Security Pitfalls

While the threats have changed over the past decade, the way systems and networks are managed has not. We continue with the same operations and support paradigm, despite the fact that internal systems are compromised regularly. As Sean Metcalf notes, while every environment is unique, they all too often share the same issues. These issues often boil down to legacy management of the enterprise Microsoft platform going back a decade or more.

There is also the reality of what we call the Assume Breach paradigm. This means that during a breach incident, we must assume that an attacker (a) has control of a computer on the internal network and (b) can access the same resources as legitimate users through recent logon activity.

Active Directory (AD) is the most popular Lightweight Directory Access Protocol (LDAP) implementation and holds the keys to your kingdom. It attracts attackers, as honey attracts bees. There are many best practices to secure Active Directory, but to start, let’s ensure you stay away from common pitfalls. Below are three common mistakes to avoid:

  1. Too many Domain Admins: Active Directory administration is typically performed by a small number of people. Membership in Domain Admins is rarely a valid requirement. Those members have full administrative rights to all workstations, servers, Domain Controllers, Active Directory, Group Policy, etc., by default. This is too much power for any one account, especially in today’s modern enterprise. Unless you are actively managing Active Directory as a service, you should not be in Domain Admins.
  2. Over-permissioned Service Accounts: Vendors have historically required Domain Admin rights for Service Accounts even when the full suite of rights provided is not actually required, though it makes the product easier to test and deploy. The additional privileges provided to the Service Account can be used maliciously to escalate rights on a network. It is critical to ensure that every Service Account is delegated only the rights required, and nothing more. Keep in mind that a service running under the context of a Service Account has that credential in LSASS (protected memory), which can be extracted by an attacker. If the stolen credential has admin rights, the domain may be quickly compromised due to a single Service Account.
  3. Not monitoring admin group membership: Most organizations realize that the number of accounts with admin rights increases on a yearly, if not monthly, basis, without ever going down. The admin groups in Active Directory need to be scrutinized, especially when new accounts are added. It’s even better to use a system that requires approval before a new account is added to the group, and that removes users from the group when their approved access expires. A minimal monitoring sketch follows this list.
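
Windows itself audits these changes: event IDs 4728, 4732, and 4756 record a member being added to a security-enabled global, local, or universal group, respectively. Here is a minimal sketch of watching for them (the CSV export format is an assumption; a production system reads the Security event log directly):

import csv

ADMIN_GROUPS = {"Domain Admins", "Enterprise Admins", "Administrators"}
GROUP_ADD_EVENTS = {"4728", "4732", "4756"}  # member added to a security group

# Assumes events exported to CSV with EventID, Group, Member columns.
with open("security_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["EventID"] in GROUP_ADD_EVENTS and row["Group"] in ADMIN_GROUPS:
            print(f"Review: {row['Member']} added to {row['Group']}")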

By avoiding these pitfalls and securing Active Directory properly, you are on your way to keeping your “kingdom” safe. But as Thomas Paine said, “Those who expect to reap the blessings of freedom must, like men, undergo the fatigue of supporting it.” There are a number of ways to reap the benefits of a secure infrastructure, but many intricacies are required to make this a reality. Solutions like SIEMphonic Enterprise take on that “fatigue” with a dedicated 24/7 SOC.

Click here for more details or sign up for a free demo today.

Three myths surrounding cybersecurity

A common dysfunction in many companies is the disconnect between the CISO, who views cybersecurity as an everyday priority, versus top management who may see it as a priority only when an intrusion is detected. The seesaw goes something like this: If breaches have been few and far between then leaders tighten the reins on the cybersecurity budget until the CISO proves the need for further investment in controls. On the other hand, if threats have been documented frequently, leaders may reflexively decide to overspend on new technologies without understanding that there are other, nontechnical remedies to keep data and other corporate assets safe.

Does your organization suffer from any of these?

Myth: More spending equals more security

McKinsey says, “There is no direct correlation between spending on cybersecurity (as a proportion of total IT spending) and success of a company’s cybersecurity program.” Companies that spend heavily but are still lagging behind their peers may be protecting the wrong assets. Ad hoc approaches to funding (goes up when an intrusion is reported, goes down when all is quiet on the western front) will be ineffective in the long term.

Myth: All threats are external

Too often, the very people who are closest to the data or other corporate assets are the weak link in a company’s cybersecurity program. Bad habits — like sharing passwords or files over unprotected networks, clicking on malicious hyperlinks sent from unknown email addresses, etc. — open up corporate networks to attack. In this study by Intel Security, threats from inside the company account for about 43 percent of data breaches. Leaders must realize that employees are actually the first line of defense against cyberthreats; security is never the sole responsibility of the IT department.

Myth: All assets are equally valuable

Are generic invoice numbers and policy documents that you generate in-house as valuable as balance sheets or budget projections? If not, then why deploy a one-size-fits-all cybersecurity strategy? Does leadership understand the return they are getting on their security investments and associated trade-offs? Leaders must inventory and prioritize assets and then determine the strength of cybersecurity protection required at each level. McKinsey cites the example of a global mining company that realized it was focusing a lot of resources on protecting production and exploration data, but had failed to separate proprietary information from that which could be reconstructed from public sources. After recognizing the flaw, the company reallocated its resources accordingly.

These three myths are common, but the list goes on… Now it’s time to decide what to do about it. Research is a great start, but time is of the essence. According to a 2017 Forbes survey, 69% of senior executives are already re-engineering their approach to cybersecurity. What’s your next step?

EventTracker reviews billions of logs daily to keep our customers safe. See what we caught recently and view our latest demo.

Can general purpose tools work for IT security?

This post got me thinking about a recent conversation I had with the CISO of a financial company. He commented on how quickly his team was able to instantiate a big data project with open source tools. He was of the view that such power could not be matched by IT security vendors who, in his opinion, charged too much money for demonstrably poorer performance.

The runaway success of the ELK stack has the DIY crowd energized. The thinking goes: why pay security vendors for specialist solutions when a “big data” project we already have going, based on this same stack, can work so much better? And it’s free, of course.

What we know from 10+ years of rooting around in the security world is that solving the platform problem gets you about a quarter of the way to the security outcome. After that comes detection content, and then the skills to work the data plus the process discipline. Put another way, “Getting data into the data lake, easy. Getting value out of the data in the lake, not so much.”

In 2017, it is easier than ever to spin up an instance of ELK on premises or in the cloud and presume that success is at hand just because the platform is now available. Try using generic tools to solve the security problem and you will soon discover why security vendors have spent so much time writing rules and why service providers spend so much effort on process/procedure and recruitment/training.

Are you lowering your expectations to meet your SIEM performance?

It’s an old story. Admin meets SIEM. Admin falls in love with the demo provided by the SIEM vendor. Admin commits to a three-year relationship with SIEM.

And now the daily grind. The SIEM requires attention, but the Admin is busy. Knowledge of what the SIEM needs in order to perform starts to dissipate from memory as the training period recedes in the past. Log volume constantly creeps up, adding to sluggishness.

Soon you are at a point where the SIEM could theoretically perform, but actually does not. It’s a mix of initial underestimation of hardware needs, increasing log volume, apathy, and the dissipation of knowledge about SIEM details.

How now?

In most implementations, this vicious cycle feeds on itself and the disillusionment reinforces itself. The SIEM is either abandoned or the user is resigned to poor performance.

What a revoltin’ development.

It doesn’t have to be this way, you know. Our SIEMphonic offerings were designed to address each of these problems. Don’t just buy a SIEM, get results!

Equifax’s enduring lesson — perfect protection is not practical

Recently Equifax, one of the big three US credit bureaus, disclosed a major data breach. It affects 143 million individuals — mostly Americans, although data belonging to citizens of other countries, for the most part Canada and the United Kingdom, was also hit.

It’s known the data was stolen, not just exposed. Equifax disclosed it had detected unauthorized access. So this isn’t simply a case of potential compromise of data inadvertently exposed on the web. Someone came in and took it.

How the breach occurred remains publicly unknown, and Equifax has been close-mouthed about the details. But there’s considerable speculation online that the hackers exploited a patchable yet unpatched flaw in Equifax’s website.

Quartz suggests an Apache Struts vulnerability. Markets Insider says it’s unclear which vulnerability may have been exploited. The Apache Struts team has issued a statement which says: “Regarding the assertion that especially CVE-2017-9805 is a nine year old security flaw, one has to understand that there is a huge difference between detecting a flaw after nine years and knowing about a flaw for several years. If the latter was the case, the team would have had a hard time to provide a good answer why they did not fix this earlier. But this was actually not the case here – we were notified just recently on how a certain piece of code can be misused, and we fixed this ASAP. What we saw here is common software engineering business – people write code for achieving a desired function, but may not be aware of undesired side-effects. Once this awareness is reached, we as well as hopefully all other library and framework maintainers put high efforts into removing the side-effects as soon as possible. It’s probably fair to say that we met this goal pretty well in case of CVE-2017-9805.”

So where to turn? Is it reasonable to assume that Equifax should be rigorous in updating its systems, especially public facing ones with access to such valuable data? Yes, of course. But it frankly doesn’t matter what it was written in, how it was deployed, or whether it was up to date. How do you explain (apparently) no controls to monitor unusual activity? That’s dereliction of duty, in 2017.

Perfect protection is not practical, thus monitoring is necessary. Rinse and repeat, ad nauseam, it seems.

Looking for an expert set of eyes to monitor your assets? SIEMphonic can help. See what we’ve caught.

Three critical advantages of SIEMphonic Essentials

By now it’s accepted that SIEM is a foundational technology both for securing a network from threats and for demonstrating regulatory compliance. This definition from Gartner says: “Security information and event management (SIEM) technology supports threat detection and security incident response through the real-time collection and historical analysis of security events from a wide variety of event and contextual data sources. It also supports compliance reporting and incident investigation through analysis of historical data from these sources. The core capabilities of SIEM technology are a broad scope of event collection and the ability to correlate and analyze events across disparate sources.”

However, SIEM is not fit-and-forget technology, nor is it technically simple to implement and operate. In order to bring the benefits of SIEM technology to the small network, and with a decade of experience behind us, we developed SIEMphonic Essentials to address the problems beyond mere technology. Here are three specific advantages:

1) No hardware to procure or maintain

SIEMphonic Essentials is hosted in our Tier-1 data center, freeing you from having to procure, maintain, and upgrade server-class hardware. Disk in particular is a challenge: log data grows exponentially, and while consumer-grade disk is relatively inexpensive, the same cannot be said for business-class storage.

2) More data? Fixed cost!

The hallmark of a successful SIEM implementation is growing volumes of data. Many SIEM solutions are priced based on log volume indexed or received (the so-called events per second). More data inevitably means more unforeseen cost. With SIEMphonic Essentials, you get simple t-shirt sizing (Small, Medium, Large) and you can leave both the cost and implementation of data storage to us.

3) Skill shortage

There is an African proverb that says, “It takes a village to raise a child.” In fact, it takes various skills to RUN and WATCH a SIEM solution. This specific problem is why many SIEM implementations become shelfware. Writing and tuning detection rules, performing incident investigations, and understanding how to search mean that analysts need both security knowledge and specialized SIEM tool expertise. The IT security space has zero unemployment, high staff acquisition costs, and ongoing training costs. Buying a SIEM solution is easy; there are many providers, and an end-of-quarter discount is always around the corner. Getting value from it? Not so much. With SIEMphonic Essentials, we start with a proper implementation (after all, as Aristotle noted, well begun is half done) and then our 24/7 Security Operations Center escalates P1 events to your team.

SIEMphonic Essentials delivers visibility and detection across your enterprise. Not just technology…results!

Think you are too small to be hacked?

As a small business, how would you survive an abrupt demand for $250,000? It’s ransomware, and as this poll shows, that’s what an incident would cost a small business. Just why has ransomware exploded on to the scene in 2017? Because it works. Because most bad guys are capitalists and are driven by the profit motive. Because most small businesses have not taken the time to guard their data. Because they are soft targets. What makes the news headlines are the attacks on large companies like Merck and Maersk, or on large government bodies such as NHS hospitals in the UK. But make no mistake, small businesses get hit every day – they’re just not in the headlines. After all, more people miss work due to the common cold, but this never makes the news. On the other hand, a single case of Ebola and, whoa!

Unfortunately, this leads to availability bias: since you don’t hear about it, it must not be a thing, right? That’s dangerous thinking for a small business. The large corporations can bounce back from cyberattacks; they have the depth of pocket to hire the experts needed during the crisis. But how does a small business cope? Breach costs can run to $250,000, not to mention the destruction of client trust if word gets out that confidential information was leaked.

So what do you do? Try these three steps:

Educate
It starts with you and your employees. Know your digital assets and maintain an up-to-date inventory. Invest in training your employees, as they are the weakest link in the IT security game.
Protect
Minimum diligence includes up-to-date anti-virus, a managed next-gen firewall, and regular patching. Step it up with endpoint protection. Regular reviews of user and system activity are a solid, low-cost improvement to close the gap.
Co-source
Get an expert on your team. It’s too expensive to hire dedicated resources, but this doesn’t mean you have to go it alone. Co-sourcing is an excellent technique to have an expert team on call that specializes in cybersecurity.

If the first half of 2017 is an indicator, then it’s high time to wake up and smell the hummus.

How do you determine IT security risk?

How much security is enough? That’s a hard question to answer. You could spend $1 or $1M on security and still ask the same question. It’s a trick question; there is no correct answer. The better question is: how much risk are you willing to tolerate? Mind you, the answer to this question is a “beauty is in the eye of the beholder” deal, and again there is no one correct answer.

The classic comeback from management when posed this question by the CISO is to debate what risk means, in a business context, of course. To answer this, consider the picture below.

This is your tax dollars at work. It comes from a NIST publication called “Small Business Information Security” and is available here. It presents a systematic method to first identify, and thereafter mitigate, the elements of risk to your business. To a small business owner this may all be very well, but it can also be overwhelming. The core idea, though, can be sketched simply, as below.
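
At its heart, the method reduces to scoring each asset on likelihood and impact, then ranking. A toy sketch of that idea (the assets and scores are made up; NIST’s worksheets are considerably more nuanced):

# Score each asset 1-5 for likelihood of compromise and business impact.
assets = {
    "customer database": (4, 5),
    "public web site":   (5, 3),
    "HVAC controller":   (3, 2),
}

# Rank by likelihood x impact to decide where mitigation money goes first.
for name, (likelihood, impact) in sorted(
        assets.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: risk score {likelihood * impact}")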

Did you know that you are not alone in tackling this problem? Our SIEMphonic program is specifically designed to provide co-management. We get that for a small business owner, it’s difficult to deploy, manage and use an effective combination of expertise and tools that provide early detection of targeted, advanced threats and insider threats. With SIEMphonic Enterprise Edition and SIEMphonic MDR Edition, we work together with you to analyze event data in real-time, then collect, store, investigate, and report on log data for incident response, forensics and regulatory compliance. Let us help you strengthen your security defenses, respond effectively, control costs and optimize your team’s capabilities through SIEMphonic.

Petya Ransomware – What it is and what to do

A new ransomware variant known as Petya is sweeping across the globe. It is currently having an impact on a wide range of industries and organizations, including critical infrastructure such as energy, banking, and transportation systems. While it was first observed in 2016, this variant contains notable differences in operation that caused it to be “immediately flagged as the next step in ransomware evolution.”

What is it?

This is a new generation of ransomware designed to take timely advantage of recent exploits. The current version targets the same SMB vulnerability (via the ETERNALBLUE exploit) that was used in the recent WannaCry attack. Rather than targeting a single organization, it uses a broad-brush approach that targets any device its attached worm is able to exploit.

The gravity of this attack is multiplied by the fact that even servers patched against the SMBv1 vulnerability exploited by EternalBlue can be successfully attacked once the worm gains a foothold, provided at least one Windows machine on the network is still vulnerable to the flaw patched in March in MS17-010.

How does it spread?

Early reports also suspected that some infections were spread via phishing emails with infected Excel documents exploiting CVE-2017-0199, a Microsoft Office/WordPad remote code execution vulnerability.

The attackers have built in the capability to infect patched local machines using the PsExec Windows Sysinternals utility to carry out a pass-the-hash attack. Some researchers have also documented usage of the Windows Management Instrumentation command-line (WMIC) interface to spread the ransomware locally.

Unlike WannaCry, this attack does not have an internet-facing worming component, and only scans internal subnets looking for other machines to infect. Once a server is compromised by EternalBlue, the attacker is in as a system user.

What it does

The malware waits 10-60 minutes after infection to reboot the system. The reboot is scheduled using system facilities with the “at” or “schtasks” and “shutdown.exe” tools. Once the system reboots, the malware starts to encrypt the MFT in NTFS partitions, overwriting the MBR with a customized loader that displays a ransom note.

The malware enumerates all network adapters and all known server names via NetBIOS, and also retrieves the list of current DHCP leases, if available. Each and every IP on the local network and each server found is checked for open TCP ports 445 and 139. Machines that have these ports open are then attacked with one of the methods described above.
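
Defenders can run the same check the worm does against their own inventory to find hosts needlessly exposing these ports. A quick audit sketch (the host list is an assumption; scan only networks you own):

import socket

def smb_ports_open(host, timeout=0.5):
    """Return which of the NetBIOS/SMB ports (139, 445) accept connections."""
    open_ports = []
    for port in (139, 445):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

for host in ("10.0.0.5", "10.0.0.6"):   # your own asset inventory
    print(host, smb_ports_open(host))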

The criminals behind this attack are asking for $300 in Bitcoin to deliver the key that decrypts the ransomed data, payable to a single unified Bitcoin account. Unlike WannaCry, this scheme can actually tie payments to victims, because the attackers ask victims to send their wallet numbers by e-mail to “wowsmith123456@posteo.net,” thus confirming the transactions.

There is no kill-switch as of yet, and reports say the ransom email is invalid, so paying up is not recommended.

Technical Details

Talos observed that compromised systems have a file named “Perfc.dat” dropped on them. Perfc.dat contains the functionality needed to further compromise the system and contains a single unnamed export function referred to as #1. The library attempts to obtain administrative privileges (SeShutdownPrivilege and SeDebugPrivilege) for the current user through the Windows API AdjustTokenPrivileges. If successful, the ransomware will overwrite the master boot record (MBR) on the disk drive referred to as PhysicalDrive0 within Windows. Regardless of whether the malware succeeds in overwriting the MBR, it will then proceed to create a scheduled task via schtasks to reboot the system one hour after infection.

As part of the propagation process, the malware enumerates all visible machines on the network via NetServerEnum and then scans for an open TCP port 139. This is done to compile a list of devices that expose this port and may be susceptible to compromise.

The malware has three mechanisms used to propagate once a device is infected:

  1. EternalBlue – the same exploit used by WannaCry.
  2. Psexec – a legitimate Windows administration tool.
  3. WMI – Windows Management Instrumentation, a legitimate Windows component.

These mechanisms are used to attempt installation and execution of perfc.dat on other devices to spread laterally.

For systems that have not had MS17-010 applied, the EternalBlue exploit is leveraged to compromise systems.

Psexec is used to execute the following instruction (where w.x.y.z is an IP address), using the current user’s Windows token, to install the malware on the networked device. Talos is still investigating the methods by which the “current user’s Windows token” is retrieved from the machine.

C:\WINDOWS\dllhost.dat \\w.x.y.z -accepteula -s -d C:\Windows\System32\rundll32.exe C:\Windows\perfc.dat,#1

WMI is used to execute the following command which performs the same function as above, but using the current user’s username and password (as username and password).

Wbem\wmic.exe /node:"w.x.y.z" /user:"username" /password:"password" "process call create "C:\Windows\System32\rundll32.exe \"C:\Windows\perfc.dat\" #1"

Once a system is successfully compromised, the malware encrypts files on the host using 2048-bit RSA encryption. Additionally, the malware cleans event logs on the compromised device using the following command:

wevtutil cl Setup & wevtutil cl System & wevtutil cl Security & wevtutil cl Application & fsutil usn deletejournal /D %c:
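
Clearing the logs is itself a loud signal: Windows records event ID 1102 in the Security log and 104 in the System log when a log is cleared, and a SIEM that forwards events off-host in real time retains a copy the malware cannot scrub. A minimal detection sketch (the event dictionary shape is an assumption):

LOG_CLEARED = {("Security", "1102"), ("System", "104")}

def check(event):
    """Events are assumed to be forwarded off-host before any tampering."""
    if (event["channel"], event["id"]) in LOG_CLEARED:
        print(f"ALERT: {event['channel']} log cleared on {event['host']} "
              f"by {event.get('user', 'unknown')}")

check({"channel": "Security", "id": "1102", "host": "FS01", "user": "jdoe"})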

What steps has EventTracker SIEMphonic taken?

  1. Closely monitoring announcements and details provided by industry experts including US-CERT, SANS, Microsoft, etc.
  2. Reviewing the latest vulnerability scan results from your network (if subscribed to the ETVAS service) for vulnerable machines. ETVAS subscribers who would like us to scan their network again can reach us at ecc@eventtracker.com and we will perform a scan at your convenience.
  3. Updating the Active Watch List in your instance of EventTracker with the latest Indicators of Compromise (IOCs), including MD5 hashes of the malware variants, IP addresses of C&C servers, and the email address wowsmith123456@posteo.net.
  4. Monitoring system reboots and additions to the Scheduled Tasks list.
  5. Watching Change Audit snapshots in your network for changes to the registry (RunOnce).
  6. Updating ETIDS with Snort signatures as described by Cisco Talos.
  7. Performing log searches using known IOCs.

Recommendations

  • Apply the Microsoft patch for the MS17-010 SMB vulnerability dated March 14, 2017.
  • Perform a detailed vulnerability scan of all systems on your network and apply missing patches ASAP.
  • Limit traffic from/to ports 139 and 445 to the internal network only. Monitor traffic to these ports for out-of-the-ordinary behavior.
  • Enable strong spam filters to prevent phishing e-mails from reaching the end users and authenticate in-bound e-mail using technologies like Sender Policy Framework (SPF), Domain-based Message Authentication, Reporting and Conformance (DMARC), and DomainKeys Identified Mail (DKIM) to prevent e-mail spoofing.
  • Scan all incoming and outgoing e-mails to detect threats and filter executable files from reaching the end users.
  • Ensure anti-virus and anti-malware solutions are set to automatically conduct regular scans.
  • Manage the use of privileged accounts. Implement the principle of least privilege. No users should be assigned administrative access unless absolutely needed. Those with a need for administrator accounts should only use them when necessary.
  • Configure access controls including file, directory, and network share permissions with least privilege in mind. If a user only needs to read specific files, they should not have write access to those files, directories, or shares.
  • Disable macro scripts from Microsoft Office files transmitted via e-mail. Consider using Office Viewer software to open Microsoft Office files transmitted via e-mail instead of full Office suite applications.

Perfect protection is not practical

With distressing regularity, new breaches continue to make headlines. The biggest companies, the largest institutions both private and government are affected. Every sector is in the news. Recounting these attacks is fruitless. Taking action based on the trends and threat landscape is the best step. Smarter threats that evade basic detection, mixed with the operational challenge of skills shortage, make the protection gap wider.

An overemphasis on prevention defines the current state of defenses, as the distribution of security spending shows.

According to ISACA’s 2015 cybersecurity report, over 85% of senior IT and business leaders report that they feel there is a labor crisis of skilled cybersecurity workers. Gartner believes approximately 50% of budgeted security positions are vacant; on average, technical staff spend about four years in a position before moving on. The threats that this outnumbered corps is working to confront are evolving so fast that security departments’ staffing methods are often hopelessly out of date.

The main lesson to learn is that “perfect protection is not practical, so monitoring is necessary.”

Are you feeling overwhelmed with the variety, velocity and volume of cyber attacks? Help is at hand. Our SIEMphonic managed detection and response offering blends best-in-class technology with a 24/7 iSOC to help strengthen your security defenses while controlling cost.

Three Myths about Ransomware

Ransomware is a popular weapon for the modern attacker, with more than 50% of the 330,000+ attacks in 3Q15 targeting US companies. No industry is immune to these attacks, which, when successful, are a blot on the financial statements of the targeted companies. Despite their success, ransomware attacks are not sophisticated; they exploit traditional infection vectors and are not stealthy. Their success reveals poor endpoint protection planning and strategy, observed at companies of every size and in every vertical. This leads most organizations to react to infections rather than plan against them, which is expensive in staff hours and, of course, hurtful to reputation.

Misunderstandings of ransomware, how it works, and how infection can be prevented are common. Here are three common misconceptions:

Myth #1: Ransomware is a zero-day attack

In fact, exploiting a zero-day vulnerability is an expensive proposition for a malicious actor. In reality, most malware targets vulnerabilities which, while well-documented and easily remediated, remain unpatched. Therefore, a systematic schedule of patching and endpoint system updates within 30 days of a patch becoming available is the most effective way to minimize the threat of ransomware, and indeed of most “targeted” attacks.

Myth #2: Anti-virus & perimeter solutions are sufficient protection

Signature-based protection has been widely used for 20+ years and is a necessary and effective protection mechanism. However, this approach is well known and easily evaded by attackers. In addition to signature-based anti-virus solutions, it is necessary to consider endpoint detection and response solutions supported by monitoring and analytics. Many ransomware attacks are successful because attackers breach perimeter security solutions and web-facing applications. Most networks are flat, making them easy to traverse. Segmenting assets into trust zones and enforcing traffic flow rules is the way to go.

Myth #3: IT Admins always follow best practices

When administrator accounts are not monitored at all, those super powers are exposed to hacker opportunism. Admin workstations with drive mappings and often-used (and sadly common) administrator passwords to critical servers are a high-priority target. Best practice prescribes monitoring administrator accounts for unauthorized use, access, and behaviors.

Recognize that ransomware itself isn’t much different from the malware of the past. Ransomware enters the organization the same way as other malware, propagates the same way and leverages known vulnerabilities in the same way. The good news, then, is that ransomware can be defended against in the same way as other malware.

WannaCry: Nuisance or catastrophe? What to expect next?

As we reach the one-week mark of the global pandemic of ransomware called WannaCry, it seems that while the infection gained worldwide (and unprecedented) news coverage, it has been more of a global nuisance than a global catastrophe. Some interesting points to note:

  • The most affected systems were un-patched Windows 7 and 2008 — not XP as thought earlier. This clearly points to the patching cycle. It also validates the approach taken by Microsoft in Windows 10 to force Windows updates for consumers and small business. There was a lot of rage against the machine at the time, but in retrospect, can we agree that it was the right design choice?
  • The distribution method was not a phishing email; rather, it seems the malware authors spread by scanning for networks that did not block port 445, which is used by the SMB protocol. It’s high time to correct this misconfiguration. Here is how to do it. (A quick way to check your own exposure is sketched after this list.)
  • It may be that in the eyes of some users, this is another case of the security industry crying “wolf” again, thereby contributing to the numbness to such outbreaks.
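For that quick self-check, a few lines like the following can confirm whether TCP port 445 on a given host is reachable. The addresses below are placeholders, and you should only probe systems you are authorized to test.

```python
import socket

def smb_port_open(host, port=445, timeout=3):
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder addresses; substitute hosts you are authorized to test.
for host in ["192.0.2.10", "192.0.2.11"]:
    state = "OPEN - consider blocking" if smb_port_open(host) else "filtered/closed"
    print(f"{host}:445 {state}")
```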

What can we expect going forward?

  • As usual, criminals will be quick to take advantage of the attendant fear by pitching phony schemes to “protect” those that are worried they may be, or may become, victims.
  • There will be copycat malware. The distribution by worm (instead of phishing) makes network hygiene even more important.
  • Leaks will increase. Both Wikileaks and Shadow Brokers received tremendous publicity, and given the commercial nature of the latter, they will try and leverage this notoriety.
  • Patch hygiene may improve for a short period in businesses. This is similar to a driver slowing down after observing someone else pulled over by the police. The effects are only temporary though, sad to say.
  • Collaboration across the industry was a big part of blunting the damage. It looks set to continue, which is an incredibly good thing.

Do hackers prefer attacking over the weekend?

The recent WannaCry attack started on a Friday, and it was feared that the results would be far more severe on Monday as workers trickled back from the weekend. The fraudulent wires from Bangladesh Bank that resulted in $81M lost also happened on a Friday. A detailed account of how this weekend timing allowed hackers to get away with a large sum (rerouted to the Philippines) is described in this Reuters investigation.

Attribution in each case has veered toward a state-sponsored attacker interested in financial gain, and the finger of suspicion points to North Korea in both cases. Lamont Siller, an FBI officer in the Philippines, said in a speech, “We all know the Bangladesh Bank heist, this is just one example of a state-sponsored attack that was done on the banking sector.” Symantec in a blog update reported “that its researchers found hacking tools that are ‘exclusively used by Lazarus’ on machines infected with early versions of WanaCryptor, aka WannaCry.” Lazarus is thought to have originated in North Korea.

All righty then: 1) attacks are state-sponsored, persistent and advanced, and 2) timed for non-working hours. So are you ready to defend against such attackers? You know, you are not alone. EventTracker’s SIEMphonic service blends award-winning SIEM technology with a 24/7 iSOC to give you the cover you need at a price that won’t break the bank.

Want to know more? Here is how we caught WannaCry and what we are doing about it for our customers.

WannaCry at Industrial Control Systems


A global pandemic of ransomware hit Windows-based systems in 150 countries in a matter of hours. The root cause was traced to a vulnerability corrected by Microsoft for supported platforms (Win 7, 8.1 and higher) in March 2017, about 55 days before the malware was widespread. Detailed explanations and mitigation steps are described here. The first step to mitigation is to apply the update from Microsoft. A version for XP and 2003 was also released by Microsoft on Friday, May 12, 2017.

But what if you did not apply the update because you simply cannot? This is often the case in Industrial Control Systems (ICS), which comprise Operational Technology (OT) systems built on the same platforms (Windows XP, 7) that are susceptible to this vulnerability, but for which the patch/backup strategy recommended for traditional desktops simply does not apply.

There are reports that several manufacturers, including automobile makers Renault, Dacia and Nissan, have stopped work at plants because of WannaCry infestations of control systems. There are many valid reasons why ICS systems go unpatched:

  • The earlier versions of Microsoft software used in ICS aren’t off-the-shelf Windows; they are Windows as mediated by industrial control system vendors like Honeywell, Siemens and the like. Applying updates requires testing to ensure the ICS is not going to be disrupted.
  • ICS system owners abhor downtime. It is very expensive to shut down a manufacturing line or an airport runway, and not possible to shut down the International Space Station.
  • ICS system owners often cite the “air gap.” But that is a myth that has been debunked many times.

As a start, ICS-CERT has published an advisory which provides this guidance:

  • Disable SMBv1 on every system connected to the network.
    • Information on how to disable SMBv1 is available here.
    • While many modern devices will operate correctly without SMBv1, some older devices may experience communication or file/device access disruptions.
  • Block port 445 (Samba).
    • This may cause disruptions on systems that require port 445.
  • Review network traffic to confirm that there is no unexpected SMBv1 network traffic. The following links provide information and tools for detecting SMBv1 network traffic and Microsoft’s MS17-010 patch:
  • Vulnerable embedded systems that cannot be patched should be isolated or protected from potential network exploitation.

WannaCry: Fraud follows fear

After the global pandemic of the WannaCry ransomware attack this past weekend, it’s entirely predictable that fraudsters would follow. After every major attack or vulnerability disclosure, criminals are quick to take advantage of the attendant fear by pitching phony schemes to “protect” those that are worried they may be, or may become, victims.

This has indeed occurred already in the wake of WannaCrypt. Various third-party mobile app stores are offering protection from the ransomware, but those protective apps are for the most part bogus, and commonly infested with adware. So, steer clear of apps promising protection, and instead patch and update your systems.

Spam emails notifying you that your machine is infected with WannaCry (see picture below) are also making the rounds.

[Image: spam email claiming the recipient’s machine is infected with WannaCry]

Here’s some guidance to be safe from these attempts:

  • Apply the Microsoft patch for the MS17-010 SMB vulnerability dated March 14, 2017.
  • Perform a detailed vulnerability scan of all systems on your network and apply missing patches ASAP.
  • Limit traffic from/to ports 139 and 445 to internal network only. Monitor traffic to these ports for out-of-ordinary behavior.
  • Enable strong spam filters to prevent phishing e-mails from reaching the end users and authenticate in-bound e-mail using technologies like Sender Policy Framework (SPF), Domain Message Authentication Reporting and Conformance (DMARC), and DomainKeys Identified Mail (DKIM) to prevent e-mail spoofing.
  • Scan all incoming and outgoing e-mails to detect threats and filter executable files from reaching the end users.
  • Ensure anti-virus and anti-malware solutions are set to automatically conduct regular scans.
  • Manage the use of privileged accounts. Implement the principle of least privilege. No users should be assigned administrative access unless absolutely needed. Those with a need for administrator accounts should only use them when necessary.
  • Configure access controls including file, directory and network share permissions with least privilege in mind. If a user only needs to read specific files, they should not have write access to those files, directories or shares.
  • Disable macro scripts from Microsoft Office files transmitted via e-mail. Consider using Office Viewer software to open Microsoft Office files transmitted via e-mail instead of full Office suite applications.

Feeling overwhelmed? Don’t despair. Expert help available here.

WannaCry: What it is and what to do about it

What happened

For those of us in the IT Security profession, Friday May 12 was Black Friday. Networks in healthcare and critical infrastructure across at least 99 countries have been infected by the WannaCry ransomware worm, aka WanaCrypt, WannaCrypt or Wcry. The bulk of infections were reported in Russia, Taiwan and Spain.

First observed targeting UK hospitals and Spanish banks, the malware went on to infect systems at big companies like Telefónica, Vodafone and FedEx, and also hit rail stations and universities. The Spanish CERT issued an alert warning organizations and confirming that the malware was rapidly spreading.

Is it over? Will it happen again?

A sample of the malware was reverse engineered and found to contain a “kill switch”. The malware tries to resolve a particular domain name; if the domain exists, the malware self-destructs. This domain has now been registered, so if you are infected and this particular strain is able to resolve that domain name using your internet connection and DNS settings, it will apparently terminate itself. Obviously hope is not a strategy, and assuming that we don’t have to do anything now is a big mistake. It is inevitable that a new strain without any such kill switch will emerge. Accordingly, it is imperative to strengthen defenses.
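The kill-switch behavior described above amounts to only a few lines of logic. The sketch below illustrates the pattern as reported, using the kill-switch domain named later in this post; it is a teaching illustration, not the malware’s actual code.

```python
import socket

# Kill-switch domain registered by researchers (named later in this post).
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def kill_switch_registered(domain=KILL_SWITCH):
    """Mimic the reported check: does the kill-switch domain resolve?"""
    try:
        socket.gethostbyname(domain)
        return True   # resolves -> this strain would terminate itself
    except socket.gaierror:
        return False  # no resolution -> this strain would keep spreading

if kill_switch_registered():
    print("Domain resolves: this strain would self-destruct.")
else:
    print("No resolution: this strain would continue spreading.")
```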

How it spreads

Initial infection was possibly via phishing email. CERT also reported that the hacker or hacking group behind the WannaCry campaign gained access to enterprise servers through Remote Desktop Protocol (RDP) compromise. Once the infection has taken root, it spreads across the network looking for new victims using the Server Message Block (SMB) protocol. The ransomware exploits the Microsoft vulnerability MS17-010. This vulnerability was used by ETERNALBLUE, an exploit developed by the NSA and released to the public by the Shadow Brokers hacking group on April 14, 2017. Microsoft released a patch for this vulnerability on March 14, one month before the release of the exploit.

What it does

Once the infection is on the machine, it encrypts files and shows a ransom note asking for $300 or $600 worth of bitcoin.

Technical details

As described by CERT, the WannaCry ransomware is a loader that contains an AES-encrypted DLL. During runtime, the loader writes a file to disk named “t.wry”. The malware then uses an embedded 128-bit key to decrypt this file. This DLL, which is then loaded into the parent process, is the actual WannaCry ransomware responsible for encrypting the user’s files. Using this cryptographic loading method, the WannaCry DLL is never directly exposed on disk, so it is not vulnerable to antivirus file scans.

The newly loaded DLL immediately begins encrypting files on the victim’s system with 128-bit AES, generating a random key for each file.

The malware also attempts to access the IPC$ shares and SMB resources the victim system has access to, which permits it to spread laterally across the compromised network. Notably, the malware never attempts to obtain a password from the victim’s account in order to access the IPC$ share; it relies on gaining unauthorized access to those network resources.

What steps has EventTracker SIEMphonic taken?

  1. Closely monitoring announcements and details provided by industry experts including US CERT, SANS, Microsoft, etc.
  2. Reviewed the latest vulnerability scan results from your network (if subscribed to the ETVAS service) for vulnerable machines. ETVAS subscribers who would like their network scanned again can contact us at ecc@eventtracker.com and we will perform a scan at your convenience.
  3. Updated the Active Watch List in your instance of EventTracker with the latest Indicators of Compromise (IOCs). This includes MD5 hashes of the malware variants, IP addresses of WannaCry C&C servers, and domain names used by the malware.
  4. Added an alert that triggers if we see any logs containing the domain name iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com, which is used by WannaCry.
  5. Watching Change Audit snapshots in your network for changes to registry (RunOnce) and for files with extension .wncry
  6. Updated ETIDS with snort signatures as described by the SANS Internet Storm Center
  7. Performing log searches using known IOCs
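To make step 7 concrete, a minimal IOC sweep over consolidated plain-text logs could look like the sketch below. Apart from the kill-switch domain named in step 4, the IOC values, file name and log format are placeholders, not our actual watch list.

```python
# Minimal IOC sweep over a consolidated plain-text log.
IOC_TERMS = {
    "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com",  # WannaCry kill-switch domain
    "198.51.100.23",  # placeholder C&C address
    ".wncry",         # extension given to encrypted files
}

def sweep(log_path):
    """Return (line number, matched IOC, line) for every hit."""
    hits = []
    with open(log_path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            lowered = line.lower()
            hits.extend((lineno, term, line.strip())
                        for term in IOC_TERMS if term in lowered)
    return hits

for lineno, term, line in sweep("consolidated.log"):
    print(f"line {lineno}: matched {term!r}: {line}")
```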

Recommended steps for prevention

  • Apply the Microsoft patch for the MS17-010 SMB vulnerability dated March 14, 2017.
  • Perform a detailed vulnerability scan of all systems on your network and apply missing patches ASAP.
  • Limit traffic from/to ports 139 and 445 to internal network only. Monitor traffic to these ports for out of ordinary behavior.
  • Enable strong spam filters to prevent phishing e-mails from reaching the end users and authenticate in-bound e-mail using technologies like Sender Policy Framework (SPF), Domain Message Authentication Reporting and Conformance (DMARC), and DomainKeys Identified Mail (DKIM) to prevent e-mail spoofing.
  • Scan all incoming and outgoing e-mails to detect threats and filter executable files from reaching the end users.
  • Ensure anti-virus and anti-malware solutions are set to automatically conduct regular scans.
  • Manage the use of privileged accounts. Implement the principle of least privilege. No users should be assigned administrative access unless absolutely needed. Those with a need for administrator accounts should only use them when necessary.
  • Configure access controls including file, directory, and network share permissions with least privilege in mind. If a user only needs to read specific files, they should not have write access to those files, directories, or shares.
  • Disable macro scripts from Microsoft Office files transmitted via e-mail. Consider using Office Viewer software to open Microsoft Office files transmitted via e-mail instead of full Office suite applications.

How EventTracker protected customers

See the details in the Catch of the Day

Challenges with Threat Intelligence or why a Honeynet is a good idea

Shared threat intelligence is an attractive concept. The good guys share experiences about what the bad guys are doing thereby blunting attacks. This includes public-private partnerships like InfraGard, a partnership between the FBI and the private sector dedicated to sharing information and intelligence to prevent hostile acts against the U.S.

The analogy can be made to casinos that share information with each other about cheaters and their characteristics via the Gaming Board or the Griffin Book. If you share the intelligence then everybody but the cheater wins. So why not the same for cyber security?

For one thing, you are dealing with anonymous adversaries capable of rapid change, unlike the casino analogy, where facial recognition can identify an individual even if their appearance is modified. Also, the behavior of the casino cheat tends to be similar (for example, sitting at the craps table or counting cards at blackjack, as in Rain Man). In the cybersecurity world, all the defender has to go on is the type of attack (malware, phishing, ransomware), an IP range, and possibly a domain name. So the indicators of compromise (IOCs) that can be shared are file hashes, domain names, and sender email domains, all multiplying and morphing at digital speed. Such IOCs are very hard to share globally at the scale and speed of the internet.

In addition, when the good guys share the IOCs, they do so in ways that are visible to bad guys as well (e.g., upload suspect files to Virus Total). This is leveraged by the bad guys to know the progress of the defenders and therefore adapt their attack.

So what now?

One solution is to implement local threat intelligence with a honeynet, a cyber-defense product that thwarts attempts by attackers to gain information about a private network. Composed of multiple virtualized decoys strategically scattered throughout the network to lure bad actors, honeynets can provide intelligence about malicious activity against the network. This solution is effective at identifying bad actors, including insiders, by their behavior in your own neighborhood. This blog post describes how honeynets differ from threat intelligence feeds.
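To make the decoy idea concrete, here is a toy single-port decoy in the same spirit: it offers no real service, so any connection it receives is suspicious by definition and worth logging. A real honeynet is far richer than this sketch, and the port choice here is arbitrary.

```python
import socket
from datetime import datetime

DECOY_PORT = 2222  # arbitrary, unused port; nothing legitimate should connect

# Toy decoy: accept connections, log the source, close immediately.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    print(f"decoy listening on port {DECOY_PORT}")
    while True:
        conn, (addr, port) = srv.accept()
        with conn:
            print(f"{datetime.now().isoformat()} probe from {addr}:{port}")
```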

Essential soft skills for cybersecurity success

IT workers in general, and IT Security professionals in particular, pride themselves on their technical skills, keeping abreast of the latest threats and the newest tactics to demonstrate to management and peers that they are “worthy.” The long alphabet soup in the signature (CISSP, CISA, MCSE, CCNA and so on) is all very necessary and impressive. However, cybersecurity puzzles are not solved by technical skills alone. In fact, the case can be made that soft skills are just as important, especially because everyone in the organization needs to cooperate. Security is everyone’s job.

Collaboration

Security is everyone’s job, so a critical success factor for the cybersecurity leader is what you communicate and how you communicate to various stakeholders to gain support, buy-in and behavior change. The soft skills to partner with various individuals and departments throughout your organization will drive the success of any cybersecurity program.

Communication

Too often, IT security leaders speak in the technical jargon of their area of expertise. Not surprisingly, this makes no impact on business leaders, nor on others in the organization whose participation is critical to success. After all, a behavior change is only possible if the employee recognizes the risk and internalizes the change. This skill, like many others, can be learned and improved with practice. It’s unusual to see a technically capable person set out to hone such a skill, but it is incredibly valuable and readily recognized when encountered.

Culture

Culture in this context includes the perceptions, attitudes and beliefs people in the organization have toward cybersecurity. The process of incorporating emotion is often difficult for technical people to comprehend, but plays a central role in communication and collaboration, and therefore success in changing behavior or adoption of new procedures. Old economy companies, such as financial or government organizations, may have a “professional” culture that requires formality and procedure in communication and content. Technology companies with relatively younger employees may react better to communications with humor or animation, and a more informal style. Learning company culture will make collaboration and communication, and therefore cybersecurity, much more effective.

Ultimately, technical skills are necessary for success, but absent these soft skills, a successful cybersecurity program cannot be achieved. As an industry, we tend to emphasize and value technical skills; the same is needed for soft skills.

Who suffers more — cybercrime victims or cybersecurity professionals?

So you got hit by a data breach, an all too common occurrence in today’s security environment. Who gets hit? Odds are you will say the customer. After all it’s their Personally Identifiable Information (PII) that was lost. Maybe their credit card or social security number or patient records were compromised. But pause a moment and consider the hit on the company itself. The hit includes attorney fees, lost business, reputational damage, and system remediation costs.

They deserve it, you say? They were negligent and must suffer the consequences. But spare a thought for the individuals on the “front line,” defending their organizations against the entire world of cyber criminals. They are victims, too. And it may not be a lack of diligence or due care on their part either. In the meantime they may experience the same disappointment and grief as a customer whose data is compromised. They are confused. They may feel a lack of focus and confidence in themselves. They may have sleepless nights and an increased level of anxiety. Not very different than a caregiver to a sick patient.

As in the patient/caregiver scenario, all the attention is focused on the patient. Consider this excerpt from American Nurse that says, “While nurses may not suffer the same way patients do, we experience pain, frustration, lack of resources, and many other forms of suffering when delivering care to patients and their families. In our highly regulated healthcare environment, administrators commonly view nursing as the highest cost center instead of a revenue generator. Typically, nursing is factored into room and board on the patient’s bill.”

This will sound eerily familiar to the IT staff on the front line of responding to a data breach.

How can you help?

  • Acknowledge their pain and anxiety; show that you understand
  • Coordinate care; be there for them in a continuous way
  • Get them help; outside experts who deal in incident management
  • Conduct a lessons-learned review; an excellent way to beef up skills on the team is to consider co-sourcing certain responsibilities

The next time you hear of a data breach, spare a thought for the IT Security team at the front line; after all they are victims, too.

Man Bites Dog!

Made you look!

It’s a clickbait headline, a popular tactic with the press to get people to click on their article.

Cyber criminals, the ones after the gold in your network, are, at heart, capitalists. In other words, they seek efficiency: maximum returns for the minimum possible work. This tendency reveals itself in multiple ways.

For example:

  • They scan networks, looking for the less well guarded ones; default passwords, unpatched systems, minimal defenses; easy pickings. After all why bother with hard work if the same results can be had easily?
  • The rise of Ransomware-as-a-service; essentially a franchise model for ransomware, such that criminals with little technical expertise can run ransomware attacks without having to build anything from scratch. As you can imagine, this has led to a sharp increase in ransomware attacks.

In order to get the bad guys to move along to the next target, your job then is to push them up the pyramid of pain — make it that much harder so as to decrease their ROI.

But, wait a minute, you’re thinking. What about that screaming headline? Anthem, Target, the beat goes on. Remember, headlines are always screaming. That’s what gets eyeballs and what sells. The mundane, common, low-level, ho-hum attacks simply don’t make the headlines but cause more damage on a sustained basis than the latest zero day.

The analogy in the healthcare world is that Bird Flu and Ebola garner screaming headlines while the common cold is responsible for more days missed at work and school by orders of magnitude. When was the last headline you saw about little Johnny missing school because of a cold?

How now, brown cow? The approach is well known but bears repeating:

  • Identify your crown jewels (know your assets)
  • Do a gap analysis to determine vulnerabilities
  • Address these vulnerabilities
  • Monitor for breaches

Sound like a plan? Check out our SIEMphonic service. It’s the easy button for sensible security.

Spending too much or too little on IT Security?

A common assumption is that security expenditure is a proxy for security maturity. This may make sense at first blush, but paradoxically, a low relative level of information security spending compared to peers can be equally indicative of a very well-run or a poorly run security program. Spending analysis is, therefore, imprecise and a potentially misleading indicator of program success. What matters is ensuring that the right risks are being adequately managed, and understanding that spending may fluctuate accordingly.

According to Gartner’s most recent IT Key Metrics Data, respondents spent between 4 and 7 percent of the overall IT budget on IT security and risk management. Note that IT spending statistics alone do not measure IT effectiveness and are not a gauge of successful IT within organizations. They simply provide an indicative view of average costs in general, without regard to complexity or demand.

The compliance hyperbole of previous years that drove information security spending has abated, having matured with organizations moving from planning to productive activities to address the requirements. Compliance remains a relevant internal selling point for justifying security and risk management budgets, but other factors — such as the series of high profile attacks played out in global media in recent years — have now become strong drivers. The visibility of information security spending in the boardroom is at an all-time high.

It is quite possible to constrain spending without compromising your security posture. One way is to consider managed detection and response, an effective, outcome-based combination of expertise and tools to detect threats, especially targeted advanced threats and insider threats. Our SIEMphonic service offering is a premier example of this type of service. The figure above, as described in this research note, can be the result.

Compliance is not a proxy for due care

Regulatory compliance is a necessary step for IT leaders, but it is not sufficient to reduce residual IT security risk to tolerable levels. This is not news. But why is this the case? Here are three reasons:

  • Compliance regulations are focused on “good enough,” but the threat environment mutates rapidly. Therefore, any definition of “good enough” is temporary. The lack of specificity in most regulations is deliberate to accommodate these factors.
  • IT technologies change rapidly. An adequate technology solution today will be obsolete within a few years.
  • Circumstances and IT networks are so varied that no single regulation can address them all. Prescribing a common set of solutions for all cases is not possible.

The key point to understand is that the compliance guidance documents are just that — guidance. Getting certification for the standard, while necessary, is not sufficient. If your network becomes the victim of a security breach and a third party suffers harm, then compliance to the guidelines alone will not be an adequate defense, although it may help mitigate certain regulatory penalties. All reasonable steps to mitigate the potential for harm to others must have been implemented, regardless of whether those steps are listed within the guidance.

A strong security program is based on effective management of the organization’s security risks. A process to do this effectively is what regulators and auditors look for.

Top three reasons SIEM solutions fail

We have been implementing Security Information and Event Management (SIEM) solutions for more than 10 years. We serve hundreds of active SIEM users and implementations. We have had many awesome, celebratory, cork-popping successes. Unfortunately, we’ve also had our share of sad, tearful, profanity-filled failures. Why? Why do some companies succeed with SIEM while others fail? Here is a secret for you: the product doesn’t matter. The size of the company doesn’t matter. It’s something else. SIEM can deliver great results, but it can also soak up budget and time, and leave you frustrated with the outcome. Here are the (all too) common reasons why SIEM implementations fail.

Reason 1: You don’t have an administrator in charge.

We call this the RUN function: a person in charge of platform administration, a sysadmin who:

  • Keeps the solution up-to-date with upgrades and new versions
  • Performs system health checks, storage projections and log volume/performance analysis
  • Analyzes changes in log collection for new systems and non-reporting systems
  • Adds and configures users, standardized reports, dashboards and alerts
  • Generates a weekly system status report
  • Confirms external/third-party integrations are functioning normally: threat intel feeds, IDS, VAS

Reason 2: The boss isn’t committed.

For the SIEM solution to deliver value, the executive in charge must be fully committed to it, providing emotional, financial and educational support to the administrator. You tell your team that this is the company’s system and everyone’s going to use it. You invest in outside help to get it up and running, and use it the right way with the proper training and service. You don’t cave in when people complain because they don’t like the color of the screen or the font, or that things take extra clicks, or that it’s not “user friendly.” For this system to work, your people will need to do more work. You provide resources to help them, but you stand firm because this is your network. You realize that using this product the right way will help you make your company safer…and more valuable. Stand firm. Commit. Or you will fail.

Reason 3: You’re not using the data.

Our best implementations have 2-3 key objectives satisfied by the SIEM system each day. Managers read these reports and rely on the data to help them secure their network. Have a few key objectives or you will fail. We call this the WATCH function, for obvious reasons.

We are a premier provider of SIEM solutions and services, but, with all due respect, we would advise against buying a SIEM solution if a client is not prepared to invest in an administrator or reports, or shows little interest in adopting the system into their company culture.

Monitoring DNS Traffic for Security Threats

Cyber criminals are constantly developing increasingly sophisticated and dangerous malware programs. Statistics for the first quarter of 2016 compared to 2015 show that malware attacks have quadrupled.

Why DNS traffic is important

DNS has an important role in how end users in your enterprise connect to the internet. Each connection made to a domain by the client devices is recorded in the DNS logs. Inspecting DNS traffic between client devices and your local recursive resolver could reveal a wealth of information for forensic analysis.

DNS queries can reveal:

  • Botnets/malware connecting to C&C servers
  • Which websites were visited by employees
  • Which malicious and DGA domains were accessed
  • Which dynamic domains (DynDNS) were accessed
  • Signs of DDoS attacks such as NXDOMAIN, phantom domain and random subdomain attacks

Identifying the threats using EventTracker

While parsing each DNS log, we verify each domain accessed against:

  • A malicious domain database (updated on a regular basis)
  • Domain Generation Algorithm (DGA) patterns

Any domain that matches either of the above criteria warrants attention: an alert is generated identifying the client that accessed it, along with the geolocation information of the domain (IP, country).
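DGA matching in practice ranges from exact blocklists to trained classifiers. The fragment below shows one common lightweight heuristic, flagging long, high-entropy labels; it is an illustration of the idea, not EventTracker’s actual algorithm, and the thresholds are arbitrary.

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_dga(domain, min_len=12, min_entropy=3.5):
    """Flag long, high-entropy leftmost labels as possible DGA output."""
    label = domain.split(".")[0]
    return len(label) >= min_len and entropy(label) >= min_entropy

for d in ("google.com", "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"):
    print(d, "->", "possible DGA" if looks_dga(d) else "looks ordinary")
```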

Using behavior analysis, EventTracker tracks the volume of connections to each domain accessed in the enterprise. If the volume of traffic to a specific domain rises above its average, alert conditions are triggered. When a domain is accessed for the first time, we check the following:

  • Is this a dynamic domain?
  • Is the domain registered recently or expiring soon?
  • Does the domain have a known malicious TLD?

Recent trends show that cyber criminals may create dynamic domains as command and control centers. These domains are activated for a very short duration and then discarded, which makes the above checks even more important.

EventTracker also performs statistical/threshold monitoring of queries, clients, record types and errors. This helps detect many DDoS attacks, such as the NXDOMAIN attack, the phantom domain attack and the random subdomain attack. EventTracker’s monitoring of client DNS settings helps detect DNS hijacking, generating an alert for anything suspicious along with information about the client and its DNS settings. The EventTracker flex dashboard helps correlate attack detection data with client details, making attack detection simpler.
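Threshold monitoring of this kind reduces to simple counting. The sketch below tallies NXDOMAIN responses per client over one interval of parsed DNS records and flags anything over a limit; the record fields, sample data and limit are assumptions for illustration.

```python
from collections import Counter

NXDOMAIN_LIMIT = 100  # illustrative per-interval threshold

def nxdomain_offenders(records):
    """records: iterable of dicts like {'client': ip, 'rcode': 'NXDOMAIN'}."""
    counts = Counter(r["client"] for r in records if r["rcode"] == "NXDOMAIN")
    return [(client, n) for client, n in counts.items() if n > NXDOMAIN_LIMIT]

# Hypothetical parsed records for one five-minute interval.
sample = ([{"client": "10.0.0.5", "rcode": "NXDOMAIN"}] * 250
          + [{"client": "10.0.0.9", "rcode": "NOERROR"}] * 40)

for client, n in nxdomain_offenders(sample):
    print(f"ALERT: {client} generated {n} NXDOMAIN responses")
```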

Monitoring the DNS logs is a powerful way to identify security attacks as they happen in the enterprise, enabling successful blocking of attacks and fixing vulnerabilities.

Idea to retire: Do more with less

Ideas to Retire is a TechTank series of blog posts that identify outdated practices in public sector IT management and suggest new ideas for improved outcomes.

Dr. John Leslie King is W.W. Bishop Professor in the School of Information at the University of Michigan and contributed a blog hammering the idea of “do more with less” calling it a “well-intentioned but ultimately ridiculous suggestion.”

King writes: “Doing more with less flies in the face of what everyone already knows: we do less with less. This is not our preference, of course. Most of us would like to do less, especially if we could have more. People are smart: they do not volunteer to do more if they will get less. Doing more with less turns incentive upside down. Eliminating truly wasteful practices and genuine productivity gains sometimes allows us to do more with less, but these cases are rare. The systemic problems with HealthCare.gov were not solved by spending less, but by spending more. Deep wisdom lies in matching inputs with outputs.”

IT managers should respond to suggestions of doing more with less by assessing what really needs to be done: what can reasonably be discarded or added so that IT staff can go about their responsibilities without exceeding their limits?

Considering these ideas as they relate to IT Security, a way to match inputs with outputs may be to consider a co-managed solution focused on outcome. Rather than merely acquiring technology and then watching it gather dust as you struggle to build process and train (non-existent) staff to utilize it properly, start with the end in mind: the desired outcome. If that outcome is a well-managed SIEM solution (and associated technology), then a co-managed SIEM approach may provide the way to match output with input.

Detect Persistent Threats on a Budget


There’s a wealth of intelligence available in your DNS logs that can help you detect persistent threats.

So how can you use them to see if your network has been hacked, or check for unauthorized access to sensitive intellectual property after business hours?

All intruders in your network must re-connect with their “central command” in order to manage or update the malware they’ve installed on your systems. As a result, your infected network devices will repeatedly resolve the domain names that the attackers use. By mining your DNS logs, you can determine whether known bad domain names and/or IP addresses have affected your systems. Depending on how current your “blacklist” of criminal domains is, and how rigid your network rules are regarding the IP destinations that domain names resolve to, DNS logs can help you spot these anomalies.
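One budget-friendly way to act on this, sketched below under an assumed log and blacklist format, is to count how often each client resolves a known bad domain; repeated hits suggest beaconing rather than a one-off click. The file names and log layout are placeholders.

```python
from collections import Counter

# Assumed inputs: a one-domain-per-line blacklist and DNS query log
# lines of the form "<timestamp> <client_ip> <queried_domain>".
with open("bad_domains.txt") as fh:
    blacklist = {line.strip().lower() for line in fh if line.strip()}

hits = Counter()
with open("dns_queries.log") as fh:
    for line in fh:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, client, domain = parts[:3]
        if domain.lower().rstrip(".") in blacklist:
            hits[(client, domain)] += 1

# Repeated resolution of the same bad domain looks like C&C beaconing.
for (client, domain), n in hits.most_common(10):
    print(f"{client} resolved {domain} {n} times")
```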

It’s not a comprehensive technique for detecting persistent threats, but it is a good, budget-friendly start.

Here is a recent webinar we did on the subject of mining DNS logs.

Dirty truths your SIEM vendor won’t tell you

Analytics is an essential component of a modern SIEM solution. The ability to crunch large volumes of log and security data in order to extract meaningful insight can lead to improvements in security posture. Vendors love to tell you all about features and how their particular product is so much better than the competition.

Yeah, right!

The fact is, many products are available and most of them have comparable features. While software is a necessary part of the analytics process, it’s less critical than product marketing hype would have you believe.

As Meta Brown noted in Forbes, “Your own thought processes – the effort you put in to understand the business problem, investigate the data available, and plan a methodical approach to analysis – can do much more to simplify your work and maximize your chance for success than any product could.”

Techies just love to show off their tech macho. They can’t get together without arguing about the power of their code, speed of their response or the size of their clusters.

The reality? Once you have invested in any of the comparable products, it’s the person behind the wheel that makes all the difference.

If you suffer from skill shortage, our remote managed SIEM Simplified solution may be for you.

Uncover C&C traffic to nip malware

In a recent webinar, we demonstrated techniques by which EventTracker monitors DNS logs to uncover attempts by malware to communicate with Command and Control (C&C) servers. Modern malware uses DNS to resolve algorithm-generated domain names to find and communicate with C&C servers. These algorithms have improved by leaps and bounds since they were first seen in Conficker.C. Early attempts were based on a fixed seed, so once the malware was caught, it could be decompiled to predict the domain names it would generate. The next improvement was to use the current time as a seed; here again, once the malware is reverse engineered it is possible to predict the domain names it will generate. Nowadays, the algorithms may use things like the currently trending Twitter topic as a seed to make prediction harder.
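A toy date-seeded generator makes the predictability point concrete: anyone who recovers the algorithm can pre-compute a future day’s domains and block or sinkhole them in advance. This is a teaching sketch, not any real malware family’s algorithm.

```python
import random
from datetime import date

def toy_dga(day, count=3, length=12, tld=".com"):
    """Deterministic toy DGA: the same date always yields the same domains."""
    rng = random.Random(day.toordinal())  # the date is the only seed
    domains = []
    for _ in range(count):
        name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                       for _ in range(length))
        domains.append(name + tld)
    return domains

# A defender who knows the algorithm can enumerate tomorrow's domains today.
print(toy_dga(date(2016, 1, 15)))
```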

But hold on a second, you say: we don’t allow free access, we have installed a carefully configured proxy and it will stop these attempts. Possibly. However, a study conducted between September 2015 and January 2016 showed that less than 34% of outbound connection attempts to C&C infrastructure were blocked by firewalls or proxy servers. Said differently, roughly two-thirds of the time an infected device successfully called out to a criminal operator.

Prevention technologies look for known threats. They examine inbound files and look for malware signatures. It’s more or less a one-time chance to stop the attacker from getting inside the network. Attackers have learned that time is their friend. Evasive malware attacks develop over time, allowing them to bypass prevention altogether. When no one is watching, the attack unfolds. Ultimately, an infected device will ‘phone home’ to a C&C server to receive instructions from the attacker.

DNS logs are a rich source of intelligence and bear close monitoring.

Maximize your SIEM ROI

Aristotle put forth the idea in his Poetics that a drama has three parts: a beginning or protasis, a middle or epitasis, and an end or catastrophe. Far too many SIEM implementations are considered to be catastrophes. Having implemented hundreds of such projects, here are the three parts of a SIEM implementation which, if followed, will minimize the drama and maximize the ROI. If you prefer the video version of this, click here.

The beginning or protasis

  • Identify log sources and use cases.
  • Establish retention period for the data set and who gets access to which parts.
  • Nominate a SIEM owner and a sponsor for the project.

The middle or epitasis

  • Install the SIEM console
  • Push out and configure sensors, or configure the log sources to send data
  • Enable alerting and required reporting schedules
  • Take log volume measurements and compare against projected disk space requirements
  • Perform preliminary tuning to eliminate the noisiest and least useful log sources and types
  • Train the product owner and users on features and how to use them

The end or catastrophe

  • Review log volume and tune as needed
  • Review alerts for correctness and establish notification methods, if appropriate
  • Establish escalation policy – when and to whom
  • Establish report review process to generate artifacts for audit review
  • Establish platform maintenance cycle (platform and SIEM updates)

Research points to SIEM-as-a-Service

SC Magazine released the results of a research survey focused on the rising acceptance of SIEM-as-a-Service for the small and medium sized enterprise.

The survey, conducted in April 2016, found that SMEs and companies with $1 billion or more in revenue or 5,000-plus employees faced similar challenges:

  • 64 percent of respondents agreed that they “lack the time to manage all the security activities.”
  • 49 percent reported a lack of internal staff to address IT security challenges
  • 48 percent said they lacked the IT security budget needed to meet those challenges

This comes as no surprise to us. We’ve been watching these trends rise over the past several years. Gartner reports that by 2019, total enterprise spending on security outsourcing services will be 75 percent of the spending on security software and hardware products, and that by 2020, 40 percent of all security technology acquisitions will be directly influenced by managed security service provider (MSSP) and on-premises security outsourcing providers, up from less than 15% today.

It used to be that firewalls and antivirus were sufficient stopgaps, but in today’s complex threatscape, the cyber criminals are more sophisticated. The weak point of any security approach is usually the unwitting victim of a phishing scam or the person who plugs in the infected USB; but “securing the human” requires the expertise of other humans: trained staff with the certification and expertise to monitor the network and analyze the anomalies. An already busy IT staff can become even more overburdened; identifying, training and keeping security expertise is hard. So is keeping up with the alerts that come in daily, and staying current on SIEM technology.

Hence the increasing movement toward co-managed SIEM, which allows the enterprise to have access to the expertise and resources it needs to run an effective security program without ceding control. SIEM-as-a-Service: saving time and money.

You can download the SC Magazine report here.

Is it all about zero-day attacks?

The popular press makes much of zero-day attacks. These are attacks based on vulnerabilities in software that are unknown to the vendor. The security hole is exploited by hackers before the vendor becomes aware of it and hurries to fix it; this exploit is called a zero-day attack.

However, the reality is that 99.99% of exploits are based on vulnerabilities already known for at least one year. Hardly zero-day.

What does this mean to you? It means you should prioritize vulnerability scanning to first identify, and then patch and manage, these known vulnerabilities in your defense strategy. What is the point of obsessing over zero-day vulnerabilities when unpatched systems exist within your perimeter?
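Acting on this can be as unglamorous as sorting scanner output by age, oldest known vulnerability first. The sketch below assumes a made-up findings format purely for illustration.

```python
from datetime import date

# Assumed scanner output rows: (host, cve_id, date_published, patched?)
findings = [
    ("web01", "CVE-2015-1234", date(2015, 3, 1), False),   # illustrative rows
    ("db02",  "CVE-2016-5678", date(2016, 6, 10), False),
    ("app03", "CVE-2016-9999", date(2016, 8, 2), True),
]

def patch_queue(rows):
    """Oldest unpatched vulnerabilities first: the opposite of zero-day chasing."""
    return sorted((r for r in rows if not r[3]), key=lambda r: r[2])

for host, cve, published, _ in patch_queue(findings):
    age_days = (date.today() - published).days
    print(f"{host}: {cve} has been public for {age_days} days")
```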

What’s so hard about this? Well, for many organizations, it’s the process and expertise needed to accomplish the related tasks. Procuring the technology is easy, but that represents, at most, 20% of the challenge of obtaining a successful outcome.

The people and process needed to leverage the technology are the other 80% of the challenge: the bulk of the iceberg below the waterline, which can sink your otherwise massive ship.

Top 3 traits of a successful Security Operations Center

Traditional areas of risk — financial risk, operational risk, geopolitical risk, risk of natural disasters — have been part of organizations’ risk management for a long time. Recently, information security has bubbled to the top, and now companies are starting to put weight behind IT security and Security Operations Centers (SOC).

Easier said than done, though. Why, you ask? Two reasons:

  • It’s newer, so it’s less understood; process maturity is less commonly available
  • Skill shortages — many organizations might not yet have the right skill mix and tools in-house.

From our own experience creating and staffing an SOC over the past three years, here are the top three rules:

1) Continuous communication

It’s the fundamental dictum (sort of like “location” in real estate): bi-directional communication between management and the IT team.

Management communicates business goals to the technology team. In turn, the IT team explains threats and their translation to risk. Management decides the threat tolerance with their eye on the bottom line.

We maintain a Runbook for every customer which records management objectives and risk tolerance.

2) Tailor your team

People with the right skills are critical to success and often the hardest to assemble, train and retain. You may be able to groom from within. Bear in mind, however, that even basic skills, such as log management, networking expertise and technical research (scouring through blogs, pastes, code, and forums), often come after years of professional information security experience.

Other skills, such as threat analysis, are distinct and practiced skill sets. Intelligence analysis, correlating sometimes seemingly disparate data to a threat, requires highly developed research and analytical skills and pattern recognition.

When building or adding to your threat intelligence team, especially concerning external hires, personalities matter. Be prepared for Tuckman’s stages of group development.

3) Update your infrastructure

Security is 24x7x365: automatically collect, store, process and correlate external data with internal telemetry such as security logs, DNS logs, web proxy logs, NetFlow and IDS/IPS. Query capability across the information store requires an experienced data architect. Design fast and nimble data structures with which external tools integrate seamlessly and bi-directionally. Understand not only the technical needs of the organization, but also maintain a continuous two-way feedback loop with the SOC, vulnerability management, incident response, project management and red teams.

Easy, huh?

Feeling overwhelmed? Get SIEM Simplified on your team. We analyze billions of logs every day. See what we’ve caught.

Is the IT Organizational Matrix an IT Security Problem?

Do you embrace the matrix?

Not this one, but the IT Organizational Matrix, or org chart. The fact is, once networks get to a certain size, IT organizations begin to specialize and small kingdoms emerge. For example, endpoint management (aka Desktop) may be handled by one team, whereas the data center is handled by another (the Server team). Vulnerability scanning may be handled by a dedicated team, while identity management (Active Directory? RSA tokens?) is handled by yet another. At this level of organization, these teams tend to have their own support infrastructure.

However, InfoSec controls are not separable from IT. What this matrix at the organizational level becomes, at the information level, is a graph of security dependencies. John Lambert explains in this blog post.

For example, the vulnerability scanning systems may use a “super privileged account” that has admin rights on every host in the network to scan for weaknesses, but the scanners may be patched or backed up by the Server team with admin rights to them.  And the scanner servers themselves are accessed with admin rights from a set of endpoints that are managed by the Desktop team.

This matrix arising from domain specialization creates a honeycomb of critical dependencies. Why is this a problem? Because it enables lateral movement. Attackers who don’t know the org chart simply navigate the terrain as it exists, while the defenders manage from the network map like good little blue tin soldiers.

If this is your situation, it’s time to simplify. Successful defenders manage from the terrain, not the map.

Why a Co-Managed SIEM?

In simpler times (2010?!), security technology approaches were clearly defined and primarily based on prevention, with things like firewalls, antivirus, and web and email gateways. There were relatively few available technology segments, and a relatively clear distinction between security technology purchases and outsourcing engagements.

Organizations invested in the few well-known, broadly used security technologies themselves and, if outsourcing the management of these technologies was needed, could be reasonably confident that all major security outsourcing providers would be able to support their choice of technology.

As observed by this Gartner paper (subscription required), this was a market truth for both on-premises management of security technologies and remote monitoring/management of the network security perimeter (managed security services).

So what has changed? It’s the increasing complexity of the threat landscape that has spawned more complex security technologies to combat those threats.

Net result? The “human element” is back at the forefront of security management discussions. This means the security analyst and the Subject Matter Expert for the technology in use. The market agrees: the security gear is only as good as the people manning it.

With the threat landscape of today, the focus is squarely on detection, response, prediction, continuous monitoring and analytics. This means a successful outcome is critically dependent on the “human element.” So the choices are to procure security technology and:

  • Deploy adequate internal resources to use them effectively, or
  • Co-source staff who already have experience with the selected technology (for instance, using our co-managed SIEM)

If co-sourcing is under consideration, then the selection criteria must include the provider’s expertise with the selected security technology. Our SIEM Simplified offering bundles comprehensive technology with expertise in its use.

Technology represents 20% or less of the overall challenge of achieving better security outcomes. The “human element,” coupled with mature processes, is the rest of the iceberg, hiding beneath the waterline.

2015 Cyber Attack Trends — 2016 Implications

Red teams attack, blue teams defend.
That’s us – defending our network.

So what attack trends were observed in 2015? And what do they portend for us blue team members in 2016?

The range of threats included trojans, worms, trojan downloaders and droppers, exploits and bots (backdoor trojans), among others. When untargeted (the more common case), the goal was profit via theft. When targeted, attacks were often driven by ideology.

Over the years, attackers have had to evolve their tactics to get malware onto computers that have improved security levels. Attackers are increasingly using social engineering to compromise computer systems because vulnerabilities in operating systems have become harder to find and exploit.

Ransomware that seeks to extort victims by encrypting their data is the new normal, replacing rogue security software or fake antivirus software of yesteryear that was used to trick people into installing malware and disclosing credit card information. Commercial exploit kits now dominate the list of top exploits we see trying to compromise unpatched computers, which means the exploits that computers are exposed to on the Internet are professionally managed and constantly optimized at an increasingly quick rate.

However, one observation made by Tim Rains, Chief Security Advisor at Microsoft, was, “although attackers have accumulated more tricks and tactics and seem to be using them in a more focused, fast paced way, they still focus on a relatively small number of ways to compromise computers.” These include:

  • Unpatched vulnerabilities
  • Misconfigured computers
  • Weak passwords
  • Social engineering

In fact, Rains goes on to note: “Notice I didn’t use the word ‘advanced.’”

As always, it’s back to basics for blue team members. The challenge is to defend:

  • At scale (every device on the network, no exceptions)
  • Continuously (even on weekends, holidays etc.), and
  • Update/upgrade tactics constantly

If this feels like Mission Impossible, then you may be well served by a co-managed service offering in which some of the heavy lifting can be taken on by a dedicated team.

Your SIEM relationship status: It’s complicated

On Facebook, when two parties are sort-of-kind-of together but also sort-of, well, not, their relationship status reads, “It’s complicated.” Oftentimes, Party A really wants to like Party B, but Party B keeps doing and saying dumb stuff that prevents Party A from making a commitment.

Is it like that between you and your SIEM?

Here are dumb things that a SIEM can do to prevent you from making a commitment:

  • Require a lot of work, giving little in return
  • Be high maintenance, cost a lot to keep around
  • Be complex to operate, require lots of learning
  • Require trained staff to operate

Simplify your relationship with your SIEM with a co-managed solution.

Top 5 SIEM complaints

Here’s our list of the Top 5 SIEM complaints:

1) We bought a security information and event management (SIEM) system, but it’s too complicated and time-consuming, so we’re:

a) Not using it
b) Only using it for log collection
c) Taking log feeds, but not monitoring the alerts
d) Getting so many alerts that we can’t keep up with them
e) Way behind because the person who knew about the SIEM left

2) We’re updating technology and need to retrain to support it

3) It’s hard to find, train and retain security expertise

4) We don’t have enough trained staff to manage all of our devices

5) We don’t have trained resources to successfully respond to a security incident

What’s an IT Manager to do?
Get a co-managed solution, of course.
Here’s ours. It’s called SIEM Simplified.
Billions of logs analyzed daily. See what we’ve caught.

The Cost of False IT Security Alarms

Think about the burglar alarm systems that are common in residential neighborhoods. In the eye of the passive observer, an alarm system makes a lot of sense. They watch your home while you’re asleep or away, and call the police or fire department if anything happens. So for a small monthly fee you feel secure. Unfortunately, there are a few things that the alarm companies don’t tell you.

1) Between 95% and 97% of calls (depending on the time of year) are false alarms.

2) The police regard calls from alarm companies as the lowest priority, and it can take anywhere between 20-30 minutes for them to arrive. The average burglar needs only 5 minutes to break and enter and be off with your valuables.

3) In addition, if your call does turn out to be a false alarm, the police and fire departments have introduced hefty fines. A police call-out costs about $130, and if fire trucks are sent, they charge around $410 per truck (protocol is to send 3 trucks). So one false alarm can cost you well over $1,200 ($130 + 3 × $410 = $1,360).

With more than 2 million annual burglaries in the U.S., perhaps it’s worth putting up with so many false positives in service of the greater deterrent? Yes, provided we can sort out the false alarms that sap the first responders.

The same is true of information security. If we know which alerts to respond to, we can focus our time on those important alerts. Tuning the system to reduce the alerts, and removing the false positives so we can concentrate only on valid alerts, gives us the ability to respond only to the security events that truly matter.

While our technology does an excellent job of detecting possible security events, it’s our service, which examines these alerts and provides experts who make them relevant using context and judgment, that makes the difference between a rash of false positives and the alerts that truly matter.

SIEM: Sprint or Marathon?

Winning a marathon requires dedication and preparation over long periods of time. A sprint requires intense energy for a short period of time. While some tasks in IT Security are closer to a sprint (e.g., configuring a firewall), many, like deploying and using a Security Information and Event Management (SIEM) solution, are closer to a marathon.

What are the hard parts?

  1. Identifying the scope
  2. Ingesting log data and filtering out noise events
  3. Reviewing the data with discipline

Surveys show that 75% of organizations need to perform significant discovery to determine which devices, platforms, applications and databases should be included in the scope for log monitoring. The point is that when most companies really evaluate their log monitoring process, they find they don’t know what systems are even available to include. They don’t know what they have. Additionally, 50% of organizations later realize that this initial discovery phase was not sufficient to meet their security needs. So, even after performing the discovery, they are not sure they have identified the right systems.

While on-boarding new clients, we usually identify legacy systems or firewall policies that generate large volumes of unnecessary data. This includes the discovery of service accounts or scripts with expired credentials that generate suspicious-looking login failures. Other common items uncovered include network health monitoring systems that generate an abnormal amount of ICMP or SNMP activity, and backup tools and internal applications using non-standard ports and cleartext protocols. Each of these false positives or legitimate activities adds straw to the haystack, which makes it harder to find the needle. Every network contains activity that might appear suspicious or benign to an outside observer who lacks background on the everyday operations of the company being monitored. It is important for network and security administrators to provide monitoring tools with the additional context and background detail needed to account for the variety of networks thrown at them. A minimal filtering sketch follows.
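For illustration, here is a minimal noise-filter sketch in Python. The event fields and suppression rules are invented for this example; they are not EventTracker’s actual schema or configuration.

# Minimal noise-filter sketch: suppress known-benign events before indexing.
# Field names and rules are illustrative, not a real product schema.

NOISE_RULES = [
    # A health monitor probing the network all day long
    lambda e: e.get("protocol") in ("ICMP", "SNMP") and e.get("source") == "10.0.0.5",
    # A service account with expired credentials generating login failures
    lambda e: e.get("event_id") == 4625 and e.get("account", "").endswith("$backup"),
]

def is_noise(event: dict) -> bool:
    return any(rule(event) for rule in NOISE_RULES)

events = [
    {"event_id": 4625, "account": "svc$backup", "source": "10.0.1.7"},
    {"event_id": 4740, "account": "jsmith", "source": "10.0.1.9"},
]
signal = [e for e in events if not is_noise(e)]
print(signal)  # only the 4740 lockout survives the filter

The point of the sketch: each rule removes a documented, recurring source of hay so the needles stand out.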

Reviewing the data with discipline is a difficult ask for organizations with a lean IT staff. Since IT is often viewed as a “cost center,” it is rare to see organizations (especially mid-sized ones) with suitably trained IT Security staff.

Take heart — if getting there using only internal resources is a hard problem, our SIEM Simplified service gets you there. The bonus is the cost savings compared to a DIY approach.

5 IT Security resolutions

Ho hum. Another new year, time for some more New Year’s resolutions. Did you keep the ones you made last year? Meant to but somehow did not get around to it? This time how about making it easy on yourself?

New Year Resolutions for IT security

Here are some New Year’s resolutions for IT security that you can keep easily — by doing nothing at all!

5) Give out administrator privileges freely to most users. Less hassle for you, and they don’t need to bother asking you to install software or grant access to files.

4) Don’t bother inventorying hardware or software. It changes all the time. It’s hard to maintain a list, and what’s the point anyway?

3) Allow unfettered mobile device usage in the network. You know they are going to bring their own phone and tablet anyway. It’s better this way. Maybe they’ll get more work done now.

2) Use default configurations everywhere. It’s far easier to manage. Factory resets happen anyway, and then you can find the default password on Google.

And our favorite:

1) Ignore logs of every kind — audit logs, security logs, application logs. They just fill up disk space anyway.

SIEM and Return on Security Investment (RoSI)

The traditional method for calculating standard Return on Investment (RoI) is gain minus cost, divided by cost. The higher the resulting value, the greater the RoI. The difficulty in calculating a Return on Security Investment (RoSI), however, is that security tends not to increase profits (gain), but to decrease loss – meaning that the amount of loss avoided, rather than the amount of gain achieved, is the important element.

Following the standard RoI approach, RoSI can be calculated as loss reduction minus the cost of the solution, divided by the cost of the solution. In both cases a higher result is better; the difference is simply that RoSI measures loss avoided rather than gain achieved.
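A tiny worked example makes the symmetry clear. The figures below are made up, and the loss-reduction input is, as noted next, largely a guess:

# RoI vs. RoSI with made-up numbers.

def roi(gain, cost):
    return (gain - cost) / cost

def rosi(loss_reduction, cost):
    return (loss_reduction - cost) / cost

print(roi(gain=150_000, cost=100_000))             # 0.5 -> 50% return on investment
print(rosi(loss_reduction=150_000, cost=100_000))  # 0.5 -> cost recovered 1.5x over in avoided loss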

This is where it gets difficult: how do you measure the ‘loss reduction’? To a large extent it is based on guesswork and surveys. Bruce Schneier in The Data Imperative concluded, “Depending on how you answer those two questions, and any answer is really just a guess — you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.”

What we find as a practical outcome of delivering our SIEM-as-a-service offering (SIEM Simplified) is that many customers value the anecdotes and statistics provided in the daily reports and monthly reviews to demonstrate RoSI to management: how many attacks were repulsed by the firewalls, how many incidents were addressed by criticality, anecdotal evidence of an attack disrupted or a misconfiguration detected. We publish some of these anonymously as Catch of the Day.

It’s a practical way to demonstrate RoSI that is easier to understand and does not involve guesswork.

Stuff the turkey, not the SIEM

Did you know that SIEM and Log Management are different?

The latter (log management) is all about collecting logs first and worrying about why you need them second (if at all). The objective is “let’s collect it all and have it indexed for possible review. Why? Because we can.”

The former (SIEM) is about specific security use cases. SIEM is a use-case-driven technology, and use cases are implementation specific, unlike antivirus or firewalls.
Treating SIEM like Log Management is a lot like a turducken.

Don’t want that bloated feeling like Aunt Mildred explains here? Then don’t stuff your SIEM with logs absent a use case.

Need help doing this effectively? A co-managed SIEM may be your best bet.

Effective cyber security by empowering people

You have, no doubt, heard that cyber security is everyone’s job. So then, as the prime defender of your network, what specifically are you doing to empower people so they can all act as sentries? After all, security cannot be automated as much as you’d like. Human adversaries will always be smarter than automated tools and will leverage human ingenuity to skirt around your protections.

Marketing departments in overdrive, however, are busy selling the notion of “magic” boxes that can envelop you in a protective shell against Voldemort and his minions. But isn’t that really just fantasy? The reality is that you can’t replace well-trained security professionals exercising judgment with computers.

So what does an effective security buyer do?

Answer: Empower the people by giving them tools that multiply their impact and productivity, instead of trying to replace them.

When we were designing EventTracker 8, an oft-repeated observation from users was the shortage of senior analysts. Where they existed at all in the organization, they were busy with higher-level tasks such as policy creation, architecture updates and, sometimes, critical incident response. The last task on their plates was the bread-and-butter work of log review and threat monitoring. Such tasks are often the purview of junior analysts (if they exist). In response, many of the features of EventTracker 8 are designed specifically to enable junior administrators to make effective contributions to cyber security.

Still feeling overwhelmed by the daily tasks that need doing, consoles that need watching, alerts that need triaging? Don’t fret – that is precisely what our SIEM Simplified service (SIEMaas) is designed to provide – as much, or as little help as you need. Become empowered, be effective.

Diagnosing Account Lockout in Active Directory

Symptom

Account Lockouts in Active Directory

Additional Information

“User X” is getting locked out, and Security Event ID 4740 is logged on the respective servers with detailed information.

Reason

The common causes for account lockouts are:

  • End-user mistake (typing a wrong username or password)
  • Programs with cached credentials or active threads that retain old credentials
  • Service account passwords cached by the Service Control Manager
  • A user logged on to multiple computers, or disconnected remote terminal server sessions
  • Scheduled tasks
  • Persistent drive mappings
  • Active Directory delayed replication

Troubleshooting Steps Using EventTracker

Here we are going to look for Event ID 4740. This is the security event that is logged whenever an account gets locked.

  1. Log in to the EventTracker console.
  2. Select Search on the menu bar.
  3. Click Advanced Search.
  4. In the Advanced Log Search window, fill in the following details:

  • Enter the result limit as a number; 0 means unlimited.
  • Select the date and time range for the logs to be searched.
  • Select all the domain controllers in the required domain.
  • Click the inverted triangle and set the search to Event ID 4740.

Once done, hit Search at the bottom.

The results list each matching event. To get more information about a particular log, click the + sign to expand its details.

Now, let’s take a closer look at a 4740 event; this can help us troubleshoot the issue.

Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: MM/DD/YYYY HH:MM:SS PM
Event ID: 4740
Task Category: User Account Management
Level: Information
Keywords: Audit Success
User: N/A
Computer: COMPANY-SVRDC1
Description: A user account was locked out.
Subject:
  Security ID: NT AUTHORITY\SYSTEM
  Account Name: COMPANY-SVRDC1$
  Account Domain: TOONS
  Logon ID: 0x3E7
Account That Was Locked Out:
  Security ID: S-1-5-21-1135150828-2109348461-2108243693-1608
  Account Name: demouser
Additional Information:
  Caller Computer Name: DEMOSERVER1
Here is what each field means:

DateTime: Date/time of event origination, in GMT.
Source: Name of the application or system service originating the event.
Type: Warning, Information, Error, Success, Failure, etc.
User: The user/service/computer initiating the event. (A name ending in $ means it is a computer/system-initiated event.)
Computer: Name of the server/workstation where the event was logged.
EventID: Numerical ID of the event.
Description: The entire unparsed event message.
Log Name: The name of the event log (e.g., Application, Security, System).
Task Category: A name for a subclass of events within the same event source.
Level: Warning, Information, Error, etc.
Keywords: Audit Success, Audit Failure, Classic, Connection, etc.
Category: The name of an aggregative event class, corresponding to the similar ones present in the Windows 2003 version.
Subject: Account Name: Name of the account that initiated the action.
Subject: Account Domain: Name of the domain to which the account initiating the action belongs.
Subject: Logon ID: A number that uniquely identifies the logon session of the user initiating the action; it can be used to correlate all user actions within one logon session.
Subject: Security ID: SID of the locked-out user.
Account Name: The account that was locked out.
Caller Computer Name: The computer where the logon attempts occurred.

Resolution

Log on to the computer named in “Caller Computer Name” (DEMOSERVER1) and look for whichever of the aforementioned causes is producing the problem.

To understand how to resolve issues present on the “Caller Computer Name” machine (DEMOSERVER1), let us look into the different logon types.

Logon Type 0 (System): Used only by the System account.
  Resolution: No evidence so far seen that this can contribute to account lockout.
Logon Type 2 (Interactive): A user logged on to this computer.
  Resolution: The user typed a wrong password at the console.
Logon Type 3 (Network): A user or computer logged on to this computer from the network.
  Resolution: The user typed a wrong password over the network; this can be a connection from a mobile phone, a network share, etc.
Logon Type 4 (Batch): Batch logon is used by batch servers, where processes may execute on behalf of a user without the user’s direct intervention.
  Resolution: A batch job has an expired or wrong password.
Logon Type 5 (Service): A service was started by the Service Control Manager.
  Resolution: A service is configured with a wrong password.
Logon Type 6 (Proxy): Indicates a proxy-type logon.
  Resolution: No evidence so far seen that this can contribute to account lockout.
Logon Type 7 (Unlock): This workstation was unlocked.
  Resolution: The user typed a wrong password at a password-protected screen saver.
Logon Type 8 (NetworkCleartext): A user logged on to this computer from the network, and the user’s password was passed to the authentication package in its unhashed form. The built-in authentication packages all hash credentials before sending them across the network, so the credentials do not traverse the network in plaintext (also called cleartext).
  Resolution: No evidence so far seen that this can contribute to account lockout.
Logon Type 9 (NewCredentials): A caller cloned its current token and specified new credentials for outbound connections. The new logon session has the same local identity but uses different credentials for other network connections.
  Resolution: The user launched an application using the RunAs command but supplied a wrong password.
Logon Type 10 (RemoteInteractive): A user logged on to this computer remotely using Terminal Services or Remote Desktop.
  Resolution: The user typed a wrong password while logging in remotely via Terminal Services or Remote Desktop.
Logon Type 11 (CachedInteractive): A user logged on to this computer with network credentials that were stored locally on the computer; the domain controller was not contacted to verify the credentials.
  Resolution: No evidence so far seen that this can contribute to account lockout, since the domain controller is never contacted in this case.
Logon Type 12 (CachedRemoteInteractive): Same as RemoteInteractive; used for internal auditing.
  Resolution: No evidence so far seen that this can contribute to account lockout, since the domain controller is never contacted in this case.
Logon Type 13 (CachedUnlock): This workstation was unlocked with network credentials that were stored locally on the computer; the domain controller was not contacted to verify the credentials.
  Resolution: No evidence so far seen that this can contribute to account lockout, since the domain controller is never contacted in this case.

How do you identify the logon type for the locked-out account?

Just as shown earlier for Event ID 4740, do a log search for Event ID 4625 using EventTracker, and check the details.

Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: date
Event ID: 4625
Task Category: Logon
Level: Information
Keywords: Audit Failure
User: N/A
Computer: COMPANY-SVRDC1
Description: An account failed to log on.
Subject:
  Security ID: SYSTEM
  Account Name: COMPANY-SVRDC1$
  Account Domain: TOONS
  Logon ID: ID
Logon Type: 7
Account For Which Logon Failed:
  Security ID: NULL SID
  Account Name: demouser
  Account Domain: TOONS
Failure Information:
  Failure Reason: An error occurred during logon.
  Status: 0xc000006d
  Sub Status: 0xc0000380
Process Information:
  Caller Process ID: 0x384
  Caller Process Name: C:\Windows\System32\winlogon.exe
Network Information:
  Workstation Name: computer name
  Source Network Address: IP address
  Source Port: 0
Detailed Authentication Information:
  Logon Process: User32
  Authentication Package: Negotiate
  Transited Services: -
  Package Name (NTLM only): -
  Key Length: 0

Logon Type 7 indicates the user typed a wrong password at a password-protected screen saver.

Now we know which cause to target and where to look.
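If you want to triage lockouts in bulk, a small script can help. The sketch below assumes you have exported your 4740 events to a CSV file; the column names (EventID, CallerComputerName) are hypothetical and would need to match your actual export.

# Triage sketch: count lockouts per caller computer to find the machine to visit first.
import csv
from collections import Counter

def lockout_sources(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == "4740":
                counts[row.get("CallerComputerName", "unknown")] += 1
    return counts

# Usage: print(lockout_sources("security_events.csv").most_common(5))

The machine at the top of the list is where the stale credential, scheduled task or mapped drive most likely lives.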

Applies to

Microsoft Windows Servers
Microsoft Windows Desktops

Contributors

Ashwin Venugopal, Subject Matter Expert at EventTracker
Satheesh Balaji, Security Analyst at EventTracker

Index now, understand later

Late binding is a programming mechanism in which the method or function being called on an object is looked up by name at runtime. This contrasts with early binding, where everything must be known in advance; early binding is efficient but incredibly restrictive. Late binding, favored in dynamic object-oriented languages, trades a little efficiency for flexibility. After all, how can everything be known in advance?

In EventTracker, late binding allows us to continue learning and leveraging new understanding instead of getting stuck in whatever was sensible at the time of indexing. The upside is that it is very easy to ingest data into EventTracker without knowing much (or anything) about its meaning or organization. Use any one of several common formats/protocols, and voila, data is indexed and available for searching/reporting.

As understanding improves, users can create a “Knowledge Pack” to describe the indexed data in reports, search output, dashboards, correlation rules, behavior rules, etc. There is no single, forced “normalized” schema, and thus no connectors to transform incoming data to a fixed schema.

As your understanding improves, the knowledge pack improves and so does the resulting output. And oh by the way, since the same data can be viewed by two different roles in very different ways, this is easily accommodated in the Knowledge Pack. Thus the same data (e.g., Login failures) can be viewed in one way by the Security team (in real time, as an alert, with trends) and in an entirely different way by the Compliance team (as a report covering a time-span with annotation to show due care).
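Here is a minimal Python sketch of the idea, sometimes called schema-on-read: raw lines are indexed untouched, and a “knowledge pack” of patterns is applied at search time, so it can be improved later without re-ingesting anything. The log lines and regex are illustrative only.

# Schema-on-read sketch: index raw lines now, interpret them later.
import re

raw_index = [
    "Mar 29 10:01:02 host1 sshd[412]: Failed password for root from 203.0.113.9",
    "Mar 29 10:01:05 host1 sshd[412]: Accepted password for alice from 10.0.0.8",
]

# The 'knowledge pack': named views over the same untouched raw data.
knowledge_pack = {
    "login_failure": re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)"),
}

def search(view: str):
    pattern = knowledge_pack[view]
    return [m.groupdict() for line in raw_index if (m := pattern.search(line))]

print(search("login_failure"))  # [{'user': 'root', 'src': '203.0.113.9'}]

Improving a view means editing one regex; the indexed data never has to be touched or reloaded.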

Hallmarks of a successful security monitoring team

Over the years, we have seen many approaches to implementing a security monitoring capability.

The “checkbox mentality” is common: the team uses out-of-the-box functionality, perhaps including canned rules and reports, merely to meet a specific regulation.

The “big hero” approach is found in chaotic environments where tools are implemented with no planning or oversight, in a very “just do it” fashion. The results may be fine, but they are lost when the “big hero” moves on or loses interest.

The “strict process” organizations that implement a waterfall model and have rigid processes for change management and the like frequently lack the agility and dynamics required by today’s constantly evolving threats.

So what, then, are the hallmarks of a successful approach? Augusto Barros described these factors here. Three factors are common:

  • Good people: Team members who know the environment and can create good use cases, and members who know the selected technology and can weave rules and configuration to suit.
  • Lightweight, but clear processes: Recognize that it’s very hard to move from good ideas to real (and deployed) use cases without processes. Absent these, things die a slow death.
  • Top-down and lateral support: The security team may have good people and processes to put together the use cases, but they are not an island. They will need continuous support to bring in new log sources, context data and the knowledge about the business and the environment required for implementation and optimization. They will need other people’s (IT ops, business application specialists) time and commitment, and that’s only possible with top-down support and empowerment.

Since it’s quite hard to get all of this right, an increasingly popular approach is to split the problem between the SIEM vendor and the buyer, since each has strengths critical to success. The SIEM vendor is expert with the technology and likely has well-defined processes for implementation and operational success, whereas the buyer knows the environment intimately. Together, good use cases can be crafted. Escalations from the SIEM vendor, who performs the monitoring, are passed to the buyer’s team, which provides lateral support. This approach can ramp up very quickly, since each team plays to its existing strengths.

The Gartner term for this approach is “co-managed SIEM.”

Want to get started quickly? Here is a link for you.

Where to focus efforts: Endpoint or Network?

The release of EventTracker 8 with new endpoint threat detection capabilities has led many to ask: a) how to obtain these new features, and b) where monitoring efforts should focus, on the endpoint or on traditional attack vectors.

The answer to “a” is fairly simple and involves upgrading to the latest version; if you have licensed the suitable modules, the new features are immediately available to you.

The answer to “b” is not so simple and depends on your particular situation. After all, endpoint threat detection is not a replacement for signature-based network packet sniffers. If your network permits BYOD, allows business partners to connect entire networks to yours, or permits remote access, then network-based intrusion detection is a must (you can hardly insist on installing sensors on employee-owned devices).

On the other hand, malware can be everywhere, and anti-virus effectiveness is known to be weak. Phishing and drive-by exploits are real things. Perhaps even an accurate inventory of endpoints (think traveling laptops) is hard. All of this argues for endpoint-focused efforts.

So really, it’s not endpoint or network-focused monitoring; rather it’s endpoint and network-focused monitoring efforts.

Feeling overwhelmed at having to deploy/manage so much complexity? Help is at hand. Our co-managed solution called SIEM Simplified is designed to take the sting out of the cost and complexity of mounting an effective defense.

The fallacy of “protect critical systems”

Risk management 101 says you can’t possibly apply the same safeguards to all systems in the network. Therefore, you must classify your assets and apply greater protection to the “critical” systems—the ones where you have more to lose in the event of a breach. And so, desktops are considered less critical as compared to servers, where the crown jewels are housed.

But think about this: an attacker will most likely probe for the weakly defended spot, and thus many widespread breaches originate at the desktop. In fact, attackers sometimes discover that crown jewels are also available at the workstations of key employees (e.g., the CEO’s assistant), in which case there is no need to attack a hardened server at all.

So while it still makes sense to mount better defenses for critical systems, it’s equally sensible to be able to investigate compromised systems regardless of their criticality. To do so, you must be gathering telemetry from all systems. You may not be able to do this on BYOD devices, but you should definitely think about gathering data from beyond just “critical systems.”

The ETDR functionality built in to the EventTracker 8 sensor (formerly agent) for Windows lets you collect this telemetry easily and efficiently. Given the current threat landscape, it is well worth covering not just critical systems but also desktops with this technology.

What’s new in EventTracker 8? Find out here.

Security Subsistence Syndrome

Security Subsistence Syndrome (SSS) is defined as a mindset in an organization that believes it has no security choices and is underfunded, so it minimally spends to meet perceived statutory and regulatory requirements.

Andy Ellis describes this mindset as one “with attitude, not money. It’s possible to have a lot of money and still be in a bad place, just as it’s possible to operate a good security program on a shoestring budget.”

In the face of overwhelming evidence that traditional defenses such as signature based anti-virus and firewalls are woefully inadequate against modern threats, SSS leads defenders to proclaim satisfaction because they have been diligent in implementing these basic precautions.

However, people who deal with incident response today quietly assume that the malware will not be detected by whatever anti-virus tools are installed. The question of “does AV detect it?” never even comes up anymore. In their world, anti-virus effectiveness is basically 0% and this is not a subject of any debate. This is simply a fact of their daily life, as noted here.

So how does the modern IT manager defend effectively (and efficiently — since cost is always a concern) against this threat landscape?

The answer is in a suite of technologies now called endpoint threat detection and response (ETDR or EDR). These are IT analytics solutions that provide visibility and insight into abnormal behavior which could represent potential threats and risks, enabling enterprises to improve their security posture. A sensor at the endpoint detects the launch of new processes and compares the MD5 (or SHA) hash of each process image against known lists to determine whether it has been seen before and is trusted.
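Reduced to its essence, the check looks something like the Python sketch below. The trusted-hash set is a stand-in; in practice it would come from a reputation service or a vendor-maintained list, not a hard-coded literal.

# ETDR-style check in miniature: hash a newly launched process image and ask
# whether the hash has been seen and trusted before.
import hashlib

TRUSTED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # example entry only
}

def image_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verdict(path: str) -> str:
    return "known/trusted" if image_hash(path) in TRUSTED_SHA256 else "first seen: investigate"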

Can your SIEM provide ETDR? EventTracker can. Time to upgrade?

Can you defeat a casual attacker?

The news is rife with stories of “advanced” and “persistent” attacks, much as it is with exotic health problems like Ebola. The reality is that you are much more likely to come down with the common cold than with Ebola. Thus, it makes more sense to pay close attention to what the Centers for Disease Control has to say about the cold than to stockpile Ebola serum.

In a similar vein, how good is your organization at fighting basic, commodity attacks?

The scary monsters called 0-days, advanced/persistent attacks and state-sponsored superhackers are real. But before worrying about them, how are you set up against traditional intrusion attempts that use (5+) year-old tools, tactics and exploits? After all, the vast majority of successful attacks are low-tech and old school.

Want to rapidly improve your security maturity? Consider SIEM Simplified, our surprisingly affordable service that can protect you from 90% of the attacks for 10% of the do-it-yourself cost.

When is an alert not an alert?

The Riddler is one of Batman’s enduring enemies who takes delight in incorporating riddles and puzzles into his criminal plots—often leaving them as clues for the authorities and Batman to solve.

Question: When is a door, not a door?
Answer: When it’s ajar.

So riddle me this, Batman: When is an alert not an alert?

EventTracker users know that one of its primary functions is to apply built-in knowledge to reduce the flood of all security/log data to a much smaller stream of alerts. However, in most cases, without applying local context, this is still too noisy, so a risk score is computed which factors in the asset value and CVSS score of the source.

This allows us to separate “alerts” into different priority levels. The broad categories are:

  • Actionable Alerts: these require immediate attention because they are likely to affect the network or critical data. The analogy is a successful break-in, with the intruder inside the premises.
  • Awareness Alerts: there may not be anything to do, but administrators should be aware and perhaps plan to shore up defenses. The analogy is bad guys lurking on your street, observing when you enter and exit the premises and when it’s unoccupied.
  • Compliance Alerts: these may affect your compliance posture and so bear either awareness or action on your part.
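To make the risk-scoring idea above concrete, here is a toy model in Python. The multiplicative formula, weights and thresholds are invented for illustration; they are not EventTracker’s actual algorithm.

# Toy priority model: combine the source's CVSS score with the asset's value.
def risk_score(cvss: float, asset_value: int) -> float:
    return cvss * asset_value  # cvss 0-10, asset_value 1-10

def priority(score: float) -> str:
    if score >= 70:
        return "Actionable"
    if score >= 30:
        return "Awareness"
    return "Compliance/informational"

print(priority(risk_score(cvss=9.8, asset_value=9)))  # Actionable
print(priority(risk_score(cvss=5.0, asset_value=2)))  # Compliance/informational

The same raw alert lands in very different buckets depending on what it touches, which is the whole point of applying local context.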

And so, there are alerts and there are alerts. Over-reacting to awareness or compliance alerts will drain your energy and eventually sap your enthusiasm, not to mention cost you in real terms. Under-reacting to actionable alerts, meanwhile, hurts you through inaction.

Can your SIEM differentiate between actionable and awareness alerts?
EventTracker can.
Find out more here.

Can you predict attacks?

The “kill chain” is a military concept related to the structure of an attack. In the InfoSec area, this concept is a way of modeling intrusions on a computer network.

Threats occur in up to seven stages. Not all threats need to use every stage, and the actions available at each stage can vary, giving an almost unlimited diversity to attack sets.

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command and Control
  • Actions on Objective

Of course, some of these steps can happen outside the defended network, and in those cases it may not be possible or practical to identify or counter them. However, the most common variety of attack is unstructured in nature, originates from external sources, and uses scripts or widely available cracking tools. Such attacks can be identified by a number of techniques.

Evidence of such activity is a precursor to an attack. If defenders observe the activity coming from external sources, it is important to review what the targets are; these can often be uncovered by a penetration test. Repeated attempts against specific targets are a clue.

A defense-in-depth strategy gives defenders multiple clues about such activities. These include IDS systems that detect attack signatures, logs showing the activities and vulnerability scans that identify weaknesses.

To be sure, defending requires carefully orchestrated expertise. Feeling overwhelmed? Take a look at our SIEM Simplified offering where we can do the heavy lifting.

The Attack on your infrastructure: a play in three parts

To defend against an attacker, you must know him and his methods. The typical attack launched on an IT infrastructure can be thought of in three stages.

Part 1: Establish a beachhead

The villain lures the unsuspecting victim into installing malware. This can be done in a myriad of ways: by sending an attachment from an apparently trustworthy source, by causing a drive-by infection through a website hosting malware, or via a USB drive. Attackers target the weakest link, the less guarded desktop or a test system; frontal assaults against heavily fortified and carefully watched servers are not practical.

Once installed, the malware usually copies itself to multiple spots to deter eradication, and it may “phone home” for further instructions. It usually lurks in the background, trying to obtain passwords or system lists to enable Part 2.

Part 2: Move laterally

As a means to deter removal, malware will move laterally, copying itself to other machines/locations. This movement is also often from peripheral to more central systems (e.g., from workstations to file shares).

Part 3: Exfiltrate secrets

Having patiently gathered up secrets (intellectual property, passwords, credit card info, PII, etc.), usually into zip or rar archives, the malware (or attacker) now sends the data outside the network, back to the attacker.

How do you defend yourself against this? A SIEM solution can help, or a managed SIEM solution if you are short on expertise.

Outsourcing versus As-a-Service

The (toxic) term “outsourcing” has long been vilified as the substitution of onshore jobs with cheaper offshore people. As noted here, outsourcing, by and large, has really always been about people. The story of outsourcing to date is of service providers battling it out to deliver people-based services more productively, promising delivery beyond merely doing the existing work significantly cheaper and a bit better.

When it comes to SIEM-as-a-service, though, the game-changer is that today’s service works as a genuine blend of people plus technology. This empowers service buyers to focus on value addition through meaningful and secure data, enabled by a sophisticated tool. All good, but recognize that this is fundamentally made possible by smart people working together: your team and ours.

Business services today are about speed to business impact. They are about simplification. They are about removing any blockage or obstacle that dilutes this business impact.

We refer to our SIEM Simplified service offering as co-managed. Inherent in the term is the acknowledgement that our team must work with yours to deliver value. The “simplified” part is all about the removal of unneeded complexity.

That transition to As-a-Service is all about simplification — removing unnecessary complexity, poor processes and manual intervention to make way for a more nimble way of running a business. It is also about prioritizing where to focus investments to achieve maximum benefit and impact for the business from its operations.

Do you need a Log Whisperer?

Quick, take a look at these four log entries

  1. Mar 29 2014 09:54:18: %PIX-6-302005: Built UDP connection for faddr 198.207.223.240/53337 gaddr 10.0.0.187/53 laddr 192.168.0.2/53
  2. Mar 12 12:00:08 server2 rcd[308]: id=304 COMPLETE 'Downloading https://server2/data/red-carpet.rdf' time=0s (failed)
  3. 200.96.104.241 - - [12/Sep/2006:09:44:28 -0300] "GET /modules.php?name=Downloads&d_op=modifydownloadrequest&%20lid=-%20UNION%20SELECT%200,username,user_id,
    user_password,name,%20user_email,user_level,0,0%20FROM%20nuke_users HTTP/1.1" 200 9918 "-"
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)"
  4. Object Open:Object Server: Security
    Object Type: File
    Object Name: E:\SALES RESOURCE\2010\Invoice 2010 7-30-2010.xls
    Handle ID: –
    Operation ID: {0,132259258}
    Process ID: 4
    Image File Name:
    Primary User Name: ACCOUNTING$
    Primary Domain: PMILAB
    Primary Logon ID: (0x0,0x3E7)
    Client User Name: Aaron
    Client Domain: CONTOSO
    Client Logon ID: (0x0,0x7E0808E)
    Accesses: DELETE
    READ_CONTROL
    ACCESS_SYS_SEC
    ReadData (or ListDirectory)
    ReadEA
    ReadAttributes
    Privileges: –
    Restricted Sid Count: 0
    Access Mask: 0x1030089

Any idea what they mean?

No? Maybe you need a Log Whisperer — someone who understands these things.

Why, you ask?
Think security — aren’t these important?

Actually #3 and #4 are a big deal and you should be jumping on them, whereas #1 and #2 are routine — nothing to get excited about.

Here is what they mean:

  1. A Cisco firewall allowed a packet through (not a “connection” because it’s a UDP packet — never mind what the text says)
  2. An OpenSuSE Linux machine attempting a software update, with some packages failing to update
  3. A SQL injection attempt on PHP Nuke
  4. Access denied to a shared resource in a Windows environment

Log Whisperers are at the heart of our SIEM Simplified service. They are the experts who review logs, determine what they mean and provide remediation recommendations in simple, easy-to-understand language.

Not to be confused with these guys.

And no, they don’t look like Robert Redford either. You are thinking about the Horse Whisperer.

Three Indicators of Attack

For many years now, the security industry has been somewhat reliant on “indicators of compromise” (IoCs) to act as clues that an organization has been breached. Every year, companies invest heavily in digital forensic tools to identify the perpetrators and the parts of the network that were compromised in the aftermath of an attack.

All too often, businesses realize they are the victims of a cyber attack only once it’s too late. It’s only after an attack that a company finds out what made it vulnerable and what it must do to make sure it doesn’t happen again. This reactive stance was never all that useful to begin with, and given today’s threat landscape it is totally undone, as described by Ben Rossi.

Given the importance of identifying these critical indicators of attack (IoAs), here are three that are both meaningful and relatively easy to detect:

  1. After hours: Malware detection after office hours, or unusual activity such as access to workstations or, worse yet, servers and applications, should raise a red flag (a minimal detection sketch follows this list).
  2. Destination Unknown: Malware tends to “phone home” for instructions or to exfiltrate data. Connections from non-browser processes, on non-standard ports, or to poor-reputation “foreign” destinations are a low-noise indicator of breaches.
  3. Inside Out: More than 75% of attacks, per the Mandiant M-Trends report, are carried out using stolen credentials. It is often said that insider attacks are much less common but much more damaging. When an outsider becomes a (privileged) insider, your worst nightmare has come true.
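As promised above, here is a minimal after-hours sketch in Python. The business hours and the event shape are assumptions for illustration.

# After-hours IoA sketch: flag interactive logons outside business hours.
from datetime import datetime

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time, Monday-Friday

def after_hours(event_time: datetime) -> bool:
    return event_time.hour not in BUSINESS_HOURS or event_time.weekday() >= 5

print(after_hours(datetime(2015, 6, 10, 2, 30)))   # True  -> raise a red flag
print(after_hours(datetime(2015, 6, 10, 14, 0)))   # False -> business as usual

Trivial on its own, but combined with asset criticality (a server versus a kiosk) it becomes a genuinely low-noise signal.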

Can you detect out-of-the-ordinary or new behavior? To quote the SANS Institute: Know Abnormal to fight Evil. Read more here.

It’s all about detection, not protection

What did the 2015 Verizon DBIR show us?
• 200+ days on average before persistent attackers are discovered within the enterprise network
• 60%+ breaches are reported by a third party
• 100% of breached networks were up to date on Anti Virus

We’ve got detection deficit disorder.
And it’s costing us. Direly!

Think of the time and money spent in detecting, with some degree of confidence, the location of Osama Bin Laden. Then think of the time and money to dispatch Seal Team 6 on the mission. Detection took ten years and cost hundreds of millions of dollars while remediation took 10 days and a few million dollars.

The same situation is happening in your network. You have, for example, 5,000 endpoints, and of those maybe five are compromised as you’re reading this. But which ones? How do you get actionable intelligence so that you can dispatch your own Seal Team 6?

This is the problem EventTracker 8 was designed to address: continuous digital forensics data collection using purpose-built sensors. Machine learning at the EventTracker Console sifts through the collected data to identify possible malware, lateral movement and exfiltration of data. The processes are all backed by the experts of the SIEM Simplified service.

The Detection Deficit

The gap between the ‘time to compromise’ and the ‘time to discover’ is the detection deficit. According to the Verizon DBIR, the trend lines of these two have been diverging significantly in the past few years.

Worse yet, the data shows that attackers are able to compromise the victim in days, but thereafter spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, the discovery is made by a third party.

This trend points to an ongoing detection deficit disorder. The suggestion is that defenders struggle to uncover the indicators of compromise.

While the majority of these attacks are via malware inserted into the victim’s system by a variety of methods, there is also theft of credentials, which makes it look like an inside job.

To overcome the detection deficit, defenders must look for other common evidence of compromise. These include: command and control activity, suspicious network traffic, file access and unauthorized use of valid credentials.

EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.

The Agent Advantage

For some time, “We use an agent for that” was a death knell for many security tools, while “agent-less” was the only game in town worth playing. Yes, people tolerate AV and device management agents, but that is where many organizations seemed to draw the line. And an agent just to collect logs? You’ve got to be kidding!

As Richard Bejtlich pointed out in this blog from 2006, enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities.

Let’s not confuse the means with the end. The end is security information/event monitoring; getting the logs is the means to that end. The threatscape of 2015 is dominated by polymorphic, persistent malware (dropped by phishing and stolen credentials), while our mission remains the same: defend the network.

Malware doesn’t write logs, but it does leave behind trace evidence on the host, evidence that you can’t get by monitoring the network. In any case, the rise of HTTPS by default has limited the ability of the network monitor to peer inside the payload.

Thus the Agent Advantage or the Sensor Advantage if you prefer.

Endpoints have first-hand information when it comes to non-signature-based attacks: processes, file accesses, configuration changes, network traffic, etc. This data is critical to early detection of malicious activity.

Is an “agent” just to collect logs not doing it for you? How about a “sensor” that gathers endpoint data critical to detect persistent cyber attacks? That is the EventTracker 8 sensor which incorporates DFIR and UBA.

Why host data is essential for DFIR

Attacks on our IT networks are a daily fact of life. As a defender, your job is to make the attacker’s life harder and to deter them into going elsewhere. Almost any attack leaves some type of host artifact behind.

If defenders are able to quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target/host and usually not visible to network monitors.

Many modern attacks use malware that is dropped and executed on the target machine or hollows out existing valid processes to spawn child processes that can be hijacked.

A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, then the malware may be called svhost.exe. Another tactic is to maintain the same name as the legitimate EXE but have it executed from a different path.

EventTracker 8 includes a new module called Advanced Security Analytics, which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE, including its hash, its full path name, its parent process, the publisher name and whether it is digitally signed. Then, at the EventTracker Console, if the hash is being seen for the first time, it is compared against lists of known malware from sources such as virustotal.com, virusshare.com, etc. Analysts can also check the digital signature, publisher name and path to determine whether further investigation is warranted.
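The “first seen” logic can be sketched in a few lines of Python. The flat-file persistence is a stand-in; a real deployment would use a database plus the reputation and signature checks described above.

# 'First seen' sketch: remember every process-image hash observed so far and
# surface only the new ones for analyst review.
SEEN_FILE = "seen_hashes.txt"

def load_seen() -> set:
    try:
        with open(SEEN_FILE) as f:
            return {line.strip() for line in f}
    except FileNotFoundError:
        return set()

def first_seen(sha256: str) -> bool:
    seen = load_seen()
    if sha256 in seen:
        return False
    with open(SEEN_FILE, "a") as f:
        f.write(sha256 + "\n")
    return True  # new hash: compare against malware lists, check the signature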

When tuned properly, this capability results in few false positives and can rapidly expose attackers.

Want more information on EventTracker 8? Click here.

User location affinity

It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials make up a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytic package which includes behavior rules to self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive login. Once a baseline of behavior is established, out of ordinary behavior is identified for investigation. This is done in real-time and across all enterprise assets.

For example, if user susan typically logs in to wks5, but her stolen credentials are used to log in to server6, that login is identified as out of the ordinary and tagged for closer inspection.
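In miniature, the baseline logic looks something like this Python sketch; the event shape and learning scheme are simplified for illustration (EventTracker’s actual heuristics are richer than a set lookup).

# Location-affinity sketch: learn which machines each user normally logs in
# to, then flag logins to machines outside that baseline.
from collections import defaultdict

baseline = defaultdict(set)

def observe(user: str, host: str) -> bool:
    """Returns True if this (user, host) pair is out of the ordinary."""
    novel = host not in baseline[user] and bool(baseline[user])
    baseline[user].add(host)
    return novel

observe("susan", "wks5")            # learning phase
print(observe("susan", "wks5"))     # False: her usual machine
print(observe("susan", "server6"))  # True: flag for closer inspection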

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response.

The PCI-DSS Compliance Report

A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).

Is this you? Here is how you spell relief – SIEM Simplified.

Which is it? Security or Compliance?

This is a classic chicken-and-egg question, but the two are too often thought to be the same. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures: protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a technical or non-technical requirement and periodically someone verifies that you have met them.

Compliance requirements are established by standards bodies, which obviously do not know your network. They are established for the common good, because of industry-wide concern that information is not being protected, usually because the security is poor. When you see an emphasis on compliance over security, it’s too often because the organization does not want to take the time to ensure that the network and its information are secure, so it relies on compliance requirements to feel better about its security.

The problem is that this gives a false sense of hope: the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as the breaches at Sony, Target, TJMaxx and so many others show. Although there are implementations of compliance that will make you more secure, you cannot base your company’s security policy on a third party’s compliance requirements.

So what comes first? Wrong question! Let’s rephrase: there needs to be a healthy relationship between the two, but one cannot substitute for the other.

Threat data or Threat Intel?

Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good right? More is better, right? Actually, not quite.
Look closely. You are confusing “intelligence” with “data”.

As the Lt. Commander of the Starship Enterprise would tell you, Data is not Intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets, and how?”, which in turn leads to a coherent defense.

The steps to use threat data are easily explained (a minimal sketch of the first two follows):
1) Compare observations on the local network against the threat data.
2) Alert on matches.
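Steps 1 and 2 really are that simple, as this Python sketch shows; both the feed and the observations are stand-ins.

# Steps 1 and 2 in miniature: intersect locally observed destination IPs with
# a threat-data feed and alert on matches.
threat_feed = {"203.0.113.9", "198.51.100.23"}
observed_destinations = {"10.0.0.5", "203.0.113.9", "172.16.4.2"}

for match in observed_destinations & threat_feed:
    print(f"ALERT: traffic to listed address {match}")  # step 3 is the hard part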

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. To do this effectively requires human skills that combine expert knowledge of the modern ThreatScape with knowledge of the network architecture.

This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion) than expert knowledge of the modern ThreatScape and of the contours and limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but a) such staff are hard to come by, and b) budget for them is even harder.

What now?

Consider a co-managed solution like SIEM Simplified where the expert knowledge of the modern ThreatScape in the context of your network is provided by an external group. When this is combined with your internal resources to co-manage the problem, it can result in improved coverage at an affordable price point.

How to shoot yourself in the foot with SIEM


Six ways to shoot yourself with SIEM technology:
1) Don’t plan; just jump in.
2) Have no defined scope or use cases; whatever.
3) Confuse SIEM with Log Management.
4) Monitor noise; apply no filters.
5) Don’t correlate with any other technologies, e.g., IDS, vulnerability scanner, Active Directory.
6) Staff poorly or not at all.

For grins, here’s how programmers shoot themselves in the foot:

ASP.NET
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
C
You try to shoot yourself in the foot, but find out the gun is actually a howitzer cannon.
C++
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
JavaScript
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SQL
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
UNIX
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

Click here for the Top 6 Uses of SIEM.

Venom Vulnerability exposes most Data Centers to Cyber Attacks

Just after a new security vulnerability surfaced on Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger.

Dubbed VENOM, for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could expose most data centers to malware attacks, though so far only in theory.

Yes, the risk of the Venom vulnerability is theoretical: no real-world exploitation has been seen yet. Last year’s HeartBleed bug, on the other hand, was practically exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now let’s learn more about Venom.

Venom (CVE-2015-3456) resides in the virtual floppy drive code used by several computer virtualization platforms and, if exploited, could allow an attacker to escape from a guest virtual machine (VM) and gain full control of the operating system hosting it, as well as any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, the senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

Geffner also added that, so far, no known exploit can successfully take advantage of the vulnerability. Still, Venom is critical and disturbing enough to be considered a high-priority bug.

What successful exploitation of Venom requires:
An attacker sitting on the guest virtual machine would need sufficient permissions to access the floppy disk controller I/O ports.

On a Linux guest machine, that means either root access or elevated privilege. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

That said, comparing Venom with HeartBleed is not really apt. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply is not exploitable at the same scale.

Flaws like Venom are typically used in a highly targeted attack such as corporate espionage, cyber warfare or other targeted attacks of these kinds.

Did Venom poison cloud services?

Potentially more concerning is that many of the large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and are potentially vulnerable to Venom.

However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

Microsoft’s Azure cloud service, on the other hand, uses its own homegrown virtualization hypervisor technology, and therefore its customers are not affected by the Venom bug.

Meanwhile, Google also assured that its Cloud Service Platform does not use the vulnerable software, thus was never vulnerable to Venom.

Patch now! Protect yourself

Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to Venom. Red Hat recommends that its users update their systems using the commands "yum update" or "yum update qemu-kvm."

Once done, you must power off all your guest virtual machines for the update to take effect, then restart them to be on the safe side. Remember, merely rebooting the guest operating system without powering it off is not enough, because it would still use the old QEMU binary.

See more at Hacker News.

Five quick wins to reduce exposure to insider threats

Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)

A data breach has serious consequences, both direct and indirect. Lost revenue and a tarnished brand reputation both inflict harm long after incident resolution and post-breach clean-up. Still, many organizations don’t take the necessary steps to protect themselves from a potentially detrimental breach.

But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”

How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. Quick wins also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.

1) Control the use of Administrator privilege
The misuse of administrative privileges is a primary method for attackers to spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running with administrative privileges is fooled into surfing to a website hosting attacker content that automatically exploits the browser. The file or exploit contains executable code that runs on the victim’s machine. Since the victim’s account has administrative privileges, the attacker can take over the machine completely and install malware to find administrative passwords and other sensitive data.

2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees only have access to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require more strict protection.

3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?

4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise, and people fulfill important functions at every stage of business operations. Attackers are very conscious of this and plan their exploits accordingly: carefully crafting phishing messages that look like routine and expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; using nominally non-security-critical systems as jump points or bots….

5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks. By this time, incidents often become much more costly than needed. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.

A guiding principle of IT Security is “Prevention is ideal but detection is a must.”

Have you reduced your exposure?

Secure, Usable, Cheap: Pick any two

This fundamental tradeoff between security, usability, and cost is critical. Yes, it is possible to have both security and usability, but at a cost in money, time and personnel. Making something cost-efficient and usable, or even secure and cost-efficient, may not be very hard; it is, however, far more difficult and time-consuming to make something both secure and usable, because security takes planning and resources.

For a system administrator, usability is at the top of the list. For a security administrator, security comes first – no surprise there, really.

What if I told you that the two job roles are orthogonal? What gets a sysadmin bouquets will get a security admin brickbats, and vice versa.

Oh and when we say “cheap” we mean in terms of effort – either by the vendor or by the user.

Security administrators face some interesting tradeoffs. Fundamentally, the choice is between a system that is secure and usable, one that is secure and cheap, or one that is cheap and usable. Unfortunately, we cannot have everything. The best practice is not to make the same person responsible for both security and system administration; the goals of those two tasks are far too often in conflict for anyone to succeed at both.

PCI-DSS 3.1 is here – what now?

On April 15, 2015, the PCI Security Standards Council (PCI SSC) announced the release of PCI DSS v3.1. This follows closely on the heels of PCI DSS 3.0, which just went into full effect on January 1, 2015. There is a three-year cycle between major updates of PCI DSS and, outside of that cycle, the standard can be updated to react to threats as needed.

The major driver of PCI DSS 3.1 is the industry’s conclusion that SSL version 3.0 is no longer a secure protocol and therefore must be addressed by the PCI DSS.

What happened to SSL?
The last released version of the encryption protocol to be called “SSL”—version 3.0—was superseded by “TLS,” or Transport Layer Security, in 1999. While weaknesses were identified in SSL 3.0 at that time, it was still considered safe for use up until October of 2014, when the POODLE vulnerability came to light. POODLE is a flaw in the SSL 3.0 protocol itself, so it’s not something that can be fixed with a software patch.

Bottom line
Any business software running SSL 2.0 or 3.0 must be reconfigured or upgraded.
Note: Most SSL/TLS deployments support both SSL 3.0 and TLS 1.0 in their default configuration. Newer software may support SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2. In these cases the software simply needs to be reconfigured. Older software may only support SSL 2.0 and SSL 3.0 (if this is the case, it is time to upgrade).

How to detect SSL/TLS usage and version?
A vulnerability scan from EventTracker Vulnerability Assessment Service (ETVAS), or another scanner, will identify insecure implementations.
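
For a quick manual spot check to complement a scanner, you can ask a server which protocol version it negotiates. Here is a minimal sketch in Python (the host name is a placeholder). Note the limitation: this reports the best version negotiated with a modern client; to prove SSL 3.0 is disabled you need a client that can still speak SSL 3.0, such as a scanner or a legacy build of openssl s_client -ssl3.

import socket
import ssl

def negotiated_version(host, port=443):
    """Connect and report the SSL/TLS version the server negotiates."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # we only care about the protocol version
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()    # e.g. 'TLSv1.2' or 'TLSv1.3'

print(negotiated_version('www.example.com'))  # placeholder host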

SSL/TLS is a widely deployed encryption protocol. The most common use of SSL/TLS is to secure websites (HTTPS), though it is also used to:
• Secure email in transit (SMTPS or SMTP with STARTTLS, IMAPS or IMAP with STARTTLS)
• Share files (FTPS)
• Secure connections to remote databases and secure remote network logins (SSL VPN)
SIEM Simplified customers
The EventTracker Control Center will have contacted you to correctly configure the Windows server instance hosting the EventTracker Manager Console, to comply with the guideline. You must upgrade or reconfigure all other vulnerable servers in your network.

If you subscribe to ETVAS, the latest vulnerability reports will highlight any servers that must be reconfigured along with detailed recommendations on how to do so.

Three myths of IT Security

Myth 1: Hardening a system makes it secure
Security is a process, to be evaluated on a constant basis. There is nothing that will put you into a “state of security.” Did you really think that simply applying some hardening guide to a system will make it secure?

Threats exploit unpatched vulnerabilities, and none of them would have been stopped by hardening settings alone. Few settings can prevent your network from being attacked through an unpatched vulnerability.

Myth 2: If We Hide It, the Bad Guys Won’t Find It
Also known as security by obscurity, hiding the system doesn’t really help. Take turning off SSID broadcast in wireless networks: not only do you now have a network that is non-compliant with the standard, but your clients will also prefer a rogue network with the same name over the legitimate one. And it takes only a few minutes to find the network anyway, given the proper tools. Another example is changing the banners on your web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the web site is running anyway. Second, most of the bad guys are not smart enough to do that, so they just try all the exploits, including the IIS ones. Yet another example is renaming the Administrator account. It is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account. They now have an account called “Janitor3” with a comment of “Built in account for administering the computer/domain.” This is not really likely to fool anyone.

Myth 3: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. In some environments you are willing to break things in the name of protection that you are not willing to break in others.

Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive.

Safeguards should be applied in proportion to risk.

Does sharing Threat Intel work?

In the next couple of months, Congress will likely pass CISA, the Cybersecurity Information Sharing Act. Its purpose is to “codify mechanisms for enabling cybersecurity information sharing between private and government entities, as well as among private entities, to better protect information systems and more effectively respond to cybersecurity incidents.”

Can it help? It’s interesting to note two totally opposing views.

Arguing that it will help is Richard Bejtlich of Brookings. His analogy: threat intelligence is in some ways like a set of qualified sales leads provided to two companies. The first has a motivated sales team, polished customer acquisition and onboarding processes, authority to deliver goods and services, and quality customer support. The second has a small sales team, or perhaps no formal sales team; its processes are broken, and it lacks authority to deliver any goods or services. Now consider what happens when each business receives a bundle of qualified sales leads. Which business will make the most effective use of its list of profitable, interested buyers? The answer is obvious, and there are parallels to the information security world.

Arguing that it won’t help at all is Robert Graham, the creator of BlackICE Guard. His argument is “CISA does not work. Private industry already has exactly the information sharing the bill proposes, and it doesn’t prevent cyber-attacks as CISA claims. On the other side, because of the false-positive problem, CISA does far more to invade privacy than even privacy advocates realize, doing a form of mass surveillance.”

In our view, Threat Intel is a new tool. Its usefulness depends on the artisan wielding it; a poorly skilled user will get less value.

Want experts on your team but don’t know where to start? Try our managed service SIEM Simplified. Start quick and leverage your data!

Threat Intelligence vs Privacy

On Jan 13, 2015, the U.S. White House published a set of legislative proposals on cyber security threat intelligence (TI) sharing between private and public entities. Given the breadth of cyber attacks across the Internet, the sharing of information between commercial entities and the US Government is increasingly critical. Absent sharing, first responders charged with cyber defense are at a disadvantage in detecting and responding to common attacks.

Should this cause a privacy concern?
Richard Bejtlich, senior fellow at Brookings says “Threat intelligence does not contain personal information of American citizens, and privacy can be maintained while learning about threats. Intelligence should be published in an automated, machine-consumable, standardized manner.”

The White House proposal uses the following definition:
“The term `cyber threat indicator’ means information —
(A) that is necessary to indicate, describe or identify–
(i) malicious reconnaissance, including communications that reasonably appear to be transmitted for the purpose of gathering technical information related to a cyber threat;
(ii) a method of defeating a technical or operational control;
(iii) a technical vulnerability;
(iv) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system inadvertently to enable the defeat of a technical control or an operational control;
(v) malicious cyber command and control;
(vi) any combination of (i)-(v).
(B) from which reasonable efforts have been made to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.”

If you take the above at face value, then a reasonable assumption is that these sorts of cyber threat indicators should not trigger privacy concerns, whether shared between the private sector and the government or within the private sector.

Of course, getting TI and using it effectively are completely different things, as discussed here. Bejtlich reminds us that “private sector organizations should focus first on improving their own defenses before expecting that government assistance will mitigate their security problems.”

Looking for a practical, cost-effective way to shore up your defenses? SIEM Simplified is one way to go about it.

Death by a Thousand cuts

You may recall that back in 2012, then Secretary of Defense Leon Panetta warned of “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.”

This hasn’t quite come to pass has it? Is it dumb luck? Or are we just waiting for it to happen?

In his annual testimony about the intelligence community’s assessment of “global threats,” Director of National Intelligence James Clapper sounded a more nuanced and less hyperbolic tone. “Rather than a ‘cyber Armageddon’ scenario that debilitates the entire U.S. infrastructure, we envision something different,” he said, “We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on U.S. economic competitiveness and national security.”

The reality is that the U.S. is being bombarded by cyber attacks of a smaller scale every day—and those campaigns are taking a toll.

Now the DNI also went on to say “Although cyber operators can infiltrate or disrupt targeted [unclassified] networks, most can no longer assume that their activities will remain undetected, nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.”

Alan Paller of the SANS Institute says “Those words translate directly to a simpler statement: ‘The weapons and other systems we operate today cannot be protected from cyber attack.’ Instead, as a nation, we have to put in place the people and support systems who can find the intruders and excise them fast.”

So what capabilities do you have in this area, given that the attacks against your infrastructure are continuous and ongoing?

Want to do something about it quickly and effectively? Consider SIEM Simplified, our service offering that takes the heavy lifting required to implement such monitoring programs off your hands.

PoSeidon and EventTracker

A new and harmful Point-of-Sale (“POS”) malware has been identified by security researchers at Cisco’s Security Intelligence & Research Group. The team says it is more sophisticated and damaging than previous POS malware programs.

Nicknamed PoSeidon, the new malware family targets POS systems, infects machines and scrapes memory for credit card information, which it then exfiltrates to servers, primarily on .ru domains, for harvesting or resale.

When consumers use their credit or debit cards to pay for purchases from a retailer, they swipe their card through POS systems. Information stored on the magnetic stripe on the back of those cards is read and retained by the POS. If the information on that stripe is stolen, it can be used to encode the magnetic stripe of a fake card, which is then used to make fraudulent purchases. POS malware and card fraud have been steadily rising, affecting large and small retailers. Target, one of the most visible victims of a security breach involving its payment card data, incurred losses approximated at $162 million (before insurance recompense).

PoSeidon employs a technique called memory scraping, in which the RAM of infected terminals is scanned for unencrypted strings that match credit card information. When PoSeidon takes over a terminal, a loader binary is installed so that the malware remains on the target machine even across system reboots. The loader then contacts a command and control server and retrieves a URL pointing to another binary, FindStr, to download and execute. FindStr scans the memory of the POS device for strings (hence its name) and installs a keylogger that looks for number sequences and keystrokes analogous to payment card numbers. The targeted sequences begin with the digits generally used by Discover, Visa, MasterCard and American Express cards (6, 5, 4, and 3 respectively), followed by the corresponding number of digits (16 for the first three, 15 for American Express). This data is then encoded and sent to an exfiltration server.
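
To see why memory scraping is effective, consider how little logic is needed to pick card-like data out of a raw buffer. The sketch below is illustrative only (it is not PoSeidon’s actual code) and applies the same two tests described above: a leading-digit/length pattern and the Luhn checksum that card issuers use.

import re

# Card-like candidates: 15 digits starting with 3 (Amex), or 16 digits
# starting with 4, 5 or 6 (Visa, MasterCard, Discover).
CARD_RE = re.compile(r'\b(?:3\d{14}|[456]\d{15})\b')

def luhn_ok(digits):
    """Standard Luhn checksum used to validate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scrape(buffer):
    """Return card-like strings in a memory or log buffer that pass Luhn."""
    return [m for m in CARD_RE.findall(buffer) if luhn_ok(m)]

# Hypothetical data: 4111111111111111 is a well-known Luhn-valid test number.
print(scrape('order=4111111111111111&name=test&junk=4999999999999999'))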

A whitepaper for detecting and protecting from PoSeidon malware infection is also available from EventTracker.

Tired of keeping up with the ever changing Threatscape? Consider SIEM Simplified. Let our managed SIEM solution do the heavy lifting.

Want to be acquired? Get your cyber security in order!

Washington Business Journal Senior Staff Reporter Jill Aitoro hosted a panel of cyber experts on Feb. 26 at Crystal Tech Fund in Arlington, VA.

The panel noted that how well a company has locked down its systems and data will have a direct effect on how much a potential buyer is willing to shell out for an acquisition — or whether a buyer will even bite in the first place.

Howard Schmidt, formerly CISO at Microsoft, recalled: “We did an acquisition one time — about $10 million. It brought tons of servers, a big IT infrastructure. When all was said and done, it cost more than $20 million to rebuild the systems that had been owned by criminals and hackers for at least two years. That’s a piece of M&A you need to consider.”

Many private investors are doing exactly that, calling in security companies to assess a target company’s cyber security posture before making an offer. In some cases, the result will be to not invest at all, with the venture capitalist telling a company to “get their act together and then call back”.

The Pyramid of Pain

There is great excitement amongst security technology and service providers about the intersection of global threat intelligence with local observations in the network. While there is certainly cause for excitement, it’s worth pausing to ask the question “Is Threat Intelligence being used effectively?”

David Bianco explains that not all indicators of compromise are created equal. The pyramid defines the pain it will cause the adversary when you are able to deny those indicators to them.

Hash Values: SHA1, MD5 or other similar hashes that correspond to specific suspicious or malicious files. Hash values are often used to provide unique references to specific samples of malware or to files involved in an intrusion. EventTracker provides this functionality via its Change Audit feature (a matching sketch follows this list).
IP Addresses: or even net blocks. If you deny the adversary the use of one of their IPs, they can usually recover quickly. EventTracker addresses these via its Behavior Module and the associated IP Reputation lookup.
Domain Names: These are harder to change than IP addresses. EventTracker can either use logs from a proxy or scan web server logs to detect such artifacts.
Host Artifacts: For example, the attacker’s HTTP recon tool may use a distinctive User-Agent string when searching your web content (off by one space or semicolon, for example; or maybe they just put their name in it. Don’t laugh. This happens!). This can be detected by the Behavior Module in EventTracker when focused on the User-Agent string from web server logs.
Tools: Artifacts of the tools the attacker is using (e.g., DLL or EXE names, or hashes) can be detected via the Unknown Process module within EventTracker, part of the Change Audit feature.
Tactics, Techniques & Procedures: An example is detecting pass-the-hash attacks, as called out by the NSA in their white paper and discussed in our webinar “Spotting the adversary with Windows Event Log Monitoring.”
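
Hash-based indicators are trivially easy to operationalize, which is exactly why they cause the adversary so little pain. A minimal matching sketch in Python (the blocklist value is a made-up placeholder, not a real IOC):

import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad SHA-256 values (placeholder, not a real IOC).
BAD_HASHES = {'0' * 56 + 'deadbeef'}

def sha256_of(path):
    h = hashlib.sha256()
    with path.open('rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def scan(directory):
    """Flag files whose hash appears on the blocklist."""
    for p in Path(directory).rglob('*'):
        if p.is_file() and sha256_of(p) in BAD_HASHES:
            print('IOC match:', p)

scan('/tmp')   # placeholder directory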

Bottom line: Having Threat Intelligence is not the same as using it effectively. The former is something you can buy; the latter is something you develop as a capability. It requires not only tools but also persistent, well-trained humans.

Want both? Consider SIEM Simplified.

What good is Threat Intelligence integration in a SIEM?

Bad actors and bad actions are more and more prevalent on the Internet. Who are they? What are they up to? Are they prowling in your network?

The first two questions are answered by Threat Intelligence (TI), the last one can be provided by a SIEM that integrates TI into its functionality.

But wait, don’t buy just yet, there’s more, much more!

Threat Intelligence when fused with SIEM can:
• Validate correlation rules and improve alert baselining by raising the priority of rules that also point at TI-reported “bad” sources
• Detect owned boxes, bots, etc. that call home when on your network
• Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
• Match historical log data against current TI data (see the sketch after this list)
• Review past TI history as key context for reviewed events, alerts, incidents, etc.
• Enable automatic action due to better context available from high-quality TI feeds
• Run TI effectiveness reports in a SIEM (how much TI leads to useful alerts and incidents?)
• Validate web server log source IPs to profile visitors and reduce service to those appearing on bad lists (uncommon)
and the beat goes on…
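
As a flavor of what this fusion looks like at the simplest level, here is a minimal sketch of TI matching over a log stream. The feed values and event records are hypothetical placeholders (TEST-NET addresses); a real SIEM does this at scale, with scoring and context.

# TI-reported "bad" IPs (hypothetical; real feeds hold thousands of entries)
ti_feed = {'203.0.113.7', '198.51.100.23'}

events = [
    {'src_ip': '10.0.0.5',    'action': 'login', 'user': 'alice'},
    {'src_ip': '203.0.113.7', 'action': 'login', 'user': 'admin'},
]

for event in events:
    if event['src_ip'] in ti_feed:
        # A real SIEM would raise rule priority and open an incident here.
        print('TI hit:', event)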

Want the benefits of SIEM without the heavy lifting involved? SIEM Simplified may be for you.

Gathering logs or gathering dust?

Did you wrestle your big-name SIEM vendor into throwing in their “enterprise class” solution at a huge discount as part of the last negotiation? If so, good for you – you should be pleased with yourself for wrangling something so valuable from them. 90% discounts are not unheard of, by the way.

But do you know why they caved and included it? It’s because there is a very high probability that you won’t ever obtain any significant value from it.

You see, the “enterprise class” SIEM solutions from the top-name vendors all require significant trained staff just to get them up and running, never mind tuning them and delivering any real value. The vendor figured you probably don’t have the staff or the time to do any of that, so they can give it away at that huge discount. It adds some value to their invoice, prevents any other vendor from horning in on their turf, and makes you happy – what’s not to like?

The problem of course is that you are not any closer to solving any of the problems that a SIEM can address. Is that ok with you? If so, why even bother to pay that 10%?

From a recent webinar on the topic by Gartner Analyst Anton Chuvakin:

Q: For a mid-size company what percent of time would a typical SIEM analyst spend in monitoring / management of the tool – outstanding incident management?
A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.

So is your SIEM gathering logs? Or gathering dust?

If the latter, give us a call! Our SIEM Simplified service can take the sting out of the bite.

Why add more hay?

Recent terrorist attacks in France have shaken governments in Europe. The difficulty of defending against insider attacks is once again front and center. How should we respond? The UK government seems to feel that greater mass surveillance is a proper response. The Communications Data Bill  proposed by Prime Minister Cameron would compel telecom companies to keep records of all Internet, email, and cellphone activity. He also wants to ban encrypted communications services.

This approach would add even more massive data sets to those already thought to be analyzed by the NSA and GCHQ, in the hope that algorithms would be able to pinpoint the bad guys. Of course, France already has blanket surveillance, but that did not prevent the Charlie Hebdo attack.

In the SIEM universe, the equivalent would be to gather every log from every source in the hope that attacks could be predicted and prevented. In practice, accepting data like this into a SIEM solution reduces it to a quivering mess of barely functioning components. In fact, the opposite approach, “output-driven SIEM,” is favored by experienced implementers.

Ray Corrigan writing Mass Surveillance Will Not Stop Terrorism  in the New Scientist notes “Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle-in-a-haystack problem. You don’t make it easier by throwing more needleless hay on the stack.”

Threat Intelligence – Paid or Free?

Threat Intelligence (TI) is evidence-based knowledge (including context, mechanisms, indicators, implications and actionable advice) about an existing or emerging menace or hazard to assets, which can be used to inform decisions regarding the subject’s response to that menace or hazard. The challenge is that leading indicators of risk to an organization are difficult to identify when the organization’s adversaries, including their thoughts, capabilities and actions, are unknown. Therefore, “black lists” of various types, which list top attackers, spammers, poisoned URLs, malware domains and so on, have become popular. These lists are either community-maintained and free (e.g., SANS DShield), paid for by your tax dollars (e.g., InfraGard), or paid services.

EventTracker 7.6 introduced formal support to automatically import and use such lists. We are often asked the question, which list(s) to use. Is it worth it to pay for TI service? Here is our thinking on the subject:

– External v/s Internal
In most cases, we find “white lists” to be much smaller, more effective and easier to tune and maintain than any “black list.” EventTracker supports the generation of such white lists from internal sources (the Change Audit feature) or from lists of known-good IP ranges (internal ranges, your Amazon EC2 or Azure instances, your O365 instances, etc.). Using the NOTIN match option of the Behavior module gives you a small list of suspicious activities (a grey list) which can be quickly sorted to either black or white for future processing (a minimal sketch follows). As a first step, this is a quick, inexpensive and effective solution.
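
Conceptually, the NOTIN match is just a set difference. A minimal sketch (the ranges and addresses are hypothetical placeholders):

import ipaddress

# Hypothetical known-good ranges: internal networks plus your cloud instances.
WHITELIST = [ipaddress.ip_network(n) for n in ('10.0.0.0/8', '192.168.0.0/16')]

def is_whitelisted(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WHITELIST)

seen_ips = ['10.1.2.3', '203.0.113.9', '192.168.1.10']

# NOTIN the whitelist -> a small "grey list" for an analyst to sort.
grey_list = [ip for ip in seen_ips if not is_whitelisted(ip)]
print(grey_list)   # ['203.0.113.9']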

– Paid v/s Free
Free services include well-regarded sources such as shadowservers.org, abuse.ch, dshield.org, FBI InfraGard, US-CERT and EventTracker ThreatCenter (a curated list of low-volume, high-confidence sources formatted for quick import into EventTracker). Many customers in industry verticals (e.g., electric power) have lists circulated within their community.

If you are thinking of paid services, then ask yourself:

– Will the feed allow me to detect threats faster? (e.g., a feed of top attackers updated in real time v/s once every 6 or 12 hours). If faster is your motivation, are you able to respond to the detection faster? If the threat is detected at 8 PM on a Friday, when will you be able to properly respond (not just acknowledge)?

– Will the feed allow me to detect threats better? That is, would you have missed this threat if it had not been for that paid feed? At this time, many paid services for tactical TI are aggregating, cleaning and de-duplicating free sources and/or offering analysis that is also available in the public domain (e.g., the McAfee and Kaspersky analysis of Dark Seoul, the malware that created havoc at Sony Pictures, is available from US-CERT).

Bottom line, Threat Intelligence is an excellent extension to SIEM solutions. The order of implementation should be internal/whitelist first, external free lists next and finally paid services to cover any remaining gaps.

Looking for 80% coverage at 20% cost? Let us do the detection with SIEM Simplified so you can remain focused on remediation.

Why Risk Classification is Important

Traditional threat models posit that it is necessary to protect against all attacks. While this may be true for a critical national defense network, it is unlikely to be true for the typical commercial enterprise. In fact many technically possible attacks are economically infeasible and thus not attempted by typical attackers.

This can be inferred by noting that most users ignore security precautions and yet escape regular harm. Most assets escape exploitation because they are not targeted, not because they are impregnable.

As Cormac Herley points out, “a more realistic view is that we start with some variant of the traditional threat model, e.g., ‘it is necessary and sufficient to defend against all attacks,’ but then modify it in some way, e.g., ‘defense effort should be appropriate to the assets.’” However, while the first statement is absolute, and has a clear call-to-action, the qualifier is vague and imprecise. Of course we can’t defend against everything, but on what basis should we decide what to neglect?

One way around this is risk classification. The more you have to lose, the harder you must make it for the attacker. If you can make the cost of the attack exceed its monetization value, then a financially motivated attacker will move on – it’s not worth it.

Want to present a hard target to attackers at an efficient price? Consider our SIEM Simplified service. You can get 80% of the value of a SIEM for 20% of the do-it-yourself price.

How many people does it take to run a SIEM?

You must have heard light bulb jokes; for example:
How many optimists does it take to screw in a light bulb? None, they’re convinced that the power will come back on soon.

So how many people does it take to run a SIEM?
Let me count the ways.

Assuming the SIEM has been installed and configured properly (i.e., in accordance with the desired use cases), a few different skill sets are needed (these can all be the same person, but that is quite rare).

SIEM Admin: This person handles the RUN function and will maintain the product in operational state and monitor its up-time. Other duties include deploying updates from the vendor and optimizing system performance. This is usually a fraction of a full time equivalent (FTE). About 4-8 hours/week for the typical EventTracker installation.

Security Analyst: This person handles the WATCH function and uses EventTracker for security monitoring. In the case of an incident, reviews activity reports and investigates alerts. Depending on the extent of the infrastructure being monitored, this can range from a fraction of an FTE to several FTEs. Plan for coverage on weekends and after hours. Incident response may require notification of other admin personnel.

SIEM Expert: This person handles the TUNE function and refines/customizes the SIEM rules/content and creates rules to support new use cases. This function requires the highest skill level, familiarity with the network and expertise with the SIEM product.

Back to the (bad) joke:
Q. So how many people does it take to run a SIEM?
A. None! The vendor said it manages itself!

How much security investment is enough?

In the last few weeks of 2014, in the aftermath of the Sony hack, the attacks at many retailers and the incessant news on Shellshock, POODLE and many other vulnerabilities, many managers are considering 2015 budgets, and the eternal question “how much to invest in IT security” is a common one.

It sometimes seems that there is no limit, and that the more you spend, the lower your risk. But the Gordon-Loeb model says that is in fact not the case.

As pointed out by the RH Smith College at the University of Maryland:
The security of information is a fundamental concern to organizations operating in the modern digital economy. There are technical, behavioral, and organizational aspects related to this concern. There are also economic aspects of information security. One important economic aspect of information security (including cybersecurity) revolves around deriving the right amount an organization should invest in protecting information. Organizations also need to determine the most appropriate way to allocate such an investment. Both of these aspects of information security are addressed by Drs. Lawrence A. Gordon and Martin P. Loeb – See more here.

The focus of the Gordon-Loeb Model is to present an economic framework that characterizes the optimal level of investment to protect a given set of information. The model shows that the amount a firm should spend to protect information should generally be only a small fraction of the expected loss. More specifically, it shows that it is generally uneconomical to invest in information security activities (including cybersecurity-related activities) more than 37 percent of the expected loss that would occur from a security breach. For a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities that are directed at improving the security of information sets with a medium level of vulnerability.
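
As a back-of-the-envelope illustration of that 37 percent figure (the loss and vulnerability numbers below are made up): the model caps the optimal investment at roughly 1/e, about 37 percent, of the expected loss.

import math

potential_loss = 5_000_000   # hypothetical: dollars lost if the set is breached
vulnerability = 0.20         # hypothetical: probability of breach without new spending

expected_loss = vulnerability * potential_loss   # $1,000,000
investment_cap = expected_loss / math.e          # ~$367,879, i.e. ~37% of expected loss

print(f'Expected loss:   ${expected_loss:,.0f}')
print(f'Investment cap:  ${investment_cap:,.0f} (1/e of expected loss)')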

Want the most for your 37% of expected loss? Consider SIEM Simplified.

What is a Stolen Credit Card Worth?

Solution Providers for Retail
Guest blog by A.N. Ananth

Cybercrime and stealing credit cards has been a hot topic all year. From the Target breach to Sony, the classic motivation for cybercriminals is profit. So how much is a stolen credit card worth?

The reason it is important to know the answer to this question is that it is the central motivation behind the criminal. If you could make it more expensive for a criminal to steal a card than what the thief would gain by selling them, then the attackers would find an easier target. That is what being a hard target is all about.

This article suggests prices of $35-$45 for a stolen credit card depending upon whether it is a platinum or corporate card. It is also worth noting that the viable lifetime of a stolen card is at most one billing cycle. After this time, the rightful owner will most likely detect its loss or the bank fraud monitor will pick up irregularities and terminate the account.

Why is a credit card with a high spending limit (say $10K) worth only $35? It is because monetizing a stolen credit card is difficult and requires a lot of expensive effort on the part of the criminal. That is contrary to the popular press, which suggests that cybercrime yields easy billions. At the Workshop on the Economics of Information Security, Herley and Florencio showed in their presentation, “Sex, Lies and Cybercrime Surveys,” that widely circulated estimates of cybercrime losses are wrong by orders of magnitude. For example:

Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion. …Cyber-crime losses follow very concentrated distributions where a representative sample of the population does not necessarily give an accurate estimate of the mean. They are self-reported numbers which have no robustness to any embellishment or exaggeration. They are surveys of rare phenomena where the signal is overwhelmed by the noise of misinformation. In short they produce estimates that cannot be relied upon.

That’s a rational, fact-based explanation of why the most basic information security is unusually effective in most cases. Pundits have been screaming this from the rooftops for a long time. What are your thoughts?

Read more at Solution Provider for Retail guest blog.

Are honeypots illegal?

In computer terminology, a honeypot is a computer system set up to detect, deflect, or in some manner counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network, but is actually isolated and monitored, and seems to contain information or a resource of value to attackers.

Lance Spitzner covers this topic from his (admittedly) non-legal perspective.

Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have this misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.”

Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot but the exemption under Service Provider Protection is key. What this exemption means is that security technologies can collect information on people (and attackers), as long as that technology is being used to protect or secure your environment. In other words, these technologies are now exempt from privacy restrictions. For example, an IDS sensor that is used for detection and captures network activity is doing so to detect (and thus enable organizations to respond to) unauthorized activity. Such a technology is most likely not considered a violation of privacy as the technology is being used to help protect the organization, so it falls under the exemption of Service Provider Protection. Honeypots that are used to protect an organization would fall under this exemption.

Does it expose us to liability?
Liability is not a criminal issue, but a civil one. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those systems may sue, the argument being that if you had taken proper precautions to keep your systems secure, the attacker would not have been able to use your honeypot to harm their systems, so you share the fault for any damage that occurred during the attack. The issue of liability is one of risk: anytime you deploy a security technology (even one without an IP stack), that technology comes with risk. There have been numerous vulnerabilities discovered in firewalls, IDS systems, and network sniffers; honeypots are no different.

Obviously this blog entry is not legal advice and should not be construed as such.

SIEM or Log Management?

Security Information and Event Management (SIEM) is a Gartner-coined term describing solutions which monitor and help manage user and service privileges, directory services, and other system configuration changes, in addition to providing log auditing and review and incident response.

SIEM differs from Log Management, which refers to solutions which deal with large volumes of computer-generated log messages (also known as audit records, event-logs, etc.)

Log management is aimed at general system troubleshooting or incident response support. The focus is on collecting all logs for various reasons. This “input-driven” approach tries to get every possible bit of data.

This model fails with SIEM-focused solutions. Opening the floodgates and admitting any and all log data into the tool first, then considering what (if any) use there is for the data, reduces tool performance as it struggles to cope with the flood. Preferable is an “output-driven” model, where data is admitted if and only if its use is defined. Such uses can include alerts, dashboards, reports, behavior profiling, threat analysis, etc.
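
A minimal sketch of the output-driven idea (the event types and uses are hypothetical placeholders): a record is admitted only if some alert, report or dashboard is already defined for its type.

# Output-driven admission: a log type enters the SIEM only if a use is defined.
USES = {
    'windows_logon_failure': ['alert: brute force', 'report: auth summary'],
    'firewall_deny':         ['dashboard: perimeter'],
    # 'printer_status' has no defined use, so it is never admitted.
}

def admit(event_type, record):
    if event_type not in USES:
        return False   # input with no defined output is dropped at the door
    # ... forward the record to the pipelines behind the defined uses ...
    return True

print(admit('windows_logon_failure', {'user': 'bob'}))   # True
print(admit('printer_status', {'toner': 'low'}))         # False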

Buying a SIEM solution and using it as log management tool is a waste of money. Forcing a log management solution to act like a SIEM is folly.

The Security Risks of Industry Interconnections

2014 has seen a rash of high profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target, Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford among others. Hackers were able to steal hundreds of millions of credit and debit cards; from the information disclosed, this accounted for 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels, 56 million from Home Depot.

The Identity Theft Resource Center (ITRC) reports that to date in 2014, 644 security breaches have occurred, an increase of 25.3 percent over last year. By far the majority of breaches targeted payment card data along with personal information like social security numbers and email addresses, and personal health information; the ITRC estimates that over 78 million records were exposed.

Malware installed using third-party credentials was found to be among the primary causes of the breaches in post-incident analysis. Banks and financial institutions are critically dependent on their IT infrastructure and are also constantly exposed to attacks because of Sutton’s Law. Networks are empowering because they allow us to interact with other employees, customers and vendors. However, it is often the case that industry partners have a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnection is a favorite tactic of attackers. After all, a frontal brute-force attack on a well-defended large corporation’s doors is unlikely to be successful.

The Weak Link

The attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are also proud of their large customers and keen to highlight the connection. Likewise, companies often post a surprising amount of information meant for vendors on public sites for which logins are not necessary. This makes the first step, researching the target and its industry interconnections, easier for the attacker.

The next step is to compromise the subcontractor’s network and find employee data. Social networking sites like LinkedIn are a boon to attackers, used to build lists of IT admin and management staff who are likely to be privileged users. In West Virginia, state agencies were victims when malware infected the computers of users whose email addresses ended with @wv.gov. The next step is to gain access to a contractor’s privileged users’ workstations, and from there, to breach the final target. In one retailer breach, the network credentials given to a heating, air conditioning and refrigeration contractor were stolen after hackers mounted a phishing attack and successfully lodged malware in the contractor’s systems, two months before they attacked the retailer, their ultimate target.

Good Practices, Good Security

Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors and clients, or anyone who has access to their networks. A Fortune 500 company has access to resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?

Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices a part of due diligence. Before network access can be granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications and its control of those can help the enterprise develop an effective vendor management profile. Furthermore, it provides the enterprise with visibility into information that will be shared and who has access to that information.

Controlled Access: Third party access should be restricted and compartmentalized only to a segment of the network, and prevented access to other assets. Likewise, the organization can require that vendors and third parties use particular technologies for remote access, which enables the enterprise to catalog which connections are being made to the network.

Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behaviors which can presage the beginning of a breach.

Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.

What next?

Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.

SIEM is Sunlight

Security Information and Event Management (SIEM) refers to technology that provides real-time analysis of security alerts generated by network hardware and applications. SIEM works by gathering, analyzing and presenting information from a variety of sources of such information across the enterprise network including network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data.

All compliance frameworks, including PCI-DSS, HIPAA, FISMA and NERC, call for the implementation and regular usage of SIEM technology. The absence of regular usage is noted as a major factor in post-mortem analyses of IT security incidents.

Why is this the case? Because SIEM, when implemented properly, gathers security data from all the nooks and crannies of the enterprise network. When this information is collated and presented well, an analyst is able to see what is happening, what happened and what is different.

It’s akin to letting in the sunlight to all corners and hidden places. You can see better, much better.

You can’t fix what you can’t see and don’t know. Knowledge of the goings-on in the various parts of the network, in real-time when possible, is the first step towards building a meaningful security defense.

Three key advantages for SIEM-As-A-Service

Security Information and Event Management (SIEM) technology is an essential component in a modern defense-in-depth strategy for IT security. SIEM is described as such in every best-practice recommendation from industry groups and security pundits. The absence of SIEM is repeatedly noted in the Verizon Data Breach Investigations Report as a factor in the late discovery of breaches. Indeed, attackers are most often successful against soft targets where defenders do not review log and other security data. In addition, all regulatory compliance standards, such as PCI-DSS, HIPAA and FISMA, specifically require that SIEM technology be deployed and, more importantly, be used actively.

This last point (“be used actively”) is the Achilles heel for many organizations and has been noted often, as “security is something you do, not something you buy.” Organizations large and small struggle to assign staff with necessary expertise and maintain the discipline of periodic log review.

New SIEM-As-A-Service options

SIEM Simplified services are available for buyers that cannot leverage traditional on-premise, self-serve products. In such models, the vendor assumes responsibility for as much (or as little) of the heavy lifting as desired by the user, including installation, configuration, tuning, periodic review, updates, and responding to incident investigation or audit support requests.

Such offerings have three distinct advantages over the traditional self-serve, on premise model.

1) Managed Service Delivery: The vendor is responsible for the most fragile and difficult-to-get-right aspects of a SIEM deployment – installation, configuration, tuning and periodic review of SIEM data. This can also include upgrades, performance management for speedy response, and updates to security threat intelligence feeds.
2) Deployment options: In addition to the traditional on-premise model, such services usually offer cloud-based, managed hosted or hybrid solutions. Options for host-based agents and/or premise-based collectors/sensors allow great flexibility in deployment.
3) Utility pricing: In contrast with traditional perpetual models that require front-loaded capital expenditure, SIEM-As-A-Service follows the utility model with usage-based pricing and monthly payments, which is friendly to operational expenditure budgets.

SIEM is a core technology in the modern IT Enterprise. New As-A-Service deployment models can increase adoption and value of this complex monitoring technology.

Top 5 Linux log file groups in /var/log

If you manage any Linux machines, it is essential that you know where the log files are located and what is contained in them. Such files are usually in /var/log. Logging is controlled by the associated syslog daemon’s .conf file.

Some log files are distribution specific and this directory can also contain applications such as samba, apache, lighttpd, mail etc.

From a security perspective, here are 5 groups of files which are essential. Many other files are generated and will be important for system administration and troubleshooting.

1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. There are several things that are logged in /var/log/messages including mail, cron, daemon, kern, auth, etc.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms used (see the parsing sketch at the end of this list).
b) /var/log/lastlog – Contains the most recent login information for all users. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file, for example: “last -f /var/log/btmp | more”
d) /var/log/wtmp or /var/log/utmp – Contains login records. Using wtmp you can find out who is logged into the system; the who command uses this file to display that information.
e) /var/log/faillog – Contains failed user login attempts. Use the faillog command to display its contents.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information that is logged when a package is installed or removed using the dpkg command
b) /var/log/yum.log – Contains information that is logged when a package is installed using yum

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer and printing related log messages
c) /var/log/cron – Whenever cron daemon (or anacron) starts a cron job, it logs the information about the cron job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System
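
As a small example of putting group 2 to work, this sketch counts failed SSH logins per source IP. Log formats vary by distribution and sshd version, so the file path and pattern are assumptions to adapt.

import re
from collections import Counter

# Matches lines like: "Failed password for invalid user bob from 203.0.113.9 port 22 ssh2"
FAILED = re.compile(r'Failed password for .* from (\d+\.\d+\.\d+\.\d+)')

counts = Counter()
with open('/var/log/auth.log') as log:    # /var/log/secure on RHEL-family systems
    for line in log:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1

for ip, n in counts.most_common(10):
    print(f'{ip}: {n} failed logins')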

Happy Logging!

Seven Habits of Highly Fraudulent Users

This post, Seven Habits of Highly Fraudulent Users, from Izzy at SiftScience, describes patterns culled from 6 million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; “normal” samples consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-ordinary behavior, the absence of any dip during the day might speak to a script which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to make multiple accounts on their laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud. A user who has 6 accounts on her laptop is 15 times more likely to be fraudulent than the average person; users with only 1 account, however, are less likely to be fraudulent. SOC teams should look for multiple users on the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for more than one user to access a PC in a given shift.
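
Hunting for this habit reduces to a simple grouping query. A minimal sketch over hypothetical logon records (the host names, users and threshold are placeholders to tune per environment):

from collections import defaultdict

# Hypothetical logon events for one shift: (hostname, username).
logons = [
    ('NURSE-PC-1', 'jsmith'), ('NURSE-PC-1', 'jsmith'),
    ('NURSE-PC-2', 'akhan'),  ('NURSE-PC-2', 'bdole'),
    ('NURSE-PC-2', 'clee'),   ('NURSE-PC-2', 'dfox'),
]

users_per_host = defaultdict(set)
for host, user in logons:
    users_per_host[host].add(user)

THRESHOLD = 3   # distinct users per host per shift; even >1 may be unusual
for host, users in users_per_host.items():
    if len(users) > THRESHOLD:
        print(f'Review {host}: {len(users)} distinct users this shift: {sorted(users)}')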

Habit #5 Fraudsters use well known domains

The top 3 sources of fraud originate from Microsoft sites including outlook.com, Hotmail and live.com. Traffic from/to such sites is worthy of trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address. The more numbers, the more likely that it’s fraud.

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account is, the less likely (in general) it is to be involved in fraud. SOC teams should always watch for new account creation.

Good hunting.

EventTracker and Poodle

Summary:
• All systems and applications utilizing the Secure Socket Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable, depending on the configuration of Apache.

• Poodle Scan can be used to test whether your server is vulnerable.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the SSL 3.0 vulnerability.

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.
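
If you want a quick self-service check in addition to an online scanner, the sketch below attempts an SSLv3-only handshake against a placeholder host. It assumes an OpenSSL build whose s_client still accepts the -ssl3 flag (many newer builds compile SSLv3 out entirely), and it should only be pointed at servers you operate.

import subprocess

host, port = "www.example.com", 443  # placeholder target

# Attempt an SSLv3-only handshake; a completed handshake suggests the
# server will negotiate SSLv3 and is therefore exposed to POODLE.
proc = subprocess.run(
    ["openssl", "s_client", "-ssl3", "-connect", f"{host}:{port}"],
    input="", capture_output=True, text=True, timeout=15,
)
if "Protocol  : SSLv3" in proc.stdout:
    print(f"{host}:{port} accepted an SSLv3 handshake - likely vulnerable")
else:
    print(f"{host}:{port} did not complete an SSLv3 handshake")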

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Ciphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat; run this file as Administrator. This file executes the following commands:

REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f

EventTracker Search Performance

EventTracker 7.6 is a complex software application, and while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (called the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. Search performance depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
Dense – at least one result per thousand (1,000) events
Sparse – at least one result per million (1,000,000) events
Rare – at least one result per billion (1,000,000,000) events
Needle in a haystack – one event in more than a billion events

Based on the provided search criteria, EventTracker consults indexing metadata to determine which archives, if any, contain events matching the search terms. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to being I/O bound.

Dense searches are CPU bound because matches are found easily and there is sufficient raw data to decompress. For the fastest possible response on default hardware, EventTracker limits the returned results to the first 200, sorted by time with the newest on top. This setting can of course be overridden, but it is the default because it satisfies the most common use case.

As the frequency of events containing the search term falls to one in a hundred thousand (100,000) or lower, performance becomes more I/O bound: there is less and less matching data, but more and more index files have to be consulted.

I/O performance is measured as latency: the time from when a disk I/O request is created until it is completed by the underlying hardware. Windows perfmon can measure this via the "Avg. Disk sec/Transfer" counter; a rule of thumb is to keep it below 25 milliseconds for best I/O performance.

This can be realized in various ways:
– Having different drives (spindles) for the OS/program and archives
– Using faster disk (15K RPM performs better than 7200 RPM disks)
– Using a SAN

In larger installations with multiple Virtual Collection Points (VCPs), dedicating a separate disk spindle to each VCP can help.
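
To see where you stand, the latency counter can be sampled from a script. This sketch shells out to the built-in Windows typeperf utility, assumes the default English counter names, and applies the 25-millisecond rule of thumb mentioned above.

import subprocess

# Sample "Avg. Disk sec/Transfer" five times (one sample per second).
out = subprocess.run(
    ["typeperf", r"\LogicalDisk(_Total)\Avg. Disk sec/Transfer", "-sc", "5"],
    capture_output=True, text=True, check=True,
).stdout

samples = []
for line in out.splitlines():
    parts = line.strip().split(",")
    if len(parts) == 2:                 # timestamp,value rows only
        try:
            samples.append(float(parts[1].strip('"')))
        except ValueError:
            pass                        # skip the CSV header row

if samples:
    avg_ms = 1000 * sum(samples) / len(samples)
    verdict = "(OK)" if avg_ms < 25 else "(investigate)"
    print(f"average disk latency: {avg_ms:.1f} ms {verdict}")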

The Data Scientist Unicorn

An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge”  is not practical for most organizations. Instead, how about an effort to deliver tangible and data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM you ask?

Well, for the typical Small/Medium Enterprise (SME), this is a familiar dilemma. Data, data everywhere, and not a drop (of intelligence) to drink. Either the “data scientist” is not on the employee roster or has no time available. How, then, do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to mine the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean doing nothing is a valid option.

EventTracker and Shellshock

What’s your thought on Shellshock? EventTracker CEO A.N. Ananth weighs in.

Summary:

  • Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the Linux/Unix Bash shell.
  • EventTracker v6.x and v7.x are NOT vulnerable to Shellshock, as these products are based on the Microsoft Windows platform.
  • ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are vulnerable to Shellshock, as these solutions are based on CentOS v6.5.
  • If you subscribe to ETVAS and/or ETIDS, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.

Details:

Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many internet daemons. It is the program used by various Unix-based systems to execute command scripts and command lines, and it is often installed as the system’s default command-line interface.

Notes:

  • Environment variables (each running program having its own list of name/value pairs) occur in Unix-based and other operating systems that Bash supports. When one program starts another, it provides an initial list of environment variables to the new program. Apart from this, Bash also maintains an internal list of functions (named scripts) that can be executed from within Bash.
  • An attacker can exploit vulnerable versions of Bash to gain unauthorized access to a computer system. By starting Bash with a specially chosen value in its environment variable list, an attacker can cause vulnerable versions of Bash to execute arbitrary commands, which may allow remote code execution.
  • Scrutiny of the Bash source code history reveals that the vulnerability has been present since approximately version 1.13 (1992). The lack of comprehensive change logs does not allow the maintainers of the Bash source code to pinpoint exactly when the vulnerability was introduced.
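
The widely circulated test for CVE-2014-6271 exploits exactly this function-import behavior. The sketch below wraps that canonical probe; it assumes a POSIX host with Bash on the default path and should only be run on systems you own.

import subprocess

# Canonical Shellshock probe: a function definition in an environment
# variable with trailing commands. Patched Bash ignores the trailing
# "echo vulnerable"; vulnerable Bash executes it at startup.
result = subprocess.run(
    ["bash", "-c", "echo probe complete"],
    env={"PATH": "/usr/bin:/bin", "x": "() { :;}; echo vulnerable"},
    capture_output=True, text=True,
)
print("VULNERABLE" if "vulnerable" in result.stdout else "patched or not exposed")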

We don’t need no stinkin Connectors

#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin badges” which has been used often (e.g., Blazing Saddles). The equivalent of this in the log management universe is a “Connector”. We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.

These questions stem from a model used by programs such as ArcSight, which depends on Early Binding. In an earlier era of computing, Early Binding was needed because the compiler could not otherwise create an entry in the virtual method table for the procedure being compiled. It has the advantage of being efficient, an important consideration when CPU and memory are in very short supply, as they were years ago.

Just-in-time-compiled languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative performance connotations, but that hasn’t been true for at least 20 years.

Early Binding requires a fixed schema to be mandated for all possible entries and for input to be “normalized” to this schema. The benefit of the fixed plan is efficient output, since the data is already normalized. While that may make sense for compilers, whose input is in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that adopt no standardization at all. The downside of such an approach is that every new log source requires a “Connector” to normalize it to the fixed schema. Another consideration is that outputs vary greatly depending on usage – there are many possible uses for the data, the limitation being only the user’s imagination – whereas the Early Binding model is designed with fixed outputs in mind. These disadvantages limit such designs.

In contrast, EventTracker uses Late Binding: the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time, and the desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
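
A toy sketch of the idea: raw events are stored untouched at ingest, and field names are bound by the query itself at search time. The log lines, field names and regex below are purely illustrative.

import re

# "Fast In": raw logs are archived as-is, with no per-source Connector.
raw_logs = [
    "2014-10-02 12:01:33 sshd[411]: Failed password for admin from 10.1.2.3",
    "2014-10-02 12:01:35 sshd[411]: Accepted password for susan from 10.1.2.9",
]

# "Smart Out": meaning is bound to tokens at query time via the pattern.
def search(logs, pattern):
    rx = re.compile(pattern)
    return [m.groupdict() for line in logs if (m := rx.search(line))]

print(search(raw_logs,
      r"(?P<status>Failed|Accepted) password for (?P<user>\w+) from (?P<src_ip>[\d.]+)"))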

Spray & Pray or 80/20

If you spend any time at all looking at log data from any server that is accessible to the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available, and most commonly include port scans, login attempts using default username/password combinations, and the web server attacks described by OWASP.

How can this possibly be, given the sheer number of machines visible on the Internet? Don’t these guys have anything better to do?

The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize their effort, computing capacity and network bandwidth usage. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s Risk vs. Reward all the way.

The automated (first) wave of these attacks is meant to identify vulnerable machines and establish presence. Following waves may be staffed by humans, depending on the location and identity of the target and thus the potential value to be gained by a greater investment of (scarce) expertise by the attacker.

Such attacks can be deterred quite simply by using secure (non-default) configuration, system patching and basic security defenses such as firewall and anti-virus. This explains the repeated exhortations of security pundits on “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.

The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost so as to at-least identify soft high value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of cost by using basic best practices.

Guidance such as SANS Critical Controls or lessons from Verizon’s Annual Data Breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect logs which would expose the attackers actions and certainly never actually look at the logs.

Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.

Hackers: What they are looking for and the abnormal activities you should be evaluating

Most hackers are after critical data, and credential theft is a primary way to get at it. A credential theft attack is one in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: an account with access to highly sensitive data and systems that many others within the organization do not have.

It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:
• Logging on to unsecured computers with privileged accounts
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers.

There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.

These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Pass-the-hash (PtH) and other credential theft attacks are ubiquitous today because freely available tooling makes it simple and easy to extract the credentials of other privileged accounts once an attacker has gained Administrator- or SYSTEM-level access to a computer.

Even without such tools, an attacker with privileged access to a computer can just as easily install tools that capture keystrokes, screenshots, and clipboard contents. An attacker with privileged access to a computer can disable anti-malware software, install rootkits, modify protected files, or install malware that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce not only the attack surface of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.

Practical ways to analyze login and pre-authentication failures

Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:

1) Learn and know how to identify login events and their descriptions. A great resource to find event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

2) Identify and examine the event description. To analyze events efficiently and effectively, you must read the event description. Within a login-failure description, details like the failure reason, user name, logon type, workstation name and source network address are critical to your investigation and analysis. By knowing what to pay attention to in the description, you can easily eliminate the noise.

A system like EventTracker displays just these required fields, eliminating the noise and showing you the relevant failures immediately. EventTracker also provides a summary of the total number of events for each failure type and user name, automatically surfacing your systems’ critical information.
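
The summarization step can be pictured in a few lines of script. The records and field names below are illustrative, not EventTracker’s actual schema.

from collections import Counter

# Hypothetical pre-parsed logon-failure events (e.g., Windows event ID 4625).
events = [
    {"user": "jsmith", "reason": "Unknown user name or bad password", "src": "10.0.0.5"},
    {"user": "admin",  "reason": "Account locked out",                "src": "10.0.0.9"},
    {"user": "jsmith", "reason": "Unknown user name or bad password", "src": "10.0.0.5"},
]

# Total events per (failure reason, user name), highest counts first.
summary = Counter((e["reason"], e["user"]) for e in events)
for (reason, user), count in summary.most_common():
    print(f"{count:4d}  {reason:40s}  {user}")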

Using such a tool will help your enterprise run more efficiently and effectively than analyzing traditional reports of the hundreds of events that happen every day. Doing this without the help of a management and monitoring tool is nearly impossible.


Simplify SIEM with Services

To support security, compliance and operational requirements, specific and fast answers to the four W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed, single-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility and the attendant effectiveness are made possible by centralizing the collection, analysis and storage of log and other security data from sources throughout the enterprise network.

To obtain value from your SIEM solution, it must be watered and fed; this is an ongoing commitment, whether your team does it yourself or you get someone to do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.

“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organizations will continue to own their SIEM tools.” –Anton Chuvakin, Gartner Analyst

Known knowns, Unknown unknowns

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. ”
–Donald Rumsfeld, Secretary of Defense

In SIEM world, the known knowns are alerts. We configure rules to look at security data for threats/problems that we find to be interesting and bring them to the operators’ attention. This is a huge step up in the SIEM maturity scale from log ignorance. The Department of Homeland Security refers to this as “If you see something, say something.” What do you do when you see something? You “do something,” better known as alert-driven workflow. In the early stages of a SIEM implementation there is a lot of time spent refining alert definitions in order to reduce “noise.”

While this approach addresses the “known knowns”, it does nothing for the “unknown unknowns”. To identify the unknown, you must stop waiting for alerts and instead search for insights. This approach starts with a question rather than a reaction to an alert. Notice that, often enough, it’s non-IT persons asking the questions, e.g., Who changed this file? Which systems did “Susan” access on Saturday?

This approach results in interactive investigation rather than the traditional drill-down. For example (sketched in code below):
– Show me all successful logins over the weekend
– Filter these to show only those on server3
– Why did “Susan” login here? Show all “Susan” activity over the weekend…
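
A toy version of that progressive filtering, with hypothetical events and field names:

# Hypothetical login events; a real system would pull these from the archive.
logins = [
    {"user": "susan", "host": "server3", "day": "Sat", "status": "success"},
    {"user": "mike",  "host": "server1", "day": "Sun", "status": "success"},
    {"user": "susan", "host": "server3", "day": "Sun", "status": "success"},
]

weekend = [e for e in logins
           if e["day"] in ("Sat", "Sun") and e["status"] == "success"]
on_server3 = [e for e in weekend if e["host"] == "server3"]  # narrow to server3
susan_all  = [e for e in weekend if e["user"] == "susan"]    # pivot to one user

print(on_server3)
print(susan_all)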

This form of active data exploration requires a certain degree of expertise in log management tools, plus experience and knowledge of the data set, to spot a thread that looks out of place. Once you get used to the idea, it is incredible how visible these patterns become. This is essential to “running a tight ship”: being aware of out-of-the-ordinary patterns given the baseline. When staffing the EventTracker SIEM Simplified service team, we constantly look for “insight hunters” instead of mere “alert responders.” Alert responding is so 2013…

Top 5 bad assumptions about SIEM

The cliché goes “When you assume, you make an ass out of u and me.” When implementing a SIEM solution, these five assumptions have the potential to get us in trouble. They stand in the way of organizational and personal success and thus are best avoided.

5. Security by obscurity or my network is too unimportant to be attacked
Small businesses tend to be more innovative and cost-conscious. Is there such a thing as too small for hackers to care? In this blog post we outlined why this is almost never the case. As the Verizon Data Breach Report shows year in and year out, companies with 11-100 employees across 36 countries suffer the maximum number of breaches.

4. I’ve got to do it myself to get it right
Charles de Gaulle on humility: “The graveyards are full of indispensable men.” Everyone tries to demonstrate multifaceted skill, but it’s neither effective nor efficient. Corporations do it all the time, as Tom Friedman explains in “The World is Flat.”

3. Compliance = Security
This is only true if your auditor is your only threat actor. We tend to fear the known more than the unknown so it is often the case that we fear the (known) auditor more than we fear the (unknown) attacker. Among the myriad lessons from the Target breach, perhaps the most important is that “Compliance” does NOT equal Security.

2. All I have to do is plug it in; the rest happens by magic
Marketing departments of every security vendor would have you believe this of their magic appliance or software. When has this ever been true? Self-propelling lawn mower anyone?

1. It’s all about buying the most expen$ive technology
Kivas Fajo, in “The Most Toys,” the 70th episode of Star Trek: TNG, believed this. If you negotiated a 90% discount on a $200K solution and then parked it as shelfware, what did you get? A wasted $20K is what. It’s always about using what you have.

Bad assumptions = bad decisions.
Always true.

Security is not something you buy, but something you do

The three sides of the security triangle are People, Processes and Technology.

[Figure: SIEM-Triangles]

  1. People – the key issues are: who owns the process, who is involved, what are their roles, are they committed to improving it and working together, and, more importantly, are they prepared to do the work to fix the problem?
  2. Process – can be defined as a trigger event which creates a chain of actions resulting in something being prepared for a customer of that process.
  3. Technology – now that people are aligned and the process developed and clarified, technology can be applied to ensure consistency in the process application and to provide the thin guiding rails that keep the process on track, making it easier to follow the process than not.

None of this is particularly new to CIOs and CSOs, yet how often have you seen six- or seven-figure “investments” sitting on datacenter racks, or even sometimes on actual storage shelves, unused or heavily underused? Organizations throw away massive amounts of money, then complain about a “lack of security funds” and “being insecure.” For many organizations, buying security technologies is far too often an easier task than utilizing and “operationalizing” them. SIEM technology suffers from this problem, as do many other monitoring technologies.

Compliance and the “checkbox mentality” make this problem worse, as people read the mandates and pay attention only to the sections that refer to buying boxes.

Despite all this rhetoric, many managers equate information security with technology, completely ignoring the proper order. In reality, a skilled engineer with a so-so tool but a good process is more valuable than an untrained person equipped with the best of tools.

As Gartner analyst Anton Chuvakin notes, “…if you got a $200,000 security appliance for $20,000 (i.e. at a steep 90% discount), but never used it, you didn’t save $180k – you only wasted $20,000!”

Security is not something you BUY, but something you DO.

IP Address is not a person

As we deal with forensic reviews of log data, our SIEM Simplified team is called upon to piece together a trail showing the four W’s: Who, What, When and Where. Logs can be your friend and if collected, centralized and indexed can get you answers very quickly.

There is a catch though. The “Where” question is usually answered by supplying either a system name or an IP Address which at the time in question was associated with that system name.

Is that good enough for the law? i.e., will the legal system accept that you are your IP Address?

Florida District Court Judge Ursula Ungaro says no.

Judge Ungaro was presented with a case brought by Malibu Media, which accused IP address “174.61.81.171” of sharing one of its films using BitTorrent without permission. The Judge, however, was reluctant to issue a subpoena, and asked the company to explain how it could identify the actual infringer.

Responding to this order to show cause, Malibu Media gave an overview of their data gathering techniques. Among other things they explained that geo-location software was used to pinpoint the right location, and how they made sure that it was a residential address, and not a public hotspot.

Judge Ungaro welcomed the additional details, but saw nothing that actually proves that the account holder is the person who downloaded the file.

“Plaintiff has shown that the geolocation software can provide a location for an infringing IP address; however, Plaintiff has not shown how this geolocation software can establish the identity of the Defendant,” Ungaro wrote in an order last week.

“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” she adds.

As a side note, on April 26, 2012, Judge Ungaro ruled that an order issued by Florida Governor Rick Scott to randomly drug test 80,000 Florida state workers was unconstitutional. Ungaro found that Scott had not demonstrated that there was a compelling reason for the tests and that, as a result, they were an unreasonable search in violation of the Constitution.

Three trends in Enterprise Networks

There are three trends in Enterprise Networks:

1) Internet of Things Made Real. We’re all familiar with the challenge of big data – how the volume, velocity and variety of data is overwhelming. Studies confirm the conclusion many of you have reached on your own: there’s more data crossing the internet every second than existed on the internet in total 20 years ago. And now, as customers deploy more sensors and devices in every part of their business, the data explosion is just beginning. This concept, called the “Internet of Things,” is a hot topic. Many businesses are uncovering efficiencies based on how connected devices drive decisions with more precision in their organizations.

2) “Reverse BYOD.” Most of us have seen firsthand how a mobile workplace can blur the line between our personal and professional lives. Today’s road warrior isn’t tethered to a PC in a traditional office setting. They move between multiple devices throughout their workdays with the expectation that they’ll be able to access their settings, data and applications. Forrester estimates that nearly 80 percent of workers spend at least some portion of their time working out of the office, and 29 percent of the global workforce can be characterized as “anywhere, anytime” information workers. This trend was called “bring your own device,” or “BYOD.” But now we’re seeing the reverse: business-ready, secure devices are getting so good that organizations are centrally deploying mobility solutions that are equally effective at work and play.

3) Creating New Business Models with the Cloud. The conversation around cloud computing has moved from “if” to “when.” Initially driven by the need to reduce costs, many enterprises saw cloud computing as a way to move non-critical workloads such as messaging and storage to a more cost-efficient, cloud-based model. However, the larger benefit comes from customers who identify and grow new revenue models enabled by the cloud. The cloud provides a unique and sustainable way to enable business value, innovation and competitive differentiation – all of which are critical in a global marketplace that demands more mobility, flexibility, agility and better quality across the enterprise.

The 5 stages of SIEM Implementation

Are you familiar with the Kübler-Ross 5 Stages of Grief model?

SIEM implementations (and indeed most enterprise software installations) bear a striking resemblance.

  • Stage One: Denial – The frustration that new users feel learning the terminology and delivering on the “asks” from the implementation make them question the time investment.
  • Stage Two: Despair – The self-doubt that most implementation teams feel in delivering on the promises of a complex security technology with many moving parts.
  • Stage Three: Hopeful Performance – Learning, and even using, the SIEM solution, partners build confidence in their ability to become one of those recognized for competence and potential.
  • Stage Four: Soaring Execution – The exalted status of a “go-to” team member, connected at the hip through the vendor support team or service provider; earning accolades from management. The team member has delivered value to the organization and is reaping rewards for the business. Personal relationships with vendor or service reps are genuine and mutually beneficial.
  • Stage Five:  Devolution/Plateau – Complacency, through lack of vision or agility, in embracing the next big thing drags down the relationship. Other partners, hungrier for  the customer’s attention, take over the mindshare once enjoyed.

How much security is enough?

Ask a pragmatic CISO about achieving a state of complete organizational security and you’ll quickly be told that this is an unrealistic and financially imprudent goal. So then, how much security is enough?

More than merely complying with regulations or implementing “best practice,” think in terms of optimizing the outcome of the security investment. Never mind the theoretical state of absolute security; think instead of determining and managing the risk to critical business processes and assets.

Risk appetite is defined by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite influences the entity’s culture, operating style, strategies, resource allocation, and infrastructure. It is not a constant; it is influenced by, and must adapt to, changes in the environment. Risk tolerance, in turn, can be defined as the residual risk the organization is willing to accept after implementing risk-mitigation and monitoring processes and controls. One way to implement this is to define levels of residual risk, and therefore the level of security that is “enough”.

[Figure: Risk-Wall]

The basic level of security is the diligent one, the staple of every business network: the organization is able to deal with known threats. The hardened level adds the ability to be proactive (with vulnerability scanning) and compliant, and adds the ability to perform forensic analysis. At the advanced level, predictive capabilities are introduced and the organization develops the ability to deal with unknown threats.

If it all sounds a bit overwhelming, take heart; managed security services can relieve your team of the heavy lifting that is a staple of IT Security.

Bottom line – determine your risk appetite to determine how much security is enough.

Top 6 uses for SIEM

Security Information and Event Management (SIEM) is a term coined by Gartner in 2005 to describe technology used to monitor and help manage user and service privileges, directory services and other system configuration changes, as well as to provide log auditing and review and incident response.

The core capabilities of SIEM technology are the broad scope of event collection and the ability to correlate and analyze events across disparate information sources. Simply put, SIEM technology collects log and security data from computers, network devices and applications on the network to enable alerting, archiving and reporting.

Once log and security data has been received, you can:

  • Discover external and internal threats

Logs from firewalls and IDS/IPS sensors are useful for uncovering external threats; logs from e-mail servers and proxy servers can help detect phishing attacks; logs from badge and thumbprint scanners can be used to detect physical access

  • Monitor the activities of privileged users

Computer, network device and application logs are used to develop a trail of activity across the network by any user, but especially by users with high privileges

  • Monitor server and database resource access

Most enterprises have critical data repositories in files/folders/databases, and these are attractive targets for attackers. Monitoring all server and database resource access improves security.

  • Monitor, correlate and analyze user activity across multiple systems and applications

With all logs and security data in one place, an especially useful benefit is the ability to correlate user activity across the network.

  • Provide compliance reporting

Compliance reporting is often the source of funding for SIEM. When properly set up, auditor on-site time can be reduced by up to 90%; more importantly, compliance serves the spirit of the law rather than being merely a check-the-box exercise

  • Provide analytics and workflow to support incident response

Answer the Who, What, When, Where questions. Such questions are at the heart of forensic activities and critical to drawing valuable lessons.
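
As a toy illustration of the fourth use above, cross-source correlation, consider stitching one user’s activity together from several already-centralized log sources; the records below are hypothetical.

# Hypothetical events from three different sources, already centralized.
events = [
    {"src": "vpn",      "user": "susan", "time": "08:02", "detail": "connect from 203.0.113.7"},
    {"src": "windows",  "user": "susan", "time": "08:05", "detail": "logon to FILESRV01"},
    {"src": "database", "user": "susan", "time": "08:09", "detail": "SELECT on payroll"},
]

# Correlate: one timeline per user across all sources.
timeline = sorted((e for e in events if e["user"] == "susan"),
                  key=lambda e: e["time"])
for e in timeline:
    print(f'{e["time"]}  [{e["src"]:8s}]  {e["detail"]}')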

SIEM technology is routinely cited as a basic best practice by every regulatory standard, and its absence has regularly been shown to be a glaring weakness in every data breach post-mortem.

Want the benefit but not the hassle? Consider SIEM Simplified, our service where we do the disciplined blocking and tackling which forms the core of any security or compliance regime.

TMI, Too Little Analysis

The typical SIEM implementation suffers from TMI, TLA (Too Much Information, Too Little Analysis). And if any organization that’s recently been in the news knows this, it’s the National Security Agency (NSA). The Wall Street Journal carried this story quoting William Binney, who rose through the ranks at the NSA over a 30-year career, retiring in 2001. “The NSA knows so much it cannot understand what it has,” Binney said. “What they are doing is making themselves dysfunctional by taking all this data.”

Most SIEM implementations start at this premise – open the floodgates, gather everything because we are not sure what we are specifically looking for, and more importantly, the auditors don’t help and the regulations are vague and poorly worded.

Lt. Gen. Clarence E. McKnight, former head of the Signal Corps, opined that “The issue is a straightforward one of simple ability to manage data effectively in order to provide our leaders with actionable information. Too much raw data compromises that ability. That is all there is to it.”

A presidential panel recently recommended the NSA shut down its bulk collection of telephone call records of all Americans. It also recommended creation of “smart software” to sort data as it is collected, rather than accumulate vast troves of information for sorting out later. The reality is that the collection becomes an end in itself, and the sorting out never gets done.

The NSA may be a large, powerful bureaucracy, intrinsically resistant to change, but how about your organization? If you are seeking a way to get real value out of SIEM data, consider co-sourcing that problem to a team that does that for a living. SIEM Simplified was created for just that purpose. Switch from TMI, TLA (Too Much Information, Too Little Analysis) to JEI, JEA (Just Enough Information, Just Enough Analysis).

EventTracker and Heartbleed

Summary:

The usage of OpenSSL in EventTracker v7.5 is NOT vulnerable to Heartbleed.

Details:

A lot of attention has focused on CVE-2014-0160, the Heartbleed vulnerability in OpenSSL. According to http://heartbleed.com, OpenSSL 0.9.8 is NOT vulnerable.

The EventTracker Windows Agent uses OpenSSL indirectly if the following options are enabled and used:

1) Send Windows events as syslog messages AND use the FTP server option to transfer non-real-time events to an FTP server. To support this mode of operation, WinSCP.exe v4.2.9 is distributed as part of the EventTracker Windows Agent. This version of WinSCP.exe is compiled with OpenSSL 0.9.8, as documented in http://winscp.net/eng/docs/history_old (v4.2.6 onwards). Accordingly, the EventTracker Windows Agent is NOT vulnerable.

2) Configuration Assessment (SCAP). This optional feature uses ovaldi.exe v5.8 Build 2, which in turn includes OpenLDAP v2.3.27, as documented in the OVALDI-README distributed with the EventTracker install package. This version of OpenLDAP uses OpenSSL v0.9.8c, which is NOT vulnerable.

Notes:

  • The EventTracker Agent uses Microsoft Secure Channel (Schannel) for transferring syslog over SSL/TLS. This package is NOT vulnerable.
  • We recommend that all customers who may be vulnerable follow the guidance from their software distribution provider. For more information and corrective action guidance, please see the advisory from US-CERT.

Top 5 reasons IT Admins love logs

Top 5 reasons IT Admins love logs:

1) Answer the ‘W’ questions

Who, what, where and when; critical files, logins, USB inserts, downloads…see it all

2) Cut ’em off at the pass, ke-mo sah-bee

Get an early warning of the railroad jumping off track. It’s what IT Admins do.

3) Demonstrate compliance

Don’t even try to demonstrate compliance until you get a log management solution in place. Reduce on-site auditor time by 90%.

4) Get a life

Want to go home on time and enjoy the weekend? How about getting proactive instead of reactive?

5) Logs tell you what users don’t

“It wasn’t me. I didn’t do it.” Have you heard this before? Logs don’t lie.

Top 5 reasons Sys Admins hate logs

Top 5 Reasons Sys Admins hate logs:

1) Logs multiply – the volume problem

A single server easily generates 250,000 logs every day, even when operating normally. How many servers do you have? Plus you have workstations and applications, not to mention network devices.

2) Log obscurity – what does it mean?

Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800

Do what now? Go where? ‘Nuff said.

3) Real hackers don’t get logged

If your purpose of logging is, for example, to review logs to “identify and proactively address unauthorized access to cardholder data” for PCI-DSS, how do you know what you don’t know?

4) How can I tell you logged in? Let me count the ways

This is a simple question with a complex answer. It depends on where you logged in. Linux? Solaris? Cisco? Windows 2003? Windows 2008? Application? VMware? Amazon EC2?

5) Compliance forced down your throat, but no specific guidance

Have you ever been in the rainforest with no map, creepy crawlies everywhere, low on supplies and a day’s trek to the nearest settlement? That’s how IT guys feel when management drops a 100+ page compliance standard on their desk.

Big Data: Lessons from the 2012 election

The US Presidential election of 2012 confounded many pundits. The Republican candidate, Gov. Mitt Romney, put together a strong campaign, and polls leading into the final week suggested a close race. The final results were not so close, and Barack Obama handily won a second term.

Antony Young explains how the Obama campaign used big data, analytics and micro-targeting to mobilize key voter blocs, giving Obama the numbers needed to push him over the edge.

“The Obama camp in preparing for this election, established a huge Analytics group that comprised of behavioral scientists, data technologists and mathematicians. They worked tirelessly to gather and interpret data to inform every part of the campaign. They built up a voter file that included voter history, demographic profiles, but also collected numerous other data points around interests … for example, did they give to charitable organizations or which magazines did they read to help them better understand who they were and better identify the group of ‘persuadables‘ to target.”

“That data was able to be drilled down to zip codes, individual households and in many cases individuals within those households.”

“However it is how they deployed this data in activating their campaign that translated the insight they garnered into killer tactics for the Obama campaign.

“Volunteers canvassing door to door or calling constituents were able to access these profiles via an app accessed on an iPad, iPhone or Android mobile device to provide an instant transcript to help them steer their conversations. They were also able to input new data from their conversation back into the database real time.

“The profiles informed their direct and email fundraising efforts. They used issues such as Obama’s support for gay marriage or Romney’s missteps in his portrayal of women to directly target more liberal and professional women in their database, with messages that “Obama is for women,” using that opportunity to solicit contributions to his campaign.

“Marketers need to take heed of how the Obama campaign transformed their marketing approach centered around data. They demonstrated incredible discipline to capture data across multiple sources and then to inform every element of the marketing – direct to consumer, on the ground efforts, unpaid and paid media. Their ability to dissect potential prospects into narrow segments or even at an individual level and develop specific relevant messaging created highly persuasive communications. And finally their approach to tap their committed fans was hugely powerful. The Obama campaign provides a compelling case for companies to build their marketing expertise around big data and micro-targeting. How ready is your organization to do the same?”

Old dogs, new tricks

Doris Lessing passed away at the end of last year. The freewheeling Nobel Prize-winning writer on racism, colonialism, feminism and communism, who died November 17 at the age of 94, was prolific for most of her life. But five years ago, she said the writing had dried up. “Don’t imagine you’ll have it forever,” she said, according to one obituary. “Use it while you’ve got it because it’ll go; it’s sliding away like water down a plug hole.”

In the very fast-changing world of IT, it is common to feel like an old fogey. Everything changes at bewildering speed, from hardware specs to programming languages to user interfaces. We hear of wunderkinds whose innovations transform our very culture; think Mozart or Zuckerberg, to name two.

Tara Bahrampour examined the idea, and quotes author Mark Walton, “What’s really interesting from the neuroscience point of view is that we are hard-wired for creativity for as long as we stay at it, as long as nothing bad happens to our brain.”

The field also matters.

Howard Gardner, professor of cognition and education at the Harvard Graduate School of Education says, “Large creative breakthroughs are more likely to occur with younger scientists and mathematicians, and with lyric poets, than with individuals who create longer forms.”

In fields like law, psychoanalysis and perhaps history and philosophy, on the other hand, “you need a much longer lead time, and so your best work is likely to occur in the latter years. You should start when you are young, but there is no reason whatsoever to assume that you will stop being creative just because you have grey hair,” Gardner said.

Old dogs take heart; you can learn new tricks as long as you stay open to new ideas.

Fail How To: Top 3 SIEM implementation mistakes

Over the years, we have had a chance to witness a large number of SIEM implementations, with results ranging from the superb to the colossal failure. What do the failures have in common? This blog by Keith Strier nails it:

1) Design Democracy: Find all internal stakeholders and grant all of them veto power. The result is inevitably a mediocre mess. The collective wisdom of the masses is not the best thing here; a super-empowered individual is usually found at the center of the successful implementation. If multiple stakeholders are involved, this person builds consensus, but nobody else has veto power.
2) Ignore the little things: A great implementation is a set of micro-experiences that add up to the whole. Think of the Apple iPhone: every detail, from the shape, size and appearance to every icon, gesture and feature, converges to enhance the user experience. The path to failure is to focus only on the big picture, ignore the little things from authentication to navigation, and just launch to meet the deadline.

3) Avoid Passion: View the implementation as non-strategic overhead; implement and deploy without passion. Result? At best, requirements are fulfilled but users are unlikely to be empowered. Milestones may be met, but business sponsors still complain. Prioritizing deadlines, linking IT staff bonuses to delivery metrics, and squashing creativity is a sure way to launch technology failures that crush morale.

Digital detox: Learning from Luke Skywalker

For any working professional in 2013, multiple screens, devices and apps are integral instruments for success. The multitasking can be overwhelming and dependence on gadgets and Internet connectivity can become a full-blown addiction.

There are digital detox facilities for those whose careers and relationships have been ruined by extreme gadget use. Shambhalah Ranch in Northern California has a three-day retreat for people who feel addicted to their gadgets. For 72 hours, the participants eat vegan food, practice yoga, swim in a nearby creek, take long walks in the woods, and keep a journal about being offline. Participants have one thing in common: they’re driven to distraction by the Internet.

Is this you? Checking e-mail in the bathroom and sleeping with your cell phone by your bed are now considered normal. According to the Pew Research Center, in 2007 only 58 percent of people used their phones to text; last year it was 80 percent. More than half of all cell phone users have smartphones, giving them Internet access all the time. As a result, the number of hours Americans spend collectively online has almost doubled since 2010, according to ComScore, a digital analytics company.

Teens and twentysomethings are the most wired. In 2011, Diana Rehling and Wendy Bjorklund, communications professors at St. Cloud State University in Minnesota, surveyed their undergraduates and found that the average college student checks Facebook 20 times an hour.

So what can Luke Skywalker teach you? Shane O’Neill says it well:

“The climactic Death Star battle scene is the centerpiece of the movie’s nature vs. technology motif, a reminder to today’s viewers about the perils of relying too much on gadgets and not enough on human intuition. You’ll recall that Luke and his team of X-Wing fighters are attacking Darth Vader’s planet-size command center. Pilots are relying on a navigation and targeting system displayed through a small screen (using gloriously outdated computer graphics) to try to drop torpedoes into the belly of the Death Star. No pilot has succeeded, and a few have been blown to bits.

“Luke, an apprentice still learning the ways of The Force from the wise — but now dead — Obi-Wan Kenobi, decides to put The Force to work in the heat of battle. He pushes the navigation screen away from his face, shuts off his “targeting computer” and lets The Force guide his mind and his jet’s torpedo to the precise target.

“Luke put down his gadget, blocked out the noise and found a quiet place of Zen-like focus. George Lucas was making an anti-technology statement 36 years ago that resonates today. The overarching message of Star Wars is to use technology for good. Use it to conquer evil, but don’t let it override your own human Force. Don’t let technology replace you.

“Take a lesson from a great Jedi warrior. Push the screen away from time to time and give your mind and personality a chance to shine. When it’s time to use the screen again, use it for good.”

Looking back: Operation Buckshot Yankee & agent.btz

It was the fall of 2008. A variant of a three-year-old, relatively benign worm began infecting U.S. military networks via thumb drives.

Deputy Defense Secretary William Lynn wrote nearly two years later that patient zero was traced to an infected flash drive inserted into a U.S. military laptop at a base in the Middle East. The flash drive’s malicious computer code uploaded itself onto a network run by the U.S. Central Command. That code spread undetected on both classified and unclassified systems, establishing what amounted to a digital beachhead from which data could be transferred to servers under foreign control. It was a network administrator’s worst fear: a rogue program operating silently, poised to deliver operational plans into the hands of an unknown adversary.

The worm, dubbed agent.btz, caused the military’s network administrators major headaches. It took the Pentagon nearly 14 months of stop and go effort to clean out the worm — a process the military called Operation Buckshot Yankee. It was so hard to do that it led to a major reorganization of the information defenses of the armed forces, ultimately causing the new Cyber Command to come into being.

So what was agent.btz? It was a variant of the SillyFDC worm, which copies itself from removable drive to computer and back to drive again. Depending on how the worm is configured, it has the ability to scan computers for data, open backdoors, and send data through those backdoors to a remote command-and-control server.

To keep it from spreading across a network, the Pentagon banned thumb drives and the like from November 2008 to February 2010. You could also disable Windows’ “autorun” feature, which instantly starts any program loaded on a drive.
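
For reference, AutoRun can be disabled machine-wide with the well-known NoDriveTypeAutoRun registry value (0xFF disables it for all drive types). The sketch below sets it via Python’s standard winreg module; it must run with administrator rights, and as with any registry change, test before deploying broadly.

import winreg

# Disable AutoRun for all drive types (0xFF) machine-wide.
# Requires administrator rights; takes effect at the next logon.
key = winreg.CreateKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
)
winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
winreg.CloseKey(key)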

As Noah Shachtman noted, the havoc caused by agent.btz has little to do with the worm’s complexity or maliciousness — and everything to do with the military’s inability to cope with even a minor threat. “Exactly how much information was grabbed, whether it got out, and who got it — that was all unclear,” says an officer who participated in the operation. “The scary part was how fast it spread, and how hard it was to respond.”

Gen. Kevin Chilton of U.S. Strategic Command said, “I asked simple questions like how many computers do we have on the network in various flavors, what’s their configuration, and I couldn’t get an answer in over a month.” As a result, network defense has become a top-tier issue in the armed forces. “A year ago, cyberspace was not commanders’ business. Cyberspace was the sys-admin guy’s business or someone in your outer office when there’s a problem with machines business,” Chilton noted. “Today, we’ve seen the results of this command level focus, senior level focus.”

What can you learn from Operation Buckshot Yankee?
a) That denial is not a river in Egypt
b) There are well known ways to minimize (but not eliminate) threats
c) It requires command level, senior level focus; this is not a sys-admin business

Defense in Depth – The New York Times Case

In January 2013, the New York Times accused hackers from China, with connections to its military, of successfully penetrating its network and gaining access to the logins of 53 employees, including Shanghai bureau chief David Barboza, who last October published an embarrassing article on the vast secret wealth of China’s prime minister, Wen Jiabao.

This came to light when AT&T noticed unusual activity that it was unable to trace or deflect. A security firm was brought in to conduct a forensic investigation, which uncovered the true extent of what had been going on.

Over four months starting in September 2012, the attackers had managed to install 45 pieces of targeted malware designed to probe for data such as emails after stealing credentials; only one of these was detected by the installed antivirus software from Symantec. Although the staff passwords were hashed, that doesn’t appear to have stopped the hackers in this instance; perhaps, the newspaper suggests, because they were able to deploy rainbow tables to beat the relatively short passwords.

Symantec offered this statement: “Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats.”

Still think that basic AntiVirus and firewall is enough? Take it directly from Symantec – you need to monitor and analyze data from inside the enterprise for evidence of compromise. This is Security Information and Event Management (SIEM).

Cyber Pearl Harbor a myth?

Erik Gartzke, writing in International Security, argues that attackers don’t have much motive to stage a Pearl Harbor-type attack in cyberspace if they aren’t involved in an actual shooting war.

Here is his argument:

It isn’t going to accomplish any very useful goal. Attackers cannot easily use the threat of a cyber attack to blackmail the U.S. (or other states) into doing something they don’t want to do. If they provide enough information to make the threat credible, they instantly make the threat far more difficult to carry out. For example, if an attacker threatens to take down the New York Stock Exchange through a cyber attack, and provides enough information to show that she can indeed carry out this attack, she is also providing enough information for the NYSE and the U.S. Government to stop the attack.

Cyber attacks usually involve hidden vulnerabilities — if you reveal the vulnerability you are attacking, you probably make it possible for your target to patch the vulnerability. Nor does it make sense to carry out a cyber attack on its own, since the damage done by nearly any plausible cyber attack is likely to be temporary.

Points to ponder:

  • Most attacks are occurring against well known vulnerabilities; systems that are unpatched
  • Most attacks are undetected and systems are “pwned” for weeks/months
  • The disruption caused when attacks are discovered is significant in both human and cost terms
  • There was little logic in the 9/11 attacks other than to cause havoc and fear (i.e., terrorists are not famous for logical well thought out reasoning)

Coming to commercial systems, attacks are usually for monetary gain. Attacks are also often performed simply because “they can” [remember George Mallory, famously quoted as having replied to the question “Why do you want to climb Mount Everest?” with the retort “Because it’s there”].

Did Big Data destroy the U.S. healthcare system?

The problem-plagued rollout of healthcare.gov has dominated the news in the USA. Proponents of the Affordable Care Act (ACA) urge that teething problems are inevitable and that’s all these are. In fact, President Obama has been at pains to say the ACA is more than just a website. Opponents of the law see the website failures as one more indicator that it is unworkable.

The premise of the ACA is that young healthy persons will sign up in large numbers and help defray the costs expected from older persons, and thus provide a good deal for all. It has also been argued that the ACA is a good deal for young healthies. The debate between proponents and opponents of the ACA hinges on this point. See, for example, the debate (shouting match?) between Dr. Zeke Emanuel and James Capretta on Fox News Sunday. In this segment, Capretta says the free market will solve the problem (but it hasn’t so far, has it?) and so Emanuel says it must be mandated.

So why then has the free market not solved the problem? Robert X. Cringely argues that big data is the culprit. Here’s his argument:

– In the years before Big Data was available, actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale, so health insurance companies wanted as many policyholders as possible, making a profit on most of them.

– Enter Big Data. The cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis.

– Result? The health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t much need the healthcare system. The goal went from making a profit on most enrollees to making a profit on all enrollees.

The VAR’s tale

The Canterbury Tales is a collection of stories written by Geoffrey Chaucer at the end of the 14th century. The tales were part of a storytelling contest between pilgrims going to Canterbury Cathedral, with the prize being a free meal on their return. While the original is in Middle English, here is the VAR’s tale in modern-day English.

In the beginning, the Value Added Reseller (VAR) represented products to the channel and it was good. Software publishers of note always preferred the indirect sales model and took great pains to cultivate the VAR or channel, and it was good. The VAR maintained the relationship with the end user and understood the nuances of their needs. The VAR gained the trust of the end user by first understanding, then recommending and finally supporting their needs with quality, unbiased recommendations, and it was good. End users in turn, trusted their VAR to look out for their needs and present and recommend the most suitable products.

Then came the cloud, which appeared white and fluffy and unthreatening to the end user. But dark and foreboding to the VAR, the cloud was. It threatened to disrupt the established business model. It allowed the software publisher to sell product directly to the end user and bypass the VAR. And it was bad for the VAR. Google started it with its office apps. Microsoft countered with Office 365. And it was bad for the VAR. And then McAfee did the same for their suite of security products. Now even the security-focused VARs took note. Woe is me, said the VAR. Now software publishers are selling directly to the end user and I am bypassed. Soon the day will come when cats and dogs are friends. What are we to do?

Enter Quentin Reynolds, who famously said, “If you can’t lick ‘em, join ‘em.” Can one roll back the cloud? No more than King Canute could stop the tide rolling in. What does this mean, then? It means a VAR must transition from being a reseller of product to a reseller of services or, better yet, a provider of services. In this way may the VAR regain relevance with the end user and cement the trust built up over the years between them.

Thus the VAR’s tale may have a happy ending, wherein the end user has a more secure network, the auditor, being satisfied, returns to his keep, and the VAR is relevant again.

Which service would suit, you ask? Well, consider one that is not a commodity, one that requires expertise, one that is valued by the end user, one that is not set-and-forget. IT Security leaps to mind; it satisfies these criteria. Even more so within this field are SIEM, Log Management, Vulnerability scanning and Intrusion Detection, given their relevance to both security and regulatory compliance.

The air gap myth

As we work with various networks to implement IT Security in general and SIEM, Log Management and Vulnerability scanning in particular, we sometimes meet with teams that inform us that they have air gapped networks. An air gap is a network security measure that consists of ensuring physical isolation from unsecured networks (like the Internet, for example). The premise here is that harmful packets cannot “leap” across the air gap. This type of measure is most often seen in utility and defense installations. Are air gaps really effective in improving security?

A study by the Idaho National Laboratory shows that in the utility industry, while an air gap may provide some defense, there are many more points of vulnerability in older networks. Often, critical industrial equipment is of older vintage, from when insecure coding practices were the norm. Over the years, such systems have had web front ends grafted onto them to ease configuration and management. This makes them very vulnerable. In addition, these older systems are often missing key controls such as encryption. When automation is added to such systems (to improve reliability or reduce operations cost), the potential for damage is quite high indeed.

In a recent interview, Eugene Kaspersky stated that the ultimate air gap had been compromised. The International Space Station, he said, suffered from virus epidemics. Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS). Kaspersky doesn’t give any details about when the infection he was told about took place, but it appears as if it was prior to May of this year when the United Space Alliance, the group which oversees the operation of the ISS, moved all systems entirely to Linux to make them more “stable and reliable.”

Prior to this move, the “dozens of laptops” used on board the space station had been running Windows XP. According to Kaspersky, the infections occurred on laptops used by scientists who used Windows as their main platform and carried USB sticks into space when visiting the ISS. A 2008 report on ExtremeTech said that a Windows XP laptop infected with the W32.Gammima.AG worm was brought onto the ISS by a Russian astronaut; the worm quickly spread to the other laptops on the station – all of which were running Windows XP.

If the Stuxnet infection from June 2010 wasn’t enough evidence, this should lay the air gap myth to rest.

End(er’s) game: Compliance or Security?

Who do you fear more – The Auditor or The Attacker? The former plays by well-established rules, gives plenty of prior notice before arriving on your doorstep and is usually prepared to accept a Plan of Action with Milestones (POAM) in case of deficiencies. The latter gives no notice, never plays fair and will gleefully exploit any deficiency. Notwithstanding this, most small enterprises actually fear the auditor more and will jump through hoops to minimize their interaction. It’s ironic, because the auditor is really there to help; the attacker, obviously, is not.

While it is true that 100% compliance is not achievable (or for that matter desirable), it is also true that even the most basic of steps towards compliance go a long way to deterring attackers. The comparison to the merits of physical exercise is an easy one. How often have you heard it said that even mild physical exercise (taking the steps instead of elevator) gives you benefit? You don’t have to be a gym rat, pumping iron for hours every day.

And so, to answer the question: What comes first, Compliance or Security? It’s Security really, because Compliance is a set of guidelines to help you get there with the help of an Auditor. Not convinced? The news is rife with accounts of exploits which in many cases are at organizations that have been certified compliant. Obviously there is no such thing as being completely secure, but will you allow the perfect to be the enemy of the good?

The National Institute of Standards and Technology (NIST) released Rev 4 of its seminal publication 800-53, which applies to US Government IT systems. As budgets (time, money, people) are always limited, it all begins with risk classification: applying scarce resources in order of value. There are other guidelines, such as the SANS Institute Consensus Audit Guidelines, to help you make the most of limited resources.

You may not have trained like Ender Wiggin from a very young age through increasingly difficult games, but it doesn’t take a tactical genius to recognize “Buggers” as attackers and Auditors as the frenemies.

Looking for assistance with your IT Security needs? Click here for our newest publication and learn how you can simplify with services.

Three common SMB mistakes

Small and medium business (SMB) owners/managers understand that IT plays a vital role within their companies. However, many SMBs are still making simple mistakes with the management of their IT systems, which are costing them money.

1) Open Source Solutions – In a bid to reduce overall costs, many SMBs look to open source applications and platforms. While such solutions appear attractive because of low or no license costs, the effort required for installation, configuration, operation, maintenance and ongoing upgrades should be factored in. The total cost of ownership of such systems is generally ignored or poorly understood. In many cases, they may require a more sophisticated (and therefore more expensive and harder to replace) user to drive them.

2) Migrating to the Cloud – Cloud-based services promise great savings, which is always music to an SMB manager/owner’s ears, and the entire SaaS market has exploded in recent years. However, the cost savings are not always obvious or tangible. The Amazon EC2 service is often touted as an example of cost savings, but it very much depends on how you use the resource. See this blog for an example. More appropriate might be a hybrid system that keeps some of the data and services in-house while moving others to the cloud.

3) The Knowledge Gap – Simply buying technology, be it servers or software, does not provide any tangible benefit. You have to integrate it into the day-to-day business operation. This takes expertise with both the technology and your particular business.

In the SIEM space, these buying objections have often stymied SMBs from adopting the technology, despite its benefits and repeated advice from experts. To overcome these, we offer a managed SIEM offering called SIEM Simplified.

The Holy Grail of SIEM

Merriam-Webster defines “holy grail” as “a goal that is sought after for its great significance”. Mike Rothman of Securosis has described the “holy grail” for the security practitioner as twofold:

  1. A single alert specifying exactly what is broken, with relevant details and the ability to learn the extent of the damage
  2. Make the auditor go away, as quickly and painlessly as possible

How do you achieve the first goal? Here are the steps (a rough sketch in code follows the list):

  • Collect log information from every asset on the enterprise network
  • Filter it through vendor-provided intelligence on its significance
  • Filter it through local configuration to determine its significance
  • Gather and package related, relevant information – the so-called 5 Ws (Who, What, Where, When and Why)
  • Alert the appropriate person in the notification method they prefer (email, dashboard, ticket etc.)
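
As a rough illustration (and that is all it is), here is a sketch of those five steps in Python; the field names, severity scores and threshold are hypothetical, not EventTracker’s actual schema:

  # A minimal sketch of the five-step pipeline (hypothetical fields and scores).
  def process(event, vendor_rules, local_rules, notify):
      severity = vendor_rules.get(event["event_id"], 0)   # vendor intelligence
      severity += local_rules.get(event["source"], 0)     # local configuration
      if severity < 8:                                    # not alert-worthy
          return
      # Package the 5 Ws and notify via the preferred channel.
      notify({w: event.get(w) for w in ("who", "what", "where", "when", "why")})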

This is a fundamental goal for SIEM systems like EventTracker, and over the ten-plus years working on this problem, we’ve built a huge collection of intelligence to draw on to help configure and tune the system to your needs. Even so, there is an undefinable element of luck in having it all work out for you just when you need it. Murphy’s Law says that luck is not on your side. So now what?

One answer we have found is Anomalous Behavior detection: learn “normal” behavior during a baseline period and draw the attention of a knowledgeable user to out-of-ordinary or new items. When you join these two systems, you get coverage for both known-knowns as well as unknown-unknowns.
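
A bare-bones version of that idea, assuming events arrive as dictionaries with user, process and host fields (the actual behavior module learns many more attributes than this):

  # Learn "normal" during a baseline period, then flag never-before-seen items.
  def learn_baseline(events):
      return {(e["user"], e["process"], e["host"]) for e in events}

  def flag_new(events, baseline):
      for e in events:
          if (e["user"], e["process"], e["host"]) not in baseline:
              yield e   # out-of-ordinary: surface it for a knowledgeable user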

The second goal involves more discipline and less black magic. If you are familiar with the audit process, then you know that it’s all about preparation and presentation. The Duke of Wellington famously remarked that the “Battle of Waterloo was won on the playing fields of Eton”, another testament to winning through preparation. Here again, to enable diligence, EventTracker Enterprise offers several features including report/alert annotation, a summary report on reports, incident acknowledgement and an electronic logbook to record mitigation and incident handling actions.

Of course, all this requires staff with the time and training to use these features. Lack the time and resources, you say? We’ve got you covered with SIEM Simplified, a co-sourcing option where we do the heavy lifting, leaving you to sip from the Cup of Jamshid.

Have neither the time, nor the tools, nor budget? Then the story might unfold like this.

SIEM vs Search Engine

The pervasiveness of Google in the tech world has placed the search function in a central locus of our daily routine. Indeed many of the most popular apps we use every day are specialized forms of search. For example:

  • E-Mail is a search for incoming messages: search by sender, by topic, by key phrase, by thread
  • Voice calling or texting is preceded by a search for a contact
  • Yelp is really searching for a restaurant
  • The browser address bar is in reality a search box

And the list goes on.

In the SIEM space, the rise of Splunk, especially when coupled with the promise of “big data”, has led to speculation that SIEM is going to be eclipsed by the search function. Let’s examine this a little more closely, especially from the viewpoint of an expert-constrained Small and Medium Enterprise (SME), where Data Scientists are not idling aplenty.

Big data and accompanying technologies are, at present, developer-level elements that require assembly with application code, or intricate setup and configuration, before they can be used by typical system administrators, much less mid-level managers. To leverage the big-data value proposition of such platforms, the core skill required of such developers is thinking about distributed computing, where the processing is performed in batches across multiple nodes. This is not a common skill set in the SME.
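
To give a flavor of what that means, here is the classic batch pattern in miniature: a single-machine stand-in for the multi-node case, assuming log lines whose first token is an event ID:

  from collections import Counter
  from multiprocessing import Pool

  def count_ids(lines):                    # "map": runs on one node/batch
      return Counter(line.split()[0] for line in lines if line.strip())

  def batch_count(batches):                # "reduce": merge partial counts
      with Pool() as pool:                 # call under if __name__ == "__main__"
          return sum(pool.map(count_ids, batches), Counter())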

Assuming the assembly problem is somehow overcome, can you rejoice in your big data set and reduce the problems that SIEM solves to search queries? Well, maybe, if you are a Data Scientist and know how to use advanced analytics. However, SIEM functions include things like detecting cyber-attacks, insider threats and operational conditions such as app errors – all pesky real-time requirements. These are not well served by a search on yesterday’s archived and indexed data. So now the Data Scientist must also have infosec skills and understand the IT infrastructure.

You can probably appreciate that decent infosec skills such as network security, host security, data protection, security event interpretation, and attack vectors do not abound in the SME. There is no reason to think that the shortage of cyber-security professionals and the ultra-shortage of data scientists and experienced Big Data programmers will disappear anytime soon.

So how can an SME leverage the promise of big data now? Well, frankly, EventTracker was grappling with the challenges of massive, diverse, fast data for many years before it became popularly known as Big Data. In testing on COTS hardware, our recent 7.4 release showed up to a 450% increase in receiver/archiver performance over the previous 7.3 release on the same hardware. This is not an accident. We have been thinking and working on this problem continuously for the last 10 years. It’s what we do. This version also has advanced data-science methods built right into the EventVault Explorer, our data-mart engine, so that security analysts don’t need to be data scientists. Our behavior module incorporates data visualization capabilities to help users recognize hidden patterns and relations in the security data – the so-called “Jeopardy” problem, wherein the answers are present in the data set and the challenge is in asking the right questions.

Last but not least, we recognize that notwithstanding all the chest-thumping above, many (most?) SMEs are so resource constrained that a disciplined SOC-style approach to log review and incident handling is out of reach. Thus we offer SIEM Simplified, a service where we do the heavy lifting, leaving the remediation to you.

Search engines are no doubt a remarkably useful innovation that has transformed our approach to many problems. However, SIEM satisfies specific needs in today’s threat, compliance and operations environment that cannot be satisfied effectively or efficiently with a raw big-data platform.

Resistance is futile

The Borg are a fictional alien race and a terrifying antagonist in the Star Trek franchise. The phrase “Resistance is futile” is best delivered by Patrick Stewart in the episode The Best of Both Worlds.

When IBM demonstrated the power of Watson in 2011 by defeating two of the best humans to ever play Jeopardy, Ken Jennings, who had won 74 games in a row, admitted in defeat, “I, for one, welcome our new computer overlords.”

As the Edward Snowden revelations about the collection of metadata for phone calls became known, the first thinking was that it would be technically impossible to store data for every single phone call – the cost would be prohibitive. Then Brewster Kahle, one of the engineers behind the Internet Archive, made a spreadsheet to calculate the cost to record and store one year’s worth of all U.S. calls. He works the cost out to about $30M, which is non-trivial but by no means out of reach for a large US Gov’t agency.
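
A back-of-the-envelope version of that calculation, with purely illustrative figures (they are assumptions, not Kahle’s actual inputs, and his spreadsheet also counts datacenter overhead, which dominates the total):

  # Rough storage estimate for a year of recorded U.S. phone calls.
  call_minutes_per_day = 2e9          # assumed daily call volume
  kb_per_minute = 60                  # ~8 kbps compressed voice
  bytes_per_year = call_minutes_per_day * kb_per_minute * 1024 * 365
  print(f"~{bytes_per_year / 1024**5:.0f} PB/year")   # on the order of 40 PB

Tens of petabytes of raw disk was only a few million dollars even at 2013 prices; redundancy, servers and facilities are what push the total toward the $30M mark.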

The next thought was – ok so maybe it’s technically feasible to record every phone call, but how could anyone possibly listen to every call? Well obviously this is not possible, but can search terms be applied to locate “interesting” calls? Again, we didn’t think so, until another N.S.A. document, cited by The Guardian, showed a “global heat map” that appeared to represent how much data the N.S.A. sweeps up around the world. If it were possible to efficiently mine metadata, data about who is calling or e-mailing, then the pressure for wiretapping and eavesdropping on communications becomes secondary.

This study in Nature shows that just four data points about the location and time of a mobile phone call make it possible to identify the caller 95 percent of the time.

IBM estimates that thanks to smartphones, tablets, social media sites, e-mail and other forms of digital communications, the world creates 2.5 quintillion bytes of new data daily. Searching through this archive of information is humanly impossible, but it is precisely what a Watson-like artificial intelligence is designed to do. Isn’t that exactly what was demonstrated in 2011 to win Jeopardy?

The Dark Side of Big Data

A study published in Nature looked at the phone records of some 1.5 million mobile phone users in an undisclosed small European country and found that it took only four different data points on the time and location of a call to identify 95% of the people. In the dataset, the location of an individual was specified hourly, with a spatial resolution given by the carrier’s antennas.

Mobility data is among the most sensitive data currently being collected. It contains the approximate whereabouts of individuals and can be used to reconstruct individuals’ movements across space and time. A simply anonymized dataset does not contain name, home address, phone number or other obvious identifier. For example, the Netflix Challenge provided a training dataset of 100,480,507 movie ratings each of the form <user, movie, date-of-grade, grade> where the user was an integer ID.

Yet, if an individual’s patterns are unique enough, outside information can be used to link the data back to that individual. For instance, in one study, a medical database was successfully combined with a voter list to extract the health record of the governor of Massachusetts. In the case of the Netflix data set, despite the attempt to protect customer privacy, it was shown possible to identify individual users by matching the data set with film ratings on the Internet Movie Database. Even coarse data sets provide little anonymity.
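
A toy experiment conveys why so few points suffice. Using synthetic mobility data (random hour-of-week/antenna points; the Nature study, of course, used real carrier records), four known points almost always match exactly one user:

  import random

  random.seed(1)
  # 2,000 synthetic users, each with 50 (hour-of-week, antenna) points.
  users = {u: {(random.randrange(168), random.randrange(100))
               for _ in range(50)} for u in range(2000)}

  unique = 0
  for u, points in users.items():
      known = set(random.sample(sorted(points), 4))   # 4 observed points
      if [v for v, p in users.items() if known <= p] == [u]:
          unique += 1
  print(f"{100 * unique / len(users):.0f}% re-identified from 4 points")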

The issue is making sure the debate over big data and privacy keeps up with the science. Yves-Alexandre de Montjoye, one of the authors of the Nature article, says that the ability to cross-link data, such as matching the identity of someone reading a news article to posts that person makes on Twitter, fundamentally changes the idea of privacy and anonymity.

Where do you, and by extension your political representative, stand on this 21st Century issue?

The Intelligence Industrial Complex

If you are old enough to remember the 1988 U.S. presidential election, then the name Gary Hart may sound familiar. He was the clear frontrunner as his second Senate term from Colorado ended. He was caught in an extra-marital affair and dropped out of the race. He has since earned a doctorate in politics from Oxford and accepted an endowed professorship at the University of Colorado at Denver.

In this analysis, he quotes President Dwight Eisenhower, “…we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist.”

His point is that the US now has an intelligence-industrial complex composed of close to a dozen and a half federal intelligence agencies and services, many of which are duplicative, plus, in the last decade or two, a growing private-sector intelligence world. It is dangerous to have a technology-empowered government capable of amassing private data; it is even more dangerous to privatize this Big Brother world.

As has been extensively reported recently, the Foreign Intelligence Surveillance Act (FISA) courts are required to issue warrants, as the Fourth Amendment (against unreasonable search and seizure) requires, upon a showing that national security is endangered. This was instituted in the early 1970s following findings of serious unconstitutional abuse of power. He asks, “Is the Surveillance State — the intelligence-industrial complex — out of the control of the elected officials responsible for holding it accountable to American citizens protected by the U.S. Constitution?”

We should not have to rely on whistle-blowers to protect our rights.

In a recent interview with Charlie Rose of PBS, President Obama said, “My concern has always been not that we shouldn’t do intelligence gathering to prevent terrorism, but rather: Are we setting up a system of checks and balances?” Despite this, he avoided addressing the fact that no request to a FISA court has ever been rejected, and that companies that provide data on their customers are under a gag order that prevents them from even disclosing the requests.

Is the Intelligence-Industrial complex calling the shots? Does the President know a lot more than he can reveal? Clearly he is unwilling to even consider changing his predecessor’s policies.

It would seem that Senator Hart has a valid point. If so, it’s a lot more consequential than Monkey Business.

Introducing EventTracker Log Manager

The IT team of a Small Business has it the worst. Just 1-2 administrators to keep the entire operation running, which includes servers, workstations, patching, anti-virus, firewalls, applications, upgrades, password resets…the list goes on. It would be great to have 25 hours in a day and 4 hands per admin just to keep up. Adding security or compliance demands to the list just makes it that much harder.

The path to relief? Automation, in one word. Something that you can “fit-and-forget”.

You need a solution that gathers all security information from around the network – platforms, network devices, apps etc. – and that knows what to do with it. One that retains it all efficiently and securely for later analysis if needed, displays it in a dashboard for you to examine at your convenience, alerts you via e-mail/SMS etc. if absolutely necessary, indexes it all for fast search, and finds new or out-of-ordinary patterns by itself.

And you need it all in a software-only package that is quickly installed on a workstation or server. That’s what I’m talking about. That’s EventTracker Log Manager.

Designed for the 1-2 sys admin team.
Designed to be easy to use, quick to install and deploy.
Based on the same award-winning technology that SC Magazine awarded a perfect 5-star rating to in 2013.

How do you spell relief? E-v-e-n-t-T-r-a-c-k-e-r  L-o-g  M-a-n-a-g-e-r.
Try it today.

Secure your electronic trash

At the typical office, computer equipment becomes obsolete or slow and periodically requires replacement or refresh. This includes workstations, servers, copy machines, printers etc. Users who get the upgrades are inevitably pleased and carefully move their data to the new equipment, happily releasing the older gear. What happens after this? Does someone cart it off to the local recycling point? Do you call for a dumpster? This is likely the case at the Small and Medium Enterprise, whereas large enterprises may hire an electronics recycler.

This blog by Kyle Marks appeared in the Harvard Business Review and reminds us that sensitive data can very well leak via decommissioned electronics, too.

A SIEM solution like EventTracker is effective when leakage occurs from connected equipment or even mobile laptops or those that connect infrequently. However, disconnected and decommissioned equipment is invisible to a SIEM solution.

If you are subject to regulatory compliance, leakage is leakage. Data security laws mandate that organizations implement “adequate safeguards” to ensure privacy protection of individuals. This applies equally when the leakage comes from your electronic trash. You are still bound to safeguard the data.

Marks points out that detailed tracking data, however, reveals a troubling fact: four out of five corporate IT asset disposal projects had at least one missing asset. More disturbing is the fact that 15% of these “untracked” assets are devices potentially bearing data such as laptops, computers, and servers.

Treating IT asset disposal as a “reverse procurement” process will deter insider theft. This is something that EventTracker cannot help with but is equally valid in addressing compliance and security regulations.

You often see a gumshoe or Private Investigator in the movies conduct Trash Archaeology while looking for clues. Now you know why.

What did Ben Franklin really mean?

In the aftermath of the disclosure of the NSA program called PRISM by Edward Snowden to a reporter at The Guardian, commentators have gone into overdrive and the most iconic quote is one attributed to Benjamin Franklin “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.

It was amazing that something said over 250 years ago would be so apropos. Conservatives favor an originalist interpretation of documents such as the US Constitution (see Federalist Society) and so it seemed possible that very similar concerns existed at that time.

Trying to get to the bottom of this quote, Ben Wittes of Brookings wrote that it does not mean what it seems to say.

The words appear originally in a 1755 letter that Franklin is presumed to have written on behalf of the Pennsylvania Assembly to the colonial governor during the French and Indian War. The Assembly wished to tax the lands of the Penn family, which ruled Pennsylvania from afar, to raise money for defense against French and Indian attacks. The Penn family was unwilling to acknowledge the power of the Assembly to tax them, and the Governor, being an appointee of the Penn family, kept vetoing the Assembly’s efforts. The Penn family later offered cash to fund defense of the frontier – as long as the Assembly would acknowledge that it lacked the power to tax the family’s lands.

Franklin was thus complaining of the choice facing the legislature between being able to make funds available for frontier defense versus maintaining its right of self-governance. He was criticizing the Governor for suggesting it should be willing to give up the latter to ensure the former.

The statement is typical of Franklin’s style and rhetoric, which also includes “Sell not virtue to purchase wealth, nor Liberty to purchase power.” While the circumstances were quite different, it seems the general principle he was stating is indeed relevant to the Snowden case.

What, me worry?

Alfred E. Neuman is the fictitious mascot and cover boy of Mad Magazine. Al Feldstein, who took over as editor in 1956, said, “I want him to have this devil-may-care attitude, someone who can maintain a sense of humor while the world is collapsing around him.”

The #1 reason management doesn’t get security is the sense that “It can’t happen to me” or “What, me worry?” The general argument goes – we are not involved in financial services or national defense. Why would anyone care about what I have? And in any case, even if they hack me, what would they get? It’s not even worth the bother. Larry Ponemon writing in the Harvard Business Review captures this sentiment.

Attackers are increasingly targeting small companies, planting malware that not only steals customer data and contact lists but also makes its way into the computer systems of other companies, such as vendors. Hackers might also be more interested in your employees than you’d think. Are your workers relatively affluent? If so, chances are the hackers are way ahead of you and are either looking for a way into your company, or are already inside, stealing employee data and passwords which (as they well know) people tend to reuse for all their online accounts.

Ponemon says “It’s literally true that no company is immune anymore. In a study we conducted in 2006, approximately 5% of all endpoints, such as desktops and laptops, were infected by previously undetected malware at any given time. In 2009—2010, the proportion was up to 35%. In a new study, it looks as though the figure is going to be close to 54%, and the array of infected devices is wider too, ranging from laptops to phones.”

In the recent revelations by Edward Snowden who blew the whistle on the NSA program called “Prism”, many prominent voices have said they are ok with the program and have nothing to hide. This is another aspect of “What, me worry?” Benjamin Franklin had it right many years ago, “Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.”

Learning from LeBron

Thinking about implementing analytics? Before you do that, ask yourself “What answers do I want from the data?”

After the Miami Heat lost the 2011 NBA Finals to the Dallas Mavericks, many armchair MVPs were only too happy to explain that LeBron was not a clutch player and didn’t have what it takes to win championships in this league. Both LeBron and Coach Erik Spoelstra, however, were determined to convert that loss into a teaching moment.

Analytics was indicated. But what was the question?  According to Spoelstra, “It took the ultimate failure in the Finals to view LeBron and our offense with a different lens. He was the most versatile player in the league. We had to figure out a way to use him in the most versatile of ways — in unconventional ways.” In the last game of the 2011 Finals, James was almost listlessly loitering beyond the arc, hesitating, shying away, and failing to take advantage of his stature. His last shot of those Finals was symbolic: an ill-fated 25-foot jump shot from the outskirts of the right wing — his favorite 3-point shot location that season.

LeBron decided the correct answer was to work on the post-up game during the off season. He spent a week learning from the great Hakeem Olajuwon. He brought his own videographer to record the sessions for later review. LeBron arrived early for each session and was stretched and ready to go every time. He took the lessons to the gym for the rest of the off season. It worked. James emerged from that summer transformed. “When he returned after the lockout, he was a totally different player,” Spoelstra says. “It was as if he downloaded a program with all of Olajuwon’s and Ewing’s post-up moves. I don’t know if I’ve seen a player improve that much in a specific area in one offseason. His improvement in that area alone transformed our offense to a championship level in 2012.”

The true test of analytics isn’t just how good they are, but how committed you are to using the data. At the 2012 NBA Finals, LeBron won the MVP title and Miami the championship.

The lesson to learn here is to know what answers you are seeking from the data, and to commit to going where the data takes you.

Two classes of cyber threat to critical infrastructure

John Villasenor describes two classes of cyber threat confronting critical infrastructure. Some targets, like the power grid, are viewed by everyone as critical, and the number of people who might credibly attack them is correspondingly smaller. Others, like the internal networks in the Pentagon, are viewed as a target by a much larger number of people. Providing a high level of protection to those systems is extremely challenging, but feasible. Securing them completely is not.

While I would agree that fewer people are interested in or able to hack the power grid, it reminds me of the “insider threat” problem that enterprises face. When an empowered insider who has legitimate access goes rogue, the threat can be very hard to locate and the damage can be incredibly high. Most defense techniques for insider threat depend on monitoring and behavior anomaly detection. Adding to the problem, systems like the power grid are harder to upgrade and harden. The basic methods – restrict access, enforce authentication, monitor activity – would still be applicable. No doubt, this was all true for the Natanz enrichment plant in Iran, and it still got hacked by Stuxnet. That system was apparently infected by a USB device carried in by an external contractor, so it would seem that restricting access and monitoring activity may have helped detect the attack sooner.

In the second class of threat, exemplified by the internal networks at the Pentagon, one assumes that all classic protection methods are enforced. Situational awareness in such cases becomes important. A local administrator who relies entirely on some central IT team to patrol, detect and inform him in time is expecting too much. It is said that God helps those who help themselves.

Villasenor also says: “There is one number that matters most in cybersecurity. No, it’s not the amount of money you’ve spent beefing up your information technology systems. And no, it’s not the number of PowerPoint slides needed to describe the sophisticated security measures protecting those systems, or the length of the encryption keys used to encode the data they hold. It’s really much simpler than that. The most important number in cybersecurity is how many people are mad at you.”

Perhaps we should also consider those interested in cybercrime? The malware industrial complex is booming and the average price for renting botnets to launch DDoS is plummeting.

The Post Breach Boom

A basic requirement for security is that systems be patched and security products like antivirus be updated as frequently as possible. However, there are practical reasons that limit the application of updates to production systems. This is often why the most active attacks are ones that have been known for many months.

A new report from the Ponemon Institute polled 3,529 IT and IT security professionals in the U.S., Canada, UK, Australia, Brazil, Japan, Singapore and United Arab Emirates to understand the steps they are taking in the aftermath of malicious and non-malicious data breaches. Here are some highlights:

On average, it is taking companies nearly three months (80 days) to discover a malicious breach and then more than four months (123 days) to resolve it.

    • One third of malicious breaches are not being caught by any of the companies’ defenses – they are instead discovered when companies are notified by a third party, either law enforcement, a partner, customer or other party – or discovered by accident. Meanwhile, more than one third of non-malicious breaches (34 percent) are discovered accidentally.
    • Nearly half of malicious breaches (42 percent) targeted applications and more than one third (36 percent) targeted user accounts.
    • On average, malicious breaches ($840,000) are significantly more costly than non-malicious data breaches ($470,000). For non-malicious breaches, lost reputation, brand value and image were reported as the most serious consequences by participants. For malicious breaches, organizations suffered lost time and productivity followed by loss of reputation.

Want an effective defense but wondering where to start? Consider SIEM Simplified.

Cyber Attacks: Why are they attacking us?

The news sites are abuzz with reports of Chinese cyber attacks on Washington DC institutions, both government and NGOs. Are you a possible target? It depends. Attackers funded by nation states have specific objectives and will pursue them. So if you are a dissident or enabling one, or have secrets that the attacker wants, then you may be a target. A law firm with access to intellectual property may be a target, but an individual has much more reason to fear cyber criminals who seek credit card details than a Chinese attack.

As Sun Tzu noted in the Art of War, “Know your enemy and know yourself, find naught in fear for 100 battles.”

So what are the Chinese after? Ezra Klein has a great piece in the Washington Post. He outlines three reasons:

1) Asymmetric warfare – The US defense budget is larger than those of the next 13 countries combined, and has been that way for a long, long time. In any conventional or atomic war, no conceivable adversary has any chance. An attack on critical infrastructure may help level the playing field. Operators of critical infrastructure, and of course US DoD locations, are at risk and should shore up defenses.

2) Intellectual property theft – China and Russia want to steal the intellectual property (IP) of American companies, and much of that property now lies in the cloud or on an employee’s hard drive. Stealing those blueprints, plans and ideas is an easy way to cut the costs of product development. Law firms and employees with IP need protection.

3) Understanding Washington – Chinese intelligence services are eager to understand how Washington works. Hackers often are searching for the unseen forces that might explain how the administration approaches an issue, experts say, with many Chinese officials presuming that reports by think tanks or news organizations are secretly the work of government officials — much as they would be in Beijing. This is the most interesting explanation, but the least relevant to the security practitioner.

If none of these apply to you, then you should be worried about cyber criminals who are out for financial gain. They are after classic money-making data like credit card or Social Security numbers, which are used to defraud Visa/Mastercard or perpetrate Medicare fraud. This is by far the most widespread type of hacking.

It turns out that many of the tools and tactics used by all these enemies are the same. Commodity attacks tend to be opportunistic and high volume. Persistent attacks tend to be low-and-slow. This in turn means the defenses for one would apply to the other, and often the most basic approaches are also the most effective. Effective approaches require discipline and dedication most of all. Sadly, this is the hardest commitment for the small and medium enterprises that are most vulnerable. If this is you, then consider a service like SIEM Simplified as an alternative to doing nothing.

Distinguished Warfare Medal for cyber warriors

In what was probably his last move as defense secretary, Leon E. Panetta announced on February 13, 2013 the creation of a new type of medal for troops engaged in cyber-operations and drone strikes, saying the move “recognizes the changing face of warfare.” The official description said that it “may not be awarded for valor in combat under any circumstances,” which is unique. The idea was to recognize accomplishments that are exceptional and outstanding but not bounded in any geographic or chronological manner – that is, not taking place in the combat zone. This recognized that people can now do extraordinary things because of the new technologies that are used in war.

On April 16, 2013, barely two months later, incoming Defense Secretary Chuck Hagel withdrew the medal. The medal had been the first combat-related award created since the Bronze Star in 1944.

Why was it thought to be necessary? Consider the case of the June 2006 mission that got the leader of al-Qaida in Iraq, Abu Musab al-Zarqawi. Reporting showed that U.S. warplanes dropped two 500-pound bombs on a house in which Zarqawi was meeting with other insurgent leaders. A U.S. military spokesman said coalition forces pinpointed Zarqawi’s location after weeks of tracking the movements of his spiritual adviser, Sheik Abdul Rahman, who was also killed in the blast. A team of unmanned aerial system (drone) operators tracked him down; over 600 hours of mission operational work finally pinpointed him. They put the laser target on the compound that the terrorist leader was in, and then an F-16 pilot flew six minutes, facing no enemy fire, and dropped the bombs – computer-guided, of course – on that laser. The pilot was awarded the Distinguished Flying Cross.

The idea behind the medal was that drone operators can be recognized as well. The Distinguished Warfare Medal was to rank just below the Distinguished Flying Cross. It was to have precedence over — and be worn on a uniform above — the Bronze Star with “V” device, a medal awarded to troops for specific heroic acts performed under fire in combat. It was intended to recognize the magnitude of the achievement, not the personal risk taken by the recipient.

The decision to cancel the medal is more reflective on the uneasiness about the extent to which UAVs are being used in war, rather than questioning the skill and dedication of the operators. In announcing the move, Secretary Hagel said a “device” will be affixed to existing medals to recognize those who fly and operate drones, whom he described as “critical to our military’s mission of safeguarding the nation.” It also did not help that the medal had a higher precedence than a Purple Heart or Bronze Star.

There is no getting away from it, warfare in the 21st Century is increasingly in the cyber domain.

Interpreting logs, the Tesla story

Did you see the NY Times review by John Broder that was critical of the Tesla Model S? Tesla CEO Elon Musk was not pleased. The two are not arguing over interpretations or anecdotal recollections of experiences; instead they are arguing over basic facts — things that are supposed to be indisputable in an environment with cameras, sensors and instantly searchable logs.

The conflicting accounts — both described in detail — carry a lesson for those of us involved in log interpretation. Data is supposed to be the authoritative alternative to memory, which is selective in its recollection. As Bianca Bosker said, “In Tesla-gate, Big Data hasn’t made good on its promise to deliver a Big Truth. It’s only fueled a Big Fight.”

This is a familiar scenario if you have picked through logs as a forensic exercise. We can (within limitations) try to answer four of the five W questions – Who, What, When and Where – but the fifth one – Why – is elusive and brings the analyst into the realm of guesswork.

The Tesla story is interesting because interested observers are trying to deduce why the reporter was driving around the parking lot – to find the charger receptacle, or to deliberately drain the battery and make for a bad review? Alas, the data alone cannot answer this question.

In other words, relying on data alone, big data included, to plumb human intention is fraught with difficulty. An analyst needs context.

What is your risk appetite?

In Jacobellis v. Ohio (1964), Justice Potter Stewart famously said of pornography, “I know it when I see it.” This is not dissimilar to the way many business leaders confront the concept of “risk”.

When a business leader can describe and identify the risk they are willing to accept, then the security team can put appropriate controls in place. Easy to say, but so very hard to do. That’s because the quantification and definition of risk vary widely depending on the person, the business unit, the enterprise and the vertical industry segment.

What is the downside of not being able to define risk? It leaves the security team guessing about what controls are appropriate. Inadequate controls expose the business to leakage and loss, whereas onerous controls are expen$ive and even offensive to users.

What do you do about it? Communication between the security team and business stakeholders is essential. We find that scenarios that demonstrate and personalize the impact of risk resonate best. It’s also useful to have a common vocabulary as the language divide between the security team and business stakeholders is a consistent problem. Where possible, use terminology that is already in use in the business instead of something from a standard or framework.

Five telltale signs that your data security is failing and what you can do about it

1) Security controls are not proportional to the business value of data

Protecting every bit of data as if it’s gold bullion in Ft. Knox is not practical. Control complexity (and therefore cost) must be proportional to the value of the items under protection. Loose change belongs on the bedside table; the crown jewels belong in the Tower of London. If you haven’t classified your data to know which is which, then the business stakeholders have no incentive to be involved in its protection.

2) Gaps between data owners and the security team

Data owners usually understand only business processes and activities and the related information – not the “data”. Security teams, on the other hand, understand “data” but usually not its relation to the business, and therefore its criticality to the enterprise. Each needs to take a half step into the other’s domain.

3) The company has never been penalized

Far too often, toothless regulation encourages a wait-and-see approach. Show me an organization that has failed an audit and I’ll show you one that is now motivated to make investments in security.

4) Stakeholders only see value in sharing, not the risk of leakage

Data owners get upset and push back against involving security teams in the setup of access management. Open access encourages sharing and improves productivity, they say. It’s my data, why are you placing obstacles in its usage? Can your security team effectively communicate the risk of leakage in terms that the data owner can understand?

5) Security is viewed as a hurdle to be overcome

How large is the gap between the business leaders and the security team? The farther apart they are, the harder it is to get support for security initiatives. It helps to have a champion, but over-dependence on a single person is not sustainable. You need buy-in from senior leadership.

SIEM Simplified for the Security No Man’s Land

In this blog post, Mike Rothman described the quandary facing the midsize business. With a few hundred employees, they have information that hackers want and will try to get, but not the budget or manpower to fund dedicated IT Security types, nor the volume of business to interest a large outsourcer. This puts them in no-man’s land with a bull’s-eye on their backs. Hackers are highly motivated to monetize their efforts and will therefore cheerfully pick the lowest-hanging fruit they can get. It’s a wicked problem to be sure, and one that we’ve been focused on addressing in our corner of the IT Security universe for some years now.

Our solution to this quandary is called SIEM Simplified, and it stems from the recognition that, as a vendor, we could go on adding all sorts of bells and whistles to our product offering only to see an ever-shrinking percentage of users actually use them in the manner they were designed. Why? Simply put, who has the time? Just as Mike says, our customers are people in mid-size businesses, wearing multiple hats, fighting fires and keeping things operational. SIEM Simplified is the addition of an expert crew at the EventTracker Control Center in Columbia, MD that does the basic blocking and tackling – the core ingredient if you want to put points on the board. Sharing the crew across multiple customers reduces the cost for each customer and increases the likelihood of finding the needle in the haystack. And because it’s our bread and butter, we can’t afford to get tired, take a vacation, fall sick or fall behind.

A decade-long focus on this problem as it relates to mid-size businesses has allowed us to tailor the solution to such needs. We use the behavior module to quickly spot new or out-of-ordinary patterns, and a wealth of existing reports and knowledge to do the routine but essential legwork of log review. Mike was correct in pointing out that “folks in security no-man’s land need … an advisor to guide them … They need someone to help them prioritize what they need to do right now.” SIEM Simplified delivers. More information here.

EventTracker Recommendation Engine

Online shopping continues to bring more and more business to “e-tailers.” Comscore says there was a 16% increase in holiday shopping this past season over the previous season. Some of this is attributed to the “recommendations” helpfully shown by the giants of the game, such as Amazon.

Here is how Amazon describes its recommendation algorithm: “We determine your interests by examining the items you’ve purchased, items you’ve told us you own, items you’ve rated, and items you’ve told us you like. We then compare your activity on our site with that of other customers, and using this comparison, are able to recommend other items that may interest you.”

Did you know that EventTracker has its own recommendation engine? It’s called Behavior Correlation and is part of EventTracker Enterprise. Just as Amazon learns about your browsing and buying habits and uses them to “suggest” other items, so also EventTracker auto-learns what is “normal” in your enterprise during an adaptive learning period. This can be as short as 3 days or as long as 15 days, depending on the nature of your network. In this period, various items such as IP addresses, users, administrators, process names, machines, USB serial numbers etc. are learned. Once learning is complete, data from the most recent period is compared to the learned behavior to pinpoint both unusual activities as well as those never before seen. EventTracker then “recommends” that you review these to determine if they point to trouble.

Learning never ends, so the baseline is adaptive, refreshing itself continuously. User-defined rules can also be implemented, wherein the comparison periods are specified rather than learned, and comparisons are performed not once a day but as frequently as once a minute.

If you shop online and feel drawn to a “recommendation”, pause to reflect how this concept can also improve your IT security by looking at logs.

Cyber Security Executive Order

Based on early media reports, the Cyber Security executive order would seem to portend voluntary compliance on the part of U.S. based companies to implement security standards developed in concert with the federal government.  Setting aside the irony of an executive order to voluntarily comply with standards that are yet to be developed, how should private and public sector organizations approach cyber security given today’s exploding threatscape and limited information technology budgets?  How best to prepare for more bad guys, more threats, more imposed standards with less people, time and money?

Back to basics.  First let’s identify the broader challenges: of course you’re watching the perimeter with every flavor of firewall technology and multiple layers of IDS, IPS, AV and other security tools.  But don’t get too comfortable: every organization that has suffered a damaging breach had all those things too.  Since every IT asset is a potential target, every IT asset must be monitored.  Easy to describe, hard to implement. Why?

Challenge number one: massive volumes of log data.  Every organization running a network with more than 100 nodes is already generating millions of audit and event logs.  Those logs are generated by users, administrators, security systems, servers, network devices and other paraphernalia.  They generate the raw data that tracks everything going on from innocent to evil, without prejudice.

Challenge number two: unstructured data. Despite talk and movement toward audit log standards, log data remains widely variable with no common format across platforms, systems and applications, and no universal glossary to define tokens and values.  Even if every major IT player from Microsoft to Oracle (and HP and Cisco), along with several thousand other IT organizations were to adopt uniform, universal log standards today, we would still have another decade or two of the dreaded “legacy data” with which to contend.

Challenge number three: cryptic or non-human-readable logs.  Unstructured data is difficult enough, but further adding to the complexity is that most log data content and structure are defined by developers, for developers or administrators.  Don’t assume that security officers and analysts, senior management, help desk personnel or even tenured system administrators can glance at a log and quickly and accurately understand its relevance – or, more importantly, what to do about it.

Solution?  Use what you already have more wisely.  Implement a log monitoring solution that will ingest all of the data you already generate (and largely ignore until after you discover there’s a real problem), process it in real-time using built-in intelligence, and present the analysis immediately in the form of alerts, dashboards, reports and search capabilities.  Take a poorly designed and voluminous asset (audit logs) and turn it into actionable intelligence.  It isn’t as difficult as it sounds, though it requires rigorous discipline and a serious time commitment.
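
To make challenge number two concrete, here is a sketch of the kind of normalization such a solution performs under the hood (two hypothetical patterns for illustration; a real product ships parsers for hundreds of log formats):

  import re

  PATTERNS = [
      # e.g. "Oct 11 22:14:15 host1 sshd[1234]: Failed password for root"
      re.compile(r"(?P<when>\w{3} +\d+ [\d:]+) (?P<where>\S+) "
                 r"sshd\[\d+\]: (?P<what>Failed password) for (?P<who>\S+)"),
      # e.g. a flat CSV export: "4625,DOMAIN\alice,WKSTN7,Logon failure"
      re.compile(r"4625,(?P<who>[^,]+),(?P<where>[^,]+),(?P<what>.+)"),
  ]

  def normalize(line):
      for pattern in PATTERNS:
          m = pattern.search(line)
          if m:
              return m.groupdict()   # common who/what/where fields
      return None                    # unparsed: route to the raw archive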

Cyber criminals employ the digital equivalent of what our military refers to as an “asymmetrical tactic.” Consider a hostile emerging super power in Asia that directly or indirectly funds a million cyber warriors at the U.S. equivalent of $10 a day; cheap labor in a global economy.  No organization, not even the federal government, the world’s largest bank or a 10 location retailer, has unlimited people, time and money to defend against millions of bad guys attacking on a much lower (asymmetrical) operational budget.

SIEM in the Social Era

The value proposition of our SIEM Simplified offering is that you can leave the heavy lifting to us. What is undeniable is that getting value from SIEM solutions requires patient sifting through millions of logs, dozens of reports and alerts to find nuggets of value. It’s quite similar to detective work.

But does that not mean you are somehow giving up power? Letting someone else get a claw hold in your domain?

Valid question, but consider this from Nilofer Merchant, who says: “In the Social Era, value will be (maybe even already is) no longer created primarily by people who work for you or your organization.”

Isn’t power about being the boss?
The Social Era has disrupted the traditional view of power, which has always been about your title, span of control and budget. Look at Wikipedia or Kickstarter, where being powerful is about championing an idea. With SIEM Simplified, you remain in control, notified as necessary, in charge of any remediation.

Aren’t I paid to know the answer?
Not really. Being the keeper of all the answers has become less important with the rise of fantastic search tools and the ease of sharing, as compared to say even 10 years ago. Merchant says “When an organization crowns a few people as chiefs of answers, it forces ideas to move slowly up and down the hierarchy, which makes the organization resistant to change and less competitive. The Social Era raises the pressure on leaders to move from knowing everything to knowing what needs to be addressed and then engaging many people in solving that, together.” Our staff does this every day, for many different environments. This allows us to see the commonalities and bring issues to the fore.

Does it mean blame if there is failure and no praise if it works?
In a crowdsourcing environment, there are many more hands in every pie. In practice, this leads to more ownership by more people, not less. Consider Wikipedia as an example of this. It does require different skills: collaborating instead of commanding, sharing power rather than hoarding it. After all, we are only successful if you are. Indeed, as a provider of the service, we are always mindful that this applies to us more than it does to you.

As a provider of services, we see clearly that the most effective engagements are the ones where we can avoid the classic us/them paradigm and instead act as a badgeless team. The Hubble Space Telescope is an excellent example of this type of effort.

It’s a Brave New World, and it’s coming at you, ready or not.

Big Data and Information Inequality

Mike Wu, writing in TechCrunch, observed that in all realistic data sets (especially big data), the amount of information one can extract from the data is always much less than the data volume (see figure below).

[Figure: extractable information as a small fraction of total data volume]

In his view, given the above, the value of big data is hugely exaggerated. He then goes on to infer that this is actually a strong argument for why we need even bigger data. Because the amount of valuable insights we can derive from big data is so very tiny, we need to collect even more data and use more powerful analytics to increase our chance of finding them.

Now machine data (aka log data) is certainly big data, and it is certainly true that obtaining insights from such datasets is a painstaking (and often thankless) job, but I wonder if this means we need even more data. Methinks we need to be able to better interpret the big data set and its relevance to “events”.

Over the past two years, we have been deeply involved in “eating our own dog food,” as it were. At multiple EventTracker installations that are nationwide in scope and span thousands of log sources, we have been working to extract insights for presentation to the network owners. In some cases, this is done with a lot of cooperation from the network owner, and we have a good understanding of the IT assets and the actors who use or abuse them. We find that with such involvement we are better able to risk-prioritize what we observe in the data set and map it to business concerns. In other cases, where there is less interaction with the network owner and we know less about the actors or the relative criticality of assets, we fall back on past experience and/or vendor-provided information as to what constitutes an incident.  It is the same dataset in both cases, but there is more value in one case than in the other.

To say it another way: to get more information from the same data, we need other types of context to extract signal from noise. Enabling more granular logging on the same devices, and thereby generating an ever bigger dataset, won’t raise the signal level. EventTracker can merge change audit data, netflow information and vulnerability scan data to achieve a greater signal-to-noise ratio. That is a big deal.
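
As a hedged illustration of why context lifts signal out of noise, consider scoring the same two events with and without asset and vulnerability context. Every name, weight and field below is invented for the example; the product’s actual merge logic is internal to EventTracker.

# context gathered from asset inventory and vulnerability scans (invented)
asset_criticality = {"db01": 9, "web01": 6, "kiosk17": 2}
open_vulns = {"web01": ["CVE-2012-1723"], "kiosk17": []}

def risk_score(event):
    base = event["severity"]                        # 1-10 from the log source
    crit = asset_criticality.get(event["host"], 5)  # default for unknown hosts
    vuln_bonus = 2 if open_vulns.get(event["host"]) else 0
    return base * crit + vuln_bonus

events = [
    {"host": "kiosk17", "severity": 8, "msg": "failed login"},
    {"host": "db01",    "severity": 4, "msg": "config change"},
]
for e in sorted(events, key=risk_score, reverse=True):
    print(risk_score(e), e["host"], e["msg"])

Without context, the noisy kiosk event (severity 8) outranks the database change (severity 4); with context, the order flips, which is exactly the point.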

Small Business: too small to care?

Small businesses around the world tend to be more innovative and cost-conscious. Most often, the owners tend to be younger and therefore more attuned to being online. The efficiencies that come from being computerized and connected are more obvious and attractive to them. But we know that if you are online then you are vulnerable to attack. Are these small businesses  too small for hackers to care?

Two recent reports say no.

The UK Information Security Breaches Survey 2012, published by PwC, shows:

  • 76% of small business had a security breach
  • 15% of small businesses were hit by a denial of service attack
  • 20% of small businesses lost confidential data and 80% of these breaches were serious
  • The average cost of a small business’s worst security breach was between £15,000 and £30,000
  • Only 8% of small businesses monitor what their staff post on social sites
  • 34% of small businesses allow smartphones and tablets to connect to their network but have done nothing to manage the risk
  • On average, IT security consumes 8% of IT spending, but 58% of small businesses make no attempt to evaluate the effectiveness of that expenditure

From the US, the 2012 Verizon data breach report shows:

  • Restaurant and POS systems are popular targets.
  • Companies with 11-100 employees from 36 countries had the maximum number of breaches.
  • The top threats to small businesses were external attacks against servers
  • 83% of the theft was by professional cybercriminals, for profit
  • Keyloggers designed to capture user input were present in 48% of breaches
  • The most common malware injection vector is installation by a remote attacker
  • Payment card info and authentication credentials were the most stolen data
  • The initial compromise required only basic methods with no customization; automated scripts can do it
  • More than 79% of attacks were opportunistic; large-scale automated attacks are opportunistically attacking small to medium businesses, and POS systems frequently provide the opportunity
  • In 72% of cases, it took only minutes from initial attack to compromise but hours for data removal and days for detection
  • More than 55% of breaches remained undiscovered for months
  • More than 92% of the breaches were reported by an external party
  • Only 11% were monitoring access, which is called out in Requirement 10 of PCI-DSS

Lesson learned? Small may be beautiful, but in the interconnected world we live in, not too small to be hacked. Protect thyself – start simple: change remote access credentials, enable a firewall, and monitor and mine your logs. ‘Nuff said.

A smartphone named Desire

Is it true for you that your smartphone has merged your private and work lives? Smartphones now contain—by accident or by design—a wealth of information about the businesses we work for.

If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?

Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on them, along with the capability to remotely monitor what happened to them once they were found. They were left in high traffic public places such as elevators, malls, food courts, and public transit stops.

Key findings:

  • 96% of lost smartphones were accessed by the finders of the devices
  • 89% of devices were accessed for personal related apps and information
  • 83% of devices were accessed for corporate related apps and information
  • 70% of devices were accessed for both business and personal related apps and information
  • 50% of smartphone finders contacted the owner and provided contact information

The corporate related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?

  • Take inventory of the mobile devices connecting to your company’s networks; you can’t protect and manage what you don’t know about.
  • Track resource access by mobile devices. For example, if you are using MS Exchange, ActiveSync logs can tell you a whole lot about such access (see the sketch after this list).
  • See our white paper on the subject
  • Track all remote login to critical servers
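
Here is a minimal sketch of the ActiveSync idea from the list above: building a per-user inventory of mobile devices from IIS logs. The log file name is hypothetical, and field layout varies with your W3C logging configuration, so treat the parsing as illustrative rather than definitive.

from collections import defaultdict
from urllib.parse import parse_qs

devices_by_user = defaultdict(set)

with open("u_ex120401.log") as f:              # hypothetical IIS log file
    for line in f:
        if line.startswith("#") or "Microsoft-Server-ActiveSync" not in line:
            continue                           # skip headers and non-EAS traffic
        fields = line.split()
        query = next((x for x in fields if "DeviceId=" in x), "")
        params = parse_qs(query)
        user = params.get("User", ["?"])[0]
        device = params.get("DeviceId", ["?"])[0]
        devices_by_user[user].add(device)

for user, devices in sorted(devices_by_user.items()):
    print(f"{user}: {len(devices)} device(s) {sorted(devices)}")

A report like this, run weekly, also answers the inventory question in the first bullet: you cannot protect what you do not know about.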

See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’

Should You Disable Java?

The headlines are ablaze with the news of a new zero-day vulnerability in Java which could expose you to a remote attacker.

The Department of Homeland Security recommends disabling Java completely, and many experts are apparently concurring. Crisis communications 101 says maintain high-volume, multi-channel communications, but there is a strange silence from Oracle, aside from the announcement of a patch for said vulnerability.

Allowing your opponents to define you is a bad mistake, as any political consultant will tell you. Today it’s Java; tomorrow, some other widely used component. The shrillness of the calls also makes me wonder: why the hullabaloo?  Upset by Oracle’s stewardship of Java, perhaps?

So what should you make of the “disable Java” calls echoing across Cyberia?  Personally, I think it’s bad advice, assuming you can even take the advice in the first place. Java is widespread in server-side applications (usually enterprise software) and embedded devices. There is probably no easy way to “upgrade” a heart pump, an elevator control or a POS system. The server side may be easier, but spare a thought for backward compatibility and business applications that are “certified” on older browsers. Pause a moment: the vulnerability is exposed only when you visit a malicious website, which can then take advantage of the flaw and get onto your machine.

Instead of disabling Java and thereby possibly breaking critical functionality, why not limit access to outside websites instead? This is easily done by configuring proxy servers (good for desktops or mobile situations) or by limiting devices to a subnet that only has access to trusted internal hosts (this can work for bar code scanners or manufacturing equipment). This limits your exposure. Proxy server filtering at the internet perimeter is done by matching the user agent string. This is also a good way to prevent those older, insecure browsers that must be kept around for internal applications from reaching the outside and becoming an equal source of infection in the enterprise.

This is a serious issue that merits a thoughtful response, not a panicked rush to comply and cripple your enterprise.

2013 Security Resolutions

A New Year’s resolution is a commitment that a person makes to one or more personal goals, projects, or the reforming of a habit.

  • The ancient Babylonians made promises to their gods at the start of each year that they would return borrowed objects and pay their debts.
  • The Romans began each year by making promises to the god Janus, for whom the month of January is named.
  • In the Medieval era, the knights took the “peacock vow” at the end of the Christmas season each year to re-affirm their commitment to chivalry.

Here are mine:

1)      Shed those extra pounds of logs:

Log retention is always a challenge — how much to keep, and for how long? Keep logs too long and they just eat away at storage space. Pitch them mercilessly and you keep wondering if you will need them.  For guidance, look to any regulation that may apply. PCI-DSS says 365 days, for example; NIST 800-92 unhelpfully says “This should be driven primarily by organizational policies” and then goes on to classify logs into system, infrastructure and application levels. Bottom line: use your judgment, because you know your environment best.
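
Once the window is chosen, enforcing it is mundane enough to automate. A minimal sketch, assuming one compressed archive file per day in a single directory and PCI-DSS’s 365-day window; both the path and the layout are assumptions.

import os
import time

ARCHIVE_DIR = "/var/log/archive"   # hypothetical archive location
RETENTION_DAYS = 365               # e.g., the PCI-DSS retention period

cutoff = time.time() - RETENTION_DAYS * 86400
for name in os.listdir(ARCHIVE_DIR):
    path = os.path.join(ARCHIVE_DIR, name)
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        print(f"pruning {name}")   # log the action before taking it
        os.remove(path)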

2)      Exercise your log analysis muscles regularly

As the Verizon Data Breach report says year in and year out, the bad guys are hoping that you are not collecting logs, and if you are, that you are not reviewing them. More than 96% of all attacks were not highly difficult and were avoidable (at least in hindsight) without difficult or expensive countermeasures. Easier said than done, isn’t it? Consider co-sourcing the effort.

3)      Play with existing toys before buying new ones

Know what configuration assessment is? It’s applying secure configurations to existing equipment. Bodies such as NIST, CIS and DISA provide detailed guidelines, and vendors such as Microsoft provide hardening guides. It’s a question of applying them to existing hardware. This reduces the attack surface and contributes greatly to a more secure posture. You already have the equipment; just apply the secure configuration.  EventTracker can help measure results.

Happy New Year.

Five Leadership Lessons from Simpson-Bowles

In January 2010 the U.S. Senate was locked in a sharp debate about the country’s debt and deficit crisis. Unable to agree on a course of action, some Senators proposed the creation of a fiscal commission that would send Congress a proposal to address the problem with no possibility of amendments.  It was chaired by former Senator Alan Simpson and former White House chief of staff Erskine Bowles.

Darrell West and Ashley Gabriele of Brookings examined the leadership lessons in this article. I was struck by how some of the lessons apply to the SIEM problem.

1) Stop Fantasizing About Easy Fixes

Cutting waste and fraud is not sufficient to address long-term debt and deficit issues. To think that we can avoid difficult policy choices simply by getting rid of wasteful spending is a fantasy.   It’s also tempting to think that the next Cisco firewall, Microsoft OS or magic box will solve all security issues; that the hard work of reviewing logs, changes and assessing configuration will not be needed. It’s high time to stop fantasizing about such things.

2) Facts Are Informative

Senator Daniel Patrick Moynihan famously remarked that “everyone is entitled to his own opinion, but not to his own facts.” This insight often is lost in Washington D.C. where leaders invoke “facts” on a selective or misleading basis. The Verizon Data Breach report has repeatedly shown that attacks are not highly difficult, that most breaches took weeks or more to be discovered and that almost all were avoidable through simple controls.   We can’t get away from it — looking at logs is basic and effective.

3) Compromise Is Not a Dirty Word

One of the most challenging aspects of the contemporary political situation is how bargaining, compromise, and negotiation have become dirty words. Do you have this problem in your Enterprise? Between the Security and Compliance teams? Between the Windows and Unix teams? Between the Network and Host teams? Is it preventing you from evaluating and agreeing on a common solution? If yes, this lesson is for you — compromise is not a dirty word.

4) Security and Compliance Have Credibility in Different Areas

On fiscal issues, Democrats have credibility on entitlement reform because of their party’s longstanding advocacy on behalf of Social Security, Medicare, and Medicaid. Meanwhile, Republicans have credibility on defense issues and revenue enhancement because of their party’s history of defending the military and fighting revenue increases. In our world, the Compliance team has credibility on regular log review and coverage of critical systems, while the Security team has credibility on identifying obvious and subtle threats (out-of-ordinary behavior). Different areas, all good.

5) It’s Relationships, Stupid!

Commission leaders found that private and confidential discussions and trust-building exercises were important to achieving the final result. They felt that while public access and a free press were essential to openness and transparency, some meetings and most discussions had to be held behind closed doors. Empower the evaluation team to have frank and open discussion with all stakeholders — including those from Security, Compliance, Operations and Management. Such a consensus built in advance leads to a successful IT project.

Top 5 Security Threats of All Time

The newspapers are full of stories of the latest attack. Then vendors rush to put out marketing statements glorifying themselves for already having had a solution to the problem, if only you had their product/service, and the beat goes on.

Pause for a moment and compare this to health scares. The top 10 scares according to ABC News include Swine Flu (H1N1), BPA, Lead paint on toys from China, Bird Flu (H5N1) and so on.   They are, no doubt, scary monsters but did you know that the common cold causes 22 million school days to be lost in the USA alone?

In other words, you are better off enforcing basic discipline to prevent days lost from common infections than stockpiling exotic vaccines. The same is true in IT security. Here then, are the top 5 attack vectors of all time. Needless to say these are not particularly hard to execute, and are most often successful simply because basic precautions are not in place or enforced. The Verizon Data Breach Report demonstrates this year in and year out.

1. Information theft and leakage

Personally Identifiable Information (PII) data stolen from unsecured storage is rampant. The Federal Trade Commission says 21% of complaints are related to identity theft, accounting for 1.3M cases in 2009/10 in the USA. The 2012 Verizon DBIR shows 855 incidents and 174M compromised records.

Lesson learned: Implement recommendations like SANS CAG or PCI-DSS.

2. Brute force attack

Hackers leverage cheap computing power and pervasive broadband connectivity to breach security. This is a low cost, low tech attack that can be automated remotely.   It can be easily detected and defended against, but it requires monitoring and eyes on the logs. It tends to be successful because monitoring is absent.

Lesson learned: Monitor logs from firewalls and network devices in real time. Set up alerts which are reviewed by staff and acted upon as needed. If this is too time consuming, then consider a service like SIEM Simplified.
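
The detection rule itself is simple; the discipline is in watching its output. Here is a hedged sketch of a sliding-window failed-login detector. The threshold, window and event format are assumptions to be adapted to your firewall or authentication logs.

from collections import defaultdict, deque

THRESHOLD, WINDOW = 10, 60        # 10 failures within 60 seconds (tune to taste)
recent = defaultdict(deque)       # source IP -> timestamps of recent failures

def on_failed_login(src_ip, ts):
    q = recent[src_ip]
    q.append(ts)
    while q and ts - q[0] > WINDOW:
        q.popleft()               # drop failures that aged out of the window
    if len(q) >= THRESHOLD:
        print(f"ALERT: possible brute force from {src_ip} "
              f"({len(q)} failures in {WINDOW}s)")

# usage: feed parsed (source IP, epoch seconds) pairs from your logs
for i in range(12):
    on_failed_login("203.0.113.9", 1000 + i * 4)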

3. Insider breach

Staff on the inside are often privy to a large amount of data and can cause much greater damage. The Wikileaks case is the poster child for this type of attack.

4. Process and Procedure failures

It is often the case that, in the normal course of business, established processes and procedures are ignored, and unfortunate coincidences then cause problems.  Examples include e-mailing interim work products to personal accounts, taking work home on USB sticks and losing them, and mailing CD-ROMs with source code that never arrive.

Lesson learned: Reinforce policies and procedures for all employees on a regular basis. Many US Government agencies require annual completion of a Computer Security and Assessment Test.   Many commercial banks remind users via message boxes in the login screen.

5. Operating failures

This includes oops moments, such as backing up data to the wrong server and sending backup data off-site where it can be restored by unauthorized persons.

Lesson learned: Review procedures and policies for gaps. An external auditor can be helpful in identifying such gaps and recommending compensating controls to cover them.

Big Data, Old News. Got Humans?

Did you know that big data is old news in the area of financial derivatives?   O’Connor & Associates  was founded in 1977 by mathematician Michael Greenbaum, who had run risk management for Ed & Bill O’Connor’s options trading firm. What made O’Connor and Associates successful was the understanding that expertise is far more important than any tool or algorithm. After all, absent expertise, any tool can only generate gibberish; perfectly processed and completely logical, of course, but still gibberish.

Which brings us back to the critical role played by the driver of today’s enterprise tools. These tools are all full featured and automate the work of crushing an entire hillside of dirt to locate tiny grams of gold — but “got human”? It comes back to the skilled operator who knows how and when to push all those fancy buttons. Of course deciding which hillside to crush is another problem altogether.

This is a particularly difficult challenge for midsize enterprises, which struggle with SIEM data: billions of logs, plus change and configuration data, all now available thanks to that shiny SIEM you just installed. What does it mean? What are you supposed to do next? Large enterprises can afford a small army of experts to extract value, and the small business can ignore the problem completely, but for the midsize enterprise it’s the worst of all worlds: compliance regulations, tight budgets, lean staff and the demand for results.

This is why our SIEM Simplified offering was created: to allow customers to outsource the heavy lifting part of the problem while maintaining control over the critical and sensitive decision-making parts. At the EventTracker Control Center (ECC), our expert staff watches your incidents and reviews log reports daily, and alerts you to those few truly critical conditions that warrant your attention. This frees up your staff to take care of things that cannot be outsourced. In addition, since the ECC enjoys economies of scale, this can be done at a lower cost than do-it-yourself. This puts the critical human component back into the equation, but at a price point that is affordable.

As Grady Booch observed “A fool with a tool is still a fool.”

Five myths about PCI-DSS

In the spirit of the Washington Post’s regular column, “5 Myths,” here we “challenge everything you think you know” about PCI-DSS compliance.

1. One vendor and product will make us compliant

While many vendors offer an array of services and software which target PCI-DSS, no single vendor or product fully addresses all 12 of the PCI-DSS v2.0 requirements. Marketing departments often position offerings in such a manner as to give the impression of a “silver bullet.”   The PCI Security Standards Council warns against reliance on a single product or vendor and urges a security strategy that focuses on the big picture.

2. Outsourcing card processing makes us compliant

Outsourcing may simplify payment card processing but does not provide automatic compliance. PCI-DSS also calls for policies and procedures to safeguard cardholder transactions and data processing when you receive them — for example, chargebacks or refunds. You should request an annual certificate of compliance from the vendor to ensure that their applications and terminals are compliant.

3. PCI is too hard, requires too much effort

The 12 requirements can seem difficult to understand and implement for merchants without a dedicated IT department; however, these requirements are basic steps for good security. The standard offers the alternative of compensating controls, if needed. The market is awash with products and services to help merchants achieve compliance. Also consider that the cost of non-compliance can often be higher, including fines, legal fees, lost business and reputation.

4. PCI requires us to hire a Qualified Security Assessor (QSA)

PCI-DSS offers the option of doing a self-assessment with officer sign-off if your merchant bank agrees. Most large retailers prefer to hire a QSA because they have complex environments, and QSAs provide valuable expertise including the use of compensating controls.

5. PCI compliance will make us more secure

Security exploits are non-stop: an ever-escalating war between the bad guys and the good guys. Achieving PCI-DSS compliance, while certainly a “brick in the wall” of your security posture, is only a snapshot in time. “Eternal vigilance is the price of liberty,” said Wendell Phillips.

Does Big Data = Better Results? It depends…

If you could offer your IT Security team 100 times more data than they currently collect – every last log, every configuration, every single change made to every device in the entire enterprise at zero cost – would they be better off? Would your enterprise be more secure? Completely compliant? You already know the answer – not really, no. In fact, some compliance-focused customers tell us they would be worse off because of liability concerns (you had the data all along but neglected to use it to safeguard my privacy), and some security focused customers say it will actually make things worse because we have no processes to effectively manage such archives.

As Michael Schrage noted, big data doesn’t inherently lead to better results. Organizations must grasp that being “big data-driven requires more qualified human judgment than cloud-based machine learning.” For big data to be meaningful, it has to be linked to a desirable business outcome, or else executives are just being impressed or intimidated by the bigness of the data set. For example, IBM’s DeepQA project stores petabytes of data and was demonstrated by Watson, the successful Jeopardy-playing machine – that is big data clearly linked to a desirable outcome.

In our corner of the woods, the desirable business outcomes are well understood.   We want to keep bad guys out (malware, hackers), learn about the guys inside that have gone bad (insider threats), demonstrate continuous compliance, and of course do all this on a leaner, meaner budget.

Big data can be an embarrassment of riches if linked to such outcomes.  But note the emphasis on “qualified human judgment.”  Absent this, big data may be just an embarrassment. This point underlines the core problem with SIEM – we can collect everything, but who has the time or rule-set to make the valuable stuff jump out? If you agree, consider a managed service. It’s a cost-effective way to put big data to work in your enterprise today – clearly linked to a set of desirable outcomes.

Are you a Data Scientist?

The advent of the big data era means that analyzing large, messy, unstructured data will increasingly form part of everyone’s work. Managers and business analysts will often be called upon to conduct data-driven experiments, to interpret data, and to create innovative data-based products and services. To thrive in this world, many will require additional skills. In a new Avanade survey, more than 60 percent of respondents said their employees need to develop new skills to translate big data into insights and business value.

Are you:

Ready and willing to experiment with your log and SIEM data? Managers and security analysts must be able to apply the principles of scientific experimentation to their log and SIEM data. They must know how to construct intelligent hypotheses. They also need to understand the principles of experimental testing and design, including population selection and sampling, in order to evaluate the validity of data analyses. As randomized testing and experimentation become more commonplace, a background in scientific experimental design will be particularly valued.

Adept at mathematical reasoning? How many of your IT staff today are really “numerate” — competent in the interpretation and use of numeric data? It’s a skill that’s going to become increasingly critical. IT Staff members don’t need to be statisticians, but they need to understand the proper usage of statistical methods. They should understand how to interpret data, metrics and the results of statistical models.

Able to see the big (data) picture? You might call this “data literacy,” or competence in finding, manipulating, managing, and interpreting data, including not just numbers but also text and images. Data literacy skills should be widespread within the IT function, and become an integral aspect of every function and activity.

Jeanne Harris blogging in the Harvard Business Review writes, “Tomorrow’s leaders need to ensure that their people have these skills, along with the culture, support and accountability to go with it. In addition, they must be comfortable leading organizations in which many employees, not just a handful of IT professionals and PhDs in statistics, are up to their necks in the complexities of analyzing large, unstructured and messy data.

“Ensuring that big data creates big value calls for a reskilling effort that is at least as much about fostering a data-driven mindset and analytical culture as it is about adopting new technology. Companies leading the revolution already have an experiment-focused, numerate, data-literate workforce.”

If this presents a challenge, then co-sourcing the function may be an option. The EventTracker Control Center here at Prism offers SIEM Simplified, a service where trained, expert IT staff perform the heavy lifting associated with big data analysis as it relates to SIEM data. By removing the outliers and bringing patterns to your attention with the efficiency that comes from scale, focus and expertise, the service leaves you free to concentrate on interpretation and the associated actions.

Seven deadly sins of SIEM

1) Lust: Be not easily lured by the fun, sexy demo. It always looks fantastic when the sales guy is driving. How does it work when you drive? Better yet, on your data?

2) Gluttony: Know thy log volume. When thee consumeth mucho more raw logs than thou expected, thou shall pay and pay dearly. More SIEM budgets die from log gluttony than starvation.

3) Greed: Pure pursuit of perfect rules is perilous. Pick a problem you’re passionate about, craft monitoring, and only after it is clearly understood do you automate remediation.

4) Sloth: The lazy shall languish in obscurity. Toilers triumph. Use thy SIEM every day, acknowledge the incidents, review the log reports. Too hard? No time, you say? Consider SIEM Simplified.

5) Wrath: Don’t get angry with the naysayers. Attack the problem instead. Remember “those who can, do; those who cannot, criticize.” Democrats: Yes we can v2.0.

6) Envy: Do not copy others blindly out of envy for their strategy. Account for your differences (but do emulate best practices).

7) Pride: Hubris kills. Humility has a power all its own. Don’t claim 100% compliance or security. Rather you have 80% coverage but at 20% cost and refining to get the rest. Republicans: So sayeth Ronald Reagan.

Trending Behavior – The Fastest Way to Value

Our  SIEM Simplified  offering is manned by a dedicated staff overseeing the EventTracker Control Center (ECC). When a new customer comes aboard, the ECC staff is tasked with getting to know the new environment, identifying which systems are critical, which applications need watching, and what access controls are in place, etc. In theory, the customer would bring the ECC staff up to speed (this is their network, after all) and keep them up to date as the environment changes. Reality bites and this is rarely the case. More commonly, the customer is unable to provide the ECC with anything other than the most basic of information.

How then can the ECC “learn” and why is this problem interesting to SIEM users at large?

Let’s tackle the latter question first. A problem facing new users of a SIEM installation is that they get buried in learning the baseline pattern of their enterprise (the very same problem the ECC faces). See this article from a practitioner.

So it’s the same problem. How does the ECC respond?

Short answer: By looking at behavior trends and spotting the anomalies.

Long answer: The ECC first discovers the network and learns the various device types (OS, application, network devices, etc.). This is readily automated by the StatusTracker module. If we are lucky, we get to ask the customer specific questions to bolster our understanding. Next, based on this information and the available knowledge packs within EventTracker, we schedule suitable daily and weekly reports and configure alerts. So far, so good, but no cigar. The real magic lies in taking these reports and creating flex reports, where we control the output format to focus on parameters of value embedded within the description portion of the log messages (this is always true for syslog-formatted messages, but also holds for Windows-style events). When these parameters are trended on a graph, all sorts of interesting information emerges.
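
A rough sketch of what “trending a parameter from the description” can look like in practice. The regex, the timestamp layout and the simple mean-plus-three-sigma test below are stand-ins; the flex reports inside EventTracker are considerably more capable.

import re
import statistics
from collections import Counter

FAILED = re.compile(r"logon failed for user (\S+)", re.IGNORECASE)

def hourly_failures(lines):
    """Count failed-logon events per hour; assumes lines start 'YYYY-MM-DD HH:...'."""
    counts = Counter()
    for line in lines:
        if FAILED.search(line):
            counts[line[:13]] += 1          # bucket key: date plus hour
    return counts

def anomalies(counts):
    """Flag hours more than three standard deviations above the mean."""
    values = list(counts.values())
    if len(values) < 2:
        return {}
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    return {hour: n for hour, n in counts.items() if n > mean + 3 * stdev}

Run daily over a rolling window of logs, a report like this surfaces exactly the kind of oddities described next.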

In one case, we saw that a particular group of users was putting their passwords in the username field far more often than usual. You see a failed login followed by a successful one; combine the two and you have both the username and the password. In another case, we saw repeated failed logons after hours from a critical IBM i-Series machine and hit the panic button. It turned out someone had left a book on the keyboard.

Takeaway: Want to get useful value from your SIEM but don’t have gobs of time to configure or tune the thing for months on end? Think trending behavior, preferably auto-learned. It’s what sets EventTracker apart from the search engine based SIEMs or from the rules based products that need an expen$ive human analyst chained to the product for months on end. Better yet, let the ECC do the heavy lifting for you. SIEM Simplified, indeed.

SIEM Fevers and the Antidote

SIEM Fever is a condition that robs otherwise rational people of common sense in regard to adopting and applying Security Information and Event Management (SIEM) technology for their IT Security and Compliance needs. The consequences of SIEM Fever have contributed to misapplication, misuse, and misunderstanding of SIEM with costly impact. For example, some organizations have adopted SIEM in contexts where there is no hope of a return on investment. Others have invested in training and reorganization but use or abuse the technology with new terminology taken from the vendor dictionary.   Alex Bell of Boeing first described these conditions.

Before you get your knickers in a twist over a belief that this is an attack on SIEM that must be avenged with flaming commentary against its author, fear not. There are real IT Security and Compliance efforts wasting real money and real time by misusing SIEM in a number of common forms. Let’s review these types of SIEM Fevers so they can be recognized and treated.

Lemming Fever: A person with Lemming Fever knows about SIEM simply based upon what he or she has been told (be it true or false), without any first-hand experience or knowledge of it themselves. The consequences of Lemming Fever can be very dangerous if infectees have any kind of decision making responsibility for an enterprise’s SIEM adoption trajectory. The danger tends to increase as a function of an afflictee’s seniority in the program organization due to the greater consequences of bad decision making and the ability to dismiss underling guidance. Lemming Fever is one of the most dangerous SIEM Fevers as it is usually a precondition to many of the following fevers.

Easy Button Fever: This person believes that adopting SIEM is as simple as pressing Staples’ Easy Button, at which point their program magically and immediately begins reaping the benefits of SIEM as imagined during the Lemming Fever stage of infection. Depending on the Security Operations Center (SOC) methodology, however, the deployment of SIEM could mean significant change. Typically, these people have little to no idea about the features which are necessary for delivering SIEM’s productivity improvements, or about the possible inapplicability of those features to their environment.

One Size Fits All Fever: Victims of One Size Fits All Fever believe that the same SIEM model is applicable to any and all environments with a return on investment being implicit in adoption. While tailoring is an important part of SIEM adoption, the extent to which SIEM must be tailored for a specific environment’s context is an important barometer of its appropriateness. One Size Fits All Fever is a mental mindset that may stand alone from other Fevers that are typically associated with the tactical misuse of SIEM.

Simon Says Fever: Afflictees of Simon Says Fever are recognized by their participation in SIEM-related activities without the slightest idea as to why those activities are being conducted or why they are important, other than that they are included in some “checklist”. The most common cause of this Fever is failing to tie all log and incident review activities to adding value, and falling into a comfortable, robotic regimen that is merely an illusion of progress.

One-Eyed King Fever: This Fever has the potential to severely impact the successful adoption of SIEM and occurs when the SIEM blind are coached by people with only a slightly better understanding of SIEM. The most common symptom occurring in the presence of One-Eyed King Fever is failure to tailor the SIEM implementation to its specific context, or the failure of a coach to recognize and act on a low probability of return on investment as it pertains to an enterprise’s adoption.

The Antidote: SIEM doesn’t cause the Fevers previously described; people do. Whether these people are well intentioned, have studied at the finest schools, or have high IQs, they are typically ignorant of SIEM in many dimensions. They have little idea about the qualities of SIEM which are the bases of its advertised productivity-improving features, they believe that those improvements are guaranteed by merely adopting SIEM, or they have little idea that the extent of SIEM’s ability to deliver benefit is highly dependent upon program-specific context.

The antidote for the many forms of SIEM Fever is education. Unfortunately, many of those who are prone to the aforementioned SIEM infections and most desperately in need of such education are often unaware of what they don’t know about SIEM, unreceptive to learning about what they don’t know, or convinced that those trying to educate them are simply village idiots who have not yet seen the brightly burning SIEM light.

While I’m being entirely tongue-in-cheek, the previously described examples of SIEM misuse and misapplication are real and occurring on a daily basis.   These are not cases of industrial sabotage caused by rogue employees planted by a competitor, but are instead self-inflicted and frequently continue even amidst the availability of experts who are capable of rectifying them.

Interested in getting help? Consider SIEM Simplified.

Surfing the Hype Cycle for SIEM

The Gartner hype cycle is a graphic “source of insight to manage technology deployment within the context of your specific business goals.”  If you have already adopted Security Information and Event Management (SIEM) (aka log management) technology in your organization, how is that working for you? As a candidate, Reagan famously asked, “Are you better off than you were four years ago?”

Sadly, many buyers of this technology are wallowing in the “trough of disillusionment.”  The implementation has been harder than expected, the technology more complex than demonstrated, the discipline required to use and tune the product is lacking; add resource constraints and hiring freezes, and the list goes on.

What next? Here are some choices to consider.

Do nothing: Perhaps the compliance check box has been checked off; auditors can be shown the SIEM deployment and sent on their way; the senior staff are on to the next big thing; the junior staff have their hands full anyway; leave well enough alone.
Upside: No new costs, no disturbance in the status quo.
Downside: No improvements in security or operations; attackers count on the fact that even if you do collect SIEM log data, you will never really look at it.

Abandon ship: Give up on the whole SIEM concept as yet another failed IT project; the technology was immature; the vendor support was poor; we did not get resources to do the job and so on.
Upside: No new costs, in fact perhaps some cost savings from the annual maintenance, one less technology to deal with.
Downside: Naked in the face of attack or an auditor visit; expect an OMG crisis situation soon.

Try managed service: Managing a SIEM is 99% perspiration and 1% inspiration; offload the perspiration to a team that does this for a living; they can do it with discipline (their livelihood depends on it) and probably cheaper too (passing the savings on to you); you deal with the inspiration.
Upside: Security usually improves; compliance is not a nightmare; frees up senior staff to do other pressing/interesting tasks; cost savings.
Downside: Some loss of control.

Interested? We call it SIEM Simplified™.

Big Data Gotchas

Jill Dyché, writing in the Harvard Business Review, suggests that “the question on many business leaders’ minds is this: Does the potential for accelerating existing business processes warrant the enormous cost associated with technology adoption, project ramp-up, and staff hiring and training that accompany Big Data efforts?”

A typical log management implementation, even in a medium enterprise, is usually a big data endeavor. Surprised? You should not be. A relatively small network of a dozen log sources easily generates a million log messages per day, with volumes of 50-100 million per day being commonplace. With compliance and security guidelines requiring that logs be retained for 12 months or more, pretty soon you have big data.
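
A quick back-of-envelope check supports the claim, assuming roughly 500 bytes per log message (an assumption; your sources will vary):

msgs_per_day = 100_000_000          # the high end of the range above
bytes_per_msg = 500                 # assumed average message size
retention_days = 365                # one year of retention
total_tb = msgs_per_day * bytes_per_msg * retention_days / 1e12
print(f"{total_tb:.0f} TB per year")   # prints: 18 TB per year

Eighteen terabytes of raw, unstructured text per year, before any indexes, qualifies as big data by most definitions.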

So let’s answer the question raised in the article:

Q1: What can’t we do today that Big Data could help us do?   If you can’t define the goal of a Big Data effort, don’t pursue it.

A1: Comply with regulations like PCI-DSS, SOX 404 and HIPAA; be alerted to security problems in the enterprise; control data leakage via insecure endpoints; improve operational efficiency.

Q2: What skills, technologies, and existing data development practices do we have in place that could help kick-start a Big Data effort? If your company doesn’t have an effective data management organization in place, adoption of Big Data technology will be a huge challenge.

A2: Absent a trained and motivated user of the power tool that is the modern SIEM, an organization that acquires such technology is consigning it to shelf ware.   Recognizing this as a significant adoption challenge in our industry, we offer Monitored SIEM as a service; the best way to describe this is SIEM simplified! We do the heavy lifting so you can focus on leveraging the value.

Q3: What would a proof-of-concept look like, and what are some reasonable boundaries to ensure its quick deployment? As with many other proofs-of-concept the “don’t boil the ocean” rule applies to Big Data.

A3:  The advantage of a software-only solution like EventTracker is that an on-premises trial is easy to set up. A virtual appliance with everything you need is provided; set it up as a VMware or Hyper-V virtual machine within minutes.  Want something even faster? See it live online.

Q4: What determines whether we green light Big Data investment? Know what success looks like, and put the measures in place.

A4: Excellent point; success may mean continuous compliance;   a 75% reduction in cost of compliance; one security incident averted per quarter; delegation of log review to a junior admin.

Q5: Can we manage the changes brought by Big Data? With the regular communication of tangible results, the payoff of Big Data can be very big indeed.

A5: EventTracker includes more than 2,000 pre-built reports designed to deliver value to every interested stakeholder in the enterprise ranging from dashboards for management, to alerts for Help Desk staff, to risk prioritized incident reports for the security team, to system uptime and performance results for the operations folk and detailed cost savings reports for the CFO.

The old adage “If you fail to prepare, then prepare to fail” applies. Armed with these questions and answers, you are closer to gaining real value with Big Data.

Sun Tzu would have loved Flame

All warfare is based on deception says Sun Tzu. To quote:

“Hence, when able to attack, we must seem unable; 
When using our forces, we must seem inactive; 
When we are near, we must make the enemy believe we are far away;  
When far away, we must make him believe we are near.”

With the new era of cyberweapons, Sun Tzu’s blueprint can be followed almost exactly: a nation can attack when it seems unable to. When conducting cyber-attacks, a nation will seem inactive. When a nation is physically far away, the threat will appear very, very near.

Amidst all the controversy and mystery surrounding attacks like Stuxnet and Flame, it is becoming increasingly clear that the wars of tomorrow will most likely be fought by young kids at computer screens rather than by young kids on the battlefield with guns.

In the area of technology, what is invented for use by the military or for space eventually finds its way to the commercial arena. It is therefore only a matter of time before the techniques used by Flame or Stuxnet become part of the arsenal of the average cyber thief.

Ready for the brave new world?

Learning from JPMorgan

The single most revealing moment in the coverage of JPMorgan’s multibillion dollar debacle can be found in this take-your-breath-away passage from The Wall Street Journal: On April 30, associates who were gathered in a conference room handed Mr. Dimon summaries and analyses of the losses. But there were no details about the trades themselves. “I want to see the positions!” he barked, throwing down the papers, according to attendees. “Now! I want to see everything!”

When Mr. Dimon saw the numbers, these people say, he couldn’t breathe.

Only when he saw the actual trades — the raw data — did Mr. Dimon realize the full magnitude of his company’s situation. The horrible irony: The very detail-oriented systems (and people) Dimon had put in place had obscured rather than surfaced his bank’s horrible hedge.

This underscores the new trust versus due diligence dilemma outlined by Michael Schrage. Raw data can have enormous impact on executive perceptions that pre-chewed analytics lack.   This is not to minimize or marginalize the importance of analysis and interpretation; but nothing creates situational awareness faster than seeing with your own eyes what your experts are trying to synthesize and summarize.

There’s a reason why great chefs visit the farms and markets that source their restaurants:   the raw ingredients are critical to success — or failure.

We have spent a lot of energy in building dashboards for critical log data and recognize the value of these summaries; but while we should trust our data, we also need to do the due diligence.

Big Data – Does insight equal decision?

In information technology, big data consists of data sets that grow so large that they become awkward to work with using whatever database management tools are on-hand. For that matter, how big is big? It depends on when you need to reconsider data management options – in some cases it may be 100Gb, in others, it may be 100Tb. So, following up on our earlier post about big data and insight, there is one more important consideration:

Does insight equal decision?

The foregone conclusion from big data proponents is that each nugget of “insight” uncovered by data mining will somehow be implicitly actionable and the end user (or management) will gush with excitement and praise.

The first problem is how can you assume that “insight” is actionable? It very well may not be, so what do you do then? The next problem is how can you convince the decision maker that the evidence constitutes an imperative to act? Absent action, the “insight” remains simply a nugget of information.

Note that management typically responds to “insight” with skepticism, seeing the message bearer as yet another purveyor of information insisting that the newest method is the silver bullet, thereby adding to the workload.

Being in management myself, I find my team often comes to me with their little nuggets … some are gold, but some are chicken.  Rather than purveying insight, think about making a recommendation backed up by evidence.

Big Data, does more data mean more insight?

In information technology, big data consists of data sets that grow so large they become unwieldy to work with using available database management tools. How big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 Gigabytes, in others, as great as 100 Terabytes.

Does more data necessarily mean more insight?

The pro-argument is that larger data sets allow for greater incidences of patterns, facts, and insights. Moreover, with enough data, you can discover trends using simple counting that are otherwise undiscoverable in small data using sophisticated statistical methods.

On the other hand, while this is perfectly valid in theory, for many businesses the key barrier is not the ability to draw insights from large volumes of data; it is asking the right questions for which insight is needed.

The ability to provide answers does depend on the question being asked and the relevance of the big-data set to that question. How can one generalize to an assumption that more data will always mean more insight? It isn’t the answer that is important so much as asking the right questions.

Silly human – logs are for machines (too)

Here is an anecdote from a recent interaction with an enterprise application in the electric power industry:

1. Dave the developer logs all kinds of events. Since he is the primary consumer of the log, the format is optimized for human-readability. For example:

02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59

Apparently this makes perfect sense to Dave:   each line includes a timestamp and some text.

2. Sam from the Security team needs to determine the number of daily unique users. Dave quickly writes a parser script for the log and schedules it. He also builds a little Web interface so that Sam can query the parsed data on his own. Peace reigns.

3. A few weeks later, Sam complains that the web interface is broken. Dave takes a look at the logs, only to realize that someone else has added an extra field in each line, breaking his custom parser. He pushes the change and tells Sam that everything is okay again. Instead of writing a new feature, Dave has to go back and fill in the missing data.

4. Every 3 weeks or so, repeat Step 3 as others add logs.
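
One hedged way out of this parser-breakage cycle is to emit structured records, so that added fields extend rather than break downstream consumers. A minimal JSON-lines sketch; the field names are illustrative, not a prescription.

import json
import time

def log_event(**fields):
    """Write one self-describing log record per line."""
    record = {"ts": time.strftime("%Y-%m-%dT%H:%M:%S"), **fields}
    print(json.dumps(record))

log_event(user="USER49", module="MOD0053", error_source="MOD0052", retcode=59)

# Sam's unique-user count now survives schema growth:
#   users = {json.loads(line)["user"] for line in open("app.log")}

New fields added by other developers simply appear as new keys; Sam’s query neither knows nor cares.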

What is your maximum NPH?

In The Information Diet, Clay Johnson wrote, “The modern human animal spends upwards of 11 hours out of every 24 in a state of constant consumption. Not eating, but gorging on information … We’re all battling a storm of distractions, buffeted with notifications and tempted by tasty tidbits of information. And just as too much junk food can lead to obesity, too much junk information can lead to cluelessness.”

Audit yourself and you may be surprised to find that you get more than 10 notifications per hour; they can be disruptive to your attention. I find myself trying hard (and often failing) to ignore the smartphone as it beeps softly to indicate a new distraction. I struggle to remain focused on the person in my office as the desktop tinkles for attention.

Should you kill off notifications though? Clay argues that you should and offers tools to help.

When designing EventTracker v7, minimizing notifications was a major goal. On Christmas Day in 2008, nobody was stirring, but the “alerts” console rang up over 180 items demanding review. It was obvious these were not “alerts.” This led to the “risk” score, which dramatically reduces notifications.

We know that not all “alerts” are equal: some merit attention before going to lunch, some before the end of the day, and some by the end of the quarter, budget permitting. A very rare few require us to drop the coffee mug and attend instantly. Accordingly, a properly configured EventTracker installation will rarely “notify” you; but when you need to know, that alert will come screaming for your attention.
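
To illustrate the idea only (the actual scoring model inside EventTracker is more involved), a notification gate might weigh severity by asset value and page a human only above a threshold; the weights and threshold here are invented.

def should_notify(severity, asset_value, threshold=40):
    """Notify only when severity, weighted by asset value, crosses the bar."""
    return severity * asset_value >= threshold

alerts = [
    ("antivirus signature updated", 1, 3),
    ("admin logon to domain controller at 3am", 7, 9),
]
for msg, severity, asset_value in alerts:
    if should_notify(severity, asset_value):
        print(f"NOTIFY: {msg}")
    # everything below the bar lands in the daily report, not on your phone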

I am frequently asked about the maximum events per second that can be managed. I think I’ll begin to ask how many notifications per hour (NPH) the questioner can handle. I think Clay Johnson would approve.

Data, data everywhere but not a drop of value

The sailor in The Rime of the Ancient Mariner relates his experiences after a long sea voyage when his ship is blown off course:

“Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.”

An albatross appears and leads them out, but is shot by the Mariner and the ship winds up in unknown waters.  His shipmates blame the Mariner and force him to wear the dead albatross around his neck.

Replace water with data, boards with disk space, and drink with value and the lament would apply to the modern IT infrastructure. We are all drowning in data, but not so much in value. “Big data” are datasets that grow so large that managing them with on-hand tools is awkward. They are seen as the next frontier in innovation, competition, and productivity.

Log management is not immune to this trend. As the basic log collection problem (different sources, different protocols and different formats) has been resolved, we are now collecting ever larger datasets of logs. Many years ago we refuted the argument that log data belonged in an RDBMS, precisely because we saw the side problem of efficient data archival begin to overwhelm the true problem of extracting value from the data. As log data volumes continue to explode, that decision continues to be validated.

However, while storing raw logs in a database was not sensible, the power of relational databases for extracting patterns and value from data is well established. Recognizing this, EventVault Explorer was released in 2011. Users can extract selected datasets to their choice of external RDBMS (a datamart) for fuzzy searching, pivot tables, etc.  As was noted here, the key to managing big data is to personalize the results for maximum impact.
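
The datamart pattern is easy to sketch, though EventVault Explorer’s own mechanics are not public; what follows is the general idea only. Copy a filtered slice of the archive into a relational store, where SQL makes pivoting and ad-hoc queries cheap.

import sqlite3

conn = sqlite3.connect("datamart.db")
conn.execute("CREATE TABLE IF NOT EXISTS logs (ts TEXT, host TEXT, msg TEXT)")

def load_slice(rows):
    """rows: iterable of (ts, host, msg) tuples filtered from the archive."""
    conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", rows)
    conn.commit()

load_slice([("2012-04-02 01:34:03", "web01", "ERROR RETURN FROM MOD0052")])
for row in conn.execute(
        "SELECT host, COUNT(*) FROM logs WHERE msg LIKE '%ERROR%' GROUP BY host"):
    print(row)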

As you look under the covers of SIEM technology, pay attention to that albatross called log archives. It can lead you out of trouble, but you don’t want it around your neck.

Top 5 Compliance Mistakes

5.   Overdoing compensating controls

When a legitimate technological or documented business constraint prevents you from satisfying a requirement, a compensating control can be the answer after a risk analysis is performed. Compensating controls are not specifically defined inside PCI, but are instead defined by you (as a self-certifying merchant) or your QSA. It is specifically not an excuse to push PCI Compliance initiatives through completion at a minimal cost to your company. In reality, most compensating controls are actually harder to do and cost more money in the long run than actually fixing or addressing the original issue or vulnerability. See this article for a clear picture on the topic.

4. Separation of duty

Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost/effort required.   Both PCI DSS Requirements 3.4.1 and 3.5 mention separation of duties as an obligation for organizations, and yet many still do not do it right, usually because they lack staff.

3. Principle of Least privilege

PCI Requirement 2.2.3 says organizations should “configure system security parameters to prevent misuse.” This requires organizations to drill down into user roles to ensure they’re following the rule of least privilege wherever PCI regulations apply.  This is easier said than done; it is more often “easier” to grant all possible privileges than to determine and assign just the correct set. Convenience is the enemy of security.

2. Fixating on excluding systems from scope

When you make the process of getting things out of scope a higher priority than addressing real risk, you get in trouble. Risk mitigation must come first and foremost. In far too many cases, out-of-scope becomes out-of-mind. This may make your CFO happy, but a hacker will get past weak security and not care if the system is in scope or not.

And drum roll …

1. Ignoring virtualization

Many organizations have embraced virtualization wholeheartedly, given its efficiency gains. In some cases, virtualized machines are now off-premises and co-located at a service provider such as Rackspace; this is a trend at federal government facilities. However, “off-premises” does not mean “off-your-list.” Regardless of the location of the cardholder data, such systems are within scope, as is the hypervisor. In fact, PCI DSS 2.0 says that if cardholder data is present on even one VM, then the entire VM infrastructure is “in scope.”

The 5 Most Annoying Terms of 2011

Since every cause needs “Awareness,” here are my picks for management speak to camouflage the bloody obvious:

  5. Events per second

Log Management vendors are still trying to “differentiate” with this tired and meaningless metric as we pointed out in The EPS Myth.

  4. Thought leadership

Mitch McCrimmon describes it best.

  3. Cloud

Now here is a term that means all things to all people.

  2. Does that make sense?

The new “to be honest.” Jerry Weissman discusses it in the Harvard Business Review.

  1. Nerd

During the recent SOPA debate, so many self-described “country boys” wanted to get the “nerds” to explain the issue to them; as Jon Stewart pointed out, the word they were looking for was “expert.”

SIEM and the Appalachian Trail

The Appalachian Trail is a marked hiking trail in the eastern United States extending between Georgia and Maine. It is approximately 2,181 miles long and takes about six months to complete. It is not a particularly difficult journey from start to finish; yet even so, completing the trail requires more from the hiker than just enthusiasm, endurance and will.

Likewise, a SIEM implementation can take from one to six months to complete (depending on the level of customization) and, like the Trail, appears deceptively simple. It, too, can be filled with challenges that reduce even the most experienced IT manager to despair, and there is no shortage of implementations that have been abandoned or left incomplete. As with the Trail, SIEM implementation requires thoughtful consideration.

1) The Reasons Why

It doesn’t take too many nights scurrying to find shelter in a lightning storm, or days walking in adverse conditions before a hiker wonders: Why am I doing this again? Similarly, when implementing any IT project, SIEM included, it doesn’t take too many inter-departmental meetings, technical gotchas, or budget discussions before this same question presents itself: Why are we doing this again?

All too often, we don't have a compelling answer, or we have forgotten it. No one undertakes a half-year backpacking trip through the woods without a really good reason. In the same way, one embarks on a SIEM project with specific goals, such as regulatory compliance, IT security improvement or controlling operating costs. Define the answer to this question before you begin the project and refer to it when the implementation appears to be derailing. This is the compass that should guide your way. Make adjustments as necessary.

2) The Virginia Blues

On the Appalachian Trail, the “Virginia Blues” strike about four to eight weeks into the journey, within the state lines of Virginia: daily trials that can include anything from broken bones to homesickness. Getting through requires not just perseverance but also an ability to adapt.

For a SIEM project, staff turnover, false positives, misconfigurations or unplanned explosions of data can derail the effort. But pushing harder in the face of distress is a recipe for failure. Step back, remind yourself of the reasons why this project is underway, and look at the problems from a fresh perspective. Can you be flexible? Can you find new avenues around the problems?

3) A Fresh Perspective

In the beginning, every day is chock full of excitement; every summit view or wild animal encounter is a thrill. But life in the woods becomes the routine, and exhilaration eventually fades into frustration.

In much the same way, after the initial thrill of installation and its challenges, the SIEM project devolves into a routine of discipline and daily observation across the infrastructure for signs of something amiss.

This is where boredom can set in, but the best defense against the lull that comes with the end of the implementation is to expect it. The journey is going to end, and completing it does not occur when the project is implemented. Rather, when the installation is done, the real journey and the hard work begin.

Humans in the loop – failsafe or liability?

Among InfoSec and IT staff, there is a lot of behind-the-scenes hand-wringing that users are the weakest link. But are InfoSec staff that much stronger?

While automation does have a place, Dan Geer of CIA-backed venture fund In-Q-Tel properly notes that “…humans can build structures more complex” than they can operate, and asks, “…Are humans in the loop a failsafe or a liability? Is fully automated security to be desired or to be feared?”

We've considered this question before at Prism, when “automated remediation” was being heavily touted as a solution for mid-market enterprises, where IT staff is not abundant. We found that human intervention is not just a fail-safe but a necessity; the interdependencies, even in medium-sized networks, are far too complex to automate entirely. We introduced the feature a couple of years back and, in reviewing the usage, concluded that such “automated remediation” does have a role to play in the modern enterprise. Use cases include changes to group membership in Active Directory, unrecognized processes, account creation where the naming convention is not followed, and honeypot access. In other words, when the condition can be well defined and narrowly focused, humans in the loop will only slow things down. However, for every such “rule” there are hundreds of conditions that will be obvious to a human but missed by the narrow rule.
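
As a sketch of that “narrow rule” idea (the event types and remediation stubs here are hypothetical, not EventTracker's actual API): automate only the unambiguous conditions, and queue everything else for a human.

    # Hypothetical narrowly-scoped auto-remediation: act only when the
    # condition is well defined; everything else goes to a human analyst.
    human_review_queue = []

    def disable_account(user):
        print(f"disabling account {user}")      # stub: call your directory here

    AUTO_REMEDIATE = {
        "honeypot_access": lambda e: disable_account(e["user"]),
        "unrecognized_process": lambda e: print(f"killing {e['proc']}"),
    }

    def handle(event):
        action = AUTO_REMEDIATE.get(event["type"])
        if action:
            action(event)                        # unambiguous: automate
        else:
            human_review_queue.append(event)     # ambiguous: humans in the loop

    handle({"type": "honeypot_access", "user": "jdoe"})
    handle({"type": "odd_traffic_spike", "host": "web01"})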

So are humans in the loop a failsafe or a liability? It depends on the scenario.

What’s your thought?

Will the cloud take my job?

Nearly every analyst has made aggressive predictions that outsourcing to the cloud will continue to grow rapidly. It’s clear that servers and applications are migrating to the cloud as fast as possible, but according to an article in The Economist, the tradeoff is efficiency vs. sovereignty.   The White House announced that the federal government will shut down 178 duplicative data centers in 2012, adding to the 195 that will be closed by the end of this year.

Businesses need motivation and capability to recognize business problems, solutions that can improve the enterprise, and ways to implement those solutions.   There is clearly a role for outsourced solutions and it is one that enterprises are embracing.

For an engineer, however, the response to outsourcing can be one of frustration, and of concern that short-sighted management decisions focus on short-term gains at the risk of long-term security. But there is also an argument why in-sourcing isn't necessarily the better business decision: a recent Gartner report noted that IT departments often center too much of their attention on technology and not enough on business needs, resulting in a “veritable Tower of Babel, where the language between the IT organization and the business has been confounded, and they no longer understand each other.”

Despite increased migration to cloud services, there does not appear to be an immediate impact on InfoSec-related jobs. Among the 12 computer-related job classifications tracked by the Department of Labor's Bureau of Labor Statistics (BLS), information security analysts, along with computer and information research scientists, reported no unemployment during the first two quarters of 2011.

John Reed, executive director at IT staffing firm Robert Half Technology, attributes the high growth to the increasing organizational awareness of the need for security and hands-on IT security teams to ensure appropriate security controls are in place to safeguard digital files and vital electronic infrastructure, as well as respond to computer security breaches and viruses.

Simply put: the facility of using cloud services does not replace the skills needed to analyze and interpret the data to protect the enterprise. Outsourcing to a cloud may provide immediate efficiencies, but it is the IT security staff who deliver the business value that ensures long-term security.

Threatscape 2012 – Prevent, Detect, Correct

The past year has been a hair-raising series of IT security breakdowns and headline events, reaching as high as RSA itself falling victim to a phishing attack. And as the sun set on 2011, the hacker group Anonymous remained busy, providing a sobering reminder that IT security can never rest.

In the RSA case, it turned out that attackers sent two different targeted phishing e-mails to four workers at its parent company, EMC. The e-mails contained a malicious attachment, identified in the subject line as “2011 Recruitment plan.xls,” which was the point of attack.

Back to Basics:

Prevent:

Use administrative controls such as security awareness training, and technical controls such as firewalls, anti-virus and IPS, to stop attacks from penetrating the network. Most industry and government experts agree that security configuration management is probably the best way to maintain the strongest security configuration allowable, along with automated patch management and up-to-date anti-virus software.

Detect:

Employ a blend of technical controls such as anti-virus, IPS, intrusion detection systems (IDS), system monitoring, file integrity monitoring, change control, log management and incident alerting to track how and when system intrusions are being attempted.
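
To make one of those controls concrete, file integrity monitoring at its core is just hashing watched files and alerting on change. A minimal sketch (the watched paths are illustrative):

    import hashlib
    import json
    import os

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # illustrative paths
    BASELINE = "fim_baseline.json"

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def check():
        current = {p: digest(p) for p in WATCHED if os.path.exists(p)}
        if not os.path.exists(BASELINE):
            with open(BASELINE, "w") as f:
                json.dump(current, f)               # first run: record baseline
            return
        with open(BASELINE) as f:
            baseline = json.load(f)
        for path, h in current.items():
            if baseline.get(path) not in (None, h):
                print(f"ALERT: {path} changed since baseline")

    check()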

Correct:

Apply operating system upgrades, backup and restore of data, vulnerability mitigation and other controls to make sure systems are configured correctly and can prevent the irretrievable loss of data.

Echo Chamber

In the InfoSec industry, there is an abundance of familiar flaws and copycat theories; we repeat ourselves and recommend the same approaches. But what has really changed in the last year?

The emergence of hacking groups like Anonymous, LulzSec, and TeaMp0isoN.

In 2011, these groups brought the fight to corporate America, crippling firms both small (HBGary Federal) and large (Stratfor, Sony). As the year drew to a close, these groups shifted from prank-oriented hacks for laughs (or “lulz”) to aligning themselves with political movements like Occupy Wall Street, and to hacking firms like Stratfor, an Austin, Texas-based security “think tank” that releases a daily newsletter concerning security and intelligence matters all over the world. After HBGary Federal CEO Aaron Barr publicly bragged that he was going to identify some members of the group during a talk in San Francisco during RSA Conference week, Anonymous members responded by dumping a huge cache of his personal emails and those of other HBGary Federal executives online, eventually leading to Barr's resignation. Anonymous and LulzSec then spent several months targeting various retailers, public figures and members of the security community. Their Operation AntiSec aimed to expose alleged hypocrisies and sins by members of the security community. They targeted a number of federal contractors, including IRC Federal and Booz Allen Hamilton, exposing personal data in the process. Congress got involved in July when Sen. John McCain urged Senate leaders to form a select committee to address the threat posed by Anonymous/LulzSec/Wikileaks.

The attack on RSA SecurId was another watershed event. The first public news of the compromise came from RSA itself, when it published a blog post explaining that an attacker had been able to gain access to the company’s network through a “sophisticated” attack. Officials said the attacker had compromised some resources related to the RSA SecurID product, which set off major alarm bells throughout the industry. SecurID is used for two-factor authentication by a huge number of large enterprises, including banks, financial services companies, government agencies and defense contractors. Within months of the RSA attack, there were attacks on SecurID customers, including Lockheed Martin, and the current working theory espoused by experts is that the still-unidentified attackers were interested in LM and other RSA customers all along and, having run into trouble compromising them directly, went after the SecurID technology to loop back to the customers.

The specifics of the attack were depressingly mundane (targeted phishing email with a malicious Excel file attached).

Then too, several certificate authorities were compromised throughout the year. Comodo was the first to fall when it was revealed in March that an attacker (apparently an Iranian national) had been able to compromise the CA infrastructure and issue himself a pile of valid certificates for domains belonging to Google, Yahoo, Skype and others. The attacker bragged about his accomplishments in Pastebin posts and later posted evidence of his forged certificate for Mozilla. Later in the year, the same person targeted the Dutch CA DigiNotar. The details of the attack were slightly different, but the end result was the same: he was able to issue himself several hundred valid certificates and this time went after domains owned by, among others, the Central Intelligence Agency. In the end, all of the major browser manufacturers had to revoke trust in the DigiNotar root CA.   The damage to the company was so bad that the Dutch government eventually took it over and later declared it bankrupt. Staggering, isn’t it? A lone attacker not only forced Microsoft, Apple and Mozilla to yank a root CA from their list of trusted roots, but he was also responsible for forcing a certificate authority out of business.

What has changed in our industry? Nothing, really. It is not a question of “if” but “when” the attack will arrive on your assets.

Plus ça change, plus c'est la même chose, I suppose.

New Bill Promises to Penalize Companies for Security Breaches

On September 22, the Senate Judiciary Committee approved and passed Sen. Richard Blumenthal's (D-Conn.) bill, the “Personal Data Protection and Breach Accountability Act of 2011,” sending it to the Senate floor. The bill would penalize companies for online data breaches and was introduced on the heels of several high-profile security breaches and hacks that affected millions of consumers. These included the Sony breach, which compromised the data of 77 million customers, and the DigiNotar breach, which resulted in 300,000 Google Gmail account holders having their mail hacked and read. The measure addresses companies that hold the personal information of more than 10,000 customers, requiring them to put privacy and security programs in place to protect the information and to respond quickly in the event of a security failure.

The bill proposes that companies be fined $5,000 per day per violation, with a maximum of $20 million per infringement. Additionally, companies that fail to comply with the data protection law (if it is passed) may be required to pay for credit monitoring services and be subject to civil litigation by the affected consumers. The bill also aims to increase criminal penalties for identity theft, as well as for crimes including installing a data collection program on someone's computer and concealing any security breach in which personal data is compromised.

Key provisions in the bill include a process to help companies establish appropriate minimum security standards, notification requirements, information sharing after a breach, and company accountability.

While the intent of the bill is admirable, the problem is not a lack of laws to deter breaches, but the insufficient enforcement of these laws. Many of the requirements espoused in this new legislation already exist in many different forms.

SANS is the largest source for information security training and security certification, and its position is that we don't need an extension to the Federal Information Security Management Act of 2002 (FISMA) or other compliance regulations, which have essentially encouraged a checkbox mentality: “I checked it off, so we are good.” This is the wrong approach to security, but companies get rewarded for checking off criteria lists. Compliance regulations do not drive improvement. Organizations need to focus instead on the actual costs that can be incurred by not being compliant:

  • Loss of consumer confidence: Consumers will think twice before they hand over their personal data to an organization perceived to be careless with that information, which can lead to a direct hit to sales.
  • Increased costs of doing business, as with PCI-DSS: PCI-DSS is one example where enforcement is prevalent and the penalties can be stringent. Merchants who do not maintain compliance are subject to higher rates charged by Visa, MasterCard, etc.
  • Negative press: One need only look at the recent data breaches to see the continuing negative impact on a compromised company's brand and reputation. In one case (DigiNotar), the company folded.

The gap does not exist in the laws but rather in the enforcement of those laws. Until there is enforcement, any legislation or requirement is a hollow threat.

Top 10 Pitfalls of Implementing IT Projects

It's a dirty secret: many IT projects fail, maybe even as many as 30% of all IT projects.

Amazing, given the time, money and mojo spent on them, and the seriously smart people working in IT.

As a vendor, it is painful to see this. We see it from time to time (often helplessly, from the sidelines), we think about it a lot, and we would like to see it eliminated along with malaria, cancer and other nasties.

They fail for a lot of reasons, many of them unrelated to software.

At EventTracker we have helped save a number of nearly-failed implementations, and we have noticed some consistency in why they fail.

From the home office in Columbia, MD, here are the top 10 reasons IT projects fail:

10. “It has to be perfect”

This is the “if you don't do it right, don't do it at all” belief system. With this viewpoint, the project lead believes that the solution must perfectly fit existing or new business processes. The result is a massive, overly complicated implementation that is extremely expensive. By the time it is all done, the business environment has changed and an enormous investment is wasted.

Lesson: Value does not mean perfection. Make sure the solution delivers value early and often, and let perfection happen as it may.

9. Doesn’t integrate with other systems

In almost every IT shop, “seamless integration with everything” is the mantra. Vendors tout it, management believes it, and users demand it. To be all things to all people, an IT project cannot exist in isolation; integration has become a key component of most IT projects.

Lesson: Examine your needs for integration before you start the project. Find out if there are pre-built tools to accomplish this, and plan accordingly if there aren't.

8. No one is in charge, everyone is in charge

This is the classic “committee” problem. The CIO or IT manager decides the company needs an IT solution, so they assign the task of getting it done to a group. No one is accountable; no one is in charge. So they deliberate and discuss forever. Nothing gets done, and when something finally is done, no one makes sure it gets driven into the organization. Failure is imminent.

Lesson: Make sure someone is accountable in the organization for success. If you are using a contractor, give that contractor enough power to make it happen.

7. The person who championed the IT solution quits, goes on vacation, or loses interest

This is a tough problem to foresee because employees don’t usually broadcast their departure or disinterest before bailing. The bottom line is that if the project lead leaves, the project will suffer. It might kill the project if no one else is up to speed. It’s a risk that should be taken seriously.

Lesson: Make sure that more than just one person is involved, and keep an interim project manager shadowing the lead and up to date.

6. Drive-by management

IT projects are often as much about people and processes as they are about technology. If the project doesn't have consistent management support, it will fail. After all, if no one knows how or why to use the solution, no one will.

Lesson: Make sure you and your team have allocated time to define, test, and use your new solution as it is rolled out.

5. No one thought it through

One day someone realizes, “hey, we need a good solution to address the compliance regulations and these security gaps.” The next day someone starts looking at packages, and a month later you buy one. Then you realize that there are a lot of things this solution affects, including core systems, routers, applications and operations processes. But you are way too far down the road on a package and have spent too much money to switch to something else. So you keep investing until you realize you are dumping money down a hole. It's a bad place to be.

Lesson: Make sure you think it all through before you buy. Get support. Get input. Then take the plunge. You’ll be glad you did.

4. Requirements are not defined

In this all-too-common example, halfway through a complex project, someone says “we actually want to rework our processes to fit X.” The project team looks at what they have done, realizes it won't work, and completely redesigns the system. It takes three months. The project goes over budget. The key stakeholder says “hey, this project is expensive, and we've seen nothing of value.” The budget vanishes. The project ends.

Lesson: Make sure you know what you want before you start building it. If you don’t know, build the pieces you do, then build the rest later. Don’t build what you don’t understand.

3. Processes are not defined

This relates to #4 above. Sometimes requirements are defined, but they don't match good processes, because those processes don't exist. Or no one follows them. Or they are outdated. Or they are not well understood. The point is that the solution is computer software: it does exactly what you tell it, the same way every time, and it is expensive to change. Sloppy processes cannot be faithfully rendered in software, making the solution more of a hindrance than a help.

Lesson: Only implement and automate processes that are well understood and followed. If they are not well understood, implement them in a minimal way and do not automate until they are well understood and followed.

2. People don’t buy in

Any solution with no users is a very lonely piece of software. It is also a very expensive use of 500 MB on your server. Most IT projects fail because they just aren't used by anyone; they become a giant database of old information and spotty data. That's a failure.

Lesson: Focus on end user adoption. Buy training. Talk about the value that it brings your customers, your employees, and your shareholders. Make usage a part of your employee review process. Incentivize usage. Make it make sense to use it.

1. Key value is not defined

This is by far the most prevalent problem in implementing IT solutions: Businesses don’t take time to define what they want out of their implementation, so it doesn’t do what they want. This goes further than just defining requirements. It’s about defining what value the new software will deliver for the business. By focusing on the nuts and bolts, the business doesn’t figure out what they want from the system as a whole.

Lesson: Instead of starting with “hey, I need something to accomplish X,” the organization should be asking “how can this software bring value to our security posture, to our internal costs, to our compliance requirements?”

This list is not exhaustive; there are many more ways to kill your implementation. However, if your organization is aware of the pitfalls listed above, you have a very high chance of success.

A.N. Ananth

Five Reasons for Log Apathy – and the Antidote

How many times have you heard that people just don't care about logs? That IT guys are selfish, stupid or lazy? That they would rather play with new toys than do serious work?

I argue that IT guys are amazing and smart, and do care about the systems they curate, but native tools make log management feel like running into a brick wall; they encourage disengagement.

Here are five reasons for this perception and what can be done about them.

#1 Obscure descriptions: Ever see a raw log? A Cisco intrusion alert, a Windows failed object access attempt, or a Solaris BSM record for mounting a volume? Blech… it's a description even the author would find hard to love. These are not written to be easy to understand; their purpose is either debugging by the developer or satisfying certification requirements. This is not apathy, it's intentional exclusion.

To make this relevant, you need a readable description which highlights the elements of value, enriches the information (e.g., by looking up an IP address or event ID), and presents events not just in time sequence but in priority order of risk.
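
A minimal sketch of that enrich-and-rank idea (the event-ID descriptions and risk scores below are illustrative, not a product mapping):

    # Translate cryptic event IDs into readable text, add context, and
    # present events in priority order of risk rather than time order.
    DESCRIPTIONS = {4625: "Failed logon", 4720: "User account created"}
    RISK = {4625: 3, 4720: 7}            # illustrative risk scores

    raw_events = [
        {"ts": "09:01", "event_id": 4625, "src_ip": "10.0.0.5"},
        {"ts": "09:02", "event_id": 4720, "src_ip": "10.0.0.9"},
    ]

    def enrich(e):
        e["description"] = DESCRIPTIONS.get(e["event_id"], "Unknown event")
        e["risk"] = RISK.get(e["event_id"], 1)
        return e

    ranked = sorted(map(enrich, raw_events), key=lambda e: e["risk"], reverse=True)
    for e in ranked:
        print(f"[risk {e['risk']}] {e['ts']} {e['description']} from {e['src_ip']}")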

#2 Lack of access: What easier way to spur disengagement than by hiding the logs away in an obscure part of the file system, out of sight to any but the most determined? If they cannot see the logs, they won't care about them.

The antidote is to centralize logging and put up an easy-to-understand display which presents relevant information, preferably risk-ordered.

#3 Unsexiness: All the security stories are about Wikileaks and credit card theft. Log review is considered dull and boring; it's a rare occurrence for it to make the plot line of Hawaii Five-O.

Compare it to working out at the gym: it can be boring, and there are ten reasons why other things are more “fun,” but it's good for you and pays handsomely in the long run.

#4 Unsung Heroes: Who is the Big Man on your Campus? Odds are, it’s the guys who make money for the enterprise (think sales guys or CEOs).

Rarely is it the folks who keep the railroad running or, God forbid, reduce cost or prevent incidents.

However, they are the wind beneath the wings of the enterprise. The organization that recognizes and values the people who show up for work every day and do their job without fuss or drama is much more likely to succeed. Heroes are the ones who make a voluntary effort over a long period of time to accomplish serious goals, not chosen ones with marks on their foreheads, destined from birth to save the day.

#5 Forced compliance: As long as management looks at regulatory compliance as unwarranted interference, it will be resented, and IT is forced into a checkbox mentality that benefits nobody.

It's the old question: what comes first, compliance (the chicken) or security (the egg)? We see compliance as a result of secure practices. By making it easy to crunch the data and present meaningful scores and alerts, there is less need for force.

I'll say it again: I know many IT guys and gals who are amazing, smart and care deeply about the systems they manage. To combat log apathy, make the logs easier to deal with.

Tip of the hat to Dave Meslin, whose recent TEDx talk in Toronto spurred this blog entry.

A.N. Ananth

Personalization wins the day

Despite tough times for the corporate world in the past year, spending on IT security was a bright spot in an otherwise gloomy picture.

However, if you've tried to convince a CFO to sign off on tools and software, you know just how difficult this can be. In fact, the most common way to get approval is to tie the request to an unrelenting compliance mandate. Sadly, a security incident can also help focus attention and trigger the approval of budget.

Vendors have tried hard to showcase their value by appealing to the preventive nature of their products. ROI calculations are usually provided to demonstrate quick payback, but these are often dismissed by the CFO as self-serving. Recognizing the difficulty of measuring ROI, an alternate model called ROSI has been proposed, but it has met with limited success.

So what is an effective way to educate and persuade the gnomes? Try an approach from a parallel field: the presentation of medical data. Your medical chart is hard to access, impossible to read, and full of information that could make you healthier if you just knew how to use it; pretty much like security information inside the enterprise. If you have seen lab results, you know that even motivated persons find them hard to decipher and act on, much less the disinclined.

In a recent talk at TED, Thomas Goetz, the executive editor of Wired magazine, addressed this issue and proposed some simple ideas to make this data meaningful and actionable: the use of color, graphics and, most important, personalization of the information to drive action. We know from experience that posting the speed limit is less effective at getting motorists to comply than a radar sign which posts the limit framed by “Your speed is __”. It's all about personalization.

To make security information meaningful to the CFO, a similar approach can be much more effective than bland “best practice” prescriptions or questionable ROI numbers. Gather data from your enterprise and present it with color and graphs tailored to the “patient”.

Personalize your presentation; get a more patient ear and much less resistance to your budget request.

A. N. Ananth

Best Practice vs. FUD

Have you observed how “best practice” recommendations are widely known but not as widely followed? It seems more the case in IT security, but it is observed in every other sphere as well. For example, dentists repeatedly recommend brushing and flossing after each meal as best practice, but how many follow this advice? And then there is the clearly posted speed limit on the road; more often than not, motorists are speeding.

Now, the downside to non-compliance is well known and for the most part well accepted; no real argument there. In the dental example, the consequences range from bad teeth and breath to health issues and the resulting expense. In the speeding example, there is potential physical harm and of course monetary fines. However, it would appear that neither the fear of bad outcomes nor monetary fines spur widespread compliance. Indeed, one observes that the persons who do comply appear to do so because they wish to; the fear and fine factors don't play a major role for them.

In a recent experiment, people visiting the dentist were divided into two groups. Before the start, each patient was asked to indicate whether they classified themselves as someone who “generally listens to the doctor's advice.” After the checkup, people from one group were given the advice to brush and floss regularly, followed by a “fear” message on the consequences of non-compliance: bad teeth, social ostracism, the high cost of dental procedures and so on. People from the other group got the same checkup and advice but were given a “positive” message on the benefits of compliance: a nice smile, social popularity, lower cost. A follow-up was conducted to determine which of the two approaches was more effective in getting patients to comply.

Those of us in IT security battling for budget from unresponsive upper management have been conditioned to think that the “fear” message would be more effective… but, surprise, neither approach was more effective than the other in getting patients to comply with “best practice.” Instead, those who classified themselves as “generally listen to the doctor's advice” were the ones who complied. The rest were equally impervious to either the negative or the positive consequences, while not disputing them.

You could also point to the great reduction in smoking incidence, but this best practice has required more than three decades of education to achieve the trend, and the habit still can't be stamped out.

Lesson for IT security: education takes time, and behavior modification takes even more.

Subtraction, Multiplication, Division and Task Unification through SIEM and Log Management

When we originally conceived the idea of a SIEM and log management solution for IT managers many years ago, it was because of the problems they faced dealing with high volumes of cryptic audit logs from multiple sources. Searching, categorizing, analyzing, performing forensics and remediating the system security and operational problems evidenced in disparate audit logs were time-consuming, tedious, inconsistent and unrewarding tasks. We wanted to provide technology that would make problem detection, understanding and therefore remediation faster and easier.

A recent article in Slate caught my eye; it was all about infomercials, that staple of late-night TV, and a pitch-a-thon conducted in Washington, D.C., for new ideas. The question is: just how would you know a “successful” idea if you heard it described?

By now, SIEM has “crossed the chasm”; indeed, the Gartner MQ puts it well into mainstream adoption. But in the early days, there was some question as to whether this was a real problem or whether, as is too often the case, SIEM and log management was a solution in search of a problem.

Back to the question — how does one determine the viability of an invention before it is released into the market?  Jacob Goldenberg, a professor of marketing at Hebrew University in Jerusalem and a visiting professor at Columbia University, has coded a kind of DNA for successful inventions. After studying a year’s worth of new product launches, Goldenberg developed a classification system to predict the potential success of a new product. He found the same patterns embedded in every watershed invention.

The first is subtraction—the removal of part of a previous invention.

For example, an ATM is a successful invention because it subtracts the bank teller.

Multiplication is the second pattern, and it describes an invention with a component copied to serve some alternate purpose.  Example: the digital camera’s additional flash to prevent “red-eye.”

A TV remote exemplifies the third pattern: division. It’s a product that has been physically divided, or separated, from the original; the remote was “divided” off of the TV.

The fourth pattern, task unification, involves saddling a product with an additional job unrelated to its original function. The iPhone is the quintessential task unifier.

SIEM and log management solutions subtract (liberate) embedded logs and log management functionality from source systems.

SIEM and log management solutions multiply (via aggregation) the problems that can be detected: correlation surfaces issues that would have gone unnoticed otherwise.

EventTracker also meets the last two criteria. Arguably, decent tools for managing logs ought to have been included by OS and platform vendors (Unix, Linux, Windows and Cisco all have very rudimentary tools for this, if anything), so one can say EventTracker provides something needed for operations but not included in the base product: division, like the TV remote.

With the myriad features now available, such as configuration assessment, change audit, netflow monitoring and system status, the task unification criterion is also satisfied; you can now address many security and operational requirements that are not strictly “log” related.

When President Obama praised innovation as a critical element in the recovery in his State of the Union, he may not have had “As Seen on TV” in mind, but does SIEM fit the bill?

What’s the message supposed to be?  That SIEM and log management solutions are (now?) a good invention? SIEM has crossed the chasm!

SIEM meets Hawaii Five-O

In 2010, CBS rebooted the classic series Hawaii Five-O. It features a fictional state police unit run by Detective Steve McGarrett and named in honor of Hawaii’s status as the 50th state. The action centers on a special task force empowered by Hawaii’s governor to investigate serious crime.

The tech guru on the show is Detective Chin Ho Kelly (played by Daniel Dae Kim), who is shown to be adept at various forensic techniques, including… wait for it… SIEM (of all things).

In Season 1, Episode 15 (Kai e’ e) the island’s leading tsunami expert is kidnapped on the same day that ocean reports indicate that a huge tsunami is headed to Hawaii. However, Five-0 soon suspects that the report is a hoax and is related to the kidnapping.

During the investigation, Chin Ho uncovers two failed logins with the kidnapped expert's username, each with a numeric password, followed by a successful login. This seems odd because the correct password is all alphabetical and totally unrelated to the numbers. It turns out the kidnapped expert was trying to send a message to the cops, knowing the failed logins would get scrutiny. The clue is incomplete, though, because the failed logins did not capture the originating IP address and so could not be readily geolocated.
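
The plot device is actually a classic detection: a run of failed logons followed by a success for the same account. A minimal sketch (the field names are assumed, and events are assumed sorted by time):

    def flag_fail_then_success(events, threshold=2):
        # Flag accounts with `threshold` or more failed logons
        # immediately followed by a successful one.
        fails = {}
        for e in events:
            user = e["user"]
            if e["outcome"] == "fail":
                fails[user] = fails.get(user, 0) + 1
            else:
                if fails.get(user, 0) >= threshold:
                    print(f"scrutinize: {fails[user]} failures then success for {user}")
                fails[user] = 0

    flag_fail_then_success([
        {"user": "expert", "outcome": "fail"},
        {"user": "expert", "outcome": "fail"},
        {"user": "expert", "outcome": "success"},
    ])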

It's great that SIEM is now firmly entrenched in the mainstream… it bodes well for our industry and for IT security.

When the bad guys attack your assets, use EventTracker to “book ‘em Danno”.

– A.N. Ananth

5 Myths about SIEM and Log Management

In the spirit of the Washington Post's regular column “5 Myths,” here is a “challenge everything you think you know” look at SIEM and log management.

Driven by compliance regulation and the unending stream of security issues, the IT community has, over the past few years, accepted SIEM and log management as must-have technology for the data center. The analyst community lumps a number of vendors together as SIEM, and marketing departments are always in overdrive to claim any and all possible benefits or budget. Consequently, some “truths” are bandied about. This misinformation affects the decision-making process, so let's look at the myths.

1. Price is everything…all SIEM products are roughly equal in terms of features/functions. 

An August 2010 article in SC Magazine points out that “At first blush, these (SIEM solutions) looked like 11 cats in a bag,” quickly followed by “But a closer look shows interesting differences in focus.” Nice save, but the first thought was that the products were roughly equal, and for many that was the key take-away. Likewise, because so many are influenced by the Gartner Magic Quadrant, the picture is taken to mean everything, separated from the detailed commentary, even though that commentary states quite explicitly to look closely at features.

Even better, look at where each vendor started. Very different places, it turns out; each then added the features and functionality to meet market (or marketing) needs. For example, NetForensics preaches that SIEM is really correlation; LogRhythm believes that focusing on your logs is the key to security; Tenable thinks vulnerability status is the key; Q1 Labs offers network flow monitoring as the critical element; eIQ's origins are as a firewall log analyzer. So, while each solution may claim “the same features,” under the hood each started in a certain place and packed additional features and functionality around its core. Each continues to treat that core as its differentiator, adding functionality as the market demands.

Also, some SIEM vendors are software-based, while others are appliance-based, which in itself differentiates the players in the market.

All the same? Hardly.

2. Appliances are a better solution.
Can you spell groupthink? An appliance is simply one way to deliver the technology, neither better nor worse as a technical approach; perhaps easier for resellers to carry.

When does a software-based solution win?

– Sparing. To protect your valuable IT infrastructure, you will need to calculate a 1-to-N relationship of live appliances to back-ups. If your appliance breaks down and you don't have a spare, you have to ship the appliance and wait for a replacement. With software, if your hardware breaks down, you can simply install the software on existing capacity in your infrastructure and be back up and running in minutes rather than days.

– Scalability. With an appliance solution, your SIEM has a floor and a ceiling. You need at least one device to get started, and it has a maximum capacity before you have to add another appliance at a high price. With a software solution, you can scale incrementally, one IT infrastructure device at a time.

– Single sign-on. Software integrates easily with Active Directory or LDAP, allowing the same username/password or smartcard authentication; very attractive.

– Storage. What retention period is best for your logs? Weeks? Months? Years? With appliances, it's dictated by the disk size provided; with software, you decide, and you can use network-based storage.

So appliances must at least be easier to install? Plug in the box, provide an IP address and you are done? Not really; more than 99% of the configuration is specific to the user's environment.

3. Your log volumes don’t matter…disk space is cheap.

Sure… but as the old Washington saying goes, $10B here and $10B there and pretty soon you're talking real money.

Logs are voluminous, and a successful implementation leads to even higher log volume; terabytes add up very quickly. Compression is essential, but the ability to access network-based storage is even more important, and the ability to back up and restore archives easily and natively to nearline or offline storage is critical.
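
A quick back-of-the-envelope calculation (the rates below are purely illustrative) shows how fast the terabytes arrive:

    eps = 2000                 # events per second, illustrative
    bytes_per_event = 500      # average raw event size, illustrative
    seconds_per_day = 86400

    raw_per_day = eps * bytes_per_event * seconds_per_day    # ~86 GB/day
    raw_per_year = raw_per_day * 365                         # ~31.5 TB/year
    compressed = raw_per_year / 10                           # assuming 10:1 compression

    print(f"{raw_per_day / 1e9:.0f} GB/day raw, "
          f"{raw_per_year / 1e12:.1f} TB/year raw, "
          f"{compressed / 1e12:.1f} TB/year at 10:1 compression")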

An appliance solution, by contrast, is inherently limited by its available disk.

4. The technology is like an antivirus… just install it and forget it, and if anything is wrong, it will tell you.

Ahh, the magic bullet! Like the ad says, “Set it and forget it!” If only this were true… wishing will not make it so. There is not one single SIEM vendor that can justify saying “open the box, activate your SIEM solution in minutes, and you will be fine!” To say so, or even worse, to believe it, would be irresponsible.

If you just open the box and install it, you will only have the protection offered by the default settings. With an antivirus solution this model works, because you have all of the virus signatures to date, and the product automatically checks the virus database for updates, staying current as signatures are added. (Too bad it cannot recognize a “zero-day” attack when it happens, but that, for now, is impossible.)

With a SIEM solution, you need something you don't need with an antivirus: human interaction. You need to tell the SIEM what your organization's business rules are, define the roles and capabilities of the users, have an expert analyst team monitor it, and adapt it to ever-changing conditions. The IT infrastructure is constantly changing, and people are needed to adjust the SIEM to meet threats, business rules, and the addition or subtraction of IT components and users.
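
That “human interaction” amounts to encoding site-specific knowledge that no vendor default can ship. A sketch of the kind of organization-specific rules an analyst might maintain (every value here is hypothetical):

    # Site-specific knowledge a SIEM cannot ship as a default.
    BUSINESS_RULES = {
        "business_hours": (8, 18),                    # local office hours
        "privileged_users": {"dba_admin", "backup_svc"},
    }

    def is_suspicious(event):
        start, end = BUSINESS_RULES["business_hours"]
        after_hours = not (start <= event["hour"] < end)
        privileged = event["user"] in BUSINESS_RULES["privileged_users"]
        return after_hours and privileged             # privileged activity at 3 a.m.

    print(is_suspicious({"user": "dba_admin", "hour": 3}))    # True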

Some vendors imply that their SIEM solution is all that is needed, and that you can just plug and play. You know what the result is? Unhappy SIEM users chasing down false positives or, much worse, false negatives. All SIEM solutions require educated analysts to understand the information being provided and turn it into action. These adjustments can be simplified, but again, it takes people. If you are thinking about implementing a SIEM and forgetting about it… then fuhgeddaboutit!

5. Log Management is only meaningful if you have a compliance requirement.

Seen the recent headlines? From Stuxnet to Wikileaks to Heartland? There is a lot more to log management than merely satisfying compliance regulations. This myth persists because people are not aware of the internal and external threats that exist in this century. SIEM/log management solutions provide some very important benefits to your organization beyond meeting a compliance requirement:

– Security. SIEM/log management solutions can detect and alert you to a “zero-day” virus before the damage is done, something other components in your IT infrastructure can't do. They can also alert you to brute force attacks, malware and trojans by determining what has changed in your environment.

– Improve efficiency. Face it! There are too many devices transmitting too many logs, and the IT staff doesn't have the time to comb through them all, let alone to know whether they are performing the most essential tasks in the proper order; too often the order is defined by whoever is screaming the loudest. A SIEM/log management solution helps you learn of a potential problem sooner, automates the log analysis and prioritizes the order in which issues are addressed, improving the overall efficiency of the IT team. It is also much more efficient for performing forensic analysis to determine the cause and effect of an incident.

– Improve network performance. Are the servers not working properly? Are the applications going slowly? The answer is in the logs, and with a SIEM/log management solution you can quickly locate the problem and fix it.

– Reduce costs. Implementing a SIEM enables organizations to reduce the number of threats, both internal and external, and to reduce the operating cost per device. A SIEM can dramatically reduce the number of incidents that occur within your organization, avoiding the cost of figuring out what actually happened. Should an event occur, the time it takes to perform the forensic analysis and fix the problem can be greatly shortened, reducing the total loss per incident.

– Ananth

Portable Drives and Working Remotely in Today's IT Infrastructure

So, Wikileaks announced this week that its next release will be seven times as large as the Iraq logs. The initial release brought to the top of the global stage a very common problem that organizations of all sizes face: anyone with a USB drive or writeable CD drive can download confidential information and walk right out the door. The reverse is also true: harmful malware, trojans and viruses can be placed onto the network, as seen with the Stuxnet virus. These pesky little portable media drives are more trouble than they are worth! OK, you're right, let's not cry “the sky is falling” just yet.

But, the Wikileaks and Stuxnet virus aside, how big is this threat?

  • A 2009 study revealed that 59% of former employees stole data from their employers prior to leaving learn more
  • A new study in the UK reveals that USB sticks (23%) and other portable storage devices (19%) are the most common devices for insider theft learn more

Right now, there are two primary schools of thought to this significant problem. The first is to take an alarmist approach, and disable all drives, so that no one can steal this data, or infect the network. The other approach is to turn a blind eye, and have no controls in place.

But how does one know who is doing what, and which files are being downloaded or uploaded? The answer is in your device and application logs, of course. The first step is to define your organization’s security policy concerning USB and readable CD drives:

1. Define the capabilities for each individual user as tied to their system login

  • Servers and folders they have permission to access
  • Allow/disallow USB and writeable CD drives
  • Create a record of the serial numbers of the CD drive and USB drive

2. Monitor log activity for USB drives and writeable CD drives to determine what information may have been taken, and by whom

Obviously, this is like closing the barn door after the horse has left. You will be able to know who did what, and when… but by then it may be too late to prevent any financial loss or harm to your customers.

The ideal solution is to enforce this organization-wide policy with automation: define the abilities of each individual user, determine who has permission to use the writeable capabilities of the CD or USB drive at the workstation, and monitor and control serial numbers and information access from the server level. Without automation, combing through all of the logs to look for such an event, and being able to trace what happened, would seem almost impossible.

With a SIEM/log management solution, this process can be automated, and your organization can be alerted to any event that occurs where the transfer of data does not match the user profile/serial number combination. It is even possible to prevent that data from being transferred by automatically disabling the device. In other words, if someone with a sales ID attempts to copy a file from the accounting server onto a USB drive where the serial number does not match their profile, you can have the drive automatically disabled and issue an incident to investigate this activity. By the same token, if someone with the right user profile/serial number combination copies a file they are permitted to access – something that is a normal, everyday event in conducting business – they would be allowed to do so.
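
A minimal sketch of that profile/serial-number check (the registry of allowed pairs and the disable call are hypothetical, not EventTracker's actual interface):

    # Allowed (user, device serial) pairs, maintained by policy.
    ALLOWED = {("jsmith", "USB-4F21"), ("mjones", "USB-9A07")}

    def on_usb_write(user, serial, path):
        if (user, serial) in ALLOWED:
            return                                    # normal business activity
        print(f"ALERT: {user} wrote {path} to unregistered device {serial}")
        print(f"disabling device {serial}")           # stub: call endpoint control

    on_usb_write("jsmith", "USB-4F21", "q3_forecast.xlsx")    # permitted
    on_usb_write("jsmith", "USB-0000", "accounting.db")       # flagged and disabled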

This solution prevents many headaches, and will prevent your confidential data from making the headlines of the Los Angeles Times or the Washington Post.

To learn how EventTracker can actually automate this security initiative for you, click here 

-John Sennott

Lessons from Honeynet Challenge “Log Mysteries”

Ananth, from Prism Microsystems, provides an in-depth analysis of the Honeynet Challenge “Log Mysteries” and shares his thoughts on what it really means in the real world. EventTracker's syslog monitoring capability protects your enterprise infrastructure from external threats.

100 Log Management uses #66 Secure Auditing – LAuS

Today we continue our series on secure auditing with a look at LAuS, the Linux Audit-Subsystem, a secure auditing implementation in Linux. Red Hat and openSUSE both ship supported implementations, but LAuS is available in the generic Linux kernel as well.

[See post to watch Flash video] -Ananth

Is correlation killing the SIEM market?

Correlation – what’s it good for? Absolutely nothing!*

* Thank you Edwin Starr.

Ok, that might be a little harsh, but hear me out.

The grand vision of Security Information and Event Management is that it will tell you when you are in danger, and the means to deliver this is sifting mountains of log files looking for trouble signs. I like to think of that as big-C Correlation, an admirable concept of associating events with importance. But whenever a discussion occurs about correlation, or for that matter SIEM, it quickly becomes a discussion about what I call little-c correlation, that is, rules-based multi-event pattern matching.

To its proponents, correlation can detect patterns of behavior so subtle that it would be impossible for an unaided human to do the same. It can deliver the promise of SIEM: telling you what is wrong in a sea of data. Heady stuff indeed, and partially true. But the naysayers have numerous good arguments against it as well; in no particular order, some of the more common ones:

• Rules are too hard to write
• The rule builders supplied by the vendors are not powerful enough
• Users don’t understand the use cases (that is usually a vendor rebuttal argument for the above).
• Rules are not “set and forget” and require constant tuning
• Correlation can’t tell you anything you don’t already know (you have to know the condition to write the rule)
• Too many false positives

The proponents reply that this is a technical challenge, that the tools will get better and that the problem will be conquered. I have a broader concern about correlation (little c), however, and that is just how useful it is to the majority of customer use cases. And if it is not useful, is SIEM, with a correlation focus, really viable?

The guys over at Securosis have been running a series defining SIEM that is really worth a read. The method they recommend for approaching correlation is that you look at your last 4-5 incidents when it comes to rule-authoring. Their basic point is that if the goals are modest, you can be modestly successful. OK, I agree, but then how many of the big security problems today are really the ones best served by correlation? Heck, it seems the big problems are people being tricked into downloading and running malware, and correlation is not going to help that; education and change detection are both better ways to avoid those types of threats. Nor will correlation help with SQL injection. Most of the classic scenarios for correlation are successful perimeter breaches, but with a SQL attack you are already within the perimeter. It seems to me correlation is potentially solving yesterday's problems, and doing it, because of technical challenges, poorly.

So, to break down my fundamental issue with correlation: how many incidents are 1) serious, 2) have actually occurred, 3) cannot be mitigated in some other, more reasonable fashion, and 4) are best discovered in the future by detecting a complex pattern?

Not many, I reckon.

No wonder SIEM gets a bad rap on occasion. SIEM will make a user safer but the means to the end is focused on a flawed concept.

That is not to say correlation does not have its uses; certainly, the bigger and more complex the environment, the more likely you are to have cases where correlation could and does help. In the F500, the very complexity of the environment can mean other mitigation approaches are less achievable. The classic correlation-focused SEM market started in the large enterprise, but is it a viable approach elsewhere?

Let's use Prism as an example, as I can speak to the experiences of our customers. We have about 900 customers that have deployed EventTracker, our SIEM solution. These customers are mostly smaller enterprises, what Gartner defines as SME; however, they still purchased predominantly for the classic Gartner use case: the budget came from a compliance drive, but they wanted to use SIEM as a means of improving overall IT security and sometimes operations.

In the case of EventTracker the product is a single integrated solution so the rule-based correlation engine is simply part of the package. It is real-time, extensible and ships with a bunch of predefined rules.

But only a handful of our customers actually use it, and even those who do, don’t do much.

Interestingly enough, most of the customers looked at correlation during evaluation, but when the product went into production only a handful actually ended up writing correlation rules. So in reality, although they thought they were going to use the capability, few did. A larger number, but still a distinct minority, are using some of the preconfigured correlations, as there are some use cases (such as failed logins on multiple machines from a single IP) for which a simple correlation rule makes good sense. Even with the packaged rules, however, customers tended to use only a handful, and regardless, these are not the classic “if you see this on a firewall, and this on a server, and this in AD, followed by outbound FTP traffic, you are in trouble” complex correlation examples people are fond of citing.
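
That one packaged rule really is simple enough to express directly. A sketch (the field names are assumed, and time-window handling is elided):

    from collections import defaultdict

    def single_ip_many_hosts(events, min_hosts=3):
        # Flag any source IP with failed logons against several machines.
        hosts_by_ip = defaultdict(set)
        for e in events:
            if e["outcome"] == "fail":
                hosts_by_ip[e["src_ip"]].add(e["host"])
        return [ip for ip, hosts in hosts_by_ip.items() if len(hosts) >= min_hosts]

    events = [{"src_ip": "10.1.1.1", "host": h, "outcome": "fail"}
              for h in ("web01", "db01", "dc01")]
    print(single_ip_many_hosts(events))    # ['10.1.1.1']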

Our natural reaction was that there was something wrong with the correlation feature, so we went back to the installed base and started nosing about. The common response was: no, nothing wrong, we just never got to it. On further questioning, we surfaced the fact that for most of the problems they were facing, rules were simply not the best approach.

So we have an industry that, if you agree with my premise, is talking about core value that is impractical for all but a small minority. We are, as vendors, selling snake oil.

So what does that mean?

Are prospects overweighting correlation capability in their evaluations to the detriment of other features that they will actually use later? Are they setting themselves up to fail with false expectations into what SIEM can deliver?

From a vendor standpoint, are we all spending R&D dollars on capability that is really just demoware? A case in point is correlation GUIs. Lots of R&D dollars go into correlation GUIs, because writing rules is too hard and customers are supposedly going to write a lot of rules. But the compelling value promised for correlation is the ability to look for highly complex conditions, and inevitably, when you make a development tool simpler, you compromise power in favor of speed of development. In truth you have not only made it simpler, but also stupider and less capable. And if you are seldom writing rules, per the Securosis approach, does it need to be fast and easy at all?

That is not to say SIEM is valueless. SIEM is extremely valuable, but we are focusing on its most difficult and least valuable component, which is really pretty darn strange. There was an interesting and amusing exchange a week or so ago when LogLogic lowered the price of their SEM capability. This, as you might imagine, raised the hackles of the SEM apologists. Rocky uses ArcSight as an example of successful SIEM (although he conveniently talks about SEM as SIEM, and the SIEM use case is broader now than SEM), but how much ESM is ArcSight selling down-market? I tend to agree with Rocky about large enterprise, but using that as an indicator of the broad market is dangerous. Plus, the example of our customers above would lead one to believe people bought for one reason but are using the products in an entirely different way.

So hopefully this will spark some discussion. This is not, and should not be, a slagging match between Log Management, SEM, or SIM, because it seems to me the only real difference between SEM and LM these days is in the amount of lip service paid to real-time rules.

So let’s talk about correlation – what is it good for?

-Steve Lafferty

SIEM or Log Management?

Mike Rothman of Securosis has a thread titled Understanding and Selecting SIEM/Log Management. He suggests the two disciplines have fused, and defines the holy grail of security practitioners as “one alert telling exactly what is broken.” In the ensuing discussion, there is a suggestion that SIEM and Log Management have not fused, and that there are vendors that do one but not the other.

After a number of years in the industry, I find myself uncomfortable with either term (SIEM or Log Management) as it relates to the problem the technology can solve, especially for the mid-market, which is our focus.

The SIEM term suggests it’s only about security, and while that is certainly a significant use case, it is hardly the only use for the technology. If a user wishes to use the technology for only the security use case, fine, but that is not a reflection of the technology. And by the way, Security Information Management would perforce include other items, such as change audit and configuration assessment data, which are outside the scope of “Log Management.”

The trouble with the term Log Management is that it is not tied to any particular use case, and that makes it difficult to sell (not to mention boring). Why would you want to manage logs anyway? Users only care about solutions to real problems they have, not generic “best practice” because Mr. Pundit says so.

SIEM makes sense as “the” use case for this technology as you go to large (Fortune 2000) enterprises, and there SIEM is often a synonym for correlation. But to do this in any useful way, you need not just the box (real or virtual) but especially the expert analyst team to drive it and keep it updated and ticking. What is this analyst team busy with? Updating the rules to accommodate constantly changing elements (threats, business rules, IT components) to get that “one alert.” This is not like antivirus, where rule updates come directly from the vendor with no intervention from the admin or user. This is a model only large enterprises can afford.

Some vendors suggest that you can reduce this to an analyst-in-a-box for the small enterprise: just buy my box, enable these default rules with minimal intervention, and bingo, you will be safe. All too commonly the result is either irrelevant alerts, or the magic box does nothing at all, like the dog in the night-time. This is a major reason for “pissed-off SIEM users.” And of course a dedicated analyst (much less a team) is simply not available.

This is not to say that the technology is useless absent the dedicated analyst, or that SIEM is a lost cause, but rather to paint a realistic picture: any “box” can only go so far by itself, and given the more-with-less needs of this mid-market, obsessing over SIEM features obscures the greater value offered by this technology.

Most medium-enterprise networks are “organically grown architectures,” a response to business needs; there is rarely an overarching security model that covers the assets. Point solutions dominate, based on incidents, perceived threats, or specific compliance mandates. See the results of our virtualization survey for example. Given the resource constraints, the technology must have broad features beyond the (essential) security ones. The smarter the solution, the less smart the analyst needs to be, so really it’s a box-for-an-analyst (and of course all boxes now ought to be virtual).

It makes sense to ask what problem is solved, as this is the universe customers live in. Mike identifies reacting faster, security efficiency, and compliance automation, to which I would add operations support and cost reduction. More specifically, across the board: show what is happening (track users; monitor critical systems, applications, and firewalls; USB activity; database activity; hypervisor changes; physical equipment; etc.), show what has happened (forensics, reports, etc.), and show what is different (change audit).

So, back to the question: what would you call such a solution? SIEM has been pounded by Gartner et al. into the budget line items of large enterprises, so it is easier to be recognized as a need. However, it is a limiting description. If I had only these two choices, I would have to favor Log Management, of which one (essential) application is SIEM.

-Ananth

100 Log Management uses #64: Tracking user activity, Part III

Continuing our series on user activity monitoring, today we look at something that is very hard to do in Vista and later, and impossible in XP and earlier: reporting on system idle time. The only way to accomplish this in Windows is to set up a domain policy to lock the screen after a certain amount of time, and then calculate from the time the screen saver is invoked to when it is cleared. In XP and prior, the invocation of the screen saver does not generate an event, so you are out of luck. In Vista and later an event is triggered, so it is slightly better, but even there the information should only be viewed as an estimate, as the method is not fool-proof. We’ll look at the pros (few) and cons (many). Enjoy.
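
For the Vista-and-later case, the arithmetic itself is simple. Below is a minimal sketch in Python that pairs screen-saver events per user, assuming the Security log’s screen-saver events (ID 4802 invoked, ID 4803 dismissed, available only when “Other Logon/Logoff Events” auditing is on) have already been exported as tuples; treat the IDs and record layout as assumptions to verify against your own logs.

# Sketch: estimate idle time by pairing "screen saver invoked" (4802)
# with "screen saver dismissed" (4803) events per user. An estimate
# only, as the post notes; assumes events arrive sorted by time.
from datetime import datetime, timedelta
from collections import defaultdict

def idle_time(events):
    """events: iterable of (timestamp, event_id, user), sorted by time."""
    invoked = {}                       # user -> time the screen saver started
    totals = defaultdict(timedelta)    # user -> accumulated idle time
    for ts, event_id, user in events:
        if event_id == 4802:           # screen saver invoked: idle begins
            invoked[user] = ts
        elif event_id == 4803 and user in invoked:  # dismissed: idle ends
            totals[user] += ts - invoked.pop(user)
    return dict(totals)

if __name__ == "__main__":
    day = datetime(2010, 5, 1)
    sample = [
        (day.replace(hour=12, minute=5), 4802, "alice"),
        (day.replace(hour=12, minute=50), 4803, "alice"),
    ]
    for user, idle in idle_time(sample).items():
        print(f"{user}: roughly {idle} idle")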

100 Log Management uses #63 Tracking user activity, Part II

Today we continue our series on user activity monitoring using event logs. Any analysis of user activity starts with the system logon. We will take a look at some sample events and describe the types of useful information that can be pulled from the log. While we are looking at user logons, we will also take a short diversion into failed logons. While perhaps not directly useful for activity monitoring, paying attention to failed logon attempts is also critical.
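
As a taste of what the session covers, here is a hedged sketch of tallying successful and failed logons per account from exported Security-log records; the event IDs (4624 success, 4625 failure, Vista and later) and the two-field record layout are simplifying assumptions, since real events carry many more useful fields (logon type, source address, and so on).

# Sketch: summarize successful (4624) and failed (4625) logons per
# account from exported Security-log records.
from collections import Counter

def summarize_logons(events):
    """events: iterable of (event_id, account) pairs."""
    ok, failed = Counter(), Counter()
    for event_id, account in events:
        if event_id == 4624:      # successful logon
            ok[account] += 1
        elif event_id == 4625:    # failed logon
            failed[account] += 1
    return ok, failed

if __name__ == "__main__":
    sample = [(4624, "alice"), (4625, "bob"), (4625, "bob"), (4625, "bob")]
    ok, failed = summarize_logons(sample)
    for account, n in failed.items():
        print(f"{account}: {ok[account]} successful, {n} failed logons")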

100 Log Management uses #62 Tracking user activity

Today we begin a new miniseries: looking at and reporting on user activities. Most enterprises restrict what users are able to do, such as playing computer games during work hours. This can be done through software that restricts access, but often it is simply enforced on the honor system. Regardless of which approach a company takes, analyzing logs gives a pretty good idea of what users are up to. In the next few sessions we will take a look at the various logs that get generated and what can be done with them.

100 Log Management uses #61: Static IP address conflicts

Today we look at an interesting operational use case of logs that we learned about through painful experience: static IP address conflicts. We have a fairly large number of static IP addresses assigned to our server machines. Typical of a smaller company, we assigned IP addresses and recorded them in a spreadsheet. Well, one of our network guys made a mistake and we ended up with duplicate addresses. The gremlins came out in full force and nothing seemed to be working right! We used logs to quickly diagnose the problem. Although I mention a Windows pop-up as a possible means of being alerted to the problem, I can safely say we did not see it, or if we did, we missed it.
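
If you want to automate the diagnosis, the check is a few lines over an exported System log. The sketch below assumes a CSV export with time, source, event_id, and message columns, and that the conflict shows up as event ID 4199 from the Tcpip source; verify that ID against your own logs before relying on it.

# Sketch: scan an exported System log for TCP/IP address-conflict
# events (assumed here to be event ID 4199 from the Tcpip source).
import csv

def find_ip_conflicts(path):
    conflicts = []
    with open(path, newline="") as f:
        # assumes columns: time, source, event_id, message
        for row in csv.DictReader(f):
            if row["source"] == "Tcpip" and int(row["event_id"]) == 4199:
                conflicts.append((row["time"], row["message"]))
    return conflicts

if __name__ == "__main__":
    for when, msg in find_ip_conflicts("system_log.csv"):
        print(f"{when}: {msg}")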

– By Ananth

100 Log Management uses #59 – 6 items to monitor on workstations

In part 2 of our series on workstation monitoring we look at the six things it is in your best interest to monitor, the types of things that, monitored proactively, will save you money by preventing operational and security problems. I would be very interested to hear if any of you monitor other things that you feel are more valuable. Hope you enjoy it.

100 Log Management uses #58 The why, how and what of monitoring logs on workstations

Today we are going to start a short series on the value of monitoring logs on Windows workstations. It is commonly agreed that log monitoring on servers is a best practice, but until recently the complexity and expense of log management on workstations made most people shy away. Workstation log monitoring is valuable, though, and easy as well, if you know what to look for. These next three blogs will tell you the why, the how, and the what.

Sustainable vs. Situational Values

I am often asked: if Log Management is so important to the modern IT department, how come more than 80 percent of the market that “should” have adopted it has not done so?

The cynic says that unless you have best practice as an enforced regulation (think PCI-DSS here), ’twill always be thus.

One reason, I think, is that earlier generations never had power tools and found looking at logs to be hard and relatively unrewarding work. That perception is hard to overcome, even in this day and age, after endless punditry and episode after episode have clarified the value.

Still resisting the value proposition? Then consider a recent column in the NY Times that quotes Dov Seidman, the C.E.O. of LRN, who describes two kinds of values: “situational values” and “sustainable values.”

The article is in the context of the current political situation in the US but the same theme applies to many other areas.

“Leaders, companies or individuals guided by situational values do whatever the situation will allow, no matter the wider interests of their communities. For example, a banker who writes a mortgage for someone he knows can’t make the payments over time is acting on situational values, saying: I’ll be gone when the bill comes due.”

At the other end, people inspired by sustainable values act just the opposite, saying: I will never be gone. “I will always be here. Therefore, I must behave in ways that sustain — my employees, my customers, my suppliers, my environment, my country and my future generations.”

We accept that your datacenter grew organically, that back in the day there were no power tools and you dug ditches with your bare hands outside when it was 40 below and tweets were for the birds… but that was then and this is now.

Get Log Management; it’s a sustainable value.

Ananth

100 Log Management uses #57 PCI Requirement XII

Today we conclude our journey through the PCI standard with a quick look at Requirement 12, which covers the necessity to set up and maintain an information security policy for employees and contractors. While this is mostly a documentation exercise, it does have requirements for monitoring and alerting that log management can certainly help with.

100 Log Management uses #55 PCI Requirements VII, VIII & IX

Today we look at PCI-DSS Requirements 7, 8, and 9. In general these are not quite as applicable as the audit requirements in Requirement 10, which we will be looking at next time, but log management is still useful in several ancillary areas. Restricting access and strong access control are both disciplines log management helps you enforce.

Panning for gold in event logs

Ananth, the CEO of Prism, is fond of remarking that “there is gold in them thar logs…” This is absolutely true, but the really hard thing about logs is figuring out how to get the gold out without needing to be the guy with the pencil neck and the 26 letters after his name who enjoys reading logs in their original arcane format. For the rest of us, I am reminded of the old western movies where prospectors pan for gold: squatting by the stream, scooping up dirt and sifting through it looking for gold, all day long, day after day. Whenever I see one of those scenes my back begins to hurt, and I feel glad I am not a prospector. At Prism we are in the business of gold-extraction tools. We want more people finding gold, and lots of it. It is good for both of us.

One of the most common refrains we hear from prospects is that they are not quite sure what the gold looks like. When you are panning for gold and you are not sure whether that glinty thing in the dirt is gold, well, that makes things really challenging. If very few people can recognize the gold, we are not going to sell large quantities of tools.

In EventTracker 6.4 we undertook a little project where we asked ourselves, “What can we do for the person who does not know enough to really look, or to ask the right questions?” A lot of log management is looking for the out-of-the-ordinary, after all. The result is a new dashboard view we call the Enterprise Activity Monitor.

Enterprise Activity uses statistical correlation to look for things that are simply unusual. We can’t tell you they are necessarily trouble, but we can tell you they are not normal, and enable you to analyze them and make a decision. Little things that are interesting: a new IP address coming into your enterprise 5,000 times; a user who generally performs 1,000 activities in a day suddenly performing 10,000; or something as simple as a new executable showing up unexpectedly on user machines. Will you chase the occasional false positive? Definitely. But a lot of the manual log review being performed by the guys with the alphabets after their names is really just manually chasing trends, and this lets you stop wasting significant time detecting the trend: all the myriad clues that are easily lost when you are aggregating 20 or 100 million logs a day.
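
This is not the actual Enterprise Activity Monitor algorithm, but the underlying statistical idea is easy to sketch: baseline each user’s daily activity count, then flag any day that lands far outside it. The three-standard-deviation threshold below is illustrative.

# Sketch of flagging "unusual" activity: baseline each user's daily
# event count, then flag any day far above the historical mean.
# Three standard deviations is an illustrative threshold only.
from statistics import mean, stdev

def unusual_days(history, today):
    """history: dict user -> list of past daily counts;
    today: dict user -> today's count. New users are flagged too."""
    alerts = []
    for user, count in today.items():
        past = history.get(user, [])
        if len(past) < 2:                     # no baseline yet: unusual by definition
            alerts.append((user, count, "no baseline"))
            continue
        mu, sigma = mean(past), stdev(past)
        if count > mu + 3 * max(sigma, 1.0):  # floor sigma to avoid zero-variance noise
            alerts.append((user, count, f"baseline ~{mu:.0f}"))
    return alerts

if __name__ == "__main__":
    history = {"alice": [950, 1000, 1020, 980, 1010]}
    today = {"alice": 10000, "mallory": 500}
    for user, count, why in unusual_days(history, today):
        print(f"{user}: {count} events today ({why})")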

The response from the beta customers indicates that we are onto something. After all, anything that can make our (hopefully more numerous) customers’ lives less tedious and their backs hurt less is all good!

Steve Lafferty

100 Log Management uses #54 PCI Requirements V & VI

Last time we looked at PCI-DSS Requirements 3 and 4, so today we are going to look at Requirements 5 and 6. Requirement 5 calls for anti-virus software, and log management can be used to monitor AV applications to ensure they are running and updated. Requirement 6 is all about developing and maintaining secure systems and applications, for which log management is a great aid.
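
As one concrete example for Requirement 5, a log management system can flag hosts whose AV has gone quiet. The sketch below assumes you have already extracted, per host, the timestamp of the last AV signature-update log entry; the seven-day threshold is illustrative.

# Sketch: flag hosts whose antivirus has not logged a signature
# update recently. Record format and threshold are assumptions.
from datetime import datetime, timedelta

def stale_av_hosts(last_update, now, max_age=timedelta(days=7)):
    """last_update: dict host -> datetime of last AV-update log entry."""
    return [h for h, ts in last_update.items() if now - ts > max_age]

if __name__ == "__main__":
    now = datetime(2010, 5, 1)
    seen = {"WKS1": now - timedelta(days=1), "WKS2": now - timedelta(days=30)}
    print(stale_av_hosts(seen, now))   # -> ['WKS2']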

-By Ananth

PCI-DSS under the gun

Have you been wondering why some of the statements coming from the credit card processing industry seem a little contradictory? You hear about PCI-compliant entities being hacked, yet the PCI guys are still claiming they have never had a compliant merchant successfully breached. Perhaps not, but if both statements are true, you have an ineffective real-world standard or, at the very least, a problematic certification process.

Not to pick on Heartland again, but Heartland passed their PCI-mandated audit and was deemed compliant by a certified PCI auditor approximately one month prior to the now-infamous hack. Yet at Visa’s Global Security Summit in Washington in March, Visa officials were adamant in pointing out that no PCI-compliant organization has been breached.

Now, granted, Heartland was removed from the list of certified vendors after the breach, though perhaps this was just a bizarre Catch-22 in play: you are compliant until you are hacked, but when you are hacked, the success of the hack makes you non-compliant.

Logically, it seems one of four things, or some combination of them, could have occurred at Heartland:
 
  1. The audit was inadequate or its results incorrect, leading to a faulty certification.
  2. Heartland, in the intervening month, made a material change to its infrastructure that threw it out of compliance.
  3. The hack was accomplished in an area outside the purview of the DSS.
  4. Ms. Richey (and others) is doing some serious whistling past the graveyard.

What is happening in the Heartland case is the classic corporate, litigation-averse response to a problem. Any time something bad happens, the blame game starts with multiple targets, and as a corporation your sole goal is to get behind one or another (preferably larger) target, because when the manure hits the fan, the one in the very front gets covered. Unfortunately this behavior does not foster solving the problem, as everyone has their lawyers and no one is talking.

Regardless, maybe the PCI folks should not be saying things like “no compliant entity has ever been breached,” and instead say something like “perhaps we have a certification issue here,” or “how do we reach continuous compliance?” or even “what are we missing here?”

-Steve Lafferty