User location affinity

It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials figure in a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytics package with behavior rules that self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive logins. Once a baseline of behavior is established, out-of-the-ordinary behavior is flagged for investigation. This is done in real time and across all enterprise assets.

For example, if user susan typically logs into wks5, but her stolen credentials are now used to log into server6, this would be identified as out-of-the-ordinary and tagged for closer inspection.
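Conceptually, the baseline is a per-user set of hosts seen during a learning period, and logons to unfamiliar hosts are surfaced for review. Here is a minimal sketch of that idea in Python; it is an illustration only, not EventTracker's actual behavior rules, and the event fields are assumed.

```python
from collections import defaultdict

# user -> set of hosts where interactive logons were observed during learning
baseline = defaultdict(set)

def learn(user, host):
    """Record a known-good interactive logon during the learning period."""
    baseline[user].add(host)

def check(user, host):
    """Flag logons to hosts outside the user's learned affinity."""
    if host not in baseline[user]:
        print(f"Out-of-ordinary: {user} logged into {host} - tag for closer inspection")
    else:
        print(f"Normal: {user} -> {host}")

# Learning phase: susan habitually logs into wks5
learn("susan", "wks5")

# Detection phase: stolen credentials used against server6
check("susan", "wks5")     # Normal
check("susan", "server6")  # Out-of-ordinary
```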

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response.

The PCI-DSS Compliance Report

A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).

Is this you? Here is how you spell relief – SIEM Simplified.

Which is it? Security or Compliance?

This is a classic chicken-and-egg question, but the two are too often thought to be the same. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a technical or non-technical requirement and periodically someone verifies that you have met them.

Compliance requirements are established by standards bodies, who obviously do not know your network. They are established for the common good because of industry-wide concerns that information is not protected, usually because the security is poor. When you see an emphasis on compliance over security, it is too often because the organization does not want to take the time to ensure that the network and information are secure, so it relies on compliance requirements to feel better about its security.

The problem with that is that it gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as breaches at Sony, Target, TJMaxx and so many others show. Although there are implementations of compliance that will make you more secure, you cannot base your company's security policy on a third party's compliance requirements.

So what comes first? Wrong question! Let’s rephrase – there needs to be a healthy relationship between the two, but one cannot substitute for the other.

Threat data or Threat Intel?

Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat Intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while many other ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good right? More is better, right? Actually, not quite.
Look closely. You are confusing “intelligence” with “data”.

As the Lt Commander of the Starship Enterprise would tell you, Data is not Intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets and how?”, which would lead to a coherent defense.

The steps to use Threat Data are easily explained (a sketch of the first two follows below):
1) Compare observations on the local network against the threat data.
2) Alert on matches.

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and really the crux of converting threat data to threat intelligence. To do this effectively would require human skills that combine both expert knowledge of the modern ThreatScape with knowledge of the network architecture.
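The first two steps lend themselves to simple automation. A rough sketch, assuming the feed is just a set of known-bad IP addresses and that firewall or flow records carry a destination field:

```python
# Hypothetical threat feed: a set of known-bad IP addresses
threat_feed = {"203.0.113.45", "198.51.100.7"}

# Hypothetical local observations, e.g. parsed firewall or flow records
observed = [
    {"src": "10.1.2.3", "dst": "203.0.113.45", "port": 443},
    {"src": "10.1.2.9", "dst": "93.184.216.34", "port": 80},
]

# Step 1: compare observations against the threat data
# Step 2: alert on matches
for conn in observed:
    if conn["dst"] in threat_feed:
        print(f"ALERT: {conn['src']} contacted listed address {conn['dst']}:{conn['port']}")
```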

Step 3 is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion or degree) than expert knowledge of the modern ThreatScape and the contours and limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but a) such staff are hard to come by and b) budget for them is even harder.

What now?

Consider a co-managed solution like SIEM Simplified where the expert knowledge of the modern ThreatScape in the context of your network is provided by an external group. When this is combined with your internal resources to co-manage the problem, it can result in improved coverage at an affordable price point.

How to shoot yourself in the foot with SIEM

Six ways to shoot yourself with SIEM technology:
1) Don’t plan; just jump in
2) Have no defined scope or use cases; whatever
3) Confuse SIEM with Log Management
4) Monitor noise; apply no filters
5) Don’t correlate with any other technologies, e.g., IDS, vulnerability scanner, Active Directory
6) Staff poorly or not at all

For grins, here’s how programmers shoot themselves in the foot:

ASP.NET
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
C
You try to shoot yourself in the foot, but find out that the gun is actually a howitzer cannon.
C++
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
JavaScript
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SQL
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
UNIX
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

Click here for the Top 6 Uses of SIEM.

Venom Vulnerability exposes most Data Centers to Cyber Attacks

Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger as HeartBleed did.

Dubbed VENOM, short for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could, in theory, expose most data centers to malware attacks.

Yes, the risk of the Venom vulnerability is theoretical, as no real-world exploitation has been seen yet, while last year’s HeartBleed bug was practically exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now let’s take a closer look at Venom:

Venom (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms and, if exploited…

…could allow an attacker to escape from a guest ‘virtual machine’ (VM) and gain full control of the operating system hosting them, as well as any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, the senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

However, Geffner also added that, so far, there is no known exploit that could successfully take advantage of the vulnerability. Even so, Venom is critical and disturbing enough to be considered a high-priority bug.

What successful exploitation of Venom requires:
An attacker sitting on the guest virtual machine would need sufficient permissions to access the floppy disk controller I/O ports.

On a Linux guest machine, an attacker would need either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

However, Venom and HeartBleed are hardly comparable. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply would not be exploitable at the same scale.

Flaws like Venom are typically used in highly targeted attacks such as corporate espionage, cyber warfare or other attacks of this kind.

Did Venom poison cloud services?

Potentially more concerning, most of the large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and are therefore potentially vulnerable to Venom.

However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

Microsoft’s Azure cloud service, on the other hand, uses its own home-grown virtualization hypervisor technology, and therefore its customers are not affected by the Venom bug.

Meanwhile, Google also assured that its Cloud Service Platform does not use the vulnerable software, thus was never vulnerable to Venom.

Patch Now! Protect yourself

Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to Venom. Red Hat recommends that users update their systems using the commands “yum update” or “yum update qemu-kvm.”

Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them to be on the safe side. Remember, merely restarting the guest operating system without powering it off is not enough, because it would still use the old QEMU binary.

See more at Hacker News.

Five quick wins to reduce exposure to insider threats

Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)

A data breach has serious consequences both directly and indirectly. Lost revenue and a tarnished brand reputation both inflict harm long after incident resolution and post breach clean-up. Still, many organizations don’t take necessary steps to protect themselves from a potentially detrimental breach.

But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”

How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. Quick wins also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.

1) Control the use of Administrator privilege
The misuse of administrative privileges is a primary method for attackers to spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running as a privileged user is fooled by simply surfing to a website hosting attacker content that can automatically exploit browsers. The file or exploit contains executable code that runs on the victim’s machine. Since the victim user’s account has administrative privileges, the attacker can take over the victim’s machine completely and install malware to find administrative passwords and other sensitive data.

2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees only have access to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require more strict protection.

3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?

4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of business operations. Attackers are very conscious of these issues and use them to plan their exploits: carefully crafting phishing messages that look like routine and expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; using nominally non-security-critical systems as jump points or bots, and so on.

5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks. By this time, incidents often become much more costly than needed. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.

A guiding principle of IT Security is “Prevention is ideal but detection is a must.”

Have you reduced your exposure?

Secure, Usable, Cheap: Pick any two

This fundamental tradeoff between security, usability, and cost is critical. Yes, it is possible to have both security and usability, but at a cost in terms of money, time and personnel. Making something both cost-efficient and usable, or even secure and cost-efficient, may not be very hard; it is, however, more difficult and time consuming to make something both secure and usable. This takes a lot of effort and thinking, because security takes planning and resources.

For a system administrator, usability is at the top of the list. For a security administrator, however, security is at the top of the list – no surprise here really.

What if I tell you that the two job roles are orthogonal? What gets a sys admin bouquets will get a security admin brickbats, and vice versa.

Oh and when we say “cheap” we mean in terms of effort – either by the vendor or by the user.

Security administrators face some interesting tradeoffs. Fundamentally, the choice to be made is between a system that is secure and usable, one that is secure and cheap or one that is cheap and usable. Unfortunately, we cannot have everything. The best practice is not to make the same person responsible for both security and system administration. The goals of those two tasks are far too often in conflict to make this a position that someone can become successful at.

PCI-DSS 3.1 is here – what now?

On April 15, 2015, the PCI Security Standards Council (PCI SSC) announced the release of PCI DSS v3.1. This follows closely on the heels of PCI DSS 3.0, that just went into full effect on January 1, 2015. There is a three-year cycle between major updates of PCI DSS and, outside of that cycle, the standard can be updated to react to threats as needed.

The major driver of PCI DSS 3.1 is the industry’s conclusion that SSL version 3.0 is no longer a secure protocol and therefore must be addressed by the PCI DSS.

What happened to SSL?
The last-released version of encryption protocol to be called “SSL”—version 3.0—was superseded by “TLS,” or Transport Layer Security, in 1999. While weaknesses were identified in SSL 3.0 at that time, it was still considered safe for use up until October of 2014, when the POODLE vulnerability  came to light. POODLE is a flaw in the SSL 3.0 protocol itself, so it’s not something that can be fixed with a software patch.

Bottom line
Any business software running SSL 2.0 or 3.0 must be reconfigured or upgraded.
Note: Most SSL/TLS deployments support both SSL 3.0 and TLS 1.0 in their default configuration. Newer software may support SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2. In these cases the software simply needs to be reconfigured. Older software may only support SSL 2.0 and SSL 3.0 (if this is the case, it is time to upgrade).

How to detect SSL/TLS usage and version?
A vulnerability scan from the EventTracker Vulnerability Assessment Service (ETVAS), or another scanner, will identify insecure implementations.
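For a quick spot check of a single endpoint, the snippet below (a sketch, not a substitute for a scan) reports the protocol version a server negotiates with a modern client. Note that it shows the best version agreed on; it will not tell you whether the server also still accepts SSL 3.0, which is what a scanner such as ETVAS is for.

```python
import socket
import ssl

def negotiated_version(host: str, port: int = 443) -> str:
    """Connect and report the TLS version the server negotiates."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # spot check only; do not verify the certificate
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()        # e.g. 'TLSv1.2'

print(negotiated_version("example.com"))
```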

SSL/TLS is a widely deployed encryption protocol. The most common use of SSL/TLS is to secure websites (HTTPS), though it is also used to:
• Secure email in transit (SMTPS or SMTP with STARTTLS, IMAPS or IMAP with STARTTLS)
• Share files (FTPS)
• Secure connections to remote databases and secure remote network logins (SSL VPN)

SIEM Simplified customers
The EventTracker Control Center will have contacted you to correctly configure the Windows server instance hosting the EventTracker Manager Console, to comply with the guideline. You must upgrade or reconfigure all other vulnerable servers in your network.

If you subscribe to ETVAS, the latest vulnerability reports will highlight any servers that must be reconfigured along with detailed recommendations on how to do so.

Three myths of IT Security

Myth 1: Hardening a system makes it secure
Security is a process, to be evaluated on a constant basis. There is nothing that will put you into a “state of security.” Did you really think that simply applying some hardening guide to a system will make it secure?

Many threats exploit unpatched vulnerabilities, and none of them would have been stopped by hardening settings. Few settings can prevent your network from being attacked through unpatched vulnerabilities.

Myth 2: If We Hide It, the Bad Guys Won’t Find It
Also known as security by obscurity, hiding the system doesn’t really help. For instance, turning off SSID broadcast in wireless networks. Not only will you now have a network that is non-compliant with the standard, but your clients will also prefer a rogue network with the same name over the legitimate one. Oh, and it takes a few minutes to actually find the network anyway, given the proper tools. Another example is changing the banners on your Web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the Web site is running anyway. Second, most of the bad guys are not smart enough to do that, so they just try all the exploits, including the IIS ones. Yet another example is renaming the Administrator account. It is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account. They now have an account called “Janitor3” with a comment of “Built in account for administering the computer/domain.” This is not really likely to fool anyone.

Myth 3: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. In some environments you are willing to break things in the name of protection that you are not willing to break in others.

Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive.

Safeguards should be applied in proportion to risk.

Does sharing Threat Intel work?

In the next couple of months, Congress will likely pass CISA, the Cybersecurity Information Sharing Act. Its purpose is to “codify mechanisms for enabling cybersecurity information sharing between private and government entities, as well as among private entities, to better protect information systems and more effectively respond to cybersecurity incidents.”

Can it help? It’s interesting to note two totally opposing views.

Arguing that it will help is Richard Bejtlich of Brookings. His analogy: threat intelligence is, in some ways, like a set of qualified sales leads provided to two companies. The first has a motivated sales team, polished customer acquisition and onboarding processes, authority to deliver goods and services, and quality customer support. The second business has a small sales team, or perhaps no formal sales team. Its processes are broken, and it lacks authority to deliver any goods or services, so the leads are not especially valuable to it. Now, consider what happens when each business receives a bundle of qualified sales leads. Which business will make the most effective use of their list of profitable, interested buyers? The answer is obvious, and there are parallels to the information security world.

Arguing that it won’t help at all is Robert Graham, the creator of BlackICE Guard. His argument is “CISA does not work. Private industry already has exactly the information sharing the bill proposes, and it doesn’t prevent cyber-attacks as CISA claims. On the other side, because of the false-positive problem, CISA does far more to invade privacy than even privacy advocates realize, doing a form of mass surveillance.”

In our view, Threat Intel is a new tool. Its usefulness depends on the artisan wielding it. A poorly skilled user will get less value.

Want experts on your team but don’t know where to start? Try our managed service SIEM Simplified. Start quick and leverage your data!

Threat Intelligence vs Privacy

On Jan 13, 2015, the U.S. White House published a set of legislative proposals on cyber security threat intelligence (TI) sharing between private and public entities. Given the breadth of cyber attacks across the Internet, the sharing of information between commercial entities and the US Government is increasingly critical. Absent sharing, first responders charged with cyber defense are at a disadvantage in detecting and responding to common attacks.

Should this cause a privacy concern?
Richard Bejtlich, senior fellow at Brookings says “Threat intelligence does not contain personal information of American citizens, and privacy can be maintained while learning about threats. Intelligence should be published in an automated, machine-consumable, standardized manner.”

The White House proposal uses the following definition:
“The term `cyber threat indicator’ means information —
(A) that is necessary to indicate, describe or identify–
(i) malicious reconnaissance, including communications that reasonably appear to be transmitted for the purpose of gathering technical information related to a cyber threat;
(ii) a method of defeating a technical or operational control;
(iii) a technical vulnerability;
(iv) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system inadvertently to enable the defeat of a technical control or an operational control;
(v) malicious cyber command and control;
(vi) any combination of (i)-(v).
(B) from which reasonable efforts have been made to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.”

If you take the above at face value, then a reasonable assumption is that these sorts of cyber threat indicators should not trigger privacy concerns, whether shared between the private sector and the government or within the private sector.

Of course, getting TI and using it effectively are completely different as discussed here. Bejtlich reminds us that “private sector organizations should focus first on improving their own defenses before expecting that government assistance will mitigate their security problems.”

Looking for a practical, cost-effective way to shore up your defenses? SIEM Simplified is one way to go about it.

Death by a Thousand cuts

You may recall that back in 2012, then Secretary of Defense Leon Panetta warned of “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.”

This hasn’t quite come to pass has it? Is it dumb luck? Or are we just waiting for it to happen?

In his annual testimony about the intelligence community’s assessment of “global threats,” Director of National Intelligence James Clapper sounded a more nuanced and less hyperbolic tone. “Rather than a ‘cyber Armageddon’ scenario that debilitates the entire U.S. infrastructure, we envision something different,” he said, “We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on U.S. economic competitiveness and national security.”

The reality is that the U.S. is being bombarded by cyber attacks of a smaller scale every day—and those campaigns are taking a toll.

Now the DNI also went on to say “Although cyber operators can infiltrate or disrupt targeted [unclassified] networks, most can no longer assume that their activities will remain undetected, nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.”

Alan Paller of the SANS Institute says “Those words translate directly to a simpler statement: ‘The weapons and other systems we operate today cannot be protected from cyber attack.’ Instead, as a nation, we have to put in place the people and support systems who can find the intruders and excise them fast.”

So then what capabilities do you have in this area given that the attacks are continuous and ongoing against your infrastructure?

Want to do something about it quickly and effectively? Consider SIEM Simplified, our service offering that can take the heavy lifting required to implement such monitoring programs off your hands.

PoSeidon and EventTracker

A new and harmful Point-of-Sale (“POS”) malware has been identified by security researchers at Cisco’s Security Intelligence & Research Group. The team says it is more sophisticated and damaging than previous POS malware programs.

Nicknamed PoSeidon, the new malware family targets POS systems, infects machines and scrapes the memory for credit card information which it then exfiltrates to servers, primarily .ru TLD, for harvesting or resale.

When consumers use their credit or debit cards to pay for purchases from a retailer, they swipe their card through POS systems. Information stored on the magnetic stripe on the back of those cards is read and retained by the POS. If the information on that stripe is stolen, it can be used to encode the magnetic strip of a fake card, which is then used to make fraudulent purchases. POS malware and card fraud have been steadily rising, affecting large and small retailers. Target, one of the most visible victims of a security breach involving access to its payment card data, incurred losses approximated at $162 million (before insurance recompense).

PoSeidon employs a technique called memory scraping, in which the RAM of infected terminals is scanned for unencrypted strings that match credit card information. When PoSeidon takes over a terminal, a loader binary is installed to allow the malware to remain on the target machine even through system reboots. The loader then contacts a command and control server and retrieves a URL which contains another binary, FindStr, to download and execute. FindStr scans the memory of the POS device for strings (hence its name) and installs a key logger which looks for number strings and keystrokes analogous to payment card numbers and sequences. The researchers noted that it looks for number sequences beginning with the digits generally used by Discover, Visa, MasterCard and American Express cards (6, 5, 4, and 3 respectively), along with the expected number of digits following them: 16 digits for the first three, 15 digits for American Express. This data is then encoded and sent to an exfiltration server.
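To make the scraping step concrete, the sketch below shows the kind of string matching involved: assumed candidate prefixes and lengths plus a Luhn checksum to weed out random digit runs. It is an illustration of the technique, not PoSeidon’s actual code, and the same logic is useful for scanning your own memory dumps or logs for exposed card data.

```python
import re

# 15-digit strings starting with 3 (Amex) or 16-digit strings starting with 4/5/6
CANDIDATE = re.compile(r"(?<!\d)(3\d{14}|[456]\d{15})(?!\d)")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

memory_blob = "txn=ok 4111111111111111 noise 4111111111111112"
for m in CANDIDATE.finditer(memory_blob):
    if luhn_ok(m.group()):
        print("possible card number:", m.group())  # only the Luhn-valid string is reported
```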

A whitepaper for detecting and protecting from PoSeidon malware infection is also available from EventTracker.

Tired of keeping up with the ever changing Threatscape? Consider SIEM Simplified. Let our managed SIEM solution do the heavy lifting.

Want to be acquired? Get your cyber security in order!

Washington Business Journal Senior Staff Reporter Jill Aitoro hosted a panel of cyber experts Feb. 26 at Crystal Tech Fund in Arlington, VA.

The panel noted that how well a company has locked down their systems and data will have a direct effect on how much a potential buyer is willing to shell out for an acquisition — or whether a buyer will even bite in the first place.

Howard Schmidt, formerly CISO at Microsoft, recalled: “We did an acquisition one time — about $10 million. It brought tons of servers, a big IT infrastructure; when all was said and done, it cost more than $20 million to rebuild the systems that had been owned by criminals and hackers for at least two years. That’s a piece of M&A you need to consider.”

Many private investors are doing exactly that, calling in security companies to assess a target company’s cyber security posture before making an offer. In some cases, the result will be to not invest at all, with the venture capitalist telling a company to “get their act together and then call back”.

The Pyramid of Pain

There is great excitement amongst security technology and service providers about the intersection of global threat intelligence with local observations in the network. While there is certainly cause for excitement, it’s worth pausing to ask the question “Is Threat Intelligence being used effectively?”

David Bianco explains that not all indicators of compromise are created equal. The pyramid defines the pain it will cause the adversary when you are able to deny those indicators to them.

Hash Values: SHA1, MD5 or other similar hashes that correspond to specific suspicious or malicious files. Hash values are often used to provide unique references to specific samples of malware or to files involved in an intrusion. EventTracker can provide this functionality via its Change Audit feature (see the sketch after this list).
IP Addresses: or even net blocks. If you deny the adversary the use of one of their IPs, they can usually recover quickly. EventTracker addresses these via its Behavior Module and the associated IP Reputation lookup.
Domain Names: These are harder to change than IP addresses. EventTracker can either use logs from a proxy or scan web server logs to detect such artifacts.
Host Artifacts: For example, the attacker’s HTTP recon tool may use a distinctive User-Agent string when searching your web content (off by one space or semicolon, for example; or maybe they just put their name in it. Don’t laugh, this happens!). This can be detected by the Behavior Module in EventTracker when focused on the User-Agent string from web server logs.
Tools: Artifacts of tools (e.g., DLL or EXE names or hashes) that the attacker is using can be detected via the Unknown Process module within EventTracker, via the Change Audit feature.
Tactics, Techniques & Procedures: An example is detecting Pass-the-Hash attacks, as called out by the NSA in their white paper and discussed in our webinar “Spotting the Adversary with Windows Event Log Monitoring.”
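As a small illustration of the bottom rung of the pyramid, the sketch below checks files against a set of known-bad hashes; the hash value shown is a placeholder, not a real indicator. It is cheap for a defender to run, and just as cheap for an adversary to defeat by recompiling, which is exactly the point of the pyramid.

```python
import hashlib
from pathlib import Path

# Placeholder indicator set; real hash values would come from a threat feed
KNOWN_BAD_SHA1 = {"0000000000000000000000000000000000000000"}

def sha1_of(path: Path) -> str:
    """Return the SHA1 hex digest of a file's contents."""
    return hashlib.sha1(path.read_bytes()).hexdigest()

for f in Path(".").iterdir():
    if f.is_file() and sha1_of(f) in KNOWN_BAD_SHA1:
        print("match on known-bad hash:", f)
```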

Bottom line: Having Threat Intelligence is not the same as using it effectively. The former is something you can buy, the latter is something you develop as a capability. It not only requires tools but also persistent, well trained humans.

Want both? Consider SIEM Simplified.

What good is Threat Intelligence integration in a SIEM?

Bad actors/actions are more and more prevalent on the Internet. Who are they? What are they up to? Are they prowling in your network?

The first two questions are answered by Threat Intelligence (TI); the answer to the last one can be provided by a SIEM that integrates TI into its functionality.

But wait, don’t buy just yet, there’s more, much more!

Threat Intelligence when fused with SIEM can:
• Validate correlation rules and improve baselining of alerts by upping the priority of rules that also point at TI-reported “bad” sources
• Detect owned boxes, bots, etc. that call home when on your network
• Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
• Historical matching of past log data to current TI data (a small sketch follows this list)
• Review past TI history as key context for reviewed events, alerts, incidents, etc.
• Enable automatic action due to better context available from high-quality TI feeds
• Run TI effectiveness reports in a SIEM (how much TI leads to useful alerts and incidents?)
• Validate web server logs source IP to profile visitors and reduce service to those appearing on bad lists (uncommon)
and the beat goes on…
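The historical matching bullet deserves a small sketch. When a new indicator arrives, you want to ask not only “are we talking to it now?” but “have we ever talked to it?”; the example below (field names and formats are assumptions) walks stored connection records and reports first and last contact with a newly listed address.

```python
from datetime import datetime

new_indicator = "203.0.113.45"   # IP just published by a TI feed

# Hypothetical archive of past connection logs: (timestamp, source, destination)
history = [
    ("2015-03-02 11:14:05", "10.1.2.3", "203.0.113.45"),
    ("2015-03-09 22:41:17", "10.1.2.3", "203.0.113.45"),
    ("2015-03-10 08:02:33", "10.1.2.9", "93.184.216.34"),
]

hits = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), src)
        for ts, src, dst in history if dst == new_indicator]

if hits:
    first, last = min(hits)[0], max(hits)[0]
    print(f"{len(hits)} past contact(s) with {new_indicator}, first {first}, last {last}")
```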

Want the benefits of SIEM without the heavy lifting involved? SIEM Simplified  may be for you.

Gathering logs or gathering dust?

Did you wrestle your big name SIEM vendor into throwing in their “enterprise class” solution for a huge discount as part of the last negotiation? If so, good for you – you should be pleased with yourself for wrangling something so valuable out of them. 90% discounts are not unheard of, by the way.

But do you know why they caved and included it? It’s because there is very high probability that you really won’t ever obtain any significant value from it.

You see, the “enterprise class” SIEM solutions from the top name vendors all require significant trained staff to even get them up and running, never mind tuning them and delivering any real value. They figure you probably just don’t have the staff or the time to do any of that, so they can just give it away at that huge discount. It adds some value to their invoice, prevents any other vendor from horning in on their turf, and makes you happy – what’s not to like?

The problem of course is that you are not any closer to solving any of the problems that a SIEM can address. Is that ok with you? If so, why even bother to pay that 10%?

From a recent webinar on the topic by Gartner Analyst Anton Chuvakin:

Q: For a mid-size company what percent of time would a typical SIEM analyst spend in monitoring / management of the tool – outstanding incident management?
A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.

So is your SIEM gathering logs? Or gathering dust?

If the latter, give us a call! Our SIEM Simplified service can take the sting out of the bite.

Why add more hay?

Recent terrorist attacks in France have shaken governments in Europe. The difficulty of defending against insider attacks is once again front and center. How should we respond? The UK government seems to feel that greater mass surveillance is a proper response. The Communications Data Bill  proposed by Prime Minister Cameron would compel telecom companies to keep records of all Internet, email, and cellphone activity. He also wants to ban encrypted communications services.

This approach would add even more massive data sets for analysis by computer programs than currently thought to be analyzed by NSA/GCHQ, in hopes that algorithms would be able to pinpoint the bad guys. Of course France has blanket surveillance but that did not prevent the Charlie Hebdo attack.

In the SIEM universe, the equivalent would be to gather every log from every source in hopes that attacks could be predicted and prevented. In practice, accepting data like this into a SIEM solution reduces it to a quivering mess of barely functioning components. In fact, the opposite approach, “output driven SIEM”, is favored by experienced implementers.

Ray Corrigan writing Mass Surveillance Will Not Stop Terrorism  in the New Scientist notes “Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle-in-a-haystack problem. You don’t make it easier by throwing more needleless hay on the stack.”

Threat Intelligence – Paid or Free?

Threat Intelligence (TI) is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response to that menace or hazard. The challenge is that leading indicators of risk to an organization are difficult to identify when the organization’s adversaries, including their thoughts, capabilities and actions, are unknown. Therefore “black lists” of various types have become popular, listing top attackers, spammers, poisoned URLs, malware domains and so on. These lists are either community maintained and free (e.g., SANS DShield), paid for by your tax dollars (e.g., InfraGard) or paid services.

EventTracker 7.6 introduced formal support to automatically import and use such lists. We are often asked the question, which list(s) to use. Is it worth it to pay for TI service? Here is our thinking on the subject:

– External v/s Internal
In most cases, we find “white lists” to be much smaller, more effective and easier to tune/maintain than any “black list”. EventTracker supports the generation of such White lists from internal sources (the Change Audit feature) or the list of known good IP ranges (internal range, your Amazon EC2 or Azure instances, your O365 instances etc). Using the NOTIN match option of the Behavior module gives you a small list of suspicious activities (grey list) which can be quickly sorted to either black or white for future processing. As a first step, this is a quick/inexpensive/effective solution.
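A minimal sketch of the NOTIN idea follows; the ranges are examples only and this is not EventTracker’s implementation. Anything not inside a known-good range lands on a small grey list for a human to sort.

```python
import ipaddress

# Example known-good ranges: internal space plus an assumed cloud/O365 block
KNOWN_GOOD = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "52.96.0.0/12")]

def whitelisted(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_GOOD)

observed = ["10.1.2.3", "203.0.113.45", "52.96.5.5"]
grey_list = sorted(ip for ip in observed if not whitelisted(ip))
print("grey list for review:", grey_list)   # ['203.0.113.45']
```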

– Paid v/s Free
Free services include well regarded sources such as shadowservers.org, abuse.ch, dshield.org, FBI InfraGard, US CERT and EventTracker ThreatCenter (a curated list of low volume, high confidence sources formatted for quick import into EventTracker). Many customers in industry verticals (e.g., electric power) have lists circulated within their community.

If you are thinking of paid services, then ask yourself:

– Will the feed allow me to detect threats faster? (e.g., a feed of top attackers updated in real-time v/s once in 6/12 hours). If faster is your motivation, are you able to respond to the threat detection faster? If the threat is detected at 8PM on a Friday, when will you be able to properly respond (not just acknowledge)?

– Will the feed allow me to detect threats better? i.e., you would have missed a threat if it had not been for that paid feed. At this time, many paid services for tactical TI are aggregating, cleaning and de-duplicating free sources and/or offering analysis that is also available in the public domain (e.g., McAfee and Kaspersky analysis of Dark Seoul, the malware that created havoc at Sony Pictures, is available from US CERT).

Bottom line, Threat Intelligence is an excellent extension to SIEM solutions. The order of implementation should be internal/whitelist first, external free lists next and finally paid services to cover any remaining gaps.

Looking for 80% coverage at 20% cost? Let us do the detection with SIEM Simplified so you can remain focused on remediation.

Why Risk Classification is Important

Traditional threat models posit that it is necessary to protect against all attacks. While this may be true for a critical national defense network, it is unlikely to be true for the typical commercial enterprise. In fact many technically possible attacks are economically infeasible and thus not attempted by typical attackers.

This can be inferred by noting that most users ignore security precautions and yet escape regular harm. Most assets escape exploitation because they are not targeted, not because they are impregnable.

As Cormac Herley points out, “a more realistic view is that we start with some variant of the traditional threat model, e.g., ‘it is necessary and sufficient to defend against all attacks,’ but then modify it in some way, e.g., ‘defense effort should be appropriate to the assets.’ However, while the first statement is absolute, and has a clear call-to-action, the qualifier is vague and imprecise. Of course we can’t defend against everything, but on what basis should we decide what to neglect?”

One way around this is risk classification. The more you have to lose, the harder you must make it for the attacker. If you can make the cost of the attack greater than its monetization value, then a financially motivated attacker will move on, as it’s not worth it.
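A back-of-the-envelope version of that reasoning, with entirely made-up numbers, looks like this:

```python
# Attacker economics, illustrative figures only
cost_of_attack = 5_000          # effort, tooling and time to breach this target
chance_of_success = 0.30
expected_monetization = 10_000  # what the stolen data could realistically fetch

expected_gain = chance_of_success * expected_monetization - cost_of_attack
print(f"Expected gain to attacker: ${expected_gain:,.0f}")
# Negative (-$2,000): a financially motivated attacker moves on to a softer target
```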

Want to present a hard target to attackers at an efficient price? Consider our SIEM Simplified service. You can get 80% of the value of a SIEM for 20% of the do-it-yourself price.

How many people does it take to run a SIEM?

You must have heard light bulb jokes, for example:
How many optimists does it take to screw in a light bulb? None, they’re convinced that the power will come back on soon.

So how many people does it take to run a SIEM?
Let me count the ways.

Assuming the SIEM has been installed and configured properly (i.e., in accordance with the desired use cases), a few different skill sets are needed (these can all be the same person, but that is quite rare).

SIEM Admin: This person handles the RUN function and will maintain the product in operational state and monitor its up-time. Other duties include deploying updates from the vendor and optimizing system performance. This is usually a fraction of a full time equivalent (FTE). About 4-8 hours/week for the typical EventTracker installation.

Security Analyst: This person handles the WATCH function and uses EventTracker for security monitoring. In the case of an incident, reviews activity reports and investigates alerts. Depending on the extent of the infrastructure being monitored, this can range from a fraction of an FTE to several FTEs. Plan for coverage on weekends and after hours. Incident response may require notification of other admin personnel.

SIEM Expert: This person handles the TUNE function and refines/customizes the SIEM rules/content and creates rules to support new use cases. This function requires the highest skill level, familiarity with the network and expertise with the SIEM product.

Back to the (bad) joke:
Q. So how many people does it take to run a SIEM?
A. None! The vendor said it manages itself!

How much security investment is enough?

In the last few weeks of 2014, in the aftermath of the Sony hack, the attacks at many retailers and the incessant news on Shellshock, POODLE and many other vulnerabilities, many managers are considering 2015 budgets, and the eternal question “how much to invest in IT security” is a common one.

It sometimes seems that there is no limit and that the more you spend, the lower your risk. But the Gordon-Loeb model says that is in fact not the case.

As pointed out by the RH Smith College at the University of Maryland:
The security of information is a fundamental concern to organizations operating in the modern digital economy. There are technical, behavioral, and organizational aspects related to this concern. There are also economic aspects of information security. One important economic aspect of information security (including cybersecurity) revolves around deriving the right amount an organization should invest in protecting information. Organizations also need to determine the most appropriate way to allocate such an investment. Both of these aspects of information security are addressed by Drs. Lawrence A. Gordon and Martin P. Loeb – See more here.

The focus of the Gordon-Loeb Model is to present an economic framework that characterizes the optimal level of investment to protect a given set of information. The model shows that the amount a firm should spend to protect information should generally be only a small fraction of the expected loss. More specifically, it shows that it is generally uneconomical to invest in information security activities (including cybersecurity related activities) more than 37 percent of the expected loss that would occur from a security breach. For a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities that are directed at improving the security of information sets with a medium level of vulnerability.
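A worked example of that ceiling, with made-up figures:

```python
# Gordon-Loeb back-of-the-envelope, illustrative numbers only
breach_probability = 0.10        # chance of a breach over the planning period
loss_if_breached = 2_000_000     # estimated loss in dollars

expected_loss = breach_probability * loss_if_breached   # $200,000
ceiling = 0.37 * expected_loss                          # roughly 1/e of the expected loss
print(f"Expected loss: ${expected_loss:,.0f}; investment ceiling: ${ceiling:,.0f}")
# Spending much beyond ~$74,000 here is, per the model, uneconomical
```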

Want the most for your 37% of expected loss? Consider SIEM Simplified.

What is a Stolen Credit Card Worth?

Solution Providers for Retail
Guest blog by A.N. Ananth

Cybercrime and credit card theft have been a hot topic all year. From the Target breach to Sony, the classic motivation for cybercriminals is profit. So how much is a stolen credit card worth?

The reason it is important to know the answer to this question is that it is the central motivation behind the criminal. If you could make it more expensive for a criminal to steal a card than what the thief would gain by selling them, then the attackers would find an easier target. That is what being a hard target is all about.

This article suggests prices of $35-$45 for a stolen credit card depending upon whether it is a platinum or corporate card. It is also worth noting that the viable lifetime of a stolen card is at most one billing cycle. After this time, the rightful owner will most likely detect its loss or the bank fraud monitor will pick up irregularities and terminate the account.

Why is a credit card with a high spending limit (say $10K) worth only $35? Because monetizing a stolen credit card is difficult and requires a lot of expensive effort on the part of the criminal. That is contrary to the popular press, which suggests that cybercrime results in easy billions. At the Workshop on Economics of Information Security, Herley and Florencio showed in their presentation, “Sex, Lies and Cybercrime Surveys,” that widely circulated estimates of cybercrime losses are wrong by orders of magnitude. For example:

Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion. …Cyber-crime losses follow very concentrated distributions where a representative sample of the population does not necessarily give an accurate estimate of the mean. They are self-reported numbers which have no robustness to any embellishment or exaggeration. They are surveys of rare phenomena where the signal is overwhelmed by the noise of misinformation. In short they produce estimates that cannot be relied upon.

That’s a rational, fact based explanation as to why the most basic of information security is unusually effective in most cases. Pundits have been screaming this from the rooftops for a long time. What are your thoughts?

Read more at Solution Provider for Retail guest blog.

Are honeypots illegal?

In computer terminology, a honeypot is a computer system set up to detect, deflect, or in some manner counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network but is actually isolated and monitored, and seems to contain information or a resource of value to attackers.

Lance Spitzner covers this topic from his (admittedly) non-legal perspective.

Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have this misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.”

Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot but the exemption under Service Provider Protection is key. What this exemption means is that security technologies can collect information on people (and attackers), as long as that technology is being used to protect or secure your environment. In other words, these technologies are now exempt from privacy restrictions. For example, an IDS sensor that is used for detection and captures network activity is doing so to detect (and thus enable organizations to respond to) unauthorized activity. Such a technology is most likely not considered a violation of privacy as the technology is being used to help protect the organization, so it falls under the exemption of Service Provider Protection. Honeypots that are used to protect an organization would fall under this exemption.

Does it expose us to liability?
Liability is not a criminal issue, but a civil one. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those systems may sue. The argument would be that if you had taken proper precautions to keep your systems secure, the attacker would not have been able to harm their systems, so you share the fault for any damage that occurred during the attack. The issue of liability is one of risk. Anytime you deploy a security technology (even one without an IP stack), that technology comes with risk. For example, there have been numerous vulnerabilities discovered in firewalls, IDS systems, and network sniffers. Honeypots are no different.

Obviously this blog entry is not legal advice and should not be construed as such.

SIEM or Log Management?

Security Information and Event Management (SIEM) is a Gartner-coined term to describe solutions which monitor and help manage user and service privileges, directory services, and other system configuration changes, in addition to providing log auditing and review, and incident response.

SIEM differs from Log Management, which refers to solutions which deal with large volumes of computer-generated log messages (also known as audit records, event-logs, etc.)

Log management is aimed at general system troubleshooting or incident response support. The focus is on collecting all logs for various reasons. This “input-driven” approach tries to get every possible bit of data.

This model fails with SIEM-focused solutions. Opening the floodgates, admitting any and all log data into the tool first, and only then considering what (if any) use there is for the data, reduces tool performance as it struggles to cope with the flood. More preferable is an “output-driven” model where data is admitted if and only if its usage is defined. This use can be defined to include alerts, dashboards, reports, behavior profiling, threat analysis, etc.
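As a toy illustration of the output-driven model (the event IDs and uses are chosen for the example, not a recommended rule set), a record is admitted only if it maps to a defined output:

```python
# Each admitted event ID must map to a defined use: alert, report, dashboard, ...
DEFINED_USES = {
    4625: "alert: failed logon",
    1102: "alert: audit log cleared",
    4720: "report: account created",
}

def admit(record: dict) -> bool:
    return record.get("event_id") in DEFINED_USES

incoming = [
    {"event_id": 4625, "host": "dc1"},   # kept: feeds a defined alert
    {"event_id": 5156, "host": "dc1"},   # dropped: noisy, no defined use yet
]
admitted = [r for r in incoming if admit(r)]
print(admitted)
```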

Buying a SIEM solution and using it as log management tool is a waste of money. Forcing a log management solution to act like a SIEM is folly.

The Security Risks of Industry Interconnections

2014 has seen a rash of high profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target, Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford among others. Hackers were able to steal hundreds of millions of credit and debit cards; from the information disclosed, this accounted for 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels, and 56 million from Home Depot.

The Identity Theft Resource Center (ITRC) reports that to date in 2014, 644 security breaches have occurred, an increase of 25.3 percent over last year. By far the majority of breaches targeted payment card data along with personal information like social security numbers and email addresses, and personal health information, and it estimates that over 78 million records were exposed.

Malware installed using third party credentials was found to be among the primary causes of the breaches in post-incident analysis. Banks and financial institutions are critically dependent on their IT infrastructure and are also constantly exposed to attacks because of Sutton’s Law. Networks are empowering because they allow us to interact with other employees, customers and vendors. However, it is often the case that industry partners have a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnection is a favorite tactic of attackers. After all, a frontal brute force attack on a well-defended large corporation’s doors is unlikely to be successful.

The Weak Link

The attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are also proud of their large customers and keen to highlight the connection. Likewise, companies often provide a surprising amount of information meant for vendors on public sites for which logins are not necessary. This makes the first step of researching the target and their industry interconnections easier for the attacker.

The next step is to compromise the subcontractor’s network and find employee data. Social networking sites like LinkedIn are a boon to attackers, used to build lists of IT administrators and management staff who are likely to be privileged users. In West Virginia, state agencies were victims when malware infected computers of users whose email addresses ended with @wv.gov. From there, the attacker gains access to the contractor’s privileged users’ workstations and, finally, breaches the ultimate target. In one retailer breach, the network credentials given to a heating, air conditioning, and refrigeration contractor were stolen in a phishing attack that lodged malware in the contractor’s systems two months before the attackers went after the retailer, their ultimate target.

Good Practices, Good Security

Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors and clients, or anyone who has access to their networks. A Fortune 500 company has access to resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?

Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices as part of due diligence. Before network access is granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications, and of its control over them, can help the enterprise develop an effective vendor management profile. It also gives the enterprise visibility into what information will be shared and who has access to it.

Controlled Access: Third-party access should be restricted and compartmentalized to only a segment of the network, with access to other assets blocked. Likewise, the organization can require that vendors and third parties use particular technologies for remote access, which enables the enterprise to catalog which connections are being made to the network.

Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behavior that can presage the beginning of a breach (a minimal sketch of this baseline idea appears after this list).

Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.
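
As a minimal illustration of the baseline idea mentioned under Active Monitoring, the Python sketch below learns which hosts a vendor account normally reaches and flags connections that fall outside that baseline. The account and host names are hypothetical, and a real SIEM would learn from far richer data than this.

# Minimal sketch of baselining vendor remote-access behavior (hypothetical data).
# Connections seen during a learning window form the baseline; later connections
# outside that baseline are flagged for review.
from collections import defaultdict

baseline = defaultdict(set)   # vendor account -> set of hosts it normally reaches

def learn(account, host):
    baseline[account].add(host)

def matches_baseline(account, host):
    # True if this connection looks like previously observed behavior.
    return host in baseline[account]

# Learning window: the HVAC vendor only ever reaches the vendor portal segment.
learn("hvac-vendor", "vendor-portal-01")

# Later: a connection to a POS server falls outside the baseline and raises an alert.
for account, host in [("hvac-vendor", "vendor-portal-01"), ("hvac-vendor", "pos-server-03")]:
    status = "OK" if matches_baseline(account, host) else "ALERT: outside baseline"
    print(account, "->", host, ":", status)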

What next?

Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.

SIEM is Sunlight

Security Information and Event Management (SIEM) refers to technology that provides real-time analysis of security alerts generated by network hardware and applications. SIEM works by gathering, analyzing, and presenting information from a variety of sources across the enterprise network, including network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database, and application logs; and external threat data.
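
As a rough illustration of that gathering-and-analyzing step, the Python sketch below normalizes records from two different sources into one common shape so that a single rule or query can span both. The field names and sample records are assumptions chosen for illustration, not any product’s actual schema.

# Minimal sketch of normalizing events from different sources into one common
# shape before analysis (field names are illustrative, not a real product schema).

def normalize_windows(rec):
    return {"time": rec["TimeCreated"], "entity": rec.get("TargetUserName"),
            "action": "logon_failure" if rec["EventID"] == 4625 else "other",
            "source": "windows"}

def normalize_firewall(line):
    # Example raw line: "2014-11-02T10:15:00 DENY 10.0.0.5 192.168.1.20:443"
    ts, verdict, src, dst = line.split()
    return {"time": ts, "entity": src,
            "action": "deny" if verdict == "DENY" else "allow",
            "source": "firewall"}

events = [
    normalize_windows({"TimeCreated": "2014-11-02T10:14:58",
                       "EventID": 4625, "TargetUserName": "jdoe"}),
    normalize_firewall("2014-11-02T10:15:00 DENY 10.0.0.5 192.168.1.20:443"),
]

# With a common shape, one timeline or rule can correlate activity across both sources.
for e in sorted(events, key=lambda e: e["time"]):
    print(e["time"], e["source"], e["action"], e["entity"])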

All compliance frameworks, including PCI-DSS, HIPAA, FISMA, and NERC, call for the implementation and regular use of SIEM technology. The absence of regular use is noted as a major factor in post-mortem analyses of IT security incidents.

Why is this the case? It’s because SIEM, when implemented properly, gathers security data from all the nooks and crannies of the enterprise network. When this information is collated and presented well, an analyst can see what is happening, what happened, and what is different.

It’s akin to letting sunlight into all the corners and hidden places. You can see better, much better.

You can’t fix what you can’t see and don’t know. Knowledge of the goings-on in the various parts of the network, in real-time when possible, is the first step towards building a meaningful security defense.

Three key advantages for SIEM-As-A-Service

Security Information and Event Management (SIEM) technology is an essential component of a modern defense-in-depth strategy for IT security, and is described as such in every best-practice recommendation from industry groups and security pundits. The absence of SIEM is repeatedly noted in the Verizon Data Breach Investigations Report as a factor in the late discovery of breaches. Indeed, attackers are most often successful against soft targets where defenders do not review logs and other security data. In addition, regulatory compliance standards such as PCI-DSS, HIPAA, and FISMA specifically require that SIEM technology be deployed and, more importantly, used actively.

This last point (“be used actively”) is the Achilles’ heel for many organizations; as has often been noted, “security is something you do, not something you buy.” Organizations large and small struggle to assign staff with the necessary expertise and to maintain the discipline of periodic log review.

New SIEM-As-A-Service options

SIEM Simplified services are available for buyers that cannot leverage traditional on-premises, self-serve products. In such models, the vendor assumes responsibility for as much (or as little) of the heavy lifting as the user desires, including installation, configuration, tuning, periodic review, updates, and responding to incident investigation or audit support requests.

Such offerings have three distinct advantages over the traditional self-serve, on-premises model.

1) Managed Service Delivery: The vendor is responsible for the most “fragile” and “difficult to get right” aspects of a SIEM deployment: installation, configuration, tuning, and periodic review of SIEM data. This can also include upgrades, performance management to keep response times fast, and updates to security threat intelligence feeds.
2) Deployment options: In addition to the traditional on-premises model, such services usually offer cloud-based, managed-hosted, or hybrid solutions. Options for host-based agents and/or premises-based collectors/sensors allow great flexibility in deployment.
3) Utility pricing: In contrast with traditional perpetual-license models, which require up-front capital expenditure, SIEM-As-A-Service follows a utility model with usage-based pricing and monthly expenditure. This is friendly to operational expenditure budgets. A small worked example follows this list.
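
To make the pricing contrast concrete, here is a small worked example in Python. All figures are made-up assumptions, not actual vendor pricing; the point is the difference in up-front commitment between the two models.

# Illustrative comparison of perpetual (CapEx) vs. usage-based (OpEx) pricing.
# All figures are hypothetical assumptions, not vendor pricing.
perpetual_license = 100_000      # assumed up-front license cost
annual_maintenance = 20_000      # assumed yearly support (20% of license)
monthly_service_fee = 4_000      # assumed SIEM-As-A-Service subscription

years = 3
capex_total = perpetual_license + annual_maintenance * years   # 160,000
opex_total = monthly_service_fee * 12 * years                  # 144,000

print("3-year perpetual/on-premises total:", capex_total)
print("3-year as-a-service total:         ", opex_total)
print("Up-front cash required (perpetual):", perpetual_license)
print("Up-front cash required (service):  ", monthly_service_fee)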

SIEM is a core technology in the modern IT enterprise. New as-a-service deployment models can increase the adoption and value of this complex monitoring technology.