The Agent Advantage

For some time, “We use an agent for that” was a death knell for many security tools, while “agent-less” was the only game in town worth playing. Yes, people tolerate AV and device management agents, but that is where many organizations seemed to draw the line. An agent just to collect logs? You’ve got to be kidding!

In a blog post from 2006, Richard Bejtlich pointed out that enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities.

Let’s not confuse the means with the end. The end is “security information/event monitoring”; getting the logs is the means to that end. The threatscape of 2015 is dominated by polymorphic, persistent malware (dropped via phishing and stolen credentials), but our mission remains the same: defend the network.

Malware doesn’t write logs, but it does leave trace evidence behind on the host. This is evidence you cannot get by monitoring the network. In any case, the rise of HTTPS by default has limited the ability of network monitors to peer inside the payload.

Thus the Agent Advantage, or the Sensor Advantage if you prefer.

Endpoints have firsthand information when it comes to non-signature-based attacks: processes, file accesses, configuration changes, network traffic, and more. This data is critical to the early detection of malicious activity.

Is an “agent” just to collect logs not doing it for you? How about a “sensor” that gathers the endpoint data critical to detecting persistent cyber attacks? That is the EventTracker 8 sensor, which incorporates DFIR and UBA.



Why host data is essential for DFIR

Attacks on our IT networks are a daily fact of life. As defenders, our job is to make the attacker’s life harder and to deter them into going elsewhere. Almost any attack inevitably leaves some type of host artifact behind.

If defenders can quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target host and are usually not visible to network monitors.

Many modern attacks drop and execute malware on the target machine, or hollow out existing valid processes to spawn child processes that can be hijacked.

A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, the malware may be called svhost.exe. Another tactic is to keep the same name as the legitimate EXE but execute it from a different path.
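
To make the tactic concrete, here is a minimal, hypothetical sketch (not EventTracker’s implementation) of how a sensor might flag both tricks: a near-miss spelling of a well-known process name, or a correct name launched from the wrong path. The KNOWN_GOOD table and check_process function are illustrative assumptions.

# Flag near-miss process names and legitimate names run from unexpected paths.
import difflib

KNOWN_GOOD = {
    "svchost.exe":  r"c:\windows\system32\svchost.exe",
    "lsass.exe":    r"c:\windows\system32\lsass.exe",
    "explorer.exe": r"c:\windows\explorer.exe",
}

def check_process(name: str, path: str) -> str:
    name, path = name.lower(), path.lower()
    if name in KNOWN_GOOD:
        # Same name as a legitimate EXE: verify it runs from the expected path.
        if path != KNOWN_GOOD[name]:
            return f"SUSPECT: {name} launched from unexpected path {path}"
        return "ok"
    # Near-miss spelling of a legitimate name (e.g. svhost.exe vs svchost.exe).
    close = difflib.get_close_matches(name, KNOWN_GOOD, n=1, cutoff=0.85)
    if close:
        return f"SUSPECT: {name} looks like {close[0]}"
    return "unknown"

print(check_process("svhost.exe", r"c:\windows\temp\svhost.exe"))
print(check_process("svchost.exe", r"c:\windows\temp\svchost.exe"))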

EventTracker 8 includes a new module called Advanced Security Analytics, which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE, including its hash, its full path name, its parent process, the publisher name, and whether it is digitally signed. Then, at the EventTracker Console, if the hash is being seen for the first time, it is compared against lists of known malware from sources such as virustotal.com and virusshare.com. Analysts can also check whether the EXE was digitally signed, and by which publisher, to determine if further investigation is warranted.
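
A rough sketch of that first-seen hash check, under the assumption that a known-bad hash list has already been downloaded locally (for example, from the exports virusshare.com publishes); the file name known_bad_sha256.txt and the seen_hashes store are made up for illustration:

import hashlib

def sha256_of(path: str) -> str:
    # Hash the EXE in chunks so large binaries don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with open("known_bad_sha256.txt") as f:       # one lowercase hash per line
    known_bad = {line.strip() for line in f}

seen_hashes = set()                           # hashes already vetted

def on_process_launch(exe_path: str) -> None:
    digest = sha256_of(exe_path)
    if digest in seen_hashes:
        return                                # already evaluated earlier
    seen_hashes.add(digest)
    if digest in known_bad:
        print(f"ALERT: {exe_path} matches known-malware hash {digest}")
    else:
        print(f"first sighting of {exe_path} ({digest}); queue for review")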

When tuned properly, this capability results in a low false positive rate and can be used to rapidly detect attackers.

Want more information on EventTracker 8? Click here.



User location affinity

It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials figure in a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytics package with behavior rules that self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive logins. Once a baseline of behavior is established, out-of-the-ordinary behavior is flagged for investigation. This is done in real time and across all enterprise assets.

For example, if user susan typically logs into wks5, but her stolen credentials are now used to log into server6, this would be identified as out of the ordinary and tagged for closer inspection.
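
A minimal sketch of the idea, assuming interactive logon events arrive as (user, host) pairs; the function names are illustrative only, and a production baseline would be far richer than this:

from collections import defaultdict

baseline = defaultdict(set)          # user -> hosts seen during training

def train(events):
    """Learn which hosts each user normally logs into."""
    for user, host in events:
        baseline[user].add(host)

def score(user, host):
    """Flag interactive logons to hosts outside a user's baseline."""
    if host in baseline[user]:
        return "normal"
    return f"OUT-OF-ORDINARY: {user} logged into {host}, usually {sorted(baseline[user])}"

train([("susan", "wks5"), ("susan", "wks5"), ("bob", "server6")])
print(score("susan", "wks5"))        # normal
print(score("susan", "server6"))     # flagged for closer inspection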

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response. For a quick introduction click here.



The PCI-DSS Compliance Report

A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).
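
On the “protecting them (10.5)” point, one common technique, offered here as an illustration rather than anything EventTracker-specific or PCI-mandated, is to chain a hash through the archived log so that any later edit invalidates every subsequent digest:

import hashlib

def chain_digests(lines):
    # Each digest covers the previous digest plus the current line,
    # so altering any archived line changes every digest after it.
    prev = "0" * 64                  # fixed genesis value
    out = []
    for line in lines:
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        out.append(prev)
    return out

log = ["user=susan action=login host=wks5",
       "user=susan action=logout host=wks5"]
original = chain_digests(log)
log[0] = "user=susan action=login host=server6"   # tamper with the archive
assert chain_digests(log) != original             # tampering is now evident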

Is this you? Here is how you spell relief – SIEM Simplified.



Which is it? Security or Compliance?

This is a classic chicken-and-egg question, but the two are too often thought to be the same thing. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a technical or non-technical requirement, and periodically someone verifies that you have met it.

Compliance requirements are established by standards bodies, who obviously do not know your network. They are established for the common good, because of industry-wide concern that information is not being protected, usually because security is poor. When you see an emphasis on compliance over security, it is too often because the organization does not want to take the time to ensure that the network and its information are secure, so it relies on compliance requirements to feel better about its security.

The problem is that this gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as breaches like Sony, Target, TJMaxx, and so many others demonstrate. Although there are implementations of compliance that will make you more secure, you cannot base your company’s security policy on a third party’s compliance requirements.

So what comes first? Wrong question! Let’s rephrase: there needs to be a healthy relationship between the two, but one cannot substitute for the other.



Threat data or Threat Intel?

Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good, right? More is better, right? Actually, not quite.
Look closely: you may be confusing “intelligence” with “data.”

As a certain lieutenant commander aboard the Starship Enterprise would tell you, Data is not intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets, and how?”, which would lead to a coherent defense.

The steps to use threat data are easily explained (a sketch of the automatable part follows step 3 below):
1) Compare observations on the local network against the threat data.
2) Alert on matches.

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. Doing it effectively requires human skills that combine expert knowledge of the modern ThreatScape with knowledge of the network architecture.
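
Steps 1 and 2 are the easily automated part. A minimal sketch, assuming the threat feed is a simple set of known-bad IP addresses and local observations are (host, remote_ip) pairs; both are illustrative:

threat_feed = {"203.0.113.7", "198.51.100.23"}   # sample documentation IPs

observations = [
    ("wks5", "93.184.216.34"),
    ("server6", "203.0.113.7"),
]

# Step 1: compare local observations against the threat data.
matches = [(host, ip) for host, ip in observations if ip in threat_feed]

# Step 2: alert on matches. Step 3, deciding whether each alert actually
# warrants remediation, is the human part and is not shown here.
for host, ip in matches:
    print(f"ALERT: {host} communicated with listed indicator {ip}")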

This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion) than expert knowledge of the modern ThreatScape and the contours and limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but (a) such staff are hard to come by, and (b) the budget for them is even harder.

What now?

Consider a co-managed solution like SIEM Simplified, where expert knowledge of the modern ThreatScape, in the context of your network, is provided by an external group. Combined with your internal resources to co-manage the problem, this can result in improved coverage at an affordable price point.



How to shoot yourself in the foot with SIEM


Six ways to shoot yourself in the foot with SIEM technology:
1) Don’t plan; just jump in
2) Have no defined scope or use cases; whatever
3) Confuse SIEM with log management
4) Monitor noise; apply no filters
5) Don’t correlate with any other technologies, e.g., IDS, vulnerability scanners, Active Directory
6) Staff poorly or not at all

For grins, here’s how programmers shoot themselves in the foot:

ASP.NET
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
C
You try to shoot yourself in the foot, but find out that the gun is actually a howitzer cannon.
C++
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
JavaScript
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SQL
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
UNIX
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

Click here for the Top 6 Uses of SIEM.



Venom Vulnerability exposes most Data Centers to Cyber Attacks

Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger as HeartBleed did.

Dubbed VENOM, for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could, in theory, expose most data centers to malware attacks.

Yes, the risk of the Venom vulnerability is theoretical, as no real-world exploitation has been seen yet. Last year’s HeartBleed bug, on the other hand, was practically exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now let’s learn more about Venom:

Venom (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms and, if exploited…

…could allow an attacker to escape from a guest ‘virtual machine’ (VM) and gain full control of the operating system hosting them, as well as any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, the senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

However, Geffner also added that, so far, there is no known working exploit for the vulnerability. Even so, Venom is critical and disturbing enough to be considered a high-priority bug.

What successful exploitation of Venom requires:
An attacker sitting on the guest virtual machine would need sufficient permissions to gain access to the floppy disk controller’s I/O ports.

On a Linux guest machine, an attacker would need either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

Still, Venom and HeartBleed hardly compare. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply would not be exploitable at the same scale.

Flaws like Venom are typically used in highly targeted attacks such as corporate espionage, cyber warfare, or other attacks of that kind.

Did Venom poison cloud services?

Potentially more concerning, most of the large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and were therefore potentially vulnerable to Venom.

However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

Microsoft’s Azure cloud service, on the other hand, uses its own home-grown virtualization hypervisor technology, and therefore its customers are not affected by the Venom bug.

Meanwhile, Google also assured that its Cloud Platform does not use the vulnerable software, and thus was never vulnerable to Venom.

Patch now! Protect yourself

Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to Venom. Red Hat recommends that users update their systems using the commands “yum update” or “yum update qemu-kvm.”

Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them to be on the safe side. Remember, merely restarting the guest operating system without powering it off is not enough, because it would still use the old QEMU binary.

See more at Hacker News.



Five quick wins to reduce exposure to insider threats

Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)

A data breach has serious consequences, both direct and indirect. Lost revenue and a tarnished brand reputation both inflict harm long after incident resolution and post-breach clean-up. Still, many organizations don’t take the necessary steps to protect themselves from a potentially detrimental breach.

But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”

How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. Quick wins also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.

1) Control the use of administrator privileges
The misuse of administrative privileges is a primary method by which attackers spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running with privileged rights is fooled into simply surfing to a website hosting attacker content that can automatically exploit the browser. The file or exploit contains executable code that runs on the victim’s machine. Since the victim’s account has administrative privileges, the attacker can take over the machine completely and install malware to find administrative passwords and other sensitive data.

2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees only have access to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require more strict protection.

3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?

4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of the business. Attackers are very conscious of this and plan their exploitation accordingly: carefully crafting phishing messages that look like routine and expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; and using nominally non-security-critical systems as jump points or bots.

5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks. By this time, incidents often become much more costly than needed. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.

A guiding principle of IT Security is “Prevention is ideal but detection is a must.”

Have you reduced your exposure?



Secure, Usable, Cheap: Pick any two

This fundamental tradeoff between security, usability, and cost is critical. Yes, it is possible to have both security and usability, but at a cost in terms of money, time, and personnel. Making something cost-efficient and usable, or even secure and cost-efficient, may not be very hard; it is far more difficult and time consuming to make something both secure and usable, because security takes planning and resources.

For a system administrator, usability is at the top of the list. For a security administrator, security is at the top of the list. No surprise there, really.

What if I tell you that the two job roles are orthogonal? What gets a sysadmin bouquets will get a security admin brickbats, and vice versa.

Oh, and when we say “cheap,” we mean in terms of effort, whether by the vendor or by the user.

Security administrators face some interesting tradeoffs. Fundamentally, the choice is between a system that is secure and usable, one that is secure and cheap, or one that is cheap and usable. Unfortunately, we cannot have everything. The best practice is not to make the same person responsible for both security and system administration; the goals of those two tasks are far too often in conflict for anyone to succeed at both.