Are You Listening to Your Endpoints?

There’s plenty of interest in all kinds of advanced security technologies like threat intelligence, strong/dynamic authentication, data loss prevention and information rights management. However, many organizations still don’t realize that the most basic indicators of compromise on their network are new processes and modified executables. This matters because in every high-profile data breach of the past few years, a common thread has been the presence of a malicious program that gave the attackers persistent access to the victim organization’s internal network.

Moreover, some security technologies – such as strong authentication – are no defense if you have malicious code running on the endpoint of a strongly authenticated user.

So rapid detection of malicious code is paramount; its importance can’t be overstated. Detecting malicious code isn’t easy, though, and traditional signature-based AV is only going to catch comparatively “old” and widely distributed malware. It isn’t likely to catch the targeted attacks we are up against today, in which the bad guy uses shrink-wrapped tools to build and package a unique malicious agent for use against your organization.

How do you detect and even prevent malware like this?

Like everything else, it takes a defense-in-depth approach. Advanced third-party application whitelisting and advanced memory protection are very effective. But whether you have such technologies deployed or merely on the radar, your SIEM solution can provide early warning when new software is observed on your network.

The key thing is to look for Event ID 4688 in the Windows security log. Compare the executable name in that event to a list of whitelisted EXEs you expect to see – or better yet, a list of executables automatically built from past events.
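As a concrete illustration, here is a minimal sketch of that comparison, assuming the 4688 events have already been exported to CSV with a NewProcessName column; the file names and column name are illustrative, not any specific product’s schema.

import csv

def load_whitelist(path):
    # One expected executable path per line, e.g. C:\Windows\System32\svchost.exe
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_new_executables(events_csv, whitelist):
    # Return executables from 4688 events that are not on the expected list
    unknown = set()
    with open(events_csv, newline="") as f:
        for row in csv.DictReader(f):
            exe = row.get("NewProcessName", "").strip().lower()
            if exe and exe not in whitelist:
                unknown.add(exe)
    return unknown

if __name__ == "__main__":
    whitelist = load_whitelist("expected_exes.txt")
    for exe in sorted(find_new_executables("security_4688_events.csv", whitelist)):
        print("Never-before-seen executable:", exe)

The same whitelist could be seeded automatically from the executables seen in past 4688 events, which is the “build it from history” approach mentioned above.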

You want these events from every possible system – including workstations. If you are concerned about the amount of log data involved, the sponsor of this article, EventTracker, provides an agent that can efficiently forward just the relevant events you want from thousands of endpoints.

Will there be false positives? Yes – especially until you refine your rules to take into account patches.

Will this catch every malicious agent? Of course not. After all, there are multiple ways to insert malicious code on an endpoint, and some are completely in-memory with no new executable involved. Third-party advanced memory protection products or Microsoft’s EMET can provide some help with detecting memory exploits, and if you use EMET or another memory protection technology, using your SIEM to collect and monitor those events is the obvious thing to do.

Some malware embeds itself in existing, trusted EXEs and DLLs, so it makes sense to monitor for modifications to such files. Again, you want this from your workstations – not just server endpoints. Getting EXE/DLL modification events requires either Windows file auditing or a file integrity monitoring (FIM) solution. Enabling auditing of just EXE and DLL files with Windows file auditing, though, is not that easy: you can’t configure audit policy on files with Group Policy without also impacting permissions, which is why widely distributed scripts would be required. FIM is definitely an easier route. Again, it’s worth mentioning that EventTracker’s agent includes FIM monitoring, making it easy to catch changes to existing software as soon as they happen.
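To make the FIM idea concrete, here is a minimal baseline-and-compare sketch over EXE and DLL files; a real FIM product watches continuously rather than on demand, and the directory and baseline file names here are illustrative.

import hashlib
import json
import os

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    # Hash every EXE/DLL under root; skip files that vanish or are locked mid-walk
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith((".exe", ".dll")):
                full = os.path.join(dirpath, name)
                try:
                    baseline[full] = hash_file(full)
                except OSError:
                    pass
    return baseline

if __name__ == "__main__":
    root = r"C:\Program Files"              # illustrative target directory
    baseline_path = "exe_dll_baseline.json"
    current = snapshot(root)
    if os.path.exists(baseline_path):
        with open(baseline_path) as f:
            previous = json.load(f)
        for path, digest in current.items():
            if path in previous and previous[path] != digest:
                print("MODIFIED:", path)
            elif path not in previous:
                print("NEW:", path)
    with open(baseline_path, "w") as f:
        json.dump(current, f)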

The bottom line is this: to stop breaches we’ve got to detect and respond to malicious agent software. Are you listening to your endpoints?

It’s all about detection, not protection

What did the 2015 Verizon DBIR show us?
• 200+ days on average before persistent attackers are discovered within the enterprise network
• 60%+ of breaches are reported by a third party
• 100% of breached networks were up to date on antivirus

We’ve got detection deficit disorder.
And it’s costing us. Direly!

Think of the time and money spent in detecting, with some degree of confidence, the location of Osama Bin Laden. Then think of the time and money to dispatch Seal Team 6 on the mission. Detection took ten years and cost hundreds of millions of dollars while remediation took 10 days and a few million dollars.

The same situation is happening in your network. You have, for example, 5,000 endpoints, and of those, maybe 5 are compromised as you’re reading this. But which endpoints are compromised? How do you get actionable intelligence so that you can dispatch your own Seal Team 6?

This is the problem EventTracker 8 was designed to address: continuous digital forensics data collection using purpose-built sensors. Machine learning at the EventTracker Console sifts through the collected data to identify possible malware, lateral movement and exfiltration of data, and the whole process is backed by the experts of the SIEM Simplified service.

The Detection Deficit

The gap between the ‘time to compromise’ and the ‘time to discover’ is the detection deficit. According to the Verizon DBIR, the trend lines of these two measures have been diverging significantly in the past few years.

Worse yet, the data shows that attackers are able to compromise the victim in days, yet then spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, the discovery is made by a third party.

This trend points to an ongoing detection deficit disorder. The suggestion is that defenders struggle to uncover the indicators of compromise.

While the majority of these attacks are via malware inserted into the victim’s system by a variety of methods, there is also theft of credentials, which makes the activity look like an inside job.

To overcome the detection deficit, defenders must look for other common evidence of compromise. These include: command and control activity, suspicious network traffic, file access and unauthorized use of valid credentials.

EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.

The Agent Advantage

For some time, “We use an agent for that” was a death knell for many security tools, while “agent-less” was the only game in town worth playing. Yes, people tolerate AV and device management agents, but that is where many organizations seemed to draw the line. And an agent just to collect logs? You’ve got to be kidding!

As Richard Bejtlich pointed out in this blog from 2006, enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities.

Let’s not confuse the means with the end. The end is “security information/event monitoring,” while getting the logs is merely the means to that end. Meanwhile, the threatscape of 2015 is dominated by polymorphic, persistent malware (dropped by phishing and stolen credentials), and our mission still remains to defend the network.

Malware doesn’t write logs, but it does leave behind trace evidence on the host – evidence that you can’t get by monitoring the network. In any case, the rise of HTTPS by default has limited the ability of network monitors to peer inside the payload.

Thus the Agent Advantage or the Sensor Advantage if you prefer.

Endpoints have first-hand information when it comes to non-signature-based attacks. This includes processes, file accesses, configuration changes, network traffic, etc. This data is critical to early detection of malicious activity.

Is an “agent” just to collect logs not doing it for you? How about a “sensor” that gathers endpoint data critical to detecting persistent cyber attacks? That is the EventTracker 8 sensor, which incorporates DFIR and UBA.

Strengthen your defenses where the battle is actually being fought – the endpoint

Defense-in-depth holds that every security technology has a place, but are they really all created equal?

Security is not a democratic process and no one is going to complain about security inequality if you are successful at halting breaches. So I think we need to acknowledge a few things. Right now the bad guys are winning on the endpoint – in particular on workstations. One way or another, attackers are getting users to execute bad code on their workstations, allowing the attackers to establish a beachhead and work their way across our networks along a horizontal kill chain until they reach “the goods”. Next-generation firewalls, identity/access control and privileged account management all have a part to play in detecting and slowing down this process. However, we are not doing enough on the endpoint to recognize malicious code and key changes in user and application behavior. Though the strength of NGFWs is their eye-in-the-sky ability to watch network traffic as a whole, they don’t see inside encrypted packets, nor do they know which program inside the endpoint is sending or receiving the observed packets. NGFWs also cannot tell you when that program appeared on the endpoint, how it got there or who executed it.

So am I arguing in favor of collecting endpoint security logs? Including workstations?

If you have more than a handful of workstations, forget trying to collect their logs using any kind of pull/polling method; it just isn’t going to work. Getting all your workstation security logs is challenging, noisy and may not meet your requirements, as most native logs lack important information. If you stick with native logs, you need to implement Windows native Event Forwarding, which is a great technology but right now lacks management tools. What does that mean for most organizations? Agents.

Historically there’s been a lot of pushback against deploying YAA (yet another agent) on workstations simply for the purpose of collecting logs. Like most, I’d have to agree that going through the trouble of installing and maintaining an agent on every workstation when the return is native logs is a tough proposition.

This is why I like what EventTracker has done with its latest update, EventTracker 8, and the powerful detection, behavior analysis and prevention capabilities in its new agent. The argument basically goes like this:
1. We are losing the war on the endpoint front
2. Ergo, we need to beef up defenses on the endpoint
3. But native logs aren’t valuable enough alone to justify installing an agent
4. Conclusion: increase the value of the agent by doing more than just efficiently forwarding logs

EventTracker 8’s Windows agent does much more than just forward logs. In fact, maybe we shouldn’t call it an agent at all – perhaps “sensor” would be a better term.

One of the key things we need to do on endpoints is analyze the programs executing and identify new, suspect and known-bad programs. With native logs all you can get is the name of the program, who ran it and when (event ID 4688). The native log can’t tell you anything about the contents (i.e. the “bits”) of the program, whether it’s been signed, etc.

Every time a process is launched, EventTracker 8 takes the process’s signature, path name and MD5 hash and compares that information against the following (a minimal sketch of the local-whitelist portion of this check appears after the list):
• A local whitelist
• National Software Reference Library
• VirusTotal
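This is not EventTracker’s actual implementation – just a sketch of the local-whitelist leg of such a comparison. The NSRL and VirusTotal lookups would hang off the same hash but require their own datasets or API keys, so they are only indicated in comments.

import hashlib

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_process(image_path, local_whitelist):
    digest = md5_of(image_path)
    if digest in local_whitelist:
        return "known-good (local whitelist)"
    # Not locally known: this is where the same hash would be checked against the
    # NSRL data set and/or a VirusTotal-style reputation service, producing either
    # a "new/suspect process" event or a "known bad" alert.
    return "unknown - escalate for reputation lookup"

if __name__ == "__main__":
    whitelist = {"0123456789abcdef0123456789abcdef"}          # illustrative hash values
    print(check_process(r"C:\Windows\System32\notepad.exe", whitelist))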

This is something you can only do if you have your own bits (i.e. an agent) running on the endpoint. This cannot be done with native logs or even with a NGFW. Below is an example of a “synthetic” event generated by EventTracker that says it all:

[Screenshot: Advanced Search]

I wish Windows had that event.
“But, wait. There’s more!”

Visibility inside the programs running on your endpoints and being able to compare them against internal and external reputation data is extremely valuable for detecting and stopping attacks. But if we have a good agent on the endpoint, we can do even more by analyzing what each program is doing on the network. What other systems is it trying to access internally, and where is it sending data out on the Internet?

Here’s an example of what EventTracker 8 does with that information. How would you like to know whenever a non-browser application connects to a standard port on some unnamed system on the Internet? Check out the event below.

[Screenshot: Advanced Search details]
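Outside of a SIEM, you can approximate this kind of check on a single host. Here is a minimal sketch using the third-party psutil package (an assumption for illustration; it is not how the EventTracker sensor is built): flag any process not on a short browser list that holds an outbound connection to a common web port.

import psutil

BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe", "iexplore.exe"}
WEB_PORTS = {80, 443}

def suspicious_connections():
    # Walk established connections and report non-browser owners talking to web ports
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.port not in WEB_PORTS:
            continue
        try:
            name = psutil.Process(conn.pid).name().lower() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if name not in BROWSERS:
            yield name, "%s:%d" % (conn.raddr.ip, conn.raddr.port)

if __name__ == "__main__":
    for name, remote in suspicious_connections():
        print("Non-browser process", name, "connected to", remote)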

If you are versed in malware techniques, you realize that discrete EXEs are not the only way attackers get arbitrary code to run on target systems. They have developed many different ways to hide malicious code inside legitimate processes. One way EventTracker detects this is by looking for suspicious threads injected into commonly abused processes like svchost.exe. EventTracker also does sophisticated analysis of the user – not just programs – and alerts you when it sees suspicious combinations of user account, destination and source IP addresses.

EventTracker combines all the data that can only be obtained with an endpoint agent with general blacklist data from outside security organizations and specific whitelist data automatically built from internal activity. This is a great example of what you can do once you have your own code running on the endpoint. Combine native logs from each endpoint with all this other information and you are ahead of the game.

Why host data is essential for DFIR

Attacks on our IT networks are a daily fact of life. As a defender, your job is to make the attacker’s life harder and to deter them into going elsewhere. Almost inevitably, any attack leaves some type of host artifact behind.

If defenders are able to quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target/host and usually not visible to network monitors.

Many modern attacks use malware that is dropped and executed on the target machine, or that hollows out existing valid processes to spawn child processes that can be hijacked.

A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, then the malware may be called svhost.exe. Another tactic is to maintain the same name as the legitimate EXE but have it executed from a different path.
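Here is a minimal sketch of how such blend-in tricks can be caught on a single host – a near-miss name like svhost.exe, or the right name running from the wrong path. It assumes the third-party psutil package and a small table of expected paths; it is an illustration, not EventTracker’s method.

import difflib
import psutil

EXPECTED = {
    "svchost.exe": r"c:\windows\system32\svchost.exe",
    "lsass.exe":   r"c:\windows\system32\lsass.exe",
}

def check_processes():
    for proc in psutil.process_iter(["name", "exe"]):
        name = (proc.info["name"] or "").lower()
        path = (proc.info["exe"] or "").lower()
        if name in EXPECTED:
            # Right name, wrong path
            if path and path != EXPECTED[name]:
                yield name + " running from unexpected path " + path
            continue
        # Near-miss of a well-known name, e.g. svhost.exe vs svchost.exe
        close = difflib.get_close_matches(name, list(EXPECTED), n=1, cutoff=0.85)
        if close:
            yield name + " looks suspiciously like " + close[0]

if __name__ == "__main__":
    for finding in check_processes():
        print(finding)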

EventTracker 8 includes a new module called Advanced Security Analytics which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE, including its hash, its full path name, its parent process, the publisher name and whether or not it is digitally signed. Then, at the EventTracker Console, if the hash is being seen for the first time, it gets compared to lists of known malware from sources such as virustotal.com, virusshare.com, etc. Analysts can also check the publisher name, digital signature and source of the EXE to determine whether further investigation is warranted.

When tuned properly, this capability results in few false positives and can be used to rapidly detect attackers.

Want more information on EventTracker 8? Click here.

User location affinity

It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials make up a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytics package with behavior rules that self-learn user location affinity and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive logins. Once a baseline of behavior is established, out-of-the-ordinary behavior is flagged for investigation. This is done in real time and across all enterprise assets.

For example, if user susan typically logs into wks5, but her stolen credentials are now used to log into server6, this would be identified as out of the ordinary and tagged for closer inspection.
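The underlying baseline-then-flag idea is simple; here is a minimal sketch of it (not EventTracker’s actual heuristics), with an illustrative record format of (user, host) pairs.

from collections import defaultdict

def build_baseline(history):
    # history: iterable of (user, host) pairs from past interactive logons
    baseline = defaultdict(set)
    for user, host in history:
        baseline[user].add(host)
    return baseline

def flag_anomalies(baseline, new_events):
    for user, host in new_events:
        if host not in baseline.get(user, set()):
            yield user, host

if __name__ == "__main__":
    past = [("susan", "wks5"), ("susan", "wks5"), ("bob", "wks9")]
    today = [("susan", "server6"), ("bob", "wks9")]
    baseline = build_baseline(past)
    for user, host in flag_anomalies(baseline, today):
        print("Out-of-ordinary logon:", user, "->", host)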

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response.

The PCI-DSS Compliance Report

A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).

Is this you? Here is how you spell relief – SIEM Simplified.

Which is it? Security or Compliance?

This is a classic chicken-and-egg question, but the two are too often thought to be the same. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a set of technical or non-technical requirements, and periodically someone verifies that you have met them.

Compliance requirements are established by standards bodies, who obviously do not know your network. They are established for the common good because of industry-wide concerns that information is not protected, usually because the security is poor. When you see an emphasis on compliance over security, it’s too often because the organization does not want to take the time to ensure that the network and information are secure, so they rely on compliance requirements to feel better about their security.

The problem with that is that it gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, with examples like Sony, Target, TJMaxx and so many other breaches. Although there are implementations of compliance that will make you more secure, you cannot base your company’s security policy on a third party’s compliance requirements.

So what comes first? Wrong question! Let’s rephrase – there needs to be a healthy relationship between the two, but one cannot substitute for the other.

Threat data or Threat Intel?

Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat Intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while many other ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good right? More is better, right? Actually, not quite.
Look closely. You are confusing “intelligence” with “data”.

As the Lt. Commander of the Starship Enterprise would tell you, Data is not Intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets and how?” – an answer that would lead to a coherent defense.

The steps to use Threat Data are easily explained (a minimal sketch of the first two follows this list):
1) Compare observations on the local network against the threat data.
2) Alert on matches.

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. To do this effectively requires human skills that combine expert knowledge of the modern ThreatScape with knowledge of the network architecture.
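For what it’s worth, steps 1 and 2 really are as mechanical as they sound. Here is a minimal sketch that matches locally observed destination IPs against a threat-data list; the feed and flow formats are illustrative, and step 3 remains a human task.

def load_threat_ips(path):
    # One IP per line; lines starting with '#' are comments
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def match_observations(threat_ips, observed_flows):
    # observed_flows: iterable of (source_host, destination_ip) pairs
    for source, dest in observed_flows:
        if dest in threat_ips:
            yield source, dest

if __name__ == "__main__":
    threat_ips = load_threat_ips("threat_feed_ips.txt")
    flows = [("wks12", "203.0.113.45"), ("wks12", "198.51.100.7")]
    for source, dest in match_observations(threat_ips, flows):
        print("ALERT:", source, "contacted listed address", dest, "- needs analyst review")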

This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion/degree) than expert knowledge of the modern ThreatScape and the contours/limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but a) such staff are hard to come by, and b) budget for this is even harder to find.

What now?

Consider a co-managed solution like SIEM Simplified where the expert knowledge of the modern ThreatScape in the context of your network is provided by an external group. When this is combined with your internal resources to co-manage the problem, it can result in improved coverage at an affordable price point.

How to shoot yourself in the foot with SIEM


Six ways to shoot yourself with SIEM technology:
1) Don’t plan; just jump in
2) Have no defined scope or use cases; whatever
3) Confuse SIEM with Log Management
4) Monitor noise; apply no filters
5) Don’t correlate with any other technologies, e.g. IDS, vulnerability scanner, Active Directory
6) Staff poorly or not at all

For grins, here’s how programmers shoot themselves in the foot:

ASP.NET
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
C
You try to shoot yourself in the foot, but find out that the gun is actually a howitzer cannon.
C++
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
JavaScript
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SQL
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
UNIX
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

Click here for the Top 6 Uses of SIEM.

Venom Vulnerability exposes most Data Centers to Cyber Attacks

Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger as HeartBleed did.

Dubbed VENOM, for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could, in theory, expose most data centers to malware attacks.

Yes, the risk of the Venom vulnerability is theoretical, as no real-world exploitation has been seen yet; last year’s HeartBleed bug, on the other hand, was actively exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now let’s learn more about Venom:

Venom (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms and, if exploited…

…could allow an attacker to escape from a guest ‘virtual machine’ (VM) and gain full control of the operating system hosting it, as well as any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, a senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

However, Geffner also added that, so far, there is no known exploit that could successfully take advantage of the vulnerability. Even so, Venom is critical and disturbing enough to be considered a high-priority bug.

What successful exploitation of Venom requires:
For successful exploitation, an attacker sitting on the guest virtual machine would need sufficient permissions to get access to the floppy disk controller I/O ports.

On a Linux guest machine, an attacker would need either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

That said, Venom really doesn’t compare with HeartBleed. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply would not be exploitable at the same scale.

Flaws like Venom are typically used in highly targeted attacks such as corporate espionage, cyber warfare or other attacks of that kind.

Did Venom poison cloud services?

Potentially more concerning, many of the large cloud providers, including Amazon, Oracle, Citrix and Rackspace, rely heavily on QEMU-based virtualization and are therefore vulnerable to Venom.

However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

Microsoft’s Azure cloud service, on the other hand, uses its own homegrown virtualization hypervisor technology, and therefore its customers are not affected by the Venom bug.

Meanwhile, Google also assured customers that its Cloud Service Platform does not use the vulnerable software and thus was never vulnerable to Venom.

Patch now! Protect yourself

Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to Venom. Red Hat recommends that its users update their systems using the commands “yum update” or “yum update qemu-kvm.”

Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them to be on the safe side. Remember, merely restarting the guest operating system without powering it off is not enough, because it would still be using the old QEMU binary.

See more at Hacker News.

Five quick wins to reduce exposure to insider threats

Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)

A data breach has serious consequences both directly and indirectly. Lost revenue and a tarnished brand reputation both inflict harm long after incident resolution and post breach clean-up. Still, many organizations don’t take necessary steps to protect themselves from a potentially detrimental breach.

But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”

How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. Quick wins also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.

1) Control the use of Administrator privilege
The misuse of administrative privileges is a primary method for attackers to spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running with administrative privileges is fooled into simply surfing to a website hosting attacker content that can automatically exploit browsers. The file or exploit contains executable code that runs on the victim’s machine. Since the victim user’s account has administrative privileges, the attacker can take over the victim’s machine completely and install malware to find administrative passwords and other sensitive data.

2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees only have access to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require more strict protection.

3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?

4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of the business. Attackers are very conscious of this and use it to plan their exploits: carefully crafting phishing messages that look like routine and expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; using nominally non-security-critical systems as jump points or bots….

5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks. By then, incidents often become much more costly than they need to be. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.

A guiding principle of IT Security is “Prevention is ideal but detection is a must.”

Have you reduced your exposure?

Secure, Usable, Cheap: Pick any two

This fundamental tradeoff between security, usability and cost is critical. Yes, it is possible to have both security and usability, but at a cost in terms of money, time and personnel. Making something both cost-efficient and usable, or even secure and cost-efficient, may not be very hard, but it is much more difficult and time-consuming to make something both secure and usable. That takes a lot of effort and thought, because security takes planning and resources.

For a system administrator, usability is at the top of the list. For a security administrator, security is at the top of the list – no surprise there, really.

What if I told you that the two job roles are orthogonal? What gets a sysadmin bouquets will get a security admin brickbats, and vice versa.

Oh and when we say “cheap” we mean in terms of effort – either by the vendor or by the user.

Security administrators face some interesting tradeoffs. Fundamentally, the choice to be made is between a system that is secure and usable, one that is secure and cheap, or one that is cheap and usable. Unfortunately, we cannot have everything. The best practice is not to make the same person responsible for both security and system administration. The goals of those two tasks are far too often in conflict for one person to succeed at both.

PCI-DSS 3.1 is here – what now?

On April 15, 2015, the PCI Security Standards Council (PCI SSC) announced the release of PCI DSS v3.1. This follows closely on the heels of PCI DSS 3.0, which just went into full effect on January 1, 2015. There is a three-year cycle between major updates of PCI DSS and, outside of that cycle, the standard can be updated to react to threats as needed.

The major driver of PCI DSS 3.1 is the industry’s conclusion that SSL version 3.0 is no longer a secure protocol and therefore must be addressed by the PCI DSS.

What happened to SSL?
The last-released version of the encryption protocol to be called “SSL” – version 3.0 – was superseded by “TLS,” or Transport Layer Security, in 1999. While weaknesses were identified in SSL 3.0 at that time, it was still considered safe for use up until October of 2014, when the POODLE vulnerability came to light. POODLE is a flaw in the SSL 3.0 protocol itself, so it’s not something that can be fixed with a software patch.

Bottom line
Any business software running SSL 2.0 or 3.0 must be reconfigured or upgraded.
Note: Most SSL/TLS deployments support both SSL 3.0 and TLS 1.0 in their default configuration. Newer software may support SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2. In these cases the software simply needs to be reconfigured. Older software may only support SSL 2.0 and SSL 3.0 (if this is the case, it is time to upgrade).

How to detect SSL/TLS usage and version?
A vulnerability scan from EventTracker Vulnerability Assessment Service (ETVAS), or another scanner, will identify insecure implementations.
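If you just want to spot-check a single endpoint, here is a minimal sketch using Python’s standard ssl module. Note that modern Python/OpenSSL builds usually cannot negotiate SSL 3.0 at all, so in practice the check is which old TLS versions a server still accepts; the target host is illustrative.

import socket
import ssl

def accepts(host, port, version):
    # Try a handshake pinned to a single protocol version; None means it was refused
    # (or the local OpenSSL build itself no longer permits that version).
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE     # we only care about the handshake
        ctx.minimum_version = version
        ctx.maximum_version = version
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except (ssl.SSLError, OSError, ValueError):
        return None

if __name__ == "__main__":
    host = "example.com"                    # illustrative target
    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
        print(version.name, "accepted" if accepts(host, 443, version) else "rejected")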

SSL/TLS is a widely deployed encryption protocol. The most common use of SSL/TLS is to secure websites (HTTPS), though it is also used to:
• Secure email in transit (SMTPS or SMTP with STARTTLS, IMAPS or IMAP with STARTTLS)
• Share files (FTPS)
• Secure connections to remote databases and secure remote network logins (SSL VPN)

SIEM Simplified customers
The EventTracker Control Center will have contacted you about correctly configuring the Windows server instance hosting the EventTracker Manager Console to comply with this guideline. You must upgrade or reconfigure all other vulnerable servers in your network.

If you subscribe to ETVAS, the latest vulnerability reports will highlight any servers that must be reconfigured along with detailed recommendations on how to do so.

Three myths of IT Security

Myth 1: Hardening a system makes it secure
Security is a process, to be evaluated on a constant basis. There is nothing that will put you into a “state of security.” Did you really think that simply applying some hardening guide to a system will make it secure?

Threats exploit unpatched vulnerabilities, and hardening settings would not have stopped them; few settings can prevent your network from being attacked through an unpatched vulnerability.

Myth 2: If We Hide It, the Bad Guys Won’t Find It
Also known as security by obscurity, hiding the system doesn’t really help. For instance, turning off SSID broadcast in wireless networks. Not only will you now have a network that is non-compliant with the standard, but your clients will also prefer a rogue network with the same name over the legitimate one. Oh, and it takes only a few minutes to actually find the network anyway, given the proper tools. Another example is changing the banners on your Web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the Web site is running anyway. Second, most of the bad guys are not smart enough to do that, so they just try all the exploits, including the IIS ones. Yet another example is renaming the Administrator account. It is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account. They now have an account called “Janitor3” with a comment of “Built in account for administering the computer/domain.” This is not really likely to fool anyone.

Myth 3: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. In some environments you are willing to break things in the name of protection that you are not willing to break in others.

Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive.

Safeguards should be applied in proportion to risk.

Four Key Steps to Rapid Incident Response

Is it possible to avoid security breaches? Judging from recent headlines, probably not. Victims range from startups like Kreditech, to major retailers like Target, to the US State Department and even the White House. Regardless of the security measures you have in place, it is prudent to assume you will suffer a breach at some point. Be sure to have a response plan in place – just in case.

If you find it difficult to justify the time needed to develop a response plan, consider how long you will have to formulate a response once an attack begins. According to a 2013 Verizon study, 84% of successful attacks compromised their targets in a matter of hours. The brief time window for detecting and mitigating attacks requires not only constant monitoring but a rapid response. That means having a plan in place.

As you formulate your strategy for handling breaches, keep in mind four key aspects of incident response: analysis and assessment, response strategy, containment, and prevention of subsequent attacks.

The first step in managing a security breach is detecting it. This is one of the most difficult challenges facing IT professionals. You are trying to detect a stealthy adversary with many potential points of entry into your system, and you have no knowledge of when the attack will occur. Also, attack-related events may occur in rapid succession or over extended periods of time. Some of the steps in the attack may appear innocuous, such as an executive unknowingly downloading and opening malicious content. Others may be more apparent, such as a disgruntled employee downloading large volumes of customer data to a USB drive. In all cases, analyzing logs and integrating data from multiple application and server logs can help identify events indicative of an attack.

The response strategy spans both technical and business aspects of your organization. An incident response team should be in place to address the breach. This will include containing the threat (discussed below), notifying stakeholders, and communicating the progress of the response efforts. There may be a need to coordinate with those responsible for business continuity and disaster recovery in cases of large-scale attacks, such as the one suffered by Sony last year.

Containment is the process of isolating compromised devices and network segments to limit the spread of a breach. Containment can be as crude as cutting power to a compromised device. If malicious activity originates with a mobile device, a mobile device management (MDM) system can block that device from accessing network resources. Network administrators can change firewall filtering rules to limit traffic into and out of a subnet. They may also consider updating DNS entries of compromised servers to point to failover servers, assuming those have not been compromised. Monitoring application, operating system, and network logs during containment operations can help you understand the effects of your responses.

The fourth issue to keep in mind is preventing subsequent attacks. A security breach can have wide and unexpected consequences. It is also a potential opportunity to learn how your security measures were compromised. Was someone tricked by a phishing lure? Was an administrator account compromised by a simple brute-force dictionary attack? Did an insider take advantage of excessive privileges? Security Information and Event Management systems support forensic analysis and can help integrate event data from across your infrastructure. This may enable you to find correlations between events that lead to insights about the behavior of the attackers and the vulnerabilities in your systems.
This brief discussion of incident response planning touches on just some of the most salient aspects of dealing with a breach. Sources such as CERT provide detailed resources to help organizations create computer security incident response teams and incident response best practices.

Does sharing Threat Intel work?

In the next couple months, Congress will likely pass CISA, the Cybersecurity Information Sharing Act. The purpose is to “codify mechanisms for enabling cybersecurity information sharing between private and government entities, as well as among private entities, to better protect information systems and more effectively respond to cybersecurity incidents.”

Can it help? It’s interesting to note two totally opposing views.

Arguing that it will help is Richard Bejtlich of Brookings. His analogy: threat intelligence is, in some ways, like a set of qualified sales leads provided to two companies. The first has a motivated sales team, polished customer acquisition and onboarding processes, authority to deliver goods and services and quality customer support. The second business has a small sales team, or perhaps no formal sales team at all. Their processes are broken and they lack authority to deliver any goods or services, so in this second case the leads aren’t especially valuable. Now, consider what happens when each business receives a bundle of qualified sales leads. Which business will make the most effective use of their list of profitable, interested buyers? The answer is obvious, and there are parallels to the information security world.

Arguing that it won’t help at all is Robert Graham, the creator of BlackICE Guard. His argument is “CISA does not work. Private industry already has exactly the information sharing the bill proposes, and it doesn’t prevent cyber-attacks as CISA claims. On the other side, because of the false-positive problem, CISA does far more to invade privacy than even privacy advocates realize, doing a form of mass surveillance.”

In our view, Threat Intel is a new tool. Its usefulness depends on the artisan wielding the tool. A poorly skilled user would get less value.

Want experts on your team but don’t know where to start? Try our managed service SIEM Simplified. Start quick and leverage your data!

Threat Intelligence vs Privacy

On Jan 13, 2015, the U.S. White House published a set of legislative proposals on cyber security threat intelligence (TI) sharing between private and public entities. Given the breadth of cyber attacks across the Internet, the sharing of information between commercial entities and the US Government is increasingly critical. Absent sharing, first responders charged with cyber defense are at a disadvantage in detecting and responding to common attacks.

Should this cause a privacy concern?
Richard Bejtlich, senior fellow at Brookings says “Threat intelligence does not contain personal information of American citizens, and privacy can be maintained while learning about threats. Intelligence should be published in an automated, machine-consumable, standardized manner.”

The White House proposal uses the following definition:
“The term `cyber threat indicator’ means information —
(A) that is necessary to indicate, describe or identify–
(i) malicious reconnaissance, including communications that reasonably appear to be transmitted for the purpose of gathering technical information related to a cyber threat;
(ii) a method of defeating a technical or operational control;
(iii) a technical vulnerability;
(iv) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system inadvertently to enable the defeat of a technical control or an operational control;
(v) malicious cyber command and control;
(vi) any combination of (i)-(v).
(B) from which reasonable efforts have been made to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.”

If you take the above at face value, then a reasonable assumption is that these sorts of cyber threat indicators should not trigger privacy concerns, whether shared between the private sector and the government or within the private sector.

Of course, getting TI and using it effectively are completely different as discussed here. Bejtlich reminds us that “private sector organizations should focus first on improving their own defenses before expecting that government assistance will mitigate their security problems.”

Looking for a practical, cost-effective way to shore up your defenses? SIEM Simplified is one way to go about it.

Death by a Thousand Cuts

You may recall that back in 2012, then Secretary of Defense Leon Panetta warned of “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.”

This hasn’t quite come to pass has it? Is it dumb luck? Or are we just waiting for it to happen?

In his annual testimony about the intelligence community’s assessment of “global threats,” Director of National Intelligence James Clapper sounded a more nuanced and less hyperbolic tone. “Rather than a ‘cyber Armageddon’ scenario that debilitates the entire U.S. infrastructure, we envision something different,” he said, “We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on U.S. economic competitiveness and national security.”

The reality is that the U.S. is being bombarded by cyber attacks of a smaller scale every day—and those campaigns are taking a toll.

Now the DNI also went on to say “Although cyber operators can infiltrate or disrupt targeted [unclassified] networks, most can no longer assume that their activities will remain undetected, nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.”

Alan Paller of the SANS Institute says “Those words translate directly to a simpler statement: ‘The weapons and other systems we operate today cannot be protected from cyber attack.’ Instead, as a nation, we have to put in place the people and support systems who can find the intruders and excise them fast.”

So what capabilities do you have in this area, given that attacks against your infrastructure are continuous and ongoing?

Want to do something about it quickly and effectively? Consider SIEM Simplified, our service offering that can take the heavy lifting required to implement such monitoring programs off your hands.

PoSeidon and EventTracker

A new and harmful Point-of-Sale (“POS”) malware has been identified by security researchers at Cisco’s Security Intelligence & Research Group. The team says it is more sophisticated and damaging than previous POS malware programs.

Nicknamed PoSeidon, the new malware family targets POS systems, infects machines and scrapes the memory for credit card information, which it then exfiltrates to servers, primarily on the .ru TLD, for harvesting or resale.

When consumers use their credit or debit cards to pay for purchases from a retailer, they swipe their card through POS systems. Information stored on the magnetic stripe on the back of those cards is read and retained by the POS. If the information on that stripe is stolen, it can be used to encode the magnetic stripe of a fake card, which is then used to make fraudulent purchases. POS malware and card fraud have been steadily rising, affecting large and small retailers. Target, one of the most visible victims of a security breach involving access to its payment card data, incurred losses estimated at $162 million (before insurance recompense).

PoSeidon employs a technique called memory scraping, in which the RAM of infected terminals is scanned for unencrypted strings that match credit card information. When PoSeidon takes over a terminal, a loader binary is installed to allow the malware to remain on the target machine even during system reboots. The loader then contacts a command and control server and retrieves a URL containing another binary, FindStr, to download and execute. FindStr scans the memory of the POS device for strings (hence its name) and installs a key logger that looks for number strings and keystrokes resembling payment card numbers and sequences. These are the number sequences that begin with the digits generally used by Discover, Visa, MasterCard and American Express cards (6, 5, 4 and 3 respectively), with the expected number of digits following them (16 digits for the first three, 15 digits for American Express). This data is then encoded and sent to an exfiltration server.
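To make that scraping step concrete, here is an illustration (not PoSeidon’s actual code) of the kind of pattern matching described: digit runs with those prefixes and lengths, filtered by the standard Luhn check that both scrapers and DLP tools use to cut false matches.

import re

CARD_PATTERN = re.compile(
    r"\b(?:3\d{14}"        # American Express: leading 3, 15 digits (per the description above)
    r"|4\d{15}"            # Visa: leading 4, 16 digits
    r"|5\d{15}"            # MasterCard: leading 5, 16 digits
    r"|6\d{15})\b"         # Discover: leading 6, 16 digits
)

def luhn_ok(number):
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    for d in digits[1::2]:
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

def find_card_like_strings(text):
    return [m for m in CARD_PATTERN.findall(text) if luhn_ok(m)]

if __name__ == "__main__":
    sample = "order=12345 pan=4111111111111111 ref=999"   # 4111... is a well-known test number
    print(find_card_like_strings(sample))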

A whitepaper for detecting and protecting from PoSeidon malware infection is also available from EventTracker.

Tired of keeping up with the ever changing Threatscape? Consider SIEM Simplified. Let our managed SIEM solution do the heavy lifting.

Enriching Event Log Monitoring by Correlating Non Event Security Information

Sometimes we get hung up on event monitoring and forget about the “I” in SIEM, which stands for information. This is important because there are more sources of non-event security information than ever before that your SIEM should be ingesting and correlating with security events. There are at least four categories of security information that you can leverage in your SIEM to provide better analysis of security events:

1. Identity information from your directory (e.g. Active Directory)

Your directory has a wealth of identity information that can help sift the wheat from the chaff in your security logs. For example, let’s say you regularly import a list of all the members of Administrator groups from Active Directory into your SIEM and call the list Privileged Accounts. Now, enhance any rules or reports looking for suspicious user activity by also comparing the user name in the event against the Privileged Accounts list. If there’s a match, then the already suspicious event becomes even more important since it involves a privileged user. In many cases you’re likely to have certain controls over privileged user sessions. The Privileged Accounts list helps you identify anyone bypassing those controls, whether a malicious insider or an outside attacker ignorant of your controls. Perhaps you require all administrators to go through a clean and hardened “jump box”. You can set up a rule to identify logon sessions where the username is in Privileged Accounts but the session was not initiated from the jump box.
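A minimal sketch of that rule is shown below; the account names, jump box address and event record layout are all illustrative rather than any particular SIEM’s schema.

PRIVILEGED_ACCOUNTS = {"administrator", "svc_backup", "jdoe_admin"}   # imported from AD
JUMP_BOX_IPS = {"10.0.5.10"}                                          # hardened jump box

def check_logon(event):
    # Alert when a privileged account logs on from anywhere other than the jump box
    user = event["user"].lower()
    source = event["source_ip"]
    if user in PRIVILEGED_ACCOUNTS and source not in JUMP_BOX_IPS:
        return "ALERT: privileged account %s logged on from %s, bypassing the jump box" % (user, source)
    return None

if __name__ == "__main__":
    events = [
        {"user": "jdoe_admin", "source_ip": "10.0.5.10"},
        {"user": "jdoe_admin", "source_ip": "192.168.7.23"},
    ]
    for event in events:
        finding = check_logon(event)
        if finding:
            print(finding)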

2. Environmental information (both internal and global)

A global example of environmental information is geocoding. Perhaps there are certain countries that you do not do business with due to their bad reputation for cybercrime and espionage. Another popular way to leverage geocoding is to detect when a given user is apparently in two places at once, which can indicate compromised credentials.
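The “two places at once” check reduces to a travel-speed calculation. Here is a minimal sketch using the haversine formula; it assumes the IP-to-latitude/longitude lookup has already been done upstream.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(logon_a, logon_b, max_kmh=900):
    # Each logon: (timestamp_seconds, latitude, longitude)
    t1, lat1, lon1 = logon_a
    t2, lat2, lon2 = logon_b
    distance = haversine_km(lat1, lon1, lat2, lon2)
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return distance > 100            # simultaneous logons from clearly different places
    return (distance / hours) > max_kmh  # faster than any airliner

if __name__ == "__main__":
    a = (1430000000, 40.71, -74.01)      # New York
    b = (1430003600, 55.75, 37.62)       # Moscow, one hour later
    print("Impossible travel" if impossible_travel(a, b) else "Plausible")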

3. Threat intelligence feeds available from security organizations

There’s a growing array of threat intelligence feeds, ranging from community-based free feeds to those commercially produced and available for a fee. These feeds range from lists of IP addresses linked to command and control networks, botnets and compromised hosts, to network indicators of compromise and malware signatures. We looked at the free feeds available from emergingthreats.net in our most recent webinar with EventTracker. Correlating event logs from all levels of your network with threat intelligence can help you identify compromised systems and persistent attackers much earlier in the process.

But you can also leverage organization-specific (i.e. internal) environmental information. For instance, perhaps all of your administrators’ workstations fall within a certain range of IP addresses. Use this information in a rule examining logon attempts to your jump box or other hardened infrastructure systems (such as the management network interface on ESXi and Hyper-V systems) and alert when you see attempts to access these systems from non-administrators. (As always, the real world may be a little more complicated. Case in point: you may also need to factor in logon attempts through whatever means administrators use for remote access.)

4. Internal threat intelligence
EventTracker CEO A. N. Ananth coined this term to describe information that you can compile from your own network and systems using techniques similar to those of outside threat intelligence organizations. There’s no arguing the “crowd-sourced” value of external threat intelligence, but such information is missing a key aspect that is addressed by internal threat intelligence. External threat intelligence tends to be “black lists” of “known bad” data. Internal threat intelligence, on the other hand, usually takes the form of “white lists” of “known good” data. White lists tend to be much smaller, more effective and easier to tune and maintain. For instance, if your SIEM can determine from past history that server A normally communicates with only 10 other hosts, that is very valuable to know – especially if your SIEM can alert you when it sees that server suddenly start sending gigabytes of data to an entirely new host on an unusual port.
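Here is a minimal sketch of that internal “known good” idea: learn which peers each server normally talks to, then alert when a server sends a large volume of data to a host outside that baseline. The flow-record format and threshold are illustrative.

from collections import defaultdict

def learn_peers(history):
    # history: iterable of (server, peer) pairs from past flow records
    peers = defaultdict(set)
    for server, peer in history:
        peers[server].add(peer)
    return peers

def alert_on_new_large_flows(peers, flows, threshold_bytes=1_000_000_000):
    for server, peer, sent_bytes in flows:
        if peer not in peers.get(server, set()) and sent_bytes > threshold_bytes:
            yield "ALERT: %s sent %d bytes to previously unseen host %s" % (server, sent_bytes, peer)

if __name__ == "__main__":
    baseline = learn_peers([("serverA", "10.0.1.5"), ("serverA", "10.0.1.6")])
    todays_flows = [("serverA", "203.0.113.99", 4_200_000_000)]
    for alert in alert_on_new_large_flows(baseline, todays_flows):
        print(alert)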

The bottom line is that your SIEM needs as much data (both event and non-event) as possible, and it needs to be effective at correlating it into valuable situational intelligence. Don’t stop at logs. Look for other kinds of security information: from your directory, from the local and global environment, from threat intelligence produced by the security community, and from internal threat intelligence of your own.

Want to be acquired? Get your cyber security in order!


Washington Business Journal senior staff reporter Jill Aitoro hosted a panel of cyber experts on Feb. 26 at Crystal Tech Fund in Arlington, VA.

The panel noted that how well a company has locked down their systems and data will have a direct effect on how much a potential buyer is willing to shell out for an acquisition — or whether a buyer will even bite in the first place.

Howard Schmidt, formerly CISO at Microsoft, recalled: “We did an acquisition one time – about $10 million. It brought tons of servers, a big IT infrastructure; when all was said and done, it cost more than $20 million to rebuild the systems that had been owned by criminals and hackers for at least two years. That’s a piece of M&A you need to consider.”

Many private investors are doing exactly that, calling in security companies to assess a target company’s cyber security posture before making an offer. In some cases, the result will be to not invest at all, with the venture capitalist telling a company to “get their act together and then call back”.

Support your Local Gunfighter

WANTED

Looking for a SIEM fighter to clean up Dodge? Click here!

The Pyramid of Pain

There is great excitement amongst security technology and service providers about the intersection of global threat intelligence with local observations in the network. While there is certainly cause for excitement, it’s worth pausing to ask the question “Is Threat Intelligence being used effectively?”

David Bianco’s Pyramid of Pain explains that not all indicators of compromise are created equal: the pyramid ranks indicator types by the pain you cause the adversary when you are able to deny those indicators to them.

[Figure: the Pyramid of Pain – Hash Values at the base, rising through IP Addresses, Domain Names, Host Artifacts and Tools to Tactics, Techniques & Procedures at the apex]

• Hash Values: SHA1, MD5 or other similar hashes that correspond to specific suspicious or malicious files. Hash values are often used to provide unique references to specific samples of malware or to files involved in an intrusion. EventTracker can provide this functionality via its Change Audit feature (a minimal matching sketch follows this list).
• IP Addresses: individual addresses or even net blocks. If you deny the adversary the use of one of their IPs, they can usually recover quickly. EventTracker addresses these via its Behavior Module and the associated IP Reputation lookup.
• Domain Names: these are harder to change than IP addresses. EventTracker can either use logs from a proxy or scan web server logs to detect such artifacts.
• Host Artifacts: for example, the attacker’s HTTP recon tool may use a distinctive User-Agent string when searching your web content (off by one space or semicolon, for example. Or maybe they just put their name. Don’t laugh. This happens!). This can be detected by the Behavior Module in EventTracker when focused on the User-Agent string from web server logs.
• Tools: artifacts of the tools the attacker is using (e.g., DLL or EXE names or hashes) can be detected via the Unknown Process module within EventTracker, via the Change Audit feature.
• Tactics, Techniques & Procedures: an example is detecting pass-the-hash attacks, as called out by the NSA in their white paper and discussed in our webinar “Spotting the Adversary with Windows Event Log Monitoring”.
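
As a minimal illustration of the lowest rung of the pyramid, the sketch below compares the hashes of newly observed files against a set of known-bad hashes. The file path and the hash set are placeholders for illustration; this is not a description of EventTracker’s Change Audit internals.

    import hashlib

    KNOWN_BAD_SHA1 = {
        # assumed example value (the SHA1 of an empty file), not a real indicator
        "da39a3ee5e6b4b0d3255bfef95601890afd80709",
    }

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    for new_file in ["C:/Windows/Temp/dropper.exe"]:   # assumed list of newly observed files
        try:
            if sha1_of(new_file) in KNOWN_BAD_SHA1:
                print("ALERT: known-bad hash observed:", new_file)
        except FileNotFoundError:
            pass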

Bottom line: Having Threat Intelligence is not the same as using it effectively. The former is something you can buy; the latter is a capability you develop. It requires not only tools but also persistent, well-trained humans.

Want both? Consider SIEM Simplified.

What good is Threat Intelligence integration in a SIEM?

Bad actors and their actions are increasingly prevalent on the Internet. Who are they? What are they up to? Are they prowling in your network?

The first two questions are answered by Threat Intelligence (TI); the last is answered by a SIEM that integrates TI into its functionality.

But wait, don’t buy just yet, there’s more, much more!

Threat Intelligence when fused with SIEM can:
• Validate correlation rules and improve alert baselining by raising the priority of rules that also point at TI-reported “bad” sources (see the sketch after this list)
• Detect owned boxes, bots and other compromised hosts that call home when on your network
• Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
• Match past, historical log data against current TI data
• Review past TI history as key context for reviewed events, alerts, incidents, etc.
• Enable automatic action based on the better context available from high-quality TI feeds
• Run TI effectiveness reports in the SIEM (how much TI leads to useful alerts and incidents?)
• Check web server log source IPs against bad lists to profile visitors and reduce service to those that appear on them (uncommon)
and the beat goes on…
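
As a minimal sketch of the first bullet above, here is the enrichment idea in Python: if an alert’s source also appears in TI data, raise its priority and attach the TI context. The field names, the reputation table and the priority scale are illustrative assumptions.

    # Assumed structures for illustration: an alert with a source IP and a base priority,
    # and a TI lookup table keyed by IP with a short reputation note.
    TI_REPUTATION = {"203.0.113.45": "known C2 node"}   # assumed TI entry (documentation address)

    def enrich_alert(alert, ti):
        context = ti.get(alert["src_ip"])
        if context:
            alert["priority"] = min(1, alert["priority"])   # 1 = highest priority (assumed scale)
            alert["ti_context"] = context
        return alert

    alert = {"rule": "repeated failed logons", "src_ip": "203.0.113.45", "priority": 3}
    print(enrich_alert(alert, TI_REPUTATION))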

Want the benefits of SIEM without the heavy lifting involved? SIEM Simplified may be for you.

Gathering logs or gathering dust?

Did you wrestle your big-name SIEM vendor into throwing in their “enterprise class” solution at a huge discount as part of the last negotiation? If so, good for you – you should be pleased with yourself for wrangling something so valuable out of them. 90% discounts are not unheard of, by the way.

But do you know why they caved and included it? It’s because there is a very high probability that you won’t ever obtain any significant value from it.

You see, the “enterprise class” SIEM solutions from the top-name vendors all require significant trained staff just to get them up and running, never mind tuning them and delivering any real value. The vendors figure you probably don’t have the staff or the time to do any of that, so they can give the product away at that huge discount. It adds some value to their invoice, prevents any other vendor from horning in on their turf and makes you happy – what’s not to like?

The problem of course is that you are not any closer to solving any of the problems that a SIEM can address. Is that ok with you? If so, why even bother to pay that 10%?

From a recent webinar on the topic by Gartner Analyst Anton Chuvakin:

Q: For a mid-size company what percent of time would a typical SIEM analyst spend in monitoring / management of the tool – outstanding incident management?
A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.

So is your SIEM gathering logs? Or gathering dust?

If the latter, give us a call! Our SIEM Simplified service can take the sting out of the bite.

Why add more hay?

Recent terrorist attacks in France have shaken governments in Europe. The difficulty of defending against insider attacks is once again front and center. How should we respond? The UK government seems to feel that greater mass surveillance is a proper response. The Communications Data Bill proposed by Prime Minister Cameron would compel telecom companies to keep records of all Internet, email, and cellphone activity. He also wants to ban encrypted communications services.

This approach would pile even more massive data sets on top of those already thought to be analyzed by the NSA and GCHQ, in the hope that algorithms can pinpoint the bad guys. Of course, France already has blanket surveillance, yet that did not prevent the Charlie Hebdo attack.

In the SIEM universe, the equivalent would be gathering every log from every source in hopes that attacks could be predicted and prevented. In practice, accepting data like this into a SIEM solution reduces it to a quivering mess of barely functioning components. In fact, the opposite approach – “output-driven SIEM” – is favored by experienced implementers.

Ray Corrigan, writing “Mass Surveillance Will Not Stop Terrorism” in the New Scientist, notes: “Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle-in-a-haystack problem. You don’t make it easier by throwing more needleless hay on the stack.”

Threat Intelligence – Paid or Free?

Threat Intelligence (TI) is evidence-based knowledge – including context, mechanisms, indicators, implications and actionable advice – about an existing or emerging menace or hazard to assets, which can be used to inform decisions regarding the subject’s response to that menace or hazard. The challenge is that leading indicators of risk to an organization are difficult to identify when the organization’s adversaries – their thoughts, capabilities and actions – are unknown. Therefore “black lists” of various types, listing top attackers, spammers, poisoned URLs, malware domains and so on, have become popular. These lists are either community-maintained and free (e.g., SANS DShield), paid for by your tax dollars (e.g., FBI InfraGard) or offered as paid services.

EventTracker 7.6 introduced formal support to automatically import and use such lists. We are often asked which list(s) to use, and whether it is worth paying for a TI service. Here is our thinking on the subject:

– External v/s Internal
In most cases, we find “white lists” to be much smaller, more effective and easier to tune and maintain than any “black list”. EventTracker supports the generation of such white lists from internal sources (the Change Audit feature) or from lists of known-good IP ranges (your internal range, your Amazon EC2 or Azure instances, your O365 tenant, etc.). Using the NOTIN match option of the Behavior module gives you a small list of suspicious activities (a grey list) which can be quickly sorted to either black or white for future processing. As a first step, this is a quick, inexpensive and effective solution.
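
Here is a minimal sketch of that NOTIN idea, assuming a hand-maintained set of known-good ranges; it is an illustration of the concept rather than the Behavior module itself. Anything not covered by the white list lands on a small grey list for an analyst to sort.

    import ipaddress

    # Assumed known-good ranges: internal address space plus a documentation range as a stand-in
    WHITE_RANGES = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.0.2.0/24")]

    def is_whitelisted(ip):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in WHITE_RANGES)

    observed_ips = ["10.1.2.3", "198.51.100.7", "10.9.8.7", "203.0.113.10"]
    grey_list = sorted({ip for ip in observed_ips if not is_whitelisted(ip)})
    print("Grey list for analyst review:", grey_list)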

– Paid v/s Free
Free services include well-regarded sources such as shadowservers.org, abuse.ch, dshield.org, FBI InfraGard, US-CERT and EventTracker ThreatCenter (a curated list of low-volume, high-confidence sources formatted for quick import into EventTracker). Many customers in industry verticals (e.g., electric power) have lists circulated within their community.

If you are thinking of paid services, then ask yourself:

– Will the feed allow me to detect threats faster? (e.g., a feed of top attackers updated in real time v/s once every 6 or 12 hours). If faster detection is your motivation, are you able to respond faster as well? If the threat is detected at 8 PM on a Friday, when will you be able to properly respond (not just acknowledge)?

– Will the feed allow me to detect threats better? That is, would you have missed this threat if it had not been for that paid feed? At this time, many paid services for tactical TI are aggregating, cleaning and de-duplicating free sources and/or offering analysis that is also available in the public domain (e.g., the McAfee and Kaspersky analysis of Dark Seoul, the malware linked to the havoc at Sony Pictures, is available from US-CERT).

Bottom line: Threat Intelligence is an excellent extension to a SIEM solution. The order of implementation should be internal white lists first, external free lists next, and finally paid services to cover any remaining gaps.

Looking for 80% coverage at 20% cost? Let us do the detection with SIEM Simplified so you can remain focused on remediation.

Why Naming Conventions are Important to Log Monitoring

Log monitoring is difficult for many reasons. For one thing, there are not many events that unquestionably indicate an intrusion or malicious activity; if it were that easy, the system would just prevent the attack in the first place. One way to improve log monitoring is to implement naming conventions that embed information about objects such as user accounts, groups and computers – for example their type or sensitivity. This makes it easy for relatively simple log analysis rules to recognize important objects or improper combinations of information that would be impossible to spot otherwise.

However, asking for naming convention changes for the sake of log monitoring may be difficult to pull off. It’s common to treat log monitoring as a strictly one-way activity in relation to the production environment; by that I mean that security analysts are expected to monitor logs and detect intrusions with no interaction or involvement with the administrators of the systems being monitored, other than for facilitating log collection.

I realize that such a situation may not be easy to change, but if security analysts can have some input into the standards and procedures followed upstream from log collection, they can greatly increase the detectability of suspicious or questionable security events. Here are a few examples.

There are at least 3 kinds of user accounts that every organization uses:
• End-user accounts
• Privileged accounts for administrators
• Service/application accounts

Each of these three account types is used in different ways and should be subject to certain best-practice controls. For instance, no person should ever start an interactive logon session (local console or remote desktop) with a service or application account. But of course a malicious insider or external threat actor is more than happy to exploit such accounts, since they often have privileged authority and are frequently insecure because of the difficulties in managing them. Conversely, end-user and admin accounts assigned to people should not be used to run services and applications. Doing so will cause all kinds of problems: for instance, if Service A is running as User B and that user leaves the company, Service A will fail the next time it is started after User B is disabled. In audits I’ve seen highly privileged admin accounts of long-departed employees still active because staff knew that various applications and services were running with those credentials. This of course creates all kinds of security holes, including residual access for the terminated employee.

Event ID 4624 makes it easy to distinguish between different logon session types with its Logon Type field (for example, type 2 is an interactive console logon and type 10 is a Remote Desktop session). But of course Windows can’t tell you what type of account just logged on; Windows doesn’t know the difference between end-user, admin and service accounts. If your naming convention embeds that information, however, you can easily compare account type and logon type and alert on inappropriate combinations. Let’s say that your naming convention specifies that service accounts all begin with “s-“. Now all you need to do is set up a rule to alert you whenever it sees Event ID 4624 where Logon Type is 2 or 10 and the account name is like “s-*”.
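
A minimal sketch of that rule, assuming the event has already been parsed into fields (the field names here are placeholders):

    INTERACTIVE_LOGON_TYPES = {2, 10}   # 2 = console logon, 10 = Remote Desktop (RemoteInteractive)

    def service_account_interactive_logon(event):
        # event: parsed Windows security event with 'event_id', 'logon_type', 'account_name' (assumed fields)
        return (event["event_id"] == 4624
                and event["logon_type"] in INTERACTIVE_LOGON_TYPES
                and event["account_name"].lower().startswith("s-"))

    event = {"event_id": 4624, "logon_type": 10, "account_name": "s-backupsvc"}
    if service_account_interactive_logon(event):
        print("ALERT: interactive logon with service account", event["account_name"])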

This is just one example of why it is so valuable to implement naming conventions that embed key information about objects. If you name groups with prefixes or something else that tags privileged groups as such, it becomes very easy to detect whenever a member is added to a privileged group. Perhaps you follow certain procedures to protect privileged accounts from pass-the-hash attacks, such as limiting admins to logging on only from certain jump boxes. If privileged accounts and jump-box systems are recognizable as such by their names, then you can easily alert when a privileged account attempts a logon from a non-jump-box system.
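
In the same spirit, here is a sketch for the privileged-group case. Event IDs 4728, 4732 and 4756 are the member-added events for security-enabled global, local and universal groups; the “priv-” prefix and the field names are illustrative assumptions tied to the naming-convention idea above.

    GROUP_MEMBER_ADDED = {4728, 4732, 4756}   # member added to global / local / universal security group

    def privileged_group_change(event):
        # event: parsed Windows security event with 'event_id' and 'group_name' (assumed fields)
        return (event["event_id"] in GROUP_MEMBER_ADDED
                and event["group_name"].lower().startswith("priv-"))

    event = {"event_id": 4732, "group_name": "priv-Domain-Admins", "member": "jdoe"}
    if privileged_group_change(event):
        print("ALERT: member added to privileged group", event["group_name"])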

This of course requires upfront cooperation from administrators, who may be resistant to changing their naming styles just for the sake of logs. And you need to get to know the procedures and controls used to keep your network secure so that you can configure your SIEM to recognize when intruders or malicious insiders bypass those controls. But both challenges are worth the effort.

RFP table