Do you need a Log Whisperer?

Quick, take a look at these four log entries:

  1. Mar 29 2014 09:54:18: %PIX-6-302005: Built UDP connection for faddr 198.207.223.240/53337 gaddr 10.0.0.187/53 laddr 192.168.0.2/53
  2. Mar 12 12:00:08 server2 rcd[308]: id=304 COMPLETE 'Downloading https://server2/data/red-carpet.rdf' time=0s (failed)
  3. 200.96.104.241 - - [12/Sep/2006:09:44:28 -0300] "GET /modules.php?name=Downloads&d_op=modifydownloadrequest&%20lid=-%20UNION%20SELECT%200,username,user_id,
    user_password,name,%20user_email,user_level,0,0%20FROM%20nuke_users HTTP/1.1" 200 9918 "-"
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)"
  4. Object Open:
    Object Server: Security
    Object Type: File
    Object Name: E:\SALES RESOURCE\2010\Invoice 2010 7-30-2010.xls
    Handle ID: -
    Operation ID: {0,132259258}
    Process ID: 4
    Image File Name:
    Primary User Name: ACCOUNTING$
    Primary Domain: PMILAB
    Primary Logon ID: (0x0,0x3E7)
    Client User Name: Aaron
    Client Domain: CONTOSO
    Client Logon ID: (0x0,0x7E0808E)
    Accesses: DELETE
    READ_CONTROL
    ACCESS_SYS_SEC
    ReadData (or ListDirectory)
    ReadEA
    ReadAttributes
    Privileges: -
    Restricted Sid Count: 0
    Access Mask: 0x1030089

Any idea what they mean?

No? Maybe you need a Log Whisperer — someone who understands these things.

Why, you ask?
Think security — aren’t these important?

Actually, #3 and #4 are a big deal and you should be jumping on them, whereas #1 and #2 are routine, nothing to get excited about.

Here is what they mean:

  1. A Cisco firewall allowed a packet through (not really a “connection,” whatever the text says, since it’s a UDP packet); a parsing sketch of this entry follows the list
  2. An update attempt by an OpenSuSE Linux machine; some software packages are failing to download
  3. A SQL injection attempt on PHP Nuke
  4. Access denied to a shared resource in a Windows environment
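
Extracting the fields from an entry like #1 is actually the easy part; knowing that it is routine while #3 and #4 are emergencies is what takes a Log Whisperer. As a minimal sketch, here is how a Python snippet might pull the addresses and ports out of that Cisco PIX message. The regex and field names are illustrative only, not part of any shipping product:

  import re

  # Minimal sketch: pull the connection fields out of a Cisco PIX/ASA
  # "Built ... connection" message like entry #1 above. The pattern is
  # illustrative only; it is not a complete parser for PIX syslog.
  PIX_302005 = re.compile(
      r"%PIX-\d-302005: Built (?P<proto>\w+) connection for "
      r"faddr (?P<faddr>[\d.]+)/(?P<fport>\d+) "
      r"gaddr\s*(?P<gaddr>[\d.]+)/(?P<gport>\d+) "
      r"laddr (?P<laddr>[\d.]+)/(?P<lport>\d+)"
  )

  def parse_pix(line):
      """Return a dict of protocol, addresses and ports, or None if no match."""
      m = PIX_302005.search(line)
      return m.groupdict() if m else None

  entry = ("Mar 29 2014 09:54:18: %PIX-6-302005: Built UDP connection for "
           "faddr 198.207.223.240/53337 gaddr 10.0.0.187/53 laddr 192.168.0.2/53")
  print(parse_pix(entry))
  # {'proto': 'UDP', 'faddr': '198.207.223.240', 'fport': '53337', ...}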

Log Whisperers are the heart of our SIEM Simplified service. They are the experts who review logs, determine what they mean and provide remediation recommendations in simple, easy-to-understand language.

Not to be confused with these guys.

And no, they don’t look like Robert Redford either. You are thinking about the Horse Whisperer.



Three Indicators of Attack

For many years now, the security industry has become somewhat reliant on ‘indicators of compromise’ (IoC) to act as clues that an organization has been breached. Every year, companies invest heavily in digital forensic tools to identify the perpetrators and which parts of the network were compromised in the aftermath of an attack.

All too often, businesses are realizing that they are the victims of a cyber attack once it’s too late. It’s only after an attack that a company finds out what made them vulnerable and what they must do to make sure it doesn’t happen again.
This reactive stance was never ideal to begin with, and given today’s threat landscape it no longer holds up, as Ben Rossi describes.

Given the importance of identifying these critical indicators of attack (IoAs), IT departments should be tracking common attack activities in order to gain the upper hand in today’s threat landscape.

Here are three IoAs that are both meaningful and relatively easy to detect:

  1. After hours: Malware detection outside office hours, or unusual activity such as access to workstations or, worse yet, servers and applications, should raise a red flag (a detection sketch follows this list).
  2. Destination Unknown: Malware tends to “phone home” for instructions or to exfiltrate data. Connections from non-browser processes, on non-standard ports, or to poor-reputation “foreign” destinations are a low-noise indicator of a breach.
  3. Inside Out: More than 75% of attacks, per the Mandiant M-Trends report, are carried out using stolen credentials. Insider attacks are widely acknowledged to be much less common but much more damaging. When an outsider becomes a (privileged) insider, your worst nightmare has come true.
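
As a concrete illustration of the first indicator, here is a minimal sketch of flagging activity outside business hours. The record format and the 8 a.m. to 6 p.m. window are assumptions for the example, not EventTracker’s actual rules:

  from datetime import datetime

  # Assumed input: (timestamp, user, host) tuples from whatever log
  # pipeline you already have. The office-hours window is an example policy.
  OFFICE_START, OFFICE_END = 8, 18

  def after_hours(events):
      """Yield events that fall on a weekend or outside office hours."""
      for ts, user, host in events:
          if ts.weekday() >= 5 or not (OFFICE_START <= ts.hour < OFFICE_END):
              yield ts, user, host

  events = [
      (datetime(2015, 6, 3, 14, 12), "susan", "wks5"),       # routine
      (datetime(2015, 6, 3, 2, 47), "svc_backup", "srv1"),   # 2:47 a.m.
  ]
  for ts, user, host in after_hours(events):
      print(f"After-hours activity: {user} on {host} at {ts}")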

Can you detect out-of-ordinary or new behavior? To quote the SANS Institute: “Know Abnormal to Fight Evil.” Read more here.



It’s all about detection, not protection

What did the 2015 Verizon DBIR show us?
• 200+ days on average before persistent attackers are discovered within the enterprise network
• 60%+ of breaches are reported by a third party
• 100% of breached networks were up to date on anti-virus

We’ve got detection deficit disorder.
And it’s costing us dearly.

Think of the time and money spent in detecting, with some degree of confidence, the location of Osama Bin Laden. Then think of the time and money to dispatch SEAL Team 6 on the mission. Detection took ten years and cost hundreds of millions of dollars; remediation took ten days and a few million dollars.

The same situation is happening in your network. You have, say, 5,000 endpoints, and of those, maybe five are compromised as you read this. But which endpoints are compromised? How do you get actionable intelligence so that you can dispatch your own SEAL Team 6?

This is the problem EventTracker 8 was designed to address: continuous digital forensics data collection using purpose-built sensors. Machine learning at the EventTracker Console sifts through the collected data to identify possible malware, lateral movement and data exfiltration. The process is backed by the experts of the SIEM Simplified service.

Detection deficit disorder.
You can get coverage with EventTracker 8.



The Detection Deficit

The gap between the ‘time to compromise’ and the ‘time to discover’ is the detection deficit. According to the Verizon DBIR, the trend lines of these two have been diverging significantly in the past few years.

Worse yet, the data shows that attackers are able to compromise the victim in days, but thereafter spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, the discovery is made by a third party.

This trend points to an ongoing detection deficit disorder. The suggestion is that defenders struggle to uncover the indicators of compromise.

While the majority of these attacks are via malware inserted into the victim’s system by a variety of methods, there is also theft of credentials, which makes the attack look like an inside job.

To overcome the detection deficit, defenders must look for other common evidence of compromise. These include: command and control activity, suspicious network traffic, file access and unauthorized use of valid credentials.

EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.



The Agent Advantage

For some time, “We use an agent for that” was a death knell for many security tools, while “agent-less” was the only game in town worth playing. Yes, people tolerate AV and device management agents, but that is where many organizations seemed to draw the line. And an agent just to collect logs? You’ve got to be kidding!

As Richard Bejtlich pointed out in this blog post from 2006, enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities.

Let’s not confuse the means with the end. The end is “security information/event monitoring”; getting the logs is merely the means to that end. Meanwhile, the threatscape of 2015 is dominated by polymorphic, persistent malware (dropped by phishing and stolen credentials), and our mission remains what it always was: defend the network.

Malware doesn’t write logs, but it does leave behind trace evidence on the host, evidence that you can’t get by monitoring the network. In any case, the rise of HTTPS by default has limited the ability of network monitors to peer inside the payload.

Thus the Agent Advantage, or the Sensor Advantage if you prefer.

Endpoints have first-hand information when it comes to non-signature-based attacks: processes, file accesses, configuration changes, network traffic, and so on. This data is critical to early detection of malicious activity.

Is an “agent” just to collect logs not doing it for you? How about a “sensor” that gathers endpoint data critical to detecting persistent cyber attacks? That is the EventTracker 8 sensor, which incorporates DFIR and UBA.



Why host data is essential for DFIR

Attacks on our IT networks are a daily fact of life. As a defender, your job is to make the attacker’s life harder and to push them to go elsewhere. Almost any attack leaves some type of host artifact behind.

If defenders are able to quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target/host and usually not visible to network monitors.

Many modern attacks use malware that is dropped and executed on the target machine, or that hollows out existing valid processes to spawn child processes that can be hijacked.

A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, then the malware may be called svhost.exe. Another tactic is to maintain the same name as the legitimate EXE but have it executed from a different path.
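
A minimal sketch of catching that kind of masquerading might look like the following. The allow-list of process names and paths is illustrative only; a real deployment would build its own inventory:

  # Illustrative allow-list: well-known Windows EXEs and the path each
  # normally runs from. This is example data, not a complete inventory.
  KNOWN_GOOD = {
      "svchost.exe": r"c:\windows\system32\svchost.exe",
      "lsass.exe":   r"c:\windows\system32\lsass.exe",
  }

  def suspicious(image_path):
      """Return a reason string if a launched image looks like a masquerade."""
      path = image_path.lower()
      name = path.rsplit("\\", 1)[-1]
      if name in KNOWN_GOOD and path != KNOWN_GOOD[name]:
          return f"{name} running from unexpected path {path}"
      # Crude look-alike check: same letters but one character shorter.
      for good in KNOWN_GOOD:
          if name != good and len(name) == len(good) - 1 and set(name) <= set(good):
              return f"{name} looks like a misspelling of {good}"
      return None

  print(suspicious(r"C:\Windows\Temp\svchost.exe"))     # wrong path
  print(suspicious(r"C:\Windows\System32\svhost.exe"))  # look-alike name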

EventTracker 8 includes a new module called Advanced Security Analytics, which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE, including its hash, its full path name, its parent process, the publisher name and whether it is digitally signed. Then, at the EventTracker Console, if the hash is being seen for the first time, it is compared against lists of known malware from sources such as virustotal.com and virusshare.com. Analysts can also review the publisher name and signature status to determine whether further investigation is warranted.
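
Below is a simplified sketch of that first-seen hash check. The local sets stand in for whatever history and reputation feeds you maintain; none of it is EventTracker’s actual implementation:

  import hashlib

  # Hypothetical stand-ins: hashes already vetted in this environment,
  # and a set of known-bad SHA-256 values from your reputation sources.
  seen_hashes = set()
  known_bad = {"0" * 64}  # placeholder value, not a real malware hash

  def check_new_process(exe_path):
      """Hash a newly launched EXE; report only hashes not seen before."""
      with open(exe_path, "rb") as f:
          digest = hashlib.sha256(f.read()).hexdigest()
      if digest in seen_hashes:
          return None                      # already vetted, nothing new to do
      seen_hashes.add(digest)
      if digest in known_bad:
          return f"known malware hash: {digest}"
      return f"first-seen hash, queue for review: {digest}"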

When tuned properly, this capability results in few false positives and can be used to rapidly detect attackers.

Want more information on EventTracker 8? Click here.



User location affinity

It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials make up a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytics package with behavior rules that self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive logins. Once a baseline of behavior is established, out-of-the-ordinary behavior is identified for investigation. This is done in real time and across all enterprise assets.

For example, if user susan typically logs into wks5, but her stolen credentials are now used to log into server6, this would be identified as out of the ordinary and tagged for closer inspection.
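
Here is a toy sketch of that baseline-then-flag idea. The in-memory structures and the learn-on-first-sight rule are assumptions made for illustration, not EventTracker’s actual heuristics:

  from collections import defaultdict

  # Baseline: the set of hosts each user has interactively logged into.
  baseline = defaultdict(set)

  def observe_logon(user, host):
      """Return an alert string if this user/host pairing is new, else None."""
      if host in baseline[user]:
          return None                          # routine, matches the baseline
      first_time_user = not baseline[user]
      baseline[user].add(host)
      if first_time_user:
          return None                          # still learning this user
      usual = sorted(baseline[user] - {host})
      return f"Out-of-ordinary logon: {user} -> {host} (usual hosts: {usual})"

  observe_logon("susan", "wks5")                 # learning phase, no alert
  observe_logon("susan", "wks5")                 # routine, no alert
  print(observe_logon("susan", "server6"))       # new host for susan -> alert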

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response. For a quick introduction click here.



The PCI-DSS Compliance Report

A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).

Is this you? Here is how you spell relief – SIEM Simplified.



Which is it? Security or Compliance?

This is a classic chicken-and-egg question, but the two are too often thought to be the same. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a set of technical or non-technical requirements, and periodically someone verifies that you have met them.

Compliance requirements are established by standards bodies, who obviously do not know your network. They are established for the common good, because of industry-wide concerns that information is not being protected, usually because the security is poor. When you see an emphasis on compliance over security, it’s too often because the organization does not want to take the time to ensure that the network and its information are secure, so it relies on compliance requirements to feel better about its security.

The problem with that is that it gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as Sony, Target, TJMaxx and so many other breaches have shown. Although there are implementations of compliance that will make you more secure, you cannot base your company’s security policy on a third party’s compliance requirements.

So what comes first? Wrong question! Let’s rephrase: there needs to be a healthy relationship between the two, but one cannot substitute for the other.



Threat data or Threat Intel?

Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat Intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while many other ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good, right? More is better, right? Actually, not quite.
Look closely. You are confusing “intelligence” with “data”.

As the Lt. Commander of the Starship Enterprise would tell you, Data is not Intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets, and how?”, an answer that would lead to a coherent defense.

The steps to use Threat Data are easily explained (a sketch of the first two steps follows the list):
1) Compare observations on the local network against the threat data.
2) Alert on matches.
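
Steps 1 and 2 are mechanical enough to sketch; the feed and the observed connections below are made-up placeholders:

  # Hypothetical threat feed: a set of known-bad destination addresses.
  threat_feed = {"203.0.113.7", "198.51.100.23"}

  # Observations from the local network: (source host, destination IP) pairs.
  observed = [
      ("wks5", "93.184.216.34"),
      ("srv2", "203.0.113.7"),
  ]

  # Step 1: compare observations against the threat data.
  # Step 2: alert on matches. Step 3, the triage, still needs a human.
  for src, dst in observed:
      if dst in threat_feed:
          print(f"ALERT: {src} contacted known-bad address {dst}")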

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. Doing it effectively requires human skills that combine expert knowledge of the modern ThreatScape with knowledge of the network architecture.

This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion or degree) than expert knowledge of the modern ThreatScape and of the contours and limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but (a) such staff are hard to come by, and (b) budget for them is even harder.

What now?

Consider a co-managed solution like SIEM Simplified where the expert knowledge of the modern ThreatScape in the context of your network is provided by an external group. When this is combined with your internal resources to co-manage the problem, it can result in improved coverage at an affordable price point.