Focus on assets, not threats


As defenders, it is our job to make the attackers’ lot in life harder. Push them up the “pyramid of pain”. Be a hard target, so they move on to a softer, easier one.

Hallmarks of a successful security monitoring team


Over the years, we have seen many approaches to implementing a security monitoring capability.

The “checkbox mentality” is common: the team uses only out-of-the-box functionality, perhaps including rules and reports, to meet a specific regulation.

The “big hero” approach is found in chaotic environments where tools are implemented with no planning or oversight, in “just do it” fashion. The results may be fine, but they are lost when the “big hero” moves on or loses interest.

The “strict process” organizations, which implement a waterfall model and rigid processes for change management and the like, frequently lack the agility demanded by today’s constantly evolving threats.

So what then are the hallmarks of a successful approach? Augusto Barrios described these factors here. Three factors are common:

  • Good people: Team members who know the environment and can create good use cases, and who know the selected technology and can craft the rules and configuration to suit.
  • Lightweight but clear processes: Recognize that it’s very hard to move from good ideas to real (and deployed) use cases without processes. Absent these, good ideas die a slow death.
  • Top-down and lateral support: The security team may have good people and processes to put together the use cases, but they are not an island. They will need continuous support to bring in new log sources, context data and the knowledge about the business and the environment required for implementation and optimization. They will need other people’s (IT ops, business application specialists) time and commitment, and that’s only possible with top-down support and empowerment.

Since it’s quite hard to get all of this right, an increasingly popular approach is to split the problem between the SIEM vendor and the buyer, each of whom has strengths critical to success. The SIEM vendor is expert in the technology and likely has well-defined processes for implementation and operational success, while the buyer knows the environment intimately. Together, good use cases can be crafted. Escalations from the SIEM vendor, who performs the monitoring, are passed to the buyer’s team, which provides lateral support. This approach has the potential to ramp up very quickly, since each team plays to its existing strengths.

The Gartner term for this approach is “co-managed SIEM.”

Want to get started quickly? Here is a link for you.

Where to focus efforts: Endpoint or Network?


The release of EventTracker 8, with its new endpoint threat detection capabilities, has led many to ask: a) how to obtain these new features, and b) whether monitoring efforts should focus on the endpoint or on traditional network attack vectors.

The answer to “a” is fairly simple and involves upgrading to the latest version; if you have licensed the suitable modules, the new features are immediately available to you.

The answer to “b” is not so simple and depends on your particular situation. After all, endpoint threat detection is not a replacement for signature-based network packet sniffers. If your network permits BYOD, allows business partners to connect entire networks to yours, or permits remote access, then network-based intrusion detection is a must (how can you insist on sensors on BYOD devices?).

On the other hand, malware can be everywhere, and anti-virus effectiveness is known to be weak. Phishing and drive-by exploits are real. Even an accurate inventory of endpoints (think traveling laptops) can be hard to maintain. All of this argues for endpoint-focused efforts as paramount.

So really, it’s not endpoint or network-focused monitoring; rather it’s endpoint and network-focused monitoring efforts.

Feeling overwhelmed at having to deploy/manage so much complexity? Help is at hand. Our co-managed solution called SIEM Simplified is designed to take the sting out of the cost and complexity of mounting an effective defense.

The fallacy of “protect critical systems”


Risk management 101 says you can’t possibly apply the same safeguards to all systems in the network. Therefore, you must classify your assets and apply greater protection to the “critical” systems—the ones where you have more to lose in the event of a breach. And so, desktops are considered less critical as compared to servers, where the crown jewels are housed.

But think about this: an attacker will most likely probe for the weakly defended spot, and thus many widespread breaches originate at the desktop. In fact, attackers often discover that crown jewels are also available at the workstations of key employees (the CEO’s assistant, for example), in which case there is no need to attack a hardened server at all.

So while it still makes sense to mount better defenses around critical systems, it’s equally sensible to be able to investigate compromised systems regardless of their criticality. To do so, you must be gathering telemetry from all systems. You may not be able to manage this under a BYOD policy, but you should definitely think about gathering data from beyond just “critical systems.”

The ETDR functionality built into the EventTracker 8 sensor (formerly agent) for Windows lets you collect this telemetry easily and efficiently. The argument here is that, given the current threat landscape, it is well worth covering not just critical systems but also desktops with this technology.

What’s new in EventTracker 8? Find out here.

Security Subsistence Syndrome


Security Subsistence Syndrome (SSS) is a mindset in an organization that believes it has no security choices and is underfunded, so it spends minimally, just enough to meet perceived statutory and regulatory requirements.

Andy Ellis describes this mindset as one “with attitude, not money. It’s possible to have a lot of money and still be in a bad place, just as it’s possible to operate a good security program on a shoestring budget.”

Catching Hackers Living off the Land Requires More than Just Logs


If attackers can deploy a remote administration tool (RAT) on your network, it makes their job so much easier. RATs give bad guys the luxury of being right there on your network: they can log keystrokes, capture screens, provide RDP-like remote control, steal password hashes, scan networks, and scan for files and upload them back home. So if you can deny attackers the use of RATs, you’ve just made life a lot harder for them.

Can you defeat a casual attacker?


The news is rife with stories of “advanced” and “persistent” attacks, much as it is with exotic health problems like Ebola. The reality is that you are much more likely to come down with the common cold than with Ebola. Thus, it makes more sense to pay close attention to what the Centers for Disease Control has to say about the cold than to stockpile Ebola serum.

In a similar vein, how good is your organization at fighting basic, commodity attacks?

Yes, the scary monsters (0-days, advanced/persistent attacks, state-sponsored superhackers) are real. But before worrying about them, how are you set up against traditional intrusion attempts that use tools, tactics and exploits that are five or more years old? After all, the vast majority of successful attacks are low-tech and old-school.

Want to rapidly improve your security maturity? Consider SIEM Simplified, our surprisingly affordable service that can protect you from 90% of the attacks for 10% of the do-it-yourself cost.

When is an alert not an alert?


The Riddler is one of Batman’s enduring enemies who takes delight in incorporating riddles and puzzles into his criminal plots—often leaving them as clues for the authorities and Batman to solve.

Question: When is a door, not a door?
Answer: When it’s ajar.

So riddle me this, Batman: When is an alert not an alert?

EventTracker users know that one of its primary functions is to apply built-in knowledge to reduce the flood of security/log data to a much smaller stream of alerts. In most cases, however, this is still too noisy without local context, so a risk score is computed that factors in the asset value of the target and the CVSS score of the source.
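EventTracker’s actual scoring formula isn’t shown here; as a minimal sketch of the idea, with invented weights and cut-offs, the computation might look like this:

# Illustrative only: the weights and thresholds below are made up,
# not EventTracker's actual formula.
def risk_score(cvss, asset_value):
    """Blend the source's CVSS score (0-10) with the asset value (1-10)."""
    return (cvss / 10.0) * asset_value   # yields 0..10

def priority(score):
    if score >= 7.0:        # hypothetical cut-offs
        return "Actionable"
    if score >= 4.0:
        return "Awareness"
    return "Log only"

print(priority(risk_score(cvss=9.3, asset_value=8)))  # -> Actionable
print(priority(risk_score(cvss=5.0, asset_value=3)))  # -> Log only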

This allows us to separate “alerts” into different priority levels. The broad categories are:

  • Actionable Alerts: these require immediate attention because the activity is likely to affect the network or critical data. The analogy is a successful break-in with the intruder inside the premises.
  • Awareness Alerts: there may not be anything to do, but administrators should become aware and perhaps plan to shore up defenses. The analogy is bad guys lurking on your street, observing when you enter and exit the premises and when it’s unoccupied.
  • Compliance Alerts: these may affect your compliance posture and so warrant either awareness or action on your part.

And so, there are alerts and there are alerts. Over-reacting to awareness or compliance alerts will drain your energy and eventually sap your enthusiasm, not to mention cost you in real terms. Under-reacting to actionable alerts will hurt you through inaction.

Can your SIEM differentiate between actionable and awareness alerts?
EventTracker can.
Find out more here.

Can you predict attacks?


The “kill chain” is a military concept related to the structure of an attack. In the InfoSec area, this concept is a way of modeling intrusions on a computer network.

Threats occur in up to seven stages. Not all threats need to use every stage, and the actions available at each stage can vary, giving an almost unlimited diversity of attack sets.

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command and Control
  • Actions on Objective

Of course, some of these steps can happen outside the defended network, and in those cases it may not be possible or practical to identify or counter them. However, the most common variety of attack is unstructured in nature, originates from external sources, and uses scripts or commonly available cracking tools. Such attacks can be identified through a number of techniques.

Evidence of such activity is a precursor to an attack. If defenders observe this activity coming from external sources, it is important to review what the targets are; often, these can be uncovered by a penetration test. Repeated attempts against specific targets are a clue.

A defense-in-depth strategy gives defenders multiple clues about such activity: IDS systems that detect attack signatures, logs that show the activity, and vulnerability scans that identify weaknesses.

To be sure, defending requires carefully orchestrated expertise. Feeling overwhelmed? Take a look at our SIEM Simplified offering where we can do the heavy lifting.

How to Detect Low Level Permission Changes in Active Directory


We hear a lot about tracking privileged access today because privileged users like Domain Admins can do a lot of damage. More importantly, if their accounts are compromised, the attacker gets full control of your environment. In line with this concern, many security standards and compliance documents recommend tracking changes to privileged groups like Administrators, Domain Admins and Enterprise Admins in Windows, and to related groups and roles in other applications and platforms.
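On Windows Server 2008 and later, member additions to security-enabled groups are recorded as Event IDs 4728 (global), 4732 (local) and 4756 (universal). As a minimal sketch, assuming the events have been exported one JSON object per line (a hypothetical schema, not any product’s actual format), flagging changes to sensitive groups looks like this:

import json

# Member-added events for security-enabled global/local/universal groups
GROUP_CHANGE_IDS = {4728, 4732, 4756}
SENSITIVE_GROUPS = {"Administrators", "Domain Admins", "Enterprise Admins"}

def privileged_group_changes(path):
    """Yield exported events that add members to sensitive groups.

    Assumes one JSON object per line with EventID, TargetGroup and
    MemberName fields (hypothetical export schema).
    """
    with open(path) as feed:
        for line in feed:
            event = json.loads(line)
            if (event.get("EventID") in GROUP_CHANGE_IDS
                    and event.get("TargetGroup") in SENSITIVE_GROUPS):
                yield event

for e in privileged_group_changes("security_events.jsonl"):
    print("ALERT:", e["MemberName"], "added to", e["TargetGroup"])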

The Attack on your infrastructure: a play in three parts


To defend against an attacker, you must know him and his methods. The typical attack launched on an IT infrastructure can be thought of in three stages.

Part 1: Establish a beachhead

The villain lures the unsuspecting victim into installing malware. This can be done in myriad ways: by sending an attachment from an apparently trustworthy source, through a drive-by infection from a website hosting malware, or via a USB drive. Attackers target the weakest link: the less-guarded desktop or a test system. Frontal assaults against heavily fortified and carefully watched servers are not practical.

Once installed, the malware usually copies itself to multiple spots to deter eradication, and it may “phone home” for further instructions. Malware usually lurks in the background, trying to obtain passwords or system lists to enable Part 2.

Part 2: Move laterally

As a further means to deter removal, malware moves laterally, copying itself to other machines and locations. This movement is also often from peripheral to more central systems (e.g., from workstations to file shares).

Part 3: Exfiltrate secrets

Having patiently gathered up secrets (intellectual property, passwords, credit card info, PII, etc.), usually compressed into zip or rar archives, the malware (or attacker) now sends the data outside the network, back to the attacker.

How do you defend yourself against this? A SIEM solution can help, or a managed SIEM solution if you are short on expertise.

Outsourcing versus As-a-Service


The (toxic) term “outsourcing” has long been vilified as the substitution of onshore jobs with cheaper offshore people. As noted here, outsourcing, by and large, has always really been about people. The story of outsourcing to date is one of service providers battling to deliver people-based services more productively, promising delivery beyond merely doing the existing work significantly cheaper and a bit better.

Do you need a Log Whisperer?


Quick, take a look at these four log entries:

  1. Mar 29 2014 09:54:18: %PIX-6-302005: Built UDP connection for faddr 198.207.223.240/53337 gaddr 10.0.0.187/53 laddr 192.168.0.2/53
  2. Mar 12 12:00:08 server2 rcd[308]: id=304 COMPLETE 'Downloading https://server2/data/red-carpet.rdf' time=0s (failed)
  3. 200.96.104.241 – – [12/Sep/2006:09:44:28 -0300] “GET /modules.php?name=Downloads&d_op=modifydownloadrequest&%20lid=-%20UNION%20SELECT%200,username,user_id,
    user_password,name,%20user_email,user_level,0,0%20FROM%20nuke_users HTTP/1.1” 200 9918 “-”
    “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)”
  4. Object Open:Object Server: Security
    Object Type: File
    Object Name: E:\SALES RESOURCE\2010\Invoice 2010 7-30-2010.xls
    Handle ID: –
    Operation ID: {0,132259258}
    Process ID: 4
    Image File Name:
    Primary User Name: ACCOUNTING$
    Primary Domain: PMILAB
    Primary Logon ID: (0x0,0x3E7)
    Client User Name: Aaron
    Client Domain: CONTOSO
    Client Logon ID: (0x0,0x7E0808E)
    Accesses: DELETE
    READ_CONTROL
    ACCESS_SYS_SEC
    ReadData (or ListDirectory)
    ReadEA
    ReadAttributes
    Privileges: –
    Restricted Sid Count: 0
    Access Mask: 0x1030089

Any idea what they mean?

No? Maybe you need a Log Whisperer — someone who understands these things.

Why, you ask?
Think security — aren’t these important?

Actually #3 and #4 are a big deal and you should be jumping on them, whereas #1 and #2 are routine — nothing to get excited about.

Here is what they mean:

  1. A Cisco firewall allowed a packet through (not a “connection” because it’s a UDP packet — never mind what the text says)
  2. An update attempt by an OpenSuSE Linux machine in which some software packages failed to update
  3. A SQL injection attempt on PHP Nuke
  4. Access denied to a shared resource in a Windows environment

Log Whisperers are the heart of our SIEM Simplified service. They are the experts who review logs, determine what they mean, and provide remediation recommendations in simple, easy-to-understand language.

Not to be confused with these guys.

And no, they don’t look like Robert Redford either. You are thinking about the Horse Whisperer.

Three Indicators of Attack


For many years now, the security industry has relied on “indicators of compromise” (IoCs) to act as clues that an organization has been breached. Every year, companies invest heavily in digital forensic tools to identify, in the aftermath of an attack, the perpetrators and which parts of the network were compromised.

All too often, businesses realize they are the victims of a cyber attack only once it’s too late. Only after an attack does a company find out what made it vulnerable and what it must do to make sure it doesn’t happen again.
This reactive stance was never useful to begin with and, given today’s threat landscape, is totally undone, as described by Ben Rossi.

Given the importance of identifying these critical indicators of attack (IoAs), here are three that IT departments should be tracking to gain the upper hand in today’s threat landscape; each is both meaningful and relatively easy to detect:

  1. After hours: Malware detection after office hours, or unusual activity such as access to workstations or, worse yet, servers and applications outside business hours, should raise a red flag (a sketch follows this list).
  2. Destination Unknown: Malware tends to “phone home” for instructions or to exfiltrate data. Connections from non-browsers, on non-standard ports, or to poor-reputation “foreign” destinations are a low-noise indicator of a breach.
  3. Inside Out: Per the Mandiant M-Trends report, more than 75% of attacks are carried out using stolen credentials. It is often acknowledged that insider attacks are much less common but much more damaging. When an outsider becomes a (privileged) insider, your worst nightmare has come true.
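As a toy sketch of the first indicator: flagging activity outside business hours is little more than a timestamp comparison (the office hours and the logon tuple below are assumptions):

from datetime import datetime

BUSINESS_HOURS = range(8, 19)   # 08:00-18:59; adjust to your office

def is_after_hours(when):
    """True for weekend activity or activity outside business hours."""
    return when.weekday() >= 5 or when.hour not in BUSINESS_HOURS

# Hypothetical interactive logon event: (user, host, timestamp)
user, host, when = "jsmith", "payroll-srv1", datetime(2015, 6, 13, 2, 47)
if is_after_hours(when):
    print("IoA: after-hours access by", user, "on", host)

The real work, as with any of these indicators, is in tuning: maintenance windows, shift workers and backup jobs all legitimately run after hours.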

Can you detect out-of-ordinary or new behavior? To quote the SANS Institute…Know Abnormal to fight Evil. Read more here.

Are You Listening to Your Endpoints?


There’s plenty of interest in all kinds of advanced security technologies: threat intelligence, strong/dynamic authentication, data loss prevention and information rights management. However, many organizations still don’t know that the most basic indicators of compromise on their network are new processes and modified executables.
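A toy version of that basic check: hash the executables on a host and diff against a stored baseline (the scan root and baseline file are placeholders; a real product collects this continuously via its sensor):

import hashlib
import json
import os

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root):
    """Map every .exe under root to its SHA-256 hash."""
    result = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".exe"):
                full = os.path.join(dirpath, name)
                result[full] = sha256_of(full)
    return result

with open("baseline.json") as f:    # a prior known-good scan
    baseline = json.load(f)
for path, digest in scan(r"C:\Program Files").items():
    if path not in baseline:
        print("NEW executable:", path)
    elif baseline[path] != digest:
        print("MODIFIED executable:", path)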

It’s all about detection, not protection


What did the 2015 Verizon DBIR show us?
• 200+ days, on average, before persistent attackers are discovered within the enterprise network
• 60%+ of breaches are reported by a third party
• 100% of breached networks were up to date on anti-virus

We’ve got detection deficit disorder.
And it’s costing us. Dearly!

Think of the time and money spent in detecting, with some degree of confidence, the location of Osama bin Laden. Then think of the time and money to dispatch SEAL Team 6 on the mission. Detection took ten years and cost hundreds of millions of dollars, while remediation took ten days and a few million dollars.

The same situation exists in your network. You have, say, 5,000 endpoints, and maybe 5 of them are compromised as you read this. But which ones? How do you get actionable intelligence so that you can dispatch your own SEAL Team 6?

This is the problem EventTracker 8 was designed to address: continuous digital forensics data collection using purpose-built sensors. Machine learning at the EventTracker Console sifts through the collected data to identify possible malware, lateral movement and exfiltration of data. The processes are all backed by the experts of the SIEM Simplified service.

The Detection Deficit


The gap between the ‘time to compromise’ and the ‘time to discover’ is the detection deficit. According to the Verizon DBIR, the trend lines of these two have been diverging significantly in the past few years.

Worse yet, the data shows that attackers are able to compromise a victim in days, but thereafter spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, the exposure comes from a third party.

This trend points to an ongoing detection deficit disorder and suggests that defenders struggle to uncover the indicators of compromise.

While the majority of these attacks involve malware inserted into the victim’s systems by a variety of methods, there is also theft of credentials, which can make an attack look like an inside job.

To overcome the detection deficit, defenders must look for other common evidence of compromise, including command-and-control activity, suspicious network traffic, file access and unauthorized use of valid credentials.

EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.

The Agent Advantage


For some time, “We use an agent for that” was a death knell for many security tools, while “agent-less” was the only game in town worth playing. Yes, people tolerate AV and device management agents, but that is where many organizations seemed to draw the line. And an agent just to collect logs? You’ve got to be kidding!

As Richard Bejtlich pointed out in this blog post from 2006, enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities.

Let’s not confuse the means with the end. The end is “security information/event monitoring”; getting the logs is the means to that end. Meanwhile, the threatscape of 2015 is dominated by polymorphic, persistent malware (dropped by phishing and stolen credentials), and our mission remains to defend the network.

Malware doesn’t write logs, but it does leave behind trace evidence on the host, evidence that you can’t get by monitoring the network. In any case, the rise of HTTPS by default has limited the ability of network monitors to peer inside the payload.

Thus the Agent Advantage, or the Sensor Advantage if you prefer.

Endpoints have first-hand information when it comes to non-signature-based attacks: processes, file accesses, configuration changes, network traffic, etc. This data is critical to early detection of malicious activity.

Is an “agent” just to collect logs not doing it for you? How about a “sensor” that gathers the endpoint data critical to detecting persistent cyber attacks? That is the EventTracker 8 sensor, which incorporates DFIR and UBA capabilities.

Strengthen your defenses where the battle is actually being fought – the endpoint


Defense-in-depth rightly holds that every security technology has a place, but are they really all created equal? Security is not a democratic process, and no one is going to complain about security inequality if you are successful at halting breaches. So I think we need to acknowledge a few things. Right now the bad guys are winning on the endpoint, in particular on workstations. One way or another, the attackers are getting users to execute bad code.

Why host data is essential for DFIR


Attacks on our IT networks are a daily fact of life. As defenders, our job is to make the attacker’s life harder and to deter them into going elsewhere. Almost any attack inevitably causes some type of host artifact to be left behind.

If defenders are able to quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target host and are usually not visible to network monitors.

Many modern attacks use malware that is dropped and executed on the target machine, or that hollows out existing valid processes to spawn child processes that can be hijacked.

A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, then the malware may be called svhost.exe. Another tactic is to keep the same name as the legitimate EXE but execute it from a different path.
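Both tactics are mechanically checkable. Here is a sketch against a small allow-list (the list and the process snapshot are illustrative; in practice the data would come from endpoint telemetry):

from difflib import SequenceMatcher

# Known-good process name -> expected path (illustrative, not exhaustive)
KNOWN = {"svchost.exe": r"c:\windows\system32\svchost.exe"}

def why_suspicious(name, path):
    name, path = name.lower(), path.lower()
    if name in KNOWN and path != KNOWN[name]:
        return "legitimate name, unexpected path"
    for good in KNOWN:
        if name != good and SequenceMatcher(None, name, good).ratio() > 0.9:
            return "name suspiciously close to " + good
    return None

# Hypothetical process snapshot: (name, image path)
snapshot = [("svhost.exe", r"c:\users\bob\appdata\svhost.exe"),
            ("svchost.exe", r"c:\temp\svchost.exe")]
for name, path in snapshot:
    reason = why_suspicious(name, path)
    if reason:
        print("FLAG", path, "->", reason)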

EventTracker 8 includes a new module called Advanced Security Analytics, which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE, including its hash, its full path name, its parent process, the publisher name, and whether or not it is digitally signed. Then, at the EventTracker Console, if the hash is being seen for the first time, it is compared against lists of known malware from sources such as virustotal.com, virusshare.com, etc. Analysts can also check the publisher name, source and digital signature to determine whether further investigation is warranted.
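The “first seen” lookup can be as simple as a reputation query by hash. A sketch using VirusTotal’s v3 REST API (the API key is a placeholder; rate limiting and error handling are omitted):

import requests   # third-party: pip install requests

API_KEY = "YOUR-VIRUSTOTAL-API-KEY"   # placeholder

def vt_detections(sha256):
    """Return (malicious, suspicious) counts, or None if VT has no record."""
    url = "https://www.virustotal.com/api/v3/files/" + sha256
    resp = requests.get(url, headers={"x-apikey": API_KEY})
    if resp.status_code == 404:       # hash never submitted to VirusTotal
        return None
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"], stats["suspicious"]

Note that a hash nobody has ever seen before is itself interesting: brand-new, one-off binaries are a hallmark of targeted malware.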

When tuned properly, this capability results in few false positives and can be used to rapidly detect attackers.

Want more information on EventTracker 8? Click here.

User Location Affinity


It’s clear that we are now working under the assumption of a breach. The challenge is to find attackers before they cause damage.

Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials figure in a whopping 76% of all network incursions.

However, the traditional IT security tools deployed at the perimeter to keep the bad guys out are helpless in these cases. Today’s complex cyber security attacks require a different approach.

EventTracker 8 includes an advanced security analytics package with behavior rules that self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.

In a nutshell, EventTracker learns typical user behavior for interactive logins. Once a baseline of behavior is established, out-of-the-ordinary behavior is flagged for investigation. This is done in real time and across all enterprise assets.

For example, if user susan typically logs into wks5, but her stolen credentials are used to log into server6, this would be identified as out of the ordinary and tagged for closer inspection.
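A toy version of the idea: baseline user-to-host logon pairs, then flag departures (the learning data and event shape are invented for illustration):

from collections import defaultdict

baseline = defaultdict(set)   # user -> hosts seen during the learning period

def learn(logons):
    """logons: iterable of (user, host) interactive-logon pairs."""
    for user, host in logons:
        baseline[user].add(host)

def check(user, host):
    if host not in baseline[user]:
        print("Out of ordinary:", user, "logged into", host,
              "- usually uses", sorted(baseline[user]))

learn([("susan", "wks5"), ("susan", "wks5"), ("bob", "wks7")])
check("susan", "server6")   # flagged: susan has never logged in here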

EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response.

The PCI-DSS Compliance Report


A key element of the PCI-DSS standard is Requirement 10: Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

However, the 2014 Verizon PCI Report, billed as an inside look at the business need for protecting payment card information, says: “Only 9.4% of organizations that our RISK team investigated after a data breach was reported were compliant with Requirement 10. By comparison, our QSAs found 31.7% compliance with Requirement 10. This suggests a correlation between the lack of effective log management and the likelihood of suffering a security breach.”

Here is a side benefit of paying attention to compliance: Consistent and complete audit trails can also significantly reduce the cost of a breach. A large part of post-compromise cost is related to the number of cards thought to be exposed. Lack of conclusive log information reduces the forensic investigator’s ability to determine whether the card data in the environment was exposed only partially or in full.

In other words, when (not if) you detect the breach, having good audit records will reduce the cost of the breach.

Organizations can’t prevent or address a breach unless they can detect it. Active monitoring of the logs from their cardholder data environments enables organizations to spot and respond to suspected data breaches much more quickly.

Organizations generally find enterprise log management hard, in terms of generating logs (covered in controls 10.1 and 10.2), protecting them (10.5), reviewing them (10.6), and archiving them (10.7).

Is this you? Here is how you spell relief: SIEM Simplified.

Which is it? Security or Compliance?


This is a classic chicken-and-egg question, but the two are too often thought to be the same. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.

Clearly they are not the same. Compliance means you meet a technical or non-technical requirement, and periodically someone verifies that you have met it.

Compliance requirements are established by standards bodies, which obviously do not know your network. They are established for the common good, out of industry-wide concern that information is not being protected, usually because security is poor. When you see an emphasis on compliance over security, it’s too often because the organization does not want to take the time to ensure that the network and its information are secure, so it relies on compliance requirements to feel better about its security.

The problem is that this gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as the breaches at Sony, Target, TJMaxx and so many others show. Although there are implementations of compliance that will make you more secure, you cannot base your company’s security policy on a third party’s compliance requirements.

So what comes first? Wrong question! Let’s rephrase: there needs to be a healthy relationship between the two, but one cannot substitute for the other.

Threat data or Threat Intel?


Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?

Threat intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while other ISACs are popping up across industry verticals, allowing many large companies to compile internal data.

All good, right? More is better, right? Actually, not quite.
Look closely: you are confusing “intelligence” with “data.”

As the Lieutenant Commander of the Starship Enterprise would tell you, Data is not intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer “Who is attacking our assets, and how?”, which would lead to a coherent defense.

The steps to use Threat Data are easily explained:
1) Compare observations on the local network against the threat data.
2) Alert on matches.
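Steps 1 and 2 really are mechanical. A sketch against a plain-text feed of known-bad IP addresses (both file formats here are assumptions):

import re

with open("threat_feed.txt") as f:    # assumed: one bad IP per line
    bad_ips = {line.strip() for line in f if line.strip()}

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

with open("firewall.log") as log:     # assumed: textual log export
    for line in log:
        hits = bad_ips.intersection(IP_RE.findall(line))
        if hits:
            print("MATCH", sorted(hits), "in:", line.rstrip())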

Now comes the hard part…

3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. Doing it effectively requires human skills that combine expert knowledge of the modern threatscape with knowledge of the network architecture.

This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion or degree) than expert knowledge of the modern threatscape and the contours and limitations of the threat data.

You could, of course, hire and dedicate staff to perform this function, but a) such staff are hard to come by, and b) budget for them is even harder.

What now?

Consider a co-managed solution like SIEM Simplified, where expert knowledge of the modern threatscape, applied in the context of your network, is provided by an external group. Combined with your internal resources to co-manage the problem, this can deliver improved coverage at an affordable price point.

How to shoot yourself in the foot with SIEM



Six ways to shoot yourself in the foot with SIEM technology:
1) Don’t plan; just jump in
2) Have no defined scope or use cases; whatever
3) Confuse SIEM with log management
4) Monitor noise; apply no filters
5) Don’t correlate with other technologies, e.g., IDS, vulnerability scanners, Active Directory
6) Staff poorly or not at all

For grins, here’s how programmers shoot themselves in the foot:

ASP.NET
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
C
You try to shoot yourself in the foot, but find out that the gun is actually a howitzer cannon.
C++
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
JavaScript
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SQL
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
UNIX
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

Click here for the Top 6 Uses of SIEM.

Venom Vulnerability exposes most Data Centers to Cyber Attacks


Just after a new security vulnerability surfaced on Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger as HeartBleed did.

Dubbed VENOM, for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw, uncovered by security firm CrowdStrike, that could expose most data centers to malware attacks, but only in theory.

Yes, the risk of the Venom vulnerability is theoretical: no real-world exploitation has been seen yet. Last year’s HeartBleed bug, on the other hand, was practically exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now let’s learn more about Venom:

Venom (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms. If exploited, it could allow an attacker to escape from a guest virtual machine (VM) and gain full control of the operating system hosting it, as well as of any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its virtual Floppy Disk Controller (FDC), which is used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, the senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

However, Geffner also added that, so far, there is no known exploit that can successfully take advantage of the vulnerability. Still, Venom is critical and disturbing enough to be considered a high-priority bug.

What successful exploitation of Venom requires:
For successful exploitation, an attacker sitting in the guest virtual machine would need sufficient permissions to access the floppy disk controller I/O ports.

On a Linux guest machine, an attacker would need either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

Even so, comparing Venom with HeartBleed is hardly a fair comparison. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply is not exploitable at the same scale.

Flaws like Venom are typically used in highly targeted attacks such as corporate espionage, cyber warfare or other attacks of that kind.

Did Venom poison cloud services?

Potentially more concerning, many large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and are potentially vulnerable to Venom.

However, the good news is that most of them have resolved the issue, assuring customers that they needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw affects a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

The Azure cloud service from Microsoft, on the other hand, uses Microsoft’s homegrown virtualization hypervisor technology, and therefore its customers are not affected by the Venom bug.

Meanwhile, Google also gave assurance that its Cloud Platform does not use the vulnerable software and thus was never vulnerable to Venom.

Patch now! Protect yourself

Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to Venom. Red Hat recommends that its users update their systems using the commands “yum update” or “yum update qemu-kvm.”

Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them to be on the safe side. Remember, merely restarting the guest operating system without powering it off is not enough, because the hypervisor would still be using the old QEMU binary.

See more at Hacker News.

Five quick wins to reduce exposure to insider threats


Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)

A data breach has serious consequences, both direct and indirect. Lost revenue and a tarnished brand reputation inflict harm long after incident resolution and post-breach clean-up. Still, many organizations don’t take the steps necessary to protect themselves from a potentially devastating breach.

But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”

How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. They also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.

1) Control the use of Administrator privilege
The misuse of administrative privileges is a primary method by which attackers spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running as a privileged user is fooled simply by surfing to a website hosting attacker content that can automatically exploit browsers. The file or exploit contains executable code that runs on the victim’s machine. Since the victim’s account has administrative privileges, the attacker can take over the machine completely and install malware to find administrative passwords and other sensitive data.
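A first step is simply knowing who holds the privilege on each machine. A sketch that lists local Administrators by parsing Windows’ built-in net command (the output layout is locale-dependent, so treat this as illustrative):

import subprocess

def local_admins():
    """Return members of the local Administrators group (Windows only)."""
    out = subprocess.run(["net", "localgroup", "Administrators"],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    # Member names appear between the dashed separator and the footer line.
    start = next(i for i, l in enumerate(lines) if l.startswith("---")) + 1
    return [l.strip() for l in lines[start:]
            if l.strip() and not l.lower().startswith("the command")]

print(local_admins())   # review anything unexpected in this list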

2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees have access only to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require stricter protection.

3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?

4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of the business. Attackers are very conscious of this and plan their exploits accordingly: carefully crafting phishing messages that look like routine, expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; and using nominally non-security-critical systems as jump points or bots.

5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks, by which time incidents often become much more costly than needed. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.

A guiding principle of IT Security is “Prevention is ideal but detection is a must.”

Have you reduced your exposure?

Secure, Usable, Cheap: Pick any two


This fundamental tradeoff between security, usability and cost is critical. Yes, it is possible to have both security and usability, but at a cost in money, time and personnel. Making something cost-efficient and usable, or even secure and cost-efficient, may not be very hard; making something both secure and usable is far more difficult and time consuming, because security takes planning and resources.

For a system administrator, usability is at the top of the list. For a security administrator, security is at the top of the list. No surprise there, really.

What if I told you that the two job roles are orthogonal? What gets a sys admin bouquets will get a security admin brickbats, and vice versa.

Oh, and when we say “cheap” we mean in terms of effort, whether by the vendor or by the user.

Security administrators face some interesting tradeoffs. Fundamentally, the choice is between a system that is secure and usable, one that is secure and cheap, or one that is cheap and usable. Unfortunately, we cannot have everything. Best practice is not to make the same person responsible for both security and system administration; the goals of those two tasks are far too often in conflict for anyone to succeed at both.

PCI-DSS 3.1 is here – what now?


On April 15, 2015, the PCI Security Standards Council (PCI SSC) announced the release of PCI DSS v3.1. This follows closely on the heels of PCI DSS 3.0, which went into full effect on January 1, 2015. There is a three-year cycle between major updates of PCI DSS; outside of that cycle, the standard can be updated to react to threats as needed.

The major driver of PCI DSS 3.1 is the industry’s conclusion that SSL version 3.0 is no longer a secure protocol and therefore must be addressed by the PCI DSS.

What happened to SSL?
The last version of the encryption protocol to be called “SSL” (version 3.0) was superseded by TLS, or Transport Layer Security, in 1999. While weaknesses were identified in SSL 3.0 at that time, it was still considered safe for use up until October 2014, when the POODLE vulnerability came to light. POODLE is a flaw in the SSL 3.0 protocol itself, so it’s not something that can be fixed with a software patch.

Bottom line
Any business software running SSL 2.0 or 3.0 must be reconfigured or upgraded.
Note: Most SSL/TLS deployments support both SSL 3.0 and TLS 1.0 in their default configuration. Newer software may support SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2; in these cases, the software simply needs to be reconfigured. Older software may support only SSL 2.0 and SSL 3.0 (if this is the case, it is time to upgrade).

How to detect SSL/TLS usage and version?
A vulnerability scan from the EventTracker Vulnerability Assessment Service (ETVAS), or another scanner, will identify insecure implementations.
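For a quick spot check of a single server, a few lines of Python will report the protocol version negotiated with default client settings (the host below is a placeholder; this shows what gets negotiated, not every version the server would accept, so use a scanner for a full audit):

import socket
import ssl

def negotiated_version(host, port=443):
    """Connect with default client settings; report the negotiated protocol."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()    # e.g. 'TLSv1.2'

print(negotiated_version("example.com"))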

SSL/TLS is a widely deployed encryption protocol. The most common use of SSL/TLS is to secure websites (HTTPS), though it is also used to:
• Secure email in transit (SMTPS or SMTP with STARTTLS, IMAPS or IMAP with STARTTLS)
• Share files (FTPS)
• Secure connections to remote databases and secure remote network logins (SSL VPN)
SIEM Simplified customers

The EventTracker Control Center will have contacted you about correctly configuring the Windows server instance hosting the EventTracker Manager Console to comply with this guideline. You must upgrade or reconfigure all other vulnerable servers in your network.

If you subscribe to ETVAS, the latest vulnerability reports will highlight any servers that must be reconfigured along with detailed recommendations on how to do so.

Three myths of IT Security


Myth 1: Hardening a system makes it secure
Security is a process, to be evaluated on a constant basis; nothing will put you into a permanent “state of security.” Did you really think that simply applying a hardening guide to a system would make it secure?

Threats exploit unpatched vulnerabilities, and not one of those threats would have been stopped by security settings alone; few settings can prevent your network from being attacked through an unpatched vulnerability.

Myth 2: If We Hide It, the Bad Guys Won’t Find It
Also known as security by obscurity, hiding the system doesn’t really help.

For instance, consider turning off SSID broadcast in wireless networks. Not only will you now have a network that is non-compliant with the standard, but your clients will also prefer a rogue network with the same name over the legitimate one. Oh, and given the proper tools, it takes only a few minutes to find the network anyway.

Another example is changing the banners on your web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the web site is running anyway. Second, most of the bad guys are not smart enough to check, so they just try all the exploits, including the IIS ones.

Yet another example is renaming the Administrator account. It is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account; they end up with an account called “Janitor3” with a comment of “Built-in account for administering the computer/domain.” This is not really likely to fool anyone.
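Those “couple of API calls” are no exaggeration. The built-in Administrator always carries relative ID 500 in its security identifier, no matter what it is renamed to. A sketch using the wmic command (the parsing is crude and assumes account names without spaces):

import subprocess

def real_administrator():
    """Find the built-in Administrator account regardless of display name."""
    out = subprocess.run(["wmic", "useraccount", "get", "name,sid"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines()[1:]:
        parts = line.split()
        # The built-in Administrator's SID always ends in RID 500.
        if len(parts) == 2 and parts[1].endswith("-500"):
            return parts[0]

print(real_administrator())   # prints 'Janitor3' in the example above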

Myth 3: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. In some environments you are willing to break things in the name of protection that you are not willing to break in others.

Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive.

Safeguards should be applied in proportion to risk.