Archive

Why a Co-Managed SIEM?

In simpler times, security technology approaches were clearly defined and based primarily on prevention: firewalls, anti-virus, and web and email gateways. There were relatively few available technology segments, and a relatively clear distinction between security technology purchases and outsourcing engagements.

Organizations invested in the few well-known, broadly used security technologies themselves, and if outsourcing the management of these technologies was needed, they could be reasonably confident that all major security outsourcing providers would be able to support their choice of technology.

Gartner declared this to be true for both on-premises management of security technologies and remote monitoring/management of the network security perimeter (managed security services).

[Figure: Gartner Magic Quadrant]

So, what has changed? A recent survey of over 300 IT professionals by SC Magazine points to two main factors at play (get the full report here). The increasing complexity of the threat landscape has spawned more complex and expensive security technologies to combat those threats. This escalation in cost and complexity is then exacerbated by budget constraints and an ultra-tight cybersecurity labor market.

Net result? The “human element” is back at the forefront of security management discussions. The skilled security analyst and the subject matter expert for the technology in use have become far more difficult to recruit, hire, and retain. The market agrees: the security gear is only as good as the people you can get to manage it.

With today’s threat landscape, the focus is squarely on detection, response, prediction, continuous monitoring and analytics. This means a successful outcome is critically dependent on the “human element.” The choices are to procure security technology and either:

  • Deploy adequate internal resources to use them effectively, or
  • Co-source staff who already have experience with the selected technology (for instance, using our Co-managed SIEM)

If co-sourcing is under consideration, then the selection criteria must include the provider’s expertise with the selected security technology. Our Co-managed SIEM offering bundles comprehensive technology with expertise in its use.

Technology represents 20% or less of the overall challenges to better security outcomes. The “human element” coupled with mature processes are the rest of the iceberg, hiding beneath the waterline.

Compliance is not a proxy for due care

Regulatory compliance is a necessary step for IT leaders, but it is not sufficient to reduce residual IT security risk to tolerable levels. This is not news. But why is this the case? Here are three reasons:

  • Compliance regulations are focused on “good enough,” but the threat environment mutates rapidly. Therefore, any definition of “good enough” is temporary. The lack of specificity in most regulations is deliberate to accommodate these factors.
  • IT technologies change rapidly. An adequate technology solution today will be obsolete within a few years.
  • Circumstances and IT networks are so varied that no single regulation can address them all. Prescribing a common set of solutions for all cases is not possible.

The key point to understand is that the compliance guidance documents are just that — guidance. Getting certification for the standard, while necessary, is not sufficient. If your network becomes the victim of a security breach and a third party suffers harm, then compliance to the guidelines alone will not be an adequate defense, although it may help mitigate certain regulatory penalties. All reasonable steps to mitigate the potential for harm to others must have been implemented, regardless of whether those steps are listed within the guidance.

A strong security program is based on effective management of the organization’s security risks. A process to do this effectively is what regulators and auditors look for.

‘Twas the Night Before Christmas – an EventTracker Story

‘Twas the night before Christmas and all through HQ

Not a creature was stirring, except greedy Lou –

An insider thief who had planned with great care

A breach to occur while no one was there.

Lou began his attack without trepidation,

For all his co-workers were on their vacations.

He logged into Payroll and then in a flash

Transferred to his account a large sum of cash.

But Lou didn’t realize that what he was doing

Had sent an alert that something was brewing.

And who was receiving this urgent alert?

Why EventTracker’s staff, who are always at work.

While monitoring all of their client locations

EventTracker’s team received notifications.

Their software had noticed some behavior changes

That seemed to fall outside of the normal ranges.

Immediately, they picked up the phone

And rang for Lou’s boss, but no one was home.

But EventTracker’s staff had more than one number.

And Lou’s boss heard his cell, despite being mid-slumber.

During the call, they exchanged information.

And while Lou’s boss called the police station,

EventTracker immediately got to work

Shutting down Lou’s access to HQ’s network.

Lou is now spending his Christmas in jail.

And the money he stole was returned without fail.

As for EventTracker, what else can I say?

This story will be one more Catch of the Day.

Work Smarter – Not Harder: Use Internal Honeynets to Detect Bad Guys Instead of Just Chasing False Positives

Log collection, SIEM and security monitoring are the journey, not the destination. Unfortunately, the destination is often a false positive. This is because we’ve gotten very good at collecting logs and other information from production systems, then filtering that data and presenting it on a dashboard. But we haven’t gotten that good at distinguishing events triggered by bad guys from those triggered by normal everyday activity.

A honeynet changes that completely.

At the risk of perpetuating a bad analogy, I’m going to refer to the signal-to-noise ratio often thrown around when you talk about security monitoring. If you like that concept, picture an egg timer in the middle of Times Square at rush hour. Trying to hear it is like trying to pick out bad-guy activity in logs collected from production systems. Now put that egg timer in a quiet room. That’s the sound of a bad guy hitting an internal honeynet.

Honeynets on your internal network are normally very quiet.  The only legitimate stuff that’s going to hit them are things like vulnerability scanners, network mapping tools and… what else?  What else on your network routinely goes out and touches IP addresses that it’s not specifically configured to communicate with?

So you either configure those few scanners to skip your honeynet IP ranges, or else you leverage them as positive confirmation that your honeynet is working and reporting when it’s touched.  You just de-prioritize that expected traffic to an “honorable mention” pane on your dashboard.
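To make this concrete, here is a minimal sketch of a low-interaction honeynet listener: accept every connection, record who touched it, and drop it. This is an illustration in Python, not any product’s implementation; the port list and log file are assumptions, and in practice you would forward each record to your SIEM.

```python
# Minimal low-interaction honeypot: listen on a few tempting-looking but
# unused ports and log every connection attempt. By design, any touch
# is suspicious. Ports and log path are illustrative assumptions.
import datetime
import socket
import threading

HONEYPOT_PORTS = [2222, 8080, 3390]   # pick ports unused on this host
LOG_FILE = "honeypot.log"             # in practice, ship this to the SIEM

def listen(port: int) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (src_ip, src_port) = srv.accept()
        stamp = datetime.datetime.utcnow().isoformat()
        with open(LOG_FILE, "a") as log:
            log.write(f"{stamp} touch on port {port} from {src_ip}:{src_port}\n")
        conn.close()  # low interaction: accept, record, drop

threads = [threading.Thread(target=listen, args=(p,)) for p in HONEYPOT_PORTS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```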

On the other hand, unless someone foolishly publishes it, the bad guy isn’t going to know of the existence of your honeynet or its coordinates. So as he routinely scans your network, he’s inevitably going to trip over your honeynet — if you’ve done it right. But let’s talk about some of these points.

First, how would a bad guy find out about your honeynet?

  • Once he gets control of IT admin user accounts and reads their email, has access to your network and security documentation, etc. But if you have good privileged access controls this should be fairly late stage.  Honeynets are intended to catch intrusions at early to mid-stage.
  • By lurking on support forums and searching the Internet (e.g. Stackoverflow, honeynet vendor support sites). It goes without saying — don’t reveal your name, company or company email address in your postings.
  • By scanning your network. It’s pretty easy to identify honeynets when you come across them – especially low-interaction honeynets, which are most common. But guess what? Who cares? They’ve already set off the alarm. So this one doesn’t count.

So, honeynets are definitely a matter of security through obscurity. But you know what? We rely on security through obscurity a lot more than we think. Encryption keys are fundamentally security through obscurity. Just really, really, really good obscurity. And security through obscurity is only a problem when you are relying on it as a preventive control – like using a “secret” port number instead of requiring an authenticated connection. Honeynets are detective controls.

But what if you are up against not just a persistent threat actor but a patient, professional and cautious one who assumes you have a honeynet and you’re listening to it?  He’s going to tiptoe around much more carefully.  If I were him, I would only touch systems out there that I had reason to believe were legitimate production servers.  Where would I collect such information?  Places like DNS, browser history, netstat output, links on intranet pages and so on.

At this time, most attackers aren’t bothering to do that.  It really slows them down and they know it just isn’t necessary in most environments.  But this is a constant arms race, so it’s good to think about the future.  First, a bad guy who assumes you have a honeynet is a good thing because of what I just mentioned.  It slows them down, giving more time for your other layers of defense to do their job.

But are there ways you can optimize your honeynet implementation for catching the honeynet-conscious, patient attacker? One thing you can do is go through the extra effort and coordination with your network team to reserve more, smaller sub-ranges of IP addresses for your honeynet, so that it’s widely and granularly dispersed throughout your address space. This makes it harder to make a move without hitting your honeynet, and it undermines an assumption attackers usually find safe to make: that all your servers are in one static address range, workstations in another discrete DHCP range, and one big block is devoted to your honeynet.

The bottom line, though, is honeynets are awesome. You get very high detection value with a comparatively small investment. Check out my recent webinar on honeynets sponsored by EventTracker, who now offers Honeynet-as-a-Service that is fully integrated with your SIEM. Deploying a honeynet and keeping it running is one thing, but integrating it with your SIEM is another. EventTracker nails both.

Top three reasons SIEM solutions fail

We have been implementing Security Information and Event Management (SIEM) solutions for more than 10 years. We serve hundreds of active SIEM users and implementations. We have had many awesome, celebratory, cork-popping successes. Unfortunately, we’ve also had our share of sad, tearful, profanity-filled failures. Why? Why do some companies succeed with SIEM while others fail? Here is a secret for you: the product doesn’t matter. The size of the company doesn’t matter. It’s something else. SIEM can deliver great results, but it can also soak up budget and time and leave you frustrated with the outcome. Here are the (all too) common reasons why SIEM implementations fail.

Reason 1: You don’t have an administrator in charge.

We call this the RUN function: a person in charge of platform administration. A sys admin who:

  • Keeps the solution up-to-date with upgrades and new versions
  • Performs system health checks, storage projections and log volume/performance analysis
  • Analyzes changes in log collection for new systems and non-reporting systems (a sketch of one such check follows this list)
  • Adds and configures users, standardized reports, dashboards and alerts
  • Generates Weekly System Status Report
  • Confirms external/third-party integrations are functioning normally: threat intel feeds, IDS, VAS
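To make the RUN function concrete, here is a minimal sketch of the non-reporting-sources check mentioned above. It assumes a CSV inventory of log sources with a last-seen timestamp, exported from whatever SIEM you run; the file name, column names and 24-hour threshold are illustrative assumptions.

```python
# Daily "non-reporting log sources" check (part of the RUN function).
# Assumes a CSV export with columns: name,last_event_utc
# (e.g. "DC01,2016-11-28 04:15:00"). Names and threshold are examples.
import csv
from datetime import datetime, timedelta

INVENTORY = "log_sources.csv"
MAX_SILENCE = timedelta(hours=24)   # tolerate up to one day of silence

def quiet_sources(path):
    now = datetime.utcnow()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_seen = datetime.strptime(row["last_event_utc"], "%Y-%m-%d %H:%M:%S")
            if now - last_seen > MAX_SILENCE:
                yield row["name"], now - last_seen

for name, silence in quiet_sources(INVENTORY):
    print(f"WARNING: {name} has sent no logs for {silence}")
```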

Reason 2: The boss isn’t committed.

For the SIEM solution to deliver value, the executive in charge must be fully committed to it, providing emotional, financial and educational support to the administrator. You tell your team that this is the company’s system and everyone’s going to use it. You invest in outside help to get it up and running, and use it the right way with the proper training and service. You don’t cave in when people complain because they don’t like the color of the screen or the font, or that things take extra clicks, or that it’s not “user friendly.” For this system to work, your people will need to do more work. You provide resources to help them, but you stand firm because this is your network. You realize that using this product the right way will help you make your company safer…and more valuable. Stand firm. Commit. Or you will fail.

Reason 3: You’re not using the data.

Our best implementations have 2-3 key objectives satisfied by the SIEM system each day. Managers read these reports and rely on the data to help them secure their network. Have a few key objectives or you will fail. We call this the WATCH function, for obvious reasons.

We are a premier provider of SIEM solutions and services, but with all due respect, we would advise a client against buying a SIEM solution if they are not prepared to invest in an administrator or reports, or show little interest in adopting the system into their company culture.

How the EventTracker/Netsurion merger will bring you more powerful cybersecurity solutions

We are delighted that EventTracker is now part of the Netsurion family.

On October 13, 2016 we announced our merger with managed security services provider Netsurion. As part of the agreement, Netsurion’s majority shareholder, Providence Strategic Growth, the equity affiliate of Providence Equity Partners, made an investment in EventTracker to accelerate growth for our combined company. Netsurion’s managed security services protect multi-location businesses’ information, payment systems, and on-premise public and private Wi-Fi networks from data breaches, data loss, and other risks posed by hackers.

We are thrilled to join with a dynamic and leading security organization to provide a managed network security service that couples our cutting-edge managed SIEM offering with a state-of-the-art managed firewall.

As the threat landscape evolves rapidly and hackers become more sophisticated, it has become clear that comprehensive security solutions, like SIEM, are necessary to protect organizations from current and emerging threats and to keep your brand safe. However, many small and multi-location businesses cannot afford, and do not have the knowledge to manage, such complex systems. Combining our cloud-based SIEM capabilities with Netsurion’s expertise in managed security services allows us to deliver SIEM to a class of businesses that previously was unable to afford and manage such sophisticated security measures. Now a branch or remote office of any size, a franchise, or a sole proprietor operation can use Netsurion’s managed network security service or EventTracker’s SIEM services without the costs and complexity of full-time dedicated resources.

This transaction is only the beginning of a series of amazing new offerings we will be announcing in the coming months. We will soon be introducing a new product offering that will bring enterprise-level SIEM security down to the multi-location environment, as well as enhanced PCI-DSS compliance services, including a new FIM solution and PCI QSA consulting services.

Tracking Physical Presence with the Windows Security Log

How do you figure out when someone was actually logged onto their PC? By “logged onto” I mean physically present and interacting with their computer. The data is there in the security log, but it’s much harder to extract than you’d think.

First of all, while I said it’s in the security log, I didn’t say which one. The bad news is, it isn’t in the domain controller log. Domain controllers know when you log on, but they don’t know when you log off. This is because domain controllers just handle initial authentication to the domain and subsequent authentications to each computer on the network. These are reflected as Kerberos events for Ticket-Granting Tickets and Service Tickets, respectively. But domain controllers are not contacted and have no knowledge of when you log off – at all. In fact, look at the events under the Account Logon audit policy subcategory. These are the key domain controller events that are generated when a user logs on with a domain account. As you can see, there is no logoff event. That event is only logged by the Logoff subcategory.

And really, the whole concept of a discrete session with a logon and logoff has disappeared. You may remain “logged on” to your PC for days, if not weeks. So the real question is not, “Was Bob logged in?” It’s more like, “Was Bob physically present, interacting with the PC?” To answer this, you have to look at much more than simple logon/logoff events, which may be separated by long periods of time during which Bob is anywhere but at his computer.

Physical presence auditing requires looking at all the events between logon and logoff, such as when the console locks, the computer sleeps and screen saver events.

Logon session auditing isn’t just a curious technical challenge. At every tradeshow and conference I go to, people come to me with various security and compliance requirements where they need this capability. In fact, one of the cases that I was consulted as an expert witness centered around the interpretation of logon events for session auditing.

The only way to track actual logon sessions is to go to the workstation’s security log. There you need to enable 3 audit subcategories:

  1. Logon
  2. Logoff
  3. Other Logon/Logoff

Together, these 3 subcategories log the following events relevant to our topic:

  • 4624 – An account was successfully logged on
  • 4634 – An account was logged off
  • 4647 – User initiated logoff
  • 4800 – The workstation was locked
  • 4801 – The workstation was unlocked
  • 4802 – The screen saver was invoked
  • 4803 – The screen saver was dismissed

But how do you correlate these events? Because that’s what it’s all about when it comes to figuring out logon sessions. It is by no means a cakewalk. Matching these events is like sequencing DNA, but the information is there. The best thing to do is experiment for yourself. Enable the 3 audit subcategories above and then log on, wait for your screen saver to kick in, dismiss the screen saver, lock the console as though you are walking away, and then unlock it. Allow the computer to sleep. Wake it back up.

As you can see, there is some overlap among the above events. What you have to do is identify, between a given logon/logoff event pair (linked by Logon ID), the time periods within that session where the user was not present as a result of:

  • Sleep (that of the computer)
  • Hibernation
  • Screen saver
  • Console locked

And count any session as ending if you see any of the following (a minimal correlation sketch follows this list):

  • Event ID 4647 for that session’s Logon ID (User initiated logoff)
  • Event ID 4634 for that session’s Logon ID (Logoff)
  • Event ID 4608 – System startup
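As a concrete illustration, here is a minimal correlation sketch in Python. It assumes the events have already been collected and parsed into chronological (timestamp, event ID, logon ID) tuples for one workstation; sleep and hibernation, which are recorded separately, would be handled the same way as the lock and screen-saver pairs and are omitted here for brevity.

```python
# Sketch: estimate "actually present" seconds within one logon session
# by pairing the workstation events listed above. Input: chronological
# (timestamp: datetime, event_id: int, logon_id) tuples, pre-parsed.
ABSENT_START = {4800, 4802}         # locked, screen saver invoked
ABSENT_END = {4801, 4803}           # unlocked, screen saver dismissed
SESSION_END = {4634, 4647, 4608}    # logoff, user-initiated logoff, startup

def present_seconds(events, logon_id):
    logon_time = absent_since = None
    total = 0.0
    for stamp, event_id, lid in events:
        if lid != logon_id and event_id != 4608:   # 4608 has no session ID
            continue
        if event_id == 4624:                        # session begins
            logon_time = stamp
        elif event_id in ABSENT_START and absent_since is None:
            absent_since = stamp                    # user stepped away
        elif event_id in ABSENT_END and absent_since is not None:
            total -= (stamp - absent_since).total_seconds()
            absent_since = None                     # user came back
        elif event_id in SESSION_END and logon_time is not None:
            if absent_since is not None:            # session ended while away
                total -= (stamp - absent_since).total_seconds()
            return max(total + (stamp - logon_time).total_seconds(), 0.0)
    return max(total, 0.0)                          # session never closed
```

In practice you would run this per 4624/4634 pair, linked by Logon ID exactly as described above.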

As you can see, the information is there. But you have to collect it, and that is a challenge for most organizations because of the sheer number of workstations. SIEM solutions like EventTracker automate this for you, whether by remote event collection, which can be practical in some cases, or with the more feasible endpoint agent.

What is privilege escalation and why should you care?

A common hacking method is to steal information by first gaining lower-level access to your network. This can happen in a variety of ways: through a print server, via a phished email, or by taking advantage of a remote control program with poor security. Once inside, the hacker will escalate their access rights until they find minimally protected administrative accounts. That is where the real damage and data theft start. Given the number of Internet-facing servers and reused passwords, this outline of an attack plays out more often than anyone wants to admit, and it can be a very big threat. The good news is that fixing this isn’t very difficult, just requiring diligence and vigilance. It also helps if you have the right protective software, such as what you can purchase from EventTracker, to stop these sorts of “privilege escalation” attacks.

The first step is understanding how prevalent this really is, and not burying your head in the virtual sand. Consider the Black Hat 2015 Hacker Survey Report, which was done on behalf of Thycotic last December. The results showed 20% of those surveyed were able to steal privileged account credentials “all the time”. Wow. What is worse is that three-fourths of those surveyed during the conference saw no recent improvements in the security of privileged accounts. More depressing still, only six percent of those surveyed could never find any account information when they penetrated a network.

Granted, the survey is somewhat self-serving, since Thycotic (like EventTracker) sells security tools to track and prevent privilege escalation events.

Next, you should understand how the hackers work and what methods they use to penetrate your network. A great play-by-play article can be found here in Admin magazine. The author shows you how a typical hacker can move through your network, gathering information and trying to open various files and find unprotected accounts.  In the sample system used for the article, the author “found a very old kernel, 28 ports open for incoming connections, and 441 packages installed and not updated for a while.” This is certainly very typical.

So what can you do to be more proactive in this arena? First, if you aren’t using one of these tools, start checking them out today. You should certainly have one in your arsenal, and I am not just saying this because I am writing this blog here. They are essential security tools for any enterprise.

Second, clean up your server password portfolio. You want to strengthen privileged accounts and shared administrative access to critical local Windows and Linux servers (Lieberman Software has something called Enterprise Random Password Manager that will do this quite nicely). Any product you use should discover and strengthen all server passwords, encrypt them and store them in an electronic vault, and change them as often as your password policies dictate. These types of tools will also report on those resources that are still using their default passwords: a definite no-no and one of the easiest ways that a hacker can gain entry to your network.

An alternative, or an addition to the password cleanup is to use a single sign-on tool that can automate sign ons and strengthen passwords at the same time. There are more than a dozen different tools for this purpose: I reviewed a bunch of them for Network World about a year ago here.

Next, regularly audit your account and access logs to see if anyone has recently become a privileged user. Many security tools will provide this information; the trick is to use them on a regular basis, not just once when you first purchase them. Send yourself a reminder if you need the added incentive.
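If you want to script that reminder away entirely, here is a minimal sketch that shells out to the built-in Windows wevtutil tool to pull recent group-membership additions. Event IDs 4728, 4732 and 4756 are “a member was added to a security-enabled global/local/universal group”; the result count is an arbitrary example, and the script must run with rights to read the Security log.

```python
# Sketch: list recent additions to security-enabled groups by querying
# the Security log via the built-in wevtutil tool (run as administrator).
# 4728/4732/4756 = member added to a global/local/universal group.
import subprocess

QUERY = "*[System[(EventID=4728 or EventID=4732 or EventID=4756)]]"

result = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{QUERY}",
     "/c:25", "/rd:true", "/f:text"],      # newest 25 events, readable text
    capture_output=True, text=True, check=True,
)
print(result.stdout or "No recent group-membership additions found.")
```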

Finally, start thinking like a hacker. Become familiar with tools such as Metasploit and BackTrack that can be used to pry your way into a remote network and see any weaknesses. Know thy enemy!

Monitoring DNS Traffic for Security Threats

Cyber criminals are constantly developing increasingly sophisticated and dangerous malware programs. Statistics for the first quarter of 2016 compared to 2015 show that malware attacks have quadrupled.

Why DNS traffic is important

DNS plays an important role in how end users in your enterprise connect to the internet. Each connection made to a domain by a client device is recorded in the DNS logs. Inspecting DNS traffic between client devices and your local recursive resolver can reveal a wealth of information for forensic analysis.

DNS queries can reveal:

  • Botnets/malware connecting to C&C servers
  • Which websites were visited by an employee
  • Which malicious and DGA domains were accessed
  • Which dynamic domains (DynDNS) were accessed
  • DDoS attack indicators such as NXDomain, phantom domain, and random subdomain queries

Identifying the threats using EventTracker

While parsing each DNS log, we verify each domain accessed against:

  • A malicious domain database (updated on a regular basis)
  • Domain Generation Algorithm (DGA) patterns

Any domain that matches the above criteria warrants attention; an alert is generated along with the client that accessed it and the geolocation information of the domain (IP, country).

Using behavior analysis, EventTracker tracks the volume of connections to each domain accessed in the enterprise. If the volume of traffic to a specific domain is significantly above its average, alert conditions are triggered. When a domain is accessed for the first time, we check the following (a heuristic sketch of such first-contact checks appears after this list):

  • Is this a dynamic domain?
  • Is the domain registered recently or expiring soon?
  • Does the domain have a known malicious TLD?
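To make these first-contact checks concrete, here is a heuristic sketch. The dynamic-DNS suffixes, “suspicious” TLDs and entropy threshold are assumptions chosen for illustration, not EventTracker’s actual rules, and the registration-age check would require a WHOIS lookup that is omitted here.

```python
# Illustrative first-contact checks for a newly seen domain. Lists and
# thresholds below are examples only; tune them to your environment.
import math
from collections import Counter

DYNAMIC_SUFFIXES = {"dyndns.org", "no-ip.org", "duckdns.org"}   # examples
SUSPICIOUS_TLDS = {"top", "xyz", "pw"}                           # examples

def entropy(s):
    """Shannon entropy of a string; DGA-style labels tend to score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def first_contact_flags(domain):
    flags = []
    if any(domain.endswith("." + suffix) for suffix in DYNAMIC_SUFFIXES):
        flags.append("dynamic DNS provider")
    if domain.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        flags.append("suspicious TLD")
    label = domain.split(".")[0]
    if len(label) >= 10 and entropy(label) > 3.0:   # crude DGA heuristic
        flags.append("high-entropy label (possible DGA)")
    return flags

print(first_contact_flags("xk2qwvz9llqp.top"))   # flags TLD and entropy
```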

Recent trends show that cyber criminals may create dynamic domains as command and control centers. These domains are activated for a very short duration and then discarded, which makes the above checks even more important.

EventTracker does statistical/threshold monitoring of queries, clients, record types and errors. This helps in detecting many DDoS attacks such as the NXDomain attack, phantom domain attack, random sub-domain attack, etc. EventTracker’s monitoring of client DNS settings helps detect DNS hijacking, generating an alert for anything suspicious that includes information about the client as well as its DNS settings. The EventTracker flex dashboard helps correlate attack detection data and client details, making attack detection simpler.

Monitoring the DNS logs is a powerful way to identify security attacks as they happen in the enterprise, enabling successful blocking of attacks and fixing vulnerabilities.

Idea to retire: Do more with less

Ideas to Retire is a TechTank series of blog posts that identify outdated practices in public sector IT management and suggest new ideas for improved outcomes.

Dr. John Leslie King, the W.W. Bishop Professor in the School of Information at the University of Michigan, contributed a post hammering the idea of “do more with less,” calling it a “well-intentioned but ultimately ridiculous suggestion.”

King writes: “Doing more with less flies in the face of what everyone already knows: we do less with less. This is not our preference, of course. Most of us would like to do less, especially if we could have more. People are smart: they do not volunteer to do more if they will get less. Doing more with less turns incentive upside down. Eliminating truly wasteful practices and genuine productivity gains sometimes allows us to do more with less, but these cases are rare. The systemic problems with HealthCare.gov were not solved by spending less, but by spending more. Deep wisdom lies in matching inputs with outputs.”

IT managers should respond to suggestions of doing more with less by assessing what really needs to be done: what can reasonably be discarded or added so that the IT staff can go about their responsibilities without exceeding their limits?

Considering these ideas as they relate to IT security, a way to match inputs with outputs may be a co-managed solution focused on outcome. Rather than merely acquiring technology and then watching it gather dust as you struggle to build process and train (non-existent) staff to utilize it properly, start with the end in mind: the desired outcome. If that is a well-managed SIEM solution (and associated technology), then a co-managed SIEM approach may provide the way to match outputs with inputs.

How to control and detect users logging onto unauthorized computers

Windows gives you several ways to control which computers can be logged onto with a given account.  Leveraging these features is a critical way to defend against persistent attackers.  By limiting accounts to appropriate computers you can:

  • Enforce written policies based on best practice
  • Slow down or stop lateral movement by attackers
  • Protect against pass-the-hash and related credential harvesting attacks

The first place to start using this mitigation technique is with privileged accounts.  And the easiest way to restrict accounts to specified computers is with the allow and deny logon rights.  In Group Policy, under User Rights, you will find an “allow” and “deny” right for each of Windows’ five types of logon sessions:

  • Local logon (i.e. interactive logon at the console)
  • Network logon (e.g. accessing remote computer’s file system via shared folder)
  • Remote Desktop (i.e. Terminal Services)
  • Service (when a service is started in the background, its service account is logged on in this type of session)
  • Batch (i.e. Scheduled Task)

Of course, if an account has both “Allow log on locally” and “Deny log on locally,” the deny right will take precedence. By careful architecture of OUs, group policy objects and user groups, you can assign these rights to the desired combinations of computers and users.

But because of the indirect nature of group policy and the many objects involved, it can be complicated to configure the rights correctly. It’s easy to leave gaps in your controls or inadvertently prevent appropriate logon scenarios.

In Windows Server 2012 R2, Microsoft introduced Authentication Policy Silos. Whereas logon rights are enforced at the member computer level, silos are enforced centrally by the domain controller. Basically, you create an Authentication Policy Silo container and assign the desired user accounts and computers to that silo. Now those user accounts can only be used for logging on to computers in that silo. Domain controllers only enforce silo restrictions when processing Kerberos authentication requests – not NTLM. To prevent user accounts from bypassing silo restrictions by authenticating via NTLM, siloed accounts must also be members of the new Protected Users group. Membership in Protected Users triggers a number of different controls designed to prevent pass-the-hash and related credential attacks – including disabling NTLM for member accounts.

[Image: ADAdmin Silo]

For what it’s worth, Active Directory has one other way to configure logon restrictions, and that’s with the Logon Workstations setting on domain user accounts.  However, this setting only applies to interactive logons and offers no control over the other logon session types.

Detecting Logon Violation Attempts

You can monitor failed attempts to violate both types of logon restrictions.  When you attempt to logon but fail because you have not been granted or are explicitly denied a given logon right, here’s what to expect in the security log.

  • Local computer where the logon was attempted: Event ID 4625 (Logon Failure). Failure reason: “The user has not been granted the requested logon type at this machine.” Status: 0xC000015B.
  • Domain controller: Event ID 4768 (successful Kerberos TGT request). Note that this is a success event; to the domain controller, this was a successful authentication.

As you can see there is no centralized audit log record of logon failures due to logon right restrictions.  You must collect and monitor the logs of each computer on the network.

On the other hand, here are the events logged when you attempt to violate an authentication silo boundary.

  • Local computer where the logon was attempted: Event ID 4625 (Logon Failure). Failure reason: “User not allowed to logon at this computer.” Status: 0xC000006E.
  • Domain controller: Event ID 4820 (failure). “A Kerberos Ticket-granting-ticket (TGT) was denied because the device does not meet the access control restrictions.” The silo is identified in the event.
  • Domain controller: Event ID 4768 (failed Kerberos TGT request). Result Code: 0xC.

An obvious advantage of Authentication Silos is the central control and monitoring. Just monitor your domain controllers for event ID 4820 and you’ll know about all attempts to bypass your logon controls across the entire network. Additionally, event ID 4820 reports the name of the silo, which makes policy identification instant.
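A minimal sketch of that central check might poll each domain controller for event ID 4820. The DC names below are placeholders, and wevtutil’s /r switch (remote query) requires credentials that can read the DCs’ Security logs.

```python
# Sketch: poll each domain controller for event ID 4820 (Kerberos TGT
# denied by an Authentication Policy Silo). DC names are placeholders.
import subprocess

DOMAIN_CONTROLLERS = ["DC1", "DC2"]   # placeholders for your DCs
QUERY = "*[System[(EventID=4820)]]"

for dc in DOMAIN_CONTROLLERS:
    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{QUERY}",
         "/c:10", "/rd:true", "/f:text", f"/r:{dc}"],
        capture_output=True, text=True,
    )
    if result.stdout.strip():
        print(f"Silo violation attempts reported by {dc}:\n{result.stdout}")
```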

Restricting privileged accounts is a key control in mitigating the risk of pass-the-hash and fighting modern attackers. Whether you enforce logon restrictions with user rights on local systems or centrally with Authentication Silos, make sure you don’t take a “fire and forget” approach in which you configure these valuable controls but neglect to monitor them. You need to know when an admin is attempting to circumvent controls or when an attacker is attempting to move laterally across your network using harvested credentials.

Detect Persistent Threats on a Budget

There’s a wealth of intelligence available in your DNS logs that can help you detect persistent threats.

So how can you use them to see if your network has been hacked, or check for unauthorized access to sensitive intellectual property after business hours?

All intruders in your network must re-connect with their “central command” in order to manage or update the malware they’ve installed on your systems. As a result, your infected network devices will repeatedly resolve the domain names that the attackers use. By mining your DNS logs, you can determine whether known bad domain names and/or IP addresses have been contacted by your systems. Depending on how current your “blacklist” of criminal domains is, and how rigid your network rules are regarding the IP destinations that domain names resolve to, DNS logs can help you spot these anomalies.
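At its simplest, the mining can be a straight blacklist match. Here is a budget sketch; the blacklist file (one domain per line) and the log layout (client, then queried domain, whitespace-separated) are assumptions you would adjust to your resolver’s actual format.

```python
# Budget sketch: match DNS query logs against a known-bad domain list.
# Log format and file names are assumptions; adapt to your resolver.
def load_blacklist(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_hits(log_path, blacklist):
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            client, domain = parts[0], parts[1].rstrip(".").lower()
            if domain in blacklist:
                yield client, domain

blacklist = load_blacklist("bad_domains.txt")
for client, domain in find_hits("dns.log", blacklist):
    print(f"{client} resolved known-bad domain {domain}")
```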

It’s not a comprehensive technique for detecting persistent threats, but it is a good, budget-friendly start.

Here is a recent webinar we did on the subject of mining DNS logs.

Dirty truths your SIEM vendor won’t tell you

Analytics is an essential component of a modern SIEM solution. The ability to crunch large volumes of log and security data in order to extract meaningful insight can lead to improvements in security posture. Vendors love to tell you all about features and how their particular product is so much better than the competition.

Yeah, right!

The fact is, many products are available and most of them have comparable features. While software is a necessary part of the analytics process, it’s less critical than product marketing hype would have you believe.

As Meta Brown noted in Forbes, “Your own thought processes – the effort you put in to understand the business problem, investigate the data available, and plan a methodical approach to analysis – can do much more to simplify your work and maximize your chance for success than any product could.”

Techies just love to show off their tech macho. They can’t get together without arguing about the power of their code, speed of their response or the size of their clusters.

The reality? Once you’ve invested in any of the comparable products, it’s the person behind the wheel that makes all the difference.

If you suffer from a skills shortage, our remotely managed SIEM Simplified solution may be for you.

Should I be doing EDR? Why isn’t anti-virus enough anymore?

Detecting virus signatures is so last year. Creating a virus with a unique signature or hash is quite literally child’s play, and most anti-virus products catch just a few percent of the malware that is active these days. You need better tools, called endpoint detection and response (EDR), such as those that integrate with SIEMs, that can recognize errant behavior and remediate endpoints quickly.

The issue is that hackers are getting better at covering their tracks, and leaving very few footprints of their dastardly deeds.

I like to think about EDR products in terms of hunting and gathering. Most traditional endpoint products that come from the anti-malware heritage are gatherers: they are used to collect malware that they can identify, based on known patterns. That worked well in the era when writing malware was a black art requiring specialized skills and tools. Now there are ready-made exploit kits such as Angler, and tools called packers and crypters. These have made it so easy to produce custom malware that the average teen can do it with a web browser and little programming knowledge.

But gathering is just one part of the ideal EDR product: it needs to be a hunter too. It should be able to find that proverbial needle in the haystack, especially when you don’t even know what the needle looks like, except that it is sharp and can hurt you. The ideal hunter should be able to track down malware based on a series of unfortunate events, by observing behaviors such as making changes to the Windows registry, dropping a command shell remotely or from within a browser session, or inserting an infected PDF document. While some “normal” apps exhibit these activities, most don’t. For example, some EDR products can track privilege escalation and credential spoofing, common activities of many hackers today who like to gain access to your network from a formerly trusted endpoint and use it as a base of operations to collect and export confidential data. To block this kind of behavior, today’s tools need to map internal or lateral network movement so you can track down which PCs were compromised and neutralize them before your entire network falls into the wrong hands.
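As one illustration of the kind of behavioral rule a hunter applies, here is a sketch that flags a command shell spawned by a browser or document reader, a classic sign of an exploited application. The tuples stand in for pre-parsed process-creation telemetry (for example, Windows event 4688 or Sysmon event 1); the process lists are illustrative assumptions.

```python
# Sketch of one behavioral "hunting" rule: a shell spawned by a browser
# or document reader is rarely legitimate. Lists below are examples.
RISKY_PARENTS = {"chrome.exe", "firefox.exe", "iexplore.exe",
                 "acrord32.exe", "winword.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe"}

def suspicious(parent, child):
    return parent.lower() in RISKY_PARENTS and child.lower() in SHELLS

# Stand-in for parsed process-creation events: (parent, child) pairs.
events = [("chrome.exe", "powershell.exe"), ("explorer.exe", "cmd.exe")]
for parent, child in events:
    if suspicious(parent, child):
        print(f"ALERT: {parent} spawned {child}")
```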

Part of the hunting experience is also being able to record what is happening to your network so you can go to the “videotape” playback function and see when something entered your environment and what endpoints it has infected. From there you should be able to isolate and remediate your PCs and return them to an uninfected state. Some EDR products offer a special kind of isolation feature that basically turns their network connection off, except for communicating back to the central monitoring console. That is a pretty nifty feature.

Finally, an EDR product should be able to use big data techniques to visualize trends and block potential attacks. Another aspect of this is integrating with a variety of security event feeds and intelligence from Internet sources such as VirusTotal.com. You might as well leverage what researchers around the world already know and have seen in the wild. Microsoft has jumped into this arena with Windows Defender Advanced Threat Protection. Announced at the RSA show in March, it will be slowly rolled out to all Windows 10 users (whether they want it or not) thanks to Windows Update. Basically, Microsoft is turning every Windows 10 endpoint into a sensor with this tool, sending the information to its cloud-based detection service called Security Graph. Other EDR vendors do similar things with their endpoint agents.

When you go shopping for an EDR product, ask your vendor these questions:

  • Do you need an agent-based or agentless tool? There are advantages to both approaches, depending on the mix of endpoint OSes and what you are trying to accomplish and protect.
  • What does the user see on their protected desktop? Some tools will obscure any listing in the Control Panel Programs or toolbar icons to make them stealthier.
  • Does the product offer real-time protection? This may be important, depending on your needs. Some products aren’t designed for this kind of response time and need to take a longer view of trends and behaviors.
  • How is the product configured, managed and priced? Some install quickly, some take consulting contracts to set up. Some are priced per endpoint or per server, others by purchasing a physical appliance.

EventTracker offers EDR functionality within its SIEM platform. You can learn more about it here.

Uncover C&C traffic to nip malware

In a recent webinar, we demonstrated techniques by which EventTracker monitors DNS logs to uncover attempts by malware to communicate with Command and Control (C&C) servers. Modern malware uses DNS to resolve algorithm-generated domain names to find and communicate with C&C servers. These algorithms have improved by leaps and bounds since they were first seen in Conficker.C. Early attempts were based on a fixed seed, so once the malware was caught, it could be decompiled to predict the domain names it would generate. The next improvement was to use the current time as a seed. Here again, once the malware is reverse engineered, it’s possible to predict the domain names it will generate. Nowadays, the algorithms may use things like the current trending Twitter topic as a seed to make prediction harder.
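To see why fixed- and time-seeded DGAs are predictable, consider this toy generator; it is an illustration only (hashing the date plus a counter), not any real malware family’s algorithm. A defender who recovers the algorithm can run the same code and pre-register or blacklist the very domains the malware will try.

```python
# Toy DGA: derive candidate C&C domains from today's date. Because the
# seed is predictable, defenders can generate the same list in advance.
# This scheme is an illustration, not a real malware family's DGA.
import hashlib
from datetime import date

def dga_domains(day, count=5, tld=".info"):
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        yield hashlib.md5(seed).hexdigest()[:12] + tld

print(list(dga_domains(date.today())))
```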

But hold on a second, you say – we don’t allow free access, we have a configured proxy in place and it will stop these attempts. Possibly. However, a study conducted between September 2015 and January 2016 showed that less than 34% of outbound connection attempts to C&C infrastructure were blocked by firewalls or proxy servers. Said differently, roughly two-thirds of the time an infected device successfully called out to a criminal operator.

Prevention technologies look for known threats. They examine inbound files and look for malware signatures. It’s more or less a one-time chance to stop the attacker from getting inside the network. Attackers have learned that time is their friend. Evasive malware attacks develop over time, allowing them to bypass prevention altogether. When no one is watching, the attack unfolds. Ultimately, an infected device will ‘phone home’ to a C&C server to receive instructions from the attacker.

DNS logs are a rich source of intelligence and bear close monitoring.

Maximize your SIEM ROI

Aristotle put forth the idea in his Poetics that a drama has three parts — a beginning or protasis, middle or epitasis, and end or catastrophe. Far too many SIEM implementations are considered to be catastrophes. Having implemented hundreds of such projects, here are the three parts of a SIEM implementation which, if followed, will minimize the drama and maximize the ROI. If you prefer the video version of this, click here.

The beginning or protasis

  • Identify log sources and use cases.
  • Establish retention period for the data set and who gets access to which parts.
  • Nominate a SIEM owner and a sponsor for the project.

The middle or epitasis

  • Install the SIEM Console
  • Push out and configure sensors or the log sources to send data
  • Enable alerting and required reporting schedules
  • Take log volume measurements and compare against projected disk space requirements (a quick sizing sketch follows this list)
  • Perform preliminary tuning to eliminate the noisiest and least useful log sources and types
  • Train the product owner and users on features and usage
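For the log volume step flagged above, the sizing arithmetic is simple enough to script. All input numbers below are examples; substitute your measured rates and the retention period agreed in the protasis phase.

```python
# Quick disk-sizing sketch for the "log volume measurements" step.
# All inputs are example figures; replace them with measured values.
events_per_second = 500        # measured across all log sources
avg_event_bytes = 600          # measured average raw event size
retention_days = 90            # agreed retention period
compression_ratio = 0.15       # assumed ~6-7x compression in the store

raw_per_day = events_per_second * avg_event_bytes * 86_400
stored_total = raw_per_day * retention_days * compression_ratio
print(f"Raw per day: {raw_per_day / 1e9:.1f} GB; "
      f"stored over {retention_days} days: {stored_total / 1e9:.0f} GB")
```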

The end or catastrophe

  • Review log volume and tune as needed
  • Review alerts for correctness and establish notification methods, if appropriate
  • Establish escalation policy – when and to whom
  • Establish report review process to generate artifacts for audit review
  • Establish platform maintenance cycle (platform and SIEM updates)

Detecting Ransomware: The Same as Detecting Any Kind of Malware?

Ransomware burst onto the scene with high profile attacks against hospitals, law firms and other organizations.  What is it and how can you detect it?  Ransomware is just another type of malware; there’s nothing particularly advanced about ransomware compared to other malware.

Ransomware uses the same methods to initially infect an endpoint, such as drive-by downloads and phishing emails. Then it generates the necessary encryption keys, communicates with command and control servers, and gets down to the business of encrypting every file on the compromised endpoint. Once that’s done it displays the ransom message and waits for the user to enter an unlock code purchased from the criminals. So at the initial stages of attack, detecting ransomware is like detecting any other endpoint-based malware: you look for new EXEs and DLLs and other executable content like scripts. For this level of detection, check out my earlier webinars with EventTracker.

As criminals begin to move from consumer attacks to targeting the enterprise, we are going to see more lateral movement between systems as the attackers try to either encrypt enough endpoints or work their way across the network to one or more critical servers.  In either case their attacks will take a little longer before they pull the trigger and display the ransom message because they need to encrypt enough end-user endpoints or at least one critical server to bring the organization to its knees.  These attacks begin to look similar to a persistent data theft (aka APT) attack.

Detecting lateral movement requires watching for unusual connections between systems that typically don’t communicate with each other. You also want to watch for user accounts attempting to log on to systems they normally never access. Pass-the-Hash indicators tie in closely with lateral movement, and that’s one of the things discussed in “Spotting the Adversary with Windows Event Log Monitoring: An Analysis of NSA Guidance”.

So much of monitoring for ransomware is covered by the monitoring you do for any kind of malware as well as persistent data theft attacks.  But what is different about ransomware?

  1. Detonation: The actual detonation of ransomware (file encryption) is a very loud and bright signal. There’s no way to miss it if you are watching.
  2. Speed: Enterprise ransomware attacks can potentially proceed much faster than data theft attacks.

Detonation

When ransomware begins encrypting files, it’s going to generate a massive amount of file I/O – both read and write. It has to read every file and write every file back out in encrypted format. The write activity may occur on the same file if it is re-written in place, or the ransomware may delete the original file after writing out an encrypted copy. In addition, if you watch which files ransomware is opening, you’ll see every file in each folder being opened one after another, for at least read access. You will also see that read activity, in bytes, is roughly matched by write activity.
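Here is a minimal sketch of detonation detection built on exactly that signature: alert when an unusually large share of files under a watched tree changes within one polling interval. The path, interval and threshold are illustrative assumptions, and a production implementation would also check which process is doing the writing.

```python
# Sketch: poll a directory tree and alert when too many files change in
# one interval (the read-everything/write-everything detonation
# signature). Path, interval and threshold are example values.
import os
import time

WATCH_ROOT = r"C:\Shares\Finance"   # placeholder path
INTERVAL_SECONDS = 60
CHANGE_FRACTION_ALERT = 0.20        # >20% of files touched per minute

def snapshot(root):
    mtimes = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass                 # file vanished mid-walk
    return mtimes

previous = snapshot(WATCH_ROOT)
while True:
    time.sleep(INTERVAL_SECONDS)
    current = snapshot(WATCH_ROOT)
    changed = sum(1 for p, m in current.items() if previous.get(p) != m)
    if current and changed / len(current) > CHANGE_FRACTION_ALERT:
        print(f"ALERT: {changed} of {len(current)} files modified in "
              f"{INTERVAL_SECONDS}s under {WATCH_ROOT}")
    previous = current
```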

Of course, there are potential ways ransomware could cloak this activity, either by going low and slow, encrypting files over many days, or by scattering its file access between many different folders instead of following an orderly progression through all files in one folder after another. But I think it will be a long time before enough attacks are getting foiled by such detection techniques that the attackers go to this extra effort.

How prone to false positives is this tactic? Well, what other legitimate applications have a similar file I/O signature? Backup and indexing programs would have a nearly identical file read signature, but would lack the matching write activity.

The downside to ransomware detonation monitoring is that detection means a ransomware attack is well underway.  This is late stage notification.

Speed

Ransomware attacks against an enterprise may proceed much faster than persistent data theft attacks, because data thieves have to find and gain access to data that is not just confidential but also re-saleable or otherwise valuable to the attacker. That may take months. On the other hand, ransomware criminals just need to do either of the following:

  1. Lock down at least one critical server – without which the organization can’t function. The server doesn’t necessarily need any confidential data, nor need it be re-saleable. On a typical network there are many more such critical servers than there are servers with data that’s valuable to the bad guy for re-sale or other exploitation.
  2. Forget servers and just spread to as many end-user endpoints as possible. If you encrypt enough endpoints and render them useless, you can ransom the organization without compromising any servers at all. Endpoints are typically much easier to compromise because, among other reasons, they are constantly exposed to untrusted content and used by less security-savvy end-users.

So beefing up your ransomware monitoring means continuing with what you are (hopefully) already doing: monitoring for indicators of any type of malware on your network and watching for signs of lateral movement between systems. But for ransomware you can also detect late-stage attacks by watching for the signature file I/O of unusual processes. You just need to be fast in responding.

And that’s the other way that ransomware differentiates itself from data theft attacks: the need for speed. Ransomware attacks can potentially reach detonation much faster than data thieves can find, gain access to, and exfiltrate data worth stealing. So, while the indicators of compromise might be the same for most of a ransomware or persistent data theft attack, reducing your time-to-response is even more important with ransomware.

Research points to SIEM-as-a-Service

SC Magazine released the results of a research survey focused on the rising acceptance of SIEM-as-a-Service for small and medium-sized enterprises.

The survey, conducted in April 2016, found that SMEs and companies with $1 billion or more in revenue or 5,000-plus employees faced similar challenges:

  • 64 percent of respondents agreed that they “lack the time to manage all the security activities.”
  • 49 percent reported a lack of internal staff to address IT security challenges.
  • 48 percent said they lacked the IT security budget needed to meet those challenges.

This comes as no surprise to us. We’ve been seeing these trends rise over the past several years. Gartner reports that by 2019, total enterprise spending on security outsourcing services will be 75 percent of the spending on security software and hardware products, and that by 2020, 40 percent of all security technology acquisitions will be directly influenced by managed security service providers (MSSPs) and on-premises security outsourcing providers, up from less than 15 percent today.

It used to be that firewalls and antivirus were sufficient stopgaps; but in today’s complex threatscape, the cyber criminals are more sophisticated. The weak point of any security approach is usually the unwitting victim of a phishing scam or the person who plugs in the infected USB drive; but “securing the human” requires the expertise of other humans: trained staff with the certifications and expertise to monitor the network and analyze the anomalies. An already busy IT staff can become even more overburdened; identifying, training and keeping security expertise is hard. So is keeping up with the alerts that come in on a daily basis, and staying current on the SIEM technology.

Thus the increasing movement toward a co-managed SIEM, which allows the enterprise to access the expertise and resources it needs to run an effective security program without ceding control. SIEM-as-a-Service: saving time and money.

You can download the SC Magazine report here.

Is it all about zero-day attacks?

The popular press makes much of zero-day attacks. These are attacks based on vulnerabilities in software that are unknown to the vendor. The security hole is exploited by hackers before the vendor becomes aware of it and hurries to fix it—this exploit is called a zero-day attack.

However, the reality is that 99.99% of exploits are based on vulnerabilities that have been known for at least one year. Hardly zero-day.

What does this mean to you? It means your defense strategy should prioritize vulnerability scanning to first identify, and then patch and manage, these known vulnerabilities. What is the point in obsessing over zero-day vulnerabilities when unpatched systems exist within your perimeter?

What’s so hard about this? Well, for many organizations, it’s the process and expertise that are needed to accomplish the related tasks. Procuring the technology is easy, but that represents, at most, 20% of the challenge of obtaining a successful outcome.

The people and process needed to leverage the technology are the other 80% of the challenge: the bulk of the iceberg below the waterline, which can sink your otherwise massive ship.

Welcome to the New Security World of SMB Partners

Yet another recent report confirms the obvious: that SMBs in general do not take security seriously enough. The truth is a bit more nuanced than that, of course—SMB execs generally take security very seriously, but they don’t have the dollars to do enough about it—although it amounts to the same thing.

This year, though, SMBs are going to have to look at security differently. Why? Because enterprise execs are repeatedly seeing their own networks hurt by less-than-terrific security at the SMB partners that handle their distribution, supplies, or anything from backup to bookkeeping. Faced with their own security mandates—whether from PCI, HIPAA, the European Union or any other external body—they are going to crack down on SMB partners.

Hence, unless you want those enterprise-level contracts to take a walk, your security return-on-investment (ROI) calculation just got a lot messier.

What new actions can SMBs expect from their enterprise-level partners in 2016? Until now, most have satisfied their obligations and kept their corporate counsels at bay through contractual agreements. In short, they put in their partner contracts that the partner is obligated to comply with a laundry list of security measures. Write it down, make SMB partners sign it and they’re all done.

The problem with enterprises relying solely on contractual obligations is that the proverbial stick (as in carrot and stick) is limited to reactive situations. If something bad happens to the enterprise operation’s security, a forensic investigation eventually points the finger at the SMB partner, and that probe specifically concludes that the SMB violated the contract’s obligations, then that SMB partner doesn’t merely lose the contract. They will almost certainly also be sued for the resultant damages, which could easily bankrupt some SMBs. That’s sufficient incentive/deterrent, right?

Not anymore. From the enterprise’s perspective, that stick only kicks in after a breach and only if enough evidence exists to tie it back to the SMB partner. Given the ever-increasing talent of many cyberthieves to hide and delete their trails, it’s a gamble that many cash-strapped SMBs are willing to take. What are the odds of both of those things happening, those SMB execs think, given the vast security arsenal deployed by their multi-billion-dollar enterprise partner?

Therefore, to increase real—as opposed to merely pledged—compliance with their SMB-partner security rules, enterprises are going to start conducting surprise snap inspections and demanding access to sensitive IT systems. Some might even go so far as to try to entrap partners by creating fake sub-suppliers to respond to the SMB partner’s RFPs and see if they follow the rules and demand what they are supposed to demand.

Why would enterprises go through this effort, seemingly to hurt partners? Because that’s what will be required. If XYZ enterprise doesn’t loudly and publicly expose and punish a couple of SMB partners, a sufficient deterrence won’t exist.

The whole point here is to change that SMB exec’s ROI calculation. By increasing the number of ways an SMB partner’s lack of security compliance can be caught/detected, they want that ROI to force those partners to invest the security dollars. The rationale is essentially: “If you won’t invest in security because you need to for your own company’s protection, or because you have signed a contract that you will, then do so because we need to make an example of somebody and you don’t want that to be you.”

Next step: how to deliver the most cost-effective security. Once you have accepted the new ROI calculation and decided that you must increase your security budget, the natural inclination—especially in an SMB environment—is to calculate the absolute minimum dollars needed to comply.

This is also known as checklist security, which is frowned upon. That said, it’s a step up from rolling the dice that you won’t get caught. Here’s a trick: guarantee your safety by having your people work with the enterprise partner’s IT security people on what your options are.

You may be surprised at how reasonable they can be. The best part is that by doing so—in e-mail as much as possible, to create a powerful paper trail—you are protected. Despite the bogus reputation of enterprise IT that they don’t sweat pricing details, they do. No one is better at squeezing a contractor nickel than a Fortune 500 IT security manager.

Not only will they steer you to the most cost-effective options, but they might even make a referral for you, so that you can benefit from a small taste of your partner’s volume-purchasing pricing. They might even help you out by participating directly in those vendor calls. After all, you are a partner.

And because you are working with them—and don’t forget that paper trail—you can’t be blamed for choosing whoever the enterprise IT people suggested.

OK, in reality, you can be blamed for anything.

Top 3 traits of a successful Security Operations Center

Traditional areas of risk — financial risk, operational risk, geopolitical risk, risk of natural disasters — have been part of organizations’ risk management for a long time. Recently, information security has bubbled to the top, and now companies are starting to put weight behind IT security and Security Operations Centers (SOC).

Easier said than done, though. Why, you ask? Two reasons:

  • It’s newer, so it’s less understood; process maturity is less commonly available
  • Skill shortages — many organizations might not yet have the right skill mix and tools in-house.

From our own experience creating and staffing an SOC over the past three years, here are the top three rules:

1) Continuous communication

It’s the fundamental dictum (sort of like “location” in real estate): bi-directional communication between management and the IT team.

Management communicates business goals to the technology team. In turn, the IT team explains threats and their translation to risk. Management decides the threat tolerance with their eye on the bottom line.

We maintain a Runbook for every customer which records management objectives and risk tolerance.
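
To make that concrete, here is a minimal sketch, in Python, of the kind of record a Runbook entry might hold. The field names and values are purely illustrative, not our actual schema:

    # A minimal, hypothetical sketch of a per-customer Runbook record.
    # Field names and values are illustrative only, not an actual schema.
    runbook_entry = {
        "customer": "Acme Corp",
        "business_goals": ["protect cardholder data", "pass PCI DSS audits"],
        "risk_tolerance": "low",          # decided by management, eye on the bottom line
        "crown_jewels": ["billing database", "customer PII file shares"],
        "escalation_contacts": ["it-manager@acme.example", "ciso@acme.example"],
        "review_cadence_days": 90,        # revisit as business goals change
    }

The exact shape matters less than the discipline: the record is written down, agreed to by management, and revisited on a schedule.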

2) Tailor your team

People with the right skills are critical to success and often the hardest to assemble, train and retain. You may be able to groom from within. Bear in mind, however, that even basic skills, such as log management, networking expertise and technical research (scouring through blogs, pastes, code, and forums), often come after years of professional information security experience.

Other skills, such as threat analysis, are distinct and practiced skill sets. Intelligence analysis, correlating sometimes seemingly disparate data to a threat, requires highly developed research and analytical skills and pattern recognition.

When building or adding to your threat intelligence team, especially concerning external hires, personalities matter. Be prepared for Tuckman’s stages of group development.

3) Update your infrastructure

Security is 24x7x365 – automatically collect, store, process and correlate external data with internal telemetry such as security logs, DNS logs, Web proxy logs, Netflow and IDS/IPS alerts. Query capabilities across the information store require an experienced data architect: design fast and nimble data structures with which external tools integrate seamlessly and bi-directionally. Understand not only the technical needs of the organization, but also stay in a continuous two-way feedback loop with the SOC, vulnerability management, incident response, project management and red teams.
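
To make the correlation step concrete, here is a minimal Python sketch that checks internal DNS logs against an external feed of known-bad domains. The file names and columns are assumptions for the example, not a prescribed format:

    import csv

    # Hypothetical inputs: a threat-intel feed with one known-bad domain per
    # line, and internal DNS logs exported as CSV.
    with open("threat_feed.txt") as f:
        bad_domains = {line.strip().lower() for line in f if line.strip()}

    # Assumed dns_log.csv columns: timestamp, client_ip, queried_domain
    with open("dns_log.csv") as f:
        for row in csv.DictReader(f):
            domain = row["queried_domain"].lower().rstrip(".")
            if domain in bad_domains:
                # In a real SOC this would raise an alert or a ticket;
                # here we simply print the hit for analyst review.
                print(f"{row['timestamp']} {row['client_ip']} -> {domain}")

A production pipeline runs this continuously and feeds analyst verdicts back into the feed, but the core join between external data and internal telemetry is exactly this simple.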

Easy, huh?

Feeling overwhelmed? Get SIEM Simplified on your team. We analyze billions of logs every day. See what we’ve caught.

Is the IT Organizational Matrix an IT Security Problem?

Do you embrace the matrix?

Not this one, but the IT Organizational Matrix, or org chart. The fact is, once networks get to a certain size, IT organizations begin to specialize and small kingdoms emerge. For example, endpoint management (aka Desktop) may be handled by one team, whereas the data center is handled by another (Server team).  Vulnerability scanning may be handled by a dedicated team but identity management (Active Directory? RSA tokens?) is handled by another.  At this level of organization, these teams tend to have their own support infrastructure.

However, InfoSec controls are not separable from IT.  What is a matrix at the organizational level becomes, at the information level, a graph of security dependencies.  John Lambert explains in this blog post.

For example, the vulnerability scanning systems may use a “super privileged account” that has admin rights on every host in the network to scan for weaknesses, but the scanners may be patched or backed up by the Server team with admin rights to them.  And the scanner servers themselves are accessed with admin rights from a set of endpoints that are managed by the Desktop team.
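
To see why that web of rights matters, here is a toy Python sketch of the dependency graph just described. An edge means “has admin rights on”; a breadth-first search then surfaces the same chain of hops an attacker would follow. All names are hypothetical:

    from collections import deque

    # Toy admin-rights graph: an edge A -> B means "A has admin rights on B".
    admin_rights = {
        "desktop-team-endpoint": ["scanner-server"],        # Desktop team manages these
        "server-team-account":   ["scanner-server"],        # Server team patches/backs up
        "scanner-server":        ["every-host-on-network"], # super privileged account
    }

    def lateral_path(graph, start, target):
        """Breadth-first search for a chain of admin rights from start to target."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(lateral_path(admin_rights, "desktop-team-endpoint", "every-host-on-network"))
    # -> ['desktop-team-endpoint', 'scanner-server', 'every-host-on-network']

One compromised desktop sits two hops from every host on the network, even though no org chart shows that relationship.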

This matrix arising from domain specialization creates a honeycomb of critical dependencies. Why is this a problem? Because it enables lateral movement. Attackers don’t know your map or org chart; they navigate the terrain as it actually exists, following the web of admin rights from host to host. Defenders, meanwhile, too often manage from the network map like good little blue tin soldiers.

If this is your situation, it’s time to simplify. Successful defenders manage from the terrain, not the map.

Cloud Security Starts at Home

Cloud security is getting attention and that’s as it should be.  But before you get hung up on techie security details, like whether SAML is more secure than OpenID Connect and the like, it’s good to take a step back.  One of the tenets of information security is to follow the risk.  Risk is largely a measure of damage and likelihood.  When you are looking at different threats to the same cloud-based data, the potential damage is constant, so risk becomes largely a function of the likelihood of each threat.

In the cloud we worry about the technology and the host of the cloud.  Let’s focus on industrial-strength infrastructure and platform-as-a-service clouds like AWS and Azure.  And let’s throw in O365; it’s not infrastructure or platform, but its scale and quality of hosting fit our purposes in terms of security and risk.  I don’t have any special affection for any of the cloud providers, but it’s a fact that they have the scale to do a better, more comprehensive, more active job on security than my little company does, and I’m far from alone.  This level of cloud doesn’t historically get hacked because of stupid operational mistakes, flimsy coding practices with cryptography and password handling, or obscure vulnerabilities in standards like SAML and OpenID Connect (though such vulnerabilities do exist).  It gets hacked because of tenant-vectored risks: either poor security practices by the tenant’s admins, or vulnerabilities in the tenant’s technology that the cloud is exposed to or relies on.

Here are just a few scenarios of cloud intrusions with a tenant origin vector:

  S.no.  Tenant Vulnerability                    Cloud Intrusion
  1.     Admin’s PC infected with malware        Cloud tenant admin password stolen
  2.     Tenant’s on-prem network penetrated     VPN connection between cloud and on-prem network gives the intruder a path into the cloud
  3.     Tenant’s Active Directory unmonitored   Federation/synchronization with on-prem AD results in an on-prem admin’s account having privileged access to the cloud

I’m going to focus on the last of these scenarios.  The point is that most organizations integrate their cloud with their on-prem Active Directory, and that’s as it should be.  We hardly want to go back to the inefficient and insecure world of countless user accounts and passwords per person.  We were able to largely reduce that over the years by bringing more and more on-prem apps, databases and systems online with Active Directory.  Let’s not lose ground on that with the cloud.

But your greatest risk in the cloud might just be right under your nose, here in AD on your local network.  Do you monitor changes in Active Directory?  Are you aware when there are failed logons or unusual logons to privileged accounts?  And I’m not just talking about admin accounts.  Just as important are the user accounts that have access to the data your security measures are all about.  That means identifying not just the IT groups in AD, but also the groups used to entitle users to that important data.  Very likely some of those groups are re-used in the cloud to entitle users there as well.  Of course, the same goes for the actual user accounts.
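
For a quick look before (or alongside) a full SIEM deployment, here is a minimal Python sketch that pulls recent failed logons (Windows Security event ID 4625) with the built-in wevtutil tool and flags any that mention a privileged account. The account names are hypothetical:

    import subprocess

    # Hypothetical privileged accounts: admins plus accounts entitled to key data.
    PRIVILEGED = {"Administrator", "svc-backup", "finance-data-owner"}

    # wevtutil ships with Windows; pull the 50 most recent failed logons (4625).
    out = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4625)]]", "/c:50", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Text output renders one "Event[n]:" block per event.
    for block in out.split("Event[")[1:]:
        for name in PRIVILEGED:
            if name in block:
                print(f"Failed logon touching privileged account: {name}")

This is crude, run-it-by-hand monitoring; the point of a SIEM is to do the same watching continuously, with context.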

Even if your network isn’t connected by VPN or any direct connection (like ExpressRoute for Azure/O365), and there’s no federation or sync between your on-prem and cloud directories, your on-prem, internal security efforts will still make or break your security in the cloud.  That’s simply because of scenario #1 above: at some point your cloud admin has to connect to the cloud from some device.  And if that device isn’t secure or the cloud admin’s credential handling is lax, you’re in trouble.

That’s why I say that most of us in the cloud need to look inward for risks first.  Monitoring, as always, is key.  The detective control you get with a well-implemented and correctly used SIEM is incredible, and it is often the only control you can deploy at key points, technologies or processes in your network.

2015 Cyber Attack Trends — 2016 Implications

Red teams attack, blue teams defend.
That’s us – defending our network.

So what attack trends were observed in 2015? And what do they portend for us blue team members in 2016?

The range of threats included trojans, worms, trojan downloaders and droppers, exploits and bots (backdoor trojans), among others. When untargeted (more common), the goal was profit via theft. When targeted, they were often driven by ideology.

Over the years, attackers have had to evolve their tactics to get malware onto computers that have improved security levels. Attackers are increasingly using social engineering to compromise computer systems because vulnerabilities in operating systems have become harder to find and exploit.

Ransomware that seeks to extort victims by encrypting their data is the new normal, replacing rogue security software or fake antivirus software of yesteryear that was used to trick people into installing malware and disclosing credit card information. Commercial exploit kits now dominate the list of top exploits we see trying to compromise unpatched computers, which means the exploits that computers are exposed to on the Internet are professionally managed and constantly optimized at an increasingly quick rate.

However, one observation made by Tim Rains, Chief Security Advisor at Microsoft, was: “although attackers have accumulated more tricks and tactics and seem to be using them in a more focused, fast paced way, they still focus on a relatively small number of ways to compromise computers.” These include:

  • Unpatched vulnerabilities
  • Misconfigured computers
  • Weak passwords
  • Social engineering

In fact, Rains goes on to note: “Notice I didn’t use the word ‘advanced.’”

As always, it’s back to basics for blue team members. The challenge is to defend:

  • At scale (every device on the network, no exceptions),
  • Continuously (even on weekends, holidays, etc.), and
  • While constantly updating and upgrading tactics

If this feels like Mission Impossible, then you may be well served by a co-managed service offering in which some of the heavy lifting can be taken on by a dedicated team.

Your SIEM relationship status: It’s complicated

On Facebook, when two parties are sort-of-kind-of together but also sort-of, well, not, their relationship status reads, “It’s complicated.” Oftentimes, Party A really wants to like Party B, but Party B keeps doing and saying dumb stuff that prevents Party A from making a commitment.

Is it like that between you and your SIEM?

Here are dumb things that a SIEM can do to prevent you from making a commitment:

  • Require a lot of work, giving little in return
  • Be high maintenance, cost a lot to keep around
  • Be complex to operate, require lots of learning
  • Require trained staff to operate

Simplify your relationship with your SIEM with a co-managed solution.

Certificates and Digitally Signed Applications: A Double-Edged Sword

Windows supports the digital signing of EXEs and other application files so that you can verify the provenance of software before it executes on your system.  This is an important element in the defense against malware.  When a software publisher like Adobe signs their application, they use the private key associated with a certificate they’ve obtained from one of the major certification authorities like Verisign.

Later, when you attempt to run a program, Windows can check the file’s signature and verify that it was signed by Adobe and that its bits haven’t been tampered with such as by the insertion of malicious code.

By default, Windows doesn’t enforce digital signatures or limit which publishers’ programs can execute, but you can enable that with AppLocker.  As powerful as AppLocker potentially is, it is also complicated to set up, except for environments with a very limited and standardized set of applications.  You must create rules for at least every publisher whose code runs on your system.

The good news, however, is that AppLocker can also be activated in audit mode.  And you can quickly set up a base set of allow rules by having AppLocker scan a sample system.  The idea with running AppLocker in audit mode is that you then monitor the AppLocker event log for warnings about programs that failed to match any of the allow rules.  Such a warning means the program has an invalid signature, was signed by a publisher you don’t trust, or isn’t signed at all.  The events to look for are 8003, 8006, 8021 and 8024, and they appear in the event logs under AppLocker as shown here:

AppLocker events to look for

These events are described here, which is part of the AppLocker Technical Reference.
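
For a quick taste of what that monitoring looks like, here is a minimal Python sketch that pulls those audit warnings from the AppLocker event channels on the local system, using the built-in wevtutil tool:

    import subprocess

    # AppLocker writes to dedicated channels. 8003/8006 are the audit-mode
    # warnings for EXE/DLL and MSI/script rules; 8021/8024 are the
    # packaged-app equivalents.
    CHANNELS = [
        "Microsoft-Windows-AppLocker/EXE and DLL",
        "Microsoft-Windows-AppLocker/MSI and Script",
        "Microsoft-Windows-AppLocker/Packaged app-Execution",
        "Microsoft-Windows-AppLocker/Packaged app-Deployment",
    ]
    QUERY = "*[System[(EventID=8003 or EventID=8006 or EventID=8021 or EventID=8024)]]"

    for channel in CHANNELS:
        # Pull the 20 most recent matches per channel, newest first.
        result = subprocess.run(
            ["wevtutil", "qe", channel, f"/q:{QUERY}", "/c:20", "/rd:true", "/f:text"],
            capture_output=True, text=True,
        )
        if result.stdout.strip():
            print(f"--- {channel} ---")
            print(result.stdout)

Note that this only sees the local machine, which is exactly the scale problem addressed next.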

If you are going to use AppLocker in audit mode for detecting untrusted software, remember that Windows logs these events on each local system.  So be sure you are using a SIEM with an efficient agent, like EventTracker, to collect these events, or use Windows Event Forwarding.

Better yet, if you have EventTracker, don’t bother with AppLocker – use EventTracker’s automatic digital forensics and incident response feature for unknown processes.  EventTracker watches each process (and soon each DLL) that your endpoints load and checks the EXE’s hash against your environment’s local whitelist (which EventTracker can automatically build).  If it isn’t found there, EventTracker checks it against the National Software Reference Library.  If the EXE still isn’t found to be legit, EventTracker posts it to the dashboard for you to review.  EventTracker automatically provides publisher information if the file is signed, plus other forensics such as the endpoint, user and parent process.  With one click you can check the process against anti-malware sites such as VirusTotal.  EventTracker goes way beyond AppLocker in detecting suspicious software and in giving you the tools and information to quickly determine whether a program is a risk, including the use of digital signatures.
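
To be clear about the underlying technique (this is a generic sketch, not EventTracker’s actual implementation), the core loop is: hash the executable, look it up in a local whitelist, and flag anything unknown for review:

    import hashlib

    def sha256_of(path):
        """Hash the executable image; the digest is the identity used for lookups."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical local whitelist of known-good hashes. A real system builds
    # and maintains this automatically; the value below is just a placeholder.
    local_whitelist = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def triage(exe_path):
        digest = sha256_of(exe_path)
        if digest in local_whitelist:
            return "known good"
        # A real system would next consult a reference set such as the
        # National Software Reference Library before raising it for review.
        return "unknown: post to dashboard for analyst review"

Everything layered on top of that loop (the forensics, the publisher information, the one-click VirusTotal lookup) is what turns a raw “unknown” into a decision.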

There are some other issues to be aware of, though, with digitally signed applications and certificates.  Certificates are part of a very complicated technology called Public Key Infrastructure (PKI).  PKI has so many components and ties together so many different parties that there is, unfortunately, a lot of room for error.  Here’s a brief list of what has gone wrong in the past year or so with signed applications and the PKI that signatures depend on:

  1. Compromised code-signing server: I’d said earlier that code signing allows you to make sure a program really came from the publisher and that it hasn’t been tampered with.  But that depends on how well the publisher protects their private key.  And unfortunately Adobe is a case in point.  A while back some bad guys broke into Adobe’s network and eventually found their way to the very server Adobe uses to sign applications like Acrobat.  They uploaded their own malware, signed it with the private key of Adobe’s code-signing certificate, and then proceeded to deploy that malware to target systems that graciously ran it as a trusted Adobe application.  How do you protect against publishers that get hacked?  There’s only so much you can do.  You can create stricter rules that limit execution to specific versions of known applications, but of course that makes your policy much more fragile.
  2. Fraudulently obtained certificates: Everything in PKI depends on the Certification Authority only issuing certificates after rigorously verifying that the party purchasing the certificate is really who they say they are.  This doesn’t always work.  A pretty recent example is Spymel, a piece of malware signed by a certificate DigiCert issued to a company called SBO Invest.  What can you do here?  Well, using something like AppLocker to limit software to known publishers does help in this case.  Of course, if the CA itself is hacked then you can’t trust any certificate issued by it.  And that brings us to the next point.
  3. Untrustworthy CAs: I’ve always been amazed at all the CAs Windows trusts out of the box.  It’s better than it used to be, but at one time I remember that my Windows 2000 system automatically trusted certificates issued by some government agency of Peru.  But you don’t have to trust every CA Microsoft does.  Trusted CAs are defined in the Trusted Root CAs store in the Certificates MMC snap-in, and you can control the contents of this store centrally via group policy (a quick way to inspect that store is sketched just after this list).
  4. Insecure CAs from PC vendors: Late last year Dell made the headlines when it was discovered that they were shipping PCs with their own CA’s certificate in the Trusted Root store.  This was so that drivers and other files signed by Dell would be trusted.  That might have been OK, but they mistakenly broke The Number One Rule in PKI: they failed to keep the private key private.  That’s bad with any certificate, let alone a CA’s root certificate.  Specifically, Dell included the private key with the certificate.  That allowed anyone who bought an affected Dell PC to sign their own custom malware with Dell’s private key and then, once it was deployed on other affected Dell systems, run it with impunity since it appeared to be legit and from Dell.
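
As mentioned in point 3, here is a minimal Python sketch that dumps the machine’s Trusted Root store using the built-in certutil tool, so you can review exactly whom your system trusts:

    import subprocess

    # certutil ships with Windows; "-store root" dumps the local machine's
    # Trusted Root Certification Authorities store.
    out = subprocess.run(
        ["certutil", "-store", "root"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Print one line per certificate so the list of trusted roots is reviewable.
    for line in out.splitlines():
        if line.strip().startswith("Issuer:"):
            print(line.strip())

If a root in that list surprises you (a PC vendor’s own CA, say), that is exactly the kind of review this exercise is for.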

So, certificates and code signing are far from perfect — show me any security control that is.  I really encourage you to try out AppLocker in audit mode and monitor the warnings it produces.  You won’t break any user experience, the performance impact is hardly measurable, and if you are monitoring those warnings you might just detect some malware the first time it executes instead of after the six months or so that detection takes on average.

Top 5 SIEM complaints

Here’s our list of the Top 5 SIEM complaints:

1) We bought a security information and event management (SIEM) system, but it’s too complicated and time-consuming, so we’re:

a) Not using it
b) Only using it for log collection
c) Taking log feeds, but not monitoring the alerts
d) Getting so many alerts that we can’t keep up with them
e) Way behind because the person who knew about the SIEM left

2) We’re updating technology and need to retrain to support it

3) It’s hard to find, train and retain security expertise

4) We don’t have enough trained staff to manage all of our devices

5) We don’t have trained resources to successfully respond to a security incident

What’s an IT Manager to do?
Get a co-managed solution, of course.
Here’s ours. It’s called SIEM Simplified.
Billions of logs analyzed daily. See what we’ve caught.

The Cost of False IT Security Alarms

Think about the burglar alarm systems that are common in residential neighborhoods. To the passive observer, an alarm system makes a lot of sense. It watches your home while you’re asleep or away, and calls the police or fire department if anything happens. So for a small monthly fee you feel secure. Unfortunately, there are a few things that the alarm companies don’t tell you.

1) Between 95% and 97% of calls (depending on the time of year) are false alarms.

2) The police regard calls from alarm companies as the lowest priority, and it can take anywhere between 20 and 30 minutes for them to arrive. It only takes the average burglar 5 minutes to break and enter, and be off with your valuables.

3) In addition, if your call does turn out to be a false alarm, the police and fire department have introduced hefty fines. It is about $130 for the police to be called out, and if fire trucks are sent, they charge around $410 per truck (protocol is to send 3 trucks). So one false alarm can cost you $130 + 3 × $410 = $1,360, well over $1,200.

With more than 2 million annual burglaries in the U.S., perhaps it’s worth putting up with so many false positives in service of the greater deterrent? Yes, provided we can sort out the false alarms that sap first responders.

The same is true of information security. If we know which alerts to respond to, we can focus our time on what matters. Tuning the system to reduce noise and remove false positives lets us concentrate on, and respond to, only the security events that truly matter.

While our technology does an excellent job of detecting possible security events, it’s our service, in which experts examine these alerts and make them relevant using context and judgment, that makes the difference between a rash of false positives and the alerts that truly matter.

SIEM: Sprint or Marathon?

Winning a marathon requires dedication and preparation. Over long periods of time. A sprint requires intense energy but for a short period of time. While some tasks in IT Security are closer to a sprint (e.g., configuring a firewall), many, like deploying and using a Security Information and Event Management (SIEM) solution, are closer to a marathon.

What are the hard parts?

  1. Identifying the scope
  2. Ingesting log data and filtering out noise events
  3. Reviewing the data with discipline

Surveys show that 75% of organizations need to perform significant discovery to determine which devices, platforms, applications and databases should be included in the scope for log monitoring. In other words, when most companies really evaluate their log monitoring process, they find they don’t know what they have. Additionally, 50% of organizations later realize that this initial discovery phase was not sufficient to meet their security needs; even after performing the discovery, they are not sure they have identified the right systems.

While on-boarding new clients, we usually identify legacy systems or firewall policies that generate large volumes of unnecessary data. This includes discovery of service accounts or scripts with expired credentials that generate suspicious-looking login failures. Other common items uncovered include network health monitoring systems that generate an abnormal amount of ICMP or SNMP activity, backup tools, and internal applications using non-standard ports and cleartext protocols. Each of these false positives or legitimate activities adds straw to the haystack(s), making it more difficult to find the needle. Every network contains activity that might appear suspicious to an outside observer who lacks background on the everyday workings of the company being monitored. It is important for network and security administrators to provide monitoring tools with that context and background detail, to account for the variety of networks thrown at them.
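
Here is a minimal Python sketch of how that context can be encoded; every rule value below is hypothetical. The point is that each known noise source becomes a documented suppression rule instead of tribal knowledge:

    # Hypothetical suppression rules: (field, value, documented reason).
    SUPPRESS = [
        ("account",   "svc-legacy-backup", "script with expired credentials"),
        ("source_ip", "10.0.5.20",         "network health monitor (ICMP/SNMP sweeps)"),
        ("dest_port", 8443,                "internal app on a non-standard port"),
    ]

    def known_noise_reason(event):
        """Return the documented reason if the event matches a suppression rule."""
        for field, value, reason in SUPPRESS:
            if event.get(field) == value:
                return reason
        return None

    event = {"account": "svc-legacy-backup", "type": "logon_failure"}
    print(known_noise_reason(event))  # -> "script with expired credentials"

Rules like these remove straw from the haystack without hiding it: the reason travels with the rule and can be revisited when the environment changes.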

Reviewing the data with discipline is a difficult ask for organizations with a lean IT staff. Since IT is often viewed as a “cost center,” it is rare to see organizations (esp. mid-sized ones) with suitably trained IT Security staff.

Take heart — if getting there using only internal resources is a hard problem, our SIEM Simplified service gets you there. The bonus is the cost savings compared to a DIY approach.

The Assume Breach Paradigm

Given today’s threat landscape, let’s acknowledge that a breach has either already occurred within our network or that it’s only a matter of time until it does. Security prevention strategies and technologies cannot guarantee safety from every attack. It is more likely that an organization has already been compromised, but just hasn’t discovered it yet.

Operating with this assumption reshapes detection and response strategies in a way that pushes the limits of any organization’s infrastructure, people, processes and technologies.

In the current threat landscape, a prevention-only focus is not enough to address determined and persistent adversaries. Additionally, with common security tools, such as antivirus and Intrusion Detection Systems (IDS), it is difficult to capture or mitigate the full breadth of today’s breaches. Network edge controls may keep amateurs out, but talented and motivated attackers will always find the means to get inside these virtual perimeters. As a result, organizations are all too often ill prepared when faced with the need to respond to the depth and breadth of a breach.

Assume Breach is a mindset that guides security investments, design decisions and operational security practices. Assume Breach limits the trust placed in applications, services, identities and networks by treating them all—both internal and external—as not secure and probably already compromised.

While Prevent Breach security processes, such as threat modeling, code reviews and security testing, may be common in secure development lifecycles, Assume Breach provides numerous advantages that help account for overall security by exercising and measuring reactive capabilities in the event of a breach.

assume breach

With Assume Breach, security focus changes to identifying and addressing gaps in:

  • Detection of attack and penetration
  • Response to attack and penetration
  • Recovery from data leakage, tampering or compromise
  • Prevention of future attacks and penetration

Assume Breach verifies that protection, detection and response mechanisms are implemented properly — even reducing potential threats from “knowledgeable attackers” (using legitimate assets, such as compromised accounts and machines).

To defend effectively, we must:

  • Gather evidence left by the adversary
  • Detect the evidence as an Indication of Compromise
  • Alert the appropriate Engineering and Operation team(s)
  • Triage the alerts to determine whether they warrant further investigation
  • Gather context from the environment to scope the breach
  • Form a remediation plan to contain or evict the adversary
  • Execute the remediation plan and recover from breach

Since this can be overwhelming for all but the largest organizations, our SIEM Simplified service is used by many organizations to supplement their existing teams. We contribute our technology, people and processes to the blue team and help defend the network.

See what we’ve caught recently.

5 IT Security resolutions

Ho hum. Another new year, time for some more New Year’s resolutions. Did you keep the ones you made last year? Meant to but somehow did not get around to it? This time how about making it easy on yourself?

New Year Resolutions for IT security

Here are some New Year’s resolutions for IT security that you can keep easily — by doing nothing at all!

5) Give out administrator privileges freely to most users. Less hassle for you. They don’t need to bother asking you to install software or grant access to files.

4) Don’t bother inventorying hardware or software. It changes all the time. It’s hard to maintain a list, and what’s the point anyway?

3) Allow unfettered mobile device usage in the network. You know they are going to bring their own phone and tablet anyway. It’s better this way. Maybe they’ll get more work done now.

2) Use default configurations everywhere. It’s far easier to manage. Factory resets are needed anyway, and then you can find the default password on Google.

And our favorite:

1) Ignore logs of every kind — audit logs, security logs, application logs. They just fill up disk space anyway.