Cybersecurity Trends and Predictions 2019

The year 2018 saw ransomware families such as CryptoLocker and variants like Locky continue to plague organizations as cybersecurity adversaries morph their techniques to avoid detection. Massive data breaches this year at Quora, Ticketmaster, and Facebook exposed over 200 million records worldwide. While high-profile breaches may make the news headlines, over 60% of small and mid-sized firms have experienced data loss or a breach themselves. While smaller firms may believe that they are not targeted by hackers, they comprise the global supply chain connected to much larger enterprises. SMBs also find their IT and security staff stretched thin, juggling day-to-day operations with cybersecurity capabilities insufficient for their unique organizational and industry-sector risks.

As the year winds down, here’s what small and mid-sized organizations may experience in 2019 with an eye towards enhancing security.

Cybersecurity Threats Impact Uptime:

Organizations of all sizes struggle to maintain uptime of point of sale (POS) systems and avoid lost productivity due to business data loss. Patching, ransomware, and data breaches all impact network and system uptime. Enhanced investment in your infrastructure and cybersecurity during 2019 ensures that your organization can detect and remediate threats quickly to meet resiliency and uptime objectives.

Malware Continues to Endanger Organizations:

Malware like viruses, worms, bots, and banking trojans will continue using advanced evasion techniques to challenge organizations and consumers alike. Malware that morphs and evades detection increases recovery costs; rapid detection and blocking will continue to be essential in minimizing dwell time and damage. While traditional anti-virus alone is not enough to stop malware, endpoint detection and response (EDR) software provides the enhanced protection necessary to catch new and otherwise unknown malware strains.

Cybersecurity Shortages Drive New Business Models:

According to Ponemon Institute research, 73% of small and mid-sized organizations state that insufficient personnel keep IT security from being fully productive. A lack of cybersecurity staff and skills can lead to creative approaches to maintain protection and compliance. Many organizations will tap a trusted managed security services provider (MSSP) to complement their existing staff and capabilities.

You Can’t Manage What You Can’t See:

Over 40% of organizations consider getting full visibility to all assets and vulnerabilities to be a top challenge, according to a threat monitoring report. Comprehensive infrastructure and log monitoring provide real-time insights that can identify suspicious behavior, flag further action, and help prioritize where to focus limited resources. A Security Information and Event Management (SIEM) service such as EventTracker SIEMphonic provides the visibility and actionable intelligence you need for sustained protection.

New Privacy and Data Breach Regulations Gain Traction:

Following the strict privacy and breach notification guidelines in EU GDPR (General Data Protection Regulation), many anticipate that US lawmakers will consider enacting similar regulations. The California Consumer Privacy Act signed into law in 2018 is a harbinger of such legislation. The Forbes Technology Council weighs in on data privacy impacts for organizations of all sizes.

Effective Security Starts at the Top:

You and your executives set the tone on security that successfully balances organizational growth with risk mitigation. Over 62% of small and medium-sized firms have experienced a data breach, so it’s important to be proactive and invest accordingly. Year-end is the ideal time to evaluate your current security posture and ensure that you are evolving and investing in security as your adversaries step up their game. If you don’t have the right skills or staff, engage a trusted advisor like a managed security services provider (MSSP) to assess any security gaps.

The cost of cybersecurity threats includes reduced productivity, lost online revenue, compliance gaps, and even fines. Small and mid-sized organizations should approach 2019 with both strategic and tactical security measures that involve people, processes, and technology. Detecting a data breach takes 107 days on average, so augment your expertise in security and compliance to maintain uptime and growth.

For more real-time information on cyberthreats, view our Catch of the Day resources, which outline actual cybersecurity war stories.

Why a Co-Managed SIEM?

In simpler times, security technology approaches were clearly defined and primarily based on prevention with things like firewalls, anti-virus, web, and email gateways. There were relatively few available technology segments and a relatively clear distinction between security technology purchases and outsourcing engagements.

Organizations invested in the few well-known, broadly used security technologies themselves, and if outsourcing the management of these technologies was needed, they could be reasonably confident that all major security outsourcing providers would be able to support their choice of technology.

Gartner declared this was a market truth for both on-premises management of security technologies and remote monitoring/management of the network security perimeter (managed security services).

Gartner Magic Quadrant

So, what has changed? A recent survey of over 300 IT professionals by SC Magazine indicates two main factors at play (get the full report here). The increasing complexity of the threat landscape has spawned more complex and expensive security technologies to combat those threats. This escalation in cost and complexity is then exacerbated by budget constraints and an ultra-tight cybersecurity labor market.

Net result? The “human element” is back at the forefront of security management discussions. The skilled security analyst and the subject matter expert for the technology in use have become exponentially more difficult to recruit, hire, and retain. The market agrees: the security gear is only as good as the people you are able to get to manage it.

With the threat landscape of today, the focus is squarely on detection, response, prediction, continuous monitoring and analytics. This means a successful outcome is critically dependent on the “human element.” The choices are to procure security technology and:

  • Deploy adequate internal resources to use them effectively, or
  • Co-source staff who already have experience with the selected technology (for instance, using our Co-managed SIEM)

If co-sourcing is under consideration, then the selection criteria must include the provider’s expertise with the selected security technology. Our Co-managed SIEM offering bundles comprehensive technology with expertise in its use.

Technology represents 20% or less of the overall challenge of achieving better security outcomes. The “human element”, coupled with mature processes, is the rest of the iceberg, hiding beneath the waterline.

Accelerate Your Time-to-Value with Security Monitoring

A hot trend is emerging in the Managed Service Provider (MSP) space: transforming from an MSP into a Managed Security Service Provider (MSSP). Typically, MSPs act as an IT administrator; however, the rapid rise of cloud-based Software-as-a-Service (SaaS) is reducing margins for MSPs. This change is forcing MSPs to compete on price, causing buyers to become less loyal. Many MSPs are looking to add cybersecurity and IT compliance practices to their offerings for customers that are aware of the implications of a breach.

The statistics are remarkable. Gartner Inc. predicts worldwide information security spending will climb to $93 billion in 2018. In addition, Cybersecurity Ventures predicts that by 2021, global cybersecurity spending will exceed $1 trillion.

Customers recognize the necessity for better cybersecurity, which increases demand for your solutions, and are willing to pay for it, which increases your margin. Once you get to know their network and compliance requirements, customers are much more apt to stay put and not shop around on price alone. It’s no surprise that MSPs are actively seeking ways to get in on the ground floor in cybersecurity.
 
So how would you go about this? The classic approach is to frame the problem as a technical one. After all, most MSPs are, at heart, technical people. All too often, MSPs seeking to become MSSPs approach the problem by reviewing available technologies and seeking the best fit from a features viewpoint. And that is where they go wrong.
 
It’s About People, Platform, and Process
 
74% of organizations review logs only weekly. The simple reason is that while you can buy security tools, you simply cannot buy security monitoring capability. The "big hero" approach is neither scalable nor effective. To successfully implement a 24/7 security monitoring service, aside from acquiring tools, an MSP would need to:
a) Hire and train a team of at least 6 staff members
b) Create and refine the security operations processes
c) Provide both lateral and top-down support
 
From our own experience, given full commitment plus the necessary budget and tools, this is a year-long process. Expect to be in the red during this year, with costs far outstripping revenue. Tool vendors leave these "problems" for you to solve, which makes for a high time-to-value (TTV) and a lower probability of success.

Don't let your (technical) heart overrule your (business) head. It may sound exciting to get low-cost tools, maybe even open-source ones that let you roll up your sleeves as a Linux guru, but that approach will put you in a world of hurt.
 
Why Drive Your Cybersecurity When You Can Uber?

The good news is that it’s the age of Uber. Compare a Hertz rental car, the equivalent of buying software, with an Uber ride share, the equivalent of a co-managed security and compliance service. There are numerous advantages to adopting a co-managed approach. These include proven technology backed by a robust team of experts. Most importantly: a low TTV, a minimal upfront investment, and a high probability of success.

When seeking a partner as an MSP or MSSP, keep these evaluation criteria in mind:
 
  • Do they offer top of the line, industrial strength technology including multi-tenancy, broad features, and support for popular log sources and compliance standards?
  • Is the software backed by a certified 24/7 Security Operations Center (SOC)?
  • Is the SOC ISO 27001 certified or a PCI DSS certified service provider?
  • Does your potential new partner require an upfront investment in hardware or software licenses?
  • Does the service provider have established processes and incident response procedures?
  • Will the SOC escalate incidents with detailed context and remediation recommendations so you can act?
  • Does the business model support monthly payments?
  • Will the service provider grow with you?
  • What is the TTV?
 
MSPs can and should consider adding a security and compliance practice. Your customers are asking for it and your stockholders will thank you for it. Accelerate your TTV by partnering with a service provider rather than buying more tools, allowing you to focus on your core competency.
 

Big Data or Smart Questions for Effective Threat Hunting

Advances in data analytics and increased connectivity have merged to create a powerful platform for change. Today, people, objects, and connections are producing data at unprecedented rates. According to DOMO, 90% of all data today was created in the last two years, with a whopping 2.5 quintillion bytes of data being produced per day. With more Internet of Things (IoT) devices being produced, new social media outlets being created, and more people turning to search engines for information, these numbers will continue to grow.

So, what do we do with this overwhelming amount of data? Big data may be analyzed to reveal patterns, associations, and trends. Big data is the engine of data analytics growth and in most big data circles is defined by the Four Vs below.
 
  1. Volume: massive and passively generated
  2. Variety: originating from both individuals and machines at multiple points in the data value chain
  3. Velocity: generally operating in real time
  4. Veracity: referring to the uncertainty due to bias, noise or abnormality in data

Smart Questions

In a reasonably sized network, log data can be big data, but how do you extract value or intelligence from it? That has more to do with analytic capability and the ability to ask smart questions. Known Data, Known Question, the lower left quadrant, is for optimizing standardized processes and procedures. The data sources are known, leaving only questions of timeliness and data quality.

The Known Data, Unknown Question, the lower right quadrant, is best suited for domain experts such as our SIEMphonic SOC team to discover questions they didn’t know to ask. It’s part of the “threat hunting” model: you go into a known jungle but cannot say what you will find. Once you stumble upon an anomaly, you move up, down, and sideways to outline its contours and study the adjacent data until the entire kill chain is revealed.

The Known Question, Unknown Data, the upper left quadrant, is about pre-defined queries and reports that have been learned from past experience or at other installations. They are questions worth asking, in search of data to be asked against. A value-add of a co-managed SIEM is community intelligence: once a certain pattern of attack is uncovered at one installation, the lessons are rapidly applied to other installations to determine whether similar attacks have occurred or are occurring there.

The Unknown Data, Unknown Question, the upper right quadrant, is the domain of machine learning and exploratory or predictive computing. EventTracker uses the Elasticsearch engine as its data store. Work is underway to leverage this investment to automatically model the behavior of your data in real time to identify issues faster, streamline root cause analysis, and reduce false positives.
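To make the “known question” idea concrete, here is a minimal sketch of a pre-defined hunting query run against an Elasticsearch back end: it counts failed Windows logons (event ID 4625) per host over the last day. The index pattern and field names (logs-*, event_id, host.keyword) are illustrative assumptions, not EventTracker’s actual schema.

    import requests

    # Minimal sketch: rank hosts by failed Windows logons (event ID 4625)
    # over the last 24 hours. Index and field names are hypothetical;
    # adjust them to match your own log schema.
    ES_URL = "http://localhost:9200/logs-*/_search"

    query = {
        "size": 0,
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event_id": 4625}},                  # failed logon
                    {"range": {"@timestamp": {"gte": "now-24h"}}},
                ]
            }
        },
        "aggs": {"by_host": {"terms": {"field": "host.keyword", "size": 10}}},
    }

    response = requests.post(ES_URL, json=query, timeout=30)
    response.raise_for_status()
    for bucket in response.json()["aggregations"]["by_host"]["buckets"]:
        print(f'{bucket["key"]}: {bucket["doc_count"]} failed logons')

The same pattern, saved and shared, is how one installation’s lessons become a community-wide query.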

As the saying goes, it’s not what you have but what you do with it that counts. Our SIEMphonic Co-managed security service extracts actionable intelligence from big data for more effective security monitoring, threat detection, and incident response. Unlike other solutions, you don’t just get technology; you get outcomes.

Master the Art of Selling Managed Security Services as an MSP

Contributed by: Lily Teplow, Content Marketing Manager at Continuum
 
When it comes to selling security, one of the major challenges faced by managed services providers (MSPs) is changing the mindset of small- and medium-sized business (SMB) owners. With massive breaches hogging news headlines today, security is hard to ignore—yet many SMBs choose to do so because they don’t realize how “at risk” they may be.
 
Oftentimes, MSPs can’t progress in their sales conversations because of this mindset. But as you look to break further into the security space and offer clients a reliable solution, your journey will start with how you position yourself. In this post, we’ll share important tricks of the trade to help you master the art of selling managed security, starting with these tips.

Redefine Cybersecurity and Risk

Generally, small businesses assume they’re already protected from cyber attacks. With basic protections like anti-virus and firewall, they should be completely covered, right? Wrong.
 
Cybercriminals and their attacks have grown more sophisticated in recent years, innovating their attempts to evade basic protections and legacy solutions that most SMBs rely on. What’s more, cybercriminals recognize that this is a vulnerability and continuously look to exploit it. It’s exactly why 61 percent of SMBs were the target of a cyber attack last year.
 
When first approaching sales conversations with SMB clients or prospects, it’s best to reset the standard of how they might perceive cybersecurity and its associated risks. This doesn’t mean hitting them over the head with scaremongering statistics they’ve probably seen before. It means putting into perspective the threat landscape and the level of risk they’re willing to accept.
 
Ask them: “What security threats are you most concerned about?” Simply posing this question will get them thinking about what they’re up against and what they need protection from. And their answer may be that they’ve struggled with ransomware or that their employees need better security training—giving you even better ammunition when proposing your solution to address these specific needs.
 
Then you can ask them, “Are you equipped to handle these threats on your own?” If the answer is “no”—which it likely will be—it means that their level of risk is higher than they might’ve thought. However, by partnering with the right managed security services provider, they’ll have access to a more advanced security solution to stay protected against these threats and substantially lower their risk level.

Build Trust

An SMB won’t put their business in the hands of someone they do not trust. Therefore, it’s important to present your services—and your relationship—in a way that establishes and builds trust.
 
This all starts with transparency. Provide peace of mind by keeping clients updated on major vulnerabilities and help them deploy an effective and secure plan of action. Also, discuss how you’re committed to keeping lines of communication open with your clients and meeting with them on a regular basis. You can even give examples as to how you’ve helped mitigate active threats for clients that are similar to them.
 
The next step in building trust is accuracy. A trusted MSP will be able to confirm the accuracy of threats and have the tools necessary to remain protected. Conducting routine network assessments, for example, will reassure your clients that the solution you’re providing is working and that they can rely on your partnership to keep them secure.
 
Lastly, showcase how you’ll be part of their team. Position yourself as a true security advisor, providing both the technical support and the security expertise they need to maintain their ideal level of protection. For many, knowing that they have a team of security experts watching out for them 24/7/365 is enough to get them to listen and seriously consider investing in your services.

Focus on the Business Benefits, Not Tech Specs

In any sales conversation or proposal, you want to steer away from concentrating on the technical features of your solution. This may be difficult for many MSPs because these features are what make the solution work, but they don’t necessarily resonate with the person or prospect sitting in front of you.
 
Instead, highlight the business benefits. How does your solution solve some of the pain points they’re experiencing? How does it align with their key business initiatives? Essentially, what’s the benefit of them doing business with you?
 
Let’s look at one example, with the business benefit being a more comprehensive security strategy. You could say something along the lines of:

“How do you fight an infection you may not even know you have? Your business needs to be able to address infections that aren't as blatant as ransomware—ones that are instead getting increasingly stealthy and evasive. Your security strategy needs to adapt, and the best answer is to partner with us.
 
Our cybersecurity solution can provide you with both the foundational and highly advanced protections you need. Together, we’ll be able to establish a unique protection plan for your specific environment—protecting you from the cyber threats that you’re most concerned about. Additionally, our services are backed by our team of highly-skilled security experts who take care of the analysis, monitoring, and threat intervention needed to stop attacks in their tracks and keep your business safe.”

When selling security services, keep in mind that it’s no longer a question of if businesses need security; it’s a question of what level of security they need. With these selling tips, you’ll be better equipped in your sales conversations to convince prospects and clients that you can provide the level of protection they seek.

Three Causes of Incident Response Failure

Breaches continue to be reported at a dizzying pace. In 2018 alone, a diverse range of companies — including Best Buy, Delta, Orbitz, Panera, Saks Fifth Avenue, and Sears — have been victimized. These are not small companies, nor did they have small IT budgets. So, what’s the problem?
  
Threats are escalating in scope and sophistication. Oftentimes, new technologies are added to the enterprise network without being fully tested for security flaws. This creates issues for security teams, making it difficult to defend gaps and protect against persistent threats. Another issue facing security teams is that an overemphasis on prevention has caused underinvestment in security monitoring and incident response.
  
Is your team faced with any of these three issues that can lead to failure to respond to incidents, malware, and threats properly?
 
1: Alert fatigue. Multiplying security solutions to tackle the threat avalanche causes a large alert volume.
Even when alerts are centrally managed and correlated with a Security Information and Event Management (SIEM) solution, the workload of verifying and triaging them often overwhelms an in-house security team. The harder parts, research and enrichment, come after the alert is verified: defining the who, what, where, when, and what to do about it. In the meantime, more alerts continue to pile up, making it difficult for an in-house security team to keep up with the ever-changing threat landscape.
  
2: Skill shortage. Everyone has a limited security budget.
Even if budget were a non-issue, the skill shortage remains acute globally. Where can you find enough capable people? And how do you train and keep them? By the way, have you noticed that management often seems more amenable to buying yet another tool than to adding headcount? Artificial Intelligence (AI) continues to be a mirage (self-driving cars, anyone?).
  
3: Tribal knowledge. Security processes require a transfer of knowledge from senior to new or junior resources.
Incident response requires a deep knowledge of existing systems and reasons why things are set up the way they are. Even when highly documented policies and procedures are in place, companies often rely heavily on their most senior analysts to make decisions based on their experience and knowledge of the organization. 
  
Throwing money at this problem is not the answer; working smarter is. If you have problems with alert fatigue, skill shortage, or tribal knowledge, Co-Managed SIEM can help you. According to Gartner’s How and When to Use Co-Managed Security Information and Event Management report, “Co-managed SIEM services enable security and risk management leaders to maximize value from SIEM and enhance security monitoring capabilities, while retaining control and flexibility.”
 
Download the full report to gain insights including how to identify current gaps, project goals and use cases, as well as guidance to help you evaluate and select the right provider.

VENOM Vulnerability Exposes Most Data Centers to Cyber Attacks

Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with Heartbleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.

But don’t panic. Though the recent vulnerability has a more terrifying name than Heartbleed, it is not going to cause as much danger as Heartbleed did.

Dubbed VENOM, short for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could, in theory, expose most data centers to malware attacks.

Yes, the risk of the VENOM vulnerability is theoretical, as no exploitation in the wild has been seen yet; last year’s Heartbleed bug, on the other hand, was actually exploited by hackers an unknown number of times, leading to the theft of critical personal information.

Now, let’s learn more about VENOM:

VENOM (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms and, if exploited…

…could allow an attacker to escape from a guest ‘virtual machine’ (VM) and gain full control of the operating system hosting them, as well as any other guest VMs running on the same host machine.

According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.

Jason Geffner, a senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.

However, Geffner also added that, so far, there is no known exploit that could successfully take advantage of the vulnerability. Still, VENOM is critical and disturbing enough to be considered a high-priority bug.

Requirements for successful exploitation of VENOM:
For successful exploitation, an attacker sitting on the guest virtual machine would need sufficient permissions to get access to the floppy disk controller I/O ports.

On a Linux guest machine, an attacker would need either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.

That said, VENOM and Heartbleed hardly compare. Where Heartbleed allowed hackers to probe millions of systems, the VENOM bug simply would not be exploitable at the same scale.

Flaws like VENOM are typically used in highly targeted attacks such as corporate espionage and cyber warfare.

Did VENOM Poison Cloud Services?

Potentially more concerning, many of the large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and are vulnerable to VENOM.

However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”

Microsoft’s Azure cloud service, on the other hand, uses its own homegrown virtualization hypervisor technology, and therefore its customers are not affected by the VENOM bug.

Meanwhile, Google also assured that its Cloud Platform does not use the vulnerable software, and thus was never vulnerable to VENOM.

Patch Now! Protect Yourself

Both Xen and QEMU have rolled out patches for VENOM. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.

Note: All versions of Red Hat Enterprise Linux, which includes QEMU, are vulnerable to VENOM. Red Hat recommends that its users update their systems using the commands “yum update” or “yum update qemu-kvm.”

Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them to be on the safe side. Remember, merely restarting a guest operating system without powering it off is not enough, because the virtual machine would still use the old QEMU binary.
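If you want to double-check for stragglers after patching, one approach on a Linux host is to look for QEMU processes whose executable was replaced on disk; /proc reports these with a “(deleted)” suffix. The following is a hedged sketch of that idea (our illustration, not official Red Hat guidance), and it typically needs root privileges to inspect other users’ processes.

    import os

    # Sketch: flag QEMU processes still running a binary that was replaced
    # on disk by an update. On Linux, /proc/PID/exe points to
    # "<path> (deleted)" when the on-disk file has been replaced.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            if "qemu" not in name.lower():
                continue
            exe = os.readlink(f"/proc/{pid}/exe")
            if exe.endswith("(deleted)"):
                print(f"PID {pid} ({name}) still runs the old binary: {exe}")
        except OSError:
            continue  # process exited, or we lack privileges to inspect it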

See more at Hacker News.

Tracking removable storage with the Windows Security Log

With data breaches and Snowden-like information grabs, I’m getting increased requests for how to track data moving to and from removable storage, such as flash drives.  The good news is that the Windows Security Log does offer a way to audit removable storage access.  I’ll show you how it works, and since EventTracker has some enhanced capabilities in this area, I’ll briefly compare native auditing to EventTracker.

Removable storage auditing in Windows works similarly to File System auditing and logs the exact same events.  The difference is in controlling what activity is audited.

To review, with File System auditing, there are 2 levels of audit policy.  First you enable the Audit File System audit subcategory at the computer level.  Then you choose which folders you wish to audit and enable object level auditing on those folders for the users/groups, permissions and success/failure results that need to be monitored.   For instance, you can audit Read access on C:\documents for the SalesReps group.

However, Removable Storage auditing is much simpler to enable and far less flexible.  After enabling the Removable Storage audit subcategory (see below), Windows begins auditing all access requests for all removable storage.  It’s equivalent to auditing Full Control for Everyone.

Local Security Policy

As you can see, auditing removable storage is an all or nothing proposition.  Once enabled, Windows logs the same event ID 4663 as for File System auditing.  For example, the event below shows that user rsmith wrote a file called checkoutrece.pdf to a removable storage device Windows arbitrarily named \Device\HarddiskVolume4 with the program named Explorer (the Windows desktop).

Microsoft Windows Security Auditing

How do we know this is a removable storage event and not just normal File System auditing?  After all, it’s the same event ID as used for normal file system auditing.  Notice the Task Category above, which says Removable Storage.  The information under Subject tells you who performed the action.  Object Name gives you the name of the file, the relative path on the removable storage device, and the arbitrary name Windows assigned the device the first time it was connected to this system.  Process information indicates the program used to perform the access.  To understand what type of access (e.g. Delete, Write, Read) was performed, look at the Accesses field, which lists the permissions actually used.

If you wish to track information being copied from your network to removable storage devices, you should enable Audit Removable Storage via group policy on all your endpoints.  Then monitor for Event ID 4663 where Task Category is Removable Storage and Accesses is either WriteData or AppendData.
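As a quick spot check, the sketch below shells out to the built-in wevtutil tool to pull recent 4663 events and prints those whose rendered text mentions removable storage and a write-type access. It must be run elevated, and matching on the rendered text is a simplification for illustration; a SIEM agent would parse the event fields properly.

    import subprocess

    # Minimal sketch: fetch the 50 newest 4663 (object access) events from
    # the Security log and keep those that look like removable-storage
    # writes. Requires administrative rights on Windows.
    result = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4663)]]",   # XPath filter on the event ID
         "/c:50", "/rd:true", "/f:text"],  # newest 50, rendered as text
        capture_output=True, text=True, check=True,
    )

    for event in result.stdout.split("Event[")[1:]:
        if "Removable Storage" in event and (
                "WriteData" in event or "AppendData" in event):
            print("Event[" + event.strip()[:400])
            print("---")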

As you can see, Microsoft took the most expedient route possible to providing an audit trail of removable storage access.  There are no events for tracking the connection of devices – only the file level access events for the files on the device.  These events also do not provide the ability to see the device model, manufacturer, or serial number.  That device information is known to Windows – it just isn’t logged by these events, since they are captured at the same point in the operating system where other file access events are logged.  On the other hand, EventTracker’s agent logs both connection events and information about each device.  In fact, EventTracker even allows you to selectively block or allow access to specific devices based on a policy you specify.  I encourage you to check out EventTracker’s enhanced abilities.

Pay Attention to System Security Access Events

There are five different ways you can log on in Windows called “logon types.” The Windows Security Log lists the logon type in event ID 4624 whenever you log on. Logon type allows you to determine if the user logged on at the actual console, via remote desktop, via a network share or if the logon is connected to a service or scheduled task starting up. The logon types are:

System Security

There are a few other logon types recorded by event ID 4624 for special cases like unlocking a locked session, but these aren’t real logon session types.
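When post-processing exported 4624 events, it helps to translate the numeric Logon Type field into the session types described above. A minimal sketch using Microsoft’s documented type codes:

    # The five "real" logon session types discussed above, keyed by the
    # Logon Type field of event ID 4624, plus two of the special cases.
    LOGON_TYPES = {
        2: "Interactive (local console)",
        3: "Network (e.g., access to a network share)",
        4: "Batch (scheduled task)",
        5: "Service (service startup)",
        10: "RemoteInteractive (Remote Desktop)",
        # Special cases that are not real logon session types:
        7: "Unlock (unlocking a locked session)",
        11: "CachedInteractive (cached domain credentials)",
    }

    def describe_logon(logon_type: int) -> str:
        """Translate a 4624 Logon Type code into a readable label."""
        return LOGON_TYPES.get(logon_type, f"Other ({logon_type})")

    print(describe_logon(10))  # RemoteInteractive (Remote Desktop)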

In addition to knowing the session type in logon events, you can also control users’ ability to log on in each of these five ways. A user account’s ability to log on is governed by five user rights found in group policy under Computer Configuration/Windows Settings/Security Settings/User Rights Assignment. There is an allow and a deny right for each logon type. In order to log on in a given way you must have the corresponding allow right, but the deny right for that same logon type takes precedence. For instance, in order to log on at the local keyboard and screen of a computer you must have the “Allow logon locally” right. But if the “Deny logon locally” right is also assigned to you or any group you belong to, you won’t be able to log on. The table lists each logon type and its corresponding allow and deny rights.

Logon rights are very powerful. They are your first level of control – determining whether a user can access a given system at all. After logging in, of course, their abilities are limited by object level permissions. Since logon rights are so powerful, it’s important to know if they are suddenly granted or revoked. You can do this with Windows Security Log events 4717 and 4718, which are logged whenever a given right is granted or revoked, respectively. To get these events you need to enable the Audit Authentication Policy Change audit subcategory.

Events 4717 and 4718 identify the logon right involved in the “Access Granted”/“Access Removed” field using a system name for the right, as shown in the corresponding column of the table above. The events also specify the user or group who was granted or revoked from having the right in the “Account Modified” field.

Here’s an example of event ID 4717 where we granted the “Access this computer from the network” to the local Users group.

System security access was granted to an account.
Subject:

Security ID: SYSTEM
Account Name: WIN-R9H529RIO4Y$
Account Domain: WORKGROUP
Logon ID: 0x3e7

Account Modified:

Account Name: BUILTIN\Users

Access Granted:

Access Right: SeNetworkLogonRight

One consideration is that the events do not tell you who (which administrator) granted or revoked the right. The reason is that user rights are controlled via group policy objects. Administrators do not directly assign or revoke user rights on individual systems; even if you modify the Local Security Settings of a computer you are really just editing the local group policy object. When Windows detects a change in group policy it applies the changes to the local configuration and that’s when 4717 and 4718 are logged. At that point the user making the change directly is just the local operating system itself and that’s why you see SYSTEM listed as the Subject in the event above.

So how can you figure out who granted or removed the right? You need to be tracking group policy object changes, a topic I’ll cover in the future.

Implementing a Central Log Collection System

Microsoft has made some considerable changes to event management in Windows Vista. But are these changes enough to help you control your entire infrastructure? This article is the last in a series that looks at Vista event management. 

As you have seen, Microsoft has made considerable changes to the Vista Event Log—changes that move it from a PC-based system to an enterprise level tool. Collecting events from remote systems is something that administrators of Windows systems have wanted to do for many years. Vista finally makes it possible. But, is the Vista event management and collection system enough in and of itself, even with its improvements? Let’s take a look.

Collecting Events with Vista Only

If you decide to run your event management strategy based on Vista’s new features, then you’ll need to configure your environment to meet the following guidelines:

  • All of the machines you will be managing must run Vista because only Vista supports the new features of the Event Log.
  • In addition, your collector system will not be a server because Windows Server 2008—the server operating system that supports the same event collection features as Vista—will not be available to the market until the end of this year. This means you will have to run this central service on a workstation, yet because it is a central service, it should really be located on a server.
  • Event automation is local to each system and must be configured as such. Of course, you could use Microsoft PowerShell to automate the collection and configuration process on each machine, but you’ll have to prepare this script yourself (see Resources).
  • There will be no centralized policy management console because each system sends information on its own to the collector. If you need to make a change to your collection policy for any reason, you will have to make it on each machine individually.
  • By default, updating each endpoint system is a manual task, unless you use the right tools such as Microsoft PowerShell to automate it.
  • It will be difficult to implement standards since each device in the collection is independent.

So, as you can see, you can do it with Vista alone, but it has some limitations.

Requirements for a Central Collection System

If you are interested in centrally collecting events and using them to gain complete control of your distributed environment, then look to these requirements:

  • When managing distributed systems, you must have some form of centralized control and distributed processing. Otherwise, you’ll end up having to interact with each specific endpoint. A good example is software distribution. Few if any organizations today would deploy applications manually on each machine. Instead, every organization automates the installation process and deploys applications through a centralized systems management tool. The same should go for event management.
  • Managing events through a centralized event management system is important. You need a centralized system to update policies on all systems from one single location and automate policy deployment. You do need to collect all critical events centrally, because otherwise you cannot get a global view of your systems.
  • In addition, while Microsoft has gone a long way to document events as much as possible, it is really nice to have access to a Windows event ‘expert’ to guide you towards the most important events to watch for. And, it is convenient to have access to an advanced knowledge base to demystify any Windows event.

These requirements are just a few examples of what you’ll need to have to perform complete event management in your network.

A Professional Event Management Tool

Is Vista enough on its own? Not really. The changes Microsoft has implemented make the Vista Event Log a much more solid and robust event management environment. The fact that all events are stored in XML format, the fact that Windows Remote Management now lets you manage systems through common HTTP ports and the fact that the task scheduler is now linked with event management are excellent examples of how Microsoft can implement and design a standards-based operating system. These changes make it easier for third party software manufacturers to develop and integrate comprehensive management systems to the Vista OS.

Vendors such as Prism Microsystems have been supporting event management for years. That’s partly because, like their customers, they know that event management is the best way to manage change in any Windows network. True event management requires a separate tool, one that is focused on event management and only on event management (see Figure 1). That’s what EventTracker does. It is Windows version agnostic in that it works with any Windows version. It supports the needs of multiple audiences such as auditors, CxOs, system administrators, security officers, and Help Desk engineers. It automatically categorizes events so that you know what you’re looking at. It is linked to one of the largest databases of Windows events in the world so that you always understand what Windows is telling you. It is centrally controlled through a Web-based console, so you can have access to it from any location in your network. And it is policy-driven, letting you design a standard policy which can be applied to any node in the network from one central location. All you need is administrative access to each node.

Figure 1. EventTracker covers the entire gamut of event management needs

There is no doubt that if you want to manage your Windows network, whether it be Vista or not, then you need a proper event management tool—one that will support all of your needs and let you know what is going on in the network at any time. And, if you do the math right, you’ll find out that EventTracker quickly pays for itself. For example, in a network of 50 servers, implementing EventTracker could pay itself back within about four months (see Table 1)—even less if you deploy it in a virtualized operating system instead of on an actual physical server.

Costs

  Software                       $24,000.00
  Deployment Planning             $2,019.23
  Training & Consulting           $3,211.54
  Hardware                       $10,000.00
  Total Costs                    $39,230.77

Savings

  Productivity Savings           $24,062.50
  Availability Savings           $11,682.69
  Improved Support Savings        $3,164.06
  Security Savings               $12,387.92
  Usability Savings              $65,625.00
  Total Savings                 $116,922.18

  Return on Investment              198.04%
  Savings per month               $9,743.51
  Payback in months                    4.03

Table 1. Sample EventTracker Return on Investment Calculation

If you’re interested in making sure you know what is going on in your network, then look to tools such as EventTracker. If you’re moving to Vista, then do it right. Introduce complete network management and move to a managed network model. You won’t regret it. Not only will you have information at your fingertips once and for all, but you’ll also be able to take full advantage of all that Vista offers.

About the Authors

Danielle Ruest and Nelson Ruest, MCSE+Security, MCT, Microsoft MVP, are IT professionals specializing in systems administration, migration planning, software management and architecture design. They are authors of multiple books, and are currently working on the Definitive Guide to Vista Migration for Realtime Publishers as well as the Complete Reference to Windows Server 2008 for McGraw-Hill Osborne. They have extensive experience in systems management and operating system migration projects.

Industry News

HIPAA Audit: 42 questions that the US Department of Health and Human Services (HHS) might ask.

Everything from security to employee status to internet use

Automating the HIPAA compliance process

Like many of the other compliance standards in widespread use today, HIPAA calls for a risk-based assessment by the Covered Entity (CE) to implement safeguards that meet HIPAA compliance. Can HIPAA compliance be achieved without a log management solution? The answer is “perhaps”, but, especially at larger CEs, only at a considerably increased risk of information breach and audit failure. Achieving compliance also becomes an extremely labor-intensive activity.

Data Loss and ID Theft Fears Altering Consumer Purchasing Behavior

With the headlines announcing almost on a weekly basis another data breach at businesses, educational institutions and medical facilities, a recent study shows consumers are modifying their purchasing behavior, including online buying, out of concern for the security of their personal information.

Audit your organization year-round for best results, experts say

Enterprise security managers and others who work with auditors would do well to take a page out of the National Football League’s playbook, a CISO advised attendees at the Burton Group Catalyst Conference.

Detecting Zeus, Logging for incident response, and more

Logging for Incident Response: Part 1 – Preparing the Infrastructure

Of all the uses for log data across the spectrum of security, compliance, and operations, using logs for incident response presents a truly universal scenario – you can be forced to use logs for incident response at any moment, whether you’re prepared or not.  An incident response (IR) situation is one where having as much log data as possible is critical. You might not use it all, and you might have to work hard to find the proverbial needle in the haystack of logs – still, having reliable log data from all systems, affected and unaffected, is indispensable in a hectic post-incident environment.

The security mantra “prevention-detection-response” still defines most of the activities of today’s security professionals. Each of these three components is known to be of crucial importance to the organization’s security posture. However, unlike detection and prevention, response is impossible to avoid. While it is not uncommon for an organization to have weak prevention and nearly non-existent detection capabilities, it will often be forced into response mode by attackers or their evil creations – malware. Even in cases where ignoring the incident might be the chosen option, the organization will implicitly follow a response plan, even if it is as ineffective as doing nothing.

In this paper, we will focus on how to “incident-response-proof” your logging – how to prepare your logging infrastructure for incident response. The previous six articles focused on specific regulatory issues, and it is not surprising that many organizations are doing log management just to satisfy compliance mandates. Still, technology and processes implemented for PCI DSS or other external mandates are incredibly useful for other purposes such as incident response.  On top of this, many of the same regulations prescribe solid incident response practices (for additional discussion see “Incident management in the age of compliance”).

Basics
Even though a majority of incidents are still discovered by third parties (see Verizon Breach Report 2010 and other recent research), it is clear that organizations should still strive to detect incidents in order to limit the damage stemming from extensive, long-term compromises. On the other hand, even for incidents detected by third parties, the burden of investigation – and thus using logs to figure out what happened – falls on the organization itself.

We have therefore identified two focal points for use of logs in incident response:

  • Detecting incidents
  • Investigating incidents

Sometimes the latter use-case is called “forensics” but we will stay away from such definitions since we would rather reserve the term “forensics” for legal processes.

Incident Response Model and Logs
While incidents and incident response will happen whether you want them to or not, a structured incident response process is an effective way to reduce the damage suffered by the organization.  The industry-standard SANS incident response model organizes incident response into six distinct stages (see “Incident Management 101: Preparation & Initial Response (aka Identification)” by Robin Dickerson, posted on January 17, 2005, at http://www.sans.org/rr/whitepapers/incident/):

  • Preparation includes tasks that need to be done before the incident: from assembling the team, training people, and collecting and building tools, to deploying additional monitoring and creating processes and incident procedures
  • Identification starts when the signs of an incident are seen and then confirmed, so that an incident is declared
  • Containment is important for documenting what is going on, quarantining the affected systems, as well as possibly taking systems offline
  • Eradication is preparing to return to normal by evaluating the available backups and preparing for either restoration or rebuilding of the systems
  • Recovery is where everything returns to normal operation
  • Follow-Up includes documenting and discussing lessons learned, and reporting the incident to management

Logs are extremely useful, not just for identification and containment as we mention above, but for all phases of the incident response process.  Specifically, here is how logs are used at each stage of the IR process:

  1. Preparation: logs help us verify controls (for example, review login success and failure histories), collect normal usage data (learn what log messages show up during routine system activity), and establish a baseline (create log-based metrics that describe such normal activity), etc.
  2. Identification: logs containing attack traces, other evidence of a successful attack, or insider abuse are pin-pointed, or alerts might be sent to notify about an emerging incident; also, a quick search and review of logs helps to confirm an incident, etc.
  3. Containment: logs help us scope the damage (for example, firewall logs show which other machines display the same scanning behavior in case of a worm or spyware infestation), and learn what else is lost by looking at logs from other systems that might contain traces similar to the one that is known to be compromised, etc.
  4. Eradication: while restoring from backups, we need to also make a backup of logs and other evidence:  preserving logs for the future is required, especially if there is risk of a lawsuit (even if you don’t plan to sue, the other side might)
  5. Recovery: logs are used for confirming the restoration and then measures are put in place to increase logging so that we have more data in case it happens again; incident response will be much easier next time
  6. Follow-Up: apart from summarizing logs for a final report, we might use the incident logs for peaceful purposes: training the new team members, etc.

As a result, the IT infrastructure has to be prepared for incident response logging way before the first signs of an incident are spotted.

Preparing the Infrastructure
In light of predominantly 3rd party incident discovery, the incident response process might need to be activated at any moment when notification of a possible incident arrives.  From this point onward, the security team will try to contain the damage and investigate the reason for the attack or abuse based on initial clues. Having logs will allow an organization to respond better and faster!

What logs need to be collected for effective IR? This is very simple: any and all logs from networks, hosts, applications, and other information systems can be useful during response to an incident. The same applies to context data – information about users, assets, and vulnerabilities will come in handy during the panic of incident response. As we said above, having as much log data as possible will allow your organization to effectively investigate what happened, and have a chance of preventing its recurrence in the future.

Specifically, make sure that the following log sources have logs enabled and centrally collected:

  • Network Devices – routers and switches
  • Firewalls – including firewall modules in other Network Devices
  • IDS, IPS and UTM devices – while firewalls are ubiquitous and can create useful logs, IDS/IPS alerts add a useful dimension to the IR process
  • Servers running Windows, Unix and other common operating systems; logging should include all server components such as web servers, email servers, and DNS servers
  • VPN logs are often key to IR since they can reveal who was accessing your systems from remote locations
  • Web proxies – these logs are extremely useful for tracking “drive-by downloads” and other web malware attacks
  • Database – logs from RDBMS systems contain records indicating access to data as well as changes to database systems
  • Applications ranging from large enterprise applications such as SAP to custom and vertical applications specific to a company

Detailed discussion of logging settings on all those systems goes beyond the scope of this paper and might justify not just reading a document, but engaging specialty consultants focused on logging and log management.
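To make “centrally collected” concrete, here is a deliberately minimal sketch of a UDP syslog receiver that appends every incoming record to a single file. It illustrates the mechanics only; a production deployment would use a proper log management platform with reliable transport, parsing, and retention.

    import socketserver

    LOG_FILE = "central-syslog.log"  # single repository for incoming records

    class SyslogHandler(socketserver.BaseRequestHandler):
        """Append each incoming UDP syslog datagram to the central file."""
        def handle(self):
            data = self.request[0].decode("utf-8", errors="replace").strip()
            with open(LOG_FILE, "a") as f:
                f.write(f"{self.client_address[0]}\t{data}\n")

    if __name__ == "__main__":
        # Binding the standard syslog port 514 normally requires root.
        with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
            server.serve_forever()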

Tuning Log Settings for Incident Response
What logs should be enabled on the systems covered above? While “log everything” makes for a good slogan, it also makes log analysis a nightmare by mixing together more relevant log messages with debugging logs which are used much less often, if at all. Still, many logging defaults should be changed as described below.

A typical Unix (Solaris, AIX, etc.) or Linux system will log the following into syslog: various system status and error messages, local and remote login/logout, some program failures, and system start/stop/restart messages. What will not be found are logs tracking access to files, running processes, and configuration changes. For example, to log file access on Linux, one needs to use the kernel audit facility, not simply default syslog.

Similarly, on Windows systems the Event Log will contain a plethora of system status and error messages, login/logout records, account changes, as well as system and component failures.  To have more useful data for incident response, one needs to modify the audit policy to start logging access to files and other objects.

Most web servers (such as Apache and Microsoft IIS) will record access to web resources located on the server, as well as access errors. Unlike the OS platforms, there is not a pressing need for more logging, but one can modify the httpd.conf configuration file to add logging of additional details, such as the referrer and browser type.
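Once those details are being logged (Apache’s “combined” log format adds the referrer and user agent), pulling them out during an investigation is straightforward. A small sketch, using a made-up log line for illustration:

    import re

    # Sketch: parse Apache "combined" format lines, which append the
    # referrer and user-agent fields to the common log format.
    COMBINED = re.compile(
        r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
        r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    line = ('203.0.113.9 - - [12/May/2015:10:15:32 +0000] '
            '"GET /dl/a.exe HTTP/1.1" 200 51200 '
            '"http://example.com/landing" "Mozilla/5.0"')

    m = COMBINED.match(line)
    if m:
        print(m.group("host"), m.group("request"),
              m.group("referrer"), m.group("agent"))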

Databases such as Oracle and MS SQL Server log painfully little by default, even though the situation is improving in recent database versions such as Oracle 11g. With older databases, you have to assume there will be no database logs if you have not enabled them during the incident preparation stage. A typical database will log only major errors, restarts, and some administrator access, but will not log access or changes to data or database structures.

Firewalls typically log denied or blocked connections, but not allowed connections, by default; as our case study showed, logs of allowed connections are among the most indispensable for incident response. Follow the directions for your firewall to enable such logging.

VPN servers will log connections, user login/logout, and errors; default logging will generally be sufficient.  Making sure that successful logins – not just failures – are logged is one of the important preparation tasks for VPN concentrators.

Network IDS and IPS will usually log their alerts, various failures, user access to the sensor itself; the only additional type of “logging” is recording full packet payload.

Implementing Log Management
Log management tools that can collect massive volumes of diverse log data without issues are hugely valuable for incident response.  Having a single repository for all activity records, audit logs, alerts, and other log types allows incident responders to quickly assess what was going on during an incident, and what led to a compromise or insider abuse.

After logging is enabled and configured for additional details and additional logged events, the logs have to be collected and managed to be useful for incident response.  Even if a periodic log review process is not occurring, the logs have to be available for investigations.  Following the maturity curve (see http://chuvakin.blogspot.com/2010/02/logging-log-management-and-log-review.html), even simply having logs is a huge step forward for many organizations.

When organizations start collecting and retaining logs, the question of retention policy comes to the forefront.  Some regulations give specific answers: PCI DSS, for example, mandates storing logs for one year.  However, determining proper log storage for incident response can be more difficult. One year might still be a good rule of thumb for many organizations, since investigating incidents more than one year after they happened is likely to be relatively uncommon, but it is certainly possible – so longer retention periods, such as three years, may be useful.

In the next paper, we will address how to start reviewing logs for discovering incidents, and also how to review logs during incident response. At this point, we have made a huge step forward by making sure that logs will be around when we really need them!

Conclusions
Even though compliance might compel organizations to enable logging, deploy log management, and even start reviewing logs, incident response scenarios allow the value of logs to truly manifest itself.

However, in order to use logs for incident response, the IT environment has to be prepared – follow the guidance and tips from this paper in order to “IR-proof” your logging infrastructure. A useful resource to jumpstart your incident response log review is “Critical Log Review Checklist for Security Incidents”, which can be obtained at http://chuvakin.blogspot.com/2010/03/simple-log-review-checklist-released.html in various formats.

About Author

Dr. Anton Chuvakin (http://www.chuvakin.org) is a recognized security expert in the field of log management and PCI DSS compliance. He is the author of the books “Security Warrior” and “PCI Compliance” and a contributor to “Know Your Enemy II” and the “Information Security Management Handbook”; he is now working on a book about computer logs. Anton has published dozens of papers on log management, correlation, data analysis, PCI DSS, and security management (see the list at www.info-secure.org). His blog http://www.securitywarrior.org is one of the most popular in the industry.

In addition, Anton teaches classes (including his own SANS class on log management) and presents at many security conferences across the world; he recently addressed audiences in the United States, the UK, Singapore, Spain, Russia, and other countries. He works on emerging security standards and serves on the advisory boards of several security start-ups.

Currently, Anton is building his security consulting practice www.securitywarriorconsulting.com, focusing on logging and PCI DSS compliance for security vendors and Fortune 500 organizations.  Dr. Anton Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously, Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the world about the importance of logging for security, compliance and operations. Before LogLogic, Anton was employed by a security vendor in a strategic product management role. Anton earned his Ph.D. degree from Stony Brook University.

 Previously on EventSource: Logging for FISMA Part 1 and Part 2

5 types of DNS attacks and how to detect them

The Domain Name System, or DNS, is used in computer networks to translate domain names to the IP addresses that computers use to communicate with each other. DNS exists in almost every computer network; it communicates with external networks and is extremely difficult to lock down since it was designed to be an open protocol. An adversary may find that DNS is an attractive mechanism for performing malicious activities like network reconnaissance, malware downloads, communication with command and control servers, or data transfers out of a network. Consequently, it is critical that DNS traffic be monitored for threat protection.

Attack 1: Malware installation. This may be done by hijacking DNS queries and responding with malicious IP addresses. The goal of malware installation can also be achieved by directing requests to phishing domains.

Indicators of compromise: Forward DNS lookups of typosquatted or lookalike domain names (gooqle.com, for example); modifications to the hosts file; DNS cache poisoning.
Attack 2: Credential theft. An adversary may create a malicious domain name that resembles a legitimate domain name and use it in phishing campaigns to steal credentials.

Indicators of compromise: Forward DNS lookups of typosquatted or lookalike domain names (gooqle.com, for example); modifications to the hosts file; DNS cache poisoning.
Attack 3: Command & Control communication. As part of lateral movement after an initial compromise, DNS communication is abused to talk to a C2 server. This typically involves making periodic DNS queries from a computer in the target network for a domain controlled by the adversary. The responses contain encoded messages that may be used to perform unauthorized actions in the target network.

Indicators of compromise: DNS beaconing queries to anomalous domain, low time-to-live, orphan DNS requests.
Attack 4: Network footprinting. Adversaries use DNS queries to build a map of the network. Attackers live off the terrain, so developing a map is important to them.

Indicators of compromise: Large numbers of PTR queries; SOA and AXFR queries; forward DNS lookups for non-existent subdomains in the root domain.
Attack 5: Data theft. Abuse of DNS to transfer data; this may be performed by tunneling other protocols like FTP or SSH through DNS queries and responses. Attackers make multiple DNS queries from a compromised computer to a domain owned by the adversary. DNS tunneling can also be used for executing commands and transferring malware into the target network.

Indicators of compromise: Large number of subdomain lookups or large lookup size; long subdomains; uncommon query types (such as TXT records).
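To make the tunneling indicators above concrete, here is a minimal detection sketch in Python that scores query names on subdomain length and character entropy; the thresholds are illustrative and would need tuning against your own traffic:

# Minimal sketch: flag query names whose subdomain portion is very long
# or long and random-looking (high entropy). Thresholds are illustrative.
import math
from collections import Counter

def entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def looks_like_tunneling(qname: str) -> bool:
    labels = qname.rstrip(".").split(".")
    sub = ".".join(labels[:-2])  # everything left of the registered domain
    return len(sub) > 50 or (len(sub) > 20 and entropy(sub) > 3.5)

for q in ["www.example.com",
          "dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRh.badguy.example"]:
    print(q, "->", looks_like_tunneling(q))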
Feeling overwhelmed? There is a ton of detail to absorb and process discipline to put it into practice for 24/7 threat detection and response. Allow us to do the heavy lifting with our co-managed SIEM, SIEMphonic. Whether you use on-premises DNS like Microsoft DNS Server or Infoblox, or cloud services from OpenDNS, we’ve got you covered. Check out our "Catch of the Day" to read true stories from our SOC in which we detected and thwarted cyber-attacks, including DNS-based threats.

The Ultimate Playbook to Become an MSSP

Contributed by: Meaghan Moraes, Content Marketing Manager at Continuum

Now that advanced cybersecurity protections are a must-have in today’s landscape, organizations of all sizes are increasingly seeking out and leaning on a trusted security partner to manage their security services. A recent study released by Forrester revealed that 57 percent of companies are seeking outside help for IT systems monitoring and 45 percent are outsourcing threat detection and intelligence. As a result, managed IT service providers (MSPs) are presented with a major opportunity to step in as that cybersecurity leader through an expanded services portfolio that officially deems them an “MSSP”—a Managed Security Services Provider.
 
As it stands, 42 percent of employees in small- and medium-sized businesses (SMBs) would not know what to do if their business experienced a cyber attack, which stems from the fact that 47 percent do not have employee security awareness and training programs in place. As MSPs integrate security into their services, they will not only significantly decrease the margin of error for their clients’ information security, but also be one step closer to cementing their status as the go-to provider on an ongoing basis.

But that doesn’t happen overnight, and there’s no silver bullet to security. As you start to think about adding layers of security to your offering in an effort to address your clients’ top concerns, your strategy will begin to develop. Here are some helpful steps to devising a solid strategy and then successfully selling what you have to offer as an MSSP.

Devising Your Cybersecurity Strategy
 
With advanced threats like rapidly evolving and hyper-targeted malware and ransomware, basic security tools alone aren’t enough to keep SMB clients secure; additional cybersecurity is needed for more complete and holistic protection.
 
MSPs and SMBs need more advanced and comprehensive security—such as endpoint and network security, security operations center (SOC) services, log management, DNS filtering, and user training—in order to remain one step ahead of threats at all times. A proactive approach to cybersecurity will inform MSPs of exactly how well-protected their clients are from specific risks. Capabilities such as advanced security profiling and risk scoring, employee security training, and incident response planning can help you consistently predict and manage risk.
 
When it comes to immediate and robust detection capabilities, it’s crucial to offer endpoint and network management so you can detect suspicious behaviors on all endpoints and across the network, then immediately roll back and minimize any damage.
 
Lastly, with SOC services, you’ll have the ability to monitor and mitigate threats in real time, and offer remediation services and deep forensics as well.
 
Once you have pinned down which protections will comprise your comprehensive solution, it’s time to package your unique offering with effective messaging.

Selling Your Managed Security Services
 
When prospecting or cross-selling to clients, you can refine your message to speak to the SMB mindset around security. MSPs need to not only evolve their strategies to survive, but get client buy-in on them.
 
When working to achieve buy-in, the best method for engaging clients is to develop a common language. Compare a typical business function your client performs—like marketing, for instance—to security. Just as you work to know your audience, understand where to focus and report on those efforts, the same methodology can be applied to your security service delivery. You need to understand the threat landscape, consistently measure risk, and report on risk levels. Finding that type of common ground will help you clearly illustrate how you’re aiming to deliver your cybersecurity offering.
 
It’s helpful to frame the conversation with clients around risk. You can work with them to define acceptable risk and determine what it will take to get to their desired state. Make sure your client sees your relationship as ongoing. If they’re at an unacceptable risk level, you can assure them that your security services will get them to the acceptable range, and that you will maintain it by consistently identifying, prioritizing, and mitigating gaps in coverage.
 
Taking an approach that not only brings to life what your services will represent, but also justifies additional fees and services will cement you as the MSSP that will undoubtedly keep your clients as protected and profitable as possible.

Top 3 Office 365 Security Concerns and What to do About Them

Office 365 (O365) is immensely popular across all industry verticals in the small and medium enterprise space. It is often the killer app for a business and contains valuable, critical information about the business. Accordingly, O365 defense is a top concern on IT leaders’ minds.

Is O365 defense entirely up to the vendor, Microsoft, leaving the user with no responsibility? Hardly. Microsoft is merely providing the software-as-a-service, hosted on its infrastructure. While Microsoft does have some responsibility for securing the infrastructure and keeping the application up to date, you are the admin and it’s your data; therefore, it is your responsibility to secure your tenant.

While the motivations and capabilities of attackers vary widely, most attacks still follow a common process, a basic pattern, and proceed from one step to the next to achieve the desired outcomes. This step-wise process can be defended against by focusing defense measures on choke points in the chain. Of course, any step can be bypassed through exploit technologies, so the best strategies apply defenses at every step along the chain.

Concern 1: Data Exfiltration

O365 contains many different types of data, including email, documents, instant messaging conversations, and Yammer threads. In fact, even breaching your directory information can be useful to an attacker. Data can be stolen in any number of ways, including through a breach of an account with access to the data, or through system and infrastructure attacks that give attackers local or system admin privileges on computers that store the data outside of Office 365. Why would the bad guys want to do this? For many reasons: theft of intellectual property, blackmail, selling your data on the black market, or using the data to further entrench themselves in your systems.

Prevention: Focus not just on the data, but also on the accounts needed to access the data. Enforce least privilege, establish access control lists, define external sharing policies, and use data classification schemes to identify high-risk data.

Detection: Finding a breach is complicated because it is difficult to distinguish normal usage from abnormal usage patterns, especially since the data will most likely be accessed with an account that has the needed privileges. Out-of-ordinary behavior detectors within SIEM platforms are useful here, particularly when reviewed by experienced eyes that can catch anomalous interactions with data, such as large downloads; attackers often like to 'smash and grab' large amounts of data at a time.
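As an illustration of the "smash and grab" pattern, here is a minimal sketch that flags a user whose daily download volume jumps far above their own trailing average; the data layout and threshold are assumptions for illustration, not an O365 API:

# Minimal sketch: alert when today's download volume dwarfs a user's own
# baseline. Assumes at least two days of history per user; numbers are
# illustrative.
from statistics import mean

history_mb = {"bob": [120, 95, 140, 110], "mallory": [80, 75, 9500]}

for user, days in history_mb.items():
    baseline, today = mean(days[:-1]), days[-1]
    if today > 10 * baseline:
        print(f"ALERT: {user} pulled {today} MB vs ~{baseline:.0f} MB baseline")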

Remediation: This is the hardest attack scenario to fix because the cat is already out of the bag. Two things to focus on:

  • Identify how the exfiltration happened so that you can stop it
  • Have a plan of how to deal with the impacts of losing control of the data

Concern 2: Privilege escalation and lateral movement

The attacker has managed to compromise one or more accounts in your tenancy and is now working towards global administrator privileges.

Prevention: Make your global administrator community small; a minimum of two and a maximum of five for any size of tenant. Require multi-factor authentication (MFA) for global administrators, and regularly review activity of such users.

Detection: The key here is to monitor activity. This type of attack causes anomalous activity that deviates from a well-understood baseline.

Remediation: Enable multi-factor authentication. Examine everything that the attacker has done to your data and what they have done to further entrench themselves in your tenancy. Look for new accounts that have had recent changes (such as promotion to tenant admin), global configuration changes, and every interaction with data from the affected accounts.

Concern 3: Account compromise

An account in your O365 tenant is breached such that it can be used by an attacker to interact with either resources in Office 365 or your on-premises infrastructure. There are a variety of ways this can happen, including spear phishing for credentials with harvesting websites, or spear phishing with malware to install rootkits and keyloggers.

Prevention: Use high-quality authentication mechanisms – passwords and MFA. Watch for multiple failed logon attempts.

Detection: The key to effective account breach detection is understanding what a normal pattern of activity looks like for your users. Several features in the activity data help you find illicit or anomalous activity; for example, the data includes IP addresses (which can be correlated to geographies), date and time, the specific action performed, and the user agent.
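A minimal sketch of the first-seen idea, flagging logins from an IP address a user has never used before; the record layout here is assumed for illustration, not the actual O365 audit schema:

# Minimal sketch: alert on logins from IPs not previously seen per user.
from collections import defaultdict

events = [
    {"user": "alice", "ip": "198.51.100.4", "action": "UserLoggedIn"},
    {"user": "alice", "ip": "198.51.100.4", "action": "UserLoggedIn"},
    {"user": "alice", "ip": "203.0.113.99", "action": "UserLoggedIn"},
]

seen = defaultdict(set)  # user -> set of IPs already observed
for e in events:
    if e["action"] != "UserLoggedIn":
        continue
    if e["ip"] not in seen[e["user"]]:
        if seen[e["user"]]:  # don't alert on the first baseline entry
            print(f"ALERT: {e['user']} logged in from new IP {e['ip']}")
        seen[e["user"]].add(e["ip"])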

Remediation: Enabling multi-factor authentication is a common and powerful remediation to keep the account safe after it has been breached. Monitor the account for a period of time to ensure it hasn’t been re-breached.

While Microsoft has provided guidelines on how a user should secure their O365 tenant, making sure everything is secure and remains secure can become complicated and time-consuming. Looking for the easy button? EventTracker makes securing O365 and your systems easier by providing predefined reports, dashboards, and alerts via the SIEMphonic service. The service is backed by a 24/7 Security Operations Center (SOC) to be ever vigilant.

The Bite Behind the Bark: Enforcement Power of GDPR

There’s an old saying: their bark is worse than their bite. However, this is not the case with the penalties of non-compliance when it comes to the General Data Protection Regulation (GDPR). With the enforcement date of the GDPR having passed on May 25, 2018, any company not in compliance could be in for a very nasty shock. And remember, GDPR is not limited to European Union (EU) businesses. Any entity processing the personal data of EU citizens has to comply, which impacts almost any website today as well. So, what is personal data in the GDPR world? It’s things like tracked IP addresses, geographic data, and basically any information relating to an identified or identifiable person.
 
Ignorance does not equal compliance and GDPR is sure to make its “bite” felt for non-compliance. GDPR even recommends that businesses employ a privacy officer, as there is no more hiding behind a vendor or consultancy. This goes for small- and medium-size businesses (SMBs) as well as large global organizations. The penalties of non-compliance and the new power given to data protection authorities makes enforcement of these regulations the key to ensuring these rules get followed.
 
The bark heard around the world
The scope of GDPR positions the EU as a leader in data protection, so don’t be surprised if other countries follow suit. Under GDPR, should a company of any size fall short of compliance, financial penalties abound…which is the bite that could bring an SMB to its knees.
 
If you process sensitive data on a large scale (like some social media platforms, for example), you might have to appoint a data protection officer. Some large organizations are forming huge cross-functional teams to support GDPR compliance, including leaders from areas like product/services, UX/UI, policy, and legal. Imagine the financial impact on any organization trying to pull resources to dedicate to this one mandate. Any way you slice it, businesses collecting consumer information through online tracking, which is a given nowadays, will need to comply – an impact felt from sea to shining sea.
 
The data breach bite
With no lack of data breaches on the horizon, a big GDPR focus is on security and data breaches. The EU is doing what the U.S. hasn’t been able to do yet – set a universal standard for breach disclosures, which includes:
 
  • Reporting any security incident involving personal data within 72 hours. That’s right – not next month or within the year like some brands have in the past.
  • Coming clean early on. If a data breach has a high risk of adversely affecting individuals’ rights and freedoms, a business is expected to report it without “undue delay”.

Backed by fines that are sure to hurt, GDPR unleashes its fury on sloppy security, which could not only harm reputation, but really hurt the bottom line – or perhaps bottom out an SME altogether. Some factors that play into substantial fines might be:
 
  • How many were impacted, and the extent of the damage inflicted?
  • Was the damage intentional or just negligence?
  • Did the company take steps to stop the damage?
  • What steps have been taken by the organization, either technical or personnel-wise to address the issue?
  • Is this a first-time offense?
  • What is the cooperation level of the offending organization?
  • What was the data that was compromised?
  • Was this self-reported?

If your answers to these questions find that the issue arose from technical problems or a lack of reporting, fines can reach up to 2% of worldwide revenue from the prior year (or €10 million, whichever is higher). However, if the issue is found to be a general lack of compliance with key parts of the GDPR, the fines rise to 4% of prior-year revenue (or €20 million, whichever is higher). For example, a firm with €500 million in prior-year revenue faces exposure of up to €10 million at the 2% tier and €20 million at the 4% tier.

So, what are some of the issues that could lead to the higher fines? Sending personal data to “third countries” or international organizations that don’t provide proper data protection, or not adhering to the principles of processing personal data can lead to these larger fines. As you can imagine, some of these companies have annual revenues in the tens of billions, so the fines are substantial. Add to that the image blow a business takes when found to have been breached, and the revenue hit becomes even larger.

For over a year now, the GDPR’s bark has certainly been heard. And now that the compliance date has come and gone, companies will soon find out that the bite for non-compliance can really hurt. What can you do now?
Visit the EventTracker GDPR compliance page and download the solution brief to learn more about what needs to be done and how to protect your company.

Also, check out this webcast, Five Things You Should Know about GDPR Compliance, hosted by EventTracker’s CEO A.N. Ananth and the CEO of Fifth Step and GDPR author, Darren Wray.
 
References:
GDPREU.org
TechCrunch

Today’s CISO Challenges…The Talent Gap

It continues to be challenging being a Chief Information Security Officer (CISO) today – and 2018 promises no rest. As high-profile data breaches escalate, CISOs, CIOs, and other information security professionals believe their organizations are more likely than ever to fall victim to a data breach or cyber attack. What’s more, they’re most worried about something simple, and it’s not even technology. The top concern among CISOs for 2018 was “lack of competent in-house staff”.   
 
Larry Ponemon, author of the report, says he was also surprised by the finding, adding that typically data breaches, ineffective security tools, or some other technical aspect of guarding security tops the concerns list. “Workforce issues are usually somewhere in the middle,” he says. According to the survey of 612 CIOs and IT security pros, the top five threats that worry them the most in 2018 are:
  • 70%:  lack of competent in-house staff
  • 67%:  data breach
  • 59%:  cyber attack
  • 54%:  inability to reduce employee negligence
  • 48%:  ransomware
The majority of respondents expect breaches and attacks to stem from inadequate in-house expertise (65%); inability to guard sensitive and confidential data from unauthorized access (59%); an inability to keep pace with sophisticated attackers (56%); and a failure to control third parties' use of company's sensitive data (51%), according to the survey.
 
Looking for a way to bridge the talent gap? Consider co-managed services such as SIEMphonic.
 

Do you have a cyber blind spot?

What's the cost of securing your network from a cyber attack? According to Precision Analytics and The CAP Group, many companies are now spending less than 0.2 percent of their revenue on cybersecurity – at least one-third less than financial institutions. If that's you, then you may have a cyber blind spot. Brian Walker, a former head of global information technology for Marathon Oil, says: "It’s scary… Executives making funding decisions aren't necessarily millennials who intuitively understand how cyber threats work. It’s guys my age that are the problem," said Walker, who is in his early 50s. "We've been 30-years-trained in a world that doesn't work this way anymore. This cyber blind spot is a real challenge," Walker said. "Our fear is that we will play an ostrich and put our head in the sand until something blows up and people get killed, or until the lights go out for a month."
 
The threat isn't new, but it is escalating.
 
Financial services and retailers have been in the limelight for data breaches. Based on analysis developed over 15 years, energy companies that earn $1 billion in revenue a year generally spend about $1 million on cybersecurity, Precision Analytics found. In comparison, companies within the financial industry with $1 billion in revenue could spend as much as $3 million.
 
The approach to cybersecurity is also affected by the normal separation of departments within individual companies, the experts said. “At many companies, IT security typically falls under the purview of the chief information officer, while operations security staff report to a different boss,” Walker said. The result: a communications gap.
 
It's not that the companies don't care about security. But the threat is growing exponentially, and companies of all types have had a hard time keeping up. For instance, “there's been a dramatic rise in so-called supply-chain attacks where a software update itself has been compromised before it's even introduced into a company system,” Walker said.


Do you have a blind spot? Is it underinvestment in cybersecurity? Or do you have an overdose of confidence in the shiny security whizzbang which the vendor promised would be as effective as Iron Dome?
 

Time is money. Downtime is loss of money.

The technological revolution has introduced a plethora of advanced solutions to help identify and stop intrusions. There is no shortage of hype, innovation, and emerging trends in today's security markets. However, data leaks and breaches persist. Shouldn't all this technology stop attackers from gaining access to our most sensitive data? Stuxnet and WannaCry are examples of weaknesses in the flesh-and-bone portion of a security plan. These attacks could have been prevented had it not been for human mistakes.
 
Stuxnet is the infamous worm (allegedly) authored by a joint U.S.-Israeli coalition, designed to slow the enrichment of uranium by Iran's nuclear program. The worm exploited multiple zero-day flaws in industrial control systems, damaging enrichment centrifuges. So, how did this happen?
  • The Natanz nuclear facility, where Stuxnet infiltrated, was air-gapped.
  • Somebody had to physically plant the worm, which requires extensive coordination; personnel at Natanz should have been more alert.
  • Stuxnet was discovered on systems outside of Natanz, and outside of Iran. Somebody likely connected a personal device to the network, then connected their device to the public Internet.
  • While Stuxnet went from inside to outside, the inverse could easily have happened by connecting devices to both internal and external networks.
 
If human beings had updated their systems, we might never have added "WannaCry" to our security lexicon. WannaCry and its variants are more recent, larger-scale examples. Microsoft had issued patches for the SMBv1 vulnerability, eventually removing the protocol version from Windows. Still, some 200,000 computer systems were infected in over 150 countries, to the tune of an estimated $4 billion in ransoms and damages.
 
The lesson here? We care too much about gadgets and logical control systems, and not enough about the skilled staff needed to operate this technology. Gartner estimates that 40 percent of mid-size enterprises don't have a cybersecurity expert in their organization, and the labor shortage for security professionals may keep that talent gap open for at least three years. A logical solution is to assess which security functions can be effectively delivered as a service to minimize internal staffing requirements.

Services (such as SIEMphonic) solve popular use cases including:
  • Operational tasks such as log monitoring, vulnerability scanning, and firewall management
  • Delivering 24/7 security monitoring when there is not enough staff to accomplish this internally (a minimum of eight to 12 dedicated security analysts are required for 24/7 monitoring)
  • Security monitoring for public cloud environments to ensure users are not placing sensitive data in the cloud in ways that are insecure or non-compliant
  • Building out advanced attack detection capabilities by employing advanced analytics to identify threats through statistical or behavioral anomalies in security events, IT logs, network behavior, network forensics, payload analysis, endpoint behavior, and endpoint forensics
 
Time is money; downtime is loss of money. The cost of doing nothing is significant.
 

Cybersecurity is an Investment, Not a Cost Center

The cybersecurity threat landscape is in constant motion – ever evolving. According to Kaspersky Labs, 323,000 new malware strains are discovered daily! Clearly, this rate of increased risk to a company’s assets and business continuity warrants a smart investment in cybersecurity. Unfortunately, many companies are not keeping pace with their increasing risk, nor could they ever be expected to if their leadership views cybersecurity as a cost center while still viewing other innovations, such as digital transformation, as an investment.

For any digital transformation project to be successful and return the anticipated value, cybersecurity must be considered foundational.

Just as that new $500 suit is an investment to help you get that new job, the cost to have it tailored is part of that investment. The same goes for digital transformation and cybersecurity. But for many companies, the digital transformation is long underway, and cybersecurity desperately needs to catch up. That new suit needs to be tailored quickly before another person sees you in that poor-fitting getup.

A successful cybersecurity strategy has little hope if executive leadership does not champion the proper investment and prioritize the efforts. The result is too often organizations piecemealing pointed IT security solutions one at a time, failing to prioritize holistic cybersecurity projects. This not only exacerbates the risks to the business, but also hampers the efficiency of other technology projects deemed competitive differentiators.

So, where do you start to improve your cybersecurity posture ASAP?
  1. Get executive support immediately so you don’t spin your wheels on half-baked inefficient IT security practices.
  2. Change the mindset by showing cybersecurity is an investment in the company’s future.
  3. Keep in mind the cybersecurity triad of “platform, people and process”, and seek complete solutions that can ensure long-term success.
Here are some tools to help you along your journey…

Cybersecurity Maturity Model

It’s important to take a step back and understand where you are today, where you should be, and where you want to go next. By considering all four key aspects of a complete security architecture – prevent, detect, respond, and predict – a good Cybersecurity Maturity Model provides a practical stair-step approach toward the appropriate level for your organization.



SIEM Total Cost of Ownership Calculator

Security Information and Event Management (SIEM) is the foundation of any well-grounded IT security strategy. However, depending on your organization’s unique requirements, staffing, and deployment situation, the total cost of SIEM can vary widely. Use our SIEM TCO calculator to compare 1-year and 3-year costs of self-managed and Co-Managed SIEM solutions.

 
 
Calculate your TCO now

 

How to Protect Your Network from Ransomware: Tips from the FBI

The FBI estimates that more than 4,000 ransomware attacks have occurred daily since the beginning of 2016 – a 300% increase from the previous year. This is due in part to the thriving sector of “ransomware-as-a-service”: individuals don’t need to possess a particular skill set; rather, malware developers advertise their ransomware on the dark web to be distributed by less sophisticated attackers, allowing the developers/advertisers to take their cut from the ransoms paid.
 
The cyber criminals behind these attacks aren’t necessarily picky; they target big companies, small businesses, government entities, and individuals. But the damage they cause to small- and medium-size businesses (SMBs) is particularly alarming. A report by a security firm last year noted that 22% of SMBs affected by ransomware had to cease operations immediately; one-third had suffered a ransomware attack in the previous year.
 
“If you haven’t been a victim of ransomware or any other type of computer attack, you have to operate as if it’s just a matter of time before you are – and take the steps to protect yourself and mitigate the resulting damage or loss,” says Sheraun Howard, supervisory special agent with the FBI’s Cyber Division in Washington, D.C.
 
How it Works
While the names, details, and entry points of each attack vary, the concept remains the same. First, the bad actors deliver the ransomware. This is often done via spearphishing emails – targeted phishing emails aimed at specific employees that contain personal details to perpetrate the fraud. These emails or email attachments will contain an exploit for a particular software application vulnerability that provides the attacker access to your computer. Once in, the attacker typically uses additional malware to propagate throughout your network and drop ransomware onto your environment. However the ransomware is delivered, it prevents the targeted user from accessing their data or systems by encrypting their files. The targets then receive an email, text file, or screen message demanding that they pay a ransom in order to regain access.
 
How to Defend Yourself
The FBI recommends that all businesses take the following steps to reduce their risk of a ransomware attack:
 
  1. Educate your employees about the risks
  2. Create a security incident response plan
  3. Update and patch software and firmware
  4. Manage privileged accounts
  5. Audit user access to your systems
  6. Use firewalls, spam filters, and anti-virus programs
 
These six recommendations are a solid start for individuals and companies, but at some point, advanced threat protection with Co-Managed SIEM will need to be evaluated and adopted to truly stay ahead of attacks.
 

The Difference Between a SIEM Solution and SIEM Tool: Features vs. Outcomes

Can you simply buy a “SIEM solution”? It turns out you really cannot, no matter how hard you try or how passionately the vendor promises. What you can buy at the store is a SIEM tool, which is a completely different thing. SIEM tools are products, while implementing a security or compliance solution involves people, process, and technology. SIEM tools are a critical part of SIEM, but they’re not the whole solution.
 
Security processes – unlike appliances, software and services – cannot be acquired in exchange for cash. They can only be established by an organization and then mature to an appropriate level. Developing a policy, as well as operational procedures for SIEM, is an important task that has to be handled by the security team.
 
Over the past decade of working with SIEM technology, this is the one inescapable lesson: people and process are the portion of the iceberg below the waterline – not visible and, frankly, very large. It has caused very large “unsinkable” ships to go down (think Titanic).
 
And it is a problem that our Co-Managed SIEMphonic solution was expressly designed to solve. Let us help you strengthen your security defenses, respond effectively, control costs, and optimize your team's capabilities.
 
Catch more threats. Respond quicker. Simplify compliance.
 

Catch Malware Hiding in WMI with Sysmon

By Randy Franklin Smith

Security is an ever-escalating arms race. The good guys have gotten better about monitoring the file system for artifacts of advanced threat actors. They in turn are avoiding the file system and burrowing deeper into Windows to find places to store their malware code and dependably trigger its execution in order to gain persistence between reboots.

For decades, the Run and RunOnce keys in the registry have been favorite bad-guy locations for persistence, but we know to monitor them using Windows auditing or Sysmon. So attackers in the know have moved on to WMI.

WMI is such a powerful area of Windows for good or evil. Indeed, the bad guys have found effective ways to hide and persist malware in WMI. In this article, I’ll show you a particularly sophisticated way to persist malware with WMI Event Filters and Consumers.

WMI allows you to link these two objects in order to execute a custom action whenever specified things happen in Windows. WMI Events are related to, but more general than, the events we all know and love in the event log. WMI Events include system startup, time intervals, program execution, and many, many other things. You can define a __EventFilter, which is basically a WQL query that specifies what events you want to catch in WMI. This is a permanent object saved in the WMI Repository. It’s passive until you create a consumer and link them with a binding. The WMI Event Consumer defines what the system should do with any events caught by the filter. There are different kinds of Event Consumers for actions like running a script, executing a command line, sending an email, or writing to a log file. Finally, you link the filter and consumer with a __FilterToConsumerBinding. After saving the binding, everything is active: whenever events matching the filter occur, they are fed to the consumer.

So, how would an attacker cause his malware to start up each time Windows reboots? Just create a filter that catches some event that happens shortly after startup. Here’s what PowerSploit uses for that purpose:
 
SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE
TargetInstance ISA 'Win32_PerfFormattedData_PerfOS_System' AND
TargetInstance.SystemUpTime >= 200 AND
TargetInstance.SystemUpTime < 320

Then you create a WMI Event Consumer, which is another permanent object stored in the WMI Repository. Here’s some VBScript adapted from mgeeky’s WMIPersistence.vbs script on GitHub. It’s incomplete but edited for clarity; if you want to play with this functionality, refer to https://gist.github.com/mgeeky/d00ba855d2af73fd8d7446df0f64c25a:
 
' Spawn a CommandLineEventConsumer (assumes objService1 is an existing
' connection to the root\subscription namespace)
Set objInstances2 = objService1.Get("CommandLineEventConsumer")
Set consumer = objInstances2.SpawnInstance_
consumer.Name = "MyConsumer"
consumer.CommandLineTemplate = "c:\bad\malware.exe"
consumer.Put_

Now you have a filter that looks for when the system has recently started up, and a consumer which runs c:\bad\malware.exe, but nothing happens until they are linked like this:
 
' Bind the filter to the consumer to activate the subscription
Set objInstances3 = objService1.Get("__FilterToConsumerBinding")
Set binding = objInstances3.SpawnInstance_
binding.Filter = "__EventFilter.Name=""MyFilter"""
binding.Consumer = "CommandLineEventConsumer.Name=""MyConsumer"""
binding.Put_

At this point the subscription is live: whenever an event matches the filter, the consumer runs c:\bad\malware.exe.

As a good guy (or girl), how do you catch something like this? There are no events in the Windows Security Log, but thankfully Sysmon 6.10 added three new events for catching WMI filter and consumer activity, as well as the binding which makes them active.
 
Sysmon event ID 19 – WmiEventFilter activity detected:

EventType: WmiFilterEvent
UtcTime: 2018-04-11 16:26:16.327
Operation: Created
User: LAB\rsmith
EventNamespace: "root\\cimv2"
Name: "MyFilter"
Query: "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA 'Win32_PerfFormattedData_PerfOS_System' AND TargetInstance.SystemUpTime >= 200 AND TargetInstance.SystemUpTime < 320"

Sysmon event ID 20 – WmiEventConsumer activity detected:

EventType: WmiConsumerEvent
UtcTime: 2018-04-11 16:26:16.360
Operation: Created
User: LAB\rsmith
Name: "MyConsumer"
Type: Command Line
Destination: "c:\\bad\\malware.exe"

Sysmon event ID 21 – WmiEventConsumerToFilter activity detected:

EventType: WmiBindingEvent
UtcTime: 2018-04-11 16:27:02.565
Operation: Created
User: LAB\rsmith
Consumer: "CommandLineEventConsumer.Name=\"MyConsumer\""
Filter: "__EventFilter.Name=\"MyFilter\""
As you can see, the events provide full details so that you can analyze the WMI operations and determine whether they are legitimate or malicious. From event ID 19, I can see that the filter is looking for system startup. Event ID 20 shows me the name of the program that executes, and event ID 21 shows that they are linked.

If you add these events to your monitoring, you’ll want to analyze activity for a while in order to whitelist the regular, legitimate producers of these events in your particular environment.
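You can also hunt for these objects directly. Here is a minimal sketch that enumerates permanent WMI subscriptions for review, assuming the third-party Python "wmi" package on a Windows host; property names match the WMI classes discussed above:

# Minimal sketch: list permanent WMI event subscriptions for review.
import wmi  # third-party: pip install wmi

conn = wmi.WMI(namespace="root/subscription")

print("== Event filters ==")
for f in conn.query("SELECT * FROM __EventFilter"):
    print(f.Name, "->", f.Query)

print("== Command-line consumers ==")
for c in conn.query("SELECT * FROM CommandLineEventConsumer"):
    print(c.Name, "->", c.CommandLineTemplate)

print("== Filter-to-consumer bindings ==")
for b in conn.query("SELECT * FROM __FilterToConsumerBinding"):
    print(b.Filter, "<->", b.Consumer)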

That’s persistence via WMI for you, but you might have noted that we are not file-less at this point; my malware is just a conventional exe in c:\bad. To stay off the file system, bad guys have resorted to creating new WMI classes and storing their logic in a PowerShell script in a property on that class. Then they set up a filter that kicks off a short PowerShell command that retrieves the larger code from the custom WMI class and invokes it. Usually this is combined with some obfuscation like base64 encoding, and maybe encryption too.

Host-based Versus Network-based Security

The argument is an old one: are you better off with a network-based detector, assuming all hosts will eventually communicate, or should you look at each host to determine what it is up to?

Over five years ago, the network was far simpler. There was a clear perimeter – us versus them, if you will. You could examine all traffic at the egress point (so-called North/South traffic) for potentially hostile patterns while pretty much ignoring local traffic (so-called East/West traffic) as usually benign. This is usually done with the help of attack signatures which are updated periodically. In other words, classic network-based, signature-driven detection.

The same applied to firewalls: you could be network-based and/or have one for each host. The attraction of the network-based firewall is simplicity – one device to deploy and manage versus the hassle of configuring one firewall per host. Notice that this depends on the traditional (simple) network with a clear us/them perimeter. But that model is vanishing fast: applications are moving to the cloud and the perimeter is porous. You pretty much need a micro-fortress around each host or location.

So, what arguments are the network-based passive monitoring solutions making for themselves? And how do they stack up against a host-based managed solution? Let me count the ways…
 
Claim: Passive network monitoring has no impact on endpoint performance.
Response: A well-designed, user-space host-based solution has virtually no impact on the endpoint.

Claim: A network-based solution is transparent to system users.
Response: The host-based sensor runs as a service and is also invisible to users.

Claim: Network monitoring is invisible to attackers.
Response: Insiders know of its existence because they have access to the network diagram; every external attacker assumes that network traffic is being monitored and seeks to be stealthy.

Claim: Network-based monitoring can listen to all endpoints, regardless of type; no specific sensor is needed.
Response: A host-based sensor must be provided for each endpoint type; the common ones are Windows and Linux.

Claim: Passive network monitoring devices are easy to install.
Response: When host-based sensors are provided as a managed service, they are also simple to install.

Claim: When monitoring at the egress point only, endpoints can move or be added with no extra effort.
Response: Endpoints are usually not added or moved randomly, but through a defined process; extending this process to accommodate sensor deployment is no more work than deploying patches or anti-virus.
 
And here are the challenges with network-based monitoring…
 
Challenge: Network-based signatures are always out-of-date or lagging.
Problem: Zero-day attacks are not detected – maybe worse, detection is limited to attacks with signatures only.

Challenge: Packet inspection is blind to encrypted traffic.
Problem: North/south network traffic is increasingly encrypted.

Challenge: Packet inspection is hard to scale as network speeds increase.
Problem: Host-based approaches, on the other hand, scale neatly both up and down; otherwise, we're going to need a bigger boat.

Challenge: Network monitors can’t handle switched networks without span ports.
Problem: Now you need span ports, more hardware, and networking skills.

Challenge: Network monitors usually can only see north/south traffic.
Problem: Insider threat, anyone? Remember Nyetya (NotPetya)? It spread laterally. Here’s an article about how to detect it.

Challenge: Network monitoring is blind to host activity: new processes, removable media.
Problem: Remember Edward Snowden?

Challenge: Network monitoring does no log collection; therefore, it can’t meet compliance requirements.
Problem: PCI DSS, NIST 800-171, and all other compliance standards mandate log collection and retention for 1+ years to be able to perform forensics.
 
And now, the advantages of a host-based solution…
 
Advantages of a host-based solution:
  • Collects the audit trail; meets compliance needs
  • Develops a detailed understanding of user behavior; fights insider attacks
  • Scales well; no single choke point
  • Detects subtle patterns of misuse which can’t be seen at a higher layer (first-time-seen, zero-day)
  • Effective for encrypted traffic as well
  • Sees all actions, including east/west
  • Effective against removable media
  • Works even in switched networks
 
And to be fair, how to address the challenges…
 
Challenge: Sensor deployment to nodes.
Response: SIEMphonic is a managed service; leave the deployment and configuration to us.

Challenge: A sensor can impact node performance.
Response: The EventTracker Windows sensor consumes 0.1% of memory/CPU resources and 0.001% of network bandwidth.

Challenge: Adding nodes means adding sensors.
Response: It’s no more complicated than deploying anti-virus.

Challenge: It can’t see all network traffic; only hosts where a sensor is installed.
Response: The next-gen firewall you already paid for does see this traffic; we get all of its logs, so why duplicate effort and cost?

Challenge: A sensor must be available for the chosen platform.
Response: An EventTracker endpoint sensor is available for Windows, Linux, AS/400, and IBM iSeries.
 
Don't bring a knife to a gunfight. Passive network monitoring may be attractive because of its deployment simplicity and "fit and forget" promise, but it is not capable of solving today's network security and compliance challenges.
 

Once More Unto the Data (Breach), Dear Friends

As I reflect on this year, a Shakespearean quote plays out in my mind – King Henry the Fifth rallying his troops to attack a breach, or gap, in the wall of a city: “Once more unto the breach, dear friends”. Sadly, this has become the new normal. But even more so, 2017 has felt like Lemony Snicket's A Series of Unfortunate Events. There were massive data breaches, unintended exposures of sensitive information on the internet, and other unfortunate tech incidents.
 
Here are five that illustrate the variety:
  1. Dallas Emergency Sirens: Just before midnight on a Friday in early April, all 156 of the emergency sirens in Dallas started sounding simultaneously for no apparent reason. The hubbub lasted a full 90 minutes before the sirens could be manually overridden and shut down, during which time panicked residents flooded 911 with calls. Dispatchers who typically pick up within 10 seconds were so overwhelmed that the wait time hit six minutes. Officials blamed hackers for the intrusion into their emergency alert system. Nobody had ever thought this could happen.
2. WannaCry: The National Security Agency has for years been diligently finding major weaknesses in commonly used pieces of software. Instead of alerting the affected companies about the vulnerabilities, however, it’s been hiding those aces up its sleeve for future use. This year, a group of hackers calling themselves the Shadow Brokers stole a number of those exploits and then turned them loose on the internet. North Korea used one such NSA-developed hacking technique to target Windows, resulting in a piece of ransomware called “WannaCry” that crippled an estimated 230,000 computers around the world. Brad Smith, Microsoft’s Chief Legal Officer, remarked: "An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.”
3. State Election Systems: Russian hackers targeted election systems in 21 states during the 2016 presidential election (to say nothing of their activity on Facebook, Twitter, Reddit, etc.), as part of what the Department of Homeland Security called “a decade-long campaign of cyber-enabled operations directed at the U.S. Government and its citizens.” Jeanette Manfra, acting assistant secretary for the office of cybersecurity and communications, told the Senate Select Committee on Intelligence that "the cyberattacks were intended or used to undermine public confidence in electoral processes.”
4. Equifax: In September, consumer credit ratings agency Equifax revealed hackers had stolen the personal details of roughly half of all Americans – 143 million people. Equifax waited five months to tell anyone and then bungled its response, initially forcing those affected to sign a legal document prohibiting them from joining a class-action suit, then inadvertently directing potential victims to a fake phishing site which proceeded to steal yet more information.
  5. Deep Root Analytics: This summer, a Republican data analysis company called Deep Root Analytics left exposed a 1.1-terabyte online database containing the personal information of 200 million American voters. Not just birthdays and addresses, this leak included deeply personal information about individual voters, including their likely stance on abortion, gun control, stem cell research, environmental issues, and 44 other categories.
Will 2018 be better?
There is the promise of advancements in fields like AI and machine learning. And we could learn from our mistakes – but nah, not really. I don't mean to be a nattering nabob of negativism, but given the increasing penetration of IT into every facet of life, so long as those tasked with administering these increasingly complex systems are equipped with weaponry from the last war, it’s hard to see improvement.

Still bringing a knife to a gunfight? SIEMphonic can help level the odds.
 

For of all sad words of tongue or pen, the saddest are these: 'We weren't logging'

It doesn't rhyme and it's not what Whittier said, but it's true. If you don't log it when it happens, the evidence is gone forever. I know personally of many times when the decision was made not to enable logging, and it was later regretted when something happened that could have been explained, attributed, or proven had the logs been there. On the bright side, there are plenty of opposite situations where, thankfully, the logs were there when needed. In fact, in a recent investigation we happened to enable a certain type of logging hours before the offender sent a crucial email that became the smoking gun in the case, thanks to our ability to correlate key identifying information between the email and the log.

Why don't we always enable auditing everywhere? Sometimes it's simple oversight, but more often the justification is:

  • We can't afford to analyze it with our SIEM
  • We don't have a way to collect it
  • It will bog down our system

Let's deal with each of those in turn and show why they aren't valid.

We can't afford to analyze it with our SIEM

Either because of hardware resources, scalability constraints, or volume-based licensing, organizations limit what logging they enable. Let's just assume you really can't upgrade your SIEM for whatever reason. That doesn't stop you from at least enabling the logging. Maybe it doesn't get analyzed for intrusion detection, but at least it's there (the most recent activity, anyway) when you need it. Sure, audit logs aren't safe and shouldn't be left on the system where they are generated, but I'd still rather have logging turned on even if it just sits there being overwritten. Many times, that's been enough to explain, attribute, or prove what happened. But here's something else to consider: even if you can't analyze it "live" in your SIEM, that doesn't mean you have to leave it on the system where it's generated – where it's vulnerable to deletion or overwriting as it ages out. At least collect the logs into a central, searchable archive like open-source Elastic.
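Even a bare-bones archive is better than nothing. Here is a minimal sketch that ships log lines into Elasticsearch, assuming the official Python client (8.x) against a local node; the index name and file path are illustrative:

# Minimal sketch: index raw log lines into Elasticsearch as a searchable
# archive. Third-party client: pip install elasticsearch
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

with open("/var/log/syslog", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        es.index(index="log-archive", document={
            "message": line.rstrip("\n"),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })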

We don't have a way to collect it

That excuse doesn't hold either. If your server admins or workstation admins push back against installing an agent, you don't have to resort to remote polling-based log collection. On Windows, use native Windows Event Forwarding; on Linux, use syslog. Both technologies are agentless and efficient, and Windows Event Forwarding is resilient. You can even define noise filters so that you don't clog your network and other resources with junk events.

Logging will bog down our system

This bogey-man is still active, but it's just not based on fact. I've never encountered a technology or installation where properly configured auditing made a material impact on performance. Today storage is cheap, and you only need to worry about scheduling and compression on the tiniest of network pipes – like maybe a ship with a satellite IP link. Windows auditing is highly configurable, and as noted earlier, you can further reduce volume by filtering noise at the source. SQL Server auditing, introduced in 2008, is even more configurable and efficient. If management is serious, require that this push-back be proven in tests; if you carefully configure your audit policy and output destination, the tests will likely show that auditing has negligible impact.

When it comes down to it, you can't afford not to log. Even if today you can't collect and analyze all your logs in real time, at least turn on logging in each system and application, and keep working to expand collection and analysis. You won't regret it.

True Cost of Data Breaches

The Cisco 2017 Annual Cybersecurity Report provides insights based on threat intelligence gathered by Cisco's security experts, combined with input from nearly 3,000 Chief Security Officers (CSOs) and other security operations leaders from businesses in 13 countries.
 
Here are some takeaways:
  • Data breaches have repercussions: More than 50 percent of organizations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention.
    Lesson: Is sunlight the best disinfectant?
  • Repercussions are expen$ive: For organizations that suffered a breach, the effect was substantial: 22% of breached organizations lost customers – 40% of them lost more than a fifth of their customer base and 29% lost revenue, with 38% of that group losing more than a fifth of their revenue. In addition, 23% of breached organizations lost business opportunities, with 42% of them losing more than a fifth of such opportunities.
    Lesson: There's a bad moon rising.
  • Complexity and skill shortage drive risk: CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security postures. Security leaders also reveal that their security departments are increasingly complex environments with nearly two-thirds of organizations using six or more security products – some with even more than 50 – increasing the potential for security effectiveness gaps and mistakes.
    Lesson: Calculate asset risk to prioritize spending; co-sourcing can help.
  • It’s the basics: Criminals are leveraging "classic" attack mechanisms such as adware and email spam in an effort to easily exploit the gaps that such complexity can create. Old-fashioned adware software that downloads advertising without user permission continues to prove successful, infecting 75% of organizations polled.
    Lesson: Security laggards, beware. Here are "some stories that never happened" from "files that do not exist".
  • Spam works: Spam is now at a level not seen since 2010, and accounts for nearly two-thirds of all email – with 8-10% of it being outright malicious. Global spam volume is rising, often spread by large and thriving botnets.
    Lesson: Spam is easy and effective, so a mix of technology and awareness is needed.
  • Data is everywhere; not much actionable intelligence: Just 56% of security alerts are investigated and less than half of legitimate alerts are actually remediated. Defenders, while confident in their tools, are undermined by complexity and manpower challenges. Criminals are exploiting the inability of organizations to handle all important security matters in a timely fashion.
    Lesson: Look for ease of use; get access to expertise via co-sourcing.
What can/should you do?
  1. Improve threat defense technologies and processes after attacks by separating IT and security functions 
  2. Increase security awareness training for employees 
  3. Implement risk mitigation techniques

The Perimeter is Dead: Long-live the Perimeter

In 2005, the Department of Homeland Security commissioned Livermore National Labs to produce a kind of pre-emptive post-mortem report. Rather than wait for a vengeful ex-KGB hacker to ignite an American pipeline until it could be seen from space, the report issued recommendations for preventing an incursion that had not yet happened from ever happening again.
 
Recommendation Number 1: Know your perimeter.
"The perimeter model is dead," pronounced Bruce Schneier, author of The New York Times' best seller Data and Goliath, and the CTO of IBM Resilient. "But there are personal perimeters. It doesn't mean there exists no perimeters. It just means it's not your underlying metaphor any more. So, I wouldn't say to anyone running a corporate network: There are no perimeters, zero."

"The traditional fixed perimeter model is rapidly becoming obsolete," stated the CSA's December 2013 white paper,” because of BYOD and phishing attacks providing untrusted access inside the perimeter, and SaaS and IaaS changing the location of the perimeter. Software defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the traditional model's value of invisibility and inaccessibility to ‘outsiders’, but can be deployed anywhere – on the internet, in the cloud, at a hosting center, on the private corporate network, or across some or all of these locations."

This reality invalidates the fortress model of safeguarding the corporate network – one where all assets sit inside a well-defined perimeter that can be defended. Instead, each asset requires a micro-fortress around it, regardless of where it is located. The EventTracker sensor enables such a micro-fortress around the endpoint on which it operates, providing host-based intrusion detection, data leak protection, and endpoint threat detection. While the sensor itself runs on any Windows platform, it can also act as a forwarder for any local syslog sources, relaying their logs over an encrypted connection.
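The log-forwarding piece of that micro-fortress is conceptually simple. The sketch below is not EventTracker's implementation – those details are proprietary – but a minimal, hypothetical illustration in Python (standard library only) of the idea: accept syslog messages from local sources and relay them to a collector over an encrypted TLS channel. The listener port and collector hostname are placeholders.

    import socket
    import ssl

    LISTEN = ("127.0.0.1", 5514)                 # local syslog sources send here (placeholder port)
    COLLECTOR = ("collector.example.com", 6514)  # TLS syslog collector (placeholder host)

    # Accept plaintext syslog datagrams from sources on this host.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(LISTEN)

    # Wrap an outbound TCP connection in TLS so logs are encrypted in transit.
    context = ssl.create_default_context()
    tls = context.wrap_socket(socket.create_connection(COLLECTOR),
                              server_hostname=COLLECTOR[0])

    while True:
        message, _src = udp.recvfrom(8192)       # one syslog message per datagram
        tls.sendall(message + b"\n")             # relay over the encrypted channel

A production forwarder would add reconnection, buffering, and certificate management, but the principle – assume untrusted transit, encrypt every hop – is the same.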
 
Welcome to your software defined perimeter.
 

Can your Cybersecurity Posture be Called "Reactive Chaos"?

Does this sound familiar? You have no control of your environment and most of your efforts are diverted into understanding what happened, containing the damage, and remediating the issue. New projects, including cloud development and mergers and acquisitions, are significantly stalled. If this does sound familiar, then most likely you are blind to what is happening on the network, unaware of where the weaknesses are, and without the ability to quickly assess risk.

This is the alternate reality organizations enter once they have been materially compromised. It stops business, costs millions, and can have an incalculable impact on current and future customers. You get here by thinking tactically all the time – no time to step back and consider the big picture, just a stream of small changes and further investments in new, disparate tools. This wasn't the business plan you started the year with, but it is what you will be managing for months, and likely years, to come.

How can you avoid this? Get visibility into your entire security posture – endpoints and networks included – and be able to measure it easily and, preferably, continuously, so you can take proactive action. This is important and useful for monitoring, responding to, and in some cases blocking potential exploits. But it is only a start.

Embed the culture of security: Have you appointed a cybersecurity champion?
You need a cybersecurity champion just as you need a leader for a fire drill – one who practices and directs possibly panicked staff in evacuating the floor or building in the event of a fire or other emergency. By embedding a security culture into the organization, you gain the visibility and assurance you need for the best defense against reactive chaos.

Systemically avoid reactive chaos.
Automate and orchestrate wherever possible to provide better visibility. Co-source when necessary, as it gives you access to experts in cybersecurity at an affordable price point.

Forget 007 Intel…What Truly Wins the War?

How important is intelligence in bringing victory or averting defeat? In our IT security universe, this refers to "threat intelligence", which has been all the rage for some years now. Indeed, a number of providers charge hefty sums for best-of-breed feeds – mixed strategic and tactical, with full actor information, detailed indicators, and revelations about future attacks targeted at your organization. During a conference, attendees at a roundtable were asked, "If you hear 3 days in advance that you will be hit with a colossal DDoS attack of a particular type, will it help you?" Some answered “yes” and pointed at specific things they could do in the time available, while others said “sort of” – they would still take heavy damage, but might be able to reduce panic and avoid some mistakes in responding. A few said they would be able to do only a few things – and if the “3-day attack warning” costs them $100K, they won't sign up for it.

F.H. Hinsley, the historian of British intelligence in the real war against Hitler, made a sustained attempt to show how intelligence affected its outcome. His conclusion, which did not please the intelligence establishment, was that the efforts of MI6 and Bletchley Park shortened the war, but emphatically did not win it. As John Keegan noted, "The reason is that the fiction of intelligence has worked so powerfully on the Western imagination that many of its readers, including presidents and prime ministers, have been brought to believe that intelligence solves everything. It stops wars starting. If they start nevertheless, it assures that the wrong side loses and the right side wins."

Actual warfighters (read: skilled security professionals) armed with weapons (security tools), on top of threat intelligence, are needed to win the war. As Chuvakin observed in this Threat Intelligence and Operational Agility article, telling armed peasants and spearmen that a ballistic missile is coming does not help – even if you know the exact model and who launched it. You need to have the defenses, tools, people, and effective processes already in place.

This is the value proposition of our SIEMphonic co-managed SIEM-as-a-service offering. Put our 24/7, ISO 27001-certified team of experts to work for you. They come armed with deep subject matter expertise, robust processes, and award-winning weaponry. And oh yes, it’s all integrated with up-to-the-minute threat intelligence.

Still skeptical? See use cases of what the team has caught, told in top-secret 007 fashion as stories that "never happened" from "files that do not exist". Intel never wins wars on its own, but combined with effective teams, defenses, and processes, the right side can triumph.

Security Signals Everywhere: Finding the Real Crisis in a World of Noise

Imagine dealing with a silent but mentally grating barrage of security alerts every day. The security analyst’s dilemma? Either cast nets wide enough to identify all potential security incidents, or laser-focus on a few and risk missing an important attack.

A recent Cisco study covered in CSO found that 44 percent of security operations managers saw more than 5,000 security alerts a day. As a consequence, they can only investigate half of the alerts they receive every day, and follow up on less than half of alerts deemed legitimate. VentureBeat says the problem is far worse. Just 5 percent of alerts are investigated due to the time and complexity of completing preliminary investigations.
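Put those numbers together and the funnel is sobering. Here is a minimal sketch using the figures quoted above; the share of investigated alerts that turn out to be legitimate is our illustrative assumption, not a number from either study.

    # Alert-triage funnel using the figures quoted above.
    alerts_per_day = 5000
    investigated = alerts_per_day * 0.50   # "can only investigate half"
    legitimate = investigated * 0.28       # assumed share that proves legitimate
    followed_up = legitimate * 0.45        # "less than half of alerts deemed legitimate"

    print(f"Investigated: {investigated:.0f}")   # 2500
    print(f"Legitimate:   {legitimate:.0f}")     # 700
    print(f"Followed up:  {followed_up:.0f} of {alerts_per_day} daily alerts")  # ~315

Under those assumptions, well under 10% of the daily alert volume ever gets acted upon – and on VentureBeat’s 5 percent investigation rate, the picture is bleaker still.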

The CSO article recommends better filtering to reduce threat fatigue, while focusing efforts on the most important risks to a company’s industry and business. These are great suggestions. However, in a world of exploding risks, you need a dedicated team of experts on point 24/7, while deploying technology to stay ahead of the threat landscape.

This is all very cumbersome and expensive. Even the largest companies in the world may not have this level of resources. That is where a tailored, affordable managed threat detection and response or co-managed SIEM comes into play. Here’s why co-managed SIEM is better than a DIY scenario for the digital transformation era:
 
  1. A dedicated SWAT team for security – You may have great analysts, but they’re stretched and may be tired. Expand their reach with a team of external experts who can partner on calibrating and monitoring security services, follow up on alerts, and augment your team when you need more resources due to business growth, staff departures, or an inability to hire enough experts.
  2. It’s challenging to optimize processes when you’re constantly fighting fires. Leave that work to your partner. EventTracker’s Security Operations Center, for example, is ISO/IEC 27001-certified, and we have to work hard to maintain that certification by continually improving our information management systems for our clients.
  3. Self-managing a SIEM solution can be expensive and difficult. Co-management is on the rise and expected to grow five-fold by 2020. EventTracker’s SIEMphonic platform provides all the managed security services you need, including SIEM and log management, threat detection and response, vulnerability assessment, user behavior analysis, and compliance management. It collects data from a variety of sources – your platform, application, and network logs; alerts from intrusion detection systems; and vulnerability scans – and analyzes it all. In addition, our HoneyNet deception technology uses virtualized decoys throughout your network to lure bad actors and sniff out attacks (a minimal decoy sketch follows this list).
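To make the deception idea concrete, here is a minimal, hypothetical decoy in Python – not EventTracker’s HoneyNet, just the underlying principle: listen on a port that no legitimate service uses, so any connection attempt is suspicious by construction and worth reporting. The port number is a placeholder.

    import socket
    from datetime import datetime

    DECOY_PORT = 2222   # placeholder: no legitimate service listens here

    # Legitimate users have no reason to touch a decoy port,
    # so every connection is a high-fidelity signal.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", DECOY_PORT))
    listener.listen()

    while True:
        conn, (src_ip, src_port) = listener.accept()
        print(f"{datetime.now().isoformat()} decoy hit from {src_ip}:{src_port}")
        conn.close()    # a real decoy would forward this event to the SIEM

The appeal of deception is exactly this signal-to-noise property: unlike the 5,000-alerts-a-day firehose, a decoy almost never fires, and when it does, it matters.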

If you’re concerned about the rise of risks, you should be. Your information security team has great expertise and skills – but it’s probably time to extend their reach.
 
Empower your company with co-managed SIEM and home in on the real crises, despite a world of noise. Get SIEMphonic managed security service today.