September 30, 2019
Threats and threat actors continue to evolve and morph, creating advanced and even more dangerous tactics that must be mitigated. October is National Cybersecurity Awareness Month (NCSAM). NCSAM 2019 centers on the theme of Own IT. Secure IT. Protect IT., advocating a proactive approach to enhanced cybersecurity in the workplace and at home.
August 20, 2019
A financially motivated ransomware gang hit 23 local governments in Texas in a coordinated attack last week. Ransomware is a type of malicious software, often delivered via email or drive-by web downloads, that locks up an organization’s systems until a ransom is paid or files are recovered by other means such as backup restoration.
July 23, 2019
Just how much should you be spending on IT Security? It’s a vexing question to answer for many reasons, as each situation has its own unique circumstances and factors. But here are some insights garnered over the last decade in cybersecurity.
June 04, 2019
Overwhelmed by the hype from security vendors in overdrive? Notice the innovation and trends and feel like jumping on the bandwagon? It’s an urge that many buyers in mid-size companies feel, and it can be overpowering. That flashy vendor demo, that rousing speech at a tradeshow, that pressure of keeping up with the Joneses. “So what have you done for your security lately?” is a nagging thought.
April 03, 2019
Increasing complexity and frequency of attacks have escalated the need for detection of attacks and incident response. Endpoints are the new battleground as they are a) more pervasive across the network, b) more commonly used by non-IT personnel, and c) less well-defended by IT teams who first move to secure the data center. Endpoint detection and response (EDR) solutions meet the need to rapidly investigate large numbers of systems for evidence of malicious activity, quickly uncover, and then remediate attacks and incidents.
March 19, 2019
Did you know that Microsoft is a security vendor? No, it’s true. For years, the company was hammered by negative public perception and the butt of jokes around the 2002 "trustworthy computing" memo. The company has steadily invested in developing a security mindset and the product results are now more visible to the public.
February 25, 2019
Over 7 billion global devices in an always on and continuously connected world create a soft target for today’s attacker. Whether working to monetize data or make a political statement, adversaries are well funded and staffed in the battle for endpoint access and control.
January 31, 2019
We recently released the findings of the Security Information and Event Management (SIEM) study conducted by Cybersecurity Insights. The study surveyed over 345 IT and Security executives and practitioners, with 45% of them small and mid-sized firms with 999 or fewer employees and the balance comprised of enterprise organizations with 1,000 or more employees.
January 24, 2019
If you think your organization is too small to be targeted by threat actors, think again. Over 60% of organizations have experienced an exploit or breach, so the stealthy and ever-evolving hacker may already be in your organization performing reconnaissance or awaiting strategic command and control (C&C) instructions.
October 05, 2018
In simpler times, security technology approaches were clearly defined and primarily based on prevention with things like firewalls, anti-virus, web, and email gateways. There were relatively few available technology segments and a relatively clear distinction between security technology purchases and outsourcing engagements.
October 03, 2018
A hot trend is emerging in the Managed Service Provider (MSP) space: transforming from an MSP into a Managed Security Service Provider (MSSP). Typically, MSPs act as an IT administrator; however, the rapid rise of cloud-based Software-as-a-Service (SaaS) is reducing margins for MSPs, forcing them to compete on price and causing buyers to become less loyal.
September 17, 2018
Advances in data analytics and increased connectivity have merged to create a powerful platform for change. Today, people, objects, and connections are producing data at unprecedented rates. According to DOMO, 90% of all data today was created in the last two years with a whopping 2.5 quintillion bytes of data being produced per day. With more Internet of Things (IoT) devices being produced, new social media outlets created, and the increasing number of people turning to search engines for information, the numbers will continue to grow.
September 11, 2018
When it comes to selling security, one of the major challenges faced by managed services providers (MSPs) is changing the mindset of small- and medium-sized business (SMB) owners. With massive breaches hogging news headlines today, security is hard to ignore—yet many SMBs choose to do so because they don’t realize how “at risk” they may be.
September 04, 2018
Breaches continue to be reported at a dizzying pace. In 2018 alone, a diverse range of companies — including Best Buy, Delta, Orbitz, Panera, Saks Fifth Avenue, and Sears — have been victimized. These are not small companies, nor did they have small IT budgets. So, what’s the problem? Threats are escalating in scope and sophistication. Oftentimes, new technologies are added to the enterprise network without being fully tested for security flaws. This creates issues for security teams, making it difficult to defend gaps and protect against persistent threats. Another issue facing security teams is that an overemphasis on prevention has caused underinvestment in security monitoring and incident response. Is your team faced with any of these three issues, which can lead to a failure to respond properly to incidents, malware, and threats?
August 31, 2018
Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords. But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much damage as HeartBleed did.
August 31, 2018
With data breaches and Snowden-like information grabs, I’m getting increased requests for how to track data moving to and from removable storage, such as flash drives. The good news is that the Windows Security Log does offer a way to audit removable storage access. I’ll show you how it works, and since EventTracker has some enhanced capabilities in this area, I’ll briefly compare native auditing to EventTracker. Removable storage auditing in Windows works similarly to, and logs the exact same events as, File System auditing. The difference is in controlling what activity is audited.
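As a rough sketch of the filtering involved, removable storage auditing surfaces the same access event as File System auditing (event ID 4663), distinguished by its task category. The record layout below is a simplified assumption for illustration, not EventTracker’s actual schema:

```python
# Hedged sketch: separate removable-storage accesses from ordinary file-system
# accesses. Both audit categories emit event ID 4663; the task category
# distinguishes them. Field names here are simplified assumptions.
def removable_storage_accesses(events):
    return [
        e for e in events
        if e["event_id"] == 4663 and e["task_category"] == "Removable Storage"
    ]

sample = [
    {"event_id": 4663, "task_category": "File System", "object": r"C:\docs\a.txt"},
    {"event_id": 4663, "task_category": "Removable Storage", "object": r"E:\dump.zip"},
]
flagged = removable_storage_accesses(sample)
print(flagged)  # only the E:\ (flash drive) access is returned
```

In a real deployment the events would come from the Security log rather than a hand-built list, but the selection logic is the same.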
August 31, 2018
There are five different ways you can log on in Windows, called “logon types.” The Windows Security Log records the logon type in event ID 4624 whenever you log on. The logon type lets you determine whether the user logged on at the actual console, via Remote Desktop, via a network share, or whether the logon is connected to a service or scheduled task starting up.
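For illustration, the standard numeric codes behind those logon types can be turned into a small lookup. The event dictionary shape here is a hypothetical simplification of a parsed 4624 record:

```python
# Standard Windows logon-type codes as recorded in event ID 4624.
LOGON_TYPES = {
    2: "Interactive (local console)",
    3: "Network (e.g., access to a network share)",
    4: "Batch (scheduled task)",
    5: "Service (service startup)",
    10: "RemoteInteractive (Remote Desktop)",
}

def describe_logon(event):
    """Return a readable description of a parsed 4624 event's logon type."""
    code = event.get("LogonType")
    return LOGON_TYPES.get(code, f"Other/unknown logon type ({code})")

print(describe_logon({"EventID": 4624, "LogonType": 10}))
# RemoteInteractive (Remote Desktop)
```

Windows defines additional logon types (7, 8, 9, 11, and others); the five above are the ones the post calls out.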
August 30, 2018
Implement a Central Collection System. Microsoft has made some considerable changes to event management in Windows Vista. But are these changes enough to help you control your entire infrastructure? This article is the last in a series that looks at Vista event management.
August 29, 2018
Logging for Incident Response: Part 1 – Preparing the Infrastructure. Of all the uses for log data across the spectrum of security, compliance, and operations, using logs for incident response presents a truly universal scenario – you can be forced to use logs for incident response at any moment, whether you’re prepared or not.
August 22, 2018
The Domain Name System, or DNS, is used in computer networks to translate domain names to the IP addresses that computers use to communicate with each other. DNS exists in almost every computer network; it communicates with external networks and is extremely difficult to lock down since it was designed to be an open protocol.
August 06, 2018
Now that advanced cybersecurity protections are a must-have in today’s landscape, organizations of all sizes are increasingly seeking out and leaning on a trusted security partner to manage their security services. A recent study released by Forrester revealed that 57 percent of companies are seeking outside help for IT systems monitoring and 45 percent are outsourcing threat detection and intelligence.
July 26, 2018
Office 365 (O365) is immensely popular across all industry verticals in the small and medium enterprise space. It is often the killer app for a business and contains valuable, critical information about the business. Accordingly, O365 defense is a top concern on IT leaders’ minds.
June 28, 2018
There’s an old saying: Their bark is worse than their bite. However, this is not the case with the penalties of non-compliance when it comes to the General Data Protection Regulation (GDPR). With the enforcement date of the GDPR having passed on May 25, 2018, any company not in compliance could be in for a very nasty shock.
June 25, 2018
It continues to be challenging being a Chief Information Security Officer (CISO) today – and 2018 promises no rest. As high-profile data breaches escalate, CISOs, CIOs, and other information security professionals believe their organizations are more likely than ever to fall victim to a data breach or cyber attack.
May 28, 2018
The technological revolution has introduced a plethora of advanced solutions to help identify and stop intrusions. There is no shortage of hype, innovation, and emerging trends in today's security markets. However, data leaks and breaches persist. Shouldn't all this technology stop attackers from gaining access to our most sensitive data? Stuxnet and WannaCry are examples of weaknesses in the flesh-and-bone portion of a security plan. These attacks could have been prevented had it not been for human mistakes.
May 09, 2018
The FBI estimates that more than 4,000 ransomware attacks have occurred daily since the beginning of 2016. That’s a 300% increase from the previous year. This is due in part to the thriving sector of “ransomware-as-a-service.” Attackers no longer need to possess a particular skill set; instead, malware developers advertise their ransomware on the dark web for distribution by less sophisticated attackers. This allows the developers/advertisers to take their cut from the ransom amount paid.
April 26, 2018
Can you simply buy a “SIEM solution”? Turns out you really cannot, no matter how hard you try nor how passionately the vendor promises. What you can buy at the store is a SIEM tool, which is a completely different thing. SIEM tools are products, while implementing a security or compliance solution involves people, process, and technology. SIEM tools are a critical part of SIEM, but they’re not the whole solution.
April 24, 2018
Security is an ever-escalating arms race. The good guys have gotten better about monitoring the file system for artifacts of advanced threat actors. The attackers, in turn, are avoiding the file system and burrowing deeper into Windows to find places to store their malware code and dependably trigger its execution in order to gain persistence between reboots.
April 12, 2018
The argument is an old one: are you better off with a network-based detector, assuming all hosts will eventually communicate, or should you look at each host to determine what it is up to?
March 29, 2018
As I reflect on this year, a Shakespearean quote plays out in my mind – when King Henry the Fifth is rallying his troops to attack a breach, or gap, in the wall of a city, “Once more unto the breach, dear friends”...
March 29, 2018
It doesn't rhyme and it's not what Whittier said, but it's true. If you don't log it when it happens, the evidence is gone forever. I personally know of many instances when the decision was made not to enable logging, and it was later regretted when something happened that could have been explained, attributed, or proven had the logs been there.
March 15, 2018
The Cisco 2017 Annual Cybersecurity Report provides insights based on threat intelligence gathered by Cisco's security experts, combined with input from nearly 3,000 Chief Security Officers (CSOs), and other security operations leaders from businesses in 13 countries.
March 01, 2018
In 2005, the Department of Homeland Security commissioned Livermore National Labs to produce a kind of pre-emptive post-mortem report.
February 15, 2018
Does this sound familiar? You have no control of your environment and most of your efforts are diverted into understanding what happened, containing the damage, and remediating the issue.
February 01, 2018
How important is intelligence in bringing victory or averting defeat? In our IT Security universe, this refers to "threat intelligence", which has been all the rage for some years now.
January 18, 2018
Imagine dealing with a silent, but mentally grating barrage of security alerts every day. The security analyst’s dilemma? They either need to cast nets wide enough to identify all potential security incidents, or laser-focus on a few and risk missing an important attack.
January 12, 2018
On January 3, 2018, an industry-wide hardware-based security vulnerability was disclosed. CVE-2017-5753 and CVE-2017-5715 are the official references to Spectre, and CVE-2017-5754 is the official reference to Meltdown.
December 28, 2017
We all hear it over and over again: complying with data protection requirements is expensive. But did you know that the financial consequences of non-compliance can be far more expensive?
December 14, 2017
When we are attacked, we feel a sense of outrage and the natural tendency is to want to somehow punish the attacker. To do this, you must first identify the attacker, preferably accurately, or else. This is easier said than done, especially online.
December 07, 2017
Given the acute shortage of security skills, managed offerings like SIEM-as-a-Service and SOC-as-a-Service, such as SIEMphonic, have become more widely adopted. They have proven to be an excellent way to leverage outside expertise and reduce cost, which is a challenge for companies globally. Seem too good to be true? It is and it isn’t. Regardless of how much responsibility you delegate, accountability lies firmly on the shoulders of the organization doing the delegating.
December 01, 2017
While you’ve been busy defending against ransomware, the bad guys have been scheming about new ways to steal from you. Let’s review a tactic seen in the news: bitcoin mining. Hackers broke into servers hosted at Amazon Web Services (AWS) that hold information from the multi-national, multi-billion-dollar companies Aviva and Gemalto. The criminals were using the servers’ computing power to mine the cryptocurrency Bitcoin.
November 30, 2017
“You see, but you do not observe. The distinction is clear.” Sherlock Holmes said this to John Watson in “A Scandal in Bohemia.” Holmes was referring to the number of steps from the hall to the rooms upstairs. Watson, by his own admission, has mounted those steps hundreds of times, but could not say how many there were.
November 29, 2017
Interest continues to build around pass-the-hash and related credential artifact attacks, like those made easy by Mimikatz. The main focus surrounding this subject has been hardening Windows against credential attacks, cleaning up artifacts left behind, or at least detecting PtH and related attacks when they occur. All of this is important – especially because end-users must log on to end-user workstations, which are the most vulnerable systems on the network.
November 22, 2017
The traditional enterprise network has seen a tectonic shift in recent years thanks to cloud, mobility and now IoT. Where once enterprise data was confined to the office network and data center, it’s now expanded past its traditional perimeter. For instance, in a hospital, traditionally data resided in the data center, laptops, and desktop machines.
November 16, 2017
The evolution of Security Information and Event Management (SIEM) solutions has made a few key shifts over time. It started as simply collecting and storing logs, then morphed into correlating information with rules and alerting a team when something suspicious was happening.
November 07, 2017
“You’re in the fight, whether you thought you were or not,” said Gen. Mike Hayden, former Director of the CIA and NSA. It may appear at first to be a scare tactic or an attempt to sow fear, uncertainty, and doubt, but truly, what this means is that it’s time to adopt the Assume Breach paradigm.
October 26, 2017
The IT security industry’s skill shortage is a well-worn topic. Survey after survey indicates that a lack of skilled personnel is a critical factor in weak security posture. If the skills are not available in your organization then you could: a) ignore the problem and hope for the best, or b) get help from the outside. Approach “a” is simply a dereliction of duty, and approach “b” has some negative connotations associated with the word “outsource”. It throws up images of loss of control and misaligned priorities.
October 13, 2017
While the threats have changed over the past decade, the way systems and networks are managed has not. We continue with the same operations and support paradigm, despite the fact that internal systems are compromised regularly.
October 05, 2017
A common dysfunction in many companies is the disconnect between the CISO, who views cybersecurity as an everyday priority, versus top management who may see it as a priority only when an intrusion is detected. The seesaw goes something like this: If breaches have been few and far between then leaders tighten the reins on the cybersecurity budget until the CISO proves the need for further investment in controls.
September 28, 2017
Computers do what they are told, whether good or bad. One of the best ways to detect intrusions is to recognize when computers are following bad instructions – whether in binary form or in some higher level scripting language. We’ll talk about scripting in the future, but in this article I want to focus on monitoring execution of binaries in the form of EXEs, DLLs and device drivers.
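One simple illustration of that idea (an assumption for this sketch, not EventTracker’s actual method) is flagging process-creation records, such as event ID 4688, whose binary launches from a directory where executables rarely belong:

```python
# Illustrative sketch: flag process-creation records whose executable image
# runs from a directory where legitimate binaries rarely live.
SUSPECT_DIRS = ("\\temp\\", "\\appdata\\local\\temp\\", "\\downloads\\")

def is_suspicious(record):
    """record: dict with an 'image' path from a process-creation event (e.g., 4688)."""
    image = record["image"].lower()
    return any(d in image for d in SUSPECT_DIRS)

events = [
    {"image": r"C:\Windows\System32\svchost.exe"},
    {"image": r"C:\Users\bob\AppData\Local\Temp\invoice.exe"},
]
flagged = [e for e in events if is_suspicious(e)]
print(flagged)  # the Temp-directory executable is the only record flagged
```

Real monitoring would also weigh signer, hash reputation, and parent process, but path-based triage is a common first cut.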
September 27, 2017
This post got me thinking about a recent conversation I had with the CISO of a financial company. He commented on how quickly his team was able to instantiate a big data project with open source tools. He was of the view that such power could not be matched by IT security vendors who, in his opinion, charged too much money for demonstrably poorer performance.
September 11, 2017
Recently Equifax, one of the big-three US credit bureaus, disclosed a major data breach. It affects 143 million individuals — mostly Americans, although data belonging to citizens of other countries, for the most part Canada and the United Kingdom, was also compromised.
August 31, 2017
2017 has been a banner year for IT Security. The massive publicity of attacks like WannaCry has focused public attention like never before on a hitherto obscure field. Non-technical people, including board members, nod gravely as the CISO or a wise friend harangues them for attention, behavior change, or budget on the topic of IT Security. It’s in a way comforting to think that such attention is a good thing.
August 29, 2017
As a small business, how would you survive an abrupt demand for $250,000? It’s ransomware, and as this poll shows, that’s what an incident would cost a small business. Just why has ransomware exploded onto the scene in 2017? Because it works. Because most bad guys are capitalists, driven by the profit motive. Because most small businesses have not taken the time to guard their data. Because they are soft targets.
August 14, 2017
How much security is enough? That’s a hard question to answer. You could spend $1 or $1M on security and still ask the same question. It’s a trick question; there is no correct answer. The better question is: how much risk are you willing to tolerate? Mind you, the answer to this question is a “beauty is in the eye of the beholder” deal, and again there is no one correct answer.
July 27, 2017
Have we seen the true business impact of ransomware yet, or has this just been a proof-of-concept? The recent news about WannaCrypt and Petya ransomware should not come as a surprise. The outbreaks are due not only to the ransomware’s ability to spread but also to mutate. While IT security teams identify, hunt, and remove specific variants of the ransomware, there may already be unknown mutated varieties lurking dormant and ready to execute.
June 29, 2017
As I write this, yet another ransomware attack is underway. This time it’s called Petya, and it again uses SMB to spread. But here’s the thing — it uses an EXE to get its work done. That’s important because there are countless ways to infect systems, with old ones being patched and new ones being discovered all the time. You definitely want to reduce your attack surface by disabling/uninstalling unneeded features. Plus, you want to patch systems as soon as possible.
June 28, 2017
A new ransomware variant known as Petya is sweeping across the globe. It is currently having an impact on a wide range of industries and organizations, including critical infrastructure such as energy, banking, and transportation systems. While first observed in 2016, this variant contains notable differences in operation that caused it to be “immediately flagged as the next step in ransomware evolution.”
June 05, 2017
With distressing regularity, new breaches continue to make headlines. The biggest companies and the largest institutions, both private and government, are affected. Every sector is in the news. Recounting these attacks is fruitless. Taking action based on the trends and threat landscape is the best step. Smarter threats that evade basic detection, mixed with the operational challenge of the skills shortage, make the protection gap wider.
May 31, 2017
Ransomware is a popular weapon for the modern attacker, with more than 50% of the 330,000+ attacks in 3Q15 targeted against US companies. No industry is immune to these attacks, which, if successful, are a blot on the financial statements of the targeted companies. Despite their success, ransomware attacks are not sophisticated; they exploit traditional infection vectors and are not stealthy.
May 25, 2017
A global pandemic of ransomware hit Windows-based systems in 150 countries in a matter of hours. The root cause was traced to a vulnerability corrected by Microsoft for supported platforms (Windows 7, 8.1 and higher) in March 2017, about 55 days before the malware became widespread. Detailed explanations and mitigation steps are described here. The first step to mitigation is to apply the update from Microsoft. A version for XP and 2003 was also released by Microsoft on Friday, May 12, 2017.
May 09, 2017
Shared threat intelligence is an attractive concept. The good guys share experiences about what the bad guys are doing thereby blunting attacks. This includes public-private partnerships like InfraGard, a partnership between the FBI and the private sector dedicated to sharing information and intelligence to prevent hostile acts against the U.S.
April 27, 2017
I’m a big believer in security analytics and detective controls in general. At least sometimes, bad guys are going to evade your preventive controls, and you need the critical defense-in-depth layers that detective controls provide through monitoring logs and all the other information a modern SIEM consumes. Better yet, going on the offensive with threat hunting means taking the battle to the enemy instead of passively waiting.
April 12, 2017
IT workers in general, but more so IT Security professionals, pride themselves on their technical skills. They keep abreast of the latest threats and the newest tactics to demonstrate to management and peers that they are “worthy.” The long alphabet soup in the signature (CISSP, CISA, MCSE, CCNA, and so on) is all very necessary and impressive. However, cybersecurity puzzles are not solved by technical skills alone. In fact, the case can be made that soft skills are just as important, especially because everyone in the organization needs to cooperate. Security is everyone’s job.
March 30, 2017
So you got hit by a data breach, an all too common occurrence in today’s security environment. Who gets hit? Odds are you will say the customer. After all it’s their Personally Identifiable Information (PII) that was lost. Maybe their credit card or social security number or patient records were compromised. But pause a moment and consider the hit on the company itself. The hit includes attorney fees, lost business, reputational damage, and system remediation costs.
March 30, 2017
Insider threats are typically much less frequent than external attacks, but they usually pose a much higher severity of risk for organizations when they do happen. While they can be perpetrated by malicious actors, they are more commonly the result of negligence. In addition to investing in new security tools and technology to protect against external threats, companies should place a higher priority on identifying and fixing internal risks. Here are the top 3 high-risk behaviors that compromise IT security.
March 20, 2017
Made you look! It’s a clickbait headline, a popular tactic with the press to get people to click on their article. Cyber criminals, the ones after the gold in your network, are at heart, capitalists. In other words, they seek efficiency. How to get maximum returns for the minimum possible work. This tendency reveals itself in multiple ways.
February 28, 2017
By Randy Franklin Smith. Ransomware is about denying you access to your data via encryption. But that denial has to be of a great enough magnitude to create sufficient motivation for the victim to pay. The magnitude of the denial is a function of: the value of the encrypted copy of the data, which in turn depends on the intrinsic value of the data (irrespective of how many copies exist) as well as the number of copies of the data and their availability; and the extent of operations interrupted.
January 26, 2017
The Cyber Kill Chain model by Lockheed Martin describes how attackers use the cycle of compromise, persistence and exfiltration against an organization. Defense strategies that focus exclusively on the perimeter and on prevention do not take into account the kill chain life cycle approach; this is a reason why attackers are continuing to be so successful. Defending against persistent and advanced threats requires methods that detect and deny threats at each stage of the kill chain.
January 17, 2017
A common assumption is that security expenditure is a proxy for security maturity. This may make sense at first blush but paradoxically, a low relative level of information security spending compared to peers can be equally indicative of a very well-run or a poorly run security program. Spending analysis is, therefore, imprecise and a potentially misleading indicator of program success. In fact, it is necessary to ensure that the right risks are being adequately managed, and understand that spending may fluctuate accordingly.
December 21, 2016
Regulatory compliance is a necessary step for IT leaders, but it is not sufficient to reduce residual IT security risk to tolerable levels. This is not news. But why is this the case? Here are three reasons:
December 21, 2016
‘Twas the night before Christmas and all through HQ
Not a creature was stirring, except greedy Lou –
An insider thief who had planned with great care
A breach to occur while no one was there.
Lou began his attack without trepidation,
For all his co-workers were on their vacations.
He logged into Payroll and then in a flash
Transferred to his account a large sum of cash.
But Lou didn’t realize that what he was doing
Had sent an alert that something was brewing.
November 30, 2016
Log collection, SIEM and security monitoring are the journey not the destination. Unfortunately, the destination is often a false positive. This is because we’ve gotten very good at collecting logs and other information from production systems, then filtering that data and presenting it on a dashboard. But we haven’t gotten that good at distinguishing events triggered by bad guys from those triggered by normal everyday activity.
November 16, 2016
We have been implementing Security Information and Event Management (SIEM) solutions for more than 10 years. We serve hundreds of active SIEM users and implementations. We have had many awesome, celebratory, cork-popping successes. Unfortunately, we’ve also had our share of sad, tearful, profanity-filled failures.
October 26, 2016
We are delighted that EventTracker is now part of the Netsurion family. On October 13, 2016 we announced our merger with managed security services provider Netsurion. As part of the agreement, Netsurion’s majority shareholder, Providence Strategic Growth, the equity affiliate of Providence Equity Partners, made an investment in EventTracker to accelerate growth for our combined company.
September 29, 2016
How do you figure out when someone was actually logged onto their PC? By “logged onto” I mean, physically present and interacting with their computer. The data is there in the security log, but it’s so much harder than you’d think. First of all, while I said it’s in the security log, I didn’t say which one. The bad news is, it isn’t in the domain controller log. Domain controllers know when you logon, but they don’t know when you logoff. This is because domain controllers just handle initial authentication to the domain and subsequent authentications to each computer on the network.
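The pairing logic the post hints at can be sketched as follows: match each workstation logon event (4624) with the logoff (4634) carrying the same Logon ID to reconstruct session durations. The tuple format is a simplification assumed for this example:

```python
# Minimal sketch of session reconstruction from a workstation's Security log:
# pair each 4624 (logon) with the 4634 (logoff) sharing the same Logon ID.
def sessions(events):
    """events: time-ordered list of (event_id, logon_id, timestamp) tuples."""
    open_logons, completed = {}, []
    for event_id, logon_id, ts in events:
        if event_id == 4624:
            open_logons[logon_id] = ts
        elif event_id == 4634 and logon_id in open_logons:
            completed.append((logon_id, open_logons.pop(logon_id), ts))
    return completed

log = [(4624, "0x3e7a", 900), (4624, "0x4f01", 1000), (4634, "0x3e7a", 1800)]
print(sessions(log))  # one completed session for logon ID 0x3e7a
```

Real logs also carry 4647 (user-initiated logoff) and can drop logoff events entirely, so production logic needs timeouts for sessions that never close.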
August 24, 2016
A common hacking method is to steal information by first gaining lower-level access to your network. This can happen in a variety of ways: through a print server, via a phished email, or taking advantage of a remote control program with poor security. Once inside, the hacker will escalate their access rights until they find minimally protected administrative accounts.
August 17, 2016
Cyber criminals are constantly developing increasingly sophisticated and dangerous malware programs. Statistics for the first quarter of 2016 compared to 2015 show that malware attacks have quadrupled.
July 28, 2016
Ideas to Retire is a TechTank series of blog posts that identify outdated practices in public sector IT management and suggest new ideas for improved outcomes. Dr. John Leslie King is W.W. Bishop Professor in the School of Information at the University of Michigan and contributed a post hammering the idea of “do more with less,” calling it a “well-intentioned but ultimately ridiculous suggestion.”
July 26, 2016
Windows gives you several ways to control which computers can be logged onto with a given account. Leveraging these features is a critical way to defend against persistent attackers: by limiting accounts to only the appropriate computers, you contain what a stolen credential can reach.
July 07, 2016
There’s a wealth of intelligence available in your DNS logs that can help you detect persistent threats. So how can you use them to see if your network has been hacked, or check for unauthorized access to sensitive intellectual property after business hours?
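As one hedged example of what such analysis can look like, a common heuristic is scoring the character entropy of queried hostnames, since algorithm-generated domains tend to look random. The 3.5-bit threshold below is an illustrative assumption, not a tuned detection rule:

```python
import math
from collections import Counter

# Shannon entropy (bits per character) of a hostname label; high values are a
# common heuristic hint of algorithm-generated domains in DNS logs.
def entropy(label):
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

queries = ["mail.example.com", "xk2qv9zj7prt4wna.biz"]
for q in queries:
    label = q.split(".")[0]
    verdict = "suspicious" if entropy(label) > 3.5 else "ok"
    print(q, round(entropy(label), 2), verdict)
# mail.example.com 2.0 ok
# xk2qv9zj7prt4wna.biz 4.0 suspicious
```

Entropy alone produces false positives (CDN hostnames look random too), so in practice it is combined with query volume, NXDOMAIN rates, and timing.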
June 30, 2016
Analytics is an essential component of a modern SIEM solution. The ability to crunch large volumes of log and security data in order to extract meaningful insight can lead to improvements in security posture. Vendors love to tell you all about features and how their particular product is so much better than the competition.
June 22, 2016
Detecting virus signatures is so last year. Creating a virus with a unique signature or hash is quite literally child’s play, and most anti-virus products catch just a few percent of the malware that is active these days. You need better tools, called endpoint detection and response (EDR), such as those that integrate with SIEMs, that can recognize errant behavior and remediate endpoints quickly.
June 13, 2016
In a recent webinar, we demonstrated techniques by which EventTracker monitors DNS logs to uncover attempts by malware to communicate with Command and Control (C&C) servers. Modern malware uses DNS to resolve algorithm generated domain names to find and communicate with C&C servers. These algorithms have improved by leaps and bounds since they were first see in Conficker.C. Early attempts were based on a fixed seed and so once the malware was caught, it could be decompiled to predict the domain names it would generate.
June 01, 2016
Aristotle put forth the idea in his Poetics that a drama has three parts — a beginning or protasis, middle or epitasis, and end or catastrophe. Far too many SIEM implementations are considered to be catastrophes. Having implemented hundreds of such projects, here are the three parts of a SIEM implementation which if followed will in fact minimize the drama but maximize the ROI.
May 25, 2016
Ransomware burst onto the scene with high profile attacks against hospitals, law firms and other organizations. What is it and how can you detect it? Ransomware is just another type of malware; there’s nothing particularly advanced about ransomware compared to other malware.
May 11, 2016
SC Magazine released the results of a research survey focused on the rising acceptance of SIEM-as-a-Service for the small and medium sized enterprise. The survey, conducted in April 2016, found that SMEs and companies with $1 billion or more in revenue or 5,000-plus employees faced similar challenges:
April 27, 2016
The popular press makes much of zero-day attacks. These are attacks based on vulnerabilities in software that is unknown to the vendor. This security hole is then exploited by hackers before the vendor becomes aware and hurries to fix it—this exploit is called a zero day attack.
April 20, 2016
Yet another recent report confirms the obvious, that SMBs in general do not take security seriously enough. The truth is a bit more nuanced than that, of course—SMB execs generally take security very seriously, but they don’t have the dollars to do enough about it—although it amounts to the same thing. This year, though, SMBs are going to have to look at security differently. Why? That is because enterprise execs are repeatedly seeing their own networks hurt because of less-than-terrific security from SMB partners tha
April 14, 2016
Traditional areas of risk — financial risk, operational risk, geopolitical risk, risk of natural disasters — have been part of organizations’ risk management for a long time. Recently, information security has bubbled to the top, and now companies are starting to put weight behind IT security and Security Operations Centers (SOC).
March 30, 2016
Do you embrace the matrix? Not this one, but the IT Organizational Matrix, or org chart. The fact is, once networks get to a certain size, IT organizations begin to specialize and small kingdoms emerge. For example, endpoint management (aka Desktop) may be handled by one team, whereas the data center is handled by another (Server team). Vulnerability scanning may be handled by a dedicated team but identity management (Active Directory? RSA tokens?) is handled by another.
March 23, 2016
Cloud security is getting attention and that’s as it should be. But before you get hung up on techie security details, like whether SAML is more secure than OpenID Connect and the like, it’s good to take a step back. One of the tenets of information security is to follow the risk. Risk is largely a measure of damage and likelihood. When you are looking at different threats to the same cloud-based data then it becomes a function of the likelihood of those risks.
March 04, 2016
The range of threats included trojans, worms, trojan downloaders and droppers, exploits and bots (backdoor trojans), among others. When untargeted (more common), the goal was profit via theft. When targeted, they were often driven by ideology.
February 24, 2016
On Facebook, when two parties are sort-of-kind-of together but also sort-of, well, not, their relationship status reads, “It’s complicated.” Oftentimes, Party A really wants to like Party B, but Party B keeps doing and saying dumb stuff that prevents Party A from making a commitment.
February 17, 2016
Windows supports the digitally signing of EXEs and other application files so that you can verify the provenance of software before it executes on your system. This is an important element in the defense against malware. When a software publisher like Adobe signs their application they use the private key associated with a certificate they’ve obtained from one of the major certification authorities like Verisign.
February 10, 2016
Here’s our list of the Top 5 SIEM complaints:1) We bought a security information and event management (SIEM) system, but it’s too complicated and time-consuming, so we’re:
February 04, 2016
Think about the burglar alarm systems that are common in residential neighborhoods. In the eye of the passive observer, an alarm system makes a lot of sense. They watch your home while you’re asleep or away, and call the police or fire department if anything happens. So for a small monthly fee you feel secure. Unfortunately, there are a few things that the alarm companies don’t tell you.
January 20, 2016
Given today’s threat landscape, let’s acknowledge that a breach has either already occurred within our network or that it’s only a matter of time until it will. Security prevention strategies and technologies cannot guarantee safety from every attack. It is more likely that an organization has already been compromised, but just hasn’t discovered it yet. Operating with this assumption reshapes detection and response strategies in a way that pushes the limits of any organization’s infrastructure, people, processes and technologies.
January 07, 2016
Ho hum. Another new year, time for some more New Year’s resolutions. Did you keep the ones you made last year? Meant to but somehow did not get around to it? This time how about making it easy on yourself?
December 30, 2015
The traditional method for calculating standard Return on Investment (RoI) is that it equals the gain minus the cost, divided by the cost. The higher the resulting value, the greater the RoI. The difficulty in calculating a return on security investment (RoSI), however, is that security tends not to increase profits (gain), but to decrease loss – meaning that the amount of loss avoided rather than the amount of gain achieved is the important element.
Following the standard RoI approach, RoSI can be calculated by the sum of the loss reduction minus the cost of the solution, divided by the cost of the solution. In short, a high result is better for RoI, and a low result is better for RoSI.
This is where it gets difficult: how do you measure the ‘loss reduction’? To a large extent it is based on guesswork and surveys. Bruce Schneier in The Data Imperative concluded, “Depending on how you answer those two questions, and any answer is really just a guess — you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.”
What we find as a practical outcome of delivering our SIEM-as-a-service offering (SIEM Simplified) is that many customers value the anecdotes and statistics that are provided in the daily reports and monthly reviews to demonstrate RoSI to management. Things such as how many attacks were repulsed by the firewalls, how many incidents were addressed by criticality, anecdotal evidence of an attack disrupted or misconfiguration detected. We publish some of these anonymously as Catch of the Day.
It’s a practical way to demonstrate RoSI which is easier to understand and does not involve any guesses.
December 23, 2015
Did you know that SIEM and Log Management are different?
The latter (log management) is all about collecting logs first and worrying about why you need them second (if at all). The objective is “let’s collect it all and have it indexed for possible review. Why? Because we can.”
The former (SIEM) is about specific security use cases. SIEM is a use-case driven technology. Use cases are implementation specific, unlike antivirus or firewalls.
Treating SIEM like Log Management, is a lot like a turducken.
Don’t want that bloated feeling like Aunt Mildred explains here? Then don’t stuff your SIEM with logs absent a use case.
Need help doing this effectively? A co-managed SIEM may be your best bet.
December 09, 2015
You have, no doubt, heard that cyber security is everyone’s job. So then, as the prime defender of your network, what specifically are you doing to empower people so they can all act as sentries? After all, security cannot be automated as much as you’d like. Human adversaries will always be smarter than automated tools and will leverage human ingenuity to skirt around your protections.
But, marketing departments in overdrive are busy selling the notion of “magic” boxes that can envelope you in a protective shell against Voldemort and his minions. But isn’t that really just fantasy? The reality is that you can’t replace well-trained security professionals exercising judgment with computers.
So what does an effective security buyer do?
Answer: Empower the people by giving them tools that multiply their impact and productivity, instead of trying to replace them.
When we were designing EventTracker 8, an oft repeated observation from users was the shortage of senior analysts. If they existed at all in the organization, they were busy with higher level tasks such as policy creation, architecture updates and sometimes critical incident response. The last task on their plates was the bread-and-butter of log review and threat monitoring. Such tasks are often the purview of junior analysts (if they exist). In response, many of the features of EventTracker 8 are designed specifically to enable junior administrators to make effective contributions to cyber security.
Still feeling overwhelmed by the daily tasks that need doing, consoles that need watching, alerts that need triaging? Don’t fret – that is precisely what our SIEM Simplified service (SIEMaas) is designed to provide – as much, or as little help as you need. Become empowered, be effective.
December 02, 2015
Account Lockouts in Active Directory
“User X” is getting locked out and Security Event ID 4740 are logged on respective servers with detailed information.
The common causes for account lockouts are:
Troubleshooting Steps Using EventTracker
Here we are going to look for Event ID 4740. This is the security event that is logged whenever an account gets locked.
2. Select search on the menu bar
3. Click on advanced search
4. On the Advanced Log Search Window fill in the following details:
Once done hit search at the bottom.
You can see the details below. If you want to get more information about a particular log, click on the + sign
Below shows more information about this event.
Now, let’s take a closer look at 4740 event. This can help us troubleshoot this issue.
Logon into the computer mentioned on “Caller Computer Name” (DEMOSERVER1) and look for one of the aforementioned reasons that produces the problem.
To understand further on how to resolve issues present on “Caller Computer Name” (DEMOSERVER1) let us look into the different logon types.
How to identify the logon type for this locked out account?
Just like how it is shown earlier for Event ID 4740, do a log search for Event ID 4625 using EventTracker, and check the details.
Logon Type 7 says User has typed a wrong password on a password protected screen saver.
Now we understand what reason to target and how to target the same.
Microsoft Windows Servers
Microsoft Windows Desktops
Ashwin Venugopal, Subject Matter Expert at EventTracker
Satheesh Balaji, Security Analyst at EventTracker
November 25, 2015
Late binding is a computer programming mechanism in which the method being called upon an object or the function being called with arguments is looked up by name at runtime. This contrasts with early binding, where everything must be known in advance. This method is favored in object-oriented languages and is efficient but incredibly restrictive. After all, how can everything be known in advance?
In EventTracker, late binding allows us to continue learning and leveraging new understanding instead of getting stuck in whatever was sensible at the time of indexing. The upside is that it is very easy to ingest data into EventTracker without knowing much (or anything) about its meaning or organization. Use any one of several common formats/protocols, and voila, data is indexed and available for searching/reporting.
As understanding improves, users can create a “Knowledge Pack” to describe the indexed data in reports, search output, dashboards, co-relation rules, behavior rules, etc. There is no single, forced “normalized” schema and thus no connectors to transform incoming data to the fixed schema.
As your understanding improves, the knowledge pack improves and so does the resulting output. And oh by the way, since the same data can be viewed by two different roles in very different ways, this is easily accommodated in the Knowledge Pack. Thus the same data (e.g., Login failures) can be viewed in one way by the Security team (in real time, as an alert, with trends) and in an entirely different way by the Compliance team (as a report covering a time-span with annotation to show due care).
November 18, 2015
As defenders, it is our job to make the attackers’ lot in life harder. Push them up the “pyramid of pain“. Be a hard target so they move on to a softer/easier one.
November 11, 2015
Over the years, we have seen many approaches to implementing a security monitoring capability.
The “checkbox mentality” is common—when the team uses the out-of-the-box functionality, including perhaps rules/reports, to meet a specific regulation.
The “big hero” approach is found in chaotic environments where tools are implemented with no planning or oversight, in a very “just do it” approach. The results may be fine, but are lost when the “big hero” moves on or loses interest.
The “strict process” organizations that implement a waterfall model and have rigid processes for change management and the like frequently lack the agility and dynamics required by today’s constantly evolving threats.
So what then are the hallmarks of a successful approach? Augusto Barrios described these factors here. Three factors are common:
Since it’s quite hard to get all of it right, an increasingly popular approach is to split the problem between the SIEM vendor and the buyer. Each has strengths critical to success. The SIEM vendor is expert with the technology, likely has well defined processes for implementation and operational success, whereas the buyer knows the environment intimately. Together, good use cases can be crafted. Escalation from the SIEM vendor who performs the monitoring is passed to the buyer team to provide lateral support. This approach has the potential to ramp up very quickly, since each team plays to their existing strengths.
The Gartner term for this approach is “co-managed SIEM.”
Want to get started quickly? Here is a link for you.
November 04, 2015
The release of EventTracker 8 with new endpoint threat detection capabilities has led to many to ask: a) how to obtain these new features and b) where the focus on monitoring efforts should be, on the endpoint or on traditional attack vectors.
The answer to “a” is fairly simple and involves upgrading to the latest version; if you have licensed the suitable modules, the new features are immediately available to you.
The answer to “b” is not so simple and depends on your particular situation. After all, endpoint threat detection is not a replacement of signature based network packet sniffers. If your network permits BYOD or allows business partners to connect entire networks to yours, or permits remote access, why then network-based intrusion detection would be a must (how can you insist on sensors on BYOD?).
On the other hand, malware can be everywhere and anti-virus effectiveness is known to be weak. Phishing and drive-by exploits are real things. Perhaps even accurate inventory of endpoints (think traveling laptops) is hard. This all leads to endpoint-focused efforts as being paramount.
So really, it’s not endpoint or network-focused monitoring; rather it’s endpoint and network-focused monitoring efforts.
Feeling overwhelmed at having to deploy/manage so much complexity? Help is at hand. Our co-managed solution called SIEM Simplified is designed to take the sting out of the cost and complexity of mounting an effective defense.
October 28, 2015
Risk management 101 says you can’t possibly apply the same safeguards to all systems in the network. Therefore, you must classify your assets and apply greater protection to the “critical” systems—the ones where you have more to lose in the event of a breach. And so, desktops are considered less critical as compared to servers, where the crown jewels are housed.
But think about this: an attacker will most likely probe for the weakly defended spot, and thus many widespread breaches originate at the desktop. In fact, in many cases, attackers discover crown jewels are sometimes also available at some workstations of key employees (e.g., the CEO’s assistant?), in which case there is not even a need to attack a hardened server.
So while it still makes sense to mount better defenses of critical systems, it’s equally sensible to be able to investigate compromised systems, regardless of their criticality. To do so, you must be gathering telemetry from all systems. While you may not be able to do this if you are allowing a BYOD policy, you should definitely think about data gathering from beyond just “critical systems.”
The ETDR functionality built in to the EventTracker 8 sensor (formerly agent) for Windows lets you collect this telemetry easily and efficiently. The argument here being it’s very worthwhile given the current threat landscape, to cover not just critical systems, but also desktops, with this technology.
What’s new in EventTracker 8? Find out here.
October 21, 2015
Security Subsistence Syndrome (SSS) is defined as a mindset in an organization that believes it has no security choices and is underfunded, so it minimally spends to meet perceived statutory and regulatory requirements.
Andy Ellis describes this mindset as one “with attitude, not money. It’s possible to have a lot of money and still be in a bad place, just as it’s possible to operate a good security program on a shoestring budget.”
October 14, 2015
If attackers can deploy a remote administration tool (RAT) on your network, it makes it so much easier for them. RATs make it luxurious for bad guys; it’s like being right there on your network. RATs can log keystrokes, capture screens, provide RDP-like remote control, steal password hashes, scan networks, scan for files and upload them back to home. So if you can deny attackers the use of RATs, you’ve just made life a lot harder for them.
October 07, 2015
The news is rife with stories on “advanced” and “persistent” attacks, in the same way as exotic health problems like Ebola. The reality is that you are much more likely to come down with the common cold than Ebola. Thus, it makes more sense to pay close attention to what the Center for Disease Control has to say about it than to stockpile Ebola serum.
In similar vein, how good is your organization in fighting basic, commodity attacks?
It is true that the scary monsters called 0-day, advanced/persistent attacks and state sponsored superhackers are real. But before worrying about these, how are you set up for traditional intrusion attempts that use (5+) year old tools, tactics and exploits? After all, the vast majority of successful attacks are low tech and old school.
Want to rapidly improve your security maturity? Consider SIEM Simplified, our surprisingly affordable service that can protect you from 90% of the attacks for 10% of the do-it-yourself cost.
September 30, 2015
The Riddler is one of Batman’s enduring enemies who takes delight in incorporating riddles and puzzles into his criminal plots—often leaving them as clues for the authorities and Batman to solve.
Question: When is a door, not a door?
Answer: When it’s ajar.
So riddle me this, Batman: When is an alert not an alert?
EventTracker users know that one of its primary functions is to apply built-in knowledge to reduce the flood of all security/log data to a much smaller stream of alerts. However, in most cases, without applying local context, this is still too noisy, so a risk score is computed which factors in the asset value and CVSS score of the source.
This allows us to separate “alerts” into different priority levels. The broad categories are:
And so, there are alerts and there are alerts. Over-reacting to awareness or compliance alerts will drain your energy and eventually sap your enthusiasm, not to mention cost you in real terms. Under-reacting to actionable alerts will also hurt you by inaction.
Can your SIEM differentiate between actionable and awareness alerts?
Find out more here.
September 16, 2015
The “kill chain” is a military concept related to the structure of an attack. In the InfoSec area, this concept is a way of modeling intrusions on a computer network.
Threats occur in up to seven stages. Not all threats need to use every stage, and the actions available at each stage can vary, giving an almost unlimited diversity to attack sets.
Of course, some of the steps can happen outside the defended network, and in those cases, it may not be possible or practical to identify or counter. However, the most common variety of attack is unstructured in nature and originates from external sources. These use scripts or commonly available cracking tools that are widely available. Such attacks are identified by many techniques including:
Evidence of such activities is a pre-cursor to an attack. If defenders observe the activities from external sources, then it is important to review what the targets are. Often times, these can be uncovered by a penetration test. Repeated attempts against specific targets are a clue.
A defense-in-depth strategy gives defenders multiple clues about such activities. These include IDS systems that detect attack signatures, logs showing the activities and vulnerability scans that identify weaknesses.
To be sure, defending requires carefully orchestrated expertise. Feeling overwhelmed? Take a look at our SIEM Simplified offering where we can do the heavy lifting.
September 16, 2015
We hear a lot about tracking privileged access today because privileged users like Domain Admins can do a lot of damage. But more importantly, if their accounts are compromised the attacker gets full control of your environment. In line with this concern, many security standards and compliance documents recommend tracking changes to privileged groups like Administrators, Domain Admins and Enterprise Admins in Windows, and related groups and roles in other applications and platforms.
September 09, 2015
To defend against an attacker, you must know him and his methods. The typical attack launched on an IT infrastructure can be thought of in three stages.
The villain lures the unsuspecting victim to install malware. This can be done in a myriad of ways: by sending an attachment from an apparently trustworthy source, causing a drive by infection through a website hosting malware, or via a USB drive. Attackers target the weakest link, the less guarded desktop or a test system. Frontal assaults against heavily fortified and carefully watched servers are not practical.
Once installed, the malware usually copies itself to multiple spots to deter eradication and it can possibly “phone home” for further instructions. Malware usually lurks in the background, trying to obtain passwords or system lists to further enable Part 2.
As a means to deter removal, malware will move laterally, copying itself to other machines/locations. This movement is also often from peripheral to more central systems (e.g., from workstations to file shares).
Having patiently gathered up (usually zip or rar) secrets (intellectual property, passwords, credit card info, PII, etc.), the malware (or attacker)now sends the data outside the network back to the attacker.
How do you defend yourself against this? A SIEM solution can help, or a managed SIEM solution if you are short on expertise.
September 03, 2015
The (toxic) term “outsourcing” has long been vilified as the substitution of onshore jobs with cheaper offshore people. As noted here, outsourcing, by and large, has really always been about people. The story of outsourcing to-date is of service providers battling it out to deliver people-based services more productively, promising delights of delivery beyond merely doing the existing stuff significantly cheaper and a bit better.
August 19, 2015
For many years now, the security industry has become somewhat reliant on ‘indicators of compromise’ (IoC) to act as clues that an organization has been breached. Every year, companies invest heavily in digital forensic tools to identify the perpetrators and which parts of the network were compromised in the aftermath of an attack.
All too often, businesses are realizing that they are the victims of a cyber attack once it’s too late. It’s only after an attack that a company finds out what made them vulnerable and what they must do to make sure it doesn’t happen again.
This reactive stance was never useful to begin with and given the threat landscape, is totally undone as described by Ben Rossi.
Given the importance of identifying these critical indicators of attack (IoAs), here are eight common attack activities that IT departments should be tracking in order to gain the upper hand in today’s threat landscape.
Here are three IoAs that are both meaningful and relatively easy to detect:
Can you detect out-of-ordinary or new behavior? To quote the SANS Institute…Know Abnormal to fight Evil. Read more here.
August 17, 2015
There’s plenty of interest in all kinds of advanced security technologies like threat intelligence, strong/dynamic authentication, data loss prevention and information rights management. However, so many organizations still don’t know that the basic indicators of compromise on their network are new processes and modified executables.
August 05, 2015
What did the 2015 Verizon DBIR show us?
• 200+ days on average before persistent attackers are discovered within the enterprise network
• 60%+ breaches are reported by a third party
• 100% of breached networks were up to date on Anti Virus
We’ve got detection deficit disorder.
And it’s costing us. Direly!
Think of the time and money spent in detecting, with some degree of confidence, the location of Osama Bin Laden. Then think of the time and money to dispatch Seal Team 6 on the mission. Detection took ten years and cost hundreds of millions of dollars while remediation took 10 days and a few million dollars.
The same situation is happening in your network. You have for example 5,000 endpoints and of those, maybe 5 are compromised as you’re reading this. But which endpoints are compromised? How do you get actionable intelligence so that you can dispatch your own Seal Team 6?
This is the problem, EventTracker 8 was designed to address. Continuous digital forensics data collection using purpose built sensors. The machine learning at the EventTracker Console, sifts through collected data to identify possible malware, lateral movement and exfiltration of data. The processes are all backed by experts of the SIEM Simplified service.
July 30, 2015
The gap between the ‘time to compromise’ and the ‘time to discover’ is the detection deficit. According to Verizon DBIR, the trend lines of these have been diverging significantly in the past few years. Worse yet, the data shows that attackers are able to compromise the victim in days but thereafter are able to spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, this is happening by a third party. This trend points to an ongoing detection deficit disorder. The suggestion is that defenders struggle to uncover the indicators of compromise. While the majority of these attacks are via malware inserted to the victim’s system by a variety of methods, there is also theft of credentials that make it look like an inside job. To overcome the detection deficit, defenders must look for other common evidence of compromise. These include: command and control activity, suspicious network traffic, file access and unauthorized use of valid credentials. EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.” target=”_blank”>Verizon VBIR, the trend lines of these have been diverging significantly in the past few years.
Worse yet, the data shows that attackers are able to compromise the victim in days but thereafter are able to spend an average of 243 days undetected within the enterprise network before they are exposed. More often than not, this is happening by a third party.
This trend points to an ongoing detection deficit disorder. The suggestion is that defenders struggle to uncover the indicators of compromise.
While the majority of these attacks are via malware inserted to the victim’s system by a variety of methods, there is also theft of credentials that make it look like an inside job.
To overcome the detection deficit, defenders must look for other common evidence of compromise. These include: command and control activity, suspicious network traffic, file access and unauthorized use of valid credentials.
EventTracker 8 includes features incorporated into our Windows sensor that provide continuous forensics to look for evidence of compromise.
July 14, 2015
Defense-in-depth pretty much secures and confirms the thought that every security technology has a place but are they really all created equal? Security is not a democratic process and no one is going to complain about security inequality if you are successful at halting breaches. So I think we need to acknowledge a few things. Right now the bad guys are winning on the endpoint – in particular on the workstations. One way or another the attackers are getting users to execute bad
July 13, 2015
Attacks on our IT network are a daily fact of life. As a defender, its job is to make the attackers life harder and to deter them to go elsewhere. Any attack, almost inevitably causes some type of host artifact to be left behind.
If defenders are able to quickly uncover the presence of host artifacts, it may be possible to disrupt the attack, thereby causing pain to the attacker. Such artifacts are present on the target/host and usually not visible to network monitors.
Many modern attacks use malware that is dropped and executed on the target machine or hollows out existing valid processes to spawn child processes that can be hijacked.
A common tactic when introducing malware on a target is to blend in. If the legitimate process is called svchost.exe, then the malware may be called svhost.exe. Another tactic is to maintain the same name as the legitimate EXE but have it executed from a different path.
EventTracker 8 includes a new module called Advanced Security Analytics which provides tools to help automate the detection of such attacks. When any process is launched, EventTracker gathers various bits of information about the EXE including, its hash, its full path name, its parent process, the publisher name and if it’s digitally signed or not. Then at the EventTracker Console, if the hash is being seen for the first time, it gets compared to lists of known malware from sources such as virustotal.com, virusshare.com etc. Analysts can also look and see if the EXE was digitally signed by the publisher name and source to determine if further investigation is warranted.
When tuned properly, this capability results in few false positives and can be used to rapidly detect attackers.
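The first-seen hash check described above can be sketched in a few lines. The known-bad set here stands in for reputation feeds such as virustotal.com or virusshare.com; the sample entry (the SHA-256 of empty input) and the function names are illustrative assumptions.

```python
import hashlib

# Sketch of the "seen for the first time" hash check: hash each launched EXE,
# flag known-bad hashes immediately, and queue never-before-seen hashes for a
# reputation lookup. The known-bad entry below is the SHA-256 of empty input,
# used purely as a placeholder.

KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
seen_hashes = set()

def check_new_executable(image_bytes: bytes) -> str:
    """Classify an EXE image as 'known-bad', 'first-seen' or 'already-seen'."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "known-bad"
    if digest not in seen_hashes:
        seen_hashes.add(digest)
        return "first-seen"   # candidate for a reputation lookup
    return "already-seen"
```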
Want more information on EventTracker 8? Click here.
July 06, 2015
It’s clear that we are now working under the assumption of a breach. The challenge is to find the attacker before they cause damage.
Once attackers gain a beachhead within the organization, they pivot to other systems. The Verizon DBIR shows that compromised credentials figure in a whopping 76% of all network incursions.
However, the traditional IT security tools deployed at the perimeter, used to keep the bad guys out, are helpless in these cases. Today’s complex cyber security attacks require a different approach.
EventTracker 8 includes an advanced security analytic package which includes behavior rules to self-learn user location affinity heuristics and use this knowledge to pinpoint suspicious user activity.
In a nutshell, EventTracker learns typical user behavior for interactive login. Once a baseline of behavior is established, out of ordinary behavior is identified for investigation. This is done in real-time and across all enterprise assets.
For example, if user susan typically logs into wks5, but her stolen credentials are used to log into server6, this would be identified as out of the ordinary and tagged for closer inspection.
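A minimal sketch of this kind of location-affinity baselining, assuming a simple user-to-hosts mapping; real products weight frequency, time of day, and much more. The function names and the learning flag are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch of login "location affinity": learn which hosts each user
# normally logs into interactively, then flag logins outside that baseline.

baseline = defaultdict(set)   # user -> set of usual hosts

def observe_login(user: str, host: str, learning: bool = False) -> bool:
    """Record an interactive login; return True if it is out of the ordinary."""
    anomalous = not learning and host not in baseline[user]
    baseline[user].add(host)
    return anomalous

# Learning phase: susan habitually uses wks5.
observe_login("susan", "wks5", learning=True)

# Detection phase: susan's stolen credentials turn up on server6.
assert observe_login("susan", "wks5") is False      # expected host
assert observe_login("susan", "server6") is True    # tagged for inspection
```

Note that a production system would not silently fold a flagged login into the baseline, as this sketch does; an analyst would vet the anomaly first.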
EventTracker 8 has new features designed to support security analysts involved in Digital Forensics and Incident Response.
June 19, 2015
Compliance and security: it is a classic chicken-and-egg question, but the two are too often thought to be the same thing. Take it from Merriam-Webster:
Compliance: (1a) the act or process of complying to a desire, demand, proposal, or regimen or to coercion. (1b) conformity in fulfilling official requirements. (2) a disposition to yield to others.
Security: (1) the quality or state of being secure. (4a) something that secures : protection. (4b1) measures taken to guard against espionage or sabotage, crime, attack, or escape. (4b2) an organization or department whose task is security.
Clearly they are not the same. Compliance means you meet a technical or non-technical requirement and periodically someone verifies that you have met them.
Compliance requirements are established by standards bodies, which obviously do not know your network. They are established for the common good, because of industry-wide concern that information is not being protected, usually because security is poor. When you see an emphasis on compliance over security, it is too often because the organization does not want to take the time to ensure that its network and information are secure, so it relies on compliance requirements to feel better about its security.
The problem is that this gives a false sense of hope. It gives the impression that if you check this box, everything is going to be OK. Obviously this is far from true, as the breaches at Sony, Target, TJMaxx and so many others show. Although there are implementations of compliance that will make you more secure, you cannot base your company's security policy on a third party's compliance requirements.
So what comes first? Wrong question! Let's rephrase: there needs to be a healthy relationship between the two, but neither can substitute for the other.
June 10, 2015
Have you noticed the number of vendors that have jumped on the “Threat Intelligence” bandwagon recently?
Threat Intel is the hot commodity, with paid sources touting their coverage and timeliness while open sources tout the size of their lists. The FBI shares its info via InfraGard, while other ISACs are popping up across industry verticals, and many large companies compile internal data.
All good right? More is better, right? Actually, not quite.
Look closely. You are confusing “intelligence” with “data”.
As the Lt. Commander of the Starship Enterprise would tell you, Data is not Intelligence. In this case, intelligence is really problem solving. As defenders, we want this data in order to answer the question "Who is attacking our assets, and how?", which in turn leads to a coherent defense.
The steps to use Threat Data are easily explained:
1) Compare observations on the local network against the threat data.
2) Alert on matches.
Now comes the hard part…
3) Examine and validate the alert to decide if remediation is needed. This part is difficult to automate and is really the crux of converting threat data into threat intelligence. Doing it effectively requires human skills that combine expert knowledge of the modern ThreatScape with knowledge of the network architecture.
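Steps 1 and 2 above can be sketched as a simple set-membership test. The indicator values below are made up (TEST-NET addresses), and step 3, the human validation, is deliberately left out because it resists automation.

```python
# Steps 1 and 2 sketched: compare local observations against threat data and
# alert on matches. Indicator values are fabricated for illustration.

THREAT_DATA = {"203.0.113.7", "198.51.100.99"}   # e.g. reported C2 addresses

def match_observations(observed_ips):
    """Yield an alert for every local observation found in the threat data."""
    for ip in observed_ips:
        if ip in THREAT_DATA:
            yield {"indicator": ip, "action": "alert", "needs_validation": True}

local_traffic = ["192.0.2.10", "203.0.113.7", "192.0.2.11"]
alerts = list(match_observations(local_traffic))
# alerts now holds a single record, for 203.0.113.7, awaiting human review
```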
This last part is where most organizations come up hard against ground reality. The fact is that detailed knowledge of the internal network architecture is more common within an organization (more or less documented, but present in some fashion), than expert knowledge of the modern ThreatScape and of the contours and limitations of the threat data.
You could, of course, hire and dedicate staff to perform this function, but a) such staff are hard to come by and b) budget for them is even harder.
Consider a co-managed solution like SIEM Simplified where the expert knowledge of the modern ThreatScape in the context of your network is provided by an external group. When this is combined with your internal resources to co-manage the problem, it can result in improved coverage at an affordable price point.
June 03, 2015
Six ways to shoot yourself with SIEM technology:
1) Don’t plan; just jump in
2) Have no defined scope or use cases; whatever
3) Confuse SIEM with Log Management
4) Monitor noise; apply no filters
5) Don’t correlate with any other technologies, e.g. IDS, vulnerability scanner, Active Directory
6) Staff poorly or not at all
For grins, here’s how programmers shoot themselves in the foot:
Find a gun, it falls apart. Put it back together, it falls apart again. You try using the .GUN Framework, it falls apart. You stab yourself in the foot instead.
You try to shoot yourself in the foot, but find out that the gun is actually a howitzer cannon.
You accidentally create a dozen clones of yourself and shoot them all in the foot. Emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”
You’ve perfected a robust, rich user experience for shooting yourself in the foot. You then find that bullets are disabled on your gun.
SELECT @ammo:=bullet FROM gun WHERE trigger = 'PULLED';
INSERT INTO leg (foot) VALUES (@ammo);
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
Click here for the Top 6 Uses of SIEM.
May 20, 2015
Just after a new security vulnerability surfaced Wednesday, many tech outlets started comparing it with HeartBleed, the serious security glitch uncovered last year that rendered communications with many well-known web services insecure, potentially exposing millions of plain-text passwords.
But don’t panic. Though the recent vulnerability has a more terrifying name than HeartBleed, it is not going to cause as much danger as HeartBleed did.
Dubbed VENOM, standing for Virtualized Environment Neglected Operations Manipulation, it is a virtual machine security flaw uncovered by security firm CrowdStrike that could, in theory, expose most data centers to malware attacks.
Yes, the risk of the Venom vulnerability is theoretical, as no real-world exploitation has been seen yet; last year's HeartBleed bug, on the other hand, was exploited by hackers an unknown number of times, leading to the theft of critical personal information.
Now let’s learn more about Venom:
Venom (CVE-2015-3456) resides in the virtual floppy drive code used by a number of computer virtualization platforms and, if exploited…
…could allow an attacker to escape from a guest ‘virtual machine’ (VM) and gain full control of the operating system hosting them, as well as any other guest VMs running on the same host machine.
According to CrowdStrike, this roughly decade-old bug was discovered in the open-source virtualization package QEMU, affecting its Virtual Floppy Disk Controller (FDC) that is being used in many modern virtualization platforms and appliances, including Xen, KVM, Oracle’s VirtualBox, and the native QEMU client.
Jason Geffner, a senior security researcher at CrowdStrike who discovered the flaw, warned that the vulnerability affects all versions of QEMU dating back to 2004, when the virtual floppy controller was first introduced.
However, Geffner also added that, so far, there is no known working exploit for the vulnerability. Even so, Venom is critical and disturbing enough to be considered a high-priority bug.
Successful exploitation of Venom requires:
For successful exploitation, an attacker on the guest virtual machine would need sufficient permissions to access the floppy disk controller I/O ports.
On a Linux guest machine, that means either root access or elevated privileges. On a Windows guest, however, practically anyone would have sufficient permissions to access the FDC.
That said, comparing Venom with HeartBleed is really no comparison at all. Where HeartBleed allowed hackers to probe millions of systems, the Venom bug simply is not exploitable at the same scale.
Flaws like Venom are typically used in highly targeted attacks, such as corporate espionage or cyber warfare.
Did Venom poison cloud services?
Potentially more concerning, most of the large cloud providers, including Amazon, Oracle, Citrix, and Rackspace, rely heavily on QEMU-based virtualization and are therefore vulnerable to Venom.
However, the good news is that most of them have resolved the issue, assuring that their customers needn’t worry.
“There is no risk to AWS customer data or instances,” Amazon Web Services said in a statement.
Rackspace also said the flaw does affect a portion of its Cloud Servers, but assured its customers that it has “applied the appropriate patch to our infrastructure and are working with customers to remediate fully this vulnerability.”
Microsoft's Azure cloud service, on the other hand, uses its own home-grown virtualization hypervisor technology, and its customers are therefore not affected by the Venom bug.
Meanwhile, Google also assured that its Cloud Service Platform does not use the vulnerable software, thus was never vulnerable to Venom.
Patch now! Protect yourself
Both Xen and QEMU have rolled out patches for Venom. If you’re running an earlier version of Xen or QEMU, upgrade and apply the patch.
Note: All versions of Red Hat Enterprise Linux that include QEMU are vulnerable to Venom. Red Hat recommends its users update their systems using the commands “yum update” or “yum update qemu-kvm.”
Once done, you must power off all your guest virtual machines for the update to take effect, and then restart them. Remember, merely restarting the guest operating system without powering it off is not enough, because it would still be running the old QEMU binary.
See more at Hacker News.
May 13, 2015
Q. What is worse than the attacks at Target, Home Depot, Michael’s, Dairy Queen, Sony, etc?
A. A disgruntled insider (think Edward Snowden)
A data breach has serious consequences both directly and indirectly. Lost revenue and a tarnished brand reputation both inflict harm long after incident resolution and post breach clean-up. Still, many organizations don’t take necessary steps to protect themselves from a potentially detrimental breach.
But, the refrain goes, “We don’t have the budget or the manpower or the buy in from senior management. We’re doing the best we can.”
How about going for some quick wins?
Quick wins provide solid risk reduction without major procedural, architectural or technical changes to an environment. Quick wins also provide such substantial and immediate risk reduction against very common attacks that most security-aware organizations prioritize these key controls.
1) Control the use of Administrator privilege
The misuse of administrative privileges is a primary method by which attackers spread inside a target enterprise. Two very common attacker techniques take advantage of uncontrolled administrative privileges. For example, a workstation user running with administrative privileges is fooled simply by surfing to a website hosting attacker content that can automatically exploit browsers. The file or exploit contains executable code that runs on the victim's machine. Since the victim's account has administrative privileges, the attacker can take over the machine completely and install malware to find administrative passwords and other sensitive data.
2) Limit access to documents to employees based on the need to know
It’s important to limit permissions so employees only have access to the data necessary to perform their jobs. Steps should also be taken to ensure users with access to sensitive or confidential data are trained to recognize which files require more strict protection.
3) Evaluate your security tools – can they detect insider theft?
Whether it’s intentional or inadvertent, would you even know if someone inside your network compromised or leaked sensitive data?
4) Assess security skills of employees, provide training
The actions of people play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of the business. Attackers are very conscious of this and use it to plan their exploits: carefully crafting phishing messages that look like routine, expected traffic to an unwary user; exploiting the gaps or seams between policy and technology; working within the time window of patching or log review; using nominally non-security-critical systems as jump points or bots…
5) Have an incident response plan
How prepared is your information technology (IT) department or administrator to handle security incidents? Many organizations learn how to respond to security incidents only after suffering attacks. By this time, incidents often become much more costly than needed. Proper incident response should be an integral part of your overall security policy and risk mitigation strategy.
A guiding principle of IT Security is “Prevention is ideal but detection is a must.”
Have you reduced your exposure?
May 06, 2015
There is a fundamental tradeoff between security, usability, and cost. Yes, it is possible to have both security and usability, but at a cost in money, time and personnel. Making something cost-efficient and usable, or even secure and cost-efficient, may not be very hard; it is far more difficult and time-consuming to make something both secure and usable, because security takes planning and resources.
For a system administrator, usability is at the top of the list. For a security administrator, security is at the top of the list; no surprise here, really.
What if I told you that the two job roles are orthogonal? What gets a sysadmin bouquets will get a security admin brickbats, and vice versa.
Oh and when we say “cheap” we mean in terms of effort – either by the vendor or by the user.
Security administrators face some interesting tradeoffs. Fundamentally, the choice is between a system that is secure and usable, one that is secure and cheap, or one that is cheap and usable. Unfortunately, we cannot have everything. The best practice is not to make the same person responsible for both security and system administration; the goals of the two tasks are far too often in conflict for that to be a position anyone can succeed in.
April 22, 2015
Myth 1: Hardening a system makes it secure
Security is a process, to be evaluated on a constant basis. Nothing will put you into a permanent “state of security.” Did you really think that simply applying some hardening guide to a system would make it secure?
Most threats exploit unpatched vulnerabilities, and few hardening settings can prevent your network from being attacked through an unpatched vulnerability.
Myth 2: If We Hide It, the Bad Guys Won’t Find It
Also known as security by obscurity, hiding the system doesn’t really help. Take, for instance, turning off SSID broadcast in wireless networks: not only will you now have a network that is non-compliant with the standard, but your clients will also prefer a rogue network with the same name over the legitimate one. Oh, and with the proper tools it takes only a few minutes to find the network anyway. Another example is changing the banners on your Web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the Web site is running anyway. Second, most of the bad guys are not smart enough to check, so they just try all the exploits, including the IIS ones. Yet another example is renaming the Administrator account; it is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account. They then have an account called “Janitor3” with a comment of “Built-in account for administering the computer/domain.” This is not really likely to fool anyone.
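To see why renaming is futile: the built-in Administrator account always keeps relative identifier (RID) 500 no matter what it is called, so once account SIDs are enumerated (those couple of API calls), the real account falls out immediately. The sketch below assumes a name-to-SID mapping as input; the SID values are fabricated for illustration.

```python
# The built-in Administrator account keeps the well-known RID 500 regardless
# of its display name. Given the name-to-SID mapping that account enumeration
# returns, the real Administrator is trivially identified.

def find_builtin_admin(accounts):
    """Return the account name whose SID ends in the well-known RID 500."""
    for name, sid in accounts.items():
        if sid.rsplit("-", 1)[-1] == "500":
            return name
    return None

accounts = {
    "Janitor3": "S-1-5-21-1004336348-1177238915-682003330-500",
    "alice":    "S-1-5-21-1004336348-1177238915-682003330-1104",
}
assert find_builtin_admin(accounts) == "Janitor3"
```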
Myth 3: “High Security” Is an End-Goal for All Environments
High security, in the sense of the most restrictive security possible, is not for everyone. In some environments you are willing to break things in the name of protection that you are not willing to break in others.
Some systems are subjected to incredibly serious threats. If these systems get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems contain far less sensitive information and thus need not be subjected to the same level of security. The protective measures that are used on the former are entirely inappropriate for the latter; yet we keep hearing that “high security” is some sort of end-goal toward which all environments should strive.
Safeguards should be applied in proportion to risk.
April 16, 2015
Is it possible to avoid security breaches? Judging from recent headlines, probably not. Victims range from startups like Kreditech, to major retailers like Target,to the US State Department and even the White House. Regardless of the security measures you have in place, it is prudent to assume you will suffer a breach at some point. Be sure to have a response plan in place — just in case.
April 15, 2015
In the next couple months, Congress will likely pass CISA, the Cybersecurity Information Sharing Act. The purpose is to “codify mechanisms for enabling cybersecurity information sharing between private and government entities, as well as among private entities, to better protect information systems and more effectively respond to cybersecurity incidents.”
Can it help? It’s interesting to note two totally opposing views.
Arguing that it will help is Richard Bejtlich of Brookings. His analogy: threat intelligence is in some ways like a set of qualified sales leads provided to two companies. The first has a motivated sales team, polished customer acquisition and onboarding processes, authority to deliver goods and services, and quality customer support. The second has a small sales team, or perhaps no formal sales team; its processes are broken, and it lacks the authority to deliver goods or services. Now consider what happens when each business receives a bundle of qualified sales leads, and ask which will make the most effective use of its list of profitable, interested buyers. The answer is obvious, and there are parallels to the information security world.
Arguing that it won’t help at all is Robert Graham, the creator of BlackICE Guard. His argument is “CISA does not work. Private industry already has exactly the information sharing the bill proposes, and it doesn’t prevent cyber-attacks as CISA claims. On the other side, because of the false-positive problem, CISA does far more to invade privacy than even privacy advocates realize, doing a form of mass surveillance.”
In our view, Threat Intel is a new tool. Its usefulness depends on the artisan wielding it; a poorly skilled user will get less value.
Want experts on your team but don’t know where to start? Try our managed service SIEM Simplified. Start quick and leverage your data!
April 08, 2015
On Jan 13, 2015, the U.S. White House published a set of legislative proposals on cyber security threat intelligence (TI) sharing between private and public entities. Given the breadth of cyber attacks across the Internet, the sharing of information between commercial entities and the US Government is increasingly critical. Absent sharing, first responders charged with cyber defense are at a disadvantage in detecting and responding to common attacks.
Should this cause a privacy concern?
Richard Bejtlich, senior fellow at Brookings says “Threat intelligence does not contain personal information of American citizens, and privacy can be maintained while learning about threats. Intelligence should be published in an automated, machine-consumable, standardized manner.”
The White House proposal uses the following definition:
“The term `cyber threat indicator’ means information —
(A) that is necessary to indicate, describe or identify–
(i) malicious reconnaissance, including communications that reasonably appear to be transmitted for the purpose of gathering technical information related to a cyber threat;
(ii) a method of defeating a technical or operational control;
(iii) a technical vulnerability;
(iv) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system inadvertently to enable the defeat of a technical control or an operational control;
(v) malicious cyber command and control;
(vi) any combination of (i)-(v).
(B) from which reasonable efforts have been made to remove information that can be used to identify specific persons reasonably believed to be unrelated to the cyber threat.”
If you take the above at face value, then a reasonable assumption is that these sorts of cyber threat indicators should not trigger privacy concerns, whether shared between the private sector and the government or within the private sector.
Of course, getting TI and using it effectively are completely different things, as discussed here. Bejtlich reminds us that “private sector organizations should focus first on improving their own defenses before expecting that government assistance will mitigate their security problems.”
Looking for a practical, cost-effective way to shore up your defenses? SIEM Simplified is one way to go about it.
April 02, 2015
You may recall that back in 2012, then Secretary of Defense Leon Panetta warned of “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.”
This hasn’t quite come to pass has it? Is it dumb luck? Or are we just waiting for it to happen?
In his annual testimony about the intelligence community’s assessment of “global threats,” Director of National Intelligence James Clapper sounded a more nuanced and less hyperbolic tone. “Rather than a ‘cyber Armageddon’ scenario that debilitates the entire U.S. infrastructure, we envision something different,” he said, “We foresee an ongoing series of low-to-moderate level cyber attacks from a variety of sources over time, which will impose cumulative costs on U.S. economic competitiveness and national security.”
The reality is that the U.S. is being bombarded by cyber attacks of a smaller scale every day—and those campaigns are taking a toll.
Now the DNI also went on to say “Although cyber operators can infiltrate or disrupt targeted [unclassified] networks, most can no longer assume that their activities will remain undetected, nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.”
Alan Paller of the SANS Institute says “Those words translate directly to a simpler statement: ‘The weapons and other systems we operate today cannot be protected from cyber attack.’ Instead, as a nation, we have to put in place the people and support systems who can find the intruders and excise them fast.”
So what capabilities do you have in this area, given that the attacks against your infrastructure are continuous and ongoing?
Want to do something about it quickly and effectively? Consider SIEM Simplified our service offering that can take the heavy lift required to implement such monitoring programs off your hands.
March 25, 2015
A new and harmful Point-of-Sale (“POS”) malware has been identified by security researchers at Cisco’s Security Intelligence & Research Group. The team says it is more sophisticated and damaging than previous POS malware programs.
Nicknamed PoSeidon, the new malware family targets POS systems, infects machines and scrapes memory for credit card information, which it then exfiltrates to servers (primarily on .ru TLDs) for harvesting or resale.
When consumers use their credit or debit cards to pay for purchases from a retailer, they swipe their card through POS systems. Information stored on the magnetic stripe on the back of those cards is read and retained by the POS. If the information on that stripe is stolen, it can be used to encode the magnetic strip of a fake card, which is then used to make fraudulent purchases. POS malware and card fraud has been steadily rising, affecting large and small retailers. Target, one of the most visible victims of security breach involving access to its payment card data, incurred losses approximated at $162 million (before insurance recompense).
PoSeidon employs a technique called memory scraping, in which the RAM of infected terminals is scanned for unencrypted strings that match credit card information. When PoSeidon takes over a terminal, a loader binary is installed so the malware can remain on the target machine even across system reboots. The loader then contacts a command and control server and retrieves a URL pointing to another binary, FindStr, which it downloads and executes. FindStr scans the memory of the POS device for strings (hence its name) and installs a key logger that looks for number strings and keystrokes matching payment card numbers: sequences that begin with the digits generally used by Discover, Visa, MasterCard and American Express cards (6, 5, 4, and 3 respectively), with 16 digits for the first three and 15 digits for American Express. This data is then encoded and sent to an exfiltration server.
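The number matching described above, issuer prefix plus length, can be sketched as follows; adding a Luhn checksum (a common refinement in such scrapers) weeds out random digit runs. Defenders can reuse the same logic to test whether card data sits unencrypted in their own memory dumps. The regex and function names are illustrative assumptions, and the sample value is a standard test number, not a real card.

```python
import re

# Sketch of PoSeidon-style number matching: digit runs with the right issuer
# prefix and length, filtered by the Luhn checksum.

CARD_RE = re.compile(r"\b(?:3\d{14}|[456]\d{15})\b")  # Amex 15; Visa/MC/Discover 16

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_card_numbers(memory: str):
    """Return candidate card numbers found in a string of scanned memory."""
    return [m for m in CARD_RE.findall(memory) if luhn_ok(m)]

hits = find_card_numbers("track1 4111111111111111 junk 4111111111111112")
# only the Luhn-valid candidate survives the filter
```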
A whitepaper for detecting and protecting from PoSeidon malware infection is also available from EventTracker.
Tired of keeping up with the ever changing Threatscape? Consider SIEM Simplified. Let our managed SIEM solution do the heavy lifting.
March 16, 2015
Sometimes we get hung up on event monitoring and forget about the “I” in SIEM, which stands for information. This matters because there are more sources of non-event security information than ever before that your SIEM should be ingesting and correlating with security events. There are at least four categories of security information that you can leverage in your SIEM to provide better analysis of security events.
March 11, 2015
Want to be acquired? Get your cyber security in order!
Washington Business Journal Senior Staff Reporter, Jill Aitoro hosted a panel of cyber experts Feb. 26 at Crystal Tech Fund in Arlington, VA.
The panel noted that how well a company has locked down their systems and data will have a direct effect on how much a potential buyer is willing to shell out for an acquisition — or whether a buyer will even bite in the first place.
Howard Schmidt, formerly CISO at Microsoft, recalled: “We did an acquisition one time, about $10 million. It brought tons of servers, a big IT infrastructure; when all was said and done, it cost more than $20 million to rebuild the systems that had been owned by criminals and hackers for at least two years. That’s a piece of M&A you need to consider.”
Many private investors are doing exactly that, calling in security companies to assess a target company’s cyber security posture before making an offer. In some cases, the result will be to not invest at all, with the venture capitalist telling a company to “get their act together and then call back”.
March 04, 2015
Looking for a SIEM fighter to clean up Dodge? Click here!
February 18, 2015
Bad actors and actions are more and more prevalent on the Internet. Who are they? What are they up to? Are they prowling in your network?
The first two questions are answered by Threat Intelligence (TI); the answer to the last can be provided by a SIEM that integrates TI into its functionality.
But wait, don’t buy just yet, there’s more, much more!
Threat Intelligence when fused with SIEM can:
• Validate correlation rules and improve baselining of alerts by upping the priority of rules that also point at TI-reported “bad” sources
• Detect owned boxes, bots, etc. that call home when on your network
• Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
• Match past, historical log data against current TI data
• Review past TI history as key context for reviewed events, alerts, incidents, etc.
• Enable automatic action due to better context available from high-quality TI feeds
• Run TI effectiveness reports in a SIEM (how much TI leads to useful alerts and incidents?)
• Validate web server log source IPs to profile visitors and reduce service to those appearing on bad lists (uncommon)
and the beat goes on…
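As one concrete illustration of the list above, the historical-matching bullet, sweeping stored logs when a new indicator arrives, might look like this sketch. The log records and indicator values are made up.

```python
# Sketch of retroactive TI matching: when a new indicator arrives, sweep
# stored log records for hits that predate the feed entry.

historical_logs = [
    {"ts": "2015-01-12T09:14:00", "src_ip": "198.51.100.23", "event": "outbound"},
    {"ts": "2015-01-19T22:41:00", "src_ip": "203.0.113.50",  "event": "outbound"},
]

def retro_match(logs, new_indicators):
    """Return past log records matching newly arrived TI indicators."""
    bad = set(new_indicators)
    return [rec for rec in logs if rec["src_ip"] in bad]

hits = retro_match(historical_logs, ["203.0.113.50"])
# the January 19 connection is surfaced even though the indicator is new
```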
Want the benefits of SIEM without the heavy lifting involved? SIEM Simplified may be for you.
February 11, 2015
Did you wrestle your big-name SIEM vendor into throwing in their “enterprise class” solution at a huge discount as part of the last negotiation? If so, good for you; you should be pleased with yourself for wrangling something so valuable from them. 90% discounts are not unheard of, by the way.
But do you know why they caved and included it? It’s because there is very high probability that you really won’t ever obtain any significant value from it.
You see, the “enterprise class” SIEM solutions from the top-name vendors all require significant trained staff just to get up and running, never mind tuning and delivering any real value. The vendor figured you probably don’t have the staff or the time to do any of that, so they can simply give the product away at that huge discount. It adds some value to their invoice, prevents any other vendor from horning in on their turf, and makes you happy. What’s not to like?
The problem, of course, is that you are not any closer to solving any of the problems that a SIEM can address. Is that OK with you? If so, why even bother to pay that 10%?
From a recent webinar on the topic by Gartner Analyst Anton Chuvakin:
Q: For a mid-size company what percent of time would a typical SIEM analyst spend in monitoring / management of the tool – outstanding incident management?
A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.
So is your SIEM gathering logs? Or gathering dust?
If the latter, give us a call! Our SIEM Simplified service can take the sting out of the bite.
February 04, 2015
Recent terrorist attacks in France have shaken governments in Europe. The difficulty of defending against insider attacks is once again front and center. How should we respond? The UK government seems to feel that greater mass surveillance is a proper response. The Communications Data Bill proposed by Prime Minister Cameron would compel telecom companies to keep records of all Internet, email, and cellphone activity. He also wants to ban encrypted communications services.
This approach would add even more massive data sets for analysis by computer programs than currently thought to be analyzed by NSA/GCHQ, in hopes that algorithms would be able to pinpoint the bad guys. Of course France has blanket surveillance but that did not prevent the Charlie Hebdo attack.
In the SIEM universe, the equivalent would be to gather every log from every source in hopes that attacks could be predicted and prevented. In practice, accepting data like this into a SIEM solution reduces it to a quivering mess of barely functioning components. In fact the opposite approach, “output driven SIEM,” is favored by experienced implementers.
Ray Corrigan writing Mass Surveillance Will Not Stop Terrorism in the New Scientist notes “Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle-in-a-haystack problem. You don’t make it easier by throwing more needleless hay on the stack.”
January 28, 2015
Threat Intelligence (TI) is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response to that menace or hazard. The challenge is that leading indicators of risk to an organization are difficult to identify when the organization’s adversaries, including their thoughts, capabilities and actions, are unknown. Therefore “black lists” of various types, which list top attackers, spammers, poisoned URLs, malware domains, etc., have become popular. These lists are either community maintained and free (e.g., SANS DShield), paid for by your tax dollars (e.g., InfraGard), or paid services.
EventTracker 7.6 introduced formal support to automatically import and use such lists. We are often asked the question, which list(s) to use. Is it worth it to pay for TI service? Here is our thinking on the subject:
– External v/s Internal
In most cases, we find “white lists” to be much smaller, more effective and easier to tune/maintain than any “black list”. EventTracker supports the generation of such White lists from internal sources (the Change Audit feature) or the list of known good IP ranges (internal range, your Amazon EC2 or Azure instances, your O365 instances etc). Using the NOTIN match option of the Behavior module gives you a small list of suspicious activities (grey list) which can be quickly sorted to either black or white for future processing. As a first step, this is a quick/inexpensive/effective solution.
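The NOTIN idea described above can be sketched in a few lines of general-purpose code as well. Here is a minimal, hypothetical Python illustration (the ranges and addresses are invented, and this is not EventTracker’s implementation) of reducing raw source IPs to a small grey list for triage:

```python
import ipaddress

# Hypothetical known-good ranges: internal subnets, a cloud allocation, etc.
WHITELIST = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8",       # internal range
    "192.168.0.0/16",   # internal range
    "52.0.0.0/11",      # e.g., an Amazon EC2 block you use
)]

def is_whitelisted(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WHITELIST)

def grey_list(event_ips):
    """Return source IPs NOT IN the whitelist: the small set worth triaging."""
    return sorted({ip for ip in event_ips if not is_whitelisted(ip)})

events = ["10.1.2.3", "203.0.113.77", "192.168.5.9", "198.51.100.4"]
print(grey_list(events))  # ['198.51.100.4', '203.0.113.77']
```

Each grey-list entry can then be quickly sorted to either black or white for future processing.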
– Paid v/s Free
Free services include well regarded sources such as shadowservers.org, abuse.ch, dshield.org, FBI InfraGard, US CERT and EventTracker ThreatCenter (a curated list of low volume, high confidence sources formatted for quick import into EventTracker). Many customers in industry verticals (e.g., electric power) have lists circulated within their community.
If you are thinking of paid services, then ask yourself:
– Will the feed allow me to detect threats faster? (e.g., a feed of top attackers updated in real-time v/s once in 6/12 hours). If faster is your motivation, are you able to respond to the threat detection faster? If the threat is detected at 8PM on a Friday, when will you be able to properly respond (not just acknowledge)?
– Will the feed allow me to detect threats better? i.e., you would have missed this threat if it had not been for that paid feed. At this time, many paid services for tactical TI are aggregating, cleaning and de-duplicating free sources and/or offering analysis that is also available in the public domain (e.g. McAfee and Kaspersky analysis of Dark Seoul, the malware that created havoc at Sony Pictures is available from US CERT).
Bottom line, Threat Intelligence is an excellent extension to SIEM solutions. The order of implementation should be internal/whitelist first, external free lists next and finally paid services to cover any remaining gaps.
Looking for 80% coverage at 20% cost? Let us do the detection with SIEM Simplified so you can remain focused on remediation.
January 22, 2015
Log monitoring is difficult for many reasons. For one thing there are not many events that unquestionably indicate an intrusion or malicious activity. If it were that easy the system would just prevent the attack in the first place. One way to improve log monitoring is to implement naming conventions that embed information about objects like user accounts, groups and computers, such as type or sensitivity. This makes it easy for relatively simple log analysis rules to recognize important objects or improper combinations of information that would be impossible otherwise.
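As a sketch of the idea, suppose (hypothetically) that account names carry prefixes such as svc- for service accounts and adm- for admins. A simple rule can then spot improper combinations that would otherwise be invisible to a log analyzer:

```python
import re

# Hypothetical naming convention: the prefix embeds the account type.
#   adm-*  privileged admin accounts
#   svc-*  service accounts (should never log on interactively)
ADMIN = re.compile(r"^adm-")
SERVICE = re.compile(r"^svc-")

def review(event):
    """Flag logon events whose significance the naming convention reveals."""
    user, logon_type = event["user"], event["logon_type"]
    if SERVICE.match(user) and logon_type == "interactive":
        return f"ALERT: service account {user} logged on interactively"
    if ADMIN.match(user) and logon_type == "remote":
        return f"NOTICE: admin account {user} used remotely"
    return None

print(review({"user": "svc-backup", "logon_type": "interactive"}))
```

The rule needs no knowledge of individual accounts; the convention itself carries the sensitivity information.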
January 07, 2015
In the last few weeks of 2014, in the aftermath of the Sony hack, the attacks at many retailers and the incessant news on Shellshock, POODLE and many other vulnerabilities, many managers are considering 2015 budgets, and the eternal question “how much to invest in IT security” is a common one.
It sometimes seems that there is no limit and that the more you spend, the lower your risk. But the Gordon-Loeb model says that is in fact not the case.
As pointed out by the RH Smith College at the University of Maryland:
The security of information is a fundamental concern to organizations operating in the modern digital economy. There are technical, behavioral, and organizational aspects related to this concern. There are also economic aspects of information security. One important economic aspect of information security (including cybersecurity) revolves around deriving the right amount an organization should invest in protecting information. Organizations also need to determine the most appropriate way to allocate such an investment. Both of these aspects of information security are addressed by Drs. Lawrence A. Gordon and Martin P. Loeb – See more here.
The focus of the Gordon-Loeb Model is to present an economic framework that characterizes the optimal level of investment to protect a given set of information. The model shows that the amount a firm should spend to protect information should generally be only a small fraction of the expected loss. More specifically, it shows that it is generally uneconomical to invest in information security activities (including cybersecurity related activities) more than 37 percent of the expected loss that would occur from a security breach. For a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities that are directed at improving the security of information sets with a medium level of vulnerability.
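The 37 percent figure is simply 1/e (about 36.79%) from the model’s optimization. A quick back-of-the-envelope calculation, with made-up numbers, looks like this:

```python
import math

def max_rational_investment(potential_loss, vulnerability):
    """Gordon-Loeb upper bound: invest no more than 1/e (~36.79%)
    of the expected loss (vulnerability x potential loss)."""
    expected_loss = vulnerability * potential_loss
    return expected_loss / math.e

# Illustrative numbers only: $5M potential loss, 60% vulnerability.
print(round(max_rational_investment(5_000_000, 0.6)))  # 1103638
```

So even with $5M at stake, the model caps rational security spending at roughly $1.1M, not $5M.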
Want the most for your 37% of expected loss? Consider SIEM Simplified.
December 23, 2014
Solution Providers for Retail
Guest blog by A.N. Ananth
Cybercrime and stealing credit cards has been a hot topic all year. From the Target breach to Sony, the classic motivation for cybercriminals is profit. So how much is a stolen credit card worth?
The reason it is important to know the answer to this question is that it is the central motivation behind the criminal. If you could make it more expensive for a criminal to steal a card than what the thief would gain by selling them, then the attackers would find an easier target. That is what being a hard target is all about.
This article suggests prices of $35-$45 for a stolen credit card depending upon whether it is a platinum or corporate card. It is also worth noting that the viable lifetime of a stolen card is at most one billing cycle. After this time, the rightful owner will most likely detect its loss or the bank fraud monitor will pick up irregularities and terminate the account.
Why is a credit card with a high spending limit (say $10K) worth only $35? It is because monetizing a stolen credit card is difficult and requires a lot of expensive effort on the part of the criminal. That is contrary to the popular press, which suggests that cybercrime results in easy billions. At the Workshop on the Economics of Information Security, Herley and Florencio showed in their presentation, “Sex, Lies and Cybercrime Surveys,” that widely circulated estimates of cybercrime losses are wrong by orders of magnitude. For example:
Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion. …Cyber-crime losses follow very concentrated distributions where a representative sample of the population does not necessarily give an accurate estimate of the mean. They are self-reported numbers which have no robustness to any embellishment or exaggeration. They are surveys of rare phenomena where the signal is overwhelmed by the noise of misinformation. In short they produce estimates that cannot be relied upon.
That’s a rational, fact based explanation as to why the most basic of information security is unusually effective in most cases. Pundits have been screaming this from the rooftops for a long time. What are your thoughts?
Read more at Solution Provider for Retail guest blog.
December 17, 2014
In computer terminology, a honeypot is a computer system set up to detect, deflect, or, in some manner, counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network, but is actually isolated and monitored, and seems to contain information or a resource of value to attackers.
Lance Spitzner covers this topic from his (admittedly) non-legal perspective.
Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have this misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.”
Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot but the exemption under Service Provider Protection is key. What this exemption means is that security technologies can collect information on people (and attackers), as long as that technology is being used to protect or secure your environment. In other words, these technologies are now exempt from privacy restrictions. For example, an IDS sensor that is used for detection and captures network activity is doing so to detect (and thus enable organizations to respond to) unauthorized activity. Such a technology is most likely not considered a violation of privacy as the technology is being used to help protect the organization, so it falls under the exemption of Service Provider Protection. Honeypots that are used to protect an organization would fall under this exemption.
Does it expose us to liability?
Liability is not a criminal issue, but civil. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those may sue. The argument being that if you had taken proper precautions to keep your systems secure, the attacker would not have been able to harm my systems, so you share the fault for any damage occurred to me during the attack. The issue of liability is one of risk. First, anytime you deploy a security technology (even one without an IP stack), that technology comes with risk. For example, there have been numerous vulnerabilities discovered in firewalls, IDS systems, and network sniffers. Honeypots are no different.
Obviously this blog entry is not legal advice and should not be construed as such.
December 10, 2014
Effective security log monitoring is a very technical challenge that requires a lot of arcane knowledge and it is easy to get lost in the details. Over the years, there are 4 things that stand out to me as fundamentals when it comes to keeping the big picture and meeting the challenge:
November 19, 2014
2014 has seen a rash of high profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target, Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford among others. Hackers were able to steal hundreds of millions of credit and debit cards; from the information disclosed, this accounted for 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels, 56 million from Home Depot.
The Identity Theft Resource Center (ITRC) reports that to date in 2014, 644 security breaches have occurred, an increase of 25.3 percent over last year. By far the majority of breaches targeted payment card data along with personal information like social security numbers and email addresses, and personal health information, and it estimates that over 78 million records were exposed.
Malware installed using third-party credentials was found to be among the primary causes of the breaches in post-security analysis. Banks and financial institutions are critically dependent on their IT infrastructure and are also constantly exposed to attacks because of Sutton’s Law. Networks are empowering because they allow us to interact with other employees, customers and vendors. However, it is often the case that industry partners have a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnection is a favorite tactic used by attackers. After all, a frontal brute-force attack on a well-defended large corporation’s doors is unlikely to be successful.
The Weak Link
The attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are also proud of their large customer and keen to highlight their connection. Likewise, companies often provide a surprising amount of information meant for vendors on public sites for which logins are not necessary. This makes the first step of researching the target and their industry interconnections easier for the attacker.
The next step is to compromise the subcontractor network and find employee data. Social networking sites like LinkedIn are a boon to attackers and are used to create lists of IT admin and management staff who are likely to be privileged users. In West Virginia, state agencies were victims when malware infected computers of users whose email addresses ended with @wv.gov. The next step is to gain access to the contractor’s privileged users’ workstations, and from there, to breach the final target. In one retailer breach, the network credentials given to a heating, air conditioning and refrigeration contractor were stolen after hackers mounted a phishing attack and were able to successfully lodge malware in the contractor’s systems, two months before they attacked the retailer, their ultimate target.
Good Practices, Good Security
Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors and clients, or anyone who has access to their networks. A Fortune 500 company has access to resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?
Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices a part of due diligence. Before network access can be granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications and its control of those can help the enterprise develop an effective vendor management profile. Furthermore, it provides the enterprise with visibility into information that will be shared and who has access to that information.
Controlled Access: Third party access should be restricted and compartmentalized only to a segment of the network, and prevented access to other assets. Likewise, the organization can require that vendors and third parties use particular technologies for remote access, which enables the enterprise to catalog which connections are being made to the network.
Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behaviors which can presage the beginning of a breach.
Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.
Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.
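As an illustration of the active monitoring point above, a baseline of “typical” vendor connection volume can be as simple as a mean and standard deviation per vendor. The vendor name and counts below are invented, and real SIEM baselining is considerably richer than this sketch:

```python
from statistics import mean, stdev

def build_baseline(daily_counts):
    """Per-vendor baseline of 'typical' daily remote-connection counts."""
    return {v: (mean(c), stdev(c)) for v, c in daily_counts.items()}

def is_unusual(baseline, vendor, todays_count, sigmas=3):
    """Flag a count far above the vendor's historical mean."""
    m, s = baseline[vendor]
    return todays_count > m + sigmas * s

history = {"hvac-vendor": [4, 5, 3, 6, 4, 5, 4]}  # invented history
baseline = build_baseline(history)
print(is_unusual(baseline, "hvac-vendor", 40))  # True
```

A sudden jump from a handful of connections per day to forty is exactly the kind of deviation that should trigger a review of that vendor’s access.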
November 10, 2014
I’ve always tried to raise awareness about the importance of workstation security logs. Workstation endpoints are a crucial component of security and the first target of today’s bad guys. Look at news reports and you’ll find that APT attacks and outsider data thefts always begin with the workstation endpoint. So unless you want to ignore your first opportunity to detect and disrupt such attacks you need to be monitoring them.
October 29, 2014
If you manage any Linux machines, it is essential that you know where the log files are located, and what is contained in them. Such files are usually in /var/log. Logging is controlled by the syslog daemon’s configuration file (e.g., /etc/rsyslog.conf or /etc/syslog.conf, depending on the distribution).
Some log files are distribution specific and this directory can also contain applications such as samba, apache, lighttpd, mail etc.
From a security perspective, here are 5 groups of files which are essential. Many other files are generated and will be important for system administration and troubleshooting.
1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. There are several things that are logged in /var/log/messages including mail, cron, daemon, kern, auth, etc.
2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used.
b) /var/log/lastlog – Displays the recent login information for all users. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – This file contains information about failed login attempts. Use the last command to view the btmp file. For example, “last -f /var/log/btmp | more”
d) /var/log/wtmp or /var/log/utmp – Contains login records. Using wtmp you can find out who is logged into the system. The who command uses this file to display the information.
e) /var/log/faillog – Contains user failed login attempts. Use the faillog command to display the contents of this file.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.
3. Package install/uninstall
a) /var/log/dpkg.log – Contains information that are logged when a package is installed or removed using dpkg command
b) /var/log/yum.log – Contains information that are logged when a package is installed using yum
4. System services
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer and printing related log messages
c) /var/log/cron – Whenever the cron daemon (or anacron) starts a cron job, it logs the information about the cron job in this file
5. Mail and X Window System
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server that is running on the system. For example, sendmail logs information about all the sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System
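As a small illustration of putting these files to work, here is a hypothetical Python sketch that tallies failed sshd logins from /var/log/auth.log-style lines. The sample lines are fabricated, and real sshd message formats vary somewhat by version:

```python
import re
from collections import Counter

# sshd lines in /var/log/auth.log (or /var/log/secure) look roughly like:
#   ... sshd[991]: Failed password for invalid user admin from 203.0.113.9 ...
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Count failed sshd logins per (user, source IP)."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.groups()] += 1
    return hits

sample = [  # fabricated sample lines
    "Jan  7 03:12:01 host sshd[991]: Failed password for root from 198.51.100.7 port 42100 ssh2",
    "Jan  7 03:12:05 host sshd[991]: Failed password for root from 198.51.100.7 port 42102 ssh2",
]
print(failed_logins(sample))  # Counter({('root', '198.51.100.7'): 2})
```

Repeated failures for root from a single source IP, as here, are a classic brute-force signature worth alerting on.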
October 22, 2014
This post Seven Habits of Highly Fraudulent Users from Izzy at SiftScience describes patterns culled from 6 million transactions over a three month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; “normal” samples consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.
These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.
Habit #1 Fraudsters go hungry
Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-ordinary behavior, the absence of any dip during the day might speak to a script which never tires.
Habit #2 Fraudsters are night owls
Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.
Habit #3 Fraudsters are international
Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.
Habit #4 Fraudsters don multiple identities
Fraudsters tend to make multiple accounts on their laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud. A user who has 6 accounts on her laptop is 15 times more likely to be fraudulent than the average person. Users with only 1 account, however, are less likely to be fraudulent. SOC teams should look for multiple users using the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for much more than one user to access a PC in a given shift.
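The hunt described above reduces to counting distinct accounts per device over a time window. A minimal Python sketch, with invented device names and an arbitrary threshold, might look like this:

```python
from collections import defaultdict

def accounts_per_device(logins):
    """Map each device to the set of distinct accounts seen on it."""
    seen = defaultdict(set)
    for device, account in logins:
        seen[device].add(account)
    return seen

def flag_shared_devices(logins, threshold=3):
    """Devices with unusually many distinct accounts: candidates for review."""
    return [d for d, accts in accounts_per_device(logins).items()
            if len(accts) >= threshold]

logins = [("dev-1", "alice"), ("dev-1", "bob"), ("dev-1", "carol"),
          ("dev-2", "dave")]
print(flag_shared_devices(logins))  # ['dev-1']
```

The right threshold depends on the environment; a shared nurses’ station would warrant a higher one than individually assigned laptops.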
Habit #5 Fraudsters use well known domains
The top 3 sources of fraud originate from Microsoft sites including outlook.com, Hotmail and live.com. Traffic from/to such sites is worthy of trending and examining.
Habit #6 Fraudsters are boring
A widely recognized predictor of fraud is the number of digits in an email address. The more numbers, the more likely that it’s fraud.
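Counting those digits is trivial; the sketch below (with made-up addresses) looks only at the local part, before the @:

```python
def digit_count(email):
    """Number of digits in the local part of an email address."""
    local = email.split("@", 1)[0]
    return sum(ch.isdigit() for ch in local)

print(digit_count("jsmith@example.com"))      # 0
print(digit_count("xk84720113@example.com"))  # 8
```

On its own this is a weak signal, but it is cheap to compute and combines well with the other habits listed here.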
Habit #7 Fraudsters like disposable things
We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account, the less likely (in general) it is involved in fraud. SOC teams must always look out for account creation.
October 16, 2014
Wouldn’t it be nice if you detect when an external threat actor, who’s taken over one of your users’ endpoints, goes on a poaching expedition through all the information that user has access to on your network?
Easier said than done, right? After all, when malware is running on an endpoint, anything it does shows up as being performed by that user. How high really are your chances of recognizing those events as being different from the user’s normal behavior?
October 02, 2014
On April 16, 2013, a sniper took a hundred shots at Pacific Gas and Electric’s (PG&E) Metcalf Electric Power Transformer Station. The utility was able to reroute power on the grid and avert a blackout. The whole ordeal took nineteen tension-filled minutes. The event added muscle to the regulatory grip of The North American Electric Reliability Corporation (NERC) – a not-for-profit entity whose mission is to ensure the reliability of the bulk power system in North America. A terrorist attack, domestic or otherwise, could bring the state’s power grid down.
September 10, 2014
If you spend any time at all looking at log data from any server that is accessible to the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available. They most commonly include port scans, login attempts using default username/password combinations, and the web server attacks described by OWASP.
How can this possibly be? Given the sheer number of machines that are visible on the Internet? Don’t these guys have anything better to do?
The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize their effort, computing capacity and network bandwidth usage. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s Risk v. Reward all the way.
The automated (first) wave of these attacks is to identify vulnerable machines and establish presence. Following waves may be staffed depending on the location and identity of the victim, and thus the potential value to be obtained by a greater investment of (scarce) expertise by the attacker.
Such attacks can be deterred quite simply by using secure (non-default) configuration, system patching and basic security defenses such as firewall and anti-virus. This explains the repeated exhortations of security pundits on “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.
The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost so as to at least identify soft high-value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of the cost by using basic best practices.
Guidance such as the SANS Critical Controls or lessons from Verizon’s annual Data Breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect the logs which would expose the attacker’s actions, and certainly never actually look at the logs.
Defeating a “spray and pray” attack requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout, it beats being a victim.
August 22, 2014
I often get asked how to audit the deletion of objects in Active Directory. It’s pretty easy to do this with the Windows Security Log – especially for tracking deletion of users and groups which I’ll show you first. All you have to do is enable “Audit user accounts” and “Audit security group management” in the Default Domain Controllers Policy GPO.
July 24, 2014
Return on investment (ROI) — it is the Achilles heel of IT management. Nobody minds spending money to avoid costs, prevent disasters, and ultimately yield more than the initial investment outlay. But is the investment justified? It is challenging to calculate the ROI for any IT investment, and security information and event management (SIEM) tools are no exception. We recently explored some basic precepts or “pillars” of the ROI of SIEM tools and technology. These pillars provide some sensible groundwork for the difficult endeavor to justify intangible costs of SIEM tools and technology.
July 16, 2014
The three sides of the security triangle are People, Processes and Technology.
None of this is particularly new to CIOs and CSOs, yet how often have you seen six or seven digit “investments” sitting on datacenter racks, or even sometimes on actual storage shelves, unused or heavily underused? Organizations throw away massive amounts of money, then complain about “lack of security funds” and “being insecure.” For many organizations, buying security technologies is far too often an easier task than utilizing and “operationalizing” them. SIEM technology suffers from this problem, as do many other “monitoring” technologies.
Compliance and “checkbox mentality” makes this problem worse as people read the mandates and only pay attention to sections that refer to buying boxes.
Despite all this rhetoric, many managers equate information security with technology, completely ignoring the proper order. In reality, a skilled engineer with a so-so tool, but a good process is more valuable than an untrained person equipped with the best of tools.
As Gartner analyst Anton Chuvakin notes, “…if you got a $200,000 security appliance for $20,000 (i.e. at a steep 90% discount), but never used it, you didn’t save $180k – you only wasted $20,000!”
Security is not something you BUY, but something you DO.
May 20, 2014
The prevailing IT requirement tends toward doing more work faster, but with fewer resources to do such work, many companies must reconsider their traditional approaches to developing, deploying and maintaining software. One such approach, called DevOps, first gained traction as a viable software development and deployment strategy in Europe in the late 2000s. DevOps is a marriage of convenience
April 16, 2014
Analyzing all the login and pre-authentication failures within your organization can be tedious. There are thousands of login failures generated for several reasons. Here we will discuss the different event IDs and error codes and how you can simplify the login failure review process.
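For reference, Windows reports logon failures as event ID 4625 (with Kerberos pre-authentication failures surfacing separately as event ID 4771), and the sub-status code in a 4625 event tells you why the logon failed. A simple lookup, sketched below, covers the frequent codes documented by Microsoft, though not all of them:

```python
# A few common sub-status codes from Windows event ID 4625 (logon failure).
SUBSTATUS_4625 = {
    "0xC0000064": "user name does not exist",
    "0xC000006A": "correct user name, wrong password",
    "0xC000006F": "logon outside permitted hours",
    "0xC0000071": "password expired",
    "0xC0000072": "account disabled",
    "0xC0000193": "account expired",
    "0xC0000234": "account locked out",
}

def explain(sub_status):
    """Translate a 4625 sub-status code into plain English."""
    normalized = "0xC" + sub_status.upper()[3:]  # accept any letter case
    return SUBSTATUS_4625.get(normalized, "unmapped code; consult vendor docs")

print(explain("0xc000006a"))  # correct user name, wrong password
```

Bucketing failures by sub-status like this quickly separates noise (expired passwords at 9 AM) from signal (thousands of “user name does not exist” failures, a likely enumeration attempt).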
March 26, 2014
After an attacker has compromised a target infrastructure, the typical next step is credential theft. The objective is to propagate compromise across additional systems, and eventually target Active Directory and domain controllers to obtain complete control of the network.
February 19, 2014
Unstructured data access governance is a big compliance concern. Unstructured data is difficult to secure because there’s so much of it, it’s growing so fast, and it is user created, so it doesn’t automatically get categorized and controlled like structured data in databases. Moreover, unstructured data is usually a treasure trove of sensitive and confidential information in a format that bad guys can consume and understand without reverse engineering the relationship of tables in a relational database.
January 08, 2014
In January 2013, the New York Times accused hackers from China with connections to its military of successfully penetrating its network and gaining access to the logins of 53 employees, including Shanghai bureau chief David Barboza, who last October published an embarrassing article on the vast secret wealth of China’s prime minister, Wen Jiabao.
This came to light when AT&T noticed unusual activity that it was unable to trace or deflect. A security firm was brought in to conduct a forensic investigation, which uncovered the true extent of what had been going on.
Over four months starting in September 2012, the attackers managed to install 45 pieces of targeted malware designed, after stealing credentials, to probe for data such as emails; only one of these was detected by the installed antivirus software from Symantec. Although the staff logins were hashed, that does not appear to have stopped the hackers, perhaps, the newspaper suggests, because they were able to deploy rainbow tables to beat the relatively short passwords.
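The rainbow-table detail is worth unpacking: an unsalted hash of a short password can simply be looked up in a precomputed table, while a per-user random salt makes precomputation impractical. A minimal illustration; the PBKDF2 parameters here are arbitrary choices, not from the article:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted, iterated hash: the same password yields a different digest per
    user, so no single precomputed table can crack every account."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest
```

With a scheme like this, the attackers would have had to brute-force each of the 53 accounts individually instead of reusing one table.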
Symantec offered this statement: “Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats.”
Still think that basic antivirus and a firewall are enough? Take it directly from Symantec: you need to monitor and analyze data from inside the enterprise for evidence of compromise. This is Security Information and Event Management (SIEM).
January 02, 2014
Erik Gartzke, writing in International Security, argues that attackers don't have much motive to stage a Pearl Harbor-type attack in cyberspace if they aren't involved in an actual shooting war.
Here is his argument:
It isn’t going to accomplish any very useful goal. Attackers cannot easily use the threat of a cyber attack to blackmail the U.S. (or other states) into doing something they don’t want to do. If they provide enough information to make the threat credible, they instantly make the threat far more difficult to carry out. For example, if an attacker threatens to take down the New York Stock Exchange through a cyber attack, and provides enough information to show that she can indeed carry out this attack, she is also providing enough information for the NYSE and the U.S. Government to stop the attack.
Cyber attacks usually involve hidden vulnerabilities — if you reveal the vulnerability you are attacking, you probably make it possible for your target to patch the vulnerability. Nor does it make sense to carry out a cyber attack on its own, since the damage done by nearly any plausible cyber attack is likely to be temporary.
Points to ponder:
Turning to commercial systems, attacks are usually for monetary gain. Attacks are also often performed simply because "they can" (recall George Mallory's famous reply to the question "Why do you want to climb Mount Everest?": "Because it's there").
December 08, 2013
Last year at this time, the running count already totaled approximately 27.8 million records compromised and 637 breaches reported. This year, that tally so far equals about 10.6 million records compromised and 483 breaches reported. It’s a testament to the progress the industry has made in the fundamentals of compliance and security best practices. But this year’s record is clearly far from perfect.
November 20, 2013
Over the years, security admins have repeatedly asked me how to audit file shares in Windows. Until Windows Server 2008, there were no specific events for file shares; the best we could do was to enable auditing of the registry key where shares are defined. But in Windows Server 2008 and later, there are two new subcategories for share-related events.
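For reference, the two subcategories are File Share and Detailed File Share, and the event IDs they produce can be filtered as in this illustrative sketch (IDs per Microsoft's auditing documentation):

```python
# Share-related event IDs introduced with the Windows Server 2008+ audit
# subcategories "File Share" and "Detailed File Share".
SHARE_EVENTS = {
    5140: "network share object accessed",
    5142: "network share added",
    5143: "network share modified",
    5144: "network share deleted",
    5145: "share object access checked (detailed)",
}

def share_activity(log):
    """log: iterable of (event_id, share_name) pairs; keep only share events."""
    return [(eid, name, SHARE_EVENTS[eid])
            for eid, name in log if eid in SHARE_EVENTS]
```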
October 23, 2013
Since its inception, SIEM has been something for the well-to-do IT department: the one that can spend tens or hundreds of thousands of dollars on a capital acquisition of the technology and then afford the luxury of qualified staff to use it in the intended manner. In some cases, they hire experts from the SIEM vendor to "man the barricades." In the real world of a typical IT department in the medium enterprise or small business, this is a ride in Fantasy Land. Budgets simply do not allow capital expenditures of multiple six or even five figures; expert staff, to the extent they exist, are hardly idling and available to work the SIEM console; and as for hiring outside experts, the less said, the better. And so, SIEM has remained the province of the well-heeled.
August 21, 2013
There is a lot of discussion, in the context of cloud as well as traditional computing, regarding Smart IT, Smarter Planets, and Smart and Smarter Computing. This makes a lot of sense in light of the explosion in the amount of collected data and the massive efforts aimed at using analytics to yield insight, information and intelligence about, well, just about everything. We have no problem with smart activities.
July 17, 2013
What security events get logged when a user logs on to their workstation with a domain account and proceeds to run local applications and access resources on servers in the domain? When a user logs on at a workstation with their domain account, the workstation contacts a domain controller via Kerberos and requests a ticket-granting ticket (TGT).
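A sketch of what that flow looks like in the logs: the TGT request (event 4768) and the subsequent service ticket request (4769) are recorded on the domain controller, and the resulting logon (4624) on the workstation or server. The chain check below is illustrative only:

```python
# The ordered chain of event IDs left by a successful domain logon:
# 4768 (TGT requested, on DC) -> 4769 (service ticket, on DC) -> 4624 (logon).
EXPECTED_CHAIN = [4768, 4769, 4624]

def is_complete_logon(event_ids):
    """True if the observed IDs contain the TGT / service-ticket / logon
    chain as an in-order subsequence (other events may be interleaved)."""
    it = iter(event_ids)
    return all(any(eid == want for eid in it) for want in EXPECTED_CHAIN)
```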
July 10, 2013
At the typical office, computer equipment becomes obsolete or slow and periodically requires replacement or refresh. This includes workstations, servers, copy machines, printers and so on. Users who get the upgrades are inevitably pleased; they carefully move their data to the new equipment and happily release the old. What happens after this? Does someone cart the old machines off to the local recycling depot? Do you call for a dumpster? This is likely the case at the small or medium enterprise, whereas large enterprises may hire an electronics recycler.
This blog by Kyle Marks appeared in the Harvard Business Review and reminds us that sensitive data can very well be leaked via decommissioned electronics also.
A SIEM solution like EventTracker is effective when leakage occurs from connected equipment or even mobile laptops or those that connect infrequently. However, disconnected and decommissioned equipment is invisible to a SIEM solution.
If you are subject to regulatory compliance, leakage is leakage. Data security laws mandate that organizations implement "adequate safeguards" to ensure privacy protection of individuals. This applies equally when the leakage comes from your electronic trash: you are still bound to safeguard the data.
Marks points out that detailed tracking data, however, reveals a troubling fact: four out of five corporate IT asset disposal projects had at least one missing asset. More disturbing is the fact that 15% of these “untracked” assets are devices potentially bearing data such as laptops, computers, and servers.
Treating IT asset disposal as a "reverse procurement" process will deter insider theft. This is something that EventTracker cannot help with, but it is equally valid in addressing compliance and security regulations.
You often see a gumshoe or Private Investigator in the movies conduct Trash Archaeology in looking for clues. Now you know why.
July 03, 2013
In the aftermath of the disclosure of the NSA program called PRISM by Edward Snowden to a reporter at The Guardian, commentators have gone into overdrive, and the most iconic quote is one attributed to Benjamin Franklin: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
It was amazing that something said over 250 years ago would be so apropos. Conservatives favor an originalist interpretation of documents such as the US Constitution (see Federalist Society) and so it seemed possible that very similar concerns existed at that time.
Trying to get to the bottom of this quote, Ben Wittes of Brookings wrote that it does not mean what it seems to say.
The words appear originally in a 1755 letter that Franklin is presumed to have written on behalf of the Pennsylvania Assembly to the colonial governor during the French and Indian War. The Assembly wished to tax the lands of the Penn family, which ruled Pennsylvania from afar, to raise money for defense against French and Indian attacks. The Penn family was unwilling to acknowledge the power of the Assembly to tax them, and the Governor, being an appointee of the Penn family, kept vetoing the Assembly's efforts. The Penn family later offered cash to fund defense of the frontier, as long as the Assembly would acknowledge that it lacked the power to tax the family's lands.
Franklin was thus complaining of the choice facing the legislature between being able to make funds available for frontier defense versus maintaining its right of self-governance. He was criticizing the Governor for suggesting it should be willing to give up the latter to ensure the former.
The statement is typical of Franklin's style and rhetoric, which also includes "Sell not virtue to purchase wealth, nor Liberty to purchase power." While the circumstances were quite different, the general principle he was stating does seem relevant to the Snowden case.
June 26, 2013
Over the past year, enterprise IT has had more than a few things emerge to frustrate and challenge it. High on the list has to be limited budget growth in the face of increasing demand for and expectations of new services. In addition, there has been an explosion in the list of technologies and concerns that appear to be particularly intended to complicate the task of maintaining smooth running operations and service delivery.
May 22, 2013
One thing I always wished you could do in Windows auditing was mandate that access to an object be audited if the user was NOT a member of a specified group. Why? Well sometimes you have data that you know a given group of people will be accessing and for that activity you have no need of an audit trail. Let’s just say you know that members of the Engineering group will be accessing your Transmogrifier project folder and you do NOT need an audit trail for when they do. But this is very sensitive data and you DO need to know if anyone else looks at Transmogrifier.
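Before the conditional ACEs introduced with Windows Server 2012, one workaround was to audit everyone's access and filter afterward. A hypothetical post-processing sketch (the group and folder names are the example's own):

```python
# Audit all access to the sensitive folder, then keep only audit records
# (e.g. event 4663) for users who are NOT in the expected group.
ENGINEERING = {"alice", "bob"}

def unexpected_access(events, allowed=ENGINEERING):
    """events: (user, path) pairs from file-access audit records.
    Returns accesses by users outside the allowed group."""
    return [(user, path) for user, path in events if user not in allowed]
```

This gives the "audit if NOT a member" semantics at review time rather than at ACL time.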
April 18, 2013
Detecting Persistent Attacks with SIEM: As you read this, attackers are working to infiltrate your network and exfiltrate valuable information like trade secrets and credit card numbers. In this newsletter, featuring research from Gartner, we discuss advanced persistent threats and how SIEM can help detect such attacks. We also discuss how you can quickly get on the road to deflecting persistent attacks. Read the entire newsletter here.
March 13, 2013
I think one of the most underutilized features of Windows auditing and the Security Log is Process Tracking events. In Windows 2003/XP you get these events by simply enabling the Process Tracking audit policy. In Windows 7/2008+ you need to enable the Audit Process Creation and, optionally, the Audit Process Termination subcategories, which you'll find under Advanced Audit Policy Configuration in group policy objects.
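As an illustration of why process tracking is so useful, even a trivial filter over 4688 (process creation) parent/child pairs can surface oddities such as an Office application spawning a shell. The names below are illustrative, not a vetted detection rule:

```python
# Flag 4688 records where a document-handling application spawns a shell,
# a common sign of a malicious macro or exploit payload.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "acrord32.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def flag_process_creations(events):
    """events: (parent_name, child_name) pairs from 4688 records."""
    return [(parent, child) for parent, child in events
            if parent.lower() in SUSPICIOUS_PARENTS
            and child.lower() in SHELLS]
```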
February 13, 2013
On a recent flight returning from an engagement with a client, my seating companion and I exchanged a few words as we settled into the flight before donning headphones and turning to the iPod music and games used to distract ourselves from the hassles of travel. He was a cardiologist, and introduced himself as such, before quickly describing his job as basically "a glorified plumber." We both chuckled, knowing that while the two fields share fundamentals in basic concepts, there is much more to cardiology than managing and controlling flow. BTW, my own practical plumbing experiences have convinced me of the value of a good plumber.
January 30, 2013
Small businesses around the world tend to be more innovative and cost-conscious. Most often, the owners tend to be younger and therefore more attuned to being online. The efficiencies that come from being computerized and connected are more obvious and attractive to them. But we know that if you are online then you are vulnerable to attack. Are these small businesses too small for hackers to care?
Two recent reports say no.
From the UK, the Information Security Breaches Survey 2012 results published by PwC show:
From the US, the 2012 Verizon data breach report shows:
Lesson learned? Small may be beautiful, but in the interconnected world we live in, not too small to be hacked. Protect thyself – start simple by changing remote access credentials and enabling a firewall, monitor and mine your logs. ‘Nuff said.
January 23, 2013
Is this true for you: has your smartphone merged your private and work lives? Smartphones now contain, by accident or by design, a wealth of information about the businesses we work for.
If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?
Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on them, along with the capability to remotely monitor what happened to them once they were found. They were left in high traffic public places such as elevators, malls, food courts, and public transit stops.
The corporate related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?
See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’
January 09, 2013
I often encounter a dangerous misconception about the Windows Security Log: the idea that you only need to monitor domain controller logs. Domain controller security logs are absolutely critical to security but they are only a portion of your overall audit trail. Member server and workstation logs are really just as important and I’m going to focus this article on the top 4 questions you can only answer with workstation logon/logoff events.
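As one example of a question only workstation events can answer, pairing 4624 (logon) with 4634/4647 (logoff) records that share a logon ID yields console session durations, something domain controller logs alone cannot tell you. A minimal sketch, with timestamps simplified to seconds:

```python
# Pair workstation logon (4624) and logoff (4634 user-initiated, 4647
# interactive) events by logon ID to compute session durations.
def session_durations(events):
    """events: (event_id, logon_id, timestamp_seconds) tuples in time order."""
    starts, durations = {}, {}
    for event_id, logon_id, ts in events:
        if event_id == 4624:
            starts[logon_id] = ts
        elif event_id in (4634, 4647) and logon_id in starts:
            durations[logon_id] = ts - starts.pop(logon_id)
    return durations
```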
For your workstations to generate these events you need to enable at least the following audit policy. Remember that XP is configured with the legacy 9 audit categories while Windows 7 and 8 should be configured with audit subcategories under Advanced Audit Policy in group policy objects:
December 19, 2012
“The beginning of a new year marks a time of reflection on the past and anticipation of the future. The result for analysts, pundits and authors is a near irresistible urge to identify important trends in their areas of expertise…” (from our January newsletter) We made a lot of predictions this past year and now it’s time to review them and assess our accuracy.
December 12, 2012
The newspapers are full of stories of the latest attack. Then vendors rush to put out marketing statements glorifying themselves for already having had a solution to the problem, if only you had their product/service, and the beat goes on.
Pause for a moment and compare this to health scares. The top 10 scares according to ABC News include Swine Flu (H1N1), BPA, Lead paint on toys from China, Bird Flu (H5N1) and so on. They are, no doubt, scary monsters but did you know that the common cold causes 22 million school days to be lost in the USA alone?
In other words, you are better off enforcing basic discipline to prevent days lost to common infections than stockpiling exotic vaccines. The same is true in IT security. Here, then, are the top 5 attack vectors of all time. Needless to say, these are not particularly hard to execute, and they are most often successful simply because basic precautions are not in place or enforced. The Verizon Data Breach Report demonstrates this year in and year out.
1. Information theft and leakage
Personally Identifiable Information (PII) stolen from unsecured storage is rampant. The Federal Trade Commission says 21% of complaints are related to identity theft, which accounted for 1.3 million cases in 2009/10 in the USA. The 2012 Verizon DBIR shows 855 incidents and 174 million compromised records.
Lesson learned: Implement recommendations like SANS CAG or PCI-DSS.
2. Brute force attack
Hackers leverage cheap computing power and pervasive broadband connectivity to breach security. This is a low cost, low tech attack that can be automated remotely. It can be easily detected and defended against, but it requires monitoring and eyes on the logs. It tends to be successful because monitoring is absent.
Lesson learned: Monitor logs from firewalls and network devices in real time. Set up alerts which are reviewed by staff and acted upon as needed. If this is too time consuming, then consider a service like SIEM Simplified.
3. Insider breach
Staff on the inside are often privy to a large amount of data and can cause much greater damage. The Wikileaks case is the poster child for this type of attack.
4. Process and Procedure failures
It is often the case that, in the normal course of business, established processes and procedures are ignored, and unfortunate coincidences then cause problems. Examples include e-mailing interim work products to personal accounts, taking work home on USB sticks that are then lost, mailing CD-ROMs with source code that go missing in transit, and so on.
Lesson learned: Reinforce policies and procedures for all employees on a regular basis. Many US Government agencies require annual completion of a Computer Security and Assessment Test. Many commercial banks remind users via message boxes in the login screen.
5. Operating failures
This includes oops moments, such as backing up data to the wrong server and sending backup data off-site where it can be restored by unauthorized persons.
Lesson learned: Review procedures and policies for gaps. An external auditor can be helpful in identifying such gaps and recommending compensating controls to cover them.
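The real-time monitoring recommended against brute force attacks (item 2 above) can be sketched as a sliding-window counter of failures per source. The window and threshold below are illustrative values, not recommendations:

```python
from collections import defaultdict, deque

def brute_force_sources(failures, window=300, threshold=10):
    """failures: (timestamp_seconds, source_ip) pairs in time order.
    Flags any source with >= threshold failures inside the window."""
    recent = defaultdict(deque)   # per-source timestamps still in the window
    flagged = set()
    for ts, ip in failures:
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()           # drop failures older than the window
        if len(q) >= threshold:
            flagged.add(ip)
    return flagged
```

A rule like this over firewall or authentication logs is exactly the kind of "eyes on the logs" the post argues is usually missing.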
November 29, 2012
Troubleshooting problems with enterprise applications and services is often an exercise in frustration for IT and business staff. The reasons are well documented: complex architectures; disparate, unintegrated monitoring solutions; and minimal coordination between technology and product experts attempting to pinpoint and resolve problems under the pressure of delays and downtime whose negative impact on revenues, customer satisfaction and service delivery keeps escalating.
October 24, 2012
I’ve spent the last 20 years analyzing the Information Technologies market. My work with vendors has ranged from developing business strategies and honing messaging to defining product requirements and identifying significant trends. My work with IT enterprise decision-makers has been to help define requirements, identify and evaluate alternatives, and recommend solutions, etc. We’ve always worked closely with our clients to understand first what they are trying to accomplish, then providing the advice, support and services that we believe will be most effective in achieving those goals.
September 20, 2012
Despite its significant costs and a mixed record of success, the compliance-related load imposed on today’s enterprise has yet to decrease. Current trends driven by government legislative efforts, and adopted at the executive level, favor the continuing proliferation of monitoring and reporting in operations, decision-making and service delivery. Even if existing legislation is repealed, it is not certain that compliance edicts will cease.
September 18, 2012
SIEM Fever is a condition that robs otherwise rational people of common sense in regard to adopting and applying Security Information and Event Management (SIEM) technology for their IT Security and Compliance needs. The consequences of SIEM Fever have contributed to misapplication, misuse, and misunderstanding of SIEM with costly impact. For example, some organizations have adopted SIEM in contexts where there is no hope of a return on investment. Others have invested in training and reorganization but use or abuse the technology with new terminology taken from the vendor dictionary. Alex Bell of Boeing first described these conditions.
Before you get your knickers in a twist from a belief that this is an attack on SIEM that must be avenged with flaming commentary against its author, fear not. There are real IT Security and Compliance efforts wasting real money and real time by misusing SIEM in a number of common forms. Let's review these types of SIEM Fever so they can be recognized and treated.
Lemming Fever: A person with Lemming Fever knows about SIEM simply based upon what he or she has been told (be it true or false), without any first-hand experience or knowledge of it themselves. The consequences of Lemming Fever can be very dangerous if infectees have any kind of decision making responsibility for an enterprise’s SIEM adoption trajectory. The danger tends to increase as a function of an afflictee’s seniority in the program organization due to the greater consequences of bad decision making and the ability to dismiss underling guidance. Lemming Fever is one of the most dangerous SIEM Fevers as it is usually a precondition to many of the following fevers.
Easy Button Fever: This person believes that adopting SIEM is as simple as pressing the Staples Easy Button, at which point their program magically and immediately begins reaping the benefits of SIEM as imagined during the Lemming Fever stage of infection. Depending on the Security Operations Center (SOC) methodology, however, the deployment of SIEM could mean significant change. Typically, these people have little to no idea about the features which are necessary for delivering SIEM's productivity improvements, or the possible inapplicability of those features to their environment.
One Size Fits All Fever: Victims of One Size Fits All Fever believe that the same SIEM model is applicable to any and all environments with a return on investment being implicit in adoption. While tailoring is an important part of SIEM adoption, the extent to which SIEM must be tailored for a specific environment’s context is an important barometer of its appropriateness. One Size Fits All Fever is a mental mindset that may stand alone from other Fevers that are typically associated with the tactical misuse of SIEM.
Simon Says Fever: Afflictees of Simon Says Fever are recognized by their participation in SIEM-related activities without the slightest idea as to why those activities are being conducted or why they are important, other than because they are included in some "checklist." The most common cause of this Fever is failing to tie all log and incident review activities to adding value, and falling into a comfortable, robotic regimen that is merely an illusion of progress.
One-Eyed King Fever: This Fever has the potential to severely impact the successful adoption of SIEM and occurs when the SIEM blind are coached by people with only a slightly better understanding of SIEM. The most common symptom in the presence of One-Eyed King Fever is failure to tailor the SIEM implementation to its specific context, or the failure of a coach to recognize and act on a low probability of return on investment as it pertains to an enterprise's adoption.
The Antidote: SIEM doesn't cause the Fevers previously described; people do. Whether these people are well intentioned, have studied at the finest schools, or have high IQs, they are typically ignorant of SIEM in many dimensions. They have little idea about the qualities of SIEM that are the basis of its advertised productivity-improving features, they believe that those improvements are guaranteed by merely adopting SIEM, or they have little idea that the extent of SIEM's ability to deliver benefit is highly dependent upon program-specific context.
The antidote for the many forms of SIEM Fever is education. Unfortunately, many of those who are prone to the aforementioned SIEM infections and most desperately in need of such education are often unaware of what they don't know about SIEM, unreceptive to learning about what they don't know, or convinced that those trying to educate them are simply village idiots who have not yet seen the brightly burning SIEM light.
While I’m being entirely tongue-in-cheek, the previously described examples of SIEM misuse and misapplication are real and occurring on a daily basis. These are not cases of industrial sabotage caused by rogue employees planted by a competitor, but are instead self-inflicted and frequently continue even amidst the availability of experts who are capable of rectifying them.
Interested in getting help? Consider SIEM Simplified.
August 22, 2012
Unfortunately, IT is not perfect; nothing in our world can be. Compounding the inevitable failures and weaknesses in any system designed by fallible beings are those with malicious or larcenous intent who search for exploitable system weaknesses. As a result, IT and the businesses, enterprises and users depending upon reliable operations are no strangers to disruptions, problems, and embarrassing, even ruinous, releases of data and information. The recent exposures of the passwords of hundreds of thousands of Yahoo! and Formspring users are only two of the most recent public occurrences that remind us of the risks and weaknesses that remain in the systems of even the most sophisticated service providers.
August 15, 2012
The Gartner hype cycle is a graphic "source of insight to manage technology deployment within the context of your specific business goals." If you have already adopted Security Information and Event Management (SIEM, aka log management) technology in your organization, how is that working for you? As a candidate, Reagan famously asked, "Are you better off than you were four years ago?"
Sadly, many buyers of this technology are wallowing in the "trough of disillusionment." The implementation has been harder than expected, the technology more complex than demonstrated, the discipline required to use and tune the product is lacking; add resource constraints and hiring freezes, and the list goes on.
What next? Here are some choices to consider.
Do nothing: Perhaps the compliance check box has been checked off; auditors can be shown the SIEM deployment and sent on their way; the senior staff have moved on to the next big thing; the junior staff have their hands full anyway; leave well enough alone.
Upside: No new costs, no disturbance in the status quo.
Downside: No improvements in security or operations; attackers count on the fact that even if you do collect SIEM log data, you will never really look at it.
Abandon ship: Give up on the whole SIEM concept as yet another failed IT project; the technology was immature; the vendor support was poor; we did not get resources to do the job and so on.
Upside: No new costs, in fact perhaps some cost savings from the annual maintenance, one less technology to deal with.
Downside: Naked in the face of attack or an auditor visit; expect an OMG crisis situation soon.
Try a managed service: Managing a SIEM is 99% perspiration and 1% inspiration. Offload the perspiration to a team that does this for a living; they can do it with discipline (their livelihood depends on it) and probably cheaper too (passing the savings on to you); you deal with the inspiration.
Upside: Security usually improves; compliance is not a nightmare; frees up senior staff to do other pressing/interesting tasks; cost savings.
Downside: Some loss of control.
Interested? We call it SIEM Simplified™.
August 09, 2012
Jill Dyche writing in the Harvard Business Review suggests that “the question on many business leaders’ minds is this: Does the potential for accelerating existing business processes warrant the enormous cost associated with technology adoption, project ramp up, and staff hiring and training that accompany Big Data efforts?”
A typical log management implementation, even in a medium enterprise, is usually a big data endeavor. Surprised? You should not be. A relatively small network of a dozen log sources easily generates a million log messages per day, with volumes of 50-100 million per day being commonplace. With compliance and security guidelines requiring that logs be retained for 12 months or more, pretty soon you have big data.
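A back-of-envelope check of that claim, assuming roughly 500 bytes per raw log message (a common planning figure, not a number from the article):

```python
# Yearly raw log volume at the mid-range of the rates cited above.
events_per_day = 50_000_000   # mid-range of the 50-100 million per day cited
bytes_per_event = 500         # assumed average raw message size
retention_days = 365          # 12-month retention requirement

total_tb = events_per_day * bytes_per_event * retention_days / 1e12
# roughly 9 TB of raw log data per year, before any indexing overhead
```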
So let’s answer the question raised in the article:
Q1: What can’t we do today that Big Data could help us do? If you can’t define the goal of a Big Data effort, don’t pursue it.
A1: Comply with regulations like PCI-DSS, SOX 404, and HIPAA; be alerted to security problems in the enterprise; control data leakage via insecure endpoints; improve operational efficiency.
Q2: What skills, technologies, and existing data development practices do we have in place that could help kick-start a Big Data effort? If your company doesn’t have an effective data management organization in place, adoption of Big Data technology will be a huge challenge.
A2: Absent a trained and motivated user of the power tool that is the modern SIEM, an organization that acquires such technology is consigning it to shelf ware. Recognizing this as a significant adoption challenge in our industry, we offer Monitored SIEM as a service; the best way to describe this is SIEM simplified! We do the heavy lifting so you can focus on leveraging the value.
Q3: What would a proof-of-concept look like, and what are some reasonable boundaries to ensure its quick deployment? As with many other proofs-of-concept the “don’t boil the ocean” rule applies to Big Data.
A3: The advantage of a software-only solution like EventTracker is that an on-premises trial is easy to set up. A virtual appliance with everything you need is provided; set it up as a VMware or Hyper-V virtual machine within minutes. Want something even faster? See it live online.
Q4: What determines whether we green light Big Data investment? Know what success looks like, and put the measures in place.
A4: Excellent point; success may mean continuous compliance; a 75% reduction in cost of compliance; one security incident averted per quarter; delegation of log review to a junior admin.
Q5: Can we manage the changes brought by Big Data? With the regular communication of tangible results, the payoff of Big Data can be very big indeed.
A5: EventTracker includes more than 2,000 pre-built reports designed to deliver value to every interested stakeholder in the enterprise ranging from dashboards for management, to alerts for Help Desk staff, to risk prioritized incident reports for the security team, to system uptime and performance results for the operations folk and detailed cost savings reports for the CFO.
The old adage “If you fail to prepare, then prepare to fail” applies. Armed with these questions and answers, you are closer to gaining real value with Big Data.
August 01, 2012
"All warfare is based on deception," says Sun Tzu. To quote:
“Hence, when able to attack, we must seem unable;
When using our forces, we must seem inactive;
When we are near, we must make the enemy believe we are far away;
When far away, we must make him believe we are near.”
With the new era of cyberweapons, Sun Tzu’s blueprint can be followed almost exactly: a nation can attack when it seems unable to. When conducting cyber-attacks, a nation will seem inactive. When a nation is physically far away, the threat will appear very, very near.
Amidst all the controversy and mystery surrounding attacks like Stuxnet and Flame, it is becoming increasingly clear that the wars of tomorrow will most likely be fought by young kids at computer screens rather than by young kids on the battlefield with guns.
In the area of technology, what is invented for use by the military or for space eventually finds its way to the commercial arena. It is therefore only a matter of time before the techniques used by Flame or Stuxnet become part of the arsenal of the average cyber thief.
Ready for the brave new world?
July 25, 2012
IBM recently introduced the IBM PureSystems line of expert integrated systems. Available in a number of versions, they are pre-configured with various levels of embedded automation and intelligence depending upon whether the customer wants these capabilities implemented with a focus on the infrastructure, platform or application level. Depending on what is purchased, IBM PureSystems can include server, network, storage and management capabilities.
June 19, 2012
Previously, we discussed looking for opportunities to apply analytics to the data in your own backyard. The focus on ‘Big Data’ and sophisticated analytics tends to obscure and cause business and IT staff to overlook the in-house data already abundantly present and available for analysis. As the cost of data acquisition and storage has dropped along with the cost of computing, the amount of data available, as well as the opportunity and ability to extensively analyze it has exploded. The task is to discover and unlock the information that is hidden in all the available data.
April 18, 2012
Back in January, I said that the use of sophisticated analytics as a business and competitive tool would become widespread. Since then, the number of articles, blogs and announcements relating to analytics has increased dramatically: an internet search for the term ‘Business Analytics’ using Bing yields over 47 million hits. Smart Analytics (an IBM term) shrinks that number to approximately 12.3 million hits. If we change the search term to ‘Applied Analytics,’ the number decreases to a little less than 7 million hits.
March 14, 2012
Prism Microsystems’ founders decided early on that their goal, and the reason for the company’s existence, was to design, develop and deliver SIEM services. As executives with a successful history in entrepreneurship, product development and enterprise management, they knew the risk and seductive promise of distracting diversification in pursuit of expanded revenues. They committed to concentrating specifically on the SIEM functions of monitoring, discovery and warning about threats to security, compliance (in its multiple modes) and operational commitments.
February 22, 2012
5. Overdoing compensating controls
When a legitimate technological or documented business constraint prevents you from satisfying a requirement, a compensating control can be the answer after a risk analysis is performed. Compensating controls are not specifically defined inside PCI; they are instead defined by you (as a self-certifying merchant) or your QSA. They are specifically not an excuse to push PCI compliance initiatives through to completion at minimal cost to your company. In reality, most compensating controls are harder to implement and cost more money in the long run than actually fixing the original issue or vulnerability. See this article for a clear picture on the topic.
4. Separation of duties
Separation of duties is a key concept of internal controls. The increased protection it offers against fraud and errors must be balanced against the increased cost and effort required. Both PCI DSS Requirements 3.4.1 and 3.5 mention separation of duties as an obligation for organizations, and yet many still do not do it right, usually because they lack staff.
3. Principle of least privilege
PCI DSS Requirement 2.2.3 directs organizations to “configure system security parameters to prevent misuse.” This requires drilling down into user roles to ensure they follow the principle of least privilege wherever PCI regulations apply. It is easier said than done; more often it is “easier” to grant all possible privileges than to determine and assign just the correct set. Convenience is the enemy of security.
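One way to make least privilege auditable is to define each role’s required privileges once and then flag any grant that exceeds them. The sketch below illustrates the idea; the role names, privilege names and user records are hypothetical examples, not part of PCI DSS itself.

```python
# Illustrative least-privilege check: flag privileges granted beyond
# what a user's role requires. Roles, privileges and users below are
# hypothetical examples for demonstration only.

ROLE_PRIVILEGES = {
    "cashier":  {"pos_read"},
    "dba":      {"pos_read", "db_read", "db_write"},
    "sysadmin": {"pos_read", "db_read", "db_write", "os_admin"},
}

def excess_privileges(role, granted):
    """Return privileges granted but not required by the role."""
    return sorted(set(granted) - ROLE_PRIVILEGES.get(role, set()))

users = [
    ("alice", "cashier", {"pos_read", "db_write"}),  # over-privileged
    ("bob",   "dba",     {"pos_read", "db_read"}),   # within role
]

for name, role, granted in users:
    extra = excess_privileges(role, granted)
    if extra:
        print(f"{name} ({role}) exceeds least privilege: {extra}")
```

Running a check like this periodically turns “drill down into user roles” from a one-time project into a repeatable control.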
2. Fixating on excluding systems from scope
When you make the process of getting things out of scope a higher priority than addressing real risk, you get into trouble. Risk mitigation must come first and foremost. In far too many cases, out of scope becomes out of mind. This may make your CFO happy, but a hacker will get past weak security and not care whether the system is in scope or not.
And drum roll …
1. Ignoring virtualization
Many organizations have embraced virtualization wholeheartedly, given its efficiency gains. In some cases, virtualized machines are now off-premises and co-located at a service provider such as Rackspace; this is a trend at federal government facilities. However, “off-premises” does not mean “off your list.” Regardless of the location of the cardholder data, such systems are within scope, as is the hypervisor. In fact, PCI DSS 2.0 says that if cardholder data is present on even one VM, then the entire VM infrastructure is “in scope.”
February 15, 2012
While there are still some who question the ‘relevance’ of IT to the enterprise, and others who question the ‘future’ of IT, those involved in day-to-day business activities recognize and acknowledge that IT operations is integral to business success and this is unlikely to change in the immediate future. Today’s IT staffer with security incident and event management (SIEM) responsibility must be able not only to detect, identify and respond to anomalies in infrastructure performance and operations, but also build processes, make decisions and take action based on the business impact of the incidents and events recorded in ubiquitous logs.
February 14, 2012
Since every cause needs “Awareness,” here are my picks for management speak to camouflage the bloody obvious:
5. Events per second
Log Management vendors are still trying to “differentiate” with this tired and meaningless metric as we pointed out in The EPS Myth.
4. Thought leadership
Mitch McCrimmon describes it best.
Now here is a term that means all things to all people.
2. Does that make sense?
The new “to be honest.” Jerry Weismann discusses it in the Harvard Business Review.
During the recent SOPA debate, so many self-described “country boys” wanted to get the “nerds” to explain the issue to them; as Jon Stewart pointed out, the word they were looking for was “expert.”
February 08, 2012
The Appalachian Trail is a marked hiking trail in the eastern United States extending between Georgia and Maine. It is approximately 2,181 miles long and takes about six months to complete. It is not a particularly difficult journey from start to finish; yet even so, completing the trail requires more from the hiker than just enthusiasm, endurance and will.
Likewise, SIEM implementation can take from one to six months to complete (depending on the level of customization) and, like the Trail, appears deceptively simple. It, too, can be filled with challenges that reduce even the most experienced IT manager to despair, and there is no shortage of implementations that have been abandoned or left uncompleted. As with the Trail, SIEM implementation requires thoughtful consideration.
1) The Reasons Why
It doesn’t take too many nights scurrying to find shelter in a lightning storm, or days walking in adverse conditions before a hiker wonders: Why am I doing this again? Similarly, when implementing any IT project, SIEM included, it doesn’t take too many inter-departmental meetings, technical gotchas, or budget discussions before this same question presents itself: Why are we doing this again?
All too often, we don’t have a compelling answer, or we have forgotten it. If you are considering a half year long backpacking trip through the woods, there is a really good reason for it. In the same way, one embarks on a SIEM project with specific goals, such as regulatory compliance, IT security improvement or to control operating costs. Define the answer to this question before you begin the project and refer to it when the implementation appears to be derailing. This is the compass that should guide your way. Make adjustments as necessary.
2) The Virginia Blues
Daily trials can include anything from broken bones to homesickness. The low point known as the “Virginia Blues” hits about four to eight weeks into the journey, within the state lines of Virginia. Getting through requires not just perseverance but also an ability to adapt.
For a SIEM project, staff turnover, false positives, misconfigurations or unplanned explosions of data can potentially derail the project. But pushing harder in the face of distress is a recipe for failure. Step back, remind yourself of the reasons why this project is underway, and look at the problems from a fresh perspective. Can you be flexible? Can you find new avenues to get around the problems?
3) A Fresh Perspective
In the beginning, every day is chock full of excitement: every summit view or wild animal encounter is thrilling. But life in the woods eventually becomes routine, and exhilaration fades into frustration.
In much the same way, after the initial thrill of installation and its challenges, the SIEM project devolves into a routine of discipline and daily observation across the infrastructure for signs of something amiss.
This is where boredom can set in, and the best defense against the lull that comes with the end of the implementation is to expect it. The journey is going to end, but the work is not complete when the project is implemented. Rather, when the installation is done, the real journey and the hard work begin.
February 01, 2012
Among InfoSec and IT staff, there is a lot of behind-the-scenes hand wringing that users are the weakest link. But are InfoSec staff that much stronger?
While automation does have a place, Dan Geer of CIA-backed venture fund In-Q-Tel properly notes that “humans can build structures more complex” than they can operate, and asks: “Are humans in the loop a failsafe or a liability? Is fully automated security to be desired or to be feared?”
We’ve considered this question before at Prism, when “automated remediation” was being heavily touted as a solution for mid-market enterprises, where IT staff is not abundant. We’ve found that human intervention is not just a fail-safe but a necessity; the interdependencies, even in medium-sized networks, are far too complex to automate fully. We introduced the feature a couple of years back and, in reviewing its usage, concluded that such “automated remediation” does have a role to play in the modern enterprise. Use cases include changes to group membership in Active Directory, unrecognized processes, account creation where the naming convention is not followed, and honeypot access. In other words, when the condition can be well defined and narrowly focused, humans in the loop will only slow things down. However, for every such “rule” there are hundreds more situations that will be obvious to a human but missed by the narrow rule.
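The “well defined and narrowly focused” condition can be made concrete with a small sketch. Here a hypothetical remediation rule handles one of the use cases above, an account name that violates a naming convention; the three-letters-plus-four-digits convention is an assumed example, not a real EventTracker rule.

```python
import re

# Sketch of a narrowly scoped "automated remediation" rule: new accounts
# that violate a naming convention are disabled and alerted on without a
# human in the loop. The convention (three lowercase letters followed by
# four digits) is a hypothetical example.

ACCOUNT_PATTERN = re.compile(r"^[a-z]{3}\d{4}$")

def triage_new_account(username):
    """Classify a newly created account name against the convention."""
    if ACCOUNT_PATTERN.match(username):
        return "allow"               # matches convention: no action needed
    return "disable-and-alert"       # well-defined violation: safe to automate

print(triage_new_account("jqd0042"))   # allow
print(triage_new_account("backdoor"))  # disable-and-alert
```

The rule is safe to automate precisely because its condition is unambiguous; anything outside such crisp conditions is better escalated to a person.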
So are humans in the loop a failsafe or a liability? It depends on the scenario.
What’s your thought?
January 25, 2012
Nearly every analyst has made aggressive predictions that outsourcing to the cloud will continue to grow rapidly. It’s clear that servers and applications are migrating to the cloud as fast as possible, but according to an article in The Economist, the tradeoff is efficiency vs. sovereignty. The White House announced that the federal government will shut down 178 duplicative data centers in 2012, adding to the 195 that will be closed by the end of this year.
Businesses need motivation and capability to recognize business problems, solutions that can improve the enterprise, and ways to implement those solutions. There is clearly a role for outsourced solutions and it is one that enterprises are embracing.
For an engineer, however, the response to outsourcing can be one of frustration, and concerns about short-sighted decisions by management that focus on short term gains at the risk of long term security. But there is also an argument why in-sourcing isn’t necessarily the better business decision: a recent Gartner report noted that IT departments often center too much of their attention on technology and not enough on business needs, resulting in a “veritable Tower of Babel, where the language between the IT organization and the business has been confounded, and they no longer understand each other.”
Despite increased migration to cloud services, there does not appear to be an immediate impact on InfoSec-related jobs. Among the 12 computer-related job classifications tracked by the Department of Labor’s Bureau of Labor Statistics (BLS), information security analysts, along with computer and information research scientists, were among those reporting no unemployment during the first two quarters of 2011.
John Reed, executive director at IT staffing firm Robert Half Technology, attributes the high growth to the increasing organizational awareness of the need for security and hands-on IT security teams to ensure appropriate security controls are in place to safeguard digital files and vital electronic infrastructure, as well as respond to computer security breaches and viruses.
Simply put: the facility of using cloud services does not replace the skills needed to analyze and interpret the data to protect the enterprise. Outsourcing to a cloud may provide immediate efficiencies, but it’s the IT security staff who deliver business value that ensure long term security.
January 18, 2012
The past year has been a hair-raising series of IT security breakdowns and headline events, reaching as high as RSA itself falling victim to a phishing attack. And as 2011 drew to a close, the hacker group Anonymous remained busy, providing a sobering reminder that IT security can never rest.
It turned out that attackers sent two different targeted phishing e-mails to four workers at RSA’s parent company, EMC. The e-mails carried a malicious attachment, identified in the subject line as “2011 Recruitment plan.xls,” which was the point of attack.
Back to Basics:
Using administrative controls such as security awareness training, and technical controls such as firewalls, anti-virus and IPS, to stop attacks from penetrating the network. Most industry and government experts agree that security configuration management, along with automated patch management and up-to-date anti-virus software, is probably the best way to achieve the strongest security configuration allowable.
Employing a blend of technical controls such as anti-virus, IPS, intrusion detection systems (IDS), system monitoring, file integrity monitoring, change control, log management and incident alerting can help to track how and when system intrusions are being attempted.
Applying operating system upgrades, backup data restore and vulnerability mitigation and other controls to make sure systems are configured correctly and can prevent the irretrievable loss of data.
January 17, 2012
The beginning of a new year marks a time of reflection on the past and anticipation of the future. The result for analysts, pundits and authors is a near irresistible urge to identify important trends in their areas of expertise (real or imagined). I am no exception, so here are my thoughts on what we’ll see in the next year in the areas of application and evolution of Information Technology.
January 11, 2012
In the InfoSec industry, there is an abundance of familiar flaws and copycat theories and approaches. We repeat ourselves and recommend the same approaches. But what has really changed in the last year?
The emergence of hacking groups like Anonymous, LulzSec, and TeaMp0isoN.
In 2011, these groups brought the fight to corporate America, crippling firms both small (HBGary Federal) and large (Stratfor, Sony). As the year drew to a close, these groups shifted from prank-oriented hacks for laughs (or “lulz”) to aligning themselves with political movements like Occupy Wall Street, and hacking firms like Stratfor, an Austin, Texas-based security “think tank” that releases a daily newsletter on security and intelligence matters all over the world. After HBGary Federal CEO Aaron Barr publicly bragged that he was going to identify some members of the group during a talk at RSA Conference week in San Francisco, Anonymous members responded by dumping a huge cache of his personal emails and those of other HBGary Federal executives online, eventually leading to Barr’s resignation. Anonymous and LulzSec then spent several months targeting various retailers, public figures and members of the security community. Their Operation AntiSec aimed to expose alleged hypocrisies and sins by members of the security community. They targeted a number of federal contractors, including IRC Federal and Booz Allen Hamilton, exposing personal data in the process. Congress got involved in July, when Sen. John McCain urged Senate leaders to form a select committee to address the threat posed by Anonymous/LulzSec/WikiLeaks.
The attack on RSA SecurId was another watershed event. The first public news of the compromise came from RSA itself, when it published a blog post explaining that an attacker had been able to gain access to the company’s network through a “sophisticated” attack. Officials said the attacker had compromised some resources related to the RSA SecurID product, which set off major alarm bells throughout the industry. SecurID is used for two-factor authentication by a huge number of large enterprises, including banks, financial services companies, government agencies and defense contractors. Within months of the RSA attack, there were attacks on SecurID customers, including Lockheed Martin, and the current working theory espoused by experts is that the still-unidentified attackers were interested in LM and other RSA customers all along and, having run into trouble compromising them directly, went after the SecurID technology to loop back to the customers.
The specifics of the attack were depressingly mundane (targeted phishing email with a malicious Excel file attached).
Then too, several certificate authorities were compromised throughout the year. Comodo was the first to fall when it was revealed in March that an attacker (apparently an Iranian national) had been able to compromise the CA infrastructure and issue himself a pile of valid certificates for domains belonging to Google, Yahoo, Skype and others. The attacker bragged about his accomplishments in Pastebin posts and later posted evidence of his forged certificate for Mozilla. Later in the year, the same person targeted the Dutch CA DigiNotar. The details of the attack were slightly different, but the end result was the same: he was able to issue himself several hundred valid certificates and this time went after domains owned by, among others, the Central Intelligence Agency. In the end, all of the major browser manufacturers had to revoke trust in the DigiNotar root CA. The damage to the company was so bad that the Dutch government eventually took it over and later declared it bankrupt. Staggering, isn’t it? A lone attacker not only forced Microsoft, Apple and Mozilla to yank a root CA from their list of trusted roots, but he was also responsible for forcing a certificate authority out of business.
What has changed in our industry? Nothing, really. It’s not a question of “if” but “when” the attack will arrive on your assets.
Plus ça change, plus c’est la même chose, I suppose.
December 09, 2011
Changes in end-user behavior and the resulting “consumerization” of IT have contributed to the changing and expanding definition of Application Performance Management (“APM”). APM can no longer focus just on the application or the optimization of infrastructure against abstract limits; APM must now view performance from the end-user’s access point back across all infrastructure involved in the delivery of the service.
November 21, 2011
The commercialization of Cloud-based IT services, along with market and economic challenges are changing the way business services are conceived, created, delivered and consumed. This change is reflected in the growing interest in alternative delivery models and solutions.
October 13, 2011
Those in IT operations responsible for service delivery or infrastructure operations know what it’s like: you collect and store a growing amount of the data necessary to do your job, but at a rate that drives up cost. The problem with infinite detail is not much different from trying to organize and analyze noise; there’s plenty of it, but finding the signal underneath is the difficult, and critical, part.
September 20, 2011
I have two rules of thumb when it comes to audit logging: first, if it has a log, enable it. Second, if you can collect the log and archive it with your log management/SIEM solution, do it – even if you don’t set up any alert rules or reports.
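The second rule of thumb is cheap to follow even without a SIEM in place: collect the log and tuck a compressed, date-stamped copy into an archive so it exists when you need it. A minimal sketch, assuming local files and a hypothetical archive directory:

```python
import gzip
import shutil
import time
from pathlib import Path

# Minimal sketch of "collect and archive it, even without alerts or
# reports": compress a collected log into a date-stamped archive file.
# Paths are hypothetical; a real deployment would ship logs to a log
# management/SIEM solution instead.

def archive_log(src, archive_dir):
    """Compress src into archive_dir under a date-stamped name."""
    archive_dir = Path(archive_dir)
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d")
    dest = archive_dir / f"{Path(src).name}.{stamp}.gz"
    with open(src, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    return dest
```

Even unreviewed, such an archive is what makes after-the-fact incident investigation possible at all.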
August 30, 2011
Columbia, MD, August 30, 2011 — Prism Microsystems, a leading provider of comprehensive security and compliance software for the US Department of Defense (DoD) and US Federal Government agencies, today announced the release of EventTracker DriveShield, an easy-to-deploy solution designed to provide visibility to files copied to USB devices or burned to CD/DVD-W drives.
August 24, 2011
No one needs to be convinced that monitoring Domain Controller security logs is important, and member servers are equally important: most people understand that member servers are where “our data” is located. But I often face an uphill battle helping people understand why workstation security logs are so critical. Frequently I hear IT administrators tell me they have policies that forbid storing confidential information locally. But the truth is, workstations and laptops always have sensitive information on them – there’s no way to prevent it. Besides applications like Outlook, Offline Files and SharePoint Workspace that cache server information locally, there’s also the page file, which can contain content from any document or other information at any time.
August 17, 2011
Security and Compliance at Talbots
Talbots is a leading multi-channel retailer and direct marketer of women’s apparel, shoes and accessories, based in Tampa, Florida. Talbots is well known for its stellar reputation in classic fashion; everyone knows to look to Talbots when it is time to buy the perfect jacket or a timeless skirt. Talbots customers are women aged 35 and up who shop at its 568 stores in 47 states, through its catalogs, and online at www.talbots.com. Approximate sales for Talbots in 2010 were $991 million.
July 20, 2011
An area of audit logging that is often confusing is the difference between two categories in the Windows security log: Account Logon events and Logon/Logoff events. These two categories are related but distinct, and the similarity in the naming convention contributes to the confusion.
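The distinction becomes easier to hold onto with concrete event IDs: Account Logon events record credential validation (logged on the domain controller for domain accounts), while Logon/Logoff events record session activity on the machine actually accessed. The sketch below classifies a handful of common post-Vista Windows security event IDs; it is an illustrative subset, not an exhaustive mapping.

```python
# Illustrative classifier for the two commonly confused Windows security
# log categories. Account Logon = credential validation (on the DC for
# domain accounts); Logon/Logoff = session activity on the accessed
# machine. Event IDs shown are the post-Vista values; this is a small
# illustrative subset, not a complete list.

ACCOUNT_LOGON = {
    4768: "Kerberos TGT requested",
    4769: "Kerberos service ticket requested",
    4776: "NTLM credential validation",
}
LOGON_LOGOFF = {
    4624: "Logon",
    4625: "Failed logon",
    4634: "Logoff",
    4647: "User-initiated logoff",
}

def classify(event_id):
    """Return the security-log category for a given event ID."""
    if event_id in ACCOUNT_LOGON:
        return "Account Logon"
    if event_id in LOGON_LOGOFF:
        return "Logon/Logoff"
    return "Other"

print(classify(4776))  # Account Logon
print(classify(4624))  # Logon/Logoff
```

In practice this means a single interactive domain logon produces events in both categories, on two different machines, which is exactly why the naming confuses people.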
June 24, 2011
Noticed the raft of headlines about break-ins at companies? What you see is the proverbial tip of the iceberg. Why? Think about the hammering Sony took over the PlayStation hack, or how RSA will never live down the loss of its golden keys and the subsequent attack at Lockheed. Victims overwhelmingly prefer to keep quiet. When there is disclosure, it is usually because consumer information was lost, which is subject to notification laws. If corporate information is stolen, disclosure is often not required.
June 16, 2011
There’s been a lot of recent hype about security risks with the rise of virtualization, but much of it is vague and short on specifics. There is also an assumption that all the security available on a physical server simply disappears when it migrates to being a virtual machine. This is not true. A virtual server is the same server it was before it was P2V’d from a physical server: its authentication, access control, audit, and network controls remain as active as before.
May 25, 2011
The next significant horizon in audit log management will be the automation of the review and response tasks associated with security events. Currently, log management/SIEM solutions are expected to scour logs, identify high-impact changes or other suspicious activity, and simply send out an alert. It then requires the intercession of a person to assess the information, make inquiries, research and review data, and ultimately resolve the matter.
April 21, 2011
Intrusion detection and compliance are the focus of log management, SIEM and security logging. But security logs, when managed correctly, are also the only control over rogue admins. Once root or admin authority has been given to, or acquired by, a user, there is little they cannot do: with admin authority, they can circumvent access or authorization controls by changing settings, or use tools that leverage their root access to tamper with the internals of the operating system.
March 06, 2011
It’s a line from a song in the ’70s, but quite apt when it comes to describing the Windows security log. There’s no getting around the fact that there are a lot of useless and inexplicable events in the Security log, and the sooner you get comfortable with that, the sooner you’ll save your sanity and get on with work. In this article we’ll look at some common examples of noise events in the Security log and discuss strategies for dealing with them.
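One practical strategy for dealing with noise is a suppression list applied before human review. The sketch below uses a couple of frequently cited noise event IDs (4672 “special privileges assigned to new logon” and the Windows Filtering Platform connection events 5156/5158) as an assumed starting point; any real list must be tuned to your own environment.

```python
# Illustrative noise filter for Windows security events: suppress
# known-noise event IDs before human review. The NOISE_EVENT_IDS set
# (4672 "special privileges assigned", 5156/5158 Windows Filtering
# Platform connections) is a hypothetical starting point; tune it for
# your environment rather than treating it as authoritative.

NOISE_EVENT_IDS = {4672, 5156, 5158}

def worth_reviewing(events):
    """Keep only events whose IDs are not on the noise list."""
    return [e for e in events if e["id"] not in NOISE_EVENT_IDS]

sample = [
    {"id": 4672, "msg": "Special privileges assigned to new logon"},
    {"id": 4625, "msg": "Failed logon"},
]
print(worth_reviewing(sample))  # only the failed logon survives
```

The point is not to delete the noise (keep it archived), only to keep it out of the review queue.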
February 12, 2011
Randy Franklin Smith compares methods for detecting malicious activity from logs including monitoring for high impact changes, setting up tripwires and anomalous changes in activity levels. Security standards and auditors make much of reviewing logs for malicious activity. I am frequently asked what event signatures are indicative of intrusions: “What are the top Event IDs for intrusion detection?” Ah, if it was only as easy as the movies make it, where the protagonist furiously defends the network while a computer voice stridently calls out “Intruder! Intruder!”
January 16, 2011
In most previous newsletters, we have discussed the use of logging for various regulatory mandates (such as PCI DSS, HIPAA and FISMA) as well as the use of logs for incident response and malicious software tracking. This log data can also be incredibly useful for detecting and investigating insider abuse and internal attacks.
December 17, 2010
Despite the fact that the security industry has been fighting malicious software – viruses, worms, spyware, bots and other malware – since the late 1980s, malware still represents one of the key threats to organizations today. While the silly viruses of the 1990s and noisy worms (Blaster, Slammer, etc.) of the early 2000s have been replaced by commercial bots and so-called “advanced persistent threats,” the malware fight rages on.
November 15, 2010
Log Review for Incident Response: Part 2
From all the uses for log data across security, compliance and operations (see, for example, LogTalk: 100 Uses for Log Management #67: Secure Auditing – Solaris), using logs for incident response presents a truly universal scenario: you can be forced to use logs for incident response at any moment, whether you are prepared to or not.
August 16, 2010
The Federal Information Security Management Act of 2002 (FISMA) “requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.”
July 22, 2010
HIPAA Logging HOWTO, Part 2
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery” (HIPAA Act of 1996, http://www.hhs.gov/ocr/privacy/). A recent enhancement to HIPAA is the Health Information Technology for Economic and Clinical Health Act, or HITECH Act.
June 13, 2010
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) outlines relevant security and privacy standards for health information – both electronic and physical. The main mission of the law is “to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery”.
May 19, 2010
PCI Logging HOWTO, Part 2
The Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS has continued to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide. Despite its focus on reducing payment card transaction risk, PCI DSS also makes an impact on broader data security as well as network and application security.
April 21, 2010
PCI Logging HOWTO
The Payment Card Industry Data Security Standard (PCI DSS) was created by the major card brands – Visa, MasterCard, American Express, JCB and Discover – and is now managed by the PCI Security Standards Council. Since its creation in 2006, PCI DSS has continued to affect how thousands of organizations approach security. PCI applies to all organizations that handle credit card transactions or that store or process payment card data – and such organizations number in the millions worldwide.
March 26, 2010
Anomaly Detection and Log Management: What We Can (and Can’t) Learn from the Financial Fraud Space
Have you ever been in a store with an important purchase, rolled up to the cash register and handed over your card, only to have it denied? You scramble to think why: “Has my identity been stolen?” “Is there something wrong with the purchase approval network?” “Did I forget to pay my bill?” While all of the above are possible explanations…
February 07, 2010
Turning log information into business intelligence with relationship mapping
Now that we’re past January, most of us have received all of our W-2 and 1099 tax forms. We all know that it’s important to keep these forms until we’ve filed our taxes, and most of us also keep the forms for seven years after filing in case there is a problem with a previous year’s filing. But how many of us keep those records past the seven-year mark? Keeping too much data can be as problematic as not keeping records at all. One of the biggest problems with retaining too much information is that storage needs increase and it becomes difficult to parse through the existing data to find what’s most important.
January 17, 2010
Time won’t give me time: The importance of time synchronization for Log Management
Does this sound familiar? You get off a late night flight and wearily make your way to your hotel. As you wait to check in, you look at the clocks behind the registration desk and do a double-take.
December 11, 2009
Tuning Log Management and SIEM for Compliance Reporting
The winter holidays are quickly approaching, and one thing that could probably make most IT security wish lists is a way to produce automated compliance reports that make auditors say “Wow!” In last month’s newsletter, we took a look at ways to work better with auditors. This month, we’re going to take a deeper dive into tuning log management and SIEM for more effective compliance reporting.
November 16, 2009
Working Well with Auditors
For some IT professionals, the mere mention of an audit conjures painful images of being trussed and stuffed like a Thanksgiving turkey. If you’ve ever been through an audit that you weren’t prepared for, you may harbor your own unpleasant images of an audit process gone wrong. As recently as 10-15 years ago, many auditors were just learning their way around the “new world” of IT, while just as many computer and network professionals were beginning to learn their way around the audit world.
October 23, 2009
Today’s blog looks at Requirement 1 of the PCI Data Security Standard, which is about building and maintaining a secure network. We look at how logging solutions such as EventTracker can help you maintain the security of your network by monitoring logs coming from security systems.
October 12, 2009
I saw a headline a day or so ago on BankInfoSecurity.com about the Heartland data breach: Lawsuit: Heartland Knew Data Security Standard was ‘Insufficient’. It is worth a read as is the actual complaint document (remarkably readable for legalese, but I suspect the audience for this document was not other lawyers). The main proof of this insufficiency seems to be contained in point 56 in the complaint. I quote:
56. Heartland executives were well aware before the Data Breach occurred that the bare minimum PCI-DSS standards were insufficient to protect it from an attack by sophisticated hackers. For example, on a November 4, 2008 Earnings Call with analysts, Carr remarked that “[w]e also recognize the need to move beyond the lowest common denominator of data security, currently the PCI-DSS standards. We believe it is imperative to move to a higher standard for processing secure transactions, one which we have the ability to implement without waiting for the payments infrastructure to change.” Carr’s comment confirms that the PCI standards are minimal, and that the actual industry standard for security is much higher. (Emphasis added)
Despite not being a mathematician, I do know that the lowest common denominator does not mean minimal or barely adequate, but that aside, let’s look at the two claims in the last sentence.
It is increasingly popular to bash compliance regulations in the security industry these days, and often with good reason. We have heard and made the arguments many times before that compliant does not equal secure, and further, don’t embrace the standard, embrace the spirit or intent of the standard. But to be honest, the PCI DSS is far from minimal, especially by comparison to most other compliance regulations.
The issue with standards has been the fear that they make companies complacent. Does PCI-DSS make you safe from attacks from sophisticated hackers? Well, no, but there is no single regulation, standard or practice out there that will. You can make it hard or harder to get attacked, and PCI-DSS does make it harder, but impossible, no.
Is the Data Security Standard perfect? No. Is the industry safer with it than without it? In the case of PCI DSS, I would venture that it is. The significant groaning and hard work on the part of the industry to implement the standard would lead one to believe that companies were not doing these things prior – and there are not a lot of worthless requirements in the DSS. PCI DSS makes a company take positive steps like running vulnerability scans, examining logs for signs of intrusion, and encrypting data. If all those companies handling credit cards were not doing these things before the standard, imagine what it was like then.
The second claim is where the real absurdity lies — the assertion that the industry standard for security is so much better than PCI DSS. What industry standard are they talking about exactly? In reality, the industry standard for security is whatever the IT department can cajole, scare, or beg the executives into providing in terms of budget and resources – which is as little as possible (remember, this is capitalism – profits do matter). Using this as a basis, the actual standard for security is to do as little as possible, for the least amount of money, to avoid being successfully sued, having your executives put in jail, or losing business. Indeed PCI DSS forced companies to do more, but emphasis on the forced. (So, come to think of it, maybe Heartland did not follow the industry standard, as they are getting sued, but let’s wait on that outcome!)
Here is where I have my real problem with the entire matter. The statements taken together imply that Heartland had some special knowledge of the DSS’s shortcomings and did nothing, and indeed did not even do what other people in the industry were doing – the “industry standard”. The reality is that anyone with a basic knowledge of cyber security and the PCI DSS would have known the limitations; this no doubt included many, many people on the staffs of the banks that are suing. So whatever knowledge Heartland had, the banks that were customers of Heartland had as well – and even if they did not, Mr. Carr went so far as to announce it in the call noted above. If this state of affairs was so contrary to the norm, why didn’t the banks act in the interest of their customers and insist Heartland shape up, or fire them? What happened to the concept of the educated and responsible buyer?
If Heartland was not compliant, I have little sympathy for them; or if it can be proved they were negligent, well, have at them. But the banks here took a risk getting into the credit card issuing business – and no doubt made a nice sum of money – and they knew that the risk of a data breach and the follow-on expense existed. I thought the nature of risk was that you occasionally lose, and in the case of business, risk impacts your profits. This lawsuit seems like the recent financial bailout – the new expectation of risk in the financial community is that when it works, you pocket the money, and when it does not, you blame someone else to make them pay, or get a bailout!
October 05, 2009
Log Management in virtualized environments Back in the early/mid-90s I was in charge of the global network for a software company. We had a single connection to the Internet and had set up an old Sun box as the gatekeeper between our internal network and the ‘net. My “log management” process consisted of keeping a terminal window open on my desktop where I streamed the Sun’s system logs (or “tailed the syslog”) in real time.
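The core of that mid-90s "tail the syslog" workflow can be sketched in a few lines: remember the file offset between reads and return only what was appended since. This is roughly what `tail -f` does, minus the polling loop.

```python
def read_new_lines(path, state):
    """Return lines appended to the file since the last call.
    'state' is a dict that carries the byte offset between calls."""
    with open(path, "r") as f:
        f.seek(state.get("offset", 0))
        lines = f.readlines()
        state["offset"] = f.tell()
    return [line.rstrip("\n") for line in lines]

# Usage sketch: poll in a loop (or use inotify) and stream to the console.
# state = {}
# while True:
#     for line in read_new_lines("/var/log/syslog", state):
#         print(line)
#     time.sleep(1)
```

This scales to exactly one watcher and one terminal window, which is the author's point: it worked for a single Sun box, not for a virtualized estate.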
September 17, 2009
Eric Knorr, the Editor in Chief over at InfoWorld, has been writing about “IT Dark Matter”, which he defines as system, device and application logs. It turns out half of enterprise data is logs, or so-called Dark Matter. Not hugely surprising, and certainly good news for the data storage vendors – and hopefully for SIEM vendors like us! He described these logs or dark matter as “widely distributed and hidden”, which got me thinking. The challenge with blogging is that we have to reduce fairly complex concepts and arguments into simple claims, otherwise posts end up being online books. The good thing about that simplification, however, is that it often gives a good opportunity to point out other topics of discussion.
There are two great challenges in log management – the first is being able to provide the tools and knowledge to make the log data readily available and useful, which leads to Eric’s comment on how Dark Matter is “Hidden” as it is simply too hard to mine without some advanced equipment. The second challenge, however, is preserving the record – making sure it is accurate, complete and unchanged. In Eric’s blog this Dark Matter is “widely distributed” and there is an implied assumption that this Dark Matter is just there to be mined – that the Dark Matter will and does exist and even more so, it is accurate. In reality it is, for all practical purposes, impossible to have logs widely distributed and expect them to be complete and accurate – this fatally weakens their usefulness.
Let’s use a simple illustration we all know well in computer security – almost the first thing a hacker will do once they penetrate a system is shut down logging, or, as soon as they finish whatever they are doing, delete or alter the logs. Consider the analogy of video surveillance at your local 7/11. How useful would it be if you left the recording equipment out in the open at the cash register, unguarded? Not very useful, right? When you do nothing to secure the record, the value of the record is compromised, and the more important the record, the more likely it is to be compromised or simply deleted.
This is not to imply that there are no useful nuggets to be mined even if the records are distributed. But without an effort to secure and preserve them, logs become the trash heap of IT. Archeologists spend much of their time digging through the trash of civilizations to figure out how people lived. Trash is an accurate indication of what really happened simply because 1) it was trash and had no value, and 2) no one worried that someone 1000 years later was going to dig it up. It represents a pretty accurate, if fragmentary, picture of day-to-day existence. But don’t expect to find treasure, state secrets or individual records in the trash heap. The usefulness of the record is 1) a matter of luck that the record was preserved, and 2) inversely proportional to the interest of the creating parties in modifying it.
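One standard way to "secure the record" is a hash chain: each entry's digest covers the entry plus the previous digest, so altering or deleting any earlier record breaks every later digest. A minimal sketch (the log entries are invented):

```python
import hashlib

def chain(entries):
    """Hash-chain log entries: each SHA-256 digest covers the entry text
    plus the prior digest, making deletion or alteration detectable."""
    digests, prev = [], b""
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        digests.append(prev.hex())
    return digests

def verify(entries, digests):
    """Recompute the chain and compare against the sealed digests."""
    return chain(entries) == digests

log = ["user alice logon", "config change on fw1", "user alice logoff"]
sealed = chain(log)

print(verify(log, sealed))                            # True
tampered = ["user alice logon", "user alice logoff"]  # a record deleted
print(verify(tampered, sealed))                       # False
```

Real log management products combine this kind of integrity sealing with prompt collection off the source host, so an intruder who gains root cannot quietly rewrite history.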
– Steve Lafferty
September 17, 2009
The threat within: Protecting information assets from well-meaning employees Most information security experts will agree that employees form the weakest link when it comes to corporate information security. Malicious insiders aside, well-intentioned employees bear responsibility for a large number of breaches today. Whether it’s a phishing scam, a lost USB or mobile device that bears sensitive data, a social engineering attack or downloading unauthorized software, unsophisticated but otherwise well-meaning insiders have the potential of unknowingly opening company networks to costly attacks.
August 27, 2009
I came across this interesting (and scary if you are a business person) article in the Washington Post. In a nutshell, pretty much every business banks electronically. Some cyber gangs in Eastern Europe have come up with a pretty clever method to swindle money from small and medium-sized companies. They run a targeted email attack on the finance guys and get them to click on a bogus attachment – when they do so, key-logging malware is installed that harvests electronic bank account passwords. These passwords are then used to transfer large sums of money to the bad guys.
The article is definitely worth a read for a number of reasons, but what I found surprising was first that businesses do not have the same protection from electronic fraud as consumers do so the banks don’t monitor commercial account activity as closely, and second, just how much this type of attack is happening. Turns out businesses only have 2 days to report fraudulent activity instead of a consumer’s 60 days so businesses that suffer a loss usually don’t recover their money.
My first reaction was to ring up our finance guys and tell them about the article. Luckily their overall feel was that since Marketing spent the money as quickly as the Company made it, we were really not too susceptible to this type of attack as we had no money to steal – an unanticipated benefit of a robust (and well paid, naturally!) marketing group. I did make note of this helpful point for use during budget and annual review time.
My other thought was how this demonstrated the usefulness of efforts like the Consensus Audit Guidelines from SANS. Sometimes security personnel pooh-pooh the basics, but you can make it a lot harder on the bad guys with some pretty easy blocking and tackling activity. CAG Control 12 talks about monitoring for active and updated anti-virus and anti-spyware on all systems. Basic, but it really helps – remember, a business does not have 60 days but 2. You can’t afford to notice the malware a week after the signatures finally get updated.
There are a number of other activities available in advanced tools such as EventTracker that can also really help to prevent these attacks – change monitoring, tracking first-time executable launches, monitoring that the AV application has not been shut down, and monitoring network activity for anomalous behavior – but that is a story for another day. If you can’t do it all, at least start with the obvious – you might not be safe, but you will be safer.
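The CAG Control 12 idea above can be sketched as a simple health check: is an AV engine actually running, and are its signatures fresh enough to matter inside that 2-day window? The process names and dates below are hypothetical examples, not real product names.

```python
from datetime import date, timedelta

def av_healthy(running_processes, sig_updated, today, max_age_days=3):
    """Basic blocking and tackling: an AV engine must be running AND its
    signatures must be recent. Process names here are hypothetical."""
    av_names = {"avengine.exe", "clamd"}  # assumed AV process names
    engine_up = bool(av_names & set(running_processes))
    sigs_fresh = (today - sig_updated) <= timedelta(days=max_age_days)
    return engine_up and sigs_fresh

today = date(2009, 8, 27)
print(av_healthy(["clamd", "sshd"], date(2009, 8, 26), today))  # True
print(av_healthy(["sshd"], date(2009, 8, 26), today))           # False: engine down
print(av_healthy(["clamd"], date(2009, 8, 1), today))           # False: stale signatures
```

In practice a log management tool would derive both inputs from collected events (service stop/start logs, signature update logs) rather than polling each host directly.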
August 12, 2009
Every drop in the business cycle brings out the ‘get more value for your money’ strategies. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with a fast initial payback that can be used to solve a wide range of other problems. This series looks at how different log management tasks can be applied beyond the traditional compliance and security drivers so that companies can get more value for their IT money.
July 19, 2009
Smart Value: Getting more from Log Management Every dip in the business cycle brings out the ‘get more value for your money’ strategies, and our current “Kingda Ka style” economic drop only increases the urgency of implementing them. For IT this usually means either using the tools you have to solve a wider range of problems, or buying a tool with a fast initial payback that can be used to solve a wide range of other problems.
June 14, 2009
Log and security event management tame the wild west environment of a university network Being a network administrator in a university environment is no easy task. Unlike the corporate world, a university network typically has few restrictions over who can gain access; what type or brand of equipment people use at the endpoint; how those endpoint devices are configured and managed; and what users do once they are on the network.
May 19, 2009
The Verizon Business Risk Team publishes a useful Data Breach Investigations Report drawn from over 500 forensic engagements over a four-year period.
The report describes a “Time Span of Breach” event broken into four stages of an attack. These are:
– Pre-Attack Research
– Point of Entry to Compromise
– Compromise to Discovery
– Discovery to Containment
The first two are under the control of the attacker, but the rest are under the control of the defender. Where log management is particularly useful is in discovery. So what does the 2008 version of the DBIR show about the time between Compromise and Discovery? Months. Sigh. Worse yet, in 70% of the cases, Discovery consisted of the victim being notified by someone else.
Conclusion? Most victims do not have sufficient visibility into their own networks and equipment.
It’s not hard but it is tedious. The tedium can be relieved, for the most part, by a one-time setup and configuration of a log management system. Perhaps not the most exciting project you can think of but hard to beat for effectiveness and return on investment.
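The "one-time setup" amounts to writing simple rules like the following sketch, which flags accounts with bursts of failed logons. The event data and threshold are invented for illustration; the point is that even one such rule narrows the compromise-to-discovery window from months to whenever the rule next runs.

```python
from collections import defaultdict

def login_failure_bursts(events, threshold=5):
    """Flag accounts with at least 'threshold' failed logons.
    'events' is a list of (account, outcome) pairs from parsed logs."""
    counts = defaultdict(int)
    for account, outcome in events:
        if outcome == "failure":
            counts[account] += 1
    return sorted(a for a, n in counts.items() if n >= threshold)

# Hypothetical day of logon events: a service account being brute-forced
# plus one ordinary user typo.
events = [("svc_backup", "failure")] * 6 + [("alice", "failure"), ("alice", "success")]
print(login_failure_bursts(events))  # ['svc_backup']
```

A log management system runs dozens of such rules continuously and handles the tedium (collection, parsing, scheduling, alert routing) for you.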
May 08, 2009
Have your cake and eat it too – improve IT security, comply with multiple regulations while reducing operational costs and saving money Headlines don’t lie. The number and severity of security breaches suffered by companies has consistently increased over the past couple of years and statistics show that 9 out of 10 businesses will suffer an attack on their corporate network in 2009.
April 21, 2009
How logs support data forensics investigations Novak and his team have been involved in hundreds of investigations employing data forensics. He says log data is a vital resource in discovering the existence, extent and source of any security breach. “Computer logs are central and pivotal components to any forensic investigation,” according to Novak. “They are a ‘fingerprint’ that provides a record of computer and system activities that may demonstrate a data leak or security breach.” The incriminating activities might include failed login attempts
April 20, 2009
A few months ago I wrote some thoughts on cloud security and compliance. The other day I came across this interesting article in Network World about SaaS security, and it got me thinking on the subject again. The Burton analyst quoted, Eric Maiwald, made some interesting and salient points about the challenges of SaaS security, but he stopped short of explicitly addressing compliance issues. If you use a SaaS service and you are subject to any one of the myriad compliance regulations, how will you demonstrate compliance if the SaaS app is processing critical data subject to the standard? And is the vendor passing a SAS-70 audit going to satisfy your auditors and free you of any compliance requirement?
Mr. Maiwald makes a valid point that you have to take care in thinking through the security requirements and put them in the contract with the SaaS vendor. The same also holds true for any compliance requirement. But he raises an even more critical point when he states that SaaS vendors want to offer a one-size-fits-all product (rightly so, or else I would put forward we would see a lot of belly-up SaaS vendors). My question then becomes: how can an SME that is subject to compliance mandates, but lacks the purchasing power to negotiate a cost-effective agreement with a SaaS vendor, take advantage of the benefits such services provide? Are we looking at one of these chicken-and-egg situations where the SaaS vendors don’t see the demand because the very customers they would serve are unable to use their service without this enabling capability? At the very least, I would think SaaS vendors would benefit from building in the same audit capabilities that other enterprise application vendors are, and making them available (maybe for a small additional fee) to their customers. Perhaps it could be as simple as user and admin activity auditing, but it seems to me a no-brainer – if a prospect is going to let critical data and services go outside their control, they are going to want the same visibility they had when those resided internally, or else it becomes a non-starter until the price is driven so far down that reward trumps risk. Considering we will likely see more regulation, not less, in the future, that price may well be pretty close to zero.
April 13, 2009
As a vendor of a log management solution, we come across prospects with a variety of requirements — consistent with a variety of needs and views of approaching problems.
Recently, one prospect was very insistent on “real-time” processing. This is perfectly reasonable but as with anything, when taken to an extreme, can be meaningless. In this instance, the “typical” use case (indeed the defining one) for the log management implementation was “a virus is making its way across the enterprise; I don’t have time to search or refine or indeed any user (slow) action; I need instant notification and ability to sort data on a variety of indexes instantly”.
As vendors we are conditioned to think “the customer is always right”, but I wonder if the requirement is reasonable or even possible. Given the specifics of a scenario, I am sure many vendors can meet the requirement — but in general? Without knowing which OS, which attack pattern, or how logs are generated and transmitted?
I was reminded of this again by a blog post from Bejtlich, in which he explains that “If you only rely on your security products to produce alerts of any type, or blocks of any type, you will consistently be ‘protected’ from only the most basic threats.”
While real-time processing of logs is a perfectly reasonable requirement, retrospective security analysis is the only way to get a clue as to attack patterns and therefore a defense.
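Retrospective analysis can be sketched very simply: once an indicator of compromise becomes known (often from an external report, well after the fact), sweep the archived logs for earlier occurrences. The archive entries and indicator below are invented for illustration.

```python
def retrospect(archive, indicator, since):
    """Search archived log records (timestamp, host, message) for a
    newly learned indicator of compromise. This is the retrospective
    half of the argument: you only know what to look for after the fact."""
    return [(ts, host) for ts, host, msg in archive
            if ts >= since and indicator in msg]

archive = [
    ("2009-04-01", "ws12", "GET /update.exe from 198.51.100.9"),
    ("2009-04-03", "ws07", "dns query evil.example.net"),
    ("2009-04-09", "ws12", "dns query evil.example.net"),
]
# Hypothetical indicator learned from a threat report after the events occurred.
print(retrospect(archive, "evil.example.net", "2009-04-01"))
# [('2009-04-03', 'ws07'), ('2009-04-09', 'ws12')]
```

No real-time alert could have fired on those records at the time, because the indicator did not yet exist; only a preserved, searchable archive makes the attack pattern visible.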
March 12, 2009
Overcoming the blind spot of mobile computing For many organizations, mobile computing has become a strategic approach to improve productivity for sales professionals, knowledge workers and field personnel. As a result, the Internet has become an extension of the corporate network. Mobile and remote workers use the Internet as the means to access applications and resources that previously were only available to “in-house” users – those who are directly connected to the corporate network.
February 14, 2009
How LM / SIEM plays a critical role in the integrated system of internal controls Many public companies are still grappling with the demands of complying with the Sarbanes-Oxley Act of 2002 (SOX). SOX Section 404 dictates that audit functions are ultimately responsible for ensuring that financial data is accurate. One key aspect of proof is the absolute verification that sufficient control has been exercised over the corporate network where financial transactions are processed and records are held.
January 09, 2009
Log Management can find answers to every IT-related problem Why can I say that? Because I think most problems get handled the same way. The first stage is someone getting frustrated with the situation. They then use tools to analyze whatever data is accessible to them. From this analysis, they draw some conclusions about the problem’s answer, and then they act. Basically, finding answers to problems requires the ability to generate intelligence and insight from raw data.
December 15, 2008
Cloud computing has been described as a trade off between sovereignty and efficiency. Where is security (aka Risk Transfer) in this debate?
Chris Hoff notes that yesterday’s SaaS providers (Monster, Salesforce) are now styled as cloud computing providers in his post.
CIOs, under increasing cost pressure, may begin to accept the efficiency argument that cloud vendors have economies of scale in both the acquisition and operations of the data center.
But hold up…
To what extent is the risk transferred when you move data to the cloud? To a very limited extent, at most to the SLA. This is similar to the debate where one claims compliance (Hannaford, NYC and now sadly Mumbai) but attacks take place anyway, causing great damage. Would an SLA save the Manager in such cases? Unlikely.
In any case, the generic cloud vendor does not understand your assets or your business. At most, they can understand threats, in general terms. They will no doubt commit to the SLA but these usually refer to availability not security.
Thus far, general purpose, low cost utility or “cloud” infrastructure (such as Azure or EC2), or SaaS vendors (salesforce.com) do not have very sophisticated security features built in.
So as you ponder the Sovereignty v/s Efficiency tradeoff, spare a thought for security.
December 12, 2008
Don’t look now, but the Web 2.0 wave is crashing onto corporate beaches everywhere. Startups, software vendors, and search engine powerhouses are all providing online accounts and services for users to create wikis, blogs, etc. for collaborating and sharing corporate data, often without the knowledge or involvement of IT or in-house legal counsel.
November 09, 2008
Cutting through SIEM/Log Management vendor hype While there is little doubt that SIEM solutions are critical for compliance, security monitoring or IT optimization, it is getting harder for buyers to find the right product for their needs. The reason for this is two fold; firstly, there are a number of products available and vendors have done a great job of making their products sound roughly the same in core features such as correlation, reporting, collection, etc.
October 29, 2008
The Economist opines that the world is flirting with recession and IT may suffer; which in turn will hasten the move to “cloud computing”, which in a pithy distillation is described as “a trade-off between sovereignty and efficiency”.
Computing as a borderless utility? Whereas most privacy laws assume data resides in one place…the cloud makes data seem present everywhere and nowhere.
In a recent post Steve differentiated between security OF the cloud and security IN the cloud. This led us to an analysis of cloud computing as it is currently offered by Amazon AWS, Google Apps and Zoho.
From a risk perspective, security of content IN the cloud is essentially considered your problem by Amazon whereas Google and Zoho say “trust in me, just in me”. When pressed, Google says “we do not recommend Google Apps for content subject to compliance regulations” but is apparently working to assuage concerns about access control.
However moving your data to the cloud does not absolve you from responsibility on who accessed it for what purpose — the main concern of auditors everywhere.
At the present time, neither Google nor Zoho make any audit trail available to subscribers while at Amazon it’s your problem. We think widespread adoption by the business community (and what of the federal government?) will require significant transparency to provide visibility. This is also true for popular hosted applications like Intuit Quickbooks and Salesforce.
As Alex notes “…in order to gain that visibility, our insight into Cloud Risk Management must include significant provisions for understanding a joint ability to Prevent/Detect/Respond as well as provisions for managing the risk that one of the participants won’t provide that visibility or ability via SLA’s and penalties.”
Clear as mud.
October 14, 2008
Performing well during a security crisis “Every crisis offers you extra desired power” – William Moulton Marston. Jasmine’s corollary: “Only if you perform well during that crisis.” Crises will happen no matter how many precautions we take. The need to blame someone is a human desire, and it is easy to focus that on the crisis response team, because they are visible. Yet when teams perform well during a crisis they don’t merely avoid blame; they garner the potential to become powerful advisors or outright leaders. It’s even better if you can also demonstrate that lessons learned from past crises are making the current environment more secure. After all, the Justice League members wouldn’t be heroes if no one knew about their actions. But what does it mean to perform well in a crisis?
September 16, 2008
Data leakage and the end of the world Most of the time when IT folk talk about data leakage they mean employees emailing sensitive documents to Gmail accounts or exposing the company through peer-to-peer networks or the burgeoning use of social networking services.
August 08, 2008
Hot server virtualization and cold compliance Without a doubt, server virtualization is a hot technology. NetworkWorld reported: “More than 40% of respondents listed consolidation as a high priority for the next year, and just under 40% said virtualization is more directly on their radar.” They also reported that server virtualization remains one of IT’s top initiatives even as IT executives are bracing themselves for potential spending cuts. Another survey of 100 US companies shows 60% of the respondents are currently using virtualization in production to support non-mission-critical business services. In other words, they are using it in a “production sandbox” before deploying it on a large scale.
July 15, 2008
Fear, boredom and the pursuit of compliance When it comes right down to it, we try to comply with regulations and policies because we are afraid of the penalties. Penalties such as corporate fines and jail time may be for the executive club, but everyone is affected when the U.S. Federal Trade Commission starts directly overseeing your security audits and risk assessment programs for 20 years. Just ask the IT folks at TJX Cos Inc. Then there are the hits to the top line as customers get shy about using their credit cards with you, and the press has fun raking you through the mud.
June 12, 2008
Creating lasting change from security management Over the past year, I’ve dealt with how to implement a Pragmatic approach to security management and then dug deeper into the specifics of how to successfully implement a security management environment. Think of those previous tips as your high-school-level education in security management.
May 17, 2008
Is it better to leave some logs behind? Log management has emerged in the past few years as a must-do discipline in IT for complying with regulatory standards, and protecting the integrity of critical IT assets. However, with millions of logs being spit out on a daily basis by firewalls, routers, servers, workstations, applications and other sources across a network, enterprises are deluged with log data and there is no stemming the tide.
April 07, 2008
The three basic ingredients of any business are technology, processes and people. From an IT security standpoint, which of these is the weakest link in your organization? Whichever it is, it is likely to be the focus of attack.
Organizations around the globe routinely employ the use of powerful firewalls, anti-virus software and sophisticated intrusion-detection systems to guard precious information assets. Year in and year out, polls show the weakest link to be processes and the people behind them. In the SIEM world, the absence of a process to examine exception reports to detect non-obvious problems is one manifestation of process weakness.
Another is the failure to audit user activity, especially privileged user activity, which must match approved requests and pass the reasonableness test (e.g., performed during business hours).
The reality is that not all threats are obvious and detected or blocked by automation. You must apply the human element appropriately.
Earlier this decade, the focus of security was the perimeter and the internal network. Technologies such as firewalls and network based intrusion detection were all the rage. While these are necessary, vital even, defense in depth dictates that you look carefully at hosts and user activity.
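The "reasonableness test" for privileged activity mentioned above can be sketched as a simple rule: flag privileged logons that occur outside business hours. The accounts, timestamps, and hour window below are hypothetical.

```python
from datetime import datetime

def unreasonable_admin_logons(events, start_hour=8, end_hour=18):
    """Flag privileged logons outside business hours, one simple
    reasonableness test for privileged-user activity.
    'events' is a list of (user, iso_timestamp, is_privileged) tuples."""
    flagged = []
    for user, ts, privileged in events:
        hour = datetime.fromisoformat(ts).hour
        if privileged and not (start_hour <= hour < end_hour):
            flagged.append((user, ts))
    return flagged

events = [
    ("admin_bob", "2008-04-07T02:14:00", True),   # 2 AM admin logon: suspicious
    ("alice",     "2008-04-07T02:30:00", False),  # non-privileged: ignored here
    ("admin_bob", "2008-04-07T10:05:00", True),   # business hours: fine
]
print(unreasonable_admin_logons(events))  # [('admin_bob', '2008-04-07T02:14:00')]
```

In a real deployment each flagged event would then be matched against an approved change request; the rule only surfaces candidates for that human review.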
March 07, 2008
The 5 W’s of security management I’ve seen it happen about a thousand times if I’ve seen it once. A high profile project ends up in a ditch because there wasn’t a proper plan defined AHEAD of time. I see this more often in “squishy” projects like security management because success isn’t easily defined. It’s not like installing a web application firewall, which will be deemed a success if it blocks web attacks.
February 02, 2008
Understanding where SIM ends and log management begins In my travels, I tend to run into two types of security practitioners. The first I’ll call the “sailor.” These folks are basically adrift in the lake in a boat with many holes. They’ve got a little cup and they work hard every day trying to make sure the water doesn’t overcome the little ship and sink their craft. The others I’ll call the “builders,” and these folks have gotten past the sailor phase, gotten their ship to port and are trying to build a life in their new surroundings. Thus, they are trying to lay the foundation for a strong home that can withstand whatever the elements have to offer.
January 18, 2008
Selection criteria for pragmatic Log Management As we wrap up our 6-month tour of Pragmatic Log Management, let’s focus on what are some of the important buying criteria that you should consider when looking at log management offerings. Ultimately, a lot of the vendors in the space have done a good job of making all the products sound the same. So really deciphering what differentiates one product versus another is an art form.
January 14, 2008
Mid-size organizations continue to be tossed on the horns of the Security/Compliance dilemma. Is it reasonable to consider regulatory compliance a natural benefit of a security focused approach?
Consider why regulatory standards came into being in the first place. Some like PCI-DSS, FISMA and DCID/6 are largely driven by security concerns and the potential for loss of high value data. Others like Sarbanes-Oxley seek to establish responsibility for changes and are an incentive to blunt the insider threat. Vendor provided Best Practices have come about because of concerns about “attack surface” and “vulnerability”. Clearly security issues.
While large organizations can establish dedicated “compliance teams”, the high cost of such an approach precludes it as an option for mid-tier organizations. If you could only have one team and effort and had to choose, it’s a no-brainer: security wins. Accordingly, such organizations naturally fold compliance efforts into the security team and its budget.
While this is a reasonable approach, recognize that some compliance regulations are more auditor- and governance-related, and a strict security view is a misfit. An adaptation is to transition the ownership of tools and their use from the security team to the operational team.
The classic approach for mid-size organizations to the dilemma — start as a security focused initiative, transition to the operations team.
December 12, 2007
Buying a Pragmatic Log Management Solution Over the past 4 months, we’ve discussed many of the reasons that log management is critical. To quickly review, log management can help you react faster from an operational aspect – so you can pinpoint an incident and remediate any issues well ahead of a significant loss. Secondly, log management helps in the event of an incident in terms of having rock-solid evidence to investigate a breach and hopefully bring the perpetrator to justice.
November 10, 2007
Log Management and Compliance In past articles, I’ve covered how log management helps with operations and incident response, all in a distinctly “Pragmatic” way. This month we are going to address what I consider to be the 3rd leg of the stool – compliance. Security professionals have a love/hate relationship with compliance.
October 15, 2007
Log Management and Incident Response I’m going to let you in on a little secret. It’s a tough message to hear, but part of being Pragmatic is not deluding yourself about what you can and can’t do. The cold, harsh reality of today’s information security environment is that you will be compromised. I don’t know whether it will be tomorrow, next Tuesday, or some other time in the future, but it will happen. There are just too many legitimate attack vectors, too many restrictions on what we can and can’t do, and too many limitations on budget and resources to ever be “truly secure.”
September 08, 2007
Log Management and Pragmatic Operations Last month, I introduced the Pragmatic CSO methodology, a 12-step program to help security professionals overcome their addiction to throwing new products at every new attack vector and security problem. The process also helps security professionals build a value proposition, interface more effectively with senior management, and run their security operation as a business. As a high-level construct, the 12 steps are helpful, but ultimately security professionals need to do something, and that’s what we are going to discuss this month.
August 17, 2007
Looking at Log Management Pragmatically As the first article in a 6-part series on the specifics of log management, I want to introduce the Pragmatic CSO methodology and explain how and why log management is important to achieving the goals of the Chief Security Officer. This piece lays the foundation for the journey we will take together over the next 6 months.
June 17, 2007
Collect Vista Events Microsoft has made some considerable changes to event management in Windows Vista. One major change is the way you can now centrally collect events from a variety of systems. This article is the fifth in a series that demystifies the Vista Event Log. Windows Vista includes an updated implementation of Microsoft’s remote management infrastructure: Windows Remote Management (WinRM). The Vista Event Log uses WinRM along with the Windows Event Collector service as the engines for collecting events from remote machines and sending them to a central event collector system.
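As a taste of what the article covers, collector-initiated event forwarding on Vista is driven by a subscription definition loaded with the wecutil tool. Below is a minimal sketch of such a subscription; the subscription ID, description, and query are purely illustrative, and the exact element names should be checked against Microsoft's Windows Event Collector documentation.

```xml
<!-- Illustrative event-forwarding subscription; load it on the collector with
     "wecutil cs subscription.xml" after running "winrm quickconfig" on the
     source machines and "wecutil qc" on the collector. -->
<Subscription xmlns="http://schemas.microsoft.com/2006/03/windows/events/subscription">
  <SubscriptionId>CentralEventCollection</SubscriptionId>
  <SubscriptionType>SourceInitiated</SubscriptionType>
  <Description>Forward System log events to the central collector</Description>
  <Enabled>true</Enabled>
  <!-- XPath query selecting which events the sources forward -->
  <Query><![CDATA[
    <QueryList>
      <Query Path="System">
        <Select>*</Select>
      </Query>
    </QueryList>
  ]]></Query>
  <!-- Forwarded events land in the collector's ForwardedEvents log -->
  <LogFile>ForwardedEvents</LogFile>
</Subscription>
```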
May 05, 2007
Automate Vista Events Microsoft has made some considerable changes to event management in Windows Vista. One major change is the way you can link events to automated tasks. This article is the fourth in a series that demystifies the Vista Event Log. When you manage events, you often wish you could generate automatic actions when specific events occur. For example, it would be nice if you could automatically delete temporary files and send a notification to desktop technicians when PC disk drives get too full. In another scenario, it would be nice if you could receive automatic
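The event-to-task linkage described above is expressed in a Task Scheduler XML task definition as an EventTrigger whose Subscription element carries an XML-escaped XPath event query. A hedged sketch for the low-disk-space scenario mentioned; the event ID and log path are assumptions for illustration:

```xml
<!-- Illustrative Task Scheduler trigger fragment: fire a task when the
     System log records a matching event (EventID=2013 is assumed here
     to represent a low-disk-space condition). -->
<EventTrigger>
  <Enabled>true</Enabled>
  <!-- The XPath event query is XML-escaped inside the Subscription element -->
  <Subscription>&lt;QueryList&gt;&lt;Query Path="System"&gt;&lt;Select&gt;*[System[(EventID=2013)]]&lt;/Select&gt;&lt;/Query&gt;&lt;/QueryList&gt;</Subscription>
</EventTrigger>
```

The same linkage can also be created interactively via “Attach Task To This Event” in the Vista Event Viewer, which generates a trigger of this shape behind the scenes.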
April 16, 2007
Explore the Vista Task Scheduler Microsoft has made some considerable changes to event management in Windows Vista. One related change is the way the Vista Task Scheduler has been enhanced. These enhancements allow you to link events to automated tasks. This article is the third in a series that demystifies the Vista Event Log.
March 10, 2007
Explore the Vista Event Log Microsoft has made some considerable changes in the Windows Vista Event Log. It sports a new interface and a significant number of new event categories, making it much more useful than ever before. This article is the second in a series that demystifies the Vista Event Log.
February 20, 2007
Industry News: Logging Data Extracts Puts Some Agencies in a Bind. SPECIAL REPORT: Case Study No. 3 – Mandate Forces Changes in Who Accesses Information. OMB gives agencies 45 days to begin logging all computer-readable data extracts and, after 90 days, to verify whether the data has been erased or is still needed. Very few agencies, if any, have met this most challenging of the four mandates, industry and federal experts said.
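The 90-day verification requirement boils down to an age-and-status check over an inventory of logged extracts. A minimal sketch in Python; the record fields, sample data, and function name are entirely assumed for illustration:

```python
from datetime import date, timedelta

# Illustrative inventory of logged data extracts (field names assumed).
# Per the mandate, each extract must be verified as erased, or documented
# as still needed, within 90 days of creation.
extracts = [
    {"id": "EX-001", "created": date(2007, 1, 2),  "erased": True,  "still_needed": False},
    {"id": "EX-002", "created": date(2006, 10, 1), "erased": False, "still_needed": False},
]

def overdue_extracts(records, today, window_days=90):
    """Return IDs of extracts past the verification window that are
    neither erased nor documented as still needed."""
    cutoff = today - timedelta(days=window_days)
    return [r["id"] for r in records
            if r["created"] <= cutoff
            and not r["erased"]
            and not r["still_needed"]]

print(overdue_extracts(extracts, today=date(2007, 2, 20)))  # → ['EX-002']
```

In practice the inventory would come from the extract logs the mandate requires agencies to keep; the point is only that the verification step itself is a simple, automatable report.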
January 12, 2007
Manage Change in Windows Vista Microsoft has made some considerable changes in the Windows Vista Event Log. How do those changes affect system auditing and how will they change the way you monitor systems? This article is the first in a series that demystifies the Vista Event Log.
See EventTracker in action!
Join our next live demo November 19th at 2:00 p.m. EST.