What good is Threat Intelligence integration in a SIEM?


Bad actors and their actions are more and more prevalent on the Internet. Who are they? What are they up to? Are they prowling in your network?

The first two questions are answered by Threat Intelligence (TI); the answer to the last one can be provided by a SIEM that integrates TI into its functionality.

But wait, don’t buy just yet, there’s more, much more!

Threat Intelligence when fused with SIEM can:
• Validate correlation rules and improve alert baselining by raising the priority of rules that also point at TI-reported “bad” sources
• Detect compromised boxes, bots, etc. that call home from your network
• Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
• Match historical log data against current TI data (see the sketch after this list)
• Review past TI data as key context for events, alerts, incidents, etc. under review
• Enable automatic action due to better context available from high-quality TI feeds
• Run TI effectiveness reports in a SIEM (how much TI leads to useful alerts and incidents?)
• Validate source IPs in web server logs to profile visitors and reduce service to those appearing on bad lists (uncommon)
and the beat goes on…
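
Several of these use cases reduce to one operation: intersecting log data with a TI list. Here is a minimal Python sketch of the historical-matching bullet above; the feed contents and log records are invented for illustration:

# Match archived log data against a current TI list of "bad" IPs.
# The IPs and records below are invented for illustration.
bad_ips = {"203.0.113.7", "198.51.100.23"}  # in practice, loaded from a TI feed

archived_logs = [
    {"time": "2014-12-01 03:14:12", "src_ip": "203.0.113.7", "event": "logon failure"},
    {"time": "2014-12-02 11:02:55", "src_ip": "10.4.1.20", "event": "file access"},
]

for rec in archived_logs:
    if rec["src_ip"] in bad_ips:
        print(f"TI hit: {rec['src_ip']} at {rec['time']} ({rec['event']})")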

Want the benefits of SIEM without the heavy lifting involved? SIEM Simplified may be for you.

Gathering logs or gathering dust?


Did you wrestle your big name SIEM vendor into throwing in their “enterprise class” solution at a huge discount as part of the last negotiation? If so, good for you – you should be pleased with yourself for wrangling something so valuable from them. 90% discounts are not unheard of, by the way.

But do you know why they caved and included it? It’s because there is a very high probability that you won’t ever obtain any significant value from it.

You see, the “enterprise class” SIEM solutions from the top-name vendors all require significant trained staff just to get up and running, never mind tuning them and delivering any real value. They figured you probably don’t have the staff or the time to do any of that, so they can give it away at that huge discount. It adds value to their invoice, prevents any other vendor from horning in on their turf, and makes you happy – what’s not to like?

The problem, of course, is that you are not any closer to solving any of the problems that a SIEM can address. Is that OK with you? If so, why bother to pay even that 10%?

From a recent webinar on the topic by Gartner Analyst Anton Chuvakin:

Q: For a mid-size company what percent of time would a typical SIEM analyst spend in monitoring / management of the tool – outstanding incident management?
A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.

So is your SIEM gathering logs? Or gathering dust?

If the latter, give us a call! Our SIEM Simplified service can take the sting out of the bite.

Why add more hay?


Recent terrorist attacks in France have shaken governments in Europe. The difficulty of defending against insider attacks is once again front and center. How should we respond? The UK government seems to feel that greater mass surveillance is a proper response. The Communications Data Bill proposed by Prime Minister Cameron would compel telecom companies to keep records of all Internet, email, and cellphone activity. He also wants to ban encrypted communications services.

This approach would add even more massive data sets, on top of those already thought to be analyzed by the NSA/GCHQ, in the hope that algorithms could pinpoint the bad guys. Of course, France already has blanket surveillance, but that did not prevent the Charlie Hebdo attack.

In the SIEM universe, the equivalent would be to gather every log from every source in hopes that attacks could be predicted and prevented. In practice, accepting data like this into a SIEM solution reduces it to a quivering mess of barely functioning components. In fact, the opposite approach, “output-driven SIEM,” is favored by experienced implementers.

Ray Corrigan, writing Mass Surveillance Will Not Stop Terrorism in the New Scientist, notes: “Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle-in-a-haystack problem. You don’t make it easier by throwing more needleless hay on the stack.”

Threat Intelligence – Paid or Free?


Threat Intelligence (TI) is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response to that menace or hazard. The challenge is that leading indicators of risk to an organization are difficult to identify when the organization’s adversaries, including their thoughts, capabilities and actions, are unknown. Therefore “black lists” of various types, which list top attackers, spammers, poisoned URLs, malware domains, etc., have become popular. These lists are either community-maintained and free (e.g., SANS DShield), paid for by your tax dollars (e.g., FBI InfraGard) or paid services.

EventTracker 7.6 introduced formal support to automatically import and use such lists. We are often asked which list(s) to use, and whether it is worth paying for a TI service. Here is our thinking on the subject:

– External v/s Internal
In most cases, we find “white lists” to be much smaller, more effective and easier to tune/maintain than any “black list”. EventTracker supports the generation of such white lists from internal sources (the Change Audit feature) or from lists of known-good IP ranges (your internal range, your Amazon EC2 or Azure instances, your O365 instances, etc.). Using the NOTIN match option of the Behavior module gives you a small list of suspicious activities (a grey list) which can be quickly sorted to either black or white for future processing. As a first step, this is a quick/inexpensive/effective solution.
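
The NOTIN idea is simple enough to sketch. Below is a minimal Python illustration (not EventTracker’s implementation); the networks and events are assumptions:

import ipaddress

# Known-good ranges: internal space plus cloud instances (examples only).
WHITELIST = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def is_whitelisted(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WHITELIST)

# Events whose source IP is NOT IN the white list land on the grey list,
# to be sorted by an analyst into black or white.
events = [{"src_ip": "10.1.2.3"}, {"src_ip": "203.0.113.7"}]
grey_list = [e for e in events if not is_whitelisted(e["src_ip"])]
print(grey_list)  # only the 203.0.113.7 event remains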

– Paid v/s Free
Free services include well-regarded sources such as shadowservers.org, abuse.ch, dshield.org, FBI InfraGard, US-CERT and EventTracker ThreatCenter (a curated list of low-volume, high-confidence sources formatted for quick import into EventTracker). Many customers in industry verticals (e.g., electric power) have lists circulated within their community.

If you are thinking of paid services, then ask yourself:

– Will the feed allow me to detect threats faster? (e.g., a feed of top attackers updated in real time v/s once every 6 or 12 hours). If faster is your motivation, are you able to respond to the detection faster? If the threat is detected at 8 PM on a Friday, when will you be able to properly respond (not just acknowledge)?

– Will the feed allow me to detect threats better? That is, would you have missed the threat if it had not been for that paid feed? At this time, many paid services for tactical TI are aggregating, cleaning and de-duplicating free sources and/or offering analysis that is also available in the public domain (e.g., McAfee and Kaspersky analysis of Dark Seoul, the malware that created havoc at Sony Pictures, is available from US-CERT).

Bottom line: Threat Intelligence is an excellent extension to a SIEM solution. The order of implementation should be internal/white lists first, external free lists next, and finally paid services to cover any remaining gaps.

Looking for 80% coverage at 20% cost? Let us do the detection with SIEM Simplified so you can remain focused on remediation.

Why Naming Conventions are Important to Log Monitoring


Log monitoring is difficult for many reasons. For one thing there are not many events that unquestionably indicate an intrusion or malicious activity. If it were that easy the system would just prevent the attack in the first place. One way to improve log monitoring is to implement naming conventions that embed information, such as type or sensitivity, about objects like user accounts, groups and computers. This makes it possible for relatively simple log analysis rules to recognize important objects or improper combinations of information that would be impossible to spot otherwise.
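
For example, if your convention marks privileged accounts with an “adm-” prefix and service accounts with “svc-” (a hypothetical convention, not a standard), a trivial rule can recognize an improper combination, such as a service account logging on interactively:

import re

# Hypothetical naming convention: "adm-" = privileged, "svc-" = service.
CONVENTION = [(re.compile(r"^adm-"), "privileged"), (re.compile(r"^svc-"), "service")]

def account_type(name: str) -> str:
    for pattern, label in CONVENTION:
        if pattern.match(name):
            return label
    return "standard"

# Rule the convention makes possible: service accounts should never
# log on interactively (Windows logon type 2).
event = {"user": "svc-backup", "logon_type": 2}
if account_type(event["user"]) == "service" and event["logon_type"] == 2:
    print(f"ALERT: interactive logon by service account {event['user']}")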

Why Risk Classification is Important


Traditional threat models posit that it is necessary to protect against all attacks. While this may be true for a critical national defense network, it is unlikely to be true for the typical commercial enterprise. In fact many technically possible attacks are economically infeasible and thus not attempted by typical attackers.

This can be inferred by noting that most users ignore security precautions and yet escape regular harm. Most assets escape exploitation because they are not targeted, not because they are impregnable.

As Cormac Herley points out, “a more realistic view is that we start with some variant of the traditional threat model, e.g., ‘it is necessary and sufficient to defend against all attacks,’ but then modify it in some way, e.g., ‘defense effort should be appropriate to the assets.’ However, while the first statement is absolute, and has a clear call-to-action, the qualifier is vague and imprecise. Of course we can’t defend against everything, but on what basis should we decide what to neglect?”

One way around this is risk classification. The more you have to lose, the harder you must make it for the attacker. If you can make the cost of the attack greater than its monetization value, then a financially motivated attacker will move on, as it’s not worth it.
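
A toy break-even calculation makes the point; every number below is invented (the $35 resale value echoes the stolen-card post later in this collection):

# Toy break-even model for a financially motivated attacker.
cards_obtainable = 10_000
price_per_card = 35                    # assumed resale value, USD
payoff = cards_obtainable * price_per_card

attack_cost = 400_000                  # assumed cost to defeat your defenses
print("attack pays" if payoff > attack_cost else "attacker moves on")

Raising attack_cost through defenses, rather than driving it to infinity, is the whole game.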

Want to present a hard target to attackers at an efficient price? Consider our SIEM Simplified service. You can get 80% of the value of a SIEM for 20% of the do-it-yourself price.

How many people does it take to run a SIEM?


You must have heard light bulb jokes, for example:
How many optimists does it take to screw in a light bulb? None, they’re convinced that the power will come back on soon.

So how many people does it take to run a SIEM?
Let me count the ways.

Assuming the SIEM has been installed and configured properly (i.e., in accordance with the desired use cases), a few different skill sets are needed (these can all be the same person, but that is quite rare).

SIEM Admin: This person handles the RUN function, maintaining the product in an operational state and monitoring its uptime. Other duties include deploying updates from the vendor and optimizing system performance. This is usually a fraction of a full-time equivalent (FTE) – about 4-8 hours/week for the typical EventTracker installation.

Security Analyst: This person handles the WATCH function and uses EventTracker for security monitoring, reviewing activity reports and investigating alerts in the case of an incident. Depending on the extent of the infrastructure being monitored, this can range from a fraction of an FTE to several FTEs. Plan for coverage on weekends and after hours. Incident response may require notification of other admin personnel.

SIEM Expert: This person handles the TUNE function and refines/customizes the SIEM rules/content and creates rules to support new use cases. This function requires the highest skill level, familiarity with the network and expertise with the SIEM product.

Back to the (bad) joke:
Q. So how many people does it take to run a SIEM?
A. None! The vendor said it manages itself!

How much security investment is enough?


In the last few weeks of 2014, in the aftermath of the Sony hack, the attacks at many retailers and the incessant news on Shellshock, POODLE and many other vulnerabilities, many managers are considering 2015 budgets, and the eternal question of “how much to invest in IT security” is a common one.

It sometimes seems that there is no limit and that the more you spend, the lower your risk. But the Gordon-Loeb model says that is in fact not the case.

As pointed out by the Robert H. Smith School of Business at the University of Maryland:
The security of information is a fundamental concern to organizations operating in the modern digital economy. There are technical, behavioral, and organizational aspects related to this concern. There are also economic aspects of information security. One important economic aspect of information security (including cybersecurity) revolves around deriving the right amount an organization should invest in protecting information. Organizations also need to determine the most appropriate way to allocate such an investment. Both of these aspects of information security are addressed by Drs. Lawrence A. Gordon and Martin P. Loeb – See more here.

The focus of the Gordon-Loeb Model is to present an economic framework that characterizes the optimal level of investment to protect a given set of information. The model shows that the amount a firm should spend to protect information should generally be only a small fraction of the expected loss. More specifically, it shows that it is generally uneconomical to invest in information security activities (including cybersecurity related activities) more than 37 percent of the expected loss that would occur from a security breach. For a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities that are directed at improving the security of information sets with a medium level of vulnerability.
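
A back-of-the-envelope illustration of that ceiling (the 37 percent in the model is 1/e; the dollar figures below are invented):

import math

# Gordon-Loeb rule of thumb: optimal spend <= (1/e) * expected loss.
potential_loss = 2_000_000  # assumed value at risk if the data set is breached
vulnerability = 0.25        # assumed probability of a successful breach

expected_loss = potential_loss * vulnerability  # $500,000
spend_ceiling = expected_loss / math.e          # ~$184,000, about 37%
print(f"expected loss ${expected_loss:,.0f}; spend at most ${spend_ceiling:,.0f}")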

Want the most for your 37% of expected loss? Consider SIEM Simplified.

What is a Stolen Credit Card Worth?


Solution Providers for Retail
Guest blog by A.N. Ananth

Cybercrime and stolen credit cards have been a hot topic all year. From the Target breach to Sony, the classic motivation for cybercriminals is profit. So how much is a stolen credit card worth?

The reason it is important to know the answer to this question is that it is the central motivation of the criminal. If you could make it more expensive for a criminal to steal a card than what the thief would gain by selling it, then attackers would find an easier target. That is what being a hard target is all about.

This article suggests prices of $35-$45 for a stolen credit card depending upon whether it is a platinum or corporate card. It is also worth noting that the viable lifetime of a stolen card is at most one billing cycle. After this time, the rightful owner will most likely detect its loss or the bank fraud monitor will pick up irregularities and terminate the account.

Why is a credit card with a high spending limit (say $10K) worth only $35? It is because monetizing a stolen credit card is difficult and requires a lot of expensive effort on the part of the criminal. That is contrary to the popular press, which suggests that cybercrime yields easy billions. At the Workshop on the Economics of Information Security, Herley and Florencio showed in their presentation, “Sex, Lies and Cybercrime Surveys,” that widely circulated estimates of cybercrime losses are wrong by orders of magnitude. For example:

Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion. …Cyber-crime losses follow very concentrated distributions where a representative sample of the population does not necessarily give an accurate estimate of the mean. They are self-reported numbers which have no robustness to any embellishment or exaggeration. They are surveys of rare phenomena where the signal is overwhelmed by the noise of misinformation. In short they produce estimates that cannot be relied upon.

That’s a rational, fact-based explanation as to why the most basic information security is unusually effective in most cases. Pundits have been screaming this from the rooftops for a long time. What are your thoughts?

Read more at Solution Provider for Retail guest blog.

Are honeypots illegal?


In computer terminology, a honeypot is a computer system set up to detect, deflect, or in some manner counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network and seems to contain information or a resource of value to attackers, but is actually isolated and monitored.

Lance Spitzner covers this topic from his (admittedly) non-legal perspective.

Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have the misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition, is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.”

Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot, but the exemption under Service Provider Protection is key. What this exemption means is that security technologies can collect information on people (and attackers), as long as the technology is being used to protect or secure the environment. In other words, these technologies are exempt from privacy restrictions. For example, an IDS sensor that is used for detection and captures network activity is doing so to detect (and thus enable organizations to respond to) unauthorized activity. Such a technology is most likely not considered a violation of privacy, as it is being used to help protect the organization, so it falls under the exemption of Service Provider Protection. Honeypots that are used to protect an organization would fall under this exemption.

Does it expose us to liability?
Liability is not a criminal issue, but a civil one. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those systems may sue. The argument is that if you had taken proper precautions to keep your systems secure, the attacker would not have been able to use your honeypot to harm my systems, so you share the fault for any damage that occurred to me during the attack. The issue of liability is one of risk. Note that anytime you deploy a security technology (even one without an IP stack), that technology comes with risk. For example, there have been numerous vulnerabilities discovered in firewalls, IDS systems, and network sniffers. Honeypots are no different.

Obviously this blog entry is not legal advice and should not be construed as such.

SIEM or Log Management?


Security Information and Event Management (SIEM) is a Gartner-coined term to describe solutions which monitor and help manage user and service privileges, directory services, and other system configuration changes, in addition to providing log auditing and review, and incident response.

SIEM differs from Log Management, which refers to solutions which deal with large volumes of computer-generated log messages (also known as audit records, event-logs, etc.)

Log management is aimed at general system troubleshooting or incident response support. The focus is on collecting all logs for various reasons. This “input-driven” approach tries to get every possible bit of data.

This model fails with SIEM-focused solutions. Opening the floodgates and admitting any/all log data into the tool first, then considering what (if any) use there is for the data, reduces tool performance as it struggles to cope with the flood. Preferable is an “output-driven” model where data is admitted if and only if its usage is defined. This use can include alerts, dashboards, reports, behavior profiling, threat analysis, etc.
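
A sketch of the admission test at the heart of output-driven collection; the event types and uses are invented for illustration:

# Admit an event type only if some defined output will consume it.
USES = {
    "logon_failure": ["alert", "report"],
    "firewall_deny": ["dashboard"],
    # event types with no entry have no defined use and are not admitted
}

def admit(event_type: str) -> bool:
    return bool(USES.get(event_type))

for etype in ("logon_failure", "dns_query", "firewall_deny"):
    print(etype, "->", "ingest" if admit(etype) else "skip")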

Buying a SIEM solution and using it as a log management tool is a waste of money. Forcing a log management solution to act like a SIEM is folly.

4 Fundamentals of Good Security Log Monitoring


Effective security log monitoring is a very technical challenge that requires a lot of arcane knowledge, and it is easy to get lost in the details. Over the years, 4 things have stood out to me as fundamentals for keeping the big picture in view and meeting the challenge:

The Security Risks of Industry Interconnections


2014 has seen a rash of high-profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target, and Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford, among others. Hackers were able to steal hundreds of millions of credit and debit cards; from the information disclosed, this accounted for 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels, and 56 million from Home Depot.

The Identity Theft Resource Center (ITRC) reports that to date in 2014, 644 security breaches have occurred, an increase of 25.3 percent over last year. By far the majority of breaches targeted payment card data along with personal information like social security numbers and email addresses, and personal health information; the ITRC estimates that over 78 million records were exposed.

Malware installed using third-party credentials was found to be among the primary causes of the breaches in post-incident analysis. Banks and financial institutions are critically dependent on their IT infrastructure and are also constantly exposed to attacks because of Sutton’s Law. Networks are empowering because they allow us to interact with other employees, customers and vendors. However, industry partners often have a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnection is a favorite tactic of attackers. After all, a frontal brute-force attack on a well-defended large corporation’s doors is unlikely to be successful.

The Weak Link

The attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are also proud of their large customers and keen to highlight the connection. Likewise, companies often provide a surprising amount of information meant for vendors on public sites for which logins are not necessary. This makes the first step of researching the target and their industry interconnections easier for the attacker.

The next step is to compromise the subcontractor network and find employee data. Social networking sites like LinkedIn are a boon to attackers, used to create lists of IT admin and management staff who are likely to be privileged users. In West Virginia, state agencies were victims when malware infected computers of users whose email addresses ended with @wv.gov. The next step is to gain access to the contractor’s privileged users’ workstations, and from there, to breach the final target. In one retailer breach, the network credentials given to a heating, air conditioning and refrigeration contractor were stolen after hackers mounted a phishing attack and were able to successfully lodge malware in the contractor’s systems, two months before they attacked the retailer, their ultimate target.

Good Practices, Good Security

Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors and clients, or anyone who has access to their networks. A Fortune 500 company has access to resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?

Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices as part of due diligence. Before network access is granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications, and its control of those, can help the enterprise develop an effective vendor management profile. Furthermore, it provides the enterprise with visibility into the information that will be shared and who has access to that information.

Controlled Access: Third party access should be restricted and compartmentalized to only a segment of the network, with access to other assets prevented. Likewise, the organization can require that vendors and third parties use particular technologies for remote access, which enables the enterprise to catalog which connections are being made to the network.

Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behaviors, which can presage the beginning of a breach.

Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.

What next?

Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.

SIEM is Sunlight


Security Information and Event Management (SIEM) refers to technology that provides real-time analysis of security alerts generated by network hardware and applications. SIEM works by gathering, analyzing and presenting information from a variety of sources of such information across the enterprise network including network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data.

All compliance frameworks, including PCI-DSS, HIPAA, FISMA and NERC, call for the implementation and regular use of SIEM technology. The absence of regular use is noted as a major factor in post-mortem analyses of IT security incidents.

Why is this the case? It’s because SIEM, when implemented properly, gathers security data from all the nooks and crannies of the enterprise network. When this information is collated and presented well, an analyst is able to see what is happening, what happened, and what is different.

It’s akin to letting in the sunlight to all corners and hidden places. You can see better, much better.

You can’t fix what you can’t see and don’t know. Knowledge of the goings-on in the various parts of the network, in real-time when possible, is the first step towards building a meaningful security defense.

Mobile and Remote Endpoints – Don’t Leave Them Out of Your Monitoring


I’ve always tried to raise awareness about the importance of workstation security logs. Workstation endpoints are a crucial component of security and the first target of today’s bad guys. Look at news reports and you’ll find that APT attacks and outsider data thefts always begin with the workstation endpoint. So unless you want to ignore your first opportunity to detect and disrupt such attacks, you need to be monitoring them.

Three key advantages for SIEM-As-A-Service


Security Information and Event Management (SIEM) technology is an essential component in a modern defense-in-depth strategy for IT security. SIEM is described as such in every best-practice recommendation from industry groups and security pundits. The absence of SIEM is repeatedly noted in the Verizon Data Breach Investigations Report as a factor in late discovery of breaches. Indeed, attackers are most often successful with soft targets where defenders do not review log and other security data. In addition, all regulatory compliance standards, such as PCI-DSS, HIPAA and FISMA, specifically require that SIEM technology be deployed and, more importantly, be used actively.

This last point (“be used actively”) is the Achilles heel for many organizations and has been noted often, as “security is something you do, not something you buy.” Organizations large and small struggle to assign staff with the necessary expertise and to maintain the discipline of periodic log review.

New SIEM-As-A-Service options

SIEM Simplified services are available for buyers that cannot leverage traditional on premise, self-serve products. In such models, the vendor assumes responsibility for as much (or as little) of the heavy lifting as desired by the user, including installation, configuration, tuning, periodic review, updates, and responding to incident investigation or audit support requests.

Such offerings have three distinct advantages over the traditional self-serve, on premise model.

1) Managed Service Delivery: The vendor is responsible for the most “fragile” and “difficult to get right” aspects of a SIEM deployment – installation, configuration, tuning and periodic review of SIEM data. This can also include upgrades, performance management for speedy response, and updates to security threat intelligence feeds.
2) Deployment options: In addition to the traditional on premise model, such services usually offer cloud-based, managed hosted or hybrid solutions. Options for host-based agents and/or premise-based collectors/sensors allow for great flexibility in deployment.
3) Utility pricing: In contrast to traditional perpetual-license models that require front-loaded capital expenditure, SIEM-As-A-Service follows the utility model, with usage-based pricing and monthly expenditure. This is friendly to operational expenditure budgets.

SIEM is a core technology in the modern IT Enterprise. New As-A-Service deployment models can increase adoption and value of this complex monitoring technology.

Top 5 Linux log file groups in /var/log


If you manage any Linux machines, it is essential that you know where the log files are located and what is contained in them. Such files are usually in /var/log. Logging is controlled by the associated configuration file (typically /etc/rsyslog.conf or /etc/syslog.conf, depending on the distribution).

Some log files are distribution-specific, and this directory can also contain logs from applications such as samba, apache, lighttpd, mail, etc.

From a security perspective, here are 5 groups of files which are essential. Many other files are generated and will be important for system administration and troubleshooting.

1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. There are several things that are logged in /var/log/messages including mail, cron, daemon, kern, auth, etc.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used (a parsing sketch appears after these lists).
b) /var/log/lastlog – Displays the recent login information for all users. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file. For example, “last -f /var/log/btmp | more”
d) /var/log/wtmp or /var/log/utmp – Contains login records. Using wtmp you can find out who is logged into the system. The who command uses this file to display the information.
e) /var/log/faillog – Contains user failed login attempts. Use the faillog command to display the contents of this file.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information that is logged when a package is installed or removed using the dpkg command
b) /var/log/yum.log – Contains information that is logged when a package is installed using yum

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer and printing related log messages
c) /var/log/cron – Whenever cron daemon (or anacron) starts a cron job, it logs the information about the cron job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System
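
As promised in group 2, here is a minimal Python sketch that counts failed SSH logins by source IP. The path and message format (a Debian-style /var/log/auth.log with OpenSSH messages) are assumptions; adjust both for your distribution:

import re
from collections import Counter

PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

failures = Counter()
with open("/var/log/auth.log") as log:  # usually requires root to read
    for line in log:
        m = PATTERN.search(line)
        if m:
            user, src_ip = m.groups()
            failures[src_ip] += 1

for ip, count in failures.most_common(10):
    print(f"{ip}: {count} failed logins")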

Happy Logging!

Seven Habits of Highly Fraudulent Users


This post, Seven Habits of Highly Fraudulent Users, from Izzy at SiftScience describes patterns culled from 6 million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might point to a script, which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to make multiple accounts on their laptop or phone to commit fraud. The more accounts that are associated with the same device, the higher the likelihood of fraud. A user who has 6 accounts on her laptop is 15 times more likely to be fraudulent than the average person. Users with only 1 account, however, are less likely to be fraudulent. SOC teams should look for multiple users on the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for much more than one user to access a PC in a given shift.

Habit #5 Fraudsters use well known domains

The top 3 sources of fraud originate from Microsoft sites, including outlook.com, Hotmail and live.com. Traffic from/to such sites is worth trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address. The more numbers, the more likely that it’s fraud.

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account, the less likely (in general) it is involved in fraud. SOC teams must always look out for new account creation.
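
Habits #4 and #6 are mechanical enough to sketch as a toy score; the thresholds and weights below are invented for illustration:

# Toy suspicion score combining habits #4 and #6 above.
def digit_fraction(email: str) -> float:
    local = email.split("@")[0]
    return sum(c.isdigit() for c in local) / max(len(local), 1)

def suspicion_score(email: str, accounts_on_device: int) -> int:
    score = 0
    if digit_fraction(email) > 0.4:   # habit #6: digit-heavy address
        score += 1
    if accounts_on_device >= 3:       # habit #4: many identities, one device
        score += 2
    return score

print(suspicion_score("jane.doe@example.com", 1))  # 0 -> likely normal
print(suspicion_score("xk43912@example.com", 6))   # 3 -> worth a look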

Good hunting.

EventTracker and Poodle


Summary:
• All systems and applications utilizing the Secure Socket Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable, depending on the configuration of Apache.

• Poodle Scan can be used to test whether your server is vulnerable.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the SSL 3.0 vulnerability.

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Cyphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat; run this file as Administrator. This file executes the following commands:

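REM Disable the SSL 2.0 and SSL 3.0 protocols for both server and client roles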
REG.EXE ADD “HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server” /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client” /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server” /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client” /v Enabled /t REG_DWORD /d 0 /f
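REM Disable weak DES, RC2 and RC4 cipher variants system-wide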
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128” /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128” /v Enabled /t REG_DWORD /d 00000000 /f

Laying Traps for External Information Thieves


Wouldn’t it be nice if you could detect when an external threat actor, who has taken over one of your users’ endpoints, goes on a poaching expedition through all the information that user has access to on your network?

Easier said than done, right? After all, when malware is running on an endpoint, anything it does shows up as being performed by that user. How high, really, are your chances of recognizing those events as different from the user’s normal behavior?

EventTracker Search Performance


EventTracker 7.6 is a complex software application and, while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. When a search crosses these indexed, compressed archives, the speed of results depends on the type of search as well as the underlying hardware.

Searches can be categorized as follows (a small illustrative classifier appears after this list):
Dense – at least one result per thousand (1,000) events
Sparse – at least one result per million (1,000,000) events
Rare – at least one result per billion (1,000,000,000) events
Needle in a haystack – one event in more than a billion events
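
The thresholds in this sketch simply restate the list above:

def classify(hits: int, events_scanned: int) -> str:
    if events_scanned == 0 or hits == 0:
        return "no results"
    density = hits / events_scanned
    if density >= 1e-3:
        return "dense"
    if density >= 1e-6:
        return "sparse"
    if density >= 1e-9:
        return "rare"
    return "needle in a haystack"

print(classify(250, 100_000))      # dense
print(classify(1, 5_000_000_000))  # needle in a haystack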

Based on the provided search criteria, EventTracker consults indexing metadata to determine whether, and in which archive, events matching the search terms exist. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is sufficient raw data to decompress. For the fastest possible response on default hardware, EventTracker will limit returned results to the first 200 (sorted by time, with newest on top). This setting can of course be overridden, but is provided because it satisfies the most common use case.

As the frequency of events containing the search term drops to one in a hundred thousand (100,000), performance becomes more I/O bound. The reason is that there is less and less data to decompress, but more and more index files have to be consulted.

I/O performance is measured as latency, which is the time delay from when a disk I/O request is created until the time the request is completed by the underlying hardware. Windows perfmon can measure this as Avg. Disk sec/Transfer. A rule of thumb is to keep this below 25 milliseconds for best I/O performance.

This can be realized in various ways:
– Having different drives (spindles) for the OS/program and archives
– Using faster disk (15K RPM performs better than 7200 RPM disks)
– Using a SAN

In larger installations with multiple Virtual Collection Points (VCPs), dedicating a separate disk spindle to each VCP can help.

Nineteen Minutes In April


On April 16, 2013, a sniper took a hundred shots at Pacific Gas and Electric’s (PG&E) Metcalf transmission substation. The utility was able to reroute power on the grid and avert a blackout. The whole ordeal took nineteen tension-filled minutes. The event added muscle to the regulatory grip of the North American Electric Reliability Corporation (NERC) – a not-for-profit entity whose mission is to ensure the reliability of the bulk power system in North America. A terrorist attack, domestic or otherwise, could bring the state’s power grid down.

The Data Scientist Unicorn


An essential part of any IT Security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge” is not practical for most organizations. Instead, how about an effort to deliver tangible, data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM you ask?

Well, for the typical Small/Medium Enterprise [SME], this is a familiar dilemma. Data, data everywhere, and not a drop (of intelligence) to drink. Either the “data scientist” is not on the employee roster or does not have time available. How then do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to exploit the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean doing nothing is a valid option.

EventTracker and Shellshock


What’s your thought on Shellshock? EventTracker CEO A.N. Ananth weighs in.

Summary:

  • Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the Linux/Unix Bash shell.
  • EventTracker v 6.x, v7.x is NOT vulnerable to Shellshock as these products are based on the Microsoft Windows platform.
  • ETIDS and ETVAS which are offered as options of the SIEM Simplified service, are vulnerable to Shellshock, as these solutions are based on CentOS v6.5. Below are the links relevant to this vulnerability.
  • If you subscribe to ETVAS and/or ETIDS, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.

Details:

Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many internet daemons. It is a program that is used by various Unix-based systems to execute command scripts and command lines. Often it is installed as the system’s default command line interface.

Notes:

  • Environment variables (each running program having its own list of name/value pairs) occur in Unix-based and other operating systems that Bash supports. When one program is started by an earlier program, the earlier program provides an initial list of environment variables to the new program. Apart from this, Bash also maintains an internal list of functions (named scripts) that can be executed from within.
  • By executing Bash with a chosen value in its environment variable list, an attacker can cause vulnerable versions of Bash to run arbitrary commands, which may allow remote code execution and unauthorized access to a computer system.
  • Scrutiny of the Bash source code history reveals that the vulnerability has been present since approximately version 1.13 (1992). The lack of comprehensive change logs does not allow the maintainers of the Bash source code to pinpoint the exact time the vulnerability was introduced.

We don’t need no stinkin Connectors


#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin badges” which has been used often (e.g., Blazing Saddles). The equivalent of this in the log management universe is a “Connector”. We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.

These questions stem from a model used by programs such as ArcSight, which depend on Early Binding. In an earlier era of computing, Early Binding was needed because the compiler could not otherwise create an entry in the virtual method table for the procedure being compiled. It has the advantage of being efficient, an important consideration when CPU and memory are in very short supply, as they were years ago.

Just-in-time languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative performance connotations, but that hasn’t been true for at least 20 years.

Early Binding requires a fixed schema to be mandated for all possible entries and for input to be “normalized” to this schema. The benefit of the fixed plan is efficiency of output, since the data is already normalized. While that may make sense for compilers, whose input is in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that do not adopt any standardization at all. The downside of such an approach is that it requires a “Connector” to normalize each new log source to the fixed schema. Another consideration is that outputs can vary greatly depending on usage – there are many possible uses for the data; the limitation is only the user’s imagination. The Early Binding model, however, is designed with fixed outputs in mind. These disadvantages limit such designs.

In contrast, EventTracker uses Late Binding, where the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time. The desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law coming to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
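
A minimal sketch of the Late Binding idea (not EventTracker’s implementation; the log lines and pattern are invented): raw logs are stored as-is, and field meaning is assigned by a pattern chosen at search time.

import re

# Archive stores raw lines; no schema was fixed at ingest time.
raw_archive = [
    "2015-01-10 08:02:11 sshd[412]: Failed password for root from 203.0.113.7",
    "2015-01-10 08:02:40 app: user=alice action=export size=12MB",
]

# Define the "schema" now, at query time, for the view we happen to want.
ssh_view = re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)")

for line in raw_archive:
    m = ssh_view.search(line)
    if m:
        print(m.groupdict())  # {'user': 'root', 'src': '203.0.113.7'}

A different report tomorrow can define a different pattern over the same archive, with no “Connector” in between.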

Spray & Pray or 80/20


If you spend any time at all looking at log data from any server that is accessible from the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available, and most commonly include port scans, login attempts using default username/password pairs, and the web server attacks described by OWASP.

How can this possibly be, given the sheer number of machines visible on the Internet? Don’t these guys have anything better to do?

The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize the return on their effort, computing capacity and network bandwidth. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s Risk v. Reward all the way.

The automated (first) wave of these attacks is meant to identify vulnerable machines and establish presence. Following waves may be staffed, depending on the location and identity of the victim and thus the potential value to be obtained by a greater investment of (scarce) expertise by the attacker.

Such attacks can be deterred quite simply by using secure (non-default) configurations, system patching and basic security defenses such as firewalls and anti-virus. This explains the repeated exhortations of security pundits on “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.

The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost so as to at least identify soft high-value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of the cost by using basic best practices.

Guidance such as the SANS Critical Controls or lessons from Verizon’s annual Data Breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect the logs which would expose the attacker’s actions, and certainly never actually look at the logs.

Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.

Hackers: What they are looking for and the abnormal activities you should be evaluating


Most hackers are looking to get at critical data, and credential theft is a primary path to it. A credential theft attack is when an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: an account with highly sensitive data attached, with access to accounts and secure data that many others within the organization don’t have.

It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:
• Logging on to unsecured computers with privileged accounts (see the detection sketch after this list)
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers.
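
The first activity on that list lends itself to simple detection. Below is a minimal sketch that flags privileged logons (Windows event ID 4672, “special privileges assigned to new logon”) on hosts outside a hardened admin set; the host names and record layout are invented:

HARDENED_ADMIN_HOSTS = {"PAW-01", "PAW-02"}  # privileged access workstations

events = [
    {"event_id": 4672, "host": "PAW-01", "user": "admin.jane"},
    {"event_id": 4672, "host": "RECEPTION-PC", "user": "admin.bob"},
]

for e in events:
    if e["event_id"] == 4672 and e["host"] not in HARDENED_ADMIN_HOSTS:
        print(f"ALERT: privileged logon by {e['user']} on unsecured host {e['host']}")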

There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.

These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Although pass-the-hash (PtH) and other credential theft attacks are ubiquitous today, it is because there is freely available tooling that makes it simple and easy to extract the credentials of other privileged accounts when an attacker has gained Administrator- or SYSTEM-level access to a computer.

Even without such tooling, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots, and clipboard contents. An attacker with privileged access to a computer can disable anti-malware software, install rootkits, modify protected files, or install malware on the computer that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce the attack surface not only of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.

Case of the Disappearing Objects: How to Audit Who Deleted What in Active Directory


I often get asked how to audit the deletion of objects in Active Directory. It’s pretty easy to do this with the Windows Security Log – especially for tracking deletion of users and groups, which I’ll show you first. All you have to do is enable “Audit user account management” and “Audit security group management” in the Default Domain Controllers Policy GPO.
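
Once those categories are enabled, deletions surface as discrete event IDs – 4726 (“a user account was deleted”) and 4730 (“a security-enabled global group was deleted”) on Server 2008 and later. A minimal sketch of picking them out of exported records follows; the record layout is an invented simplification:

DELETION_EVENTS = {4726: "user deleted", 4730: "group deleted"}

records = [
    {"event_id": 4726, "subject": "CORP\\admin.jane", "target": "CORP\\temp01"},
    {"event_id": 4624, "subject": "CORP\\bob", "target": "-"},
]

for r in records:
    label = DELETION_EVENTS.get(r["event_id"])
    if label:
        print(f"{label}: {r['target']} deleted by {r['subject']}")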

Practical ways to analyze login and pre-authentication failures


Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:

1) Learn and know how to identify login events and their descriptions. A great resource to find event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

2) Identify and look into the event description. To analyze events efficiently and effectively, you must analyze the event description. Within a login failure description, paying attention to details like failure reason, user name, logon type, workstation name and source network address is critical to your investigation and analysis. By knowing what to pay attention to in the description, you will easily eliminate the noise.

When using a system like EventTracker, displaying just these required fields eliminates the noise and shows you the relevant results immediately. EventTracker will also provide a summary of the total number of events for each failure type and user name, automating the presentation of your systems’ critical information.
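
A minimal sketch of that summarization, using the fields called out above; the parsed records are an invented simplification of the Windows event 4625 description:

from collections import Counter

failures = [
    {"reason": "bad password", "user": "alice", "logon_type": 3, "src": "10.1.1.5"},
    {"reason": "bad password", "user": "alice", "logon_type": 3, "src": "10.1.1.5"},
    {"reason": "unknown user", "user": "guest", "logon_type": 10, "src": "203.0.113.9"},
]

summary = Counter((f["reason"], f["user"]) for f in failures)
for (reason, user), count in summary.most_common():
    print(f"{user}: {count} failures ({reason})")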

Using automation like this helps your enterprise run more efficiently and effectively than analyzing traditional reports for the hundreds of events that happen every day; doing this without the help of a management and monitoring tool is nearly impossible.

Please reference here for detailed charts.

Simplify SIEM with Services


To support security, compliance and operational requirements, specific and fast answers to the 4 W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed and one-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility and the attendant effectiveness are made possible by centralizing the collection, analysis and storage of log and other security data from sources throughout the enterprise network.

To obtain value from your SIEM solution, it must be watered and fed. This is an eternal commitment, whether your team chooses to do it yourself or have someone do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.

“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organization will continue to own their SIEM tools.” –Anton Chuvakin, Gartner Analyst