Seven Habits of Highly Fraudulent Users

This post by Izzy at SiftScience describes patterns culled from 6M transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for such patterns in their own data.

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might speak to a script which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour of the day, and night-time in general was more dangerous. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.
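Hunting for habits #1 and #2 can start from something as simple as an hourly activity profile. Below is a minimal Python sketch, assuming transactions are available as (timestamp, is_fraud) pairs (a hypothetical, simplified format), that computes the fraud rate per hour and flags hours running above the overall rate:

from collections import Counter
from datetime import datetime

def fraud_rate_by_hour(transactions):
    # transactions: iterable of (timestamp, is_fraud) pairs
    total, fraud = Counter(), Counter()
    for ts, is_fraud in transactions:
        total[ts.hour] += 1
        if is_fraud:
            fraud[ts.hour] += 1
    return {h: fraud[h] / total[h] for h in sorted(total)}

txns = [
    (datetime(2014, 10, 1, 3, 12), True),    # 3 AM, confirmed fraud
    (datetime(2014, 10, 1, 3, 40), False),
    (datetime(2014, 10, 1, 12, 5), False),   # lunch-time, normal
]
rates = fraud_rate_by_hour(txns)
overall = sum(f for _, f in txns) / len(txns)
print([h for h, r in rates.items() if r > 1.2 * overall])   # -> [3]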

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to create multiple accounts on a single laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud: a user with six accounts on her laptop is 15 times more likely to be fraudulent than the average person, while users with only one account are less likely to be fraudulent. SOC teams should look for multiple users on the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for much more than one user to access a PC in a given shift.
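A minimal sketch of this hunt, assuming authentication logs can be reduced to (device_id, account) pairs (hypothetical field names; the threshold is yours to tune):

from collections import defaultdict

def accounts_per_device(events):
    # map each device ID to the set of distinct accounts seen on it
    seen = defaultdict(set)
    for device_id, account in events:
        seen[device_id].add(account)
    return seen

def flag_shared_devices(events, threshold=3):
    # return devices with more distinct accounts than the threshold
    return {d: accts for d, accts in accounts_per_device(events).items()
            if len(accts) > threshold}

events = [("laptop-7", "alice"), ("laptop-7", "bob"),
          ("laptop-7", "carol"), ("laptop-7", "dave"),
          ("pc-2", "erin")]
print(flag_shared_devices(events, threshold=3))   # flags laptop-7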

Habit #5 Fraudsters use well known domains

The top three email domains associated with fraud are Microsoft properties: outlook.com, hotmail.com, and live.com. Traffic from/to such domains is worth trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address: the more digits, the more likely the address is fraudulent.
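This feature costs almost nothing to compute; a tiny sketch:

def digit_count(email):
    # count digits in the local part (before the @) of an address
    return sum(ch.isdigit() for ch in email.split("@", 1)[0])

for addr in ("jane.doe@example.com", "xk72854911@hotmail.com"):
    print(addr, digit_count(addr))   # -> 0 and 8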

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account is, the less likely (in general) it is to be involved in fraud. SOC teams should always watch for new account creation.

Good hunting.

 



EventTracker and POODLE

Summary:
• All systems and applications utilizing Secure Sockets Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache; they may also be vulnerable, depending on the configuration of Apache.

• Poodle Scan can be used to test if your server is vulnerable.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the vulnerability against SSL 3.0.

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Ciphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat, and run it as Administrator. The file executes the following commands:

REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f
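To verify that the change took effect, the following Python sketch (Windows-only, run with rights to read HKLM; the cipher keys are omitted for brevity) checks the same SCHANNEL protocol keys the batch file writes. Keys that are absent simply follow the OS defaults:

import winreg

# The same SCHANNEL protocol keys the batch file above writes.
KEYS = [
    r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server",
    r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client",
    r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server",
    r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client",
]

for path in KEYS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "Enabled")
        status = "disabled (OK)" if value == 0 else "ENABLED (vulnerable)"
    except OSError:
        status = "key/value missing -- protocol follows OS defaults"
    print(path, "->", status)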



EventTracker Search Performance

EventTracker 7.6 is a complex software application, and while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (called the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. The performance of a search across these indexed, compressed archives depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
Dense - at least one result per thousand (1,000) events
Sparse - at least one result per million (1,000,000) events
Rare - at least one result per billion (1,000,000,000) events
Needle in a haystack - one event in more than a billion events
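In code, this categorization is just a bucketing of the hit ratio; a small illustrative sketch:

def classify_search(result_count, events_scanned):
    # bucket a search by its hit ratio, per the categories above
    if result_count == 0:
        return "needle in a haystack (or no match)"
    ratio = events_scanned / result_count   # events per result
    if ratio <= 1_000:
        return "dense"
    if ratio <= 1_000_000:
        return "sparse"
    if ratio <= 1_000_000_000:
        return "rare"
    return "needle in a haystack"

print(classify_search(5_000, 1_000_000))        # -> dense
print(classify_search(2, 3_000_000_000))        # -> needle in a haystack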

Based on the search criteria provided, EventTracker consults indexing metadata to determine which archives, if any, contain events matching the search terms. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is plenty of raw data to decompress. For the fastest possible response on default hardware, EventTracker limits the returned results to the first 200 (sorted by time, newest on top). This setting can of course be overridden, but it is the default because it satisfies the most common use case.

As the frequency of events containing the search term falls toward one in a hundred thousand (100,000), performance becomes more I/O bound: there is less and less data to decompress, but more and more index files to consult.

I/O performance is measured as latency: the time delay from when a disk I/O request is created until the request is completed by the underlying hardware. Windows perfmon can measure this as the Avg. Disk sec/Transfer counter. A rule of thumb is to keep this below 25 milliseconds for best I/O performance.
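One way to sample that counter is with the standard Windows typeperf utility; the Python sketch below wraps it (the counter path and sample count are examples, not requirements):

# Windows-only sketch: sample the perfmon counter named above and
# compare the average against the 25 ms rule of thumb.
import subprocess

COUNTER = r"\LogicalDisk(_Total)\Avg. Disk sec/Transfer"

out = subprocess.run(
    ["typeperf", COUNTER, "-sc", "10"],   # ten one-second samples
    capture_output=True, text=True,
).stdout

samples = []
for line in out.splitlines():
    parts = line.strip().strip('"').split('","')
    if len(parts) == 2:
        try:
            samples.append(float(parts[1]))   # counter value in seconds
        except ValueError:
            pass   # header row, not a sample

if samples:
    avg_ms = 1000 * sum(samples) / len(samples)
    verdict = "OK" if avg_ms < 25 else "investigate"
    print(f"average disk latency: {avg_ms:.1f} ms ({verdict})")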

Low latency can be realized in various ways:
- Having different drives (spindles) for the OS/program and archives
- Using faster disk (15K RPM performs better than 7200 RPM disks)
- Using a SAN

In larger installations with multiple Virtual Collection Points (VCPs), dedicating a separate disk spindle to each VCP can help.



The Data Scientist Unicorn

An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge” is not practical for most organizations. Instead, how about an effort to deliver tangible and data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM, you ask?

Well, for the typical small/medium enterprise (SME), this is a familiar dilemma. Data, data everywhere and not a drop (of intelligence) to drink. Either the “data scientist” is not on the employee roster or does not have time available. How then do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to mine the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean doing nothing is a valid option.



EventTracker and Shellshock

Summary:

  • Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the Linux/Unix Bash shell.
  • EventTracker v6.x and v7.x are NOT vulnerable to Shellshock, as these products are based on the Microsoft Windows platform.
  • ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are vulnerable to Shellshock, as these solutions are based on CentOS v6.5.
  • If you subscribe to ETVAS and/or ETIDS, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.

Details:

Shellshock (also known as Bashdoor), CVE-2014-6271, is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many Internet daemons. It is a program used by various Unix-based systems to execute command scripts and command lines, and it is often installed as the system’s default command-line interface.

Notes:

  • In Unix-based and other operating systems that Bash supports, each running program has its own list of name/value pairs called environment variables. When one program starts another, the earlier program provides an initial list of environment variables to the new program. Apart from this, Bash also maintains an internal list of functions, named scripts that can be executed from within itself.
  • Vulnerable versions of Bash can be made to execute arbitrary commands by starting Bash with a crafted value in its environment variable list. This may allow remote code execution and thus unauthorized access to a computer system.
  • Scrutiny of the Bash source code history reveals that the vulnerability has been present, concealed, since approximately version 1.13 (1992). The lack of comprehensive change logs does not allow the maintainers of the Bash source code to pinpoint the exact time the vulnerability was introduced.
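The widely published test for CVE-2014-6271 exports a function-style environment variable with a trailing command and checks whether Bash executes it on startup. A Python wrapper around that test might look like the sketch below (it assumes a Bash binary at /bin/bash and is safe to run against your own system):

import subprocess

# Export a function-style environment variable with a trailing command;
# a vulnerable Bash executes the command merely by starting up.
env = {"testvar": "() { :; }; echo VULNERABLE"}
result = subprocess.run(
    ["/bin/bash", "-c", "echo probe"],   # path assumption: /bin/bash
    env=env, capture_output=True, text=True,
)
if "VULNERABLE" in result.stdout:
    print("bash executed code from an environment variable -- patch now")
else:
    print("bash appears patched against CVE-2014-6271")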


We don’t need no stinkin Connectors

#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin badges” which has been used often (e.g., Blazing Saddles). The equivalent of this in the log management universe is a “Connector”. We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.

These questions stem from a model used by programs such as ArcSight, which depend on Early Binding. In an earlier era of computing, Early Binding was needed because the compiler could not otherwise create an entry in the virtual method table for the procedure being compiled. It has the advantage of being efficient, an important consideration years ago, when CPU and memory were in very short supply.

Just-in-time compiled languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative connotations in terms of performance, but that hasn’t been true for at least 20 years.

Early Binding requires a fixed schema to be mandated for all possible entries and for input to be “normalized” to this schema. The benefit of the fixed plan is efficiency in output, since the data is already normalized. While that may make some sense for compilers, whose input is in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that do not adopt any standardization at all. The downside of such an approach is that every new log source requires a “Connector” to normalize it to the fixed schema. Another consideration is that outputs can vary greatly depending on usage: there are many possible uses for the data, the limitation being only the user’s imagination. The Early Binding model, however, is designed with fixed outputs in mind. These disadvantages limit such designs.

In contrast, EventTracker uses Late Binding, where the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time; the desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
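As an illustration of the difference (a sketch of the idea, not EventTracker’s actual implementation), consider storing raw lines untouched at ingest and binding a schema only when a question is asked:

import re

# Ingest ("fast in"): raw lines are archived untouched; no Connector needed.
raw_archive = [
    '203.0.113.9 - - [22/Oct/2014:03:14:07 +0000] "GET /admin HTTP/1.1" 404 512',
    '198.51.100.4 - - [22/Oct/2014:03:15:22 +0000] "GET /index.html HTTP/1.1" 200 4133',
]

# Search time ("smart out"): meaning is bound to tokens only when the
# question is asked; a new question tomorrow is just a new pattern.
pattern = re.compile(r'(?P<ip>\S+) .*?"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3})')

for line in raw_archive:
    m = pattern.match(line)
    if m:
        print(m.group("ip"), m.group("method"), m.group("path"), m.group("status"))

A different question tomorrow requires only a different pattern at search time, not a new Connector at ingest time.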



Spray & Pray or 80/20

If you spend any time at all looking at log data from any server that is accessible from the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available and most commonly include port scans, login attempts using default username/password pairs, and the web server attacks described by OWASP.

How can this possibly be, given the sheer number of machines visible on the Internet? Don’t these guys have anything better to do?

The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize the return on their effort, computing capacity, and network bandwidth. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s risk versus reward all the way.

The automated (first) wave of these attacks aims to identify vulnerable machines and establish presence. Following waves may be staffed, depending on the location and identity of the target and thus the potential value to be obtained by a greater investment of (scarce) expertise by the attacker.

Such attacks can be deterred quite simply by using secure (non-default) configurations, system patching, and basic security defenses such as a firewall and anti-virus. This explains the repeated exhortations of security pundits about “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.

The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost, so as to at least identify soft, high-value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of the cost by using basic best practices.

Guidance such as the SANS Critical Controls or lessons from Verizon’s annual data breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, do not collect the logs that would expose the attacker’s actions, and certainly never actually look at the logs.
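Actually looking is not hard. A minimal sketch over a standard sshd auth log (the log path varies by distribution) that surfaces scripted login attempts:

from collections import Counter

FAILED = "Failed password for"

def noisy_sources(lines, threshold=5):
    # count failed logins per source IP; a burst from one address
    # within minutes is the signature of a scripted attack
    hits = Counter()
    for line in lines:
        if FAILED in line and " from " in line:
            ip = line.split(" from ")[1].split()[0]
            hits[ip] += 1
    return [(ip, n) for ip, n in hits.most_common() if n >= threshold]

with open("/var/log/auth.log") as f:   # path varies by distribution
    for ip, n in noisy_sources(f):
        print(f"{ip}: {n} failed logins")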

Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.



Hackers: What they are looking for and the abnormal activities you should be evaluating

Most hackers go after critical data by way of credential theft. A credential theft attack is one in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: an account with highly sensitive data attached, and with access to accounts and secure data that many others within the organization probably don’t have.

It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:
• Logging on to unsecured computers with privileged accounts (see the sketch after this list)
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers
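One concrete hunt follows from the first item in the list above (a sketch, not EventTracker’s implementation): flag successful logons (Windows event ID 4624) by privileged accounts on machines outside a sanctioned set. The file name, column names, and account lists below are all hypothetical:

import csv

# Hypothetical CSV export of Security log events; EventID, AccountName
# and Computer are assumed column names from your export tool.
PRIVILEGED = {"Administrator", "da_jsmith"}      # your privileged accounts
ADMIN_HOSTS = {"PAW-01", "PAW-02"}               # sanctioned admin workstations

with open("security_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if (row["EventID"] == "4624"             # successful logon
                and row["AccountName"] in PRIVILEGED
                and row["Computer"] not in ADMIN_HOSTS):
            print("privileged logon outside admin workstations:", row)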

There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.

These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Pass-the-hash (PtH) and other credential theft attacks are ubiquitous today because freely available tooling makes it simple and easy to extract the credentials of other privileged accounts once an attacker has gained Administrator- or SYSTEM-level access to a computer.

Even without such tooling, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots, and clipboard contents. An attacker with privileged access can also disable anti-malware software, install rootkits, modify protected files, or install malware that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce not only the attack surface of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.



EventTracker’s SIEM Simplified team lead offers practical ways to analyze login and pre-authentication failures

Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:

1) Learn how to identify login events and their descriptions. A great resource for finding event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

2) Identify and examine the event description. To analyze events efficiently and effectively, you must analyze the event description. Within a login failure description, details like the failure reason, user name, logon type, workstation name, and source network address are critical to your investigation and analysis. By knowing which details to pay attention to, you can easily eliminate the noise.

When using a system like EventTracker, a display of just the required fields eliminates the noise and shows you the error results immediately. EventTracker provides a summary of the total number of events by failure type and user name, automating the surfacing of your systems’ critical information.
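The same summarization can be sketched outside the product over a hypothetical CSV export of login failure events (event ID 4625); the file and column names are assumptions:

import csv
from collections import Counter

# Summarize login failures by failure reason and user name.
by_reason, by_user = Counter(), Counter()

with open("login_failures.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_reason[row["FailureReason"]] += 1
        by_user[row["AccountName"]] += 1

print("Top failure reasons:", by_reason.most_common(5))
print("Top failing accounts:", by_user.most_common(5))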

Using such a tool will help your enterprise run more efficiently and effectively in the analysis of the hundreds of events that happen every day; doing this without the help of a management and monitoring tool is nearly impossible.

Please reference here for detailed charts.



Simplify SIEM with services

To support security, compliance, and operational requirements, specific and fast answers to the four W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed, one-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility and the attendant effectiveness are made possible by centralizing the collection, analysis, and storage of log and other security data from sources throughout the enterprise network.

To obtain value from your SIEM solution, it must be watered and fed; this is an eternal commitment, whether your team chooses to do it yourself or get someone to do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.

“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organization will continue to own their SIEM tools.” –Anton Chuvakin, Gartner Analyst