Are honeypots illegal?

In computer terminology, a honeypot is a computer system set up to detect, deflect, or in some manner counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network and seems to contain information or a resource of value to attackers, but it is actually isolated and monitored.

Lance Spitzner covers this topic from his (admittedly) non-legal perspective.

Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have the misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition, is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.” Since most honeypot operators are neither law-enforcement officers nor government agents, and a honeypot does not induce anyone to attack it, the doctrine generally does not apply.

Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot. The key here is the Service Provider Protection exemption: security technologies may collect information on people (including attackers) as long as the technology is being used to protect or secure the environment. In other words, such technologies are exempt from privacy restrictions. For example, an IDS sensor that captures network activity does so to detect, and thus enable organizations to respond to, unauthorized activity. Such a technology is most likely not considered a violation of privacy, because it is being used to help protect the organization, so it falls under the Service Provider Protection exemption. Honeypots that are used to protect an organization would fall under this exemption as well.

Does it expose us to liability?
Liability is not a criminal issue but a civil one. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those systems may sue, arguing that had you taken proper precautions to keep your systems secure, the attacker could not have used your honeypot to damage theirs, and so you share the fault for the damage. The issue of liability is one of risk. Any security technology you deploy (even one without an IP stack) comes with risk; numerous vulnerabilities have been discovered in firewalls, IDS systems, and network sniffers. Honeypots are no different.

Obviously this blog entry is not legal advice and should not be construed as such.



SIEM or Log Management?

Security Information and Event Management (SIEM) is a Gartner-coined term describing solutions that monitor and help manage user and service privileges, directory services, and other system-configuration changes, in addition to providing log auditing and review and incident response.

SIEM differs from log management, which refers to solutions that deal with large volumes of computer-generated log messages (also known as audit records, event logs, etc.).

Log management is aimed at general system troubleshooting or incident response support. The focus is on collecting all logs for various reasons. This “input-driven” approach tries to get every possible bit of data.

This model fails with SIEM-focused solutions. Opening the floodgates, admitting any and all log data into the tool first, and only then considering what (if any) use there is for the data, reduces tool performance as the tool struggles to cope with the flood. Preferable is an “output-driven” model, where data is admitted if and only if its usage is defined. That usage can include alerts, dashboards, reports, behavior profiling, threat analysis, and so on.
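As a hypothetical illustration of the output-driven model (the source names and uses below are invented for the example, not taken from any particular product), a collector would admit a log source only when at least one downstream use has been defined for it:

# Sketch of an "output-driven" admission policy: a log source is ingested
# only if at least one downstream use (alert, dashboard, report, ...) has
# been defined for it. Source names and uses are hypothetical.
DEFINED_USES = {
    "firewall": ["alerts", "threat_analysis"],
    "windows_auth": ["alerts", "reports", "behavior_profiling"],
    "dns": ["dashboards"],
    # "netflow" has no entry yet, so it is deferred, not ingested blindly.
}

def admit(source):
    # Admit a source only when its usage is defined (output-driven).
    return bool(DEFINED_USES.get(source))

for source in ["firewall", "netflow", "dns"]:
    print(source, "->", "ingest" if admit(source) else "defer until a use is defined")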

Buying a SIEM solution and using it as log management tool is a waste of money. Forcing a log management solution to act like a SIEM is folly.

Want to know more? Check out our white paper “How to Succeed at SIEM,” featuring original research from Gartner’s Security & Risk Management Summit, to learn what tools and skills you need to make a SIEM implementation successful.



The Seven Habits of Highly Fraudulent Users

This post, “Seven Habits of Highly Fraudulent Users,” from Izzy at SiftScience describes patterns culled from 6 million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions. These patterns are useful to Security Operations Center (SOC) teams who “hunt” for such behavior.

Here are some of the common traits of fraudulent users or Fraudsters:

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might point to a script which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.
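A minimal Python sketch of how a SOC might surface these first two habits from transaction data (the timestamp field and record layout are assumptions, not SiftScience’s actual schema):

from collections import Counter
from datetime import datetime

# Each transaction is (ISO timestamp, is_fraud); the layout is hypothetical.
transactions = [
    ("2014-06-01T03:12:44", True),
    ("2014-06-01T12:30:10", False),
    ("2014-06-01T03:40:02", True),
    ("2014-06-01T14:05:33", False),
]

total = Counter()
fraud = Counter()
for ts, is_fraud in transactions:
    hour = datetime.fromisoformat(ts).hour
    total[hour] += 1
    if is_fraud:
        fraud[hour] += 1

# Fraud as a percentage of all transactions per hour; per the post, 3AM and
# the absence of a lunchtime dip are the signals to hunt for.
for hour in sorted(total):
    rate = 100.0 * fraud[hour] / total[hour]
    print(f"{hour:02d}:00  {rate:5.1f}% fraudulent")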

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to make multiple accounts on their laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud: a user with six accounts on her laptop is 15 times more likely to be fraudulent than the average person, while users with only one account are less likely to be fraudulent. SOC teams should look for multiple users using the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for more than one user to access a PC in a given shift.
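A sketch of this device fan-out check in Python (the device identifiers below are made up; in practice they would come from your fraud or endpoint tooling):

from collections import defaultdict

# Observed (device, account) associations; the identifiers are hypothetical.
events = [
    ("laptop-a1", "alice"), ("laptop-a1", "bob"), ("laptop-a1", "carol"),
    ("laptop-a1", "dan"), ("laptop-a1", "erin"), ("laptop-a1", "frank"),
    ("phone-b2", "grace"),
]

accounts_by_device = defaultdict(set)
for device, account in events:
    accounts_by_device[device].add(account)

for device, accounts in accounts_by_device.items():
    if len(accounts) >= 6:  # the post's example: 6 accounts ~ 15x baseline risk
        print(f"REVIEW {device}: {len(accounts)} accounts {sorted(accounts)}")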

Habit #5 Fraudsters use well known domains

The top three sources of fraud originated from Microsoft domains, including Outlook.com, Hotmail, and Live.com. Traffic from or to such domains is worth trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address: the more digits, the more likely it’s fraud.

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It is also true that the older an account, the less likely (in general) it is to be involved in fraud. SOC teams must always look out for account creation.
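Habits #6 and #7 reduce to two cheap features: the number of digits in the email address and the age of the account. A toy scoring sketch in Python (the weights and thresholds are invented for illustration, not taken from the post):

from datetime import date

def risk_score(email, created, today):
    # Toy score: digit-heavy addresses (habit #6) and brand-new accounts
    # (habit #7) raise the score. Weights and thresholds are invented.
    digits = sum(ch.isdigit() for ch in email.split("@")[0])
    score = digits * 2
    if (today - created).days < 7:
        score += 10
    return score

print(risk_score("xk7742991@example.com", date(2014, 6, 1), date(2014, 6, 2)))  # 24
print(risk_score("jane.doe@example.com", date(2011, 1, 15), date(2014, 6, 2)))  # 0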

Happy Hunting.

By A. N. Ananth, CEO of EventTracker

EventTracker offers a dynamic suite of award-winning products for SIEM and event log management. SC Magazine Best Buy EventTracker Enterprise processes hundreds of millions of discrete log messages to deliver vital and actionable information, enabling organizations to identify and address security risks, improve IT security, and meet regulatory compliance requirements with simplified audit functionality. Security Center offers instant security alerts and a real-time dashboard for viewing every incident in the infrastructure, and Log Manager is a monitoring and early threat detection tool.



The Security Risks of Industry Interconnections

2014 has seen a rash of high-profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target, and Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford, among others. Hackers were able to steal hundreds of millions of credit and debit cards; from the information disclosed, this accounted for 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels, and 56 million from Home Depot.

The Identity Theft Resource Center (ITRC) reports that to date in 2014, 644 security breaches have occurred, an increase of 25.3 percent over last year. The majority of breaches by far targeted payment card data, along with personal information such as social security numbers and email addresses, and personal health information; the ITRC estimates that over 78 million records were exposed.

Malware installed using third-party credentials was found to be among the primary causes of the breaches in post-incident analysis. Banks and financial institutions are critically dependent on their IT infrastructure and are constantly exposed to attack because of Sutton’s Law: asked why he robbed banks, Willie Sutton reputedly answered, “because that’s where the money is.” Networks are empowering because they allow us to interact with employees, customers, and vendors. However, industry partners often take a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnections is a favorite tactic of attackers. After all, a frontal brute-force attack on a well-defended large corporation’s doors is unlikely to be successful.

The Weak Link

Attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are often proud of their large customers and keen to highlight the connection. Likewise, companies often publish a surprising amount of vendor-oriented information on public sites that require no login. This makes the attacker’s first step, researching the target and its industry interconnections, that much easier.

The next step is to compromise the subcontractor’s network and find employee data. Social networking sites like LinkedIn are a boon to attackers, used to build lists of IT admin and management staff who are likely to be privileged users. In West Virginia, state agencies were victims when malware infected the computers of users whose email addresses ended with @wv.gov. The attacker then gains access to the contractor’s privileged users’ workstations, and from there breaches the final target. In one retailer breach, the network credentials given to a heating, air conditioning, and refrigeration contractor were stolen after hackers mounted a phishing attack and successfully lodged malware in the contractor’s systems, two months before they attacked the retailer, their ultimate target.

Good Practices, Good Security

Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors, and clients, or anyone else who has access to their networks. A Fortune 500 company has the resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?

Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices part of due diligence. Before network access is granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications, and its control of those, can help the enterprise develop an effective vendor management profile. Furthermore, it gives the enterprise visibility into what information will be shared and who has access to it.

Controlled Access: Third-party access should be restricted and compartmentalized to a segment of the network, with access to other assets blocked. Likewise, the organization can require that vendors and other third parties use particular technologies for remote access, enabling the enterprise to catalog which connections are being made to the network.

Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behaviors which can presage the beginning of a breach. (A small baselining sketch follows this list.)

Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.
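As a sketch of the baselining idea from the Active Monitoring item above (the data layout and the three-hour slack are assumptions): learn each user’s typical login hours from history, then flag logins far outside that range.

from collections import defaultdict

# (user, login_hour) pairs harvested from authentication logs; data is made up.
history = [("alice", 9), ("alice", 10), ("alice", 9), ("bob", 14), ("bob", 15)]

baseline = defaultdict(list)
for user, hour in history:
    baseline[user].append(hour)

def is_unusual(user, hour, slack=3):
    # Flag a login more than `slack` hours outside the user's observed range.
    hours = baseline.get(user)
    if not hours:
        return True  # never-before-seen user: worth a look
    return hour < min(hours) - slack or hour > max(hours) + slack

print(is_unusual("alice", 3))   # True: 3AM is far from alice's 9-10AM norm
print(is_unusual("bob", 14))    # False: within bob's normal window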

What next?

Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.



SIEM is Sunlight

Security Information and Event Management (SIEM) refers to technology that provides real-time analysis of security alerts generated by network hardware and applications. SIEM works by gathering, analyzing and presenting information from a variety of sources of such information across the enterprise network including network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data.

All compliance frameworks, including PCI-DSS, HIPAA, FISMA, and NERC, call for the implementation and regular use of SIEM technology. The absence of regular use is noted as a major factor in post-mortem analyses of IT security incidents.

Why is this the case? It’s because SIEM, when implemented properly, gathers security data from all the nooks and crannies of the enterprise network. When this information is collated and presented well, an analyst is able to see what is happening, what happened, and what is different.

It’s akin to letting in the sunlight to all corners and hidden places. You can see better, much better.

You can’t fix what you can’t see and don’t know. Knowledge of the goings-on in the various parts of the network, in real-time when possible, is the first step towards building a meaningful security defense.

In a 1913 article in Harper’s Weekly, Louis Brandeis (later a Supreme Court Justice) wrote that “sunlight is said to be the best of disinfectants.” SIEM is sunlight for your network.



Three key advantages for SIEM-As-A-Service

Security Information and Event Management (SIEM) technology is an essential component in a modern defense-in-depth strategy for IT security. SIEM is described as such in every best-practice recommendation from industry groups and security pundits. The absence of SIEM is repeatedly noted in Verizon’s Data Breach Investigations Report as a factor in the late discovery of breaches. Indeed, attackers most often succeed against soft targets where defenders do not review log and other security data. In addition, regulatory compliance standards such as PCI-DSS, HIPAA, and FISMA specifically require that SIEM technology be deployed and, more importantly, actively used.

This last point (“be used actively”) is the Achilles’ heel for many organizations; as is often noted, “security is something you do, not something you buy.” Organizations large and small struggle to assign staff with the necessary expertise and to maintain the discipline of periodic log review.

New SIEM-As-A-Service options

SIEM Simplified services are available for buyers that cannot leverage traditional on-premises, self-serve products. In such models, the vendor assumes responsibility for as much (or as little) of the heavy lifting as the user desires, including installation, configuration, tuning, periodic review, updates, and responding to incident investigation or audit support requests.

Such offerings have three distinct advantages over the traditional self-serve, on-premises model.

1) Managed service delivery: The vendor is responsible for the most fragile and difficult-to-get-right aspects of a SIEM deployment: installation, configuration, tuning, and periodic review of SIEM data. This can also include upgrades, performance management for speedy response, and updates to security threat intelligence feeds.
2) Deployment options: In addition to the traditional on-premises model, such services usually offer cloud-based, managed hosted, or hybrid solutions. Options for host-based agents and/or premises-based collectors/sensors allow great flexibility in deployment.
3) Utility pricing: In contrast to traditional perpetual models that require capital expenditure and front-loading, SIEM-As-A-Service follows the utility model, with usage-based pricing and monthly expenditure. This is friendly to operational expenditure budgets.

SIEM is a core technology in the modern IT Enterprise. New As-A-Service deployment models can increase adoption and value of this complex monitoring technology.



Top 5 Linux log file groups in /var/log

If you manage any Linux machines, it is essential that you know where the log files are located and what they contain. Such files are usually in /var/log. Logging is typically controlled by the syslog daemon’s configuration file (e.g., /etc/rsyslog.conf or /etc/syslog.conf, depending on the distribution).

Some log files are distribution specific, and this directory can also contain logs from applications such as Samba, Apache, lighttpd, and mail servers.

From a security perspective, here are five groups of files which are essential; a short sketch after the list shows one way to put them to work. Many other files are generated and will be important for system administration and troubleshooting.

1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. There are several things that are logged in /var/log/messages including mail, cron, daemon, kern, auth, etc.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used.
b) /var/log/lastlog – Records the most recent login for each user. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file, for example, “last -f /var/log/btmp | more”.
d) /var/log/wtmp or /var/log/utmp – Contain login records: utmp tracks who is currently logged into the system, while wtmp keeps a historical record of logins and logouts. The who command reads utmp; use last to read wtmp.
e) /var/log/faillog – Contains failed user login attempts. Use the faillog command to display its contents.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information logged when a package is installed or removed using the dpkg command
b) /var/log/yum.log – Contains information logged when a package is installed using yum

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer and printing related log messages
c) /var/log/cron – Whenever cron daemon (or anacron) starts a cron job, it logs the information about the cron job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System
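As promised above, here is a minimal Python sketch that puts group 2 to work by tallying failed SSH logins by source IP. The message format varies by distribution, so treat the pattern and file path below as starting points to adapt:

import re
from collections import Counter

# A typical OpenSSH failure line looks like:
#   "Failed password for invalid user admin from 203.0.113.9 port 4242 ssh2"
# The exact format varies by distribution; adjust the pattern as needed.
PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open("/var/log/auth.log") as log:  # use /var/log/secure on RHEL/CentOS
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common(10):
    print(f"{count:6d} failed logins from {ip}")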

Happy Logging!



EventTracker and Poodle

Summary:
• All systems and applications utilizing the Secure Socket Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable, depending on the configuration of Apache.

• Poodle Scan can be used to test whether your server is vulnerable.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the SSL 3.0 vulnerability.

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.
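One way to check whether a server still negotiates SSL 3.0 is to attempt an SSLv3-only handshake, as in the Python sketch below. Note the assumption: many modern Python builds compile out ssl.PROTOCOL_SSLv3 entirely, in which case this raises AttributeError and you should use a dedicated scanner such as the Poodle Scan site mentioned above.

import socket
import ssl

def accepts_sslv3(host, port=443):
    # Returns True if the server completes an SSLv3-only handshake.
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv3)  # absent on many newer builds
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True   # handshake succeeded: SSL 3.0 is still enabled
    except (ssl.SSLError, OSError):
        return False          # server refused SSLv3: good

print(accepts_sslv3("example.com"))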

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Cyphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat, then run the file as Administrator. It executes the following commands:

REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f
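After running the batch file, you can spot-check one of the keys with a short Python snippet (winreg is part of the standard library on Windows; the path mirrors the SSL 3.0 Server key set above):

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, value_type = winreg.QueryValueEx(key, "Enabled")
    # A REG_DWORD value of 0 means SSL 3.0 is disabled for the server side.
    print("SSL 3.0 server Enabled =", value)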



EventTracker Search Performance

EventTracker 7.6 is a complex software application, and while there is no easy formula to compute its performance, there are ways to configure and use it to get better performance. All data received, whether in real time or by file ingest (called the Direct Log Archiver), is first indexed and then archived for optimal disk utilization. Search performance depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
– Dense: at least one result per thousand (1,000) events
– Sparse: at least one result per million (1,000,000) events
– Rare: at least one result per billion (1,000,000,000) events
– Needle in a haystack: one event in more than a billion events
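To make the categories concrete, classification is just the ratio of matching events to events scanned. A small Python sketch with boundaries that follow the list above:

def classify_search(matches, events_scanned):
    # Bucket a search by hit density, per the categories above.
    if events_scanned == 0 or matches == 0:
        return "no results"
    density = matches / events_scanned
    if density >= 1 / 1000:
        return "dense"                 # CPU bound: plenty of data to decompress
    if density >= 1 / 1000000:
        return "sparse"
    if density >= 1 / 1000000000:
        return "rare"
    return "needle in a haystack"      # I/O bound: many index files consulted

print(classify_search(5, 1000))         # dense
print(classify_search(2, 500000000))    # rare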

Based on the search criteria provided, EventTracker consults indexing metadata to determine whether, and in which archive, events matching the search terms are stored. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is plenty of raw data to decompress. For the fastest possible response on default hardware, EventTracker limits the returned results to the first 200 (sorted by time, newest on top). This setting can of course be overridden, but it is provided because it satisfies the most common use case.

As the frequency of events containing the search term drops to one in a hundred thousand (100,000) or less, performance becomes more I/O bound: there is less and less data to decompress, but more and more index files must be consulted.

I/O performance is measured as latency, the time delay from when a disk I/O request is created until it is completed by the underlying hardware. Windows perfmon can measure this as Avg. Disk sec/Transfer; a rule of thumb is to keep it below 25 milliseconds for best I/O performance.

This can be realized in various ways:
– Having different drives (spindles) for the OS/program and the archives
– Using faster disk (15K RPM performs better than 7200 RPM disks)
– Using a SAN

In larger installations with multiple Virtual Collection Points (VCP), dedicating a separate disk spindle to each VCP can help.