Archive

Top 5 Linux log file groups in /var/log

If you manage any Linux machines, it is essential to know where the log files are located and what they contain. Such files usually live in /var/log. Logging is controlled by the syslog daemon's configuration file (for example, /etc/rsyslog.conf or /etc/syslog.conf).

Some log files are distribution specific, and this directory can also contain logs from applications such as Samba, Apache, lighttpd, and mail servers.

From a security perspective, here are 5 groups of files that are essential. Many other files are generated and will be important for system administration and troubleshooting. A short command sketch for reading the binary login files follows the list.

1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. Several facilities log to /var/log/messages, including mail, cron, daemon, kern, and auth.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used.
b) /var/log/lastlog – Records the most recent login for each user. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file, for example “last -f /var/log/btmp | more”.
d) /var/log/wtmp or /var/log/utmp – Contain login records. wtmp keeps a history of logins and logouts, while utmp tracks who is currently logged in; the who command reads utmp to display this information.
e) /var/log/faillog – Contains user failed login attempts. Use the faillog command to display the contents of this file.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information logged when a package is installed or removed using the dpkg command
b) /var/log/yum.log – Contains information logged when a package is installed using yum

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer- and printing-related log messages
c) /var/log/cron – Whenever the cron daemon (or anacron) starts a cron job, it logs information about the job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System
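
The lastlog, btmp, wtmp, utmp and faillog files above are binary databases, so the commands named in the list are used to read them. A minimal sketch, assuming a typical Debian- or RHEL-style layout (file names vary by distribution):

lastlog                        # last login per user, from /var/log/lastlog
last -f /var/log/wtmp | head   # login/logout history
last -f /var/log/btmp | more   # failed login attempts
faillog -a                     # failure counts per user, from /var/log/faillog
who                            # currently logged-in users, from utmp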

Happy Logging!

Seven Habits of Highly Fraudulent Users

This post, Seven Habits of Highly Fraudulent Users, from Izzy at SiftScience describes patterns culled from 6 million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might point to a script which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to create multiple accounts on a laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud. A user who has 6 accounts on her laptop is 15 times more likely to be fraudulent than the average person, while users with only 1 account are less likely to be fraudulent. SOC teams should look for multiple users using the same computer in a given time frame; a rough command-line sketch follows. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for more than one user to access a PC in a given shift.
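
As a rough illustration of that hunt, the sketch below counts distinct accounts per device from a hypothetical export of (device_id, user_account) pairs. The file name, field layout, and the threshold of three accounts are assumptions to adapt to your own data:

# Unique (device, account) pairs -> distinct accounts per device -> flag
# any device with more than three accounts (threshold is illustrative).
cut -d, -f1,2 transactions.csv | sort -u | cut -d, -f1 | sort | uniq -c |
  awk '$1 > 3 {print $2, "has", $1, "accounts"}'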

Habit #5 Fraudsters use well known domains

The top 3 sources of fraud are Microsoft email domains: outlook.com, hotmail.com and live.com. Traffic from or to such domains is worth trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address. The more numbers, the more likely that it’s fraud.
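
As a trivial illustration of that feature, the digit count of an address can be computed with standard tools (the address here is made up):

email="jsmith842917@example.com"
echo "$email" | grep -o '[0-9]' | wc -l    # prints 6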

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account is, the less likely (in general) it is to be involved in fraud. SOC teams should therefore keep a close eye on newly created accounts.

Good hunting.

EventTracker and POODLE

Summary:
• All systems and applications utilizing the Secure Socket Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable depending on the configuration of Apache.

1. Poodle Scan can be used to test if your server is vulnerable; a quick openssl check is also sketched after these notes.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the SSL 3.0 vulnerability.
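
One quick, hedged way to check a server yourself is to attempt an SSLv3-only handshake with openssl. The -ssl3 option is only present in OpenSSL builds that still include SSLv3 support, and the host name below is a placeholder:

# A completed handshake means SSL 3.0 is still enabled on the server;
# a handshake failure or connection error suggests it has been disabled.
openssl s_client -connect your.server.example:443 -ssl3 < /dev/null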

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Cyphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat; run this file as Administrator. This file executes the following commands:

REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f
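
For the CentOS-based ETIDS and ETVAS appliances mentioned above, the equivalent change is to restrict Apache's mod_ssl to TLS only. This is a minimal sketch, not the official procedure: the stock CentOS 6 configuration path is assumed, so verify the file location and the existing SSLProtocol directive before running it.

# Disable SSLv2/SSLv3 in mod_ssl and restart Apache (CentOS 6 paths assumed).
sudo sed -i 's/^SSLProtocol.*/SSLProtocol all -SSLv2 -SSLv3/' /etc/httpd/conf.d/ssl.conf
sudo service httpd restart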

Laying Traps for External Information Thieves

Wouldn’t it be nice if you could detect when an external threat actor, who’s taken over one of your users’ endpoints, goes on a poaching expedition through all the information that user has access to on your network?

Easier said than done, right? After all, when malware is running on an endpoint, anything it does shows up as being performed by that user. How high really are your chances of recognizing those events as being different from the user’s normal behavior? It’s not impossible. Perhaps you are monitoring for failed access attempts on certain folders, and perhaps the attacker inadvertently generates a number of such attempts as he masquerades as the hijacked user. But how often do legitimate users accidentally trip the same events? And more to the point, where are the files stored? You can monitor failed access attempts on the Windows file system, but SharePoint’s audit log has no access-failure events because SharePoint never gives you the opportunity to see, much less try to access, any object you don’t already have permissions to.

But there is another way to detect information theft attempts that is surprisingly reliable, though let me be clear that this method does require some up-front preparation and cooperation from end users and system admins. You’ll only be successful implementing this method if management is serious about detecting late-stage information theft attacks by outsiders. That’s another important fact about this technique: it catches outsiders – not insiders who have had the training required to make this strategy work. So here goes.

The idea is to lay traps throughout your unstructured data repositories (i.e., SharePoint document libraries, file shares). These traps are extra folders intermingled among the normal folders containing the real information you are trying to protect. Obviously this is non-traditional, so you’ll need management support if server admins or end users object. But there’s no real downside to dropping a folder here or there with a naming convention that an informed insider recognizes as a red herring and knows to stay out of. That’s where the end-user training and cooperation comes in. Users who forget and access these red herrings will cause false positives in our monitoring logic.

How does this catch the evil outsider? Well, provided you do your training correctly, an outsider masquerading as one of your users on the network will have no idea that these red herrings exist or what to look for. As they explore your network they will inadvertently stumble onto one of these red herring folders or libraries and trip the alarm.

Notice that I said “provided you do your training correctly”.  For example, you don’t want to post information about “red herring objects, how to recognize them and to avoid accessing them” on your organization intranet or email it to your end users.  Better to cover it in onboarding meetings and occasionally when your security officer sits in on department staff meetings.  That’s called “out of band” communication and is necessary so that external attackers don’t learn about your red herrings program and thus know to avoid them.

Come up with a naming convention that identifies any red herring folder or document library with a keyword or character sequence. This will be the cue to your users to “stay out”. And you can easily configure your SIEM to tell you whenever it sees any activity involving access to objects with the same watch code. Let’s say the watch code is DA42 and you create a number of folders, document libraries, etc. throughout your network with that code in the name. After you create filters for indexers, malware scanners and backup accounts, if you see any such activity – such as Bob accessing “Secret Formulas DA42” – you know that either 1) Bob forgot and accessed the red herring, or 2) a bad guy is running amok on your network.
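
As a rough illustration of the monitoring side, the sketch below scans an exported access log for the watch code while filtering out the accounts expected to touch everything. The log file name, format, and service-account names are assumptions, and a real deployment would express this as a SIEM rule rather than a script:

# Flag any access to objects carrying the DA42 watch code, ignoring
# known indexer, antivirus, and backup service accounts.
grep -i 'DA42' file_access_export.log | grep -vE 'svc_index|svc_avscan|svc_backup'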

I’m a real believer in this “active” method of late-stage information theft detection. But if it’s too much for you to bite off right now, then you need a SIEM with the best possible behavior analysis and anomaly detection. It’s good to use such technology regardless. Check out EventTracker’s functionality in this area.

EventTracker Search Performance

EventTracker 7.6 is a complex software application, and while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (called the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. When a search is performed, the speed with which results are returned depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
Dense – at least one result per thousand (1,000) events
Sparse – at least one result per million (1,000,000) events
Rare – at least one result per billion (1,000,000,000) events
Needle in a haystack – one event in more than a billion events

Based on the provided search criteria, EventTracker consults the indexing metadata to determine whether, and in which archives, events matching the search terms exist. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is plenty of raw data to decompress. For the fastest possible response on default hardware, EventTracker limits the returned results to the first 200 (sorted by time, newest on top). This setting can of course be overridden, but it is provided because it satisfies the most common use case.

As the events containing the search term thin out to one in a hundred thousand (100,000), performance becomes more I/O bound. The reason is that there is less and less matching data, but more and more index files have to be consulted.

I/O performance is measured as latency, which is the time delay from when a disk I/O request is created until the disk I/O request is completed by the underlying hardware. Windows perfmon can measure this as the Avg. Disk sec/Transfer counter. A rule of thumb is to keep this below 25 milliseconds for best I/O performance.

This can be realized in various ways:
– Having different drives (spindles) for the OS/program and the archives
– Using faster disk (15K RPM performs better than 7200 RPM disks)
– Using a SAN

In larger installations with multiple Virtual Collection Points (VCP), dedicating a separate disk spindle to each VCP can help.

Nineteen Minutes In April

On April 16, 2013, a sniper took a hundred shots at Pacific Gas and Electric’s (PG&E) Metcalf electric power transformer station. The utility was able to reroute power on the grid and avert a blackout. The whole ordeal took nineteen tension-filled minutes.

The event added muscle to the regulatory grip of the North American Electric Reliability Corporation (NERC) – a not-for-profit entity whose mission is to ensure the reliability of the bulk power system in North America. A terrorist attack, domestic or otherwise, could bring a power grid down. NERC’s job is to regulate bulk power systems to safeguard against this and many other scenarios. For the bulk power industry, regulation stands to protect, but it poses a challenge to organizations that must stay compliant. NERC develops and enforces reliability standards for the bulk power industry and annually assesses seasonal and long-term reliability.

Entities under NERC’s jurisdiction are users, owners and operators of the bulk power system. NERC works with all stakeholders to develop standards for power system operation. It monitors and enforces compliance with those standards, assesses resource adequacy, and provides educational and training resources as part of an accreditation program to ensure power system operators remain qualified and proficient. It also investigates and analyzes the causes of significant power system disturbances, such as the one at Metcalf, to help prevent future events. The North American Electric Reliability Corporation Critical Infrastructure Protection (“NERC CIP”) plan is a set of requirements designed to secure the assets required for operating North America’s bulk electric system.

NERC CIP standards and requirements include electronic security perimeters and protection of critical cyber assets, as well as specific requirements for personnel, training, security management and disaster recovery. Companies regulated by NERC CIP must monitor, track and audit equipment and operations to comply with the broad set of requirements that NERC invokes.

Today’s business climate is one where the playing field is often heavily regulated. We see this in the financial services, healthcare, food and drug, automotive, insurance, and consumer products industries. The 2013 Gartner CEO Survey noted that the second-ranked overall business risk is regulatory change, which requires a punishing regimen of regulatory compliance followed by more compliance. Such requirements force business leaders to respond while trying to compete in an environment of limited resources. Without the right tools, controls and robust monitoring in place, regulatory compliance can be burdensome and near impossible.

SIEM platforms, which provide monitoring, tracking and auditing controls, are necessary to meet NERC CIP requirements.

Firstly, SIEM systems aid in the discovery of critical cyber devices and assets, and assign a value to their criticality. But identification is not enough. The assets’ configuration must be checked for security measures to ensure that those measures are in place. A SIEM solution reports on such features and controls and enables management to drive decisions and stay compliant.

NERC CIP also requires that the electronic security perimeter be kept intact by controlling access points and reporting incident alerts via a dashboard for monitoring. The SIEM system provides some real horsepower by quantifying security and risk reduction for the assets within the electronic perimeter and the wider electrical playing field. It assigns values to various factors and provides a security posture at a glance. This also covers other stakeholders who may move in and out of the perimeter, such as vendors and other third parties. Logs and event data are aggregated and reported in logbooks for the status of all salient information. In addition, the change audit features of a SIEM solution can provide a “way back”, or means of recovery, using change management to diagnose and prevent disaster. Again, a concise dashboard equals broad control.

SIEM systems can help organizations meet the oncoming NERC CIP requirements and stay compliant in an industry where nineteen minutes of vulnerability could have had disastrous consequences.

The Data Scientist Unicorn

An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge” is not practical for most organizations. Instead, how about an effort to deliver tangible, data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM, you ask?

Well, for the typical small/medium enterprise (SME), this is a familiar dilemma. Data, data everywhere and not a drop (of intelligence) to drink. Either the “data scientist” is not on the employee roster, or does not have the time available. How then do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to mine the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean that doing nothing is a valid option.