What is a Stolen Credit Card Worth?

Solution Providers for Retail
Guest blog by A.N. Ananth

Cybercrime and credit card theft have been hot topics all year. From the Target breach to Sony, the classic motivation for cybercriminals is profit. So how much is a stolen credit card worth?

The answer to this question matters because profit is the central motivation of the criminal. If you could make it more expensive for a criminal to steal a card than the thief would gain by selling it, the attackers would find an easier target. That is what being a hard target is all about.

This article suggests prices of $35-$45 for a stolen credit card, depending upon whether it is a platinum or corporate card. It is also worth noting that the viable lifetime of a stolen card is at most one billing cycle. After this time, the rightful owner will most likely detect its loss, or the bank’s fraud monitoring will pick up irregularities and terminate the account.

Why is a credit card with a high spending limit (say $10K) worth only $35? It is because monetizing a stolen credit card is difficult and requires a lot of expensive effort on the part of the criminal. That is contrary to the popular press, which suggests that cybercrime results in easy billions. At the Workshop on Economics of Information Security, Herley and Florencio showed in their presentation, “Sex, Lies and Cybercrime Surveys,” that widely circulated estimates of cybercrime losses are wrong by orders of magnitude. For example:

Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N = 1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion. …Cyber-crime losses follow very concentrated distributions where a representative sample of the population does not necessarily give an accurate estimate of the mean. They are self-reported numbers which have no robustness to any embellishment or exaggeration. They are surveys of rare phenomena where the signal is overwhelmed by the noise of misinformation. In short they produce estimates that cannot be relied upon.
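
To make the quoted arithmetic concrete, here it is as a quick Python sketch; the population figure of roughly 200 million is an assumption, chosen only because it reproduces the quoted totals.

# The survey extrapolation the quote describes: one respondent's claim,
# scaled by population / sample size.
population = 200_000_000   # assumed, consistent with the quoted totals
sample_size = 1_000

print(50_000 * population / sample_size)  # 1e10  -> the "$10 billion" figure
print(7_500 * population / sample_size)   # 1.5e9 -> the "$1.5 billion" figure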

That’s a rational, fact-based explanation of why the most basic information security measures are unusually effective in most cases. Pundits have been screaming this from the rooftops for a long time. What are your thoughts?

Read more at Solution Provider for Retail guest blog.

Are honeypots illegal?

In computer terminology, a honeypot is a computer system set up to detect, deflect, or in some manner counteract attempts at unauthorized use of IT systems. Generally, a honeypot appears to be part of a network and seems to contain information or a resource of value to attackers, but is actually isolated and monitored.

Lance Spitzner covers this topic from his (admittedly) non-legal perspective.

Is it entrapment?
Honeypots are not a form of entrapment. For some reason, many people have the misconception that if they deploy honeypots, they can be prosecuted for entrapping the bad guys. Entrapment, by definition, is “a law-enforcement officer’s or government agent’s inducement of a person to commit a crime, by means of fraud or undue persuasion, in an attempt to later bring a criminal prosecution against that person.” Since most honeypot operators are neither law-enforcement agents nor inducing anyone to commit a crime, the concern is misplaced.

Does it violate privacy laws?
Privacy laws in the US may limit your right to capture data about an attacker, even when the attacker is breaking into your honeypot, but the exemption under Service Provider Protection is key. What this exemption means is that security technologies can collect information on people (and attackers), as long as the technology is being used to protect or secure the environment. In other words, such technologies are exempt from privacy restrictions. For example, an IDS sensor that captures network activity is doing so to detect (and thus enable organizations to respond to) unauthorized activity. Such a technology is most likely not considered a violation of privacy, as it is being used to help protect the organization, so it falls under the exemption of Service Provider Protection. Honeypots that are used to protect an organization would fall under the same exemption.

Does it expose us to liability?
Liability is not a criminal issue but a civil one. Liability implies you could be sued if your honeypot is used to harm others. For example, if it is used to attack other systems or resources, the owners of those systems may sue, arguing that had you taken proper precautions to keep your systems secure, the attacker could not have used your honeypot to harm theirs, so you share the fault for the damage done during the attack. The issue of liability is one of risk. Anytime you deploy a security technology (even one without an IP stack), that technology comes with risk: numerous vulnerabilities have been discovered in firewalls, IDS systems and network sniffers. Honeypots are no different.

Obviously this blog entry is not legal advice and should not be construed as such.

SIEM or Log Management?

Security Information and Event Management (SIEM) is a Gartner-coined term describing solutions that monitor and help manage user and service privileges, directory services and other system configuration changes, in addition to providing log auditing and review, and incident response.

SIEM differs from Log Management, which refers to solutions that deal with large volumes of computer-generated log messages (also known as audit records, event logs, etc.).

Log management is aimed at general system troubleshooting and incident response support. The focus is on collecting all logs, for a variety of reasons. This “input-driven” approach tries to get every possible bit of data.

This model fails with SIEM-focused solutions. Opening the floodgates and admitting any and all log data into the tool first, then considering what use (if any) there is for the data, reduces tool performance as it struggles to cope with the flood. Preferable is an “output-driven” model, in which data is admitted if and only if its use is defined. Uses can include alerts, dashboards, reports, behavior profiling, threat analysis, etc.
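
A minimal sketch of the output-driven idea in Python (the event fields and rule names here are hypothetical illustrations, not any particular product’s API):

# An event is admitted only if at least one defined use (alert, report,
# dashboard) claims it; everything else is dropped at the door.
DEFINED_USES = {
    "failed_logon_alert":   lambda e: e.get("event_id") == 4625,
    "firewall_deny_report": lambda e: e.get("action") == "deny",
}

def admit(event):
    return any(rule(event) for rule in DEFINED_USES.values())

incoming = [
    {"event_id": 4625, "host": "dc01"},   # claimed by an alert -> kept
    {"event_id": 5156, "host": "ws07"},   # no defined use      -> dropped
]
print([e for e in incoming if admit(e)])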

Buying a SIEM solution and using it as a log management tool is a waste of money. Forcing a log management solution to act like a SIEM is folly.

The Security Risks of Industry Interconnections

2014 has seen a rash of high-profile security breaches involving theft of personal data and credit card numbers from retailers Neiman Marcus, Home Depot, Target and Michaels, online auction site eBay, and grocery chains SuperValu and Hannaford, among others. Hackers were able to steal hundreds of millions of credit and debit card records; from the information disclosed, this included 40 million cards from Target, 350,000 from Neiman Marcus, up to 2.6 million from Michaels and 56 million from Home Depot.

The Identity Theft Resource Center (ITRC) reports that 644 security breaches have occurred to date in 2014, an increase of 25.3 percent over last year. By far the majority of breaches targeted payment card data, along with personal information like Social Security numbers and email addresses, and personal health information; the ITRC estimates that over 78 million records were exposed.

In post-breach analysis, malware installed using third-party credentials was found to be among the primary causes of the breaches. Banks and financial institutions are critically dependent on their IT infrastructure and are constantly exposed to attacks because of Sutton’s Law (“because that’s where the money is”). Networks are empowering because they allow us to interact with employees, customers and vendors. However, it is often the case that industry partners take a looser view of security and thus may be more vulnerable to being breached; exploiting industry interconnections is a favorite tactic of attackers. After all, a frontal brute-force attack on a well-defended large corporation’s doors is unlikely to be successful.

The Weak Link

Attackers target subcontractors, which are usually small companies with comparatively weaker IT security defenses and minimal cyber security expertise on hand. These small companies are also proud of their large customers and keen to highlight the connection. Likewise, companies often publish a surprising amount of vendor-facing information on public sites where no login is necessary. This makes the attacker’s first step, researching the target and its industry interconnections, that much easier.

The next step is to compromise the subcontractor’s network and find employee data. Social networking sites like LinkedIn are a boon to attackers, who use them to build lists of IT admin and management staff likely to be privileged users. In West Virginia, state agencies were victims when malware infected the computers of users whose email addresses ended with @wv.gov. The next step is to gain access to a contractor’s privileged users’ workstations, and from there to breach the final target. In one retailer breach, the network credentials given to a heating, air conditioning and refrigeration contractor were stolen after hackers mounted a phishing attack and successfully lodged malware in the contractor’s systems, two months before they attacked the retailer, their ultimate target.

Good Practices, Good Security

Organizations can no longer assume that their enterprise is enforcing effective security standards; likewise, they cannot make the same assumption about their partners, vendors and clients, or anyone who has access to their networks. A Fortune 500 company has access to resources to acquire and manage security systems that a smaller vendor might not. So how can the enterprise protect itself while making the industry interconnections it needs to thrive?

Risk Assessments: When establishing a relationship with a vendor, partner, or client, consider vetting their security practices part of due diligence. Before network access is granted, the third party should be subject to a security appraisal that assesses where security gaps can occur (weak firewalls or security monitoring systems, lack of proper security controls). An inventory of the third party’s systems and applications, and its control of those, can help the enterprise develop an effective vendor management profile. Furthermore, it provides the enterprise with visibility into the information that will be shared and who has access to that information.

Controlled Access: Third-party access should be restricted and compartmentalized to only a segment of the network, with access to other assets prevented. Likewise, the organization can require that vendors and third parties use particular technologies for remote access, which enables the enterprise to catalog which connections are being made to the network.

Active Monitoring: Organizations should actively monitor network connections; SIEM software can help identify when remote access or other unauthorized software is installed, alert the organization when unauthorized connections are attempted, and establish baselines for “typical” versus unusual or suspicious user behaviors which can presage the beginning of a breach.

Ongoing Audits: Vendors given access to the network should be required to submit to periodic audits; this allows both the organization and the vendor to assess security strengths and weaknesses and ensure that the vendor is in compliance with the organization’s security policies.

What next?

Financial institutions often implicitly trust vendors. But just as good fences make good neighbors, vendor audits produce good relationships. Initial due diligence and enforcing sound security practices with third parties can eliminate or mitigate security failures. Routine vendor audits send the message that the entity is always monitoring the vendor to ensure that it is complying with IT security practices.

SIEM is Sunlight

Security Information and Event Management (SIEM) refers to technology that provides real-time analysis of security alerts generated by network hardware and applications. SIEM works by gathering, analyzing and presenting information from a variety of sources of such information across the enterprise network including network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data.

All compliance frameworks, including PCI-DSS, HIPAA, FISMA and NERC, call for the implementation and regular use of SIEM technology. The absence of regular use is noted as a major factor in post-mortem analyses of IT security incidents.

Why is this the case? It’s because SIEM, when implemented properly, gathers security data from all the nooks and crannies of the enterprise network. When this information is collated and presented well, an analyst is able to see what is happening, what happened and what is different.

It’s akin to letting in the sunlight to all corners and hidden places. You can see better, much better.

You can’t fix what you can’t see and don’t know. Knowledge of the goings-on in the various parts of the network, in real-time when possible, is the first step towards building a meaningful security defense.

Three key advantages for SIEM-As-A-Service

Security Information and Event Management (SIEM) technology is an essential component in a modern defense-in-depth strategy for IT security. SIEM is described as such in every best-practice recommendation from industry groups and security pundits. The absence of SIEM is repeatedly noted in the Verizon Data Breach Investigations Report as a factor in the late discovery of breaches. Indeed, attackers are most often successful against soft targets where defenders do not review log and other security data. In addition, all regulatory compliance standards, such as PCI-DSS, HIPAA and FISMA, specifically require that SIEM technology be deployed and, more importantly, used actively.

This last point (“be used actively”) is the Achilles’ heel of many organizations and has been noted often, as “security is something you do, not something you buy.” Organizations large and small struggle to assign staff with the necessary expertise and to maintain the discipline of periodic log review.

New SIEM-As-A-Service options

SIEM Simplified services are available for buyers that cannot leverage traditional on-premise, self-serve products. In such models, the vendor assumes responsibility for as much (or as little) of the heavy lifting as the user desires, including installation, configuration, tuning, periodic review, updates and responding to incident investigation or audit support requests.

Such offerings have three distinct advantages over the traditional self-serve, on-premise model.

1) Managed service delivery: The vendor is responsible for the most “fragile” and “difficult to get right” aspects of a SIEM deployment, that is, installation, configuration, tuning and periodic review of SIEM data. This can also include upgrades, performance management for speedy response, and updates to security threat intelligence feeds.
2) Deployment options: In addition to the traditional on-premise model, such services usually offer cloud-based, managed hosted or hybrid solutions. Options for host-based agents and/or premise-based collectors/sensors allow great flexibility in deployment.
3) Utility pricing: In contrast to traditional perpetual licensing, which requires front-loaded capital expenditure, SIEM-As-A-Service follows the utility model, with usage-based pricing and monthly payments that are friendly to operational expenditure budgets.

SIEM is a core technology in the modern IT Enterprise. New As-A-Service deployment models can increase adoption and value of this complex monitoring technology.

Top 5 Linux log file groups in /var/log

If you manage any Linux machines, it is essential that you know where the log files are located and what they contain. Such files are usually in /var/log. Logging is controlled by the associated configuration file (e.g., /etc/rsyslog.conf or /etc/syslog.conf, depending on the distribution).

Some log files are distribution-specific, and this directory can also contain logs from applications such as samba, apache, lighttpd and mail.

From a security perspective, here are 5 groups of files which are essential; a short parsing sketch follows the list. Many other files are generated and will be important for system administration and troubleshooting.

1. The main log file
a) /var/log/messages – Contains global system messages, including the messages that are logged during system startup. There are several things that are logged in /var/log/messages including mail, cron, daemon, kern, auth, etc.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used.
b) /var/log/lastlog – Displays the most recent login information for all users. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file, for example, “last -f /var/log/btmp | more”
d) /var/log/wtmp or /var/log/utmp – Contains login records. Using wtmp you can find out who is logged into the system. The who command uses this file to display the information.
e) /var/log/faillog – Contains user failed login attempts. Use the faillog command to display the contents of this file.
f) /var/log/secure – Contains information related to authentication and authorization privileges. For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information that is logged when a package is installed or removed using the dpkg command
b) /var/log/yum.log – Contains information that is logged when a package is installed using yum

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer and printing related log messages
c) /var/log/cron – Whenever cron daemon (or anacron) starts a cron job, it logs the information about the cron job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains log information from the mail server running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Windows system
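
As a quick illustration of putting group 2 to work, here is a Python sketch that counts failed SSH logins per user and source address. The “Failed password” message format is typical of sshd on Debian-style systems but varies by distribution, so treat the regex as a starting point rather than a guarantee.

import re
from collections import Counter

# Count failed SSH logins per (user, source IP) from /var/log/auth.log
# (Debian/Ubuntu) or /var/log/secure (RHEL/CentOS).
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(path="/var/log/auth.log"):
    counts = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            m = FAILED.search(line)
            if m:
                counts[m.groups()] += 1
    return counts

for (user, ip), n in failed_logins().most_common(10):
    print(f"{n:5d}  {user} from {ip}")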

Happy Logging!

Seven Habits of Highly Fraudulent Users

This post, Seven Habits of Highly Fraudulent Users, from Izzy at SiftScience, describes patterns culled from 6 million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.

Habit #1 Fraudsters go hungry

Whereas normal users’ activity dips at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might point to a script which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3 AM was found to be the most fraudulent hour of the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.
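
A toy version of this hunt in Python (the ISO timestamp format and the 08:00-18:00 business window are assumptions to adapt to your own data): bucket events by hour and flag anything outside the nominal window.

from collections import Counter
from datetime import datetime

BUSINESS_HOURS = range(8, 18)   # assumed business window

timestamps = ["2014-11-03T03:12:44", "2014-11-03T03:55:02", "2014-11-03T10:30:00"]
profile = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

for hour in sorted(profile):
    flag = "" if hour in BUSINESS_HOURS else "  <- after hours"
    print(f"{hour:02d}:00  {profile[hour]}{flag}")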

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to create multiple accounts on a laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud: a user with 6 accounts on her laptop is 15 times more likely to be fraudulent than the average person, while users with only 1 account are less likely to be fraudulent. SOC teams should look for multiple users on the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for much more than one user to access a PC in a given shift.
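
A sketch of that check (the (device, user) event pairs and the threshold of 3 are illustrative; tune the threshold against your own baseline, e.g., for shared nurses’ PCs):

from collections import defaultdict

THRESHOLD = 3

def multi_account_devices(events):
    users = defaultdict(set)
    for device_id, user in events:
        users[device_id].add(user)
    return {d: sorted(u) for d, u in users.items() if len(u) > THRESHOLD}

events = [("laptop-7", f"user{i}") for i in range(6)] + [("pc-2", "alice")]
print(multi_account_devices(events))  # only laptop-7 (6 accounts) is flagged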

Habit #5 Fraudsters use well known domains

The top 3 sources of fraud originate from Microsoft domains, including outlook.com, hotmail.com and live.com. Traffic from or to such domains is worth trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address. The more digits, the more likely that it’s fraud.
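
A toy version of this single feature (not a fraud model on its own; any real scoring would combine it with the other habits):

def digit_count(email: str) -> int:
    # Count digits in the local part of an email address.
    local_part = email.split("@", 1)[0]
    return sum(ch.isdigit() for ch in local_part)

for addr in ("jane.doe@example.com", "xk7842991@example.com"):
    print(addr, digit_count(addr))   # 0 vs. 7 digits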

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account is, the less likely (in general) it is to be involved in fraud. SOC teams should always look out for newly created accounts.

Good hunting.

EventTracker and Poodle

Summary:
• All systems and applications utilizing the Secure Socket Layer (SSL) 3.0 with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented above IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable, depending on the configuration of Apache.

1. Poodle Scan can be used to test whether your server is vulnerable.
• Below are the links relevant to this vulnerability:

http://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566
https://www.us-cert.gov/ncas/alerts/TA14-290A
http://www.dotnetnoob.com/2013/10/hardening-windows-server-20082012-and.html
http://support.microsoft.com/kb/187498

• If you are a subscriber to SIEM Simplified service, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.
• If you maintain EventTracker yourself, this document explains how you can update your installation to remove the SSL 3.0 vulnerability.

Details:
The SSL 3.0 vulnerability stems from the way blocks of data are encrypted under a specific type of encryption algorithm within the SSL protocol. The POODLE attack takes advantage of the protocol version negotiation feature built into SSL/TLS to force the use of SSL 3.0 and then leverages this new vulnerability to decrypt select content within the SSL session. The decryption is done byte by byte and will generate a large number of connections between the client and server.

While SSL 3.0 is an old encryption standard and has generally been replaced by Transport Layer Security (TLS) (which is not vulnerable in this way), most SSL/TLS implementations remain backwards compatible with SSL 3.0 in order to interoperate with legacy systems in the interest of a smooth user experience. Even if a client and server both support a version of TLS, the SSL/TLS protocol suite allows for protocol version negotiation (referred to as the “downgrade dance” in other reporting). The POODLE attack leverages the fact that when a secure connection attempt fails, servers will fall back to older protocols such as SSL 3.0. An attacker who can trigger a connection failure can then force the use of SSL 3.0 and attempt the new attack.

Solution:
• If you have installed EventTracker on Microsoft Windows Server and are maintaining it yourself, please download the Disable Weak Ciphers file to the server running EventTracker. Extract and save DisableWeakCiphers.bat; run this file as Administrator. This file executes the following commands:

REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" /v Enabled /t REG_DWORD /d 0 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
REG.EXE ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f

EventTracker Search Performance

EventTracker 7.6 is a complex software application and, while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (called the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. When a search is performed across these indexed, compressed archives, the speed of results depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
Dense – at least one result per thousand (1,000) events
Sparse – at least one result per million (1,000,000) events
Rare – at least one result per billion (1,000,000,000) events
Needle in a haystack – one event in more than a billion events
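
To make the thresholds concrete, here is a small Python sketch that classifies a search by its results-per-event ratio; the boundaries simply mirror the definitions above.

def classify(results: int, events_scanned: int) -> str:
    ratio = results / events_scanned
    if ratio >= 1 / 1_000:
        return "dense"
    if ratio >= 1 / 1_000_000:
        return "sparse"
    if ratio >= 1 / 1_000_000_000:
        return "rare"
    return "needle in a haystack"

print(classify(5_000, 1_000_000))       # dense
print(classify(10, 5_000_000))          # sparse
print(classify(1, 2_000_000_000))       # needle in a haystack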

Based on the provided search criteria, EventTracker consults indexing metadata to determine whether, and in which, archives contain events matching the search terms. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is plenty of raw data to decompress. For the fastest possible response on default hardware, EventTracker limits returned results to the first 200 (sorted by time, with newest on top). This setting can of course be overridden, but is provided because it satisfies the most common use case.

As the events containing the search term drop to one in a hundred thousand (100,000), performance becomes more I/O bound. The reason is that there is less and less matching data, but more and more index files have to be consulted.

I/O performance is measured as latency, the time delay from when a disk I/O request is created until the request is completed by the underlying hardware. Windows perfmon can measure Avg. Disk sec/Transfer. A rule of thumb is to keep this below 25 milliseconds for best I/O performance.

This can be realized in various ways:
– Having different drives (spindles) for the OS/program and the archives
– Using faster disk (15K RPM performs better than 7200 RPM disks)
– Using a SAN

In larger installations with multiple Virtual Collection Points (VCP), dedicating a separate disk spindle to each VCP can help.

The Data Scientist Unicorn

An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. The role requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge” is not practical for most organizations. Instead, how about an effort to deliver tangible, data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM you ask?

Well, for the typical Small/Medium Enterprise (SME), this is a familiar dilemma. Data, data everywhere and not a drop (of intelligence) to drink. Either a “data scientist” is not on the employee roster, or the one who is has no time available. How then do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to mine the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean that doing nothing is a valid option.

EventTracker and Shellshock

What are your thoughts on Shellshock? EventTracker CEO A.N. Ananth weighs in.

Summary:

  • Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the Linux/Unix Bash shell.
  • EventTracker v6.x and v7.x are NOT vulnerable to Shellshock, as these products are based on the Microsoft Windows platform.
  • ETIDS and ETVAS which are offered as options of the SIEM Simplified service, are vulnerable to Shellshock, as these solutions are based on CentOS v6.5. Below are the links relevant to this vulnerability.
  • If you subscribe to ETVAS and/or ETIDS, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.

Details:

Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many internet daemons. It is a program used by various Unix-based systems to execute command scripts and command lines, and it is often installed as the system’s default command-line interface.

Notes:

  • Environment variables (each running program having its own list of name/value pairs) occur in Unix-based and other operating systems that Bash supports. When one program starts another, it provides an initial list of environment variables to the new program. Apart from this, Bash also maintains an internal list of functions, named scripts that can be executed from within.
  • The vulnerability allows an attacker to gain unauthorized access to a computer system: by executing Bash with a crafted value in its environment variable list, vulnerable versions of Bash can be made to run arbitrary commands, which may allow remote code execution.
  • Scrutiny of the Bash source code history reveals that the vulnerability has been present, concealed, since approximately version 1.13 (1992). The lack of comprehensive change logs does not allow the maintainers of the Bash source code to pinpoint the exact time the vulnerability was introduced.

We don’t need no stinkin Connectors

#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin’ badges,” which has been used often since (e.g., in Blazing Saddles). The equivalent in the log management universe is the “Connector.” We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.

These questions stem from a model used by programs such as ArcSight, which depend on Early Binding. In an earlier era of computing, Early Binding was needed because the compiler could not otherwise create an entry in the virtual method table for the procedure being compiled. It has the advantage of being efficient, an important consideration when CPU and memory are in very short supply, as they were years ago.

Just-in-time languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative connotations in terms of performance, but that hasn’t been true for at least 20 years.

Early Binding requires a fixed schema to be mandated for all possible entries and input to be “normalized” to this schema. The benefit of the fixed plan is efficiency in output, since the data is already normalized. While that may make sense for compilers, whose input is in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that do not adopt any standardization at all. The downside of such an approach is that a “Connector” is required to normalize each new log source to the schema. Another consideration is that outputs can vary greatly depending on usage; there are many possible uses for the data, limited only by the user’s imagination, whereas the Early Binding model is designed with fixed outputs in mind. These disadvantages limit such designs.

In contrast, EventTracker uses Late Binding: the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time, and the desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law coming to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
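
To illustrate the difference, here is a sketch of late binding in general Python terms (not EventTracker’s actual internals): raw lines are stored as-is at ingest time, and field meaning is assigned at search time by whatever pattern the user supplies, so a never-before-seen format needs no Connector.

import re

raw_logs = [
    "2014-10-21 03:11:09 sshd[412]: Failed password for root from 10.0.0.9",
    "new-appliance | DENY | src=192.168.1.5 dst=10.0.0.2",  # format never seen before
]

def late_bound_search(lines, pattern):
    # The output schema is whatever the query's named groups define.
    rx = re.compile(pattern)
    return [m.groupdict() for line in lines if (m := rx.search(line))]

print(late_bound_search(raw_logs, r"from (?P<src_ip>\d+\.\d+\.\d+\.\d+)"))
print(late_bound_search(raw_logs, r"src=(?P<src>\S+) dst=(?P<dst>\S+)"))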

Spray & Pray or 80/20

If you spend any time at all looking at log data from any server that is accessible from the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available and most commonly include port scans, login attempts using default username/password pairs, and the web server attacks described by OWASP.

How can this possibly be, given the sheer number of machines that are visible on the Internet? Don’t these guys have anything better to do?

The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize the return on their effort, computing capacity and network bandwidth. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s Risk vs. Reward all the way.

The automated (first) wave of these attacks identifies vulnerable machines and establishes presence. Following waves may be staffed, depending on the location and identity of the victim and thus the potential value to be gained by a greater investment of (scarce) expertise by the attacker.

Such attacks can be deterred quite simply by using secure (non-default) configurations, patching systems and deploying basic security defenses such as firewalls and anti-virus. This explains the repeated exhortations of security pundits on “best practice,” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.

The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost, so as to at least identify soft high-value targets and at best steal from them. Defenders are trying to deter 80% of attackers at 20% of the cost by using basic best practices.

Guidance such as the SANS Critical Controls or lessons from Verizon’s annual data breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect the logs which would expose the attackers’ actions, and certainly never actually look at the logs.

Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.

Hackers: What they are looking for and the abnormal activities you should be evaluating

Most hackers are after critical data, and credential theft is how they get to it. A credential theft attack is one in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: an account with access to highly sensitive data and systems that many others within the organization do not have.

It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:
• Logging on to unsecured computers with privileged accounts
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers.

There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.

These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Pass-the-hash (PtH) and other credential theft attacks are ubiquitous today because freely available tooling makes it simple and easy to extract the credentials of other privileged accounts once an attacker has gained Administrator- or SYSTEM-level access to a computer.

Even without such tooling, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots and clipboard contents. An attacker with privileged access can also disable anti-malware software, install rootkits, modify protected files, or install malware that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce not only the attack surface of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.

Practical ways to analyze login and pre-authentication failures

Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:

1) Learn and know how to identify login events and their descriptions. A great resource to find event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

2) Identify and look into the event description. To analyze events efficiently and effectively, you must analyze the event description. Within a login failure description, paying attention to details like the failure reason, user name, logon type, workstation name and source network address is critical to your investigation and analysis. By knowing what to pay attention to in the description, you will easily eliminate the noise.
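
As a sketch of step 2 in Python, the fields named above can be pulled out of a Windows logon-failure event description programmatically. The field labels follow the typical text of such events, but wording varies across Windows versions, so treat these patterns as a starting point.

import re

FIELDS = {
    "failure_reason":  r"Failure Reason:\s*(.+)",
    "user_name":       r"Account Name:\s*(\S+)",
    "logon_type":      r"Logon Type:\s*(\d+)",
    "workstation":     r"Workstation Name:\s*(\S+)",
    "source_address":  r"Source Network Address:\s*(\S+)",
}

def parse_description(text: str) -> dict:
    # Extract whichever of the named fields appear in the description.
    return {
        name: m.group(1).strip()
        for name, pattern in FIELDS.items()
        if (m := re.search(pattern, text))
    }

sample = """An account failed to log on.
Logon Type: 3
Account Name: jsmith
Workstation Name: WS042
Source Network Address: 10.1.2.3
Failure Reason: Unknown user name or bad password."""
print(parse_description(sample))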

A system like EventTracker displays the required fields, eliminating the noise and showing you the error results immediately. EventTracker will also provide a summary of the total number of events for each failure type and user name, automating the extraction of your systems’ critical information.

Using an IDS will help your enterprise run more efficiently and effectively, with the analysis of traditional reports for the hundreds of events that happen every day. Doing this without the help of a management and monitoring tool is nearly impossible.

Please reference here for detailed charts.

Simplify SIEM with Services

To support security, compliance and operational requirements, specific and fast answers to the 4 W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed, single-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility and the attendant effectiveness are made possible by centralizing the collection, analysis and storage of log and other security data from sources throughout the enterprise network.

To obtain value from your SIEM solution, it must be watered and fed. This is an eternal commitment, whether your team chooses to do it yourself or to get someone to do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.

“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organizations will continue to own their SIEM tools.” –Anton Chuvakin, Gartner analyst

Known knowns, Unknown unknowns

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. ”
–Donald Rumsfeld, Secretary of Defense

In the SIEM world, the known knowns are alerts. We configure rules to examine security data for threats or problems that we find interesting and bring them to the operator’s attention. This is a huge step up the SIEM maturity scale from log ignorance. The Department of Homeland Security refers to this as “If you see something, say something.” What do you do when you see something? You “do something,” better known as alert-driven workflow. In the early stages of a SIEM implementation, a lot of time is spent refining alert definitions in order to reduce “noise.”

While this approach addresses the “known knowns,” it does nothing for the “unknown unknowns.” To identify the unknown, you must stop waiting for alerts and instead search for insights. This approach starts with a question rather than a reaction to an alert. Notice that, often enough, it’s non-IT persons asking the questions, e.g., Who changed this file? Which systems did “Susan” access on Saturday?

This approach results in interactive investigation rather than the traditional drill down. For example:
– Show me all successful logins over the weekend
– Filter these to show only those on server3
– Why did “Susan” log in here? Show all “Susan” activity over the weekend…
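
The same investigation, expressed as successive Python filters over parsed logon events (the field names are hypothetical; the point is that each question narrows the working set rather than reacting to a predefined alert):

from datetime import datetime

events = [
    {"user": "Susan", "host": "server3", "time": "2014-11-01T22:14:00", "success": True},
    {"user": "Bob",   "host": "server1", "time": "2014-10-31T09:05:00", "success": True},
    {"user": "Susan", "host": "server3", "time": "2014-11-02T02:41:00", "success": True},
]

def on_weekend(e):
    return datetime.fromisoformat(e["time"]).weekday() >= 5  # Sat=5, Sun=6

weekend_logins = [e for e in events if e["success"] and on_weekend(e)]
on_server3     = [e for e in weekend_logins if e["host"] == "server3"]
susan_activity = [e for e in weekend_logins if e["user"] == "Susan"]
print(on_server3)
print(susan_activity)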

This form of active data exploration requires a certain degree of expertise with log management tools, along with the experience and knowledge of the data set to spot a thread that looks out of place. Once you get used to the idea, it is incredible how visible these patterns become. This is essential to “running a tight ship”: being aware of out-of-the-ordinary patterns given the baseline. When staffing the EventTracker SIEM Simplified service team, we constantly look for “insight hunters” instead of mere “alert responders.” Alert responding is so 2013…

Top 5 bad assumptions about SIEM

The cliché goes, “When you assume, you make an ass out of u and me.” When implementing a SIEM solution, these five assumptions have the potential to get us in trouble. They stand in the way of organizational and personal success and thus are best avoided.

5. Security by obscurity or my network is too unimportant to be attacked
Small businesses tend to be more innovative and cost-conscious, but is there such a thing as too small for hackers to care? In this blog post we outlined why this is almost never the case. As the Verizon Data Breach Investigations Report shows year in and year out, companies with 11-100 employees from 36 countries had the maximum number of breaches.

4. I’ve got to do it myself to get it right
Charles de Gaulle observed, on humility, “The graveyards are full of indispensable men.” Everyone tries to demonstrate multifaceted skill, but it’s neither effective nor efficient. Corporations outsource to specialists all the time; Tom Friedman explains it in “The World is Flat.”

3. Compliance = Security
This is only true if your auditor is your only threat actor. We tend to fear the known more than the unknown, so it is often the case that we fear the (known) auditor more than we fear the (unknown) attacker. Among the myriad lessons from the Target breach, perhaps the most important is that “Compliance” does NOT equal Security.

2. All I have to do is plug it in; the rest happens by magic
The marketing department of every security vendor would have you believe this of their magic appliance or software. When has this ever been true? Self-propelling lawn mower, anyone?

1. It’s all about buying the most expen$ive technology
Kivas Fajo, in “The Most Toys,” the 70th episode of Star Trek: TNG, believed this. If you negotiated a 90% discount on a $200K solution and then parked it as shelfware, what did you get? Wasted $20K is what. It’s always about using what you have.

Bad assumptions = bad decisions.
Always true.

Security is not something you buy, but something you do

The three sides of the security triangle are People, Processes and Technology.

  1. People – the key issues are: who owns the process, who is involved, what are their roles, are they committed to improving it and working together, and, more importantly, are they prepared to do the work to fix the problem?
  2. Process – can be defined as a trigger event which creates a chain of actions resulting in something being prepared for a customer of that process.
  3. Technology – now that people are aligned, and the process developed and clarified, technology can be applied to ensure consistency in the process application and to provide the thin guiding rails to keep the process on track, making it easier to follow the process than not.

None of this is particularly new to CIOs and CSOs, yet how often have you seen six- or seven-figure “investments” sitting on datacenter racks, or sometimes on actual storage shelves, unused or heavily underused? Organizations throw away massive amounts of money, then complain about a “lack of security funds” and “being insecure.” Buying security technologies is far too often an easier task than utilizing and “operationalizing” them. SIEM technology suffers from this problem, as do many other monitoring technologies.

Compliance and the “checkbox mentality” make this problem worse, as people read the mandates and only pay attention to the sections that refer to buying boxes.

Despite all this rhetoric, many managers equate information security with technology, completely ignoring the proper order. In reality, a skilled engineer with a so-so tool but a good process is more valuable than an untrained person equipped with the best of tools.

As Gartner analyst Anton Chuvakin notes, “…if you got a $200,000 security appliance for $20,000 (i.e. at a steep 90% discount), but never used it, you didn’t save $180k – you only wasted $20,000!”

Security is not something you BUY, but something you DO.

IP Address is not a person

As we perform forensic reviews of log data, our SIEM Simplified team is called upon to piece together a trail showing the four W’s: Who, What, When and Where. Logs can be your friend; if collected, centralized and indexed, they can get you answers very quickly.

There is a catch though. The “Where” question is usually answered by supplying either a system name or an IP address which, at the time in question, was associated with that system name.

Is that good enough for the law? That is, will the legal system accept that you are your IP address?

Florida District Court Judge Ursula Ungaro says no.

Judge Ungaro was presented with a case brought by Malibu Media, who accused the IP address “174.61.81.171” of sharing one of their films using BitTorrent without their permission. The Judge, however, was reluctant to issue a subpoena and asked the company to explain how they could identify the actual infringer.

Responding to this order to show cause, Malibu Media gave an overview of their data gathering techniques. Among other things they explained that geo-location software was used to pinpoint the right location, and how they made sure that it was a residential address, and not a public hotspot.

Judge Ungaro welcomed the additional details, but saw nothing that actually proves that the account holder is the person who downloaded the file.

“Plaintiff has shown that the geolocation software can provide a location for an infringing IP address; however, Plaintiff has not shown how this geolocation software can establish the identity of the Defendant,” Ungaro wrote in an order last week.

“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” she adds.

As a side note, on April 26, 2012, Judge Ungaro ruled that an order issued by Florida Governor Rick Scott to randomly drug test 80,000 Florida state workers was unconstitutional. Ungaro found that Scott had not demonstrated that there was a compelling reason for the tests and that, as a result, they were an unreasonable search in violation of the Constitution.

Three trends in Enterprise Networks

There are three trends in Enterprise Networks:

1) Internet of Things Made Real. We’re all familiar with the challenge of big data: how the volume, velocity and variety of data is overwhelming. Studies confirm the conclusion many of you have reached on your own: there’s more data crossing the internet every second than existed on the internet in total 20 years ago. And now, as customers deploy more sensors and devices in every part of their business, the data explosion is just beginning. This concept, called the “Internet of Things,” is a hot topic. Many businesses are uncovering efficiencies based on how connected devices drive decisions with more precision in their organizations.

2) “Reverse BYOD.” Most of us have seen firsthand how a mobile workplace can blur the line between our personal and professional lives. Today’s road warrior isn’t tethered to a PC in a traditional office setting. They move between multiple devices throughout their workdays with the expectation that they’ll be able to access their settings, data and applications. Forrester estimates that nearly 80 percent of workers spend at least some portion of their time working out of the office and that 29 percent of the global workforce can be characterized as “anywhere, anytime” information workers. This trend was called “bring your own device,” or “BYOD.” But now we’re seeing the reverse: business-ready, secure devices are getting so good that organizations are centrally deploying mobility solutions that are equally effective at work and play.

3) Creating New Business Models with the Cloud. The conversation around cloud computing has moved from “if” to “when.” Initially driven by the need to reduce costs, many enterprises saw cloud computing as a way to move non-critical workloads such as messaging and storage to a more cost-efficient, cloud-based model. However, the larger benefit comes from customers who identify and grow new revenue models enabled by the cloud. The cloud provides a unique and sustainable way to enable business value, innovation and competitive differentiation, all of which are critical in a global marketplace that demands more mobility, flexibility, agility and better quality across the enterprise.

The 5 stages of SIEM Implementation

Are you familiar with the Kübler-Ross 5 Stages of Grief model?

SIEM implementations (and indeed most enterprise software installations) bear a striking resemblance.

  • Stage One: Denial – The frustration that new users feel learning the terminology and delivering on the “asks” of the implementation makes them question the time investment.
  • Stage Two: Despair – The self-doubt that most implementation teams feel in delivering on the promises of a complex security technology with many moving parts.
  • Stage Three: Hopeful Performance – Learning, and even mastering, the SIEM solution, the team builds confidence in its ability to be recognized for competence and potential.
  • Stage Four: Soaring Execution – The exalted status of a “go-to” team member, connected at the hip to the vendor support team or service provider, earning accolades from management. The team member has delivered value to the organization and is reaping rewards for the business. Personal relationships with vendor or service reps are genuine and mutually beneficial.
  • Stage Five: Devolution/Plateau – Complacency, through lack of vision or agility in embracing the next big thing, drags down the relationship. Other partners, hungrier for the customer’s attention, take over the mindshare once enjoyed.

How much security is enough?

Ask a pragmatic CISO about achieving a state of complete organizational security and you’ll quickly be told that this is an unrealistic and financially imprudent goal. So then, how much security is enough?

More than merely complying with regulations or implementing “best practice,” think in terms of optimizing the outcome of the security investment. Never mind the theoretical state of absolute security; think instead of determining and managing the risk to critical business processes and assets.

Risk appetite is defined by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite influences the entity’s culture, operating style, strategies, resource allocation, and infrastructure. Risk appetite is not a constant; it is influenced by and must adapt to changes in the environment. Risk tolerance could be defined as the residual risk the organization is willing to accept after implementing risk-mitigation and monitoring processes and controls. One way to implement this is to define levels of residual risk and therefore the levels of security that are “enough.”

The basic level of security is the diligent one, the staple of every business network: the organization is able to deal with known threats. The hardened level adds the ability to be proactive (with vulnerability scanning), to be compliant, and to perform forensic analysis. At the advanced level, predictive capabilities are introduced and the organization develops the ability to deal with unknown threats.

If it all sounds a bit overwhelming, take heart; managed security services can relieve your team of the heavy lifting that is a staple of IT Security.

Bottom line – determine your risk appetite to determine how much security is enough.

Top 6 uses for SIEM

Security Information and Event Management (SIEM) is a term coined by Gartner in 2005 to describe technology used to monitor and help manage user and service privileges, directory services and other system configuration changes; as well as providing log auditing and review and incident response.

The core capabilities of SIEM technology are the broad scope of event collection and the ability to correlate and analyze events across disparate information sources. Simply put, SIEM technology collects log and security data from computers, network devices and applications on the network to enable alerting, archiving and reporting.

Once log and security data has been received, you can:

  • Discover external and internal threats

Logs from firewalls and IDS/IPS sensors are useful to uncover external threats; logs from e-mail and proxy servers can help detect phishing attacks; and logs from badge and thumbprint scanners can be used to detect physical access.

  • Monitor the activities of privileged users

Computer, network device and application logs are used to develop a trail of activity across the network by any user, but especially by users with high privileges.

  • Monitor server and database resource access

Most enterprises keep critical data repositories in files, folders and databases, and these are attractive targets for attackers. Monitoring all server and database resource access improves security.

  • Monitor, correlate and analyze user activity across multiple systems and applications

With all logs and security data in one place, an especially useful benefit is the ability to correlate user activity across the network (see the sketch after this list).

  • Provide compliance reporting

Compliance is often the source of funding for SIEM. When properly set up, a SIEM can reduce auditor on-site time by up to 90%; more importantly, compliance honors the spirit of the law rather than being merely a check-the-box exercise.

  • Provide analytics and workflow to support incident response

Answer the who, what, when and where questions. Such questions are at the heart of forensic activity and critical to drawing valuable lessons.
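
As promised above, here is a minimal sketch of cross-source correlation: flagging a user who fails logins on several different systems within a short window. The event tuples are illustrative stand-ins for parsed log records:

# Flag a user with failed logins on >= 3 distinct systems within 10 minutes.
from collections import defaultdict
from datetime import datetime, timedelta

events = [  # (timestamp, source, user, outcome) -- illustrative data
    (datetime(2014, 1, 2, 9, 0), "vpn",     "alice", "fail"),
    (datetime(2014, 1, 2, 9, 2), "windows", "alice", "fail"),
    (datetime(2014, 1, 2, 9, 3), "linux",   "alice", "fail"),
]

WINDOW, THRESHOLD = timedelta(minutes=10), 3
recent = defaultdict(list)   # user -> [(time, source), ...] within WINDOW

for ts, source, user, outcome in sorted(events):
    if outcome != "fail":
        continue
    recent[user] = [(t, s) for t, s in recent[user] if ts - t <= WINDOW]
    recent[user].append((ts, source))
    if len({s for _, s in recent[user]}) >= THRESHOLD:
        print(f"{user}: failed logins on {THRESHOLD}+ systems within {WINDOW}")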

SIEM technology is routinely cited as a basic best practice by regulatory standards, and its absence shows up regularly as a glaring weakness in data breach post-mortems.

Want the benefit but not the hassle? Consider SIEM Simplified, our service where we do the disciplined blocking and tackling which forms the core of any security or compliance regime.

TMI, Too Little Analysis

The typical SIEM implementation suffers from TMI, TLA (Too Much Information, Too Little Analysis). And if any organization that has recently been in the news knows this, it is the National Security Agency (NSA). The Wall Street Journal carried a story quoting William Binney, who rose through the ranks at the NSA over a 30-year career before retiring in 2001. “The NSA knows so much it cannot understand what it has,” Binney said. “What they are doing is making themselves dysfunctional by taking all this data.”

Most SIEM implementations start from this premise: open the floodgates and gather everything, because we are not sure what we are specifically looking for and, more importantly, the auditors don’t help and the regulations are vague and poorly worded.

Lt. Gen. Clarence E. McKnight, former head of the Signal Corps, opined that “The issue is a straightforward one of simple ability to manage data effectively in order to provide our leaders with actionable information. Too much raw data compromises that ability. That is all there is to it.”

A presidential panel recently recommended the NSA shut down its bulk collection of telephone call records of all Americans. It also recommended creation of “smart software” to sort data as it is collected, rather than accumulate vast troves of information for sorting out later. The reality is that the collection becomes an end in itself, and the sorting out never gets done.

The NSA may be a large, powerful bureaucracy, intrinsically resistant to change, but how about your organization? If you are seeking a way to get real value out of SIEM data, consider co-sourcing that problem to a team that does that for a living. SIEM Simplified was created for just that purpose. Switch from TMI, TLA (Too Much Information, Too Little Analysis) to JEI, JEA (Just Enough Information, Just Enough Analysis).

EventTracker and Heartbleed

Summary:

The usage of OpenSSL in EventTracker v7.5 is NOT vulnerable to Heartbleed.

Details:

A lot of attention has focused on CVE-2014-0160, the Heartbleed vulnerability in OpenSSL. According to http://heartbleed.com, OpenSSL 0.9.8 is NOT vulnerable.

The EventTracker Windows Agent uses OpenSSL indirectly if the following options are enabled and used:

1) Send Windows events as syslog messages AND use the FTP server option to transfer non-real-time events to an FTP server. To support this mode of operation, WinSCP.exe v4.2.9 is distributed as part of the EventTracker Windows Agent. This version of WinSCP.exe is compiled with OpenSSL 0.9.8, as documented in http://winscp.net/eng/docs/history_old (v4.2.6 onwards). Accordingly, the EventTracker Windows Agent is NOT vulnerable.

2) Configuration Assessment (SCAP). This optional feature uses ovaldi.exe v5.8 Build 2, which in turn includes OpenLDAP v2.3.27, as documented in the OVALDI-README distributed with the EventTracker install package. This version of OpenLDAP uses OpenSSL v0.9.8c, which is NOT vulnerable.

Notes:

  • The EventTracker Agent uses Microsoft secure channel (Schannel) for transferring syslog over SSL/TLS. This package is NOT vulnerable, as noted here.
  • We recommend that all customers who may be vulnerable follow the guidance from their software distribution provider. For more information and corrective action guidance, please see the information from US-CERT here.
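
If you want to check which OpenSSL your own tooling links against, here is a minimal sketch using Python’s ssl module. Note that, as the WinSCP example above shows, an individual application may bundle its own copy of OpenSSL, which a check like this will not see:

# CVE-2014-0160 affects OpenSSL 1.0.1 through 1.0.1f; the 0.9.8 and
# 1.0.0 branches are not affected, and 1.0.1g is fixed.
import ssl

def heartbleed_vulnerable(info=ssl.OPENSSL_VERSION_INFO):
    major, minor, fix, patch = info[:4]   # patch: 0='', 1='a', ... 7='g'
    return (major, minor, fix) == (1, 0, 1) and patch < 7

print(ssl.OPENSSL_VERSION, "vulnerable:", heartbleed_vulnerable())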

Top 5 reasons IT Admins love logs

Top 5 reasons IT Admins love logs:

1) Answer the ‘W’ questions

Who, what, where and when; critical files, logins, USB inserts, downloads…see it all

2) Cut ’em off at the pass, ke-mo sah-bee

Get an early warning of the train jumping the tracks. It’s what IT Admins do.

3) Demonstrate compliance

Don’t even try to demonstrate compliance until you get a log management solution in place. Reduce on-site auditor time by 90%.

4) Get a life

Want to go home on time and enjoy the weekend? How about getting proactive instead of reactive?

5) Logs tell you what users don’t

“It wasn’t me. I didn’t do it.” Have you heard this before? Logs don’t lie.

Top 5 reasons Sys Admins hate logs

Top 5 Reasons Sys Admins hate logs:

1) Logs multiply – the volume problem

A single server easily generates 250,000 log entries every day, even when operating normally; that is roughly three events per second, around the clock. How many servers do you have? Plus there are workstations and applications, not to mention network devices.
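
To see how quickly the volume adds up, here is a back-of-the-envelope sizing sketch; the inventory counts and the per-workstation rate are illustrative assumptions, so substitute your own:

# Rough daily volume and sustained events-per-second for a modest fleet.
servers, workstations = 100, 1000               # hypothetical inventory
per_server, per_workstation = 250_000, 25_000   # log entries per day (assumed)

daily = servers * per_server + workstations * per_workstation
print(f"{daily:,} events/day, ~{daily // 86_400:,} sustained events/sec")
# -> 50,000,000 events/day, ~578 sustained events/sec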

2) Log obscurity – what does it mean?

Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800

Do what now? Go where? ‘Nuff said.
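
To be fair, a few lines of code (or any SIEM worth its salt) can at least normalize such a record into named fields. Here is a minimal sketch using the audit line above; the regular expression is illustrative and handles only this record’s shape:

# Parse an auditd-style record into key=value pairs. The msg value contains
# a space inside parentheses, so plain whitespace splitting would mangle it.
import re

record = ("type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 "
          "syscall=open success=yes exit=3 a0=80e3f08 a1=18800")

pairs = dict(re.findall(r'(\w+)=(\w+\([^)]*\)|\S+)', record))
print(pairs["syscall"], pairs["success"])   # -> open yes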

3) Real hackers don’t get logged

If the purpose of your logging is, for example, to review logs to “identify and proactively address unauthorized access to cardholder data” for PCI-DSS, how do you know what you don’t know? Logs only record what the instrumentation catches, and a skilled attacker works hard to stay out of them.

4) How can I tell you logged in? Let me count the ways

This is a simple question with a complex answer: it depends on where you logged in. Linux? Solaris? Cisco? Windows 2003? Windows 2008? An application? VMware? Amazon EC2?
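
To make the point concrete, here is a small and deliberately incomplete map of “successful logon” indicators by platform. These are the commonly documented ones, but verify against your own platform documentation:

# Where to look for a successful logon (illustrative subset, not exhaustive).
LOGON_SIGNATURES = {
    "Windows 2003":  "Security log, event ID 528 (interactive) / 540 (network)",
    "Windows 2008+": "Security log, event ID 4624",
    "Linux (sshd)":  'syslog/auth.log: "Accepted password for <user>"',
    "Cisco IOS":     'syslog: "%SEC_LOGIN-5-LOGIN_SUCCESS"',
}

for platform, where in LOGON_SIGNATURES.items():
    print(f"{platform:<14} {where}")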

5) Compliance forced down your throat, but no specific guidance

Have you ever been in the rainforest with no map, creepy crawlies everywhere, low on supplies and a day’s trek to the nearest settlement? That’s how IT guys feel when management drops a 100+ page compliance standard on their desk.

Big Data: Lessons from the 2012 election

The US Presidential election of 2012 confounded many pundits. The Republican candidate, Gov. Mitt Romney, put together a strong campaign, and polls leading into the final week suggested a close race. The final results were not so close, and Barack Obama handily won a second term.

Antony Young explains how the Obama campaign used big data, analytics and micro-targeting to mobilize key voter blocs, giving Obama the numbers needed to push him over the top.

“The Obama camp, in preparing for this election, established a huge Analytics group that comprised of behavioral scientists, data technologists and mathematicians. They worked tirelessly to gather and interpret data to inform every part of the campaign. They built up a voter file that included voter history, demographic profiles, but also collected numerous other data points around interests … for example, did they give to charitable organizations or which magazines did they read to help them better understand who they were and better identify the group of ‘persuadables’ to target.”

“That data was able to be drilled down to zip codes, individual households and in many cases individuals within those households.”

“However it is how they deployed this data in activating their campaign that translated the insight they garnered into killer tactics for the Obama campaign.

“Volunteers canvassing door to door or calling constituents were able to access these profiles via an app accessed on an iPad, iPhone or Android mobile device to provide an instant transcript to help them steer their conversations. They were also able to input new data from their conversation back into the database real time.

“The profiles informed their direct and email fundraising efforts. They used issues such as Obama’s support for gay marriage or Romney’s missteps in his portrayal of women to directly target more liberal and professional women on their database, with messages that “Obama is for women,” using that opportunity to solicit contributions to his campaign.

“Marketers need to take heed of how the Obama campaign transformed their marketing approach centered around data. They demonstrated incredible discipline to capture data across multiple sources and then to inform every element of the marketing – direct to consumer, on the ground efforts, unpaid and paid media. Their ability to dissect potential prospects into narrow segments or even at an individual level and develop specific relevant messaging created highly persuasive communications. And finally their approach to tap their committed fans was hugely powerful. The Obama campaign provides a compelling case for companies to build their marketing expertise around big data and micro-targeting. How ready is your organization to do the same?”

Old dogs, new tricks

Doris Lessing passed away at the end of last year. The freewheeling Nobel Prize-winning writer on racism, colonialism, feminism and communism, who died November 17 at the age of 94, was prolific for most of her life. But five years before her death, she said the writing had dried up. “Don’t imagine you’ll have it forever,” she said, according to one obituary. “Use it while you’ve got it because it’ll go; it’s sliding away like water down a plug hole.”

In the very fast-changing world of IT, it is common to feel like an old fogey. Everything, from hardware specs to programming languages to user interfaces, changes at bewildering speed. We hear of wunderkinds whose innovations transform our very culture; think Mozart or Zuckerberg, to name two.

Tara Bahrampour examined the idea and quoted author Mark Walton: “What’s really interesting from the neuroscience point of view is that we are hard-wired for creativity for as long as we stay at it, as long as nothing bad happens to our brain.”

The field also matters.

Howard Gardner, professor of cognition and education at the Harvard Graduate School of Education, says, “Large creative breakthroughs are more likely to occur with younger scientists and mathematicians, and with lyric poets, than with individuals who create longer forms.”

In fields like law, psychoanalysis and perhaps history and philosophy, on the other hand, “you need a much longer lead time, and so your best work is likely to occur in the latter years. You should start when you are young, but there is no reason whatsoever to assume that you will stop being creative just because you have grey hair,” Gardner said.

Old dogs, take heart: you can learn new tricks as long as you stay open to new ideas.

Fail How To: Top 3 SIEM implementation mistakes

Over the years, we have had a chance to witness a large number of SIEM implementations, with results ranging from superb to colossal failure. What do the failures have in common? This blog by Keith Strier nails it:

1) Design Democracy: Find all internal stakeholders and grant all of them veto power. The result is inevitably a mediocre mess. The collective wisdom of the masses is not the best thing here; a super-empowered individual is usually found at the center of the successful implementation. If multiple stakeholders are involved, this person builds consensus, but nobody else has veto power.

2) Ignore the little things: A great implementation is a set of micro-experiences that add up to make the whole. Think of the Apple iPhone: every detail, from the shape, size and appearance to every icon, gesture and feature, converges to enhance the user experience. The path to failure is to focus on the big picture alone, ignore the little things from authentication to navigation, and just launch to meet the deadline.

3) Avoid Passion: View the implementation as non-strategic overhead; implement and deploy without passion. The result? At best, requirements are fulfilled, but users are unlikely to be empowered. Milestones may be met, but business sponsors still complain. Prioritizing deadlines, linking IT staff bonuses to delivery metrics, and squashing creativity is a sure way to launch technology failures that crush morale.

Digital detox: Learning from Luke Skywalker

For any working professional in 2013, multiple screens, devices and apps are integral instruments for success. The multitasking can be overwhelming, and dependence on gadgets and Internet connectivity can become a full-blown addiction.

There are digital detox facilities for those whose careers and relationships have been ruined by extreme gadget use. Shambhalah Ranch in Northern California has a three-day retreat for people who feel addicted to their gadgets. For 72 hours, the participants eat vegan food, practice yoga, swim in a nearby creek, take long walks in the woods, and keep a journal about being offline. Participants have one thing in common: they’re driven to distraction by the Internet.

Is this you? Checking e-mail in the bathroom and sleeping with your cell phone by your bed are now considered normal. According to the Pew Research Center, in 2007 only 58 percent of people used their phones to text; last year it was 80 percent. More than half of all cell phone users have smartphones, giving them Internet access all the time. As a result, the number of hours Americans spend collectively online has almost doubled since 2010, according to ComScore, a digital analytics company.

Teens and twentysomethings are the most wired. In 2011, Diana Rehling and Wendy Bjorklund, communications professors at St. Cloud State University in Minnesota, surveyed their undergraduates and found that the average college student checks Facebook 20 times an hour.

So what can Luke Skywalker teach you? Shane O’Neill says it well:

“The climactic Death Star battle scene is the centerpiece of the movie’s nature vs. technology motif, a reminder to today’s viewers about the perils of relying too much on gadgets and not enough on human intuition. You’ll recall that Luke and his team of X-Wing fighters are attacking Darth Vader’s planet-size command center. Pilots are relying on a navigation and targeting system displayed through a small screen (using gloriously outdated computer graphics) to try to drop torpedoes into the belly of the Death Star. No pilot has succeeded, and a few have been blown to bits.

“Luke, an apprentice still learning the ways of The Force from the wise — but now dead — Obi-Wan Kenobi, decides to put The Force to work in the heat of battle. He pushes the navigation screen away from his face, shuts off his “targeting computer” and lets The Force guide his mind and his jet’s torpedo to the precise target.

“Luke put down his gadget, blocked out the noise and found a quiet place of Zen-like focus. George Lucas was making an anti-technology statement 36 years ago that resonates today. The overarching message of Star Wars is to use technology for good. Use it to conquer evil, but don’t let it override your own human Force. Don’t let technology replace you.

“Take a lesson from a great Jedi warrior. Push the screen away from time to time and give your mind and personality a chance to shine. When it’s time to use the screen again, use it for good.”

Looking back: Operation Buckshot Yankee & agent.btz

It was the fall of 2008. A variant of a three-year-old, relatively benign worm began infecting U.S. military networks via thumb drives.

Deputy Defense Secretary William Lynn wrote nearly two years later that patient zero was traced to an infected flash drive inserted into a U.S. military laptop at a base in the Middle East. The flash drive’s malicious computer code uploaded itself onto a network run by the U.S. Central Command. That code spread undetected on both classified and unclassified systems, establishing what amounted to a digital beachhead from which data could be transferred to servers under foreign control. It was a network administrator’s worst fear: a rogue program operating silently, poised to deliver operational plans into the hands of an unknown adversary.

The worm, dubbed agent.btz, caused the military’s network administrators major headaches. It took the Pentagon nearly 14 months of stop-and-go effort to clean out the worm, a process the military called Operation Buckshot Yankee. The cleanup was so hard that it led to a major reorganization of the information defenses of the armed forces, ultimately bringing the new Cyber Command into being.

So what was agent.btz? It was a variant of the SillyFDC worm, which copies itself from removable drive to computer and back to drive again. Depending on how the worm is configured, it has the ability to scan computers for data, open backdoors, and send data through those backdoors to a remote command-and-control server.

To keep it from spreading across a network, the Pentagon banned thumb drives and the like from November 2008 to February 2010. You could also disable Windows’ “autorun” feature, which automatically starts any program loaded on a drive; a sketch of the registry change follows.
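
The policy key that governs autorun is well documented. Here is a minimal sketch using Python’s winreg module (Windows only, run with administrative rights) that sets NoDriveTypeAutoRun to 0xFF, disabling autorun on all drive types:

# Disable Windows autorun for all drive types via the documented policy key.
import winreg

path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)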

As Noah Shachtman noted, the havoc caused by agent.btz has little to do with the worm’s complexity or maliciousness — and everything to do with the military’s inability to cope with even a minor threat. “Exactly how much information was grabbed, whether it got out, and who got it — that was all unclear,” says an officer who participated in the operation. “The scary part was how fast it spread, and how hard it was to respond.”

Gen. Kevin Chilton of U.S. Strategic Command said, “I asked simple questions like how many computers do we have on the network in various flavors, what’s their configuration, and I couldn’t get an answer in over a month.” As a result, network defense has become a top-tier issue in the armed forces. “A year ago, cyberspace was not commanders’ business. Cyberspace was the sys-admin guy’s business or someone in your outer office when there’s a problem with machines business,” Chilton noted. “Today, we’ve seen the results of this command level focus, senior level focus.”

What can you learn from Operation Buckshot Yankee?
a) That denial is not a river in Egypt
b) There are well-known ways to minimize (but not eliminate) threats
c) It requires command level, senior level focus; this is not a sys-admin business

Defense in Depth – The New York Times Case

In January 2013, the New York Times accused hackers from China, with connections to the Chinese military, of successfully penetrating its network and gaining access to the logins of 53 employees, including Shanghai bureau chief David Barboza, who the previous October had published an embarrassing article on the vast secret wealth of China’s prime minister, Wen Jiabao.

This came to light when AT&T noticed unusual activity that it was unable to trace or deflect. A security firm was brought in to conduct a forensic investigation, which uncovered the true extent of what had been going on.

Over four months starting in September 2012, the attackers managed to install 45 pieces of targeted malware designed to probe for data such as emails after stealing credentials; only one of the 45 was detected by the installed antivirus software from Symantec. Although the staff logins were hashed, that does not appear to have stopped the hackers in this instance, perhaps, the newspaper suggests, because they were able to deploy rainbow tables to beat the relatively short passwords.
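
To see why hashing alone did not save those logins, consider this toy sketch of precomputation. The hash algorithm and word list are illustrative (the article does not say what the Times used), and real rainbow tables use hash chains to trade storage for computation, but the effect is the same: short, unsalted password hashes can be reversed by simple lookup.

# Precompute hash -> plaintext for a word list, then "reverse" a stolen hash.
import hashlib

wordlist = ["123456", "letmein", "passw0rd"]          # tiny stand-in list
table = {hashlib.md5(pw.encode()).hexdigest(): pw for pw in wordlist}

stolen = hashlib.md5(b"letmein").hexdigest()          # a captured hash
print(table.get(stolen, "not in table"))              # -> letmein

A per-user salt defeats this approach: the attacker would need a separate table for every salt value.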

Symantec offered this statement: “Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats.”

Still think that basic antivirus and a firewall are enough? Take it directly from Symantec: you need to monitor and analyze data from inside the enterprise for evidence of compromise. That is Security Information and Event Management (SIEM).

Cyber Pearl Harbor a myth?

Erik Gartzke, writing in International Security, argues that attackers don’t have much motive to stage a Pearl Harbor-type attack in cyberspace if they aren’t involved in an actual shooting war.

Here is his argument:

It isn’t going to accomplish any very useful goal. Attackers cannot easily use the threat of a cyber attack to blackmail the U.S. (or other states) into doing something they don’t want to do. If they provide enough information to make the threat credible, they instantly make the threat far more difficult to carry out. For example, if an attacker threatens to take down the New York Stock Exchange through a cyber attack, and provides enough information to show that she can indeed carry out this attack, she is also providing enough information for the NYSE and the U.S. Government to stop the attack.

Cyber attacks usually involve hidden vulnerabilities — if you reveal the vulnerability you are attacking, you probably make it possible for your target to patch the vulnerability. Nor does it make sense to carry out a cyber attack on its own, since the damage done by nearly any plausible cyber attack is likely to be temporary.

Points to ponder:

  • Most attacks occur against well-known vulnerabilities, on systems that are unpatched
  • Most attacks go undetected, and systems remain “pwned” for weeks or months
  • The disruption caused when attacks are discovered is significant in both human and cost terms
  • There was little logic in the 9/11 attacks other than to cause havoc and fear (i.e., terrorists are not famous for logical, well-thought-out reasoning)

Turning to commercial systems, attacks are usually for monetary gain, but they are often performed simply because “they can.” (Remember George Mallory, famously quoted as having replied to the question “Why do you want to climb Mount Everest?” with the retort “Because it’s there.”)