Top 5 Linux log file groups in /var/log


If you manage any Linux machines, it is essential to know where the log files are located and what they contain. Such files usually live in /var/log. Logging is typically controlled by the syslog daemon’s configuration file (for example, /etc/rsyslog.conf or /etc/syslog.conf).

Some log files are distribution specific, and this directory can also contain logs from applications such as Samba, Apache, lighttpd and mail servers.

From a security perspective, here are 5 groups of files which are essential. Many other files are generated and will be important for system administration and troubleshooting. (A short script sketch that summarizes failed logins from these files follows the list.)

1. The main log file
a) /var/log/messages – Contains global system messages, including messages logged during system startup. Several facilities log to /var/log/messages, including mail, cron, daemon, kern and auth.

2. Access and authentication
a) /var/log/auth.log – Contains system authorization information, including user logins and the authentication mechanisms that were used.
b) /var/log/lastlog – Records the most recent login for each user. This is not an ASCII file; use the lastlog command to view its contents.
c) /var/log/btmp – Contains information about failed login attempts. Use the last command to view the btmp file. For example, “last -f /var/log/btmp | more”
d) /var/log/wtmp or /var/log/utmp – Contain login records. wtmp keeps a history of logins and logouts (view it with the last command), while utmp tracks who is currently logged in; the who command reads utmp to display that information.
e) /var/log/faillog – Contains user failed login attempts. Use the faillog command to display the contents of this file.
f) /var/log/secure – Contains information related to authentication and authorization privileges (the Red Hat/CentOS counterpart of auth.log). For example, sshd logs all its messages here, including unsuccessful logins.

3. Package install/uninstall
a) /var/log/dpkg.log – Contains information logged when a package is installed or removed using the dpkg command (Debian/Ubuntu)
b) /var/log/yum.log – Contains information logged when a package is installed using yum (Red Hat/CentOS)

4. System
a) /var/log/daemon.log – Contains information logged by the various background daemons that run on the system
b) /var/log/cups – All printer- and printing-related log messages
c) /var/log/cron – Whenever the cron daemon (or anacron) starts a cron job, it logs information about the job in this file

5. Applications
a) /var/log/maillog or /var/log/mail.log – Contains the log information from the mail server that is running on the system. For example, sendmail logs information about all sent items to this file
b) /var/log/Xorg.x.log – Log messages from the X Window System (Xorg)
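As a quick illustration of how these files get used in practice, here is a minimal script sketch that summarizes failed SSH logins. It assumes a Debian/Ubuntu-style /var/log/auth.log (on Red Hat/CentOS point it at /var/log/secure instead) and the stock sshd message format; adjust the pattern for your distribution.

#!/usr/bin/env python3
# Minimal sketch: summarize failed SSH logins from the auth log.
# Run with enough privilege to read the file (it is usually not world-readable).
import re
from collections import Counter

LOG_FILE = "/var/log/auth.log"   # or "/var/log/secure" on Red Hat/CentOS
# Typical sshd failure line:
#   "Failed password for invalid user admin from 203.0.113.7 port 52344 ssh2"
PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

by_user, by_source = Counter(), Counter()
with open(LOG_FILE, errors="replace") as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match:
            user, source = match.groups()
            by_user[user] += 1
            by_source[source] += 1

print("Top users targeted:", by_user.most_common(5))
print("Top source addresses:", by_source.most_common(5))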

Happy Logging!

Seven Habits of Highly Fraudulent Users


This post, Seven Habits of Highly Fraudulent Users, from Izzy at SiftScience describes patterns culled from six million transactions over a three-month sample. The “fraud” sample consisted of transactions confirmed fraudulent by customers; the “normal” sample consisted of transactions confirmed by customers to be non-fraudulent, as well as a subset of unlabeled transactions.

These patterns are useful to Security Operations Center (SOC) teams who “hunt” for these things.

Habit #1 Fraudsters go hungry

Whereas there is a dip in activity by normal users at lunch time, no such dip is observed in fraudulent transactions. When looking for out-of-the-ordinary behavior, the absence of any dip during the day might point to a script, which never tires.

Habit #2 Fraudsters are night owls

Analyzing fraudulent transactions as a percentage of all transactions, 3AM was found to be the most fraudulent hour in the day, and night-time in general was a more dangerous time. SOC teams should hunt for “after hours” behavior as a tip-off for bad actors.

Habit #3 Fraudsters are international

Look for traffic originating outside your home country. While these patterns change frequently, as a general rule, international traffic is worth trending and observing.

Habit #4 Fraudsters don multiple identities

Fraudsters tend to create multiple accounts on a laptop or phone to commit fraud. The more accounts associated with the same device, the higher the likelihood of fraud. A user who has six accounts on her laptop is 15 times more likely to be fraudulent than the average person, while users with only one account are less likely to be fraudulent. SOC teams should look for multiple users on the same computer in a given time frame. Even in shared-PC situations (e.g., a nurses’ station in a hospital), it is unusual for more than one user to access a PC in a given shift.
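A minimal hunting sketch for this habit, assuming you can already extract (device identifier, account) pairs from your transaction or authentication logs; the records and threshold below are hypothetical and should be tuned to your environment.

from collections import defaultdict

# Hypothetical (device_id, account) pairs pulled from transaction or logon logs.
events = [
    ("dev-a1", "alice"), ("dev-a1", "alice"),
    ("dev-b2", "mallory1"), ("dev-b2", "mallory2"), ("dev-b2", "mallory3"),
]

accounts_per_device = defaultdict(set)
for device, account in events:
    accounts_per_device[device].add(account)

THRESHOLD = 3   # tune this; known shared kiosks deserve a higher bar
for device, accounts in accounts_per_device.items():
    if len(accounts) >= THRESHOLD:
        print(f"Review {device}: {len(accounts)} distinct accounts {sorted(accounts)}")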

Habit #5 Fraudsters use well known domains

The top 3 sources of fraud originate from Microsoft sites including outlook.com, Hotmail and live.com. Traffic from/to such sites is worthy of trending and examining.

Habit #6 Fraudsters are boring

A widely recognized predictor of fraud is the number of digits in an email address. The more numbers, the more likely that it’s fraud.

Habit #7 Fraudsters like disposable things

We know that attacks almost always originate from DHCP addresses (which is why dshield.org/block.txt gives out /24 ranges). It’s also true that the older an account, the less likely (in general) it is involved in fraud. SOC teams should always keep an eye on newly created accounts.

Good hunting.

EventTracker and Poodle


Summary:
• All systems and applications utilizing the Secure Sockets Layer (SSL) 3.0 protocol with cipher-block chaining (CBC) mode ciphers may be vulnerable. However, the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack demonstrates this vulnerability using web browsers and web servers, which is one of the most likely exploitation scenarios.
• EventTracker v7.x is implemented on top of IIS on the Windows platform and therefore MAY be vulnerable to POODLE, depending on the configuration of IIS.
• ETIDS and ETVAS, which are offered as options of the SIEM Simplified service, are based on CentOS v6.5, which uses Apache, and may also be vulnerable, depending on the configuration of Apache.

• Poodle Scan can be used to test whether your server is vulnerable.
• Below are the links relevant to this vulnerability:

Laying Traps for External Information Thieves


Wouldn’t it be nice if you could detect when an external threat actor, who has taken over one of your users’ endpoints, goes on a poaching expedition through all the information that user has access to on your network?

Easier said than done, right? After all, when malware is running on an endpoint, anything it does shows up as being performed by that user. How high, really, are your chances of recognizing those events as being different from the user’s normal behavior?

EventTracker Search Performance


EventTracker 7.6 is a complex software application, and while there is no easy formula to compute its performance, there are ways to configure and use it so as to get better performance. All data received, either in real time or by file ingest (called the Direct Log Archiver), is first indexed and then compressed and archived for optimal disk utilization. When a search is performed across these indexed, compressed archives, the speed of results depends on the type of search as well as the underlying hardware.

Searches can be categorized as:
Dense – at least one result per thousand (1,000) events
Sparse – at least one result per million (1,000,000) events
Rare – at least one result per billion (1,000,000,000) events
Needle in a haystack – one event in more than a billion events
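To make these categories concrete, here is a small sketch that labels a search by its observed hit rate; the thresholds simply restate the definitions above and are not taken from the product.

def categorize_search(matches: int, events_scanned: int) -> str:
    # Label a search by hit density, per the definitions above.
    if matches == 0:
        return "needle in a haystack (no hits in the scanned range)"
    rate = matches / events_scanned
    if rate >= 1 / 1_000:
        return "dense"
    if rate >= 1 / 1_000_000:
        return "sparse"
    if rate >= 1 / 1_000_000_000:
        return "rare"
    return "needle in a haystack"

print(categorize_search(matches=4_200, events_scanned=1_000_000))        # dense
print(categorize_search(matches=3, events_scanned=2_500_000_000))        # rare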

Based on the provided search criteria, EventTracker consults indexing metadata to determine whether, and in which archive, events matching the search terms are stored. As searches go from dense to needle-in-a-haystack, they move from being CPU bound to I/O bound.

Dense searches are CPU bound because matches are found easily and there is plenty of raw data to decompress. For the fastest possible response on default hardware, EventTracker limits the returned results to the first 200 (sorted by time with the newest on top). This setting can of course be overridden, but it is provided because it satisfies the most common use case.

As the events containing the search term drop to one in a hundred thousand (100,000), performance becomes more I/O bound. The reason is that there is less and less matching data, but more and more index files have to be consulted.

I/O performance is measured as latency, which is the time delay from when a disk I/O request is created until the time the disk I/O request is completed by the underlying hardware. Windows Perfmon can measure this as Avg. Disk sec/Transfer. A rule of thumb is to keep this below 25 milliseconds for best I/O performance (a rough way to spot-check this follows the list below).

This can be realized in various ways:
– Having different drives (spindles) for the OS/programs and the archives
– Using faster disks (15K RPM disks perform better than 7,200 RPM disks)
– Using a SAN
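To spot-check the 25 millisecond rule of thumb on the volume that holds your archives, a rough measurement sketch follows. It is not a substitute for Perfmon: operating system caching will flatter the numbers, so point it at a large, rarely touched file (the path below is hypothetical).

import os
import random
import time

def sample_read_latency(path, samples=200, block=4096):
    # Time random small reads to approximate disk latency for this volume.
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:
        for _ in range(samples):
            f.seek(random.randrange(0, max(1, size - block)))
            start = time.perf_counter()
            f.read(block)
            latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

avg = sample_read_latency(r"D:\Archives\sample.arc")   # hypothetical archive file
print("average read latency: %.1f ms" % (avg * 1000))
print("within the 25 ms rule of thumb" if avg < 0.025 else "above the 25 ms rule of thumb")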

In larger installations with multiple Virtual Collection Points (VCPs), dedicating a separate disk spindle to each VCP can help.

Nineteen Minutes In April


On April 16, 2013, a sniper took a hundred shots at Pacific Gas and Electric’s (PG&E) Metcalf electric power transformer station. The utility was able to reroute power on the grid and avert a blackout. The whole ordeal took nineteen tension-filled minutes. The event added muscle to the regulatory grip of the North American Electric Reliability Corporation (NERC), a not-for-profit entity whose mission is to ensure the reliability of the bulk power system in North America. A terrorist attack, domestic or otherwise, could bring the state’s power grid down.

The Data Scientist Unicorn


An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.

As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”

Starting up a “center of excellence” or addressing a “grand challenge”  is not practical for most organizations. Instead, how about an effort to deliver tangible and data-driven benefits in a short time frame?

Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”

What does any of this have to do with SIEM you ask?

Well, for the typical small/medium enterprise (SME), this is a familiar dilemma. Data, data everywhere and not a drop (of intelligence) to drink. Either the “data scientist” is not on the employee roster or does not have the time available. How then do you square this circle? Look for outside expertise, as Schrage notes.

SIEM Simplified service

SMEs looking for expertise to mine the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.

Unicorns don’t exist, but that doesn’t mean doing nothing is a valid option.

EventTracker and Shellshock


What’s your thought on Shellshock? EventTracker CEO A.N. Ananth weighs in.

Summary:

  • Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the Linux/Unix Bash shell.
  • EventTracker v6.x and v7.x are NOT vulnerable to Shellshock, as these products are based on the Microsoft Windows platform.
  • ETIDS and ETVAS which are offered as options of the SIEM Simplified service, are vulnerable to Shellshock, as these solutions are based on CentOS v6.5. Below are the links relevant to this vulnerability.
  • If you subscribe to ETVAS and/or ETIDS, the EventTracker Control Center has already initiated action to patch this vulnerability on your behalf. Please contact ecc@eventtracker.com with any questions.

Details:

Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many internet daemons. It is a program that is used by various Unix-based systems to execute command scripts and command lines. Often it is installed as the system’s default command line interface.

Notes:

  • Environment variables (each running program having its own list of name/value pairs) occur in Unix-based and other operating systems that Bash supports. When one program starts another, it provides an initial list of environment variables to the new program. Apart from this, Bash also maintains an internal list of functions (named scripts) that can be executed from within Bash.
  • By exploiting vulnerable versions of Bash, an attacker can gain unauthorized access to a computer system. Executing Bash with a specially crafted value in its environment variable list can cause vulnerable versions of Bash to run that value as code, which may allow remote code execution.
  • Scrutiny of the Bash source code history reveals that the vulnerability has been present since approximately version 1.13 (1992). The lack of comprehensive change logs does not allow the maintainers of the Bash source code to pinpoint the exact time the vulnerability was introduced.
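If you want to check a Linux host yourself, the widely circulated one-line test can be wrapped in a short script. This is a sketch, not an EventTracker tool, and it only exercises the original CVE-2014-6271 vector.

import os
import subprocess

# A patched Bash prints only "this is a test"; a vulnerable Bash also executes
# the trailing command smuggled in via the crafted environment variable.
env = dict(os.environ, x="() { :;}; echo vulnerable")
result = subprocess.run(
    ["/bin/bash", "-c", "echo this is a test"],
    env=env, capture_output=True, text=True,
)
print(result.stdout, end="")
if "vulnerable" in result.stdout:
    print("This Bash appears vulnerable to CVE-2014-6271 and should be patched.")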

We don’t need no stinkin Connectors


#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin badges” which has been used often (e.g., Blazing Saddles). The equivalent of this in the log management universe is a “Connector”. We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.

These questions stem from a model used by programs such as ArcSight, which depend on Early Binding. In an earlier era of computing, Early Binding was needed because the compiler could not otherwise create an entry in the virtual method table for the procedure being compiled. It has the advantage of being efficient, an important consideration when CPU and memory are in very short supply, as they were years ago.

Just-in-time languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative connotations in terms of performance, but that hasn’t been true for at least 20 years.

Early Binding requires a fixed schema to be mandated for all possible entries and for input to be “normalized” to this schema. The benefit of the fixed plan is efficiency in output, since the data is already normalized. While that may make some sense for compilers, where input arrives in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that do not adopt any standardization at all. The downside of such an approach is that it requires a “Connector” to normalize each new log source to the fixed schema. Another consideration is that outputs can vary greatly depending on usage; there are many possible uses for the data, and the only limitation is the user’s imagination. The Early Binding model, however, is designed with fixed outputs in mind. These disadvantages limit such designs.

In contrast, EventTracker uses Late Binding, where the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time. The desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
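To illustrate the difference (a toy sketch, not EventTracker’s actual implementation), compare normalizing every source at ingest with extracting fields only when a query asks for them:

import re

raw_logs = [
    "sshd[812]: Failed password for root from 203.0.113.7 port 4711 ssh2",
    "CEF:0|Vendor|FW|1.0|100|Blocked|5|src=198.51.100.9 dst=10.0.0.5",
]

# Early Binding would force each of these into a fixed schema at ingest time,
# which requires a per-source "Connector" before the data can even be stored.
# Late Binding stores the raw line as-is and applies an extraction at query time.
def extract(line, pattern):
    # Apply a user-supplied extraction at search/report time (late binding).
    m = re.search(pattern, line)
    return m.groupdict() if m else {}

# At report time the analyst decides what "source address" means per format.
print(extract(raw_logs[0], r"from (?P<src_ip>\d+\.\d+\.\d+\.\d+)"))
print(extract(raw_logs[1], r"src=(?P<src_ip>\S+)"))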

Spray & Pray or 80/20


If you spend any time at all looking at log data from any server that is accessible to the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available. They most commonly include port scans, login attempts using default username/password combinations, and the web server attacks described by OWASP.

How can this possibly be, given the sheer number of machines that are visible on the Internet? Don’t these guys have anything better to do?

The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize the return on their effort, computing capacity and network bandwidth. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s risk vs. reward all the way.

The automated (first) wave of these attacks is meant to identify vulnerable machines and establish presence. Following waves may be staffed by humans, depending on the location and identity of the target, and thus on the potential value to be obtained by a greater investment of the attacker’s (scarce) expertise.

Such attacks can be deterred quite simply by using secure (non-default) configuration, system patching and basic security defenses such as firewall and anti-virus. This explains the repeated exhortations of security pundits on “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.

The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost, so as to at least identify soft, high-value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of the cost by following basic best practices.

Guidance such as the SANS Critical Controls or lessons from Verizon’s annual Data Breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect the logs which would expose the attackers’ actions, and certainly never actually look at the logs.

Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.

Hackers: What they are looking for and the abnormal activities you should be evaluating


Most hackers go after critical data by way of credential theft. A credential theft attack is one in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: a contact with highly sensitive data attached, such as access to accounts and secure data that many others within the organization don’t have.

It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:
• Logging on to unsecured computers with privileged accounts
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers.

There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.

These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Pass-the-hash (PtH) and other credential theft attacks are ubiquitous today because freely available tooling makes it simple and easy to extract the credentials of other privileged accounts once an attacker has gained Administrator- or SYSTEM-level access to a computer.

Even without such tooling, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots and clipboard contents. An attacker with privileged access can also disable anti-malware software, install rootkits, modify protected files, or install malware that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce not only the attack surface of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.

Case of the Disappearing Objects: How to Audit Who Deleted What in Active Directory


I often get asked how to audit the deletion of objects in Active Directory. It’s pretty easy to do this with the Windows Security Log, especially for tracking deletion of users and groups, which I’ll show you first. All you have to do is enable “Audit user account management” and “Audit security group management” in the Default Domain Controllers Policy GPO.
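As a hedged sketch of what the follow-up query can look like: on Windows Server 2008 and later, deletions land in the Security log as event ID 4726 (a user account was deleted) and 4730/4734 (a security-enabled global/local group was deleted). Something like the following, run on a domain controller once the audit settings above take effect, pulls the most recent ones; the event IDs differ on Server 2003.

import subprocess

# Query the Security log for user and group deletion events (Server 2008+ IDs).
QUERY = "*[System[(EventID=4726 or EventID=4730 or EventID=4734)]]"
result = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{QUERY}", "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)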

Practical ways to analyze login and pre-authentication failures


Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:

1) Learn and know how to identify login events and their descriptions. A great resource to find event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

2) Identify and examine the event description. To analyze events efficiently and effectively, you must analyze the event description. Within a login failure description, details such as the failure reason, user name, logon type, workstation name and source network address are critical to your investigation and analysis. By knowing what to pay attention to in the description, you can easily eliminate the noise.

When using a system like EventTracker, displaying only the required fields eliminates the noise and shows you the relevant failures immediately. EventTracker provides a summary of the total number of events for each failure type and user name, automating the review of your systems’ critical information.
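Here is a minimal sketch of that kind of summary, using hypothetical records already parsed out of the failure descriptions; the field names simply mirror the details called out above.

from collections import Counter

failures = [
    {"user": "jsmith", "reason": "Unknown user name or bad password",
     "logon_type": 3, "workstation": "WS042", "source_ip": "10.1.4.77"},
    {"user": "svc_backup", "reason": "Account locked out",
     "logon_type": 4, "workstation": "SRV01", "source_ip": "10.1.0.12"},
    {"user": "jsmith", "reason": "Unknown user name or bad password",
     "logon_type": 3, "workstation": "WS042", "source_ip": "10.1.4.77"},
]

# Summarize by (failure reason, user name), the way the review is described above.
summary = Counter((f["reason"], f["user"]) for f in failures)
for (reason, user), count in summary.most_common():
    print(f"{count:>4}  {user:<12} {reason}")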

Using this kind of analysis will help your enterprise run more efficiently and effectively than relying on traditional reports for the hundreds of events that happen every day. Doing this without the help of a management and monitoring tool is nearly impossible.

Please reference here for detailed charts.

Simplify SIEM with Services


To support security, compliance and operational requirements, specific and fast answers to the four W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed, single-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility, and the attendant effectiveness, are made possible by centralizing the collection, analysis and storage of log and other security data from sources throughout the enterprise network.

To obtain value from your SIEM solution, it must be watered and fed. This is an ongoing commitment, whether your team chooses to do it yourself or have someone do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.

“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organization will continue to own their SIEM tools.” –Anton Chuvakin, Gartner Analyst

Known knowns, Unknown unknowns


“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. ”
–Donald Rumsfeld, Secretary of Defense

In SIEM world, the known knowns are alerts. We configure rules to look at security data for threats/problems that we find to be interesting and bring them to the operators’ attention. This is a huge step up in the SIEM maturity scale from log ignorance. The Department of Homeland Security refers to this as “If you see something, say something.” What do you do when you see something? You “do something,” better known as alert-driven workflow. In the early stages of a SIEM implementation there is a lot of time spent refining alert definitions in order to reduce “noise.”

While this approach addresses the “known knowns”, it does nothing for the “unknown unknowns”. To identify the unknown, you must stop waiting for alerts and instead search for insights. This approach starts with a question rather than a reaction to an alert. Notice that, often enough, it’s non-IT persons asking the questions, e.g., Who changed this file? Which systems did “Susan” access on Saturday?

This approach results in interactive investigation rather than the traditional drill down. For example:
– Show me all successful logins over the weekend
– Filter these to show only those on server3
– Why did “Susan” login here? Show all “Susan” activity over the weekend…
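As a toy illustration of this kind of stepwise narrowing (not EventTracker’s actual query language), each question simply filters the previous result set:

logins = [
    {"user": "susan", "host": "server3", "day": "Sat", "status": "success"},
    {"user": "bob",   "host": "server1", "day": "Sun", "status": "success"},
    {"user": "susan", "host": "server3", "day": "Sun", "status": "success"},
]

weekend = [e for e in logins if e["day"] in ("Sat", "Sun") and e["status"] == "success"]
on_server3 = [e for e in weekend if e["host"] == "server3"]
susan_activity = [e for e in weekend if e["user"] == "susan"]

print(len(weekend), "successful weekend logins,", len(on_server3), "on server3")
print("Susan's weekend activity:", susan_activity)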

This form of active data exploration requires a certain degree of expertise in log management tools, with experience and knowledge of the data set to review a thread that looks out of place. Once you get used to the idea, it is incredible to see how visible these patterns become to you. This is essential to “running a tight ship” and being aware of out of the ordinary patterns given the baseline. When staffing technical persons for the EventTracker SIEM Simplified service team, we are constantly looking for “insight hunters” instead of mere “alert responders.”  Alert responding is so 2013…

Top 5 bad assumptions about SIEM


The cliché goes, “When you assume, you make an ass out of u and me.” When implementing a SIEM solution, these five assumptions have the potential to get us in trouble. They stand in the way of organizational and personal success and thus are best avoided.

5. Security by obscurity or my network is too unimportant to be attacked
Small businesses tend to be more innovative and cost-conscious. Is there such a thing as too small for hackers to care? In this blog post we outlined why this is almost never the case. As the Verizon Data Breach Report shows year in and year out, companies with 11-100 employees from 36 countries had the maximum number of breaches.

4. I’ve got to do it myself to get it right
Charles de Gaulle on humility: “The graveyards are full of indispensable men.” Everyone tries to demonstrate multifaceted skill, but it’s neither effective nor efficient. Corporations do it all the time. Tom Friedman explains it in “The World is Flat.”

3. Compliance = Security
This is only true if your auditor is your only threat actor. We tend to fear the known more than the unknown so it is often the case that we fear the (known) auditor more than we fear the (unknown) attacker. Among the myriad lessons from the Target breach, perhaps the most important is that “Compliance” does NOT equal Security.

2. All I have to do is plug it in; the rest happens by magic
Marketing departments of every security vendor would have you believe this of their magic appliance or software. When has this ever been true? Self-propelling lawn mower anyone?

1. It’s all about buying the most expen$ive technology
Kivas Fajo, in “The Most Toys,” the 70th episode of Star Trek: TNG, believed this. If you negotiate a 90% discount on a $200K solution and then park it as shelfware, what did you get? A wasted $20K is what. It’s always about using what you have.

Bad assumptions = bad decisions.
Always true.

SIEM and Return on Investment: Four Pillars for Success


Return on investment (ROI) — it is the Achilles heel of IT management. Nobody minds spending money to avoid costs, prevent disasters, and ultimately yield more than the initial investment outlay. But is the investment justified? It is challenging to calculate the ROI for any IT investment, and security information and event management (SIEM) tools are no exception. We recently explored some basic precepts or “pillars” of the ROI of SIEM tools and technology. These pillars provide some sensible groundwork for the difficult endeavor to justify intangible costs of SIEM tools and technology.

Security is not something you buy, but something you do


The three sides of the security triangle are People, Processes and Technology.

  1. People – the key issues are: who owns the process, who is involved, what are their roles, are they committed to improving it and working together, and, more importantly, are they prepared to do the work to fix the problem?
  2. Process – can be defined as a trigger event which creates a chain of actions resulting in something being prepared for a customer of that process.
  3. Technology – now that people are aligned and the process developed and clarified, technology can be applied to ensure consistency in the process application and to provide the thin guiding rails that keep the process on track, making it easier to follow the process than not.

None of this is particularly new to CIOs and CSOs, yet how often have you seen six- or seven-figure “investments” sitting on datacenter racks, or even sometimes on actual storage shelves, unused or heavily underused? Organizations throw away massive amounts of money, then complain about “lack of security funds” and “being insecure.” For many organizations, buying security technologies is far too often an easier task than utilizing and “operationalizing” them. SIEM technology suffers from this problem, as do many other monitoring technologies.

Compliance and the “checkbox mentality” make this problem worse, as people read the mandates and only pay attention to the sections that refer to buying boxes.

Despite all this rhetoric, many managers equate information security with technology, completely ignoring the proper order. In reality, a skilled engineer with a so-so tool but a good process is more valuable than an untrained person equipped with the best of tools.

As Gartner analyst Anton Chuvakin notes, “…if you got a $200,000 security appliance for $20,000 (i.e. at a steep 90% discount), but never used it, you didn’t save $180k – you only wasted $20,000!”

Security is not something you BUY, but something you DO.

IP Address is not a person


As we deal with forensic reviews of log data, our SIEM Simplified team is called upon to piece together a trail showing the four W’s: Who, What, When and Where. Logs can be your friend and, if collected, centralized and indexed, can get you answers very quickly.

There is a catch though. The “Where” question is usually answered by supplying either a system name or an IP Address which at the time in question was associated with that system name.

Is that good enough for the law? That is, will the legal system accept that you are your IP address?

Florida District Court Judge Ursula Ungaro says no.

Judge Ungaro was presented with a case brought by Malibu Media, which accused IP address “174.61.81.171” of sharing one of their films using BitTorrent without their permission. The Judge, however, was reluctant to issue a subpoena and asked the company to explain how it could identify the actual infringer.

Responding to this order to show cause, Malibu Media gave an overview of their data gathering techniques. Among other things they explained that geo-location software was used to pinpoint the right location, and how they made sure that it was a residential address, and not a public hotspot.

Judge Ungaro welcomed the additional details, but saw nothing that actually proves that the account holder is the person who downloaded the file.

“Plaintiff has shown that the geolocation software can provide a location for an infringing IP address; however, Plaintiff has not shown how this geolocation software can establish the identity of the Defendant,” Ungaro wrote in an order last week.

“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” she adds.

As a side note, on April 26, 2012, Judge Ungaro ruled that an order issued by Florida Governor Rick Scott to randomly drug test 80,000 Florida state workers was unconstitutional. Ungaro found that Scott had not demonstrated that there was a compelling reason for the tests and that, as a result, they were an unreasonable search in violation of the Constitution.

Three trends in Enterprise Networks


There are three trends in Enterprise Networks:

1) Internet of Things Made Real. We’re all familiar with the challenge of big data: how the volume, velocity and variety of data are overwhelming. Studies confirm the conclusion many of you have reached on your own: there’s more data crossing the internet every second than existed on the internet in total 20 years ago. And now, as customers deploy more sensors and devices in every part of their business, the data explosion is just beginning. This concept, called the “Internet of Things,” is a hot topic. Many businesses are uncovering efficiencies based on how connected devices drive decisions with more precision in their organizations.

2) “Reverse BYOD.” Most of us have seen firsthand how a mobile workplace can blur the line between our personal and professional lives. Today’s road warrior isn’t tethered to a PC in a traditional office setting. They move between multiple devices throughout their workdays with the expectation that they’ll be able to access their settings, data and applications. Forrester estimates that nearly 80 percent of workers spend at least some portion of their time working out of the office and 29 percent of the global workforce can be characterized as “anywhere, anytime” information workers. This trend was called “bring your own device” or “BYOD.” But now we’re seeing the reverse. Business-ready, secure devices are getting so good that organizations are centrally deploying mobility solutions that are equally effective at work and play.

3) Creating New Business Models with the Cloud. The conversation around cloud computing has moved from “if” to “when.” Initially driven by the need to reduce costs, many enterprises saw cloud computing as a way to move non-critical workloads such as messaging and storage to a more cost-efficient, cloud-based model. However, the larger benefit comes from customers who identify and grow new revenue models enabled by the cloud. The cloud provides a unique and sustainable way to enable business value, innovation and competitive differentiation, all of which are critical in a global marketplace that demands more mobility, flexibility, agility and better quality across the enterprise.

The 5 stages of SIEM Implementation


Are you familiar with the Kübler-Ross 5 Stages of Grief model?

SIEM implementation (and indeed most enterprise software installations) bears a striking resemblance.

  • Stage One: Denial – The frustration that new users feel learning the terminology and delivering on the “asks” from the implementation make them question the time investment.
  • Stage Two: Despair – The self-doubt that most implementation teams feel in delivering on the promises of a complex security technology with many moving parts.
  • Stage Three: Hopeful Performance – By learning, and even using, the SIEM solution, the team builds confidence in its ability to become recognized for competence and potential.
  • Stage Four: Soaring Execution – The exalted status of a “go-to” team member, connected at the hip through the vendor support team or service provider; earning accolades from management. The team member has delivered value to the organization and is reaping rewards for the business. Personal relationships with vendor or service reps are genuine and mutually beneficial.
  • Stage Five:  Devolution/Plateau – Complacency, through lack of vision or agility, in embracing the next big thing drags down the relationship. Other partners, hungrier for  the customer’s attention, take over the mindshare once enjoyed.

Increasing Security and Driving Down Costs Using the DevOps Approach


The prevailing IT requirement tends toward doing more work faster with fewer resources, so many companies must reconsider their traditional approaches to developing, deploying and maintaining software. One such approach, called DevOps, first gained traction as a viable software development and deployment strategy in Europe in the late 2000s. DevOps is a marriage of convenience

How much security is enough?


Ask a pragmatic CISO about achieving a state of complete organizational security and you’ll quickly be told that this is an unrealistic and financially imprudent goal. So then, how much security is enough?

More than merely complying with regulations or implementing “best practice”, think in terms of optimizing the outcome of the security investment. Never mind the theoretical state of absolute security; think instead of determining and managing risk to critical business processes and assets.

Risk appetite is defined by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite influences the entity’s culture, operating style, strategies, resource allocation, and infrastructure. Risk appetite is not a constant; it is influenced by and must adapt to changes in the environment. Risk tolerance can be defined as the residual risk the organization is willing to accept after implementing risk-mitigation and monitoring processes and controls. One way to implement this is to define levels of residual risk and therefore the levels of security that are “enough”.

The basic level of security is the diligent one, which is the staple of every business network; the organization is able to deal with known threats. The hardened level adds the ability to be proactive (with vulnerability scanning), to be compliant, and to perform forensic analysis. At the advanced level, predictive capabilities are introduced and the organization develops the ability to deal with unknown threats.

If it all sounds a bit overwhelming, take heart; managed security services can relieve your team of the heavy lifting that is a staple of IT Security.

Bottom line – determine your risk appetite to determine how much security is enough.

Top 6 uses for SIEM


Security Information and Event Management (SIEM) is a term coined by Gartner in 2005 to describe technology used to monitor and help manage user and service privileges, directory services and other system configuration changes; as well as providing log auditing and review and incident response.

The core capabilities of SIEM technology are the broad scope of event collection and the ability to correlate and analyze events across disparate information sources. Simply put, SIEM technology collects log and security data from computers, network devices and applications on the network to enable alerting, archiving and reporting.

Once log and security data has been received, you can:

  • Discover external and internal threats

Logs from firewalls and IDS/IPS sensors are useful to uncover external threats; logs from e-mail servers and proxy servers can help detect phishing attacks; logs from badge readers and thumbprint scanners are used to detect physical access.

  • Monitor the activities of privileged users

Computer, network device and application logs are used to develop a trail of activity across the network for any user, but especially for users with high privileges.

  • Monitor server and database resource access

Most enterprises have critical data repositories in files/folders/databases, and these are attractive targets for attackers. Monitoring all server and database resource access improves security.

  • Monitor, correlate and analyze user activity across multiple systems and applications

With all logs and security data in one place, an especially useful benefit is the ability to correlate user activity across the network (a small sketch of this follows the list).

  • Provide compliance reporting

Often the source of funding for SIEM; when properly set up, auditor on-site time can be reduced by up to 90%. More importantly, compliance is to the spirit of the law rather than merely a check-the-box exercise.

  • Provide analytics and workflow to support incident response

Answer the Who, What, When, Where questions. Such questions are the heart of forensic activities and are critical for drawing valuable lessons.
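As a toy sketch of the correlation idea mentioned above (the sources, field names and values are hypothetical and already normalized for the example), pulling one user’s activity from several sources into a single timeline looks like this:

from datetime import datetime

events = [
    {"src": "vpn",      "user": "susan", "ts": "2014-06-07 22:01", "what": "VPN connect"},
    {"src": "windows",  "user": "susan", "ts": "2014-06-07 22:03", "what": "Logon to server3"},
    {"src": "database", "user": "susan", "ts": "2014-06-07 22:10", "what": "SELECT on payroll"},
]

def timeline(user):
    # Correlate one user's activity across all sources, ordered by time.
    rows = [e for e in events if e["user"] == user]
    return sorted(rows, key=lambda e: datetime.strptime(e["ts"], "%Y-%m-%d %H:%M"))

for e in timeline("susan"):
    print(f'{e["ts"]}  [{e["src"]:<8}] {e["what"]}')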

SIEM technology is routinely cited as a basic best practice by every regulatory standard and its absence has been regularly shown as a glaring weakness in every data breach post mortem.

Want the benefit but not the hassle? Consider SIEM Simplified, our service where we do the disciplined blocking and tackling which forms the core of any security or compliance regime.

How to analyze login and pre-authentication failures for Windows Server 2003 R2 and below


Analyzing all the login and pre-authentication failures within your organization can be tedious. There are thousands of login failures generated for several reasons. Here we will discuss the different event IDs and error codes and how you can simplify the login failure review process.

TMI, Too Little Analysis


The typical SIEM implementation suffers from TMI, TLA (Too Much Information, Too Little Analysis). And if any organization that’s recently been in the news knows this, it’s the National Security Agency (NSA). The Wall Street Journal carried this story quoting William Binney, who rose through the ranks at the NSA over a 30-year career before retiring in 2001. “The NSA knows so much it cannot understand what it has,” Binney said. “What they are doing is making themselves dysfunctional by taking all this data.”

Most SIEM implementations start at this premise – open the floodgates, gather everything because we are not sure what we are specifically looking for, and more importantly, the auditors don’t help and the regulations are vague and poorly worded.

Lt. Gen. Clarence E. McKnight, former head of the Signal Corps, opined that “The issue is a straightforward one of simple ability to manage data effectively in order to provide our leaders with actionable information. Too much raw data compromises that ability. That is all there is to it.”

A presidential panel recently recommended the NSA shut down its bulk collection of telephone call records of all Americans. It also recommended creation of “smart software” to sort data as it is collected, rather than accumulate vast troves of information for sorting out later. The reality is that the collection becomes an end in itself, and the sorting out never gets done.

The NSA may be a large, powerful bureaucracy, intrinsically resistant to change, but how about your organization? If you are seeking a way to get real value out of SIEM data, consider co-sourcing that problem to a team that does that for a living. SIEM Simplified was created for just that purpose. Switch from TMI, TLA (Too Much Information, Too Little Analysis) to JEI, JEA (Just Enough Information, Just Enough Analysis).

EventTracker and Heartbleed


Summary:

The usage of OpenSSL in EventTracker v7.5 is NOT vulnerable to Heartbleed.

Details:

A lot of attention has focused on CVE-2014-0160, the Heartbleed vulnerability in OpenSSL. According to http://heartbleed.com, OpenSSL 0.9.8 is NOT vulnerable.

The EventTracker Windows Agent uses OpenSSL indirectly if the following options are enabled and used:

1)      Send Windows events as syslog messages AND use the FTP server option to transfer non-real-time events to an FTP server. To support this mode of operation, WinSCP.exe v4.2.9 is distributed as part of the EventTracker Windows Agent. This version of WinSCP.exe is compiled with OpenSSL 0.9.8, as documented in http://winscp.net/eng/docs/history_old (v4.2.6 onwards). Accordingly, the EventTracker Windows Agent is NOT vulnerable.

2)      Configuration Assessment (SCAP). This optional feature uses ovaldi.exe v5.8 Build 2 which in turn includes OpenLDAP v2.3.27 as documented in the OVALDI-README distributed with the EventTracker install package. This version of OpenLDAP uses OpenSSL v0.9.8c which is NOT vulnerable.

Notes:

  • EventTracker Agent uses Microsoft secure channel (Schannel) for transferring syslog over SSL/TLS. This package is NOT vulnerable as noted here.
  • We recommend that all customers who may be vulnerable follow the guidance from their software distribution provider.  For more information and corrective action guidance, please see the information from US Cert here.

Top 5 reasons IT Admins love logs


Top 5 reasons IT Admins love logs:

1) Answer the ‘W’ questions

Who, what, where and when; critical files, logins, USB inserts, downloads…see it all

2) Cut ’em off at the pass, ke-mo sah-bee

Get an early warning of the railroad jumping off track. It’s what IT Admins do.

3) Demonstrate compliance

Don’t even try to demonstrate compliance until you get a log management solution in place. Reduce on-site auditor time by 90%.

4) Get a life

Want to go home on time and enjoy the weekend? How about getting proactive instead of reactive?

5) Logs tell you what users don’t

“It wasn’t me. I didn’t do it.” Have you heard this before? Logs don’t lie.

Avenue Compromise Credential Theft


After an attacker has compromised a target infrastructure, the typical next step is credential theft. The objective is to propagate compromise across additional systems, and eventually target Active Directory and domain controllers to obtain complete control of the network.

Top 5 reasons Sys Admins hate logs


Top 5 Reasons Sys Admins hate logs:

1) Logs multiply – the volume problem

A single server easily generates 0.25 million logs every day, even when operating normally. How many servers do you have? Plus, you have workstations and applications, not to mention network devices.

2) Log obscurity – what does it mean?

Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800

Do what now? Go where? ‘Nuff said. (A small decoding sketch follows this list.)

3) Real hackers don’t get logged

If your purpose of logging is, for example, to review logs to “identify and proactively address unauthorized access to cardholder data” for PCI-DSS, how do you know what you don’t know?

4) How can I tell you logged in? Let me count the ways

This is a simple question with a complex answer. It depends on where you logged in. Linux? Solaris? Cisco? Windows 2003? Windows 2008? Application? VMware? Amazon EC2?

5) Compliance forced down your throat, but no specific guidance

Have you ever been in the rainforest with no map, creepy crawlies everywhere, low on supplies and a day’s trek to the nearest settlement? That’s how IT guys feel when management drops a 100+ page compliance standard on their desk.
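As promised under reason 2, here is a minimal decoding sketch for that audit record; it pulls out the obvious key=value fields and is nowhere near a complete auditd parser.

import re

line = ("Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL "
        "msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open "
        "success=yes exit=3 a0=80e3f08 a1=18800")

# Pull simple key=value pairs out of an auditd-style record.
fields = dict(re.findall(r"(\w+)=(\S+)", line))
print(f"syscall={fields.get('syscall')} success={fields.get('success')} "
      f"arch={fields.get('arch')} exit={fields.get('exit')}")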