Learning from LeBron

Thinking about implementing analytics? Before you do that, ask yourself “What answers do I want from the data?”

After the Miami Heat lost the 2011 NBA Finals to the Dallas Mavericks, many armchair MVPs were only too happy to explain that LeBron was not a clutch player and didn’t have what it takes to win championships in this league. Both LeBron and Coach Erik Spoelstra, however, were determined to convert that loss into a teaching moment.

Analytics could help. But what was the question? According to Spoelstra, “It took the ultimate failure in the Finals to view LeBron and our offense with a different lens. He was the most versatile player in the league. We had to figure out a way to use him in the most versatile of ways — in unconventional ways.” In the last game of the 2011 Finals, James was almost listlessly loitering beyond the arc, hesitating, shying away, and failing to take advantage of his stature. His last shot of those Finals was symbolic: an ill-fated 25-foot jump shot from the outskirts of the right wing — his favorite 3-point shot location that season.

LeBron decided the correct answer was to work on his post-up game during the offseason. He spent a week learning from the great Hakeem Olajuwon, bringing his own videographer to record the sessions for later review. LeBron arrived early for each session and was stretched and ready to go every time. He took the lessons to the gym for the rest of the offseason. It worked. James emerged from that summer transformed. “When he returned after the lockout, he was a totally different player,” Spoelstra says. “It was as if he downloaded a program with all of Olajuwon’s and Ewing’s post-up moves. I don’t know if I’ve seen a player improve that much in a specific area in one offseason. His improvement in that area alone transformed our offense to a championship level in 2012.”

The true test of analytics isn’t just how good the data is but how committed you are to acting on it. In the 2012 NBA Finals, LeBron won the Finals MVP and Miami won the championship.

The lesson here is to know what answers you are seeking from the data, and to commit to going where the data takes you.

Using Dynamic Audit Policy to Detect Unauthorized File Access

One thing I always wished you could do in Windows auditing was mandate that access to an object be audited if the user was NOT a member of a specified group. Why? Well, sometimes you have data that you know a given group of people will be accessing, and for that activity you have no need of an audit trail.

Let’s say you know that members of the Engineering group will be accessing your Transmogrifier project folder, and you do NOT need an audit trail when they do. But this is very sensitive data, and you DO need to know if anyone else looks at Transmogrifier.

In the old days there was no way to configure Windows audit policy with that kind of negative Boolean or exclusive criterion. With Windows Server 2008/Windows 7 and earlier, you could only enable auditing based on whether someone was in a group, not the opposite.
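The rule we want is easy to state in ordinary code. Here is a minimal Python sketch of the logic (not actual Windows syntax; the group name comes from the Transmogrifier example above):

```python
# Sketch of the desired rule: audit ALL successful reads of the
# Transmogrifier folder EXCEPT reads by members of Engineering.
def should_audit(user_groups):
    """Return True when an access should generate an audit event."""
    return "Engineering" not in user_groups

print(should_audit({"Accounting", "Domain Users"}))   # True  -> audited
print(should_audit({"Engineering", "Domain Users"}))  # False -> exempt
```

It is exactly this "NOT a member of" test that older versions of Windows audit policy could not express.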

Windows Server 2012 gives you a new way to control audit policy on files. You can create dynamic policies based on attributes of the file and user. (By the way, you get the same new dynamic capabilities for permissions, too.)

Here’s a screen shot of audit policy for a file in Windows 7.

[Screenshot: audit policy for a file in Windows 7]

Now compare that to Windows Server 2012.

[Screenshot: audit policy for a file in Windows Server 2012, showing the “Add a condition” section]

The same audit policy is defined, but look at the “Add a condition” section. This allows you to add further criteria that must be met before the audit policy takes effect. Each time you click “Add a condition,” Windows adds another condition row where you can build Boolean expressions involving the User, the Resource (file) being accessed, or the Device (computer) where the file is accessed. In the screenshot below I’ve added a policy which accomplishes what we described at the beginning of the article.

[Screenshot: audit entry limited to users who are not members of the Engineering group]

So we start out by saying that Everyone is audited when they successfully read data in this file. But then we limit that to users who do not belong to the Engineering group. Pretty cool, but we are only scratching the surface. You can add more conditions and join them with the Boolean operators OR and AND. You can even group expressions the way you would with parentheses in programming code. The example below shows all of these features: the audit policy takes effect if the user is either a member of a certain group or the user’s department is Accounting, and the file has been classified as relevant to GLBA or HIPAA compliance.

[Screenshot: audit policy combining grouped Boolean conditions on user and resource]
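To make that compound condition concrete, here is a hypothetical Python model of the same policy. The group name (“Auditors”) and the attribute values are illustrative stand-ins, not real Windows identifiers:

```python
# Hypothetical model of the compound audit condition described above:
# (member of a certain group OR department is Accounting)
# AND the file is classified as GLBA- or HIPAA-relevant.
def policy_applies(user_groups, department, file_classifications):
    """Return True when the dynamic audit policy would take effect."""
    user_matches = "Auditors" in user_groups or department == "Accounting"
    file_matches = bool({"GLBA", "HIPAA"} & set(file_classifications))
    return user_matches and file_matches

print(policy_applies({"Auditors"}, "Sales", ["HIPAA"]))  # True
print(policy_applies(set(), "Accounting", ["GLBA"]))     # True
print(policy_applies(set(), "Accounting", ["Public"]))   # False
```

Note how the parenthesized grouping matters: without it, an Accounting user would trigger auditing on every file, classified or not.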

You’ll also notice that you can base auditing and access decisions on much more than the user’s identity and group membership. In the example above we are also referencing the department specified on the Organization tab of the user’s account in Active Directory. With dynamic access control we can choose any other attribute of AD user accounts by going to Dynamic Access Control in the Active Directory Administrative Center and selecting Claim Types, as shown here.

[Screenshot: Claim Types under Dynamic Access Control in the Active Directory Administrative Center]

You can create claim types for just about any attribute of computer and user objects. After creating a new claim type for a given attribute, it’s available in access control lists and audit policies of files and folders throughout the domain.

But dynamic access control and audit policy don’t stop with sophisticated Boolean logic and leveraging user and computer attributes from AD. You can now classify resources (folders and files) according to any number of properties you’d like. Below is a list of the default Resource Properties that come out of the box.

[Screenshot: the default Resource Properties]
Before you can begin using a given Resource Property in a dynamic access control list or audit policy, you need to enable it and then add it to a Resource Property List, as shown here.

[Screenshot: Resource Property List]

After that you are almost ready to define dynamic permissions and audit policies. The last setup step is to identify the file servers where you want to classify files and folders with Resource Properties. On those file servers you need to add the File Server Resource Manager subrole. After that, when you open the properties of a file or folder, you’ll find a new tab called Classification.

[Screenshot: Classification tab on a folder’s properties]

Above you’ll notice that I’ve classified this folder as being related to the Transmogrifier project.  Be aware that you can define dynamic access control and audit policies without referencing Resource Properties or adding the File Server Resource Manager subrole; you’ll just be limited to Claim Types and the enhanced Boolean logic already discussed.

The only change to the file system access events Windows sends to the Security Log is the addition of a new Resource Attributes field to event ID 4663, which I’ve highlighted below.

[Screenshot: event ID 4663 with the new Resource Attributes field highlighted]

This field is potentially useful to SIEM solutions because it embeds in the audit trail a record of how the file was classified at the moment it was accessed. That would allow us to classify important folders all over the network as “ACME-CONFIDENTIAL” and then include that string in alerts and correlation rules in a SIEM like EventTracker, alerting or escalating on events where the information being accessed carries that classification.
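As a rough sketch of how a collector or SIEM rule might key off that field, here is a Python example. The event excerpt and the exact formatting of the Resource Attributes line are assumptions; real 4663 message text varies by environment:

```python
import re

# Hypothetical excerpt of an event ID 4663 message; real formatting varies.
event_text = """
An attempt was made to access an object.
Object Name: \\\\FS1\\Projects\\Transmogrifier\\specs.docx
Resource Attributes: Classification=ACME-CONFIDENTIAL
"""

def is_confidential(event_message, marker="ACME-CONFIDENTIAL"):
    """Escalate if the Resource Attributes line carries the marker string."""
    match = re.search(r"Resource Attributes:.*", event_message)
    return bool(match and marker in match.group(0))

print(is_confidential(event_text))  # True
```

A correlation rule built this way fires on the classification carried inside the event itself, with no need to maintain a separate list of sensitive paths.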

The other big change to auditing and access control in Windows Server 2012 is Central Access Policies, which allow you to define a single access control list or audit policy in AD and apply it to any set of computers. That policy is then evaluated in addition to the local security descriptor on each object.

While Microsoft and the press are concentrating on the access control aspect of these new dynamic and central security features, I think the greatest immediate value may come from the audit policy side we’ve just explored. If you’d like to learn more about dynamic and central access control and audit policy, check out the deep dive session I did with A.N. Ananth of EventTracker: File Access Auditing in Windows Server 2012.

Two classes of cyber threat to critical infrastructure

John Villasenor describes two classes of cyber threat confronting critical infrastructure. Some systems, like the power grid, are viewed by everyone as critical, and the number of people who might credibly target them is correspondingly small. Others, like the internal networks in the Pentagon, are viewed as a target by a much larger number of people. Providing a high level of protection to those systems is extremely challenging but feasible; securing them completely is not.

While I would agree that fewer people are interested in, or able to, hack the power grid, this reminds me of the “insider threat” problem that enterprises face. When an empowered insider with legitimate access goes rogue, the threat can be very hard to locate and the damage can be incredibly high. Most defense techniques for insider threat depend on monitoring and behavioral anomaly detection. Adding to the problem, systems like the power grid are harder to upgrade and harden. The basic methods of restricting access, enforcing authentication and monitoring activity would still apply. No doubt all of this was true for the Natanz enrichment facility in Iran, and it still got hacked by Stuxnet. That system was apparently infected by a USB device carried in by an external contractor, so stricter access restrictions and activity monitoring might at least have helped detect the attack sooner.

In the second class of threat, exemplified by the internal networks at the Pentagon, one assumes that all classic protection methods are enforced. Situational awareness in such cases becomes important. A local administrator who relies entirely on some central IT team to patrol, detect and inform him in time is expecting too much. It is said that God helps those who help themselves.

Villasenor also says: “There is one number that matters most in cybersecurity. No, it’s not the amount of money you’ve spent beefing up your information technology systems. And no, it’s not the number of PowerPoint slides needed to describe the sophisticated security measures protecting those systems, or the length of the encryption keys used to encode the data they hold. It’s really much simpler than that. The most important number in cybersecurity is how many people are mad at you.”

Perhaps we should also consider those interested in cybercrime? The malware industrial complex is booming, and the average price for renting botnets to launch DDoS attacks is plummeting.

The Post Breach Boom

A basic requirement for security is that systems be patched and that security products like antivirus be updated as frequently as possible. However, practical constraints limit how quickly updates can be applied to production systems. This is often why the most active attacks exploit vulnerabilities that have been known for many months.

A new report from the Ponemon Institute polled 3,529 IT and IT security professionals in the U.S., Canada, the UK, Australia, Brazil, Japan, Singapore and the United Arab Emirates to understand the steps they are taking in the aftermath of malicious and non-malicious data breaches. Here are some highlights:

    • On average, it is taking companies nearly three months (80 days) to discover a malicious breach and then more than four months (123 days) to resolve it.

    • One third of malicious breaches are not caught by any of the companies’ defenses; they are instead discovered when companies are notified by a third party (law enforcement, a partner, a customer or another party) or discovered by accident. Meanwhile, more than one third of non-malicious breaches (34 percent) are discovered accidentally.
    • More than two fifths of malicious breaches (42 percent) targeted applications, and more than one third (36 percent) targeted user accounts.
    • On average, malicious breaches ($840,000) are significantly more costly than non-malicious data breaches ($470,000). For non-malicious breaches, lost reputation, brand value and image were reported as the most serious consequences by participants. For malicious breaches, organizations suffered lost time and productivity followed by loss of reputation.

Want an effective defense but wondering where to start? Consider SIEM Simplified.