Archive

Pay Attention to System Security Access Events

There are five different ways you can log on in Windows called “logon types.” The Windows Security Log lists the logon type in event ID 4624 whenever you log on. Logon type allows you to determine if the user logged on at the actual console, via remote desktop, via a network share or if the logon is connected to a service or scheduled task starting up. The logon types are:

  • Interactive (logon type 2) – logon at the local keyboard and screen
  • Network (logon type 3) – e.g. connecting to a shared folder
  • Batch (logon type 4) – a scheduled task starting up
  • Service (logon type 5) – a service starting up
  • RemoteInteractive (logon type 10) – logon via Remote Desktop

There are a few other logon types recorded by event ID 4624 for special cases like unlocking a locked session, but these aren’t real logon session types.

In addition to knowing the session type in logon events, you can also control users’ ability to log on in each of these five ways. A user account’s ability to log on is governed by user rights found in group policy under Computer Configuration/Windows Settings/Security Settings/User Rights Assignment; there is an allow and a deny right for each of the five logon types. In order to log on in a given way you must have the corresponding allow right, but the deny right for that same logon type takes precedence. For instance, in order to log on at the local keyboard and screen of a computer you must have the “Allow log on locally” right; if the “Deny log on locally” right is also assigned to you or any group you belong to, you won’t be able to log on. Each logon type and its corresponding allow and deny rights are listed below.
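A minimal sketch of that mapping, using the standard Windows constant names for the rights (the same system names that show up in events 4717 and 4718):

# Sketch: the five logon types from event ID 4624 and the user-right
# constants that allow or deny each of them.
LOGON_RIGHTS = {
    2:  ("Interactive",    "SeInteractiveLogonRight",       "SeDenyInteractiveLogonRight"),
    3:  ("Network",        "SeNetworkLogonRight",           "SeDenyNetworkLogonRight"),
    4:  ("Batch",          "SeBatchLogonRight",             "SeDenyBatchLogonRight"),
    5:  ("Service",        "SeServiceLogonRight",           "SeDenyServiceLogonRight"),
    10: ("Remote Desktop", "SeRemoteInteractiveLogonRight", "SeDenyRemoteInteractiveLogonRight"),
}

def can_log_on(logon_type, held_rights):
    """A user can log on a given way only if they hold the allow right
    and do not hold the corresponding deny right (deny takes precedence)."""
    _, allow, deny = LOGON_RIGHTS[logon_type]
    return allow in held_rights and deny not in held_rights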

Logon rights are very powerful. They are your first level of control – determining whether a user can access a given system at all. After logging on, of course, a user’s abilities are limited by object-level permissions. Since logon rights are so powerful, it’s important to know if they are suddenly granted or revoked. You can do this with Windows Security Log events 4717 and 4718, which are logged whenever a given right is granted or revoked, respectively. To get these events you need to enable the Audit Authentication Policy Change audit subcategory.
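For example, a quick way to check for these events is to query the Security log with the built-in wevtutil utility; a rough sketch, wrapped in Python and run from an elevated prompt:

import subprocess

# Sketch: pull the 20 most recent System Security Access events.
# 4717 = right granted, 4718 = right removed.
query = "*[System[(EventID=4717 or EventID=4718)]]"
result = subprocess.run(
    ["wevtutil", "qe", "Security", "/q:" + query, "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True, check=True)
print(result.stdout)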

Events 4717 and 4718 identify the logon right involved in the “Access Granted”/”Access Removed” field using the system name for the right (such as SeNetworkLogonRight). The events also specify the user or group to whom the right was granted or from whom it was revoked, in the “Account Modified” field.

Here’s an example of event ID 4717 where we granted the “Access this computer from the network” right to the local Users group.

System security access was granted to an account.
Subject:

Security ID: SYSTEM
Account Name: WIN-R9H529RIO4Y$
Account Domain: WORKGROUP
Logon ID: 0x3e7

Account Modified:

Account Name: BUILTIN\Users

Access Granted:

Access Right: SeNetworkLogonRight

One consideration is that the events do not tell you who (which administrator) granted or revoked the right. The reason is that user rights are controlled via group policy objects. Administrators do not directly assign or revoke user rights on individual systems; even if you modify the Local Security Settings of a computer you are really just editing the local group policy object. When Windows detects a change in group policy it applies the changes to the local configuration and that’s when 4717 and 4718 are logged. At that point the user making the change directly is just the local operating system itself and that’s why you see SYSTEM listed as the Subject in the event above.

So how can you figure out who granted or removed the right? You need to be tracking group policy object changes, a topic I’ll cover in the future.

Did Big Data destroy the U.S. healthcare system?

The problem-plagued rollout of healthcare.gov has dominated the news in the USA. Proponents of the Affordable Care Act (ACA) argue that teething problems are inevitable and that’s all these are. In fact, President Obama has been at pains to say the ACA is more than just a website. Opponents of the law see the website failures as one more indicator that it is unworkable.

The premise of the ACA is that young healthy persons will sign up in large numbers and help defray the costs expected from older persons, and thus provide a good deal for all. It has also been argued that the ACA is a good deal for young healthies. The debate between proponents and opponents of the ACA hinges on this point. See, for example, the debate (shouting match?) between Dr. Zeke Emanuel and James Capretta on Fox News Sunday. In this segment, Capretta says the free market will solve the problem (but it hasn’t so far, has it?) while Emanuel says it must be mandated.

So why then has the free market not solved the problem? Robert X. Cringely argues that big data is the culprit. Here’s his argument:

– In the years before Big Data was available, actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale, so health insurance companies wanted as many policyholders as possible, making a profit on most of them.

– Enter Big Data. The cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis.

– Result? The health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t much need the healthcare system. The goal went from making a profit on most enrollees to making a profit on all enrollees.

Information Security Officer Extraordinaire

IT Security cartoon

Industry News:

Lessons Learned From 4 Major Data Breaches In 2013
Dark Reading

Last year at this time, the running count already totaled approximately 27.8 million records compromised and 637 breaches reported. This year, that tally so far equals about 10.6 million records compromised and 483 breaches reported. It’s a testament to the progress the industry has made in the fundamentals of compliance and security best practices. But this year’s record is clearly far from perfect.

How Will NIST Framework Affect Banks?
BankInfoSecurity

The NIST cybersecurity framework will help U.S. banking institutions assess their security strategies, but some institutions fear the framework could trigger unnecessary regulations, says Bill Stewart of Booz Allen Hamilton.

Did you know that EventTracker is NIST certified for Configuration Assessment?

EventTracker News

EventTracker Wins Government Security News Homeland Security Award

EventTracker announced today that it has won the Security Incident/Event Management (SIEM) category for the 2013 Government Security News Homeland Security Awards.  EventTracker competed for the win among a group of solution providers that included LogRhythm, Solarwinds and RSA.

EventTracker and Secure Links Partner to Bring Better Network Visibility

EventTracker announced that Secure Links, a leading IT services company serving the Canadian market, has joined the Managed Security Service Provider (MSSP) Partner Program. Secure Links will provide and manage EventTracker’s comprehensive suite of log management and SIEM solutions which offer security, operational, and regulatory compliance monitoring.

The VAR’s tale

The Canterbury Tales is a collection of stories written by Geoffrey Chaucer at the end of the 14th century. The tales were part of a storytelling contest between pilgrims going to Canterbury Cathedral, with the prize being a free meal on their return. While the original is in Middle English, here is the VAR’s tale in modern-day English.

In the beginning, the Value Added Reseller (VAR) represented products to the channel and it was good. Software publishers of note always preferred the indirect sales model and took great pains to cultivate the VAR or channel, and it was good. The VAR maintained the relationship with the end user and understood the nuances of their needs. The VAR gained the trust of the end user by first understanding, then recommending and finally supporting their needs with quality, unbiased recommendations, and it was good. End users in turn, trusted their VAR to look out for their needs and present and recommend the most suitable products.

Then came the cloud which appeared white and fluffy and unthreatening to the end user. But dark and foreboding to the VAR, the cloud was. It threatened to disrupt the established business model. It allowed the software publisher to sell product directly to the end user and bypass the VAR. And it was bad for the VAR. Google started it with Office Apps. Microsoft countered with Office 365. And it was bad for the VAR. And then McAfee did the same for their suite of security products. Now even the security focused VARs took note. Woe is me, said the VAR. Now software publishers are selling directly to the end user and I am bypassed. Soon the day will come when cats and dogs are friends. What are we to do?

Enter Quentin Reynolds, who famously said, “If you can’t lick ‘em, join them.” Can one roll back the cloud? No more than King Canute could stop the tide rolling in. What does this mean, then? It means a VAR must transition from being a reseller of product to a reseller of services or, better yet, a provider of services. In this way the VAR may regain relevance with the end user and cement the trust built up between them over the years.

Thus the VAR’s tale may have a happy ending wherein the end user has a more secure network, the auditor, being satisfied, returns to his keep, and the VAR is relevant again.

Which service would suit, you ask? Well, consider one that is not a commodity, one that requires expertise, one that is valued by the end user, one that is not set-and-forget. IT Security leaps to mind; it satisfies these criteria. Even more so within this field are SIEM, Log Management, Vulnerability scanning and Intrusion Detection, given their relevance to both security and regulatory compliance.

Auditing File Shares with the Windows Security Log

Over the years, security admins have repeatedly asked me how to audit file shares in Windows.  Until Windows Server 2008, there were no specific events for file shares.  The best we could do was to enable auditing of the registry key where shares are defined.  But in Windows Server 2008 and later, there are two new subcategories for share related events:

  • File Share
  • Detailed File Share

File Share Events

This subcategory allows you to track the creation, modification and deletion of shared folders, with a different event ID for each of those three operations.  The events indicate who made the change in the Subject fields, and provide both the name the share users see when browsing the network and the path to the file system folder made available by the share.  See the example of event ID 5142 below.

A network share object was added.

Subject:
Security ID:  W8R2\wsmith
Account Name:  wsmith
Account Domain:  W8R2
Logon ID:  0x475b7

Share Information:
Share Name:  \\*\AcmeAccounting
Share Path:  C:\AcmeAccounting

The bad news is that the subcategory also produces event ID 5140 every time a user connects to a share.  The data logged, including who connected and from which client IP address, is useful, but the event is logged much too frequently.  Since Windows doesn’t keep network logon sessions active if no files are held open, you will tend to see this event frequently if you enable the “File Share” audit subcategory.  There is no way to configure Windows to produce just the share change events and not this access event as well.  Of course that’s the point of a log management solution like EventTracker, which can be configured to filter out the noise.
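If you are simply exporting the log rather than feeding a SIEM, the same filtering is easy to script. A rough sketch, assuming the Security log has been exported to a CSV file with an EventID column (the file and column names here are hypothetical):

import csv

# Sketch: keep the share change events and drop the noisy per-connection
# event 5140. 5142 = share added, 5143 = share modified, 5144 = share deleted.
SHARE_CHANGE_EVENTS = {"5142", "5143", "5144"}

with open("security_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["EventID"] in SHARE_CHANGE_EVENTS:
            print(row["EventID"], row.get("TimeCreated", ""), row.get("ShareName", ""))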

Detailed File Share Events

Event ID 5140, as discussed above, is intended to document each connection to a network share, and as such it does not log the names of the files accessed through that share connection.  The “Detailed File Share” audit subcategory provides this lower level of information with just one event ID – 5145 – which is shown below.

A network share object was checked to see whether client can be granted desired access.

Subject:
Security ID:  SYSTEM
Account Name:  WIN-KOSWZXC03L0$
Account Domain:  W8R2
Logon ID:  0x86d584

Network Information:
Object Type:  File
Source Address:  fe80::507a:5bf7:2a72:c046
Source Port:  55490

Share Information:
Share Name:  \\*\SYSVOL
Share Path:  \??\C:\Windows\SYSVOL\sysvol
Relative Target Name: w8r2.com\Policies\{6AC1786C-016F-11D2-945F-00C04fB984F9}\Machine\Microsoft\Windows NT\Audit\audit.csv

Access Request Information:
Access Mask:  0x120089
Accesses:  READ_CONTROL
SYNCHRONIZE
ReadData (or ListDirectory)
ReadEA
ReadAttributes

Access Check Results:
READ_CONTROL: Granted by Ownership
SYNCHRONIZE: Granted by D:(A;;0x1200a9;;;WD)
ReadData (or ListDirectory): Granted by D:(A;;0x1200a9;;;WD)
ReadEA: Granted by D:(A;;0x1200a9;;;WD)
ReadAttributes: Granted by D:(A;;0x1200a9;;;WD)

This event identifies the user (Subject fields), the user’s IP address (Network Information), the share and the actual file accessed via the share (Share Information), and then provides the permissions requested and the results of the access request.  This event actually logs the access attempt, and allows you to see failure versions of the event as well as success events.
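The Access Mask field is simply the bitwise OR of the individual accesses listed beneath it. A small sketch of how 0x120089 decodes into the accesses shown in the example above, using the standard Windows file access right values:

# Sketch: decode a 5145 Access Mask into individual access rights.
FILE_ACCESS_FLAGS = {
    0x00000001: "ReadData (or ListDirectory)",
    0x00000002: "WriteData (or AddFile)",
    0x00000008: "ReadEA",
    0x00000080: "ReadAttributes",
    0x00010000: "DELETE",
    0x00020000: "READ_CONTROL",
    0x00040000: "WRITE_DAC",
    0x00100000: "SYNCHRONIZE",
}

def decode_access_mask(mask):
    return [name for bit, name in FILE_ACCESS_FLAGS.items() if mask & bit]

print(decode_access_mask(0x120089))
# -> ['ReadData (or ListDirectory)', 'ReadEA', 'ReadAttributes',
#     'READ_CONTROL', 'SYNCHRONIZE']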

Be careful about enabling this audit subcategory because you will get an event for every file accessed through network shares each time the application opens the file.  This can be more frequent than imagined for some applications like Microsoft Office.  Conversely, remember that this category won’t catch access attempts on the same files if a locally executing application accesses the file via the local path (e.g. c:\docs\file.txt) instead of via a share.

You might also want to consider enabling auditing on individual folders containing critical files and using the File System subcategory.  This method allows you to be much more selective about who, which files and what types of access are audited.

For most organizations, enable the File Share subcategory if it’s important to you to know when new folders are shared; you will probably want to filter out the 5140 occurrences.  Then, if you have file-level audit needs, turn on the File System subcategory, identify the exact folders containing the relevant files and enable auditing on those folders for the specific operations (e.g. Read, Write, Delete) needed to meet your audit requirements.  Don’t enable the Detailed File Share audit subcategory unless you really want events for every access to every file via network shares.
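If you prefer to script that starting point, the built-in auditpol utility can set the subcategories; a rough sketch (run elevated, using the English subcategory names):

import subprocess

# Sketch: enable the share-change auditing discussed above, plus the
# File System subcategory used for per-folder auditing via SACLs.
for subcategory in ("File Share", "File System"):
    subprocess.run(
        ["auditpol", "/set", "/subcategory:" + subcategory,
         "/success:enable", "/failure:enable"],
        check=True)

# Verify the resulting settings for the whole Object Access category.
subprocess.run(["auditpol", "/get", "/category:Object Access"], check=True)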

The air gap myth

As we work with various networks to implement IT Security in general and SIEM, Log Management and Vulnerability scanning in particular, we sometimes meet with teams that inform us that they have air gapped networks. An air gap is a network security measure that consists of ensuring physical isolation from unsecured networks (like the Internet, for example), the premise being that harmful packets cannot “leap” across the air gap. This type of measure is more often seen in utility and defense installations. Is it really effective in improving security?

A study by the Idaho National Laboratory shows that in the utility industry, while an air gap may provide defense, there are many more points of vulnerability in older networks. Often, critical industrial equipment is of older vintage, when insecure coding practices were the norm. Over the years, such systems have had web front ends grafted onto them to ease configuration and management. This makes them very vulnerable indeed. In addition, these older systems are often missing key controls such as encryption. When automation is added to such systems (to improve reliability or reduce operations cost), the potential for damage is quite high indeed.

In a recent interview, Eugene Kaspersky stated that the ultimate air gap had been compromised. The International Space Station, he said, suffered from virus epidemics. Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS). Kaspersky doesn’t give any details about when the infection he was told about took place, but it appears as if it was prior to May of this year when the United Space Alliance, the group which oversees the operation of the ISS, moved all systems entirely to Linux to make them more “stable and reliable.”

Prior to this move the “dozens of laptops” used on board the space station had been running Windows XP. According to Kaspersky, the infections occurred on laptops used by scientists who used Windows as their main platform and carried USB sticks into space when visiting the ISS. A 2008 report on ExtremeTech said that a Windows XP laptop infected with the W32.Gammima.AG worm was brought onto the ISS by a Russian astronaut, and the worm quickly spread to other laptops on the station – all of which were running Windows XP.

If the Stuxnet infection from June 2010 wasn’t enough evidence, this should lay the air gap myth to rest.

End(er’s) game: Compliance or Security?

Who do you fear more – the Auditor or the Attacker? The former plays by well-established rules, gives plenty of prior notice before arriving on your doorstep and is usually prepared to accept a Plan of Action with Milestones (POAM) in case of deficiencies. The latter gives no notice, never plays fair and will gleefully exploit any deficiencies. Notwithstanding this, most small enterprises actually fear the auditor more and will jump through hoops to minimize their interaction. It’s ironic, because the auditor is really there to help; the attacker, obviously, is not.

While it is true that 100% compliance is not achievable (or for that matter desirable), it is also true that even the most basic of steps towards compliance go a long way to deterring attackers. The comparison to the merits of physical exercise is an easy one. How often have you heard it said that even mild physical exercise (taking the steps instead of elevator) gives you benefit? You don’t have to be a gym rat, pumping iron for hours every day.

And so, to answer the question: What comes first, Compliance or Security? It’s Security really, because Compliance is a set of guidelines to help you get there with the help of an Auditor. Not convinced? The news is rife with accounts of exploits which in many cases are at organizations that have been certified compliant. Obviously there is no such thing as being completely secure, but will you allow the perfect to be the enemy of the good?

The National Institute of Standards and Technology (NIST) released Rev 4 of its seminal publication 800-53, which applies to US Government IT systems. As budgets (time, money, people) are always limited, it all begins with risk classification, applying scarce resources in order of value. There are other guidelines, such as the SANS Institute Consensus Audit Guidelines, to help you make the most of limited resources.

You may not have trained like Ender Wiggin from a very young age through increasingly difficult games, but it doesn’t take a tactical genius to recognize “Buggers” as attackers and Auditors as the frenemies.

Looking for assistance with your IT Security needs? Click here for our newest publication and learn how you can simplify with services.

Simplifying SIEM

Since its inception, SIEM has been something for the well-to-do IT Department; the one that can spend tens or hundreds of thousands of dollars on a capital acquisition of the technology and then afford the luxury of qualified staff to use it in the intended manner. In some cases, they hire experts from the SIEM vendor to “man the barricades.”

In the real world of a typical IT Department in the Medium Enterprise or Small Business, this is a ride in Fantasy Land. Budgets simply do not allow capital expenditures of multiple six or even five figures; expert staff, to the extent they exist, are hardly idling and available to work the SIEM console; and hiring outside experts – the less said, the better. And so, SIEM has remained the province of the well-heeled.

In the meantime, the security and compliance pressures continue to mount. PCI-DSS compliance in particular, but also HIPAA-HITECH, continues to push down to smaller organizations.

Question: How do we square this circle where budgets are tight and IT Security expertise is rare?
Answer: By delivering value as a service, that is, as a MSP/MSSP.

At EventTracker, we’ve obsessed over this problem for a dozen years, powering and then simplifying the implementation, and with v7.5 that trend continues. Let me count the ways:

  • EventTracker is implemented as a virtual appliance. This means it can be right-sized for the environment. Scale up to very large networks of tens of thousands of nodes; scale down to a site with only a handful of sources.
  • The Collection Point/Master model allows you to “divide and conquer.” Locate a Collection Point per a geographic or logical group; roll up to a single pane of glass at a central Collection Master. Enjoy local control with global oversight.
  • Consolidate all incident data, prioritized by risk, at both the Collection Point and Master. An MSP SOC operator can now watch for incidents at a Collection Master, being fed from any number of underlying Collection Points. After-hours coverage at a single pane of glass? No problem.
  • Archive data at either Collection Point or Collection Master or both with different retention periods. Don’t want data replication? Not interested in operating a SAS-70 or FISMA certified datacenter? No problem. Retain data at customer premises, subject to their access control.
  • Aggregated licensing – enjoy the best possible price point by rolling up all log sources or volume.
  • Flexible licensing models – buy by the node with unlimited log volume, or by log volume with unlimited nodes.

For MSPs and MSSPs looking to drive greater revenue or customer loyalty, EventTracker 7.5 helps with both by satisfying the customer’s compliance and security needs. For the medium enterprise or small business looking to meet these needs without breaking the bank – now there is a way.

SIEM Simplified, it’s what we do.

Three common SMB mistakes

Small and medium business (SMB) owners/managers understand that IT plays a vital role within their companies. However, many SMBs are still making simple mistakes with the management of their IT systems, which are costing them money.

1) Open Source Solutions: In a bid to reduce overall costs, many SMBs look to open source applications and platforms. While such solutions appear attractive because of low or no license costs, the effort required for installation, configuration, operation, maintenance and ongoing upgrades should be factored in. The total cost of ownership of such systems is generally ignored or poorly understood. In many cases, they may require a more sophisticated (and therefore more expensive and harder to replace) user to drive them.

2) Migrating to the Cloud: Cloud based services promise great savings, which is always music to an SMB manager/owner’s ears, and the entire SaaS market has exploded in recent years. However, the cost savings are not always obvious or tangible. The Amazon EC2 service is often touted as an example of cost savings, but it very much depends on how you use the resource. See this blog for an example. More appropriate might be a hybrid system that keeps some of the data and services in-house, with others moving to the cloud.

3) The Knowledge Gap: Simply buying technology, be it servers or software, does not provide any tangible benefit. You have to integrate it into the day-to-day business operation. This takes expertise both with the technology and with your particular business.

In the SIEM space, these buying objections have often stymied SMBs from adopting the technology, despite its benefits and repeated advice from experts. To overcome these, we offer a managed SIEM offering called SIEM Simplified.

The Holy Grail of SIEM

Merriam-Webster defines “holy grail” as “a goal that is sought after for its great significance.” Mike Rothman of Securosis has described a twofold answer to what the “holy grail” is for a security practitioner, i.e.,

  1. A single alert specifying exactly what is broken, with relevant details and the ability to learn the extent of the damage
  2. Make the auditor go away, as quickly and painlessly as possible

How do you achieve the first goal? Here are the steps:

  • Collect log information from every asset on the enterprise network,
  • Filter it through vendor provided intelligence on its significance
  • Filter it through local configuration to determine its significance
  • Gather and package related, relevant information – the so-called 5 Ws (Who, What, Where, When and Why)
  • Alert the appropriate person in the notification method they prefer (email, dashboard, ticket etc.)

This is a fundamental goal for SIEM systems like EventTracker, and over the ten plus years working on this problem, we’ve got a huge collection of intelligence to draw on to help configure and tune the system to your needs. Even so, there is an undefinable element of luck in having it all work out for you, just when you need it. Murphy’s Law says that luck is not on your side. So now what?

One answer we have found is Anomalous Behavior detection. Learn “normal” behavior during a baseline period and draw the attention of a knowledgeable user to out-of-the-ordinary or new items. When you join these two systems, you get coverage for both known-knowns as well as unknown-unknowns.
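The core idea is simple enough to sketch. A toy illustration (not the EventTracker implementation) using new user/workstation combinations as the “new item” of interest:

# Toy sketch: learn "normal" (user, workstation) pairs during a baseline
# window, then flag combinations never seen before.
baseline = {("alice", "WKS-01"), ("bob", "WKS-02"), ("alice", "WKS-03")}

def is_anomalous(event):
    """True if this (user, workstation) pair was never seen in the baseline."""
    return event not in baseline

for event in [("alice", "WKS-01"), ("bob", "FILESRV-01")]:
    if is_anomalous(event):
        print("out of the ordinary:", event)   # bring to a knowledgeable user's attention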

The second goal involves more discipline and less black magic. If you are familiar with the audit process, then you may know that it’s all about preparation and presentation. The Duke of Wellington famously remarked that “the Battle of Waterloo was won on the playing fields of Eton,” another testament to winning through preparation. Here again, to enable diligence, EventTracker Enterprise offers several features including report/alert annotation, a summary report on reports, incident acknowledgement and an electronic logbook to record mitigation and incident handling actions.

Of course, all this requires staff with the time and training to use the features. Lack time and resources you say? We’ve got you covered with SIEM Simplified, a co-sourcing option where we do the heavy lifting leaving you to sip from the Cup of Jamshid.

Have neither the time, nor the tools, nor budget? Then the story might unfold like this.

SIEM vs Search Engine

The pervasiveness of Google in the tech world has placed the search function in a central locus of our daily routine. Indeed many of the most popular apps we use every day are specialized forms of search. For example:

  • E-mail is a search for incoming messages: search by sender, by topic, by key phrase, by thread
  • Voice calling or texting is preceded by a search for a contact
  • Yelp is really searching for a restaurant
  • The browser address bar is in reality a search box

And the list goes on.

In the SIEM space, the rise of Splunk, especially when coupled with the promise of “big data”, has led to speculation that SIEM is going to be eclipsed by the search function. Let’s examine this a little more closely, especially from the viewpoint of an expertise-constrained Small/Medium Enterprise (SME) where Data Scientists are not sitting idle in abundance.

Big data and accompanying technologies are, at present, more developer level elements that require assembly with application code or intricate setup and configuration before they can be used by typical system administrators much less mid-level managers. To leverage the big-data value proposition of such platforms, the core skill required by such developers is thinking about distributed computing where the processing is performed in batches across multiple nodes. This is not a common skill set in the SME.

Assuming the assembly problem is somehow overcome, can you rejoice in your big-data-set and reduce the problems that SIEM solves to search queries? Well maybe, if you are a Data Scientist and know how to use advanced analytics. However, SIEM functions include things like detecting cyber-attacks, insider threats and operational conditions such as app errors – all pesky real-time requirements. Not quite so effective as a search on archived and indexed data of yesterday. So now the Data Scientist must also have infosec skills and understand the IT infrastructure.

You can probably appreciate that decent infosec skills such as network security, host security, data protection, security event interpretation, and attack vectors do not abound in the SME. There is no reason to think that the shortage of cyber-security professionals and the ultra-shortage of data scientists and experienced Big Data programmers will disappear anytime soon.

So how can an SME leverage the promise of big data now? Well, frankly, EventTracker has been grappling with the challenges of massive, diverse, fast data for many years before it became popularly known as Big Data. In testing on COTS hardware, our recent 7.4 release showed up to a 450% increase in receiver/archiver performance over the previous 7.3 release on the same hardware. This is not an accident. We have been thinking and working on this problem continuously for the last 10 years. It’s what we do. This version also has advanced data-science methods built right into the EventVault Explorer, our data-mart engine, so that security analysts don’t need to be data scientists. Our behavior module incorporates data visualization capabilities to help users recognize hidden patterns and relations in the security data – the so-called “Jeopardy” problem, wherein the answers are present in the data-set and the challenge is in asking the right questions.

Last but not least, we recognize that notwithstanding all the chest-thumping above, many (most?) SMEs are so resource constrained that a disciplined SOC-style approach to log review and incident handling is out of reach. Thus we offer SIEM Simplified, a service where we do the heavy lifting, leaving the remediation to you.

Search engines are no doubt a remarkably useful innovation that has transformed our approach to many problems. However, SIEM satisfies specific needs in today’s threat, compliance and operations environment that cannot be satisfied effectively or efficiently with a raw big-data platform.

Resistance is futile

The Borg are a fictional alien race that serves as a terrifying antagonist in the Star Trek franchise. The phrase “Resistance is futile” is best delivered by Patrick Stewart in the episode The Best of Both Worlds.

When IBM demonstrated the power of Watson in 2011 by defeating two of the best humans to ever play Jeopardy, Ken Jennings, who had won 74 games in a row, admitted in defeat, “I, for one, welcome our new computer overlords.”

As the Edward Snowden revelations about the collection of metadata for phone calls became known, the first thinking was that it would be technically impossible to store data for every single phone call – the cost would be prohibitive. Then Brewster Kahle, one of the engineers behind the Internet Archive, made this spreadsheet to calculate the storage cost to record and store one year’s worth of all U.S. calls. He works the cost out to about $30M, which is non-trivial but not out of reach by any means for a large US Gov’t agency.

The next thought was – ok so maybe it’s technically feasible to record every phone call, but how could anyone possibly listen to every call? Well obviously this is not possible, but can search terms be applied to locate “interesting” calls? Again, we didn’t think so, until another N.S.A. document, cited by The Guardian, showed a “global heat map” that appeared to represent how much data the N.S.A. sweeps up around the world. If it were possible to efficiently mine metadata, data about who is calling or e-mailing, then the pressure for wiretapping and eavesdropping on communications becomes secondary.

This study in Nature shows that just four data points about the location and time of a mobile phone call make it possible to identify the caller 95 percent of the time.

IBM estimates that thanks to smartphones, tablets, social media sites, e-mail and other forms of digital communications, the world creates 2.5 quintillion bytes of new data daily. Searching through this archive of information is humanly impossible, but it is precisely what a Watson-like artificial intelligence is designed to do. Isn’t that exactly what was demonstrated in 2011 to win Jeopardy?

Savvy IT Is The Way To Go

There is a lot of discussion in the context of cloud as well as traditional computing regarding Smart IT, Smarter Planets, Smart and Smarter Computing, which makes a lot of sense in light of the explosion in the amount of collected data and the massive efforts aimed at using analytics to yield insight, information and intelligence about — well, just about everything. We have no problem with smart activities.

We also hear a lot of speculation about the impact, good and bad, that advances in technology, emerging business models, and changing revenue, cost and delivery processes will exert on IT, and specifically enterprise IT. Add to these the predictions of the end of ‘IT as we know it’, with prognosticators describing a looming radical alteration in enterprise computing as in-house IT culminates in applications, data and computing moving into vast, amorphous clouds of distributed, but still centralized infrastructure and data centers. Who is kidding whom?

Smart computing isn’t going to go away, and it makes a point. However, our contention is that it takes more than just Smart IT to succeed; it takes Savvy IT.

Savvy IT complements and extends smarts – with the ability to leverage all of what you know and what you can do to be successful. Savvy can be used as a noun, an adjective and a verb. The definition of the adjective describes Savvy as “having or showing a clever awareness in practical matters: astute, cagey, canny, knowing, shrewd, slick, smart, wise”. More colloquially, it means acting and being ‘street smart’. Watching and listening across the industry, we see a market evolving to favor moving from Smart IT to Savvy IT.

Savvy IT is concerned with optimizing the use of IT infrastructure, assets and resources to achieve enterprise goals. Savvy IT acts proactively to drive line of business staff to use emerging technologies by helping them to understand how technology can help develop and implement new business models and revenue streams. It involves interactive, coordinated and cooperative efforts targeting external as well as internal customers.

It’s about a ‘street smart’ application of technology to solve problems and drive organizational success. It is based on the insight of personal experiences that includes an awareness and knowledge of the business, the industry and personal efforts to exploit data, capabilities and technology. Finally, it’s about a CIO who pursues the goal of making sure IT’s services are at least as good as, if not better than, the best services available from SaaS or service providers.

An explicit example of Savvy IT appears in the evolution toward real solutions to comprehensive business and operational problems that are driven and developed from the perspective of the customer or client end-users. Savvy IT works with the business to proactively identify, develop and implement technology-dependent innovations that act as game changers for the company.

One example is the radical alteration in the sped-up cycle of development, testing and distribution of business applications as they become app-based services. Another is when IT staff leverage transaction merchandising services across multiple technologies – linking transaction services in mobile technologies with traditional systems of record to provide a seamless purchase experience whether ordering a purchase on-line, from a phone or flyer, with the option for at-home delivery or pick-up at a ‘brick and mortar’ store – an innovation that gives the global merchandiser Target a significant competitive advantage.

Savvy IT requires both innovation and invention in the application of technology combined with experience that knows where and how to focus efforts that will either solve problems or reduce their impact in favor of continuing services. Smart operations provide a foundation on which to build; savvy tempers fashion with experience that ‘delivers’ despite the obstacles and challenges that inevitably arise.

In implementation and practice, Savvy IT applies whether the model for IT services is built exclusively around an internal data center, an external cloud or service provider, or a combination of both. Implementing Savvy IT is an organizational challenge that starts with IT but extends to include the whole enterprise. Savvy IT is street smart. It’s about protecting the business from risks, existing and emerging, that persistently evolve. We’ll explore more of the implications, impacts, processes and issues over the coming months.

Feel free to send any comments, questions or discussion about Savvy IT, pro or con, as well as other topics of interest to Rich Ptak: rlptak @ptaknoel [dot] com.

The Dark Side of Big Data

A study published in Nature looked at the phone records of some 1.5 million mobile phone users in an undisclosed small European country, and found it took only four different data points on the time and location of a call to identify 95% of the people. In the dataset, the location of an individual was specified hourly with a spatial resolution given by the carrier’s antennas.

Mobility data is among the most sensitive data currently being collected. It contains the approximate whereabouts of individuals and can be used to reconstruct individuals’ movements across space and time. A simply anonymized dataset does not contain name, home address, phone number or other obvious identifier. For example, the Netflix Challenge provided a training dataset of 100,480,507 movie ratings each of the form <user, movie, date-of-grade, grade> where the user was an integer ID.

Yet, if an individual’s patterns are unique enough, outside information can be used to link the data back to that individual. For instance, in one study, a medical database was successfully combined with a voter list to extract the health record of the governor of Massachusetts. In the case of the Netflix data set, despite the attempt to protect customer privacy, it was shown possible to identify individual users by matching the data set with film ratings on the Internet Movie Database. Even coarse data sets provide little anonymity.
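To make the mechanism concrete, here is a toy sketch of the uniqueness test: given an hourly trace of (hour, antenna) points per user, how often does a handful of known points match exactly one person? The data here is synthetic and purely illustrative, not the study’s methodology:

import random

# Toy sketch: what fraction of users is pinned down by K points drawn
# from their own (hour, antenna) trace? Synthetic data only.
random.seed(1)
HOURS, ANTENNAS, USERS, K = 24 * 7, 50, 1000, 4
traces = {u: {(h, random.randrange(ANTENNAS)) for h in range(HOURS)}
          for u in range(USERS)}

def uniquely_identified(user):
    points = random.sample(sorted(traces[user]), K)
    matches = [u for u, trace in traces.items() if all(p in trace for p in points)]
    return len(matches) == 1

print(sum(uniquely_identified(u) for u in range(USERS)) / USERS)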

The issue is making sure the debate over big data and privacy keeps up with the science. Yves-Alexandre de Montjoye, one of the authors of the Nature article, says that the ability to cross-link data, such as matching the identity of someone reading a news article to posts that person makes on Twitter, fundamentally changes the idea of privacy and anonymity.

Where do you, and by extension your political representative, stand on this 21st Century issue?

The Intelligence Industrial Complex

If you are old enough to remember the 1988 election in the USA for President, then the name Gary Hart may sound familiar. He was the clear frontrunner after his second Senate term from Colorado was over. He was caught in an extra-marital affair and dropped out of the race. He has since earned a doctorate in politics from Oxford and accepted an endowed professorship at the University of Colorado at Denver.

In this analysis, he quotes President Dwight Eisenhower, “…we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist.”

His point is that the US now has an intelligence-industrial complex composed of close to a dozen and a half federal intelligence agencies and services, many of which are duplicative, and in the last decade or two the growth of a private sector intelligence world. It is dangerous to have a technology-empowered government capable of amassing private data; it is even more dangerous to privatize this Big Brother world.

As has been extensively reported recently, the Foreign Intelligence Surveillance Act (FISA) courts are required to issue warrants, as the Fourth Amendment (against unreasonable search and seizure) requires, upon a showing that the national security is endangered. This was instituted in the early 1970s following the findings of serious unconstitutional abuse of power. He asks, “Is the Surveillance State — the intelligence-industrial complex — out of the control of the elected officials responsible for holding it accountable to American citizens protected by the U.S. Constitution?”

We should not have to rely on whistle-blowers to protect our rights.

In a recent interview with Charlie Rose of PBS, President Obama said, “My concern has always been not that we shouldn’t do intelligence gathering to prevent terrorism, but rather: Are we setting up a system of checks and balances?” Despite this, he avoided addressing the fact that no request to a FISA court has ever been rejected, and that companies that provide data on their customers are under a gag order that even prevents them from disclosing the requests.

Is the Intelligence-Industrial complex calling the shots? Does the President know a lot more than he can reveal? Clearly he is unwilling to even consider changing his predecessor’s policy.

It would seem that Senator Hart has a valid point. If so, it’s a lot more consequential than Monkey Business.

Introducing EventTracker Log Manager

The IT team of a Small Business has it the worst. Just 1-2 administrators to keep the entire operation running, which includes servers, workstations, patching, anti-virus, firewalls, applications, upgrades, password resets…the list goes on. It would be great to have 25 hours in a day and 4 hands per admin just to keep up. Adding security or compliance demands to the list just makes it that much harder.

The path to relief? Automation, in one word. Something that you can “fit-and-forget”.

You need a solution which gathers all security information from around the network (platforms, network devices, apps, etc.) and knows what to do with it. One that retains it all efficiently and securely for later analysis if needed, displays it in a dashboard for you to examine at your convenience, alerts you via e-mail/SMS etc. if absolutely necessary, indexes it all for fast search, and finds new or out-of-ordinary patterns by itself.

And you need it all in a software-only package that is quickly installed on a workstation or server. That’s what I’m talking about. That’s EventTracker Log Manager.

Designed for the 1-2 sys admin team.
Designed to be easy to use, quick to install and deploy.
Based on the same award-winning technology that SC Magazine awarded a perfect 5-star rating to in 2013.

How do you spell relief? E-v-e-n-t-T-r-a-c-k-e-r  L-o-g  M-a-n-a-g-e-r.
Try it today.

Following a User’s Logon Tracks throughout the Windows Domain

What security events get logged when a user logs on to their workstation with a domain account and proceeds to run local applications and access resources on servers in the domain?

When a user logs on at a workstation with their domain account, the workstation contacts a domain controller via Kerberos and requests a ticket granting ticket (TGT).  If the user fails authentication, the domain controller logs event ID 4771 or an audit failure instance of event ID 4768.  The result code in either event specifies the reason why authentication failed.  Bad passwords and time synchronization problems trigger 4771, and other authentication failures such as account expiration trigger a 4768 failure.  These result codes are based on the Kerberos RFC 1510, and in some cases one Kerberos failure reason corresponds to several possible Windows logon failure reasons.  In these cases the only way to know the exact reason for the failure is to check the logon event failure reason on the computer the user is trying to log on from.

If the user’s credentials check out, the domain controller creates a TGT, sends that ticket back to the workstation, and logs event ID 4768.  Event ID 4768 shows the user who authenticated and the IP address of the client (in this case, the workstation). However, there is no logon session identifier because the domain controller handles authentication – not logon sessions.  Authentication events are just events in time; sessions have a beginning and an end.  In Windows, each member computer (workstation and servers) handles its own logon sessions.

When the domain controller fails the authentication request, the local workstation will log event ID 4625 in its local security log, noting the user’s domain, logon name and the failure reason.  There is a different failure reason for every reason a Windows logon can fail, in contrast with the more general result codes generated by the Kerberos domain controller events.

If authentication succeeds and the domain controller sends back a TGT, the workstation creates a logon session and logs event ID 4624 to the local security log.  This event identifies the user who just logged on, the logon type and the logon ID.  The logon type specifies whether the logon session is interactive, remote desktop, network-based (i.e. an incoming connection to a shared folder), a batch job (e.g. a Scheduled Task) or a service logon triggered by a service starting up.  The logon ID is a hexadecimal number identifying that particular logon session. All subsequent events associated with activity during that logon session will bear the same logon ID, making it relatively easy to correlate all of a user’s activities while he/she is logged on.  When the user finally logs off, Windows will record a 4647 followed by a 4634.  Event ID 4647 indicates the user initiated the logoff sequence, which may get canceled.  Event ID 4634 occurs when the logon session is fully terminated.  If the system is shut down, all logon sessions get terminated, and since the user didn’t initiate the logoff, event ID 4647 is not logged.
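Since the Logon ID ties a session’s events together, correlating a user’s activity is mostly a matter of grouping by that field. A minimal sketch, assuming the events have already been parsed into dictionaries with EventID and LogonID fields:

from collections import defaultdict

# Sketch: group session activity by Logon ID (4624 logon, 4647 user
# initiated logoff, 4634 session terminated; other events carry the same ID).
events = [
    {"EventID": 4624, "LogonID": "0x475b7", "User": "wsmith", "LogonType": 2},
    {"EventID": 4663, "LogonID": "0x475b7", "Object": r"C:\docs\file.txt"},
    {"EventID": 4647, "LogonID": "0x475b7"},
    {"EventID": 4634, "LogonID": "0x475b7"},
]

sessions = defaultdict(list)
for e in events:
    sessions[e["LogonID"]].append(e)

for logon_id, activity in sessions.items():
    print(logon_id, "->", [e["EventID"] for e in activity])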

While a user is logged on, they typically access one or more servers on the network.  Their workstation automatically re-uses the domain credentials they entered at logon to connect to other servers.  When a server receives a logon request (for example, when a user tries to access a shared folder on a file server), the user’s workstation requests a service ticket from the domain controller, which authenticates the user to that server.  The domain controller logs event ID 4769, which is useful because it indicates that the user accessed a given server; the computer name of the server accessed is found in the Service Name field of 4769.  When the workstation presents the service ticket to the file server, the server creates a logon session and records event ID 4624 just like the workstation did earlier, but this time the logon type is 3 (network logon).  However, as soon as the user closes all files opened during this network logon session, the server automatically ends the logon session and records 4634.  Therefore, network logon sessions typically last for less than a second while a file is saved, unless the user’s application keeps a file open on the server for extended periods of time.  This results in the constant stream of logon/logoff events that you typically observe on file servers and means that logon/logoff events on servers with logon type 3 are not very useful.  It is probably better to focus on access events to sensitive files using object access auditing.

Additional logon/logoff events on servers and authentication events associated with other types of user activity include:

  • Remote desktop connections
  • Service startups
  • Scheduled tasks
  • Application logons – especially IIS based applications like SharePoint, Outlook Web Access and ActiveSync mobile device clients

These events will generate logon/logoff events on the application servers involved and Kerberos events on domain controllers.

Also occurring might be NTLM authentication events on domain controllers, from clients and applications that use NTLM instead of Kerberos.  NTLM events fall under the Credential Validation subcategory of the Account Logon audit category in Windows.  There is only one event ID logged for both successful and failed NTLM authentication events.

A user leaves tracks on each system he or she accesses, and the combined security logs of domain controllers alone provide a complete record of every time a domain account is used and of which workstations and servers were accessed.  Understanding Kerberos and NTLM, and how Windows separates the concept of logon sessions from authentication, can help a sys admin interpret these events and grasp why different events are logged on each system.

See more examples of the events described in this article at the Security Log Encyclopedia.

Secure your electronic trash

At the typical office, computer equipment becomes obsolete or slow and periodically requires replacement or refresh. This includes workstations, servers, copy machines, printers etc. Users who get the upgrades are inevitably pleased; they carefully move their data to the new equipment and happily release the older gear. What happens after this? Does someone cart it off to the local recycling center? Do you call for a dumpster? This is likely the case at the small or medium enterprise, whereas large enterprises may hire an electronics recycler.

This blog by Kyle Marks appeared in the Harvard Business Review and reminds us that sensitive data can very well be leaked via decommissioned electronics also.

A SIEM solution like EventTracker is effective when leakage occurs from connected equipment or even mobile laptops or those that connect infrequently. However, disconnected and decommissioned equipment is invisible to a SIEM solution.

If you are subject to regulatory compliance, leakage is leakage. Data security laws mandate that organizations implement “adequate safeguards” to ensure privacy protection of individuals.  That applies equally when the leakage comes from your electronic trash. You are still bound to safeguard the data.

Marks points out that detailed tracking data, however, reveals a troubling fact: four out of five corporate IT asset disposal projects had at least one missing asset. More disturbing is the fact that 15% of these “untracked” assets are devices potentially bearing data such as laptops, computers, and servers.

Treating IT asset disposal as a “reverse procurement” process will deter insider theft. This is something that EventTracker cannot help with, but it is equally important in addressing compliance and security obligations.

You often see a gumshoe or Private Investigator in the movies conducting Trash Archaeology while looking for clues. Now you know why.

What did Ben Franklin really mean?

In the aftermath of the disclosure of the NSA program called PRISM by Edward Snowden to a reporter at The Guardian, commentators have gone into overdrive and the most iconic quote is one attributed to Benjamin Franklin “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.

It was amazing that something said over 250 years ago would be so apropos. Conservatives favor an originalist interpretation of documents such as the US Constitution (see Federalist Society) and so it seemed possible that very similar concerns existed at that time.

Trying to get to the bottom of this quote, Ben Wittes of Brookings wrote that it does not mean what it seems to say.

The words appear originally in a 1755 letter that Franklin is presumed to have written on behalf of the Pennsylvania Assembly to the colonial governor during the French and Indian War. The Assembly wished to tax the lands of the Penn family, which ruled Pennsylvania from afar, to raise money for defense against French and Indian attacks. The Penn family was unwilling to acknowledge the power of the Assembly to tax them.  The Governor, being an appointee of the Penn family, kept vetoing the Assembly’s effort. The Penn family later offered cash to fund defense of the frontier – as long as the Assembly would acknowledge that it lacked the power to tax the family’s lands.

Franklin was thus complaining of the choice facing the legislature between being able to make funds available for frontier defense versus maintaining its right of self-governance. He was criticizing the Governor for suggesting it should be willing to give up the latter to ensure the former.

The statement is typical of Franklin’s style and rhetoric, which also includes “Sell not virtue to purchase wealth, nor Liberty to purchase power.”  While the circumstances were quite different, it seems the general principle he was stating is indeed relevant to the Snowden case.

What is happening to log files? The Internet of Things, Big Data, Analytics, Security, Visualization – OH MY!

Over the past year, enterprise IT has had more than a few things emerge to frustrate and challenge it. High on the list has to be limited budget growth in the face of increasing demand for and expectations of new services. In addition, there has been an explosion in the list of technologies and concerns that appear to be particularly intended to complicate the task of maintaining smooth running operations and service delivery.

Whether it is security, Big Data, analytics, Cloud, BYOD, data center consolidation, or infrastructure refresh – IT infrastructure and operations are changing, expanding, becoming smarter and, definitely increasingly more chatty. The amount of data generated from operating and maintaining the infrastructure to run workloads and deliver services continues to increase at an accelerating pace. The successful delivery of IT-dependent services requires data to be properly correlated, analyzed and the results presented in a clear, concise and rapidly consumable manner.

The Internet of Things refers to the proliferation of smart devices that connect to, communicate over and exchange data across the internet. It is rapidly becoming the Internet of Everything [1] as the number and variety of networked devices and services continues to explode. In fact, it is growing at a pace that challenges the capabilities and capacities of existing infrastructure to create, support and maintain effective, reliable services. The lagging pace of infrastructure evolution both complicates and drives innovation in the how, what and format of data collection, normalization, analysis and presentation.

Monitoring, managing and controlling the devices and services involve the creation, collection and consumption of data. Big Data barely describes the volume of data and information that must be consumed and analyzed to provide information and knowledge for management and control, much of which ends up in log files.

Whether residing in log files or consumed as data services, it must be collected, filtered, integrated and analyzed more quickly to yield easily consumable, actionable information to drive corrective or ameliorative action. Data analysis and modeling, even sophisticated analysis, has been around and used for centuries – but it is only more recently that a growing community of non-experts has had the ability to access and use very sophisticated data manipulation and processing techniques.

A continuing stream of stories calls attention to the risk of exposure and malicious access to the increasing amount of data, both personal and business, private and public, that is collected, exchanged and accessible on today’s networks. Such stories have little apparent effect on the oftentimes reckless willingness of consumers and customers to neglect efforts to protect the security and assure the integrity of the data and information they all too casually and willingly provide, exchange and store.

Today’s market and political environments are unforgiving and woefully unsecured. It isn’t only malicious attacks that result in access to data and information that should be both private and well-protected. Only the extremely foolish or incurably reckless will fail to make a proactive investment necessary to secure and protect the integrity and privacy of business, enterprise, consumer and customer data. Recent events and actions are driving IT and business communities to move toward a greater focus and sensitivity to security issues.

The demand is escalating for improvement in the ability to communicate complex and critical information quickly and accurately. Increasingly sophisticated consumers must absorb and understand the significance and criticality of information in order to respond promptly and appropriately. Advanced analytics smooth the analysis of data and information from multiple sources to yield detailed information and insight. There are applications that can combine data from multiple sources [2] into a single report and even send the data itself to a smartphone or tablet. Visualization is recognized, and used with increasing frequency, as the fastest, most effective path to understanding what is happening and what must be done.

So, what does this mean for us? The widespread availability of data from multiple, disparate sources in the enterprise greatly expands what is available for analysis. It enhances the role, impact and visibility of analysts and IT as they directly contribute to enterprise success. Benefiting from this opportunity requires IT staff to proactively expand the scope of their analysis as they work more closely with partners in enterprise operations. Perceptive providers of analysis tools and solutions are working hard to include extended capabilities and functions that make this task easier, more effective and more powerful.

Finally, there remains the need for a user interface specifically designed to easily manipulate multiple documents and data sets simultaneously by using a touch screen without a keyboard. The fast acceptance and increasing popularity of tablets, phablets and smartphones have alerted vendors to the inadequacy of existing interfaces. The forces described above along with competitive market pressures are driving interest and activity to deliver a new generation of user interfaces specifically designed for creating working documents for these devices. Such an interface will allow users to advance far beyond today’s content-only consumption patterns. Developing the new interface means rethinking office productivity applications completely – something nobody has really done since Xerox PARC designed its Star Office system. Now that is something to look forward to.


[1] An apparently endlessly growing list of internet-connected ‘things’ that started with computers and has been adding networked devices ever since, now including monitoring devices (medical, automobile, equipment, buildings, home, etc.), financial transaction services, security, and communication formats that include voice, analog, digital, video and more.

[2] For example – DB2, Hive/Apache Hadoop, Teradata, MySQL, Amazon Redshift, PostgreSQL, Microsoft SQL and SAP.

What, me worry?

Alfred E. Neuman is the fictitious mascot and cover boy of Mad Magazine. Al Feldstein, who took over as editor in 1956, said, “I want him to have this devil-may-care attitude, someone who can maintain a sense of humor while the world is collapsing around him”.

The #1 reason management doesn’t get security is the sense that “It can’t happen to me” or “What, me worry?” The general argument goes – we are not involved in financial services or national defense. Why would anyone care about what I have? And in any case, even if they hack me, what would they get? It’s not even worth the bother. Larry Ponemon, writing in the Harvard Business Review, captures this sentiment.

Attackers are increasingly targeting small companies, planting malware that not only steals customer data and contact lists but also makes its way into the computer systems of other companies, such as vendors. Hackers might also be more interested in your employees than you’d think. Are your workers relatively affluent? If so, chances are the hackers are way ahead of you and are either looking for a way into your company, or are already inside, stealing employee data and passwords which (as they well know) people tend to reuse for all their online accounts.

Ponemon says “It’s literally true that no company is immune anymore. In a study we conducted in 2006, approximately 5% of all endpoints, such as desktops and laptops, were infected by previously undetected malware at any given time. In 2009—2010, the proportion was up to 35%. In a new study, it looks as though the figure is going to be close to 54%, and the array of infected devices is wider too, ranging from laptops to phones.”

In the wake of the recent revelations by Edward Snowden, who blew the whistle on the NSA program called “Prism”, many prominent voices have said they are ok with the program and have nothing to hide. This is another aspect of “What, me worry?” Benjamin Franklin had it right many years ago: “Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.”

Learning from LeBron

Thinking about implementing analytics? Before you do that, ask yourself “What answers do I want from the data?”

After the Miami Heat lost the 2011 NBA Finals to the Dallas Mavericks, many armchair MVPs were only too happy to explain that LeBron was not a clutch player and didn’t have what it takes to win championships in this league. Both LeBron and Coach Erik Spoelstra, however, were determined to convert that loss into a teaching moment.

Analytics was indicated. But what was the question?  According to Spoelstra, “It took the ultimate failure in the Finals to view LeBron and our offense with a different lens. He was the most versatile player in the league. We had to figure out a way to use him in the most versatile of ways — in unconventional ways.” In the last game of the 2011 Finals, James was almost listlessly loitering beyond the arc, hesitating, shying away, and failing to take advantage of his stature. His last shot of those Finals was symbolic: an ill-fated 25-foot jump shot from the outskirts of the right wing — his favorite 3-point shot location that season.

LeBron decided the correct answer was to work on the post-up game during the off season. He spent a week learning from the great Hakeem Olajuwon. He brought his own videographer to record the sessions for later review. LeBron arrived early for each session and was stretched and ready to go every time. He took the lessons to the gym for the rest of the off season. It worked. James emerged from that summer transformed. “When he returned after the lockout, he was a totally different player,” Spoelstra says. “It was as if he downloaded a program with all of Olajuwon’s and Ewing’s post-up moves. I don’t know if I’ve seen a player improve that much in a specific area in one offseason. His improvement in that area alone transformed our offense to a championship level in 2012.”

The true test of analytics isn’t just how good the analytics are, but how committed you are to acting on the data. At the 2012 NBA Finals, LeBron won the MVP title and Miami, the championship.

The lesson to learn here is to know what answers you are seeking from the data, and to commit to going where the data takes you.

Using Dynamic Audit Policy to Detect Unauthorized File Access

One thing I always wished you could do in Windows auditing was mandate that access to an object be audited if the user was NOT a member of a specified group.  Why?  Well sometimes you have data that you know a given group of people will be accessing and for that activity you have no need of an audit trail.

Let’s just say you know that members of the Engineering group will be accessing your Transmogrifier project folder and you do NOT need an audit trail for when they do.  But this is very sensitive data and you DO need to know if anyone else looks at Transmogrifier.

In the old days there was no way to configure Windows audit policy with that kind of negative Boolean or exclusive criteria.  With Windows 7/2008 and earlier you could only enable auditing based on whether someone was in a group, not the opposite.

Windows Server 2012 gives you a new way to control audit policy on files.  You can create dynamic policies based on attributes of the file and the user.  (By the way, you get the same new dynamic capabilities for permissions, too.)

Here’s a screen shot of audit policy for a file in Windows 7.

[Screenshot: file audit policy in Windows 7]

Now compare that to Windows Server 2012.

[Screenshot: file audit policy in Windows Server 2012, showing the “Add a condition” option]

The same audit policy is defined but look at the “Add a condition” section.  This allows you to add further criteria that must be met before the audit policy takes effect.  Each time you click “Add a condition” Windows adds another criteria row where you can add Boolean expressions related to the User, the Resource (file) being accessed or the Device (computer) where the file is accessed.  In the screen shot below I’ve added a policy which accomplishes what we described at the beginning of the article.

[Screenshot: audit entry for Everyone, limited to users who are not members of the Engineering group]

So we start out by saying that Everyone is audited when they successfully read data in this file.  But then we limit that to users who do not belong to the Engineering group.  Pretty cool, but we are only scratching the surface.  You can add more conditions and you can join them with the Boolean operators OR and AND.  You can even group expressions the way you would with parentheses in programming code.  The example below shows all of these features: the audit policy is effective if the user is either a member of a certain group or their department is Accounting, and the file has been classified as relevant to GLBA or HIPAA compliance.

[Screenshot: grouped conditions combining group membership, department and resource classification]
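To make the Boolean logic concrete, here is a minimal Python sketch of the kind of expression being evaluated before the audit entry takes effect. The group name "Finance Admins", the department value and the classification tags are hypothetical stand-ins for the claims and Resource Properties shown in the screen shots – this illustrates the logic only and is not a Windows API.

# Sketch of the condition logic behind the two dynamic audit entries above.
# The group, department and classification values are hypothetical examples.

def simple_rule(user_groups):
    # First example: audit everyone who is NOT a member of Engineering
    return "Engineering" not in user_groups

def grouped_rule(user_groups, user_department, file_classifications):
    # Later example: (member of a given group OR department is Accounting)
    # AND the file is classified as GLBA- or HIPAA-relevant
    user_matches = "Finance Admins" in user_groups or user_department == "Accounting"
    file_matches = bool(file_classifications & {"GLBA", "HIPAA"})
    return user_matches and file_matches

print(simple_rule({"Sales"}))                              # True - access is audited
print(grouped_rule({"Sales"}, "Accounting", {"HIPAA"}))    # True - access is audited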

You’ll also notice that you can base auditing and access decisions on much more than the user’s identity and group membership.  In the example above we are also referencing the department specified on the Organization tab of the user’s account in Active Directory.  But with dynamic access control we can choose any other attribute on AD user accounts by going to Dynamic Access Control in the Active Directory Administrative Center and selecting Claim Types as shown here.

[Screenshot: Claim Types under Dynamic Access Control in the Active Directory Administrative Center]

You can create claim types for just about any attribute of computer and user objects.  After creating a new claim type for a given attribute, it’s available in access control lists and audit policies of files and folders throughout the domain.

But dynamic access control and audit policy doesn’t stop with sophisticated Boolean logic and leveraging user and computer attributes from AD.  You can now classify resources (folders and files) according to any number of properties you’d like.  Below is a list of the default Resource Properties that come out of the box.

[Screenshot: default Resource Properties]

Before you can begin using a given Resource Property in a dynamic access control list or audit policy you need to enable it and then add it to a Resource Property List which is shown here.

[Screenshot: Resource Property List]

After that you are almost ready to define dynamic permissions and audit policies.  The last setup step is to identify the file servers where you want to classify files and folders with Resource Properties.  On those file servers you need to add the File Server Resource Manager role service.  After that, when you open the properties of a file or folder you’ll find a new tab called Classification.

[Screenshot: Classification tab on the folder’s properties]

Above you’ll notice that I’ve classified this folder as being related to the Transmogrifier project.  Be aware that you can define dynamic access control and audit policies without referencing Resource Properties or adding the File Server Resource Manager role service; you’ll just be limited to Claim Types and the enhanced Boolean logic already discussed.

The only change to the file system access events Windows sends to the Security Log is the addition of a new Resource Attributes field to event ID 4663, which I’ve highlighted below.

[Screenshot: event ID 4663 showing the new Resource Attributes field]

This field is potentially useful in SIEM solutions because it embeds in the audit trail a record of how the file was classified when it was accessed.  This would allow us to classify important folders all over our network as “ACME-CONFIDENTIAL” and then include that string in alerts and correlation rules in a SIEM like EventTracker to alert or escalate on events where the information being accessed has been classified as such.
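As a rough illustration of that kind of correlation rule, the sketch below flags 4663 events whose Resource Attributes contain a sensitive classification tag. The event dictionary, field names and the “ACME-CONFIDENTIAL” tag are simplified illustrations, not EventTracker’s actual rule syntax.

# Sketch of escalating on access to classified data via the Resource
# Attributes field of event ID 4663. Field names are simplified examples.

SENSITIVE_TAGS = {"ACME-CONFIDENTIAL"}

def should_escalate(event):
    if event.get("EventID") != 4663:
        return False
    attributes = event.get("ResourceAttributes", "")
    return any(tag in attributes for tag in SENSITIVE_TAGS)

sample = {
    "EventID": 4663,
    "SubjectUserName": "bob",
    "ObjectName": r"E:\Projects\Transmogrifier\specs.docx",
    "ResourceAttributes": "Project=Transmogrifier;Impact=ACME-CONFIDENTIAL",
}
print(should_escalate(sample))   # True - raise alert severity in the SIEM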

The other big change to auditing and access control in Windows Server 2012 is Central Access Policies which allows you to define a single access control list or audit policy in AD and apply it to any set of computers.  That policy is now evaluated in addition to the local security descriptor on each object.

While Microsoft and the press are concentrating on the access control aspect of these new dynamic and central security features, I think the greatest immediate value may come from the audit policy side that we’ve just explored.  If you’d like to learn more about dynamic and central access control and audit policy check out the deep dive session I did with A.N. Ananth of EventTracker: File Access Auditing in Windows Server 2012.

Two classes of cyber threat to critical infrastructure

John Villasenor describes two classes of cyber threat confronting critical infrastructure. Some systems, like the power grid, are viewed by everyone as critical, and the number of people who might credibly target them is correspondingly smaller. Others, like the internal networks in the Pentagon, are viewed as a target by a much larger number of people. Providing a high level of protection to those systems is extremely challenging, but feasible. Securing them completely is not.

While I would agree that fewer people are interested/able to hack the power grid, it reminds me of the “insider threat” problem that enterprises face. When an empowered insider who has legitimate access goes rogue, the threat can be very hard to locate and the damage can be incredibly high. Most defense techniques for insider threat depend on monitoring and behavior anomaly detection. Adding to the problem is that systems like the power grid are harder to upgrade and harden. The basic methods to restrict access and enforce authentication and activity monitoring would be applicable. No doubt, this was all true for the Natanz processing plant in Iran and it still got hacked by Stuxnet. That system was apparently infected by a USB device carried in by an external contractor, so it would seem that restricting access and activity monitoring may have helped detect it sooner.

In the second class of threat, exemplified by the internal networks at the Pentagon, one assumes that all classic protection methods are enforced. Situational awareness in such cases becomes important. A local administrator who relies entirely on some central IT team to patrol, detect and inform him in time is expecting too much. It is said that God helps those who help themselves.

Villasenor also says: “There is one number that matters most in cybersecurity. No, it’s not the amount of money you’ve spent beefing up your information technology systems. And no, it’s not the number of PowerPoint slides needed to describe the sophisticated security measures protecting those systems, or the length of the encryption keys used to encode the data they hold. It’s really much simpler than that. The most important number in cybersecurity is how many people are mad at you.”

Perhaps we should also consider those interested in cybercrime? The malware industrial complex is booming and the average price for renting botnets to launch DDoS is plummeting.

The Post Breach Boom

A basic requirement for security is that systems be patched and that security products like antivirus be updated as frequently as possible. However, there are practical reasons which limit the application of updates to production systems. This is often the reason why the most active attacks are the ones which have been known for many months.

A new report from the Ponemon Institute polled 3,529 IT and IT security professionals in the U.S., Canada, UK, Australia, Brazil, Japan, Singapore and United Arab Emirates, to understand the steps they are taking in the aftermath of malicious and non-malicious data breaches. Here are some highlights:

On average, it is taking companies nearly three months (80 days) to discover a malicious breach and then more than four months (123 days) to resolve it.

    • One third of malicious breaches are not being caught by any of the companies’ defenses – they are instead discovered when companies are notified by a third party, either law enforcement, a partner, customer or other party – or discovered by accident. Meanwhile, more than one third of non-malicious breaches (34 percent) are discovered accidentally.
    • Nearly half of malicious breaches (42 percent) targeted applications and more than one third (36 percent) targeted user accounts.
    • On average, malicious breaches ($840,000) are significantly more costly than non-malicious data breaches ($470,000). For non-malicious breaches, lost reputation, brand value and image were reported as the most serious consequences by participants. For malicious breaches, organizations suffered lost time and productivity followed by loss of reputation.

Want an effective defense but wondering where to start? Consider SIEM Simplified.

Cyber Attacks: Why are they attacking us?

The news sites are abuzz with reports on Chinese cyber attacks on Washington DC institutions both government and NGOs. Are you a possible target? It depends. Attackers funded by nation states have specific objectives and they will follow these. So if you are a dissident or enabling one, or have secrets that the attacker wants, then you may be a target. A law firm with access to intellectual property may be a target, but an individual has much more reason to fear cyber criminals who seek credit card details than a Chinese attack.

As Sun Tzu noted in the Art of War, “Know your enemy and know yourself, and you need not fear the result of a hundred battles.”

So what are the Chinese after? Ezra Klein has a great piece in the Washington Post. He outlines three reasons:

1)      Asymmetric warfare – the US defense budget is larger than the next 13 countries combined and has been that way for a long, long time. In any conventional or atomic war, no conceivable adversary has any chance. An attack on critical infrastructure may help level the playing field. Operators of critical infrastructure and of course US DoD locations are at risk and should shore up defenses.

2)      Intellectual property theft – China and Russia want to steal the intellectual property (IP) of American companies, and much of that property now lies in the cloud or on an employee’s hard drive. Stealing those blueprints and plans and ideas is an easy way to cut the costs of product development. Law firms or employees with IP need protection.

3)      Chinese intelligence services [are] eager to understand how Washington works. Hackers often are searching for the unseen forces that might explain how the administration approaches an issue, experts say, with many Chinese officials presuming that reports by think tanks or news organizations are secretly the work of government officials — much as they would be in Beijing. This is the most interesting explanation but the least relevant to the security practitioner.

If none of these apply to you, then you should be worried about cyber criminals who are out for financial gain. Classic money-making things like credit cards or Social Security numbers that are used to defraud Visa/Mastercard or perpetrate Medicare fraud. This is by far much more widespread than any other type of hacking.

It turns out that many of the tools and tactics used by all these enemies are the same. Commodity attacks tend to be opportunistic and high volume. Persistent attacks tend to be low-and-slow. This in turn means the defenses for the one would apply to the other and often the most basic approaches are also the most effective. Effective approaches require discipline and dedication most of all. Sadly this is the hardest commitment for small and medium enterprises that are most vulnerable. If this is you, then consider a service like SIEM Simplified as an alternative to do-nothing.

Detecting Persistent Attacks with SIEM


As you read this, attackers are working to infiltrate your network and ex-filtrate valuable information like trade secrets and credit card numbers. In this newsletter featuring research from Gartner, we discuss advanced persistent threats and how SIEM can help detect such attacks.  We also discuss how you can quickly get on the road to deflecting persistent attacks. Read the entire newsletter here.

Industry News:

Pentagon cancels divisive Distinguished Warfare Medal for cyber ops, drone strikes

Washington Post

The special medal for the Pentagon’s drone operators and cyberwarriors didn’t last long. Two months after the military rolled out the Distinguished Warfare Medal for troops who don’t set foot on the battlefield, Defense Secretary Chuck Hagel has concluded it was a bad idea. Some veterans and some lawmakers spoke out against the award, arguing that it was unfair to make the medal a higher honor than some issued for valor on the battlefield.

Be sure to read EventTracker’s blog post discussing the creation and withdrawal of the award.

DDoS: What to Expect from Next Attacks

BankInfo Security

U.S. banking institutions are now in the fifth week of distributed-denial-of-service attacks waged against them as part of Izz ad-Din al-Qassam’s third phase. What lessons has the industry learned, and what actions do security and DDoS experts anticipate next from the hacktivists?

 IT security: Luxury or commodity in these uncertain times?

SC Magazine

Written by EventTracker CEO, A.N. Ananth

Those who attended the recent World Economic Forum in Davos, Switzerland reported that the prevailing mood was “circumspect.” Though there was relief that a global financial crisis may have been averted, both companies and countries continue to experience significant economic challenges. To be sure, there is a sense that the worst has passed, but uncertainty hovers as declining tax revenues are forcing many government agencies into spending cuts. In the United States, the threat of across-the-board cuts to agency budgets (called “sequestration”) looms in the air. Companies are hesitant to use cash on the balance sheet to fuel expansion, wondering if demand exists.

EventTracker News:

EventTracker Enterprise is the only “Recommended” Product of 2013 in SC Magazine SIEM Category

EventTracker, a leading provider of comprehensive SIEM solutions announced today that SC Magazine, the information security industry’s leading news and product evaluation publication, has named EventTracker Enterprise v7.3 its only “Recommended” product and awarded it a perfect 5-Star rating in the SIEM Group Test for 2013. The full product review appears in the April issue of SC Magazine and online.

EventTracker Enterprise Wins Certificate of Networthiness from the U.S. Army

EventTracker, a leading provider of comprehensive SIEM solutions announced today that its EventTracker Enterprise v7.3 security information and event management (SIEM) solution has been awarded a Certificate of Networthiness (CoN) by the U.S. Army Network Enterprise Technology Command (NETCOM). Previously, EventTracker’s Enterprise v7.0 also achieved this distinction.

 Featured Webinar:

 EventTracker Enterprise v7.3 – “A big leap forward in SIEM technology”

Tuesday, April 23 at 2:00 p.m. (EDT)

 Dive into the latest features and capabilities of EventTracker Enterprise v7.3 and see why SC Magazine says EventTracker “hits all of the benchmarks for a top-tier SIEM and is money well spent.”

CEO, A.N. Ananth will also go over the features highlighted in EventTracker’s recent 5-star review by SC Magazine.

One lucky webinar attendee will win a Microsoft Surface tablet, so be sure to register!

Check out a recent EventTracker blog post: Interpreting logs, the Tesla story. You can read all of EventTracker’s blogs at http://www.eventtracker.com/resources/blog/.

The current version of EventTracker is 7.3 b59. Click here for release notes. 

Watch EventTracker’s latest video “SIEM Simplified” here. Or view some of our other new videos here.

Distinguished Warfare Medal for cyber warriors

In what probably was his last move as defense secretary, Leon E. Panetta announced on February 13, 2013 the creation of a new type of medal for troops engaged in cyber-operations and drone strikes, saying the move “recognizes the changing face of warfare.” The official description said that it “may not be awarded for valor in combat under any circumstances,” which is unique. The idea was to recognize accomplishments that are exceptional and outstanding, but not bounded in any geographic or chronologic manner – that is, not taking place in the combat zone. This recognized that people can now do extraordinary things because of the new technologies that are used in war.

On April 16, 2013, barely two months later, incoming Defense Secretary Chuck Hagel withdrew the medal. The medal was the first combat-related award to be created since the Bronze Star in 1944.

Why was it thought to be necessary? Consider the case of the mission that got the leader of al-Qaida in Iraq, Abu Musab al-Zarqawi, in June 2006. Reporting showed that U.S. warplanes dropped two 500-pound bombs on a house in which Zarqawi was meeting with other insurgent leaders. A U.S. military spokesman said coalition forces pinpointed Zarqawi’s location after weeks of tracking the movements of his spiritual adviser, Sheik Abdul Rahman, who also was killed in the blast. A team of unmanned aerial system (drone) operators tracked him down. It took over 600 hours of mission operational work to finally pinpoint him. They put the laser target on the compound that the terrorist leader was in, and then an F-16 pilot flew six minutes, facing no enemy fire, and dropped the bombs – computer-guided of course – on that laser. The pilot was awarded the Distinguished Flying Cross.

The idea behind the medal was that drone operators can be recognized as well. The Distinguished Warfare Medal was to rank just below the Distinguished Flying Cross. It was to have precedence over — and be worn on a uniform above — the Bronze Star with “V” device, a medal awarded to troops for specific heroic acts performed under fire in combat. It was intended to recognize the magnitude of the achievement, not the personal risk taken by the recipient.

The decision to cancel the medal reflects uneasiness about the extent to which UAVs are being used in war rather than any questioning of the skill and dedication of the operators. In announcing the move, Secretary Hagel said a “device” will be affixed to existing medals to recognize those who fly and operate drones, whom he described as “critical to our military’s mission of safeguarding the nation.” It also did not help that the medal had a higher precedence than a Purple Heart or Bronze Star.

There is no getting away from it, warfare in the 21st Century is increasingly in the cyber domain.

Interpreting logs, the Tesla story

Did you see the NY Times review by John Broder, which was critical of the Tesla Model S? Tesla CEO Elon Musk was not pleased. They are not arguing over interpretations or anecdotal recollections of experiences; instead they are arguing over basic facts — things that are supposed to be indisputable in an environment with cameras, sensors and instantly searchable logs.

The conflicting accounts — both described in detail — carry a lesson for those of us involved in log interpretation. Data is supposed to be the authoritative alternative to memory, which is selective in its recollection. As Bianca Bosker said, “In Tesla-gate, Big Data hasn’t made good on its promise to deliver a Big Truth. It’s only fueled a Big Fight.”

This is a familiar scenario if you have picked through logs as a forensic exercise. We can (within limitations) try to answer four of the five W questions – Who, What, When and Where – but the fifth one, Why, is elusive and brings the analyst into the realm of guesswork.

The Tesla story is interesting because interested observers are trying to deduce why the reporter was driving around the parking lot – to find the charger receptacle or to deliberately drain the battery and make for a bad review. Alas the data alone cannot answer this question.

In other words, relying on data alone, big data included, to plumb human intention is fraught with difficulty. An analyst needs context.

What is your risk appetite?

In Jacobellis v. Ohio (1964), Justice Potter Stewart famously wrote of hard-core pornography, “I know it when I see it.” This is not dissimilar to the way many business leaders confront the concept of “risk”.

When a business leader can describe and identify the risk they are willing to accept, then the security team can put appropriate controls in place. Easy to say, but so very hard to do. It’s because the quantification and definition of risk varies widely depending on the person, the business unit, the enterprise and also the vertical industry segment.

What is the downside of not being able to define risk? It leaves the security team guessing about what controls are appropriate. Inadequate controls expose the business to leakage and loss, whereas onerous controls are expen$ive and even offensive to users.

What do you do about it? Communication between the security team and business stakeholders is essential. We find that scenarios that demonstrate and personalize the impact of risk resonate best. It’s also useful to have a common vocabulary as the language divide between the security team and business stakeholders is a consistent problem. Where possible, use terminology that is already in use in the business instead of something from a standard or framework.

Happy Easter!


Five telltale signs that your data security is failing and what you can do about it


1) Security controls are not proportional to the business value of data

Protecting every bit of data as if it were gold bullion in Ft. Knox is not practical. Control complexity (and therefore cost) must be proportional to the value of the items under protection. Loose change belongs on the bedside table; the crown jewels belong in the Tower of London. If you haven’t classified your data to know which is which, then the business stakeholders have no incentive to be involved in its protection.

2) Gaps between data owners and the security team

Data owners usually understand only business processes and activities and the related information – not the “data”. Security teams, on the other hand, understand “data” but usually not its relation to the business, and therefore its criticality to the enterprise. Each needs to take a half step into the other’s domain.

3) The company has never been penalized

Far too often, toothless regulation encourages a wait-and-see approach. Show me an organization that has failed an audit and I’ll show you one that is now motivated to make investments in security.

4) Stakeholders only see value in sharing, not the risk of leakage

Data owners get upset and push back against involving security teams in the setup of access management. Open access encourages sharing and improves productivity, they say. It’s my data, why are you placing obstacles in its usage? Can your security team effectively communicate the risk of leakage in terms that the data owner can understand?

5) Security is viewed as a hurdle to be overcome

How large is the gap between the business leaders and the security team?  The farther apart they are, the harder it is to get support for security initiatives. It helps to have a champion, but over-dependence on a single person is not sustainable. You need buy-in from senior leadership.

Happy St. Patrick’s Day-Compliance


How to Use Process Tracking Events in the Windows Security Log

I think one of the most underutilized features of Windows Auditing and the Security Log is Process Tracking events.

In Windows 2003/XP you get these events by simply enabling the Process Tracking audit policy.  In Windows 7/2008+ you need to enable the Audit Process Creation and, optionally, the Audit Process Termination subcategories which you’ll find under Advanced Audit Policy Configuration in group policy objects.

These events are incredibly valuable because they give a comprehensive audit trail of every time any executable on the system is started as a process.  You can even determine how long the process ran by linking the process creation event to the process termination event using the Process ID found in both events.  Examples of both events are shown below.

Process Start (event ID 592 on WinXP/2003, 4688 on Win7/2008):

A new process has been created.

Subject:

Security ID: WIN-R9H529RIO4Y\Administrator
Account Name: Administrator
Account Domain: WIN-R9H529RIO4Y
Logon ID: 0x1fd23

Process Information:

New Process ID: 0xed0
New Process Name: C:\Windows\System32\notepad.exe
Token Elevation Type: TokenElevationTypeDefault (1)
Creator Process ID: 0x8c0

Process End (event ID 593 on WinXP/2003, 4689 on Win7/2008):

A process has exited.

Subject:

Security ID: WIN-R9H529RIO4Y\Administrator
Account Name: Administrator
Account Domain: WIN-R9H529RIO4Y
Logon ID: 0x1fd23

Process Information:

Process ID: 0xed0
Process Name: C:\Windows\System32\notepad.exe
Exit Status: 0x0


Trying to determine what a user did after logging on to Windows can be difficult to piece together.  These events are valuable on workstations because often, they are the most granular trail of activity left by end-users: for example, you can tell that Bob opened Outlook, then Word, then Excel and closed Word.

The process start event tells you the name of the program and when it started.  It also tells you who ran the program and the ID of their logon session with which you can correlate backwards to the logon event. This allows you to determine the kind of logon session in which the program was run and where the user (if remote) was on the network using the IP address and/or workstation name provided in the logon event.

Process start events also document the process that started them using Creator Process ID which can be correlated backwards to the process start event for the parent process.  This can be invaluable when trying to figure out how a suspect process was started.  If the Creator Process ID points to Explorer.exe, after tracking down the process start event, then it’s likely that the user simply started the process from the start menu.
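Here is a minimal Python sketch of that correlation, assuming the relevant events have already been exported and parsed into dictionaries; the field names follow the event text above, but the parsing, sample data and helper names are illustrative only. Note that Windows reuses process IDs, so a real implementation should match a termination only against the most recent creation event.

from datetime import datetime

# Sketch: link process creation (592/4688) to termination (593/4689) by
# Process ID to compute run time, and resolve the parent process via
# Creator Process ID. Events are assumed to be pre-parsed dictionaries.

def summarize(events):
    running = {}  # process ID -> creation event still believed to be running
    for ev in sorted(events, key=lambda e: e["Time"]):
        if ev["EventID"] in (592, 4688):
            running[ev["NewProcessID"]] = ev
            parent = running.get(ev["CreatorProcessID"])
            parent_name = parent["NewProcessName"] if parent else "unknown"
            print(ev["NewProcessName"], "started by", ev["AccountName"],
                  "(parent:", parent_name + ")")
        elif ev["EventID"] in (593, 4689):
            start = running.pop(ev["ProcessID"], None)
            if start:
                print(ev["ProcessName"], "ran for", ev["Time"] - start["Time"])

events = [
    {"EventID": 4688, "Time": datetime(2013, 4, 1, 9, 0, 0),
     "NewProcessID": 0xED0, "NewProcessName": r"C:\Windows\System32\notepad.exe",
     "CreatorProcessID": 0x8C0, "AccountName": "Administrator"},
    {"EventID": 4689, "Time": datetime(2013, 4, 1, 9, 5, 30),
     "ProcessID": 0xED0, "ProcessName": r"C:\Windows\System32\notepad.exe"},
]
summarize(events)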

These same events, when logged on servers, also provide a degree of auditing over privileged users but be aware that many Windows administrative functions will all show up as process starts for mmc.exe since all Microsoft Management Console apps run within mmc.exe.

But beyond privileged and end-user monitoring, process tracking events help you track possible change control issues and trap advanced persistent threats.  When new software is executed for the first time on a given system it’s important to know, since it implies a significant change to the system or could alert you to a new unauthorized, even malicious, program running for the first time.

The key to seeing this kind of activity is to compare the executable name in a recent event 592/4688 to the executable names in a whitelist – and thereby recognize new executables.

Of course, this method isn’t foolproof because someone could replace an existing executable (on your whitelist) with a new program that has the same name and path as the old.  Such a change would “fly under the radar” with process tracking.  But my experience with unauthorized changes that bypass change control, and with APTs, indicates that while that is certainly possible, the methods described herein will catch their share of offenders and attackers.

To do this kind of correlation you need to enable process tracking on applicable systems (all systems if possible, including workstations) and then you need a SIEM solution that can compare the executable name in the current event to a “whitelist” of executables.

How you build that whitelist is important because it determines whether your criteria for a new executable is unique to “that” system, or based on a “golden” system, or on your entire environment.  The more unique your whitelist is to each system or type of system, the better.  You can build the whitelist by either scanning for all the EXE files on a given system or by analyzing the 592/4688 events over some period of time.  I prefer the latter because there are many EXE files on Windows computers that are never actually executed and I’d like to know the first time any new EXE is run – whether it came with Windows and the installed applications out of the box or whether it is a new EXE recently dropped onto the system.  On the other hand, if you only want to detect when EXEs run which were not present on the system at the time the whitelist was created, then a list built from simply running “dir *.exe /s” will suffice.

If you opt to analyze a period of system activity, make sure that the period is long enough to cover the full usage profile and business process profile for that system – usually a month will do it. Take some time to experiment with Process Tracking events and I think you’ll find that they are valuable for knowing what’s running on your system and who’s running it.
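As a rough sketch of the whitelist approach, the Python below builds a baseline from observed 592/4688 events and flags anything not seen during the learning period. The event layout is a simplified stand-in for whatever your SIEM or export produces, and the file names are made up for illustration.

# Sketch: build a whitelist of executables from a learning period of
# 592/4688 events, then flag new executables in recent events.

def build_whitelist(baseline_events):
    return {e["NewProcessName"].lower() for e in baseline_events
            if e["EventID"] in (592, 4688)}

def find_new_executables(recent_events, whitelist):
    return sorted({e["NewProcessName"] for e in recent_events
                   if e["EventID"] in (592, 4688)
                   and e["NewProcessName"].lower() not in whitelist})

baseline = [{"EventID": 4688, "NewProcessName": r"C:\Windows\System32\notepad.exe"}]
recent = [{"EventID": 4688, "NewProcessName": r"C:\Users\bob\AppData\Local\Temp\dropper.exe"}]

whitelist = build_whitelist(baseline)
for exe in find_new_executables(recent, whitelist):
    print("New executable observed:", exe)   # candidate for investigation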

SIEM Simplified for the Security No Man’s Land

In this blog post, Mike Rothman described the quandary facing the midsize business. With a few hundred employees, they have information that hackers want and actively try to get, but neither the budget or manpower to fund dedicated IT Security types, nor the volume of business to interest a large outsourcer. This puts them in no-man’s land with a bull’s-eye on their backs. Hackers are highly motivated to monetize their efforts and will therefore cheerfully pick the lowest hanging fruit they can get. It’s a wicked problem to be sure and one that we’ve been focused on addressing in our corner of the IT Security universe for some years now.

Our solution to this quandary is called SIEM SimplifiedSM and stems from the acceptance that, as a vendor, we could keep adding all sorts of bells and whistles to our product offering only to see an ever-shrinking percentage of users actually use them in the manner they were designed. Why? Simply put, who has the time? Just as Mike says, our customers are people in mid-size businesses, wearing multiple hats, fighting fires and keeping things operational. SIEM Simplified is the addition of an expert crew at the EventTracker Control Center in Columbia, MD that does the basic blocking and tackling which is the core ingredient if you want to put points on the board. By sharing the crew across multiple customers, it reduces the cost for customers and increases the likelihood of finding the needle in the haystack. And because it’s our bread and butter, we can’t afford to get tired or take a vacation or fall sick and fall behind.

A decade-long focus on this problem as it relates to mid-size businesses has allowed us to tailor the solution to such needs. We use the behavior module to quickly spot new or out-of-the-ordinary patterns, and a wealth of existing reports and knowledge to do the routine but essential legwork of log review. Mike was correct in pointing out that “folks in security no-man’s land need …. an advisor to guide them … They need someone to help them prioritize what they need to do right now.” SIEM Simplified delivers.  More information here.

EventTracker Recommendation Engine

Online shopping continues to bring more and more business to “e-tailers.”  Comscore says there was a  16% increase in holiday shopping this past season over the previous season. Some of this is attributed to “recommendations” that are helpfully shown by the giants of the game such as Amazon.

Here is how Amazon describes its recommendation algorithm: “We determine your interests by examining the items you’ve purchased, items you’ve told us you own, items you’ve rated, and items you’ve told us you like. We then compare your activity on our site with that of other customers, and using this comparison, are able to recommend other items that may interest you.”

Did you know that EventTracker has its own recommendation engine? It’s called Behavior Correlation and is part of EventTracker Enterprise. Just as Amazon learns about your browsing and buying habits and uses them to “suggest” other items, so also EventTracker auto-learns what is “normal” in your enterprise during an adaptive learning period. This can be as short as 3 days or as long as 15 days depending on the nature of your network. In this period, various items such as IP addresses, users, administrators, process names, machines, USB serial numbers, etc. are learned. Once learning is complete, data from the most recent period is compared to the learned behavior to pinpoint both unusual activities as well as those never seen before. EventTracker then “recommends” that you review these to determine if they point to trouble.

Learning never ends, so the baseline is adaptive, refreshing itself continuously. User-defined rules can also be implemented, wherein the comparison periods are not learned but specified, and comparisons are performed not once a day but as frequently as once a minute.
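The core idea is simple enough to sketch. The snippet below is a conceptual illustration of baseline-and-compare behavior analysis, not EventTracker’s actual implementation; the item names are made up.

# Conceptual sketch of an adaptive baseline: learn what is "normal",
# flag never-before-seen items, then fold them into the baseline.

class Baseline:
    def __init__(self):
        self.seen = set()            # e.g. users, IPs, process names, USB serials

    def learn(self, observations):
        self.seen.update(observations)

    def new_items(self, recent):
        return set(recent) - self.seen

baseline = Baseline()
baseline.learn({"alice", "bob", "10.0.0.5"})     # adaptive learning period

today = {"alice", "10.0.0.99", "mallory"}
print(baseline.new_items(today))                 # {'10.0.0.99', 'mallory'} - review these

baseline.learn(today)                            # learning never ends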

If you shop online and feel drawn to a “recommendation”, pause to reflect how this concept can also improve your IT security by looking at logs.

Cyber Security Executive Order

Based on early media reports, the Cyber Security executive order would seem to portend voluntary compliance on the part of U.S. based companies to implement security standards developed in concert with the federal government.  Setting aside the irony of an executive order to voluntarily comply with standards that are yet to be developed, how should private and public sector organizations approach cyber security given today’s exploding threatscape and limited information technology budgets?  How best to prepare for more bad guys, more threats, more imposed standards with less people, time and money?

Back to basics.  First let’s identify the broader challenges: of course you’re watching the perimeter with every flavor of firewall technology and multiple layers of IDS, IPS, AV and other security tools.  But don’t get too comfortable: every organization that has suffered a damaging breach had all those things too.  Since every IT asset is a potential target, every IT asset must be monitored.  Easy to describe, hard to implement. Why?

Challenge number one: massive volumes of log data.  Every organization running a network with more than 100 nodes is already generating millions of audit and event logs.  Those logs are generated by users, administrators, security systems, servers, network devices and other paraphernalia.  They generate the raw data that tracks everything going on from innocent to evil, without prejudice.

Challenge number two: unstructured data. Despite talk and movement toward audit log standards, log data remains widely variable with no common format across platforms, systems and applications, and no universal glossary to define tokens and values.  Even if every major IT player from Microsoft to Oracle (and HP and Cisco), along with several thousand other IT organizations were to adopt uniform, universal log standards today, we would still have another decade or two of the dreaded “legacy data” with which to contend.

Challenge number three: cryptic or non-human-readable logs. Unstructured data is difficult enough, but further adding to the complexity is that most log data content and structure are defined by developers for developers or administrators.  Don’t assume that security officers and analysts, senior management, help desk personnel or even tenured system administrators can quickly and accurately glance at a log and immediately understand its relevance or, more importantly, what to do about it.

Solution?  Use what you already have more wisely.  Implement a log monitoring solution that will ingest all of the data you already generate (and largely ignore until after you discover there’s a real problem), process it in real-time using built-in intelligence, and present the analysis immediately in the form of alerts, dashboards, reports and search capabilities.  Take a poorly designed and voluminous asset (audit logs) and turn it into actionable intelligence.  It isn’t as difficult as it sounds, though it requires rigorous discipline and a serious time commitment.

Cyber criminals employ the digital equivalent of what our military refers to as an “asymmetrical tactic.” Consider a hostile emerging super power in Asia that directly or indirectly funds a million cyber warriors at the U.S. equivalent of $10 a day; cheap labor in a global economy.  No organization, not even the federal government, the world’s largest bank or a 10 location retailer, has unlimited people, time and money to defend against millions of bad guys attacking on a much lower (asymmetrical) operational budget.

IT Operations: Problem-Solvers, Infrastructure Maintainers, Solution Providers

On a recent flight returning from an engagement with a client, my seating companion and I exchanged a few words as we settled into the flight before donning headphones and turning to the iPod music and games we use to distract ourselves from the hassles of travel. He was a cardiologist, and introduced himself as such, before quickly describing his job as basically ‘a glorified plumber’. We both chuckled, knowing that while the two fields share fundamentals in basic concepts, there is much more to cardiology than managing and controlling flow. BTW, my own practical plumbing experiences convinced me of the value of a good plumber.

However, this set me off reflecting on how IT perceives and presents itself. There is no question that IT has progressed far from the days when a pundit launched his career asserting that “IT Doesn’t Matter”. IT operations and the impact of the application of the associated computer and communications technology are on display and felt everywhere around us – facilitating, speeding, complicating, escalating risk and changing our lives, professional and private. From pervasive monitoring to automated remote management and control over energy consumption, work habits, even purchasing, computers operate and impact it all.

In the enterprise today, technology itself is recognized as playing a vital role in business operations and success. Recent surveys of business executives from CEOs to CFOs to CIOs document their view that the application of information technology is linked directly to enterprise operations and growth. Unfortunately, too many IT staff are still struggling to come to terms with that impact and, more worrisome, how to respond to that reality. That is a problem for both the IT staff and the enterprise.

All too often, IT staff see themselves as primarily providers and maintainers (or restrictors) of access to technology, all the while ignoring the role and potential of IT as proactive and involved participants in activities that contribute to enterprise growth, profitability and revenue. IT isn’t simply maintenance, cost control and plumbing. IT is more than ever before, a potential source of competitive advantage and growth. Yet, many business staffs view IT as simply a source of cookie-cutter services which can easily, efficiently and even more effectively come from an outside organization.

Also familiar is the tension between IT as the ‘slow-to-respond’ gatekeeper for the introduction and adoption of new technologies and the business unit manager/sales/marketing professional ‘just trying to get the job done’. Neither is ‘wrong’; each has well-founded arguments that support their roles. However, the evolution in technology and in the enterprise, including the data center raises the risks of such conflict substantially.

The litany of change – cloud, big data, infrastructure as code, mobility, workload-optimized infrastructure, deep analytics, etc. – is familiar. The very nature of the data center is changing as computing moves from ‘systems of record’, i.e. traditional operational environments with dedicated infrastructure where infrastructure limited applications, to ‘systems of engagement’, i.e. environments that are responsive and adaptive to the operating environment, demand and the specific service provided. The implications for IT due to this shift are radical, exciting and still very much emerging. More fundamentally, these changes are revising how IT views, uses, applies and makes decisions about technology. IT must determine how to integrate, balance and effectively operate in an environment consisting of a combination of dynamic and fixed resources, infrastructure and assets.

The evolution of technology is changing how IT solution providers today provide products. The emphasis is on providing products and solutions that are smarter, more integrated, simpler to use, more comprehensive in application, quicker to implement and deliver a larger and faster payback using whatever exists as the current measure of success.

IT needs such solutions because they are the only way to meet the demands of their users while freeing resources for other activities. Non-technical business stakeholders want these solutions because they see the power of applied technology to resolve real problems. Risk comes when the business side fails to see the potential of their own IT staffs to harness the power of technology, and when business professionals fail to involve IT in their adoption, introduction and use of technology.

Our own interactions with clients and vendors indicate that a transition within IT from problem-solver/technology maintainer to solution provider-business driver is underway. Unfortunately, it is occurring at a pace that is much slower than is healthy for IT and the enterprise. IT has to be proactive in positioning itself as an active partner in and contributor to business success. Fortunately, many vendors recognize the challenge facing their IT clients and are making the changes in their product offerings, training and presentation to support IT in the transition.

SIEM in the Social Era

The value proposition of our SIEM Simplified offering is that you can leave the heavy lifting to us. What is undeniable is that getting value from SIEM solutions requires patient sifting through millions of logs, dozens of reports and alerts to find nuggets of value. It’s quite similar to detective work.

But does that not mean you are somehow giving up power? Letting someone else get a claw hold in your domain?

Valid question, but consider this from Nilofer Merchant who says “In the Social Era, value will be (maybe even already is) no longer created primarily by people who work for you or your organization“.

Isn’t power about being the boss?
The Social Era has disrupted the traditional view of power which has always been your title, span of control and budget. Look at Wikipedia or Kickstarter where being powerful is about championing an idea. With SIEM Simplified, you remain in control, notified as necessary, in charge of any remediation.

Aren’t I paid to know the answer?
Not really. Being the keeper of all the answers has become less important with the rise of fantastic search tools and the ease of sharing, as compared to say even 10 years ago. Merchant says “When an organization crowns a few people as chiefs of answers, it forces ideas to move slowly up and down the hierarchy, which makes the organization resistant to change and less competitive. The Social Era raises the pressure on leaders to move from knowing everything to knowing what needs to be addressed and then engaging many people in solving that, together.” Our staff does this every day, for many different environments. This allows us to see the commonalities and bring issues to the fore.

Does it mean blame if there is failure and no praise if it works?
In a crowd sourcing environment, there are many more hands in every pie. In practice, this leads to more ownership from more people than the other way around. Consider Wikipedia as an example of this. It does require different skills, collaborating instead of commanding, sharing power rather than hoarding it. After all, we are only successful, if you are. Indeed, as a provider of the service, we are always mindful that this applies to us more than it does you.

As a provider of services, we see clearly that the most effective engagements are the ones where we can avoid the classic us/them paradigm and instead act as a badgeless team. The Hubble Space Telescope is an excellent example of this type of effort.

It’s a Brave New World, and it’s coming at you, ready or not.

Big Data and Information Inequality

Mike Wu, writing in Tech Crunch, observed that in all realistic data sets (especially big data), the amount of information one can extract from the data is always much less than the data volume (see figure below).

[Figure: information extracted vs. data volume]

In his view, given the above, the value of big data is hugely exaggerated. He then goes on to infer that this is actually a strong argument for why we need even bigger data. Because the amount of valuable insights we can derive from big data is so very tiny, we need to collect even more data and use more powerful analytics to increase our chance of finding them.

Now machine data (aka log data) is certainly big data, and it is certainly true that obtaining insights from such datasets is a painstaking (and often thankless) job, but I wonder if this means we need even more data. Methinks we need to be able to better interpret the big data set and its relevance to “events”.

Over the past two years, we have been deeply involved in “eating our own dog food” as it were. At multiple EventTracker installations that are nationwide in scope, and span thousands of log sources, we have been working to extract insights for presentation to the network owners. In some cases, this is done with a lot of cooperation from the network owner and we have a good understanding of IT assets and the actors who use/abuse them. We find that with such involvement we are better able to risk prioritize what we observe in the data set and map to business concerns. In other cases where there is less interaction with the network owner and we know less about the actors or the relative criticality of assets, then we fall back on past experience and/or vendor-provided info as to what is an incident.  It is the same dataset in both cases but there is more value in one case than the other.

To say it another way, to get more information from the same data we need other types of context to extract signal from noise. Enabling logging at a more granular level from the same devices, thereby generating an ever bigger dataset, won’t increase the signal level. EventTracker can merge change audit data, netflow information, as well as vulnerability scan data to enable a greater signal-to-noise ratio. That is a big deal.
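To illustrate what that context buys you, here is a toy Python sketch in which the same event scores higher when the affected asset is known to be critical or carries unpatched vulnerabilities. The hosts, weights and data sources are made-up examples of the kind of enrichment described above, not a real scoring model.

# Conceptual sketch of context-based prioritization: enrich an event with
# asset criticality and vulnerability scan data before scoring it.

ASSET_CRITICALITY = {"db01": 3, "kiosk07": 1}      # supplied by the network owner
OPEN_VULNS = {"db01": {"CVE-2012-1234"}}           # from a vulnerability scan

def risk_score(event):
    score = 1                                      # every event starts with a base score
    score += ASSET_CRITICALITY.get(event["host"], 1)
    if OPEN_VULNS.get(event["host"]):
        score += 2                                 # host has known unpatched issues
    return score

event = {"host": "db01", "message": "repeated failed logons"}
print(risk_score(event))    # 6 - escalate; the same event on kiosk07 scores only 2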

Small Business: too small to care?

Small businesses around the world tend to be more innovative and cost-conscious. Most often, the owners tend to be younger and therefore more attuned to being online. The efficiencies that come from being computerized and connected are more obvious and attractive to them. But we know that if you are online then you are vulnerable to attack. Are these small businesses  too small for hackers to care?

Two recent reports say no.

The UK Information Security Breaches Survey 2012 results, published by PwC, show:

  • 76% of small business had a security breach
  • 15% of small businesses were hit by a denial of service attack
  • 20% of small businesses lost confidential data and 80% of these breaches were serious
  • The average cost of a small business’s worst security breach was between 15-30K pounds
  • Only 8% of small businesses monitor what their staff post on social sites
  • 34% of small businesses allow smart phones and tablets to connect to their network but have done nothing about it
  • On average, IT security consumes 8% of the spending but 58% make no attempt to evaluate the effectiveness of the expenditure

From the US, the 2012 Verizon data breach report shows:

  • Restaurant and POS systems are popular targets.
  • Companies with 11-100 employees from 36 countries had the maximum number of breaches.
  • Top threats to small business were external against servers
  • 83% of the theft was by professional cybercriminals, for profit
  • Keyloggers designed to capture user input were present in 48% of breaches
  • The most common malware injection vector is installation by a remote attacker
  • Payment card info and authentication credentials were the most stolen data
  • The initial compromise required only basic methods with no customization; automated scripts can do it
  • More than 79% of attacks were opportunistic; large-scale automated attacks are opportunistically attacking small to medium businesses, and POS systems frequently provide the opportunity
  • In 72% of cases, it took only minutes from initial attack to compromise but hours for data removal and days for detection
  • More than 55% of breaches remained undiscovered for months
  • More than 92% of the breaches were reported by an external party
  • Only 11% were monitoring access, a control called out in Requirement 10 of PCI-DSS

Lesson learned? Small may be beautiful, but in the interconnected world we live in, not too small to be hacked. Protect thyself: start simple by changing remote access credentials, enabling a firewall, and monitoring and mining your logs. ‘Nuff said.

A smartphone named Desire

Is this true for you: has your smartphone merged your private and work lives? Smartphones now contain, by accident or by design, a wealth of information about the businesses we work for.

If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?

Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on them, along with the capability to remotely monitor what happened to them once they were found. They were left in high traffic public places such as elevators, malls, food courts, and public transit stops.

Key findings:

  • 96% of lost smartphones were accessed by the finders of the devices
  • 89% of devices were accessed for personal related apps and information
  • 83% of devices were accessed for corporate related apps and information
  • 70% of devices were accessed for both business and personal related apps and information
  • 50% of smartphone finders contacted the owner and provided contact information

The corporate related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?

  • Take inventory of the mobile devices connecting to your company’s networks; you can’t protect and manage what you don’t know about.
  • Track resource access by mobile devices. For example, if you are using MS Exchange, ActiveSync logs can tell you a whole lot about such access (see the sketch after this list).
  • See our white paper on the subject
  • Track all remote login to critical servers
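To make the ActiveSync point concrete: ActiveSync requests land in the IIS logs on the Exchange Client Access server under /Microsoft-Server-ActiveSync, and the query string identifies the user and device. Below is a rough Python sketch that inventories devices per user. The field positions assume the default W3C log fields, so verify them against your own configuration, and note that newer ActiveSync protocol versions may base64-encode the query string, which this sketch does not handle.

from urllib.parse import parse_qs
from collections import defaultdict

def devices_per_user(log_lines):
    """Return {username: set(device ids)} seen in ActiveSync requests."""
    seen = defaultdict(set)
    for line in log_lines:
        if line.startswith("#"):      # skip W3C header lines
            continue
        parts = line.split()
        # Assumes cs-uri-stem is field 5 and cs-uri-query is field 6
        # (the default W3C selection); adjust for your log settings.
        if len(parts) > 5 and "Microsoft-Server-ActiveSync" in parts[4]:
            q = parse_qs(parts[5])
            user = q.get("User", ["?"])[0]
            seen[user].update(q.get("DeviceId", []))
    return seen

A sudden new DeviceId for an existing user, or a user you did not expect to be mobile at all, is worth a closer look.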

See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’

Should You Disable Java?

The headlines are ablaze with the news of a new zero-day vulnerability in Java which could expose you to a remote attacker.

The Department of Homeland Security recommends disabling Java completely and many experts are apparently concurring. Crisis communications 101 says maintain high-volume, multi-channel communications, but there is a strange silence from Oracle, aside from the announcement of a patch for said vulnerability.

Allowing your opponents to define you is a bad mistake, as any political consultant will tell you. Today it’s Java; tomorrow, some other widely used component. The shrillness of the calls also makes me wonder: why the hullabaloo? Upset by Oracle’s stewardship of Java, perhaps?

So what should you make of the “disable Java” calls echoing across Cyberia? Personally I think it’s bad advice, assuming you can even take the advice in the first place. Java is widespread in server-side applications (usually enterprise software) and embedded devices. There is probably no easy way to “upgrade” a heart pump, an elevator control, or a POS system. On the server side this may be easier, but spare a thought for backward compatibility and for business applications that are “certified” on older browsers. Pause a moment: the vulnerability is exposed when you visit a malicious website, which can then take advantage of the flaw and get onto your machine.

Instead of disabling Java and thereby possibly breaking critical functionality, why not limit access to outside websites instead? This is easily done by configuring proxy servers (good for desktops or mobile situations) or by limiting devices to a subnet that only has access to trusted internal hosts (this can work for bar code scanners or manufacturing equipment). This limits your exposure. Proxy server filtering at the internet perimeter is done by matching the user agent string. This is also a good way to prevent those older, insecure browsers that must be kept around for internal applications from accessing the outside and becoming an equally likely source of infection in the enterprise.
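To make the user-agent idea concrete, here is a small Python sketch of the matching logic. In a real deployment you would express these as rules in whatever proxy you run (Squid, TMG, BlueCoat and the like); the patterns below are purely illustrative.

import re

# Illustrative patterns: requests made by the Java runtime itself identify
# themselves with a "Java/<version>" user agent, and legacy IE versions kept
# around for internal apps announce themselves as "MSIE <n>".
BLOCKED_AGENTS = [
    re.compile(r"\bJava/1\.[0-7]"),
    re.compile(r"\bMSIE [4-7]\."),
]

def allow_outbound(user_agent: str) -> bool:
    """Return False if the client should be kept off the open internet."""
    return not any(p.search(user_agent) for p in BLOCKED_AGENTS)

print(allow_outbound("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"))  # True
print(allow_outbound("Java/1.7.0_10"))                                    # False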

This is a serious issue that merits a thoughtful response, not a panicked rush to comply and cripple your enterprise.

Top 4 Security Questions You Can Only Answer with Workstation Logon/Logoff Events

I often encounter a dangerous misconception about the Windows Security Log: the idea that you only need to monitor domain controller logs.  Domain controller security logs are absolutely critical to security but they are only a portion of your overall audit trail.  Member server and workstation logs are really just as important and I’m going to focus this article on the top 4 questions you can only answer with workstation logon/logoff events.

For your workstations to generate these events you need to enable at least the following audit policy.  Remember that XP is configured with the legacy 9 audit categories while Windows 7 and 8 should be configured with audit subcategories under Advanced Audit Policy in group policy objects:

Windows XP: Logon/Logoff for Success & Failure
Windows 7 and 8: Logon for Success & Failure; Logoff for Success

When Did the User Logoff?

The workstation security log is the only place that can answer this question. Contrary to intuition, domain controllers have no idea when you logoff. When you enter your credentials at your workstation, even though it looks like you are logging into the domain, you are really just logging on to your workstation. The domain controller authenticates you, but the only logon session established is at the workstation. Windows labels that logon session with a logon ID, which is included in both the logon event (528 or 4624) and the logoff event (551 or 4647), and that’s how you correlate the two events to come up with the duration of the overall logon session. Remember, though, that users often remain logged on for days at a time. So to make the concept of a logon session more meaningful you need to take into account when the workstation console is locked and/or the screen saver is in effect. Thankfully, Microsoft added event IDs to Windows 7 to cover these events: see events 4800-4803, which are logged if you enable the “Other Logon/Logoff Events” audit subcategory.
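If you export the workstation events to something you can script against, the correlation is straightforward. The sketch below is Python with placeholder field names (event_id, logon_id, time); map them from however your collector exports the Security log.

from datetime import datetime

def session_durations(events):
    """Pair logon events (528/4624) with logoff events (551/4647) on Logon ID."""
    logons, durations = {}, []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["event_id"] in (528, 4624):          # logon
            logons[e["logon_id"]] = e["time"]
        elif e["event_id"] in (551, 4647):        # user initiated logoff
            start = logons.pop(e["logon_id"], None)
            if start is not None:
                durations.append((e["logon_id"], e["time"] - start))
    return durations

evts = [
    {"event_id": 4624, "logon_id": "0x3e7a1", "time": datetime(2013, 1, 7, 8, 2)},
    {"event_id": 4647, "logon_id": "0x3e7a1", "time": datetime(2013, 1, 7, 17, 30)},
]
print(session_durations(evts))   # one session of 9 hours 28 minutes

To refine the picture, treat the lock/unlock pairs (4800/4801) the same way.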

What Is the Exact Reason for Logon Failure?

Would you believe that Kerberos authentication failure events on domain controllers don’t tell you the exact reason why the request failed? It’s true, and the reason is that Kerberos ticket request events use the failure codes specified in RFC 1510. That Kerberos specification doesn’t contemplate all the different reasons a logon can fail in Windows, so some Kerberos failure codes logged on the domain controller can map to any of several Windows logon failure reasons. For instance, Kerberos failure code 0x12 (client credentials have been revoked) can mean that the Windows account is disabled, expired or currently locked out. To get the specific reason the logon failed, you need to find the related logon failure event on the workstation from which the user attempted the logon. XP logs a different event ID for each reason (529-537) while Windows 7 logs just one event ID (4625) with the reason stated within the details.
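For reference, here is a short and deliberately incomplete map of common RFC 1510 failure codes as they appear in the domain controller’s Kerberos events (4768/4771). Note how 0x12 by itself cannot tell you which of the three Windows reasons applied; that detail only lives in the workstation’s own failure event. Verify the codes against Microsoft’s documentation before relying on them.

# Common Kerberos failure codes (RFC 1510) seen in DC events 4768/4771.
KERBEROS_FAILURE_CODES = {
    "0x6":  "Client not found in Kerberos database (bad or non-existent username)",
    "0x12": "Client credentials revoked (account disabled, expired OR locked out)",
    "0x17": "Password has expired",
    "0x18": "Pre-authentication failed (typically a bad password)",
    "0x25": "Clock skew too great",
}

def explain(code: str) -> str:
    return KERBEROS_FAILURE_CODES.get(code.lower(),
                                      "Unmapped code: check the workstation's logon failure event")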

Who Accessed this Laptop While It Was Disconnected from the Network?

Knowing the answer to this question can be important in forensics situations.  When a Windows laptop is disconnected from the network, any domain user out of the last 10 successful interactive logons can logon to the workstation with cached credentials.

Normally, when you logon to your workstation with domain credentials, the workstation checks your credentials with the domain controller and this creates an audit trail on the DC.  But when you logon to a workstation with cached credentials nothing is logged on the DC – after all the whole reason Windows is using cached credentials is because you aren’t connected to the network.

Again, the Logon/Logoff category on XP and the Logon subcategory on Windows 7 save the day. Just look for logon event 528 or 540 on XP (4624 on Windows 7) where the Logon Type is 11; type 11 stands for an interactive logon with cached credentials.
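Once events are exported, filtering for these is a one-liner. As before, the field names in this Python sketch are placeholders for whatever your export format uses.

def cached_logons(events):
    """Return logon events that used cached credentials (Logon Type 11)."""
    return [e for e in events
            if e["event_id"] in (528, 540, 4624) and e.get("logon_type") == 11]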

Is Anyone Trying to Break Into This Computer?

If someone is trying to break into a workstation over the network by guessing the password of a domain account, the authentication failures will show up on the domain controller. But if they are pounding on a local account on that workstation, or simply trying random user names, the only indication you’ll have will be the failed logon events in that workstation’s security log.

As stated earlier, the logon failure events for XP are 529-537 while Windows 7 logs just one event ID (4625) with the reason stated within the details. How can you tell whether the logon attempt involved a domain account or not? Just check the Account Domain under “Account For Which Logon Failed”. If it matches one of your domains, the logon attempt is likely related to an account in your domain. If the Account Domain matches the name of the workstation itself, someone is specifically trying to log on to that system using a local account defined on it. If the Account Domain is blank or shows some other non-existent domain or user name, someone may be trying to break into that system.
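The triage logic reduces to a few comparisons. In this Python sketch the domain list and workstation name are example values you would fill in per environment.

KNOWN_DOMAINS = {"ACME", "ACMECORP"}        # your AD domain names (examples)

def classify_failed_logon(account_domain: str, workstation: str) -> str:
    """Rough triage of a 4625/529-537 failure based on the Account Domain field."""
    d = (account_domain or "").upper()
    if d in KNOWN_DOMAINS:
        return "domain account: check the domain controller logs as well"
    if d == workstation.upper():
        return "local account on this workstation: possible targeted guessing"
    return "blank or unknown domain: possible break-in attempt"

print(classify_failed_logon("WKSTN042", "WKSTN042"))   # local account guessing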

As you can see, workstation logon events are extremely valuable, especially in this era of increased end-point security risks. Advanced Persistent Threat actors love to start with a compromised workstation and follow a lateral kill chain to the servers they are really after. So catching intruders at the workstation is a good way to break that kill chain early in the process.

2013 Security Resolutions

A New Year’s resolution is a commitment that a person makes to one or more personal goals, projects, or the reforming of a habit.

  • The ancient Babylonians made promises to their gods at the start of each year that they would return borrowed objects and pay their debts.
  • The Romans began each year by making promises to the god Janus, for whom the month of January is named.
  • In the Medieval era, the knights took the “peacock vow” at the end of the Christmas season each year to re-affirm their commitment to chivalry.

Here are mine:

1)      Shed those extra pounds of logs:

Log retention is always a challenge — how much to keep, for how long? Keep them too long and they are just eating away storage space. Pitch them mercilessly and keep wondering if you will need them.  For guidance, look to any regulation that may apply. PCI-DSS says 365 days, for example; NIST 800-92 unhelpfully says “This should be driven primarily by organizational policies” and then goes on to classify logs into system, infrastructure and application levels. Bottom line, use your judgment because you know your environment best.

2)      Exercise your log analysis muscles regularly

As the Verizon Data Breach report says year in and year out, the bad guys are hoping that you are not collecting logs, and if you are, that you are not reviewing them. More than 96% of all attacks were not highly difficult and were avoidable (at least in hindsight) without difficult or expensive countermeasures. Easier said than done, isn’t it? Consider co-sourcing the effort.

3)      Play with existing toys before buying new ones

Know what configuration assessment is? It’s applying secure configurations to existing equipment. Organizations such as NIST, CIS and DISA provide detailed guidelines, and vendors such as Microsoft provide hardening guides. It’s a question of applying them to existing hardware. This reduces the attack surface and contributes greatly to a more secure posture. You already have the equipment; just apply the secure configuration. EventTracker can help measure the results.

Happy New Year.