Savvy IT Is The Way To Go

There is a lot of discussion, in the context of cloud as well as traditional computing, about Smart IT, Smarter Planets, and Smart and Smarter Computing. That makes a lot of sense in light of the explosion in the amount of collected data and the massive efforts aimed at using analytics to yield insight, information and intelligence about — well, just about everything. We have no problem with smart activities.

We also hear a lot of speculation about the impact, good and bad, that advances in technology, emerging business models, and changing revenue, cost and delivery processes will exert on IT, and specifically enterprise IT. Add to these the predictions of the end of 'IT as we know it', with prognosticators describing a looming radical alteration in enterprise computing as in-house IT winds down and applications, data and computing move into vast, amorphous clouds of distributed, but still centralized, infrastructure and data centers. Who is kidding whom?

Smart computing isn’t going to go away, and it makes a point. However, our contention is that it takes more than just Smart IT to succeed; it takes Savvy IT.

Savvy IT complements and extends smarts – with the ability to leverage all of what you know and what you can do to be successful. Savvy can be used as a noun, an adjective and a verb. The definition of the adjective describes savvy as "having or showing a clever awareness in practical matters: astute, cagey, canny, knowing, shrewd, slick, smart, wise". More colloquially, it means acting and being 'street smart'. Watching and listening across the industry, we see a market evolving to favor moving from Smart IT to Savvy IT.

Savvy IT is concerned with optimizing the use of IT infrastructure, assets and resources to achieve enterprise goals. Savvy IT acts proactively to drive line-of-business staff to use emerging technologies by helping them understand how technology can help develop and implement new business models and revenue streams. It involves interactive, coordinated and cooperative efforts targeting external as well as internal customers.

It's about a 'street smart' application of technology to solve problems and drive organizational success. It is based on the insight of personal experience, including awareness and knowledge of the business and its industry, and personal efforts to exploit data, capabilities and technology. Finally, it's about a CIO who pursues the goal of making sure IT's services are at least as good as, if not better than, the best services available from SaaS or service providers.

An explicit example of Savvy IT appears in the evolution toward real solutions to comprehensive business and operational problems, driven and developed from the perspective of the customer or client end-users. Savvy IT works with the business to proactively identify, develop and implement technology-dependent innovations that act as game changers for the company.

One example is the radically accelerated cycle of development, testing and distribution of business applications as they become app-based services. Another is when IT staff link transaction and merchandising services across multiple technologies – connecting transaction services on mobile devices with traditional systems of record to provide a seamless purchase experience, whether an order is placed online, from a phone or from a flyer, with the option of home delivery or pick-up at a 'brick and mortar' store. That innovation gives the global merchandiser Target a significant competitive advantage.

Savvy IT requires both innovation and invention in the application of technology combined with experience that knows where and how to focus efforts that will either solve problems or reduce their impact in favor of continuing services. Smart operations provide a foundation on which to build; savvy tempers fashion with experience that ‘delivers’ despite the obstacles and challenges that inevitably arise.

In implementation and practice, Savvy IT applies whether the model for IT services is built exclusively around an internal data center, an external cloud or service provider, or a combination of both. Implementing Savvy IT is an organizational challenge that starts with IT but extends to include the whole enterprise. Savvy IT is street smart. It's about protecting the business from existing and emerging risks that persistently evolve. We'll explore more of the implications, impacts, processes and issues over the coming months.

Feel free to send any comments, questions or discussion about Savvy IT, pro or con, as well as other topics of interest to Rich Ptak: rlptak @ptaknoel [dot] com.

The Dark Side of Big Data

A study published in Nature looked at the phone records of some 1.5 million mobile phone users in an undisclosed small European country and found that it took only four data points on the time and location of a call to identify 95% of the people. In the dataset, the location of an individual was specified hourly, with a spatial resolution given by the carrier's antennas.

Mobility data is among the most sensitive data currently being collected. It contains the approximate whereabouts of individuals and can be used to reconstruct individuals’ movements across space and time. A simply anonymized dataset does not contain name, home address, phone number or other obvious identifier. For example, the Netflix Challenge provided a training dataset of 100,480,507 movie ratings each of the form <user, movie, date-of-grade, grade> where the user was an integer ID.

Yet, if an individual's patterns are unique enough, outside information can be used to link the data back to that individual. For instance, in one study, a medical database was successfully combined with a voter list to extract the health record of the governor of Massachusetts. In the case of the Netflix data set, despite the attempt to protect customer privacy, it was shown possible to identify individual users by matching the data set with film ratings on the Internet Movie Database. Even coarse data sets provide little anonymity.
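To make the linkage mechanism concrete, here is a small, purely illustrative Python sketch; the records, antenna names and observation counts are invented, but the pattern mirrors how a handful of outside observations can single out one record in an "anonymized" dataset.

```python
# Illustrative only: tiny invented datasets showing a linkage attack.
# The "anonymized" mobility records carry no names, just a numeric ID
# plus coarse time/place observations (the quasi-identifiers).
anonymized = [
    {"uid": 101, "points": {("Mon 08h", "antenna_12"), ("Mon 19h", "antenna_40"),
                            ("Tue 08h", "antenna_12"), ("Sat 14h", "antenna_77")}},
    {"uid": 102, "points": {("Mon 08h", "antenna_12"), ("Mon 19h", "antenna_40"),
                            ("Tue 08h", "antenna_13"), ("Sat 14h", "antenna_20")}},
]

# Outside information about one known person, e.g. gleaned from public
# check-ins: four time/place observations are often enough.
known_person = {("Mon 08h", "antenna_12"), ("Tue 08h", "antenna_12"),
                ("Mon 19h", "antenna_40"), ("Sat 14h", "antenna_77")}

# Re-identification: find the record whose trace contains all known points.
matches = [r["uid"] for r in anonymized if known_person <= r["points"]]
if len(matches) == 1:
    print(f"Unique match: anonymized user {matches[0]} is the known person")
```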

The issue is making sure the debate over big data and privacy keeps up with the science. Yves-Alexandre de Montjoye, one of the authors of the Nature article, says that the ability to cross-link data, such as matching the identity of someone reading a news article to posts that person makes on Twitter, fundamentally changes the idea of privacy and anonymity.

Where do you, and by extension your political representative, stand on this 21st Century issue?

The Intelligence Industrial Complex

If you are old enough to remember the 1988 U.S. presidential election, then the name Gary Hart may sound familiar. He was the clear frontrunner after his second Senate term from Colorado ended. He was caught in an extra-marital affair and dropped out of the race. He has since earned a doctorate in politics from Oxford and accepted an endowed professorship at the University of Colorado at Denver.

In a recent analysis, he quotes President Dwight Eisenhower: "…we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist."

His point is that the US now has an intelligence-industrial complex composed of close to a dozen and a half federal intelligence agencies and services, many of which are duplicative, and in the last decade or two the growth of a private sector intelligence world. It is dangerous to have a technology-empowered government capable of amassing private data; it is even more dangerous to privatize this Big Brother world.

As has been extensively reported recently, the Foreign Intelligence Surveillance Act (FISA) courts are required to issue warrants, as the Fourth Amendment (against unreasonable search and seizure) requires, upon a showing that national security is endangered. This was instituted in the late 1970s following findings of serious unconstitutional abuse of power. He asks, "Is the Surveillance State — the intelligence-industrial complex — out of the control of the elected officials responsible for holding it accountable to American citizens protected by the U.S. Constitution?"

We should not have to rely on whistle-blowers to protect our rights.

In a recent interview with Charlie Rose of PBS, President Obama said, "My concern has always been not that we shouldn't do intelligence gathering to prevent terrorism, but rather: Are we setting up a system of checks and balances?" Despite this, he avoided addressing the fact that no request to a FISA court has ever been rejected, or that companies that provide data on their customers are under a gag order that prevents them even from disclosing the requests.

Is the Intelligence-Industrial complex calling the shots? Does the President know a lot more than he can reveal? Clearly he is unwilling to even consider changing his predecessor's policies.

It would seem that Senator Hart has a valid point. If so, it's a lot more consequential than Monkey Business.

Introducing EventTracker Log Manager

The IT team of a small business has it the worst. Just one or two administrators to keep the entire operation running, which includes servers, workstations, patching, anti-virus, firewalls, applications, upgrades, password resets…the list goes on. It would be great to have 25 hours in a day and four hands per admin just to keep up. Adding security or compliance demands to the list just makes it that much harder.

The path to relief? Automation, in one word. Something that you can “fit-and-forget”.

You need a solution which gathers all security information from around the network, platforms, network devices, apps etc. and knows what to do with it. One that retains it all efficiently and securely for later analysis if needed, displays it in a dashboard for you to examine at your convenience, alerts you via e-mail/SMS etc. if absolutely necessary, indexes it all for fast search, and finds new or out-of-the-ordinary patterns by itself.

And you need it all in a software-only package that is quickly installed on a workstation or server. That’s what I’m talking about. That’s EventTracker Log Manager.

Designed for the 1-2 sys admin team.
Designed to be easy to use, quick to install and deploy.
Based on the same award-winning technology that SC Magazine awarded a perfect 5-star rating to in 2013.

How do you spell relief? E-v-e-n-t-T-r-a-c-k-e-r  L-o-g  M-a-n-a-g-e-r.
Try it today.

Following a User’s Logon Tracks throughout the Windows Domain

What security events get logged when a user logs on to their workstation with a domain account and proceeds to run local applications and access resources on servers in the domain?

When a user logs on at a workstation with their domain account, the workstation contacts a domain controller via Kerberos and requests a ticket granting ticket (TGT).  If the user fails authentication, the domain controller logs event ID 4771 or an audit failure instance of event ID 4768.  The result code in either event specifies why authentication failed.  Bad passwords and time synchronization problems trigger 4771, while other authentication failures, such as account expiration, trigger a 4768 failure.  These result codes are based on the Kerberos RFC 1510, and in some cases one Kerberos failure reason corresponds to several possible Windows logon failure reasons.  In these cases the only way to know the exact reason for the failure is to check the failure reason in the logon event on the computer the user is trying to log on from.
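As a rough illustration of how a monitoring script might translate those result codes into readable failure reasons, here is a minimal Python sketch. The handful of codes shown come from the Kerberos specification, but treat the mapping as illustrative rather than exhaustive, and verify the codes against Microsoft's documentation before building rules on them.

```python
# Illustrative mapping of a few Kerberos result codes seen in events 4768/4771.
# Codes come from the Kerberos RFC; verify against Microsoft's documentation
# before relying on them in production alerting rules.
KERBEROS_FAILURE_CODES = {
    "0x6":  "Username does not exist (KDC_ERR_C_PRINCIPAL_UNKNOWN)",
    "0x12": "Account disabled, expired or locked out (KDC_ERR_CLIENT_REVOKED)",
    "0x17": "Password expired (KDC_ERR_KEY_EXPIRED)",
    "0x18": "Bad password / pre-authentication failed (KDC_ERR_PREAUTH_FAILED)",
    "0x25": "Workstation clock out of sync with the DC (KRB_AP_ERR_SKEW)",
}

def describe_kerberos_failure(result_code: str) -> str:
    """Translate a result code from a 4768/4771 event into a readable reason."""
    return KERBEROS_FAILURE_CODES.get(result_code.lower(),
                                      f"Unrecognized result code {result_code}")

print(describe_kerberos_failure("0x18"))
```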

If the user's credentials check out, the domain controller creates a TGT, sends the ticket back to the workstation, and logs event ID 4768.  Event ID 4768 shows the user who authenticated and the IP address of the client (in this case, the workstation). However, there is no logon session identifier because the domain controller handles authentication – not logon sessions.   Authentication events are just events in time; sessions have a beginning and an end.  In Windows, each member computer (workstation and servers) handles its own logon sessions.

When the domain controller fails the authentication request, the local workstation will log 4625 in its local security log noting the user's domain, logon name and the failure reason.  There is a distinct failure reason for every way a Windows logon can fail, in contrast to the more general result codes in the Kerberos events on the domain controller.

If authentication succeeds and the domain controller sends back a TGT, the workstation creates a logon session and logs event ID 4624 to the local security log.  This event identifies the user who just logged on, the logon type and the logon ID.  The logon type specifies whether the logon session is interactive, remote desktop, network-based (i.e. an incoming connection to a shared folder), a batch job (e.g. a Scheduled Task) or a service logon triggered by a service starting.  The logon ID is a hexadecimal number identifying that particular logon session. All subsequent events associated with activity during that logon session bear the same logon ID, making it relatively easy to correlate all of a user's activities while he or she is logged on.  When the user finally logs off, Windows records a 4634 followed by a 4647.  Event ID 4634 indicates that the user initiated the logoff sequence, which may still be canceled.  Event ID 4647 is logged when the logon session is fully terminated.  If the system is shut down, all logon sessions are terminated, and since the user didn't initiate the logoff, event ID 4634 is not logged.
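Since every event in a session carries the same logon ID, correlating a logon with its logoff is mostly a matter of grouping. Below is a minimal Python sketch of that correlation; it assumes the relevant events have already been exported into simple records, and the field names are simplified for illustration rather than being the exact event schema.

```python
from datetime import datetime

# Simplified, invented export of security events; real events would come from
# the Windows event log (e.g. via a collector) with many more fields.
events = [
    {"id": 4624, "time": "2013-06-01 08:02:11", "logon_id": "0x3e7f1",
     "user": "ACME\\bob", "logon_type": 2},
    {"id": 4647, "time": "2013-06-01 17:15:42", "logon_id": "0x3e7f1",
     "user": "ACME\\bob"},
]

def session_durations(evts):
    """Pair each 4624 (logon) with the 4634/4647 (logoff) sharing its logon ID."""
    starts = {e["logon_id"]: e for e in evts if e["id"] == 4624}
    for e in evts:
        if e["id"] in (4634, 4647) and e["logon_id"] in starts:
            t0 = datetime.strptime(starts[e["logon_id"]]["time"], "%Y-%m-%d %H:%M:%S")
            t1 = datetime.strptime(e["time"], "%Y-%m-%d %H:%M:%S")
            yield starts[e["logon_id"]]["user"], e["logon_id"], t1 - t0

for user, logon_id, duration in session_durations(events):
    print(f"{user} session {logon_id} lasted {duration}")
```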

While a user is logged on, they typically access one or more servers on the network.  Their workstation automatically re-uses the domain credentials they entered at logon to connect to other servers.  When a user tries to access a shared folder on a file server, for example, the user's workstation requests a service ticket from the domain controller, which authenticates the user to that server.  The domain controller logs event ID 4769, which is useful because it indicates that the user accessed a given server; the computer name of the server accessed is found in the Service Name field of 4769.  When the workstation presents the service ticket to the file server, the server creates a logon session and records event ID 4624 just like the workstation did earlier, but this time the logon type is 3 (network logon).  However, as soon as the user closes all files opened during this network logon session, the server automatically ends the logon session and records 4647.  Therefore, network logon sessions typically last for less than a second while a file is saved, unless the user's application keeps a file open on the server for extended periods of time.   This results in the constant stream of logon/logoff events that you typically observe on file servers and means that logon/logoff events on servers with logon type 3 are not very useful.  It is probably better to focus on access events to sensitive files using object access auditing.
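To see how the domain controller's 4769 events alone can answer "which servers did this account touch?", here is a small illustrative sketch in the same spirit; the records and field names are simplified stand-ins for a real export.

```python
from collections import defaultdict

# Simplified 4769 (service ticket) records exported from domain controllers.
# Real events carry the target computer account in the Service Name field.
ticket_events = [
    {"id": 4769, "account": "bob@ACME.COM", "service_name": "FILESRV01$"},
    {"id": 4769, "account": "bob@ACME.COM", "service_name": "MAILSRV02$"},
    {"id": 4769, "account": "alice@ACME.COM", "service_name": "FILESRV01$"},
]

# Roll up which servers each account requested service tickets for.
servers_by_user = defaultdict(set)
for e in ticket_events:
    if e["id"] == 4769:
        servers_by_user[e["account"]].add(e["service_name"].rstrip("$"))

for account, servers in servers_by_user.items():
    print(f"{account} requested service tickets for: {', '.join(sorted(servers))}")
```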

Additional logon/logoff events on servers and authentication events associated with other types of user activity include:

  • Remote desktop connections
  • Service startups
  • Scheduled tasks
  • Application logons – especially IIS based applications like SharePoint, Outlook Web Access and ActiveSync mobile device clients

These events will generate logon/logoff events on the application servers involved and Kerberos events on domain controllers.

You may also see NTLM authentication events on domain controllers from clients and applications that use NTLM instead of Kerberos.  NTLM events fall under the Credential Validation subcategory of the Account Logon audit category in Windows.  There is only one event ID logged for both successful and failed NTLM authentication events.

A user leaves tracks on each system he or she accesses, and the combined security logs of the domain controllers alone provide a complete record of every time a domain account is used and of which workstations and servers were accessed.  Understanding Kerberos and NTLM, and how Windows separates the concept of logon sessions from authentication, helps a sys admin interpret these events and grasp why different events are logged on each system.

See more examples of the events described in this article at the Security Log Encyclopedia.

Secure your electronic trash

At the typical office, computer equipment becomes obsolete or slow and periodically requires replacement or refresh. This includes workstations, servers, copy machines, printers and so on. Users who get the upgrades are inevitably pleased; they carefully move their data to the new equipment and happily release the old. What happens after that? Does someone cart the old machines off to the local recycling center? Do you call for a dumpster? This is likely the case in a small or medium enterprise, whereas large enterprises may hire an electronics recycler.

This blog by Kyle Marks appeared in the Harvard Business Review and reminds us that sensitive data can very well be leaked via decommissioned electronics also.

A SIEM solution like EventTracker is effective when leakage occurs from connected equipment or even mobile laptops or those that connect infrequently. However, disconnected and decommissioned equipment is invisible to a SIEM solution.

If you are subject to regulatory compliance, leakage is leakage. Data security laws mandate that organizations implement "adequate safeguards" to ensure privacy protection of individuals. This applies equally when the leakage comes from your electronic trash. You are still bound to safeguard the data.

Marks points out that detailed tracking data, however, reveals a troubling fact: four out of five corporate IT asset disposal projects had at least one missing asset. More disturbing is the fact that 15% of these “untracked” assets are devices potentially bearing data such as laptops, computers, and servers.

Treating IT asset disposal as a "reverse procurement" process will deter insider theft. This is something that EventTracker cannot help with, but it is equally important in addressing compliance and security regulations.

You often see a gumshoe or private investigator in the movies conducting trash archaeology in search of clues. Now you know why.

What did Ben Franklin really mean?

In the aftermath of the disclosure of the NSA program called PRISM by Edward Snowden to a reporter at The Guardian, commentators have gone into overdrive and the most iconic quote is one attributed to Benjamin Franklin “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.

It was amazing that something said over 250 years ago would be so apropos. Conservatives favor an originalist interpretation of documents such as the US Constitution (see Federalist Society) and so it seemed possible that very similar concerns existed at that time.

Trying to get to the bottom of this quote, Ben Wittes of Brookings wrote that it does not mean what it seems to say.

The words appear originally in a 1755 letter that Franklin is presumed to have written on behalf of the Pennsylvania Assembly to the colonial governor during the French and Indian War. The Assembly wished to tax the lands of the Penn family, which ruled Pennsylvania from afar, to raise money for defense against French and Indian attacks. The Penn family was unwilling to acknowledge the power of the Assembly to tax them, and the Governor, being an appointee of the Penn family, kept vetoing the Assembly's efforts. The Penn family later offered cash to fund defense of the frontier – as long as the Assembly would acknowledge that it lacked the power to tax the family's lands.

Franklin was thus complaining of the choice facing the legislature between being able to make funds available for frontier defense versus maintaining its right of self-governance. He was criticizing the Governor for suggesting it should be willing to give up the latter to ensure the former.

The statement is typical of Franklin's style and rhetoric, which also includes "Sell not virtue to purchase wealth, nor Liberty to purchase power."  While the circumstances were quite different, it seems the general principle he was stating is indeed relevant to the Snowden case.

What is happening to log files? The Internet of Things, Big Data, Analytics, Security, Visualization – OH MY!

Over the past year, enterprise IT has had more than a few things emerge to frustrate and challenge it. High on the list has to be limited budget growth in the face of increasing demand for and expectations of new services. In addition, there has been an explosion in the list of technologies and concerns that appear to be particularly intended to complicate the task of maintaining smooth running operations and service delivery.

Whether it is security, Big Data, analytics, Cloud, BYOD, data center consolidation, or infrastructure refresh – IT infrastructure and operations are changing, expanding, becoming smarter and definitely more chatty. The amount of data generated from operating and maintaining the infrastructure to run workloads and deliver services continues to increase at an accelerating pace. The successful delivery of IT-dependent services requires data to be properly correlated and analyzed, and the results presented in a clear, concise and rapidly consumable manner.

The Internet of Things refers to the proliferation of smart devices that connect to, communicate over and exchange data across the internet. It is rapidly becoming the Internet of Everything [1] as the number and variety of networked devices and services continues to explode. In fact, it is growing at a pace that challenges the capabilities and capacities of existing infrastructure to create, support and maintain effective, reliable services. The lagging pace of infrastructure evolution both complicates and drives innovation in the how, what and format of data collection, normalization, analysis and presentation.

Monitoring, managing and controlling these devices and services involves the creation, collection and consumption of data. Big Data barely describes the volume of data and information that must be consumed and analyzed to provide information and knowledge for management and control – much of which ends up in log files.

Whether it resides in log files or is consumed as data services, that data must be collected, filtered, integrated and analyzed more quickly to yield easily consumable, actionable information that drives corrective or ameliorative action. Data analysis and modeling, even sophisticated analysis, have been around and in use for centuries – but only recently has a growing community of non-experts had the ability to access and use very sophisticated data manipulation and processing techniques.

A continuing stream of stories calls attention to the risk of exposure and malicious access to the increasing amount of data, both personal and business, private and public, that is collected, exchanged and accessible on today's networks. Such stories have little apparent effect on the oftentimes reckless willingness of consumers and customers to neglect efforts to protect the security and assure the integrity of the data and information they all too casually and willingly provide, exchange and store.

Today's market and political environments are unforgiving and woefully unsecured. It isn't only malicious attacks that result in access to data and information that should be both private and well-protected. Only the extremely foolish or incurably reckless will fail to make the proactive investment necessary to secure and protect the integrity and privacy of business, enterprise, consumer and customer data. Recent events and actions are driving the IT and business communities toward a greater focus on and sensitivity to security issues.

The demand is escalating for improvement in the ability to communicate complex and critical information quickly and accurately. Increasingly sophisticated consumers must absorb and understand the significance and criticality of information in order to respond promptly and appropriately. Advanced analytics and manipulation smooth the analysis of data and information from multiple sources to yield detailed information and insight. There are applications that can combine data from multiple sources [2] into a single report and even send the data itself to a smartphone or tablet. Visualization is recognized, and with increasing frequency used, as the fastest, most effective path to understanding what is happening and what must be done.

So, what does this mean for us? The widespread availability of data from multiple, disparate sources in the enterprise greatly expands what is available for analysis. It enhances the role, impact and visibility of analysts and IT as they directly contribute to enterprise success. Benefiting from this opportunity requires IT staff to proactively expand the scope of their analysis as they work more closely with partners in enterprise operations. Perceptive providers of analysis tools and solutions are working hard to include extended capabilities and functions that make this task easier, more effective and more powerful.

Finally, there remains the need for a user interface specifically designed to easily manipulate multiple documents and data sets simultaneously by using a touch screen without a keyboard. The fast acceptance and increasing popularity of tablets, phablets and smartphones have alerted vendors to the inadequacy of existing interfaces. The forces described above along with competitive market pressures are driving interest and activity to deliver a new generation of user interfaces specifically designed for creating working documents for these devices. Such an interface will allow users to advance far beyond today’s content-only consumption patterns. Developing the new interface means rethinking office productivity applications completely – something nobody has really done since Xerox PARC designed its Star Office system. Now that is something to look forward to.


[1] An apparently endlessly growing list of internet-connected 'things' that started with computers and has been adding networked devices ever since, now including monitoring devices (medical, automobile, equipment, buildings, home, etc.), financial transaction services, security, and communication formats that include voice, analog, digital, video and more.

[2] For example – DB2, Hive/Apache Hadoop, Teradata, MySQL, Amazon Redshift, PostgreSQL, Microsoft SQL and SAP.

What, me worry?

Alfred E. Neuman is the fictitious mascot and cover boy of Mad Magazine. Al Feldstein, who took over as editor in 1956, said, "I want him to have this devil-may-care attitude, someone who can maintain a sense of humor while the world is collapsing around him".

The #1 reason management doesn't get security is the sense that "It can't happen to me" or "What, me worry?" The general argument goes – we are not involved in financial services or national defense. Why would anyone care about what I have? And in any case, even if they hack me, what would they get? It's not even worth the bother. Larry Ponemon, writing in the Harvard Business Review, captures this sentiment.

Attackers are increasingly targeting small companies, planting malware that not only steals customer data and contact lists but also makes its way into the computer systems of other companies, such as vendors. Hackers might also be more interested in your employees than you’d think. Are your workers relatively affluent? If so, chances are the hackers are way ahead of you and are either looking for a way into your company, or are already inside, stealing employee data and passwords which (as they well know) people tend to reuse for all their online accounts.

Ponemon says “It’s literally true that no company is immune anymore. In a study we conducted in 2006, approximately 5% of all endpoints, such as desktops and laptops, were infected by previously undetected malware at any given time. In 2009—2010, the proportion was up to 35%. In a new study, it looks as though the figure is going to be close to 54%, and the array of infected devices is wider too, ranging from laptops to phones.”

In the wake of the recent revelations by Edward Snowden, who blew the whistle on the NSA program called "Prism", many prominent voices have said they are OK with the program and have nothing to hide. This is another aspect of "What, me worry?" Benjamin Franklin had it right many years ago: "Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety."

Learning from LeBron

Thinking about implementing analytics? Before you do that, ask yourself “What answers do I want from the data?”

After the Miami Heat lost the 2011 NBA Finals to the Dallas Mavericks, many armchair MVPs were only too happy to explain that LeBron was not a clutch player and didn't have what it takes to win championships in this league. Both LeBron and Coach Erik Spoelstra, however, were determined to convert that loss into a teaching moment.

Analytics was indicated. But what was the question?  According to Spoelstra, “It took the ultimate failure in the Finals to view LeBron and our offense with a different lens. He was the most versatile player in the league. We had to figure out a way to use him in the most versatile of ways — in unconventional ways.” In the last game of the 2011 Finals, James was almost listlessly loitering beyond the arc, hesitating, shying away, and failing to take advantage of his stature. His last shot of those Finals was symbolic: an ill-fated 25-foot jump shot from the outskirts of the right wing — his favorite 3-point shot location that season.

LeBron decided the correct answer was to work on the post-up game during the off season. He spent a week learning from the great Hakeem Olajuwon. He brought his own videographer to record the sessions for later review. LeBron arrived early for each session and was stretched and ready to go every time. He took the lessons to the gym for the rest of the off season. It worked. James emerged from that summer transformed. “When he returned after the lockout, he was a totally different player,” Spoelstra says. “It was as if he downloaded a program with all of Olajuwon’s and Ewing’s post-up moves. I don’t know if I’ve seen a player improve that much in a specific area in one offseason. His improvement in that area alone transformed our offense to a championship level in 2012.”

The true test of analytics isn't just how good they are, but how committed you are to acting on the data. At the 2012 NBA Finals, LeBron won the MVP title and Miami, the championship.

The lesson to learn here is to know what answers you are seeking from the data and to commit to going where the data takes you.

Using Dynamic Audit Policy to Detect Unauthorized File Access

One thing I always wished you could do in Windows auditing was mandate that access to an object be audited if the user was NOT a member of a specified group.  Why?  Well sometimes you have data that you know a given group of people will be accessing and for that activity you have no need of an audit trail.

Let’s just say you know that members of the Engineering group will be accessing your Transmogrifier project folder and you do NOT need an audit trail for when they do.  But this is very sensitive data and you DO need to know if anyone else looks at Transmogrifier.

In the old days there was no way to configure Windows audit policy with that kind of negative Boolean or exclusive criteria.  With Windows 2008/7 and earlier you could only enable auditing based on whether someone was in a group, not the opposite.

Windows Server 2012 gives you a new way to control audit policy on files.  You can create dynamic policies based on attributes of the file and the user.  (By the way, you get the same new dynamic capabilities for permissions, too.)

Here’s a screen shot of audit policy for a file in Windows 7.

[Screenshot: file audit policy dialog in Windows 7]

Now compare that to Windows Server 2012.

[Screenshot: file audit policy dialog in Windows Server 2012, showing the "Add a condition" section]

The same audit policy is defined but look at the “Add a condition” section.  This allows you to add further criteria that must be met before the audit policy takes effect.  Each time you click “Add a condition” Windows adds another criteria row where you can add Boolean expressions related to the User, the Resource (file) being accessed or the Device (computer) where the file is accessed.  In the screen shot below I’ve added a policy which accomplishes what we described at the beginning of the article.

[Screenshot: audit entry limited to users who are not members of the Engineering group]

So we start out by saying that Everyone is audited when they successfully read data in this file.  But then we limit that to users who do not belong to the Engineering group.  Pretty cool, but we are only scratching the surface.  You can add more conditions and you can join them with the Boolean operators OR and AND.  You can even group expressions the way you would with parentheses in programming code.  The example below shows all of these features: the audit policy takes effect if the user is either a member of a certain group or in the Accounting department, and the file has been classified as relevant to GLBA or HIPAA compliance.

[Screenshot: audit entry combining group, department and file classification conditions]

You'll also notice that you can base auditing and access decisions on much more than the user's identity and group membership.  In the example above we are also referencing the department specified on the Organization tab of the user's account in Active Directory.  But with dynamic access control we can choose any other attribute on AD user accounts by going to Dynamic Access Control in the Active Directory Administrative Center and selecting Claim Types as shown here.

[Screenshot: Claim Types under Dynamic Access Control in the Active Directory Administrative Center]

You can create claim types for just about any attribute of computer and user objects.  After creating a new claim type for a given attribute, it's available in access control lists and audit policies of files and folders throughout the domain.

But dynamic access control and audit policy doesn’t stop with sophisticated Boolean logic and leveraging user and computer attributes from AD.  You can now classify resources (folders and files) according to any number of properties you’d like.  Below is a list of the default Resource Properties that come out of the box.

[Screenshot: default Resource Properties]

Before you can begin using a given Resource Property in a dynamic access control list or audit policy you need to enable it and then add it to a Resource Property List which is shown here.

[Screenshot: Resource Property List]

After that you are almost ready to define dynamic permissions and audit policies.  The last setup step is to identify the file servers where you want to classify files and folders with Resource Properties.  On those file servers you need to add the File Server Resource Manager subrole.  After that, when you open the properties of a file or folder you'll find a new tab called Classification.

[Screenshot: Classification tab on the properties of a folder]

Above you’ll notice that I’ve classified this folder as being related to the Transmogrifier project.  Be aware that you can define dynamic access control and audit policies without referencing Resource Properties or adding the File Server Resource Manager subrole; you’ll just be limited to Claim Types and the enhanced Boolean logic already discussed.

The only change to the file system access events Windows sends to the Security Log is the addition of a new Resource Attributes field to event ID 4663, which I've highlighted below.

[Screenshot: event ID 4663 with the new Resource Attributes field highlighted]

This field is potentially useful in SIEM solutions because it embeds in the audit trail a record of how the file was classified when it was accessed.  This would allow us to classify important folders all over our network as “ACME-CONFIDENTIAL” and then include that string in alerts and correlation rules in a SIEM like EventTracker to alert or escalate on events where the information being accessed has been classified as such.
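As a rough sketch of that correlation idea (the field names, the resource attribute string format and the "ACME-CONFIDENTIAL" label are all invented for illustration), a rule could scan 4663 events for a sensitive classification and escalate:

```python
# Illustrative escalation rule: flag 4663 events whose Resource Attributes
# carry a sensitive classification label. Field names are simplified; a real
# SIEM rule would operate on the parsed event schema it defines.
SENSITIVE_LABELS = {"ACME-CONFIDENTIAL"}

def should_escalate(event: dict) -> bool:
    """True when a 4663 event touches a resource classified as sensitive."""
    if event.get("id") != 4663:
        return False
    attrs = event.get("resource_attributes", "")
    return any(label in attrs for label in SENSITIVE_LABELS)

sample = {
    "id": 4663,
    "user": "ACME\\mallory",
    "object_name": r"\\FILESRV01\Projects\Transmogrifier\design.docx",
    "resource_attributes": "Project=Transmogrifier;Classification=ACME-CONFIDENTIAL",
}

if should_escalate(sample):
    print(f"ALERT: {sample['user']} read a classified file: {sample['object_name']}")
```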

The other big change to auditing and access control in Windows Server 2012 is Central Access Policies which allows you to define a single access control list or audit policy in AD and apply it to any set of computers.  That policy is now evaluated in addition to the local security descriptor on each object.

While Microsoft and press are concentrating on the access control aspect of these new dynamic and central security features, I think the greatest immediate value may come from the audit policy side that we’ve just explored.  If you’d like to learn more about dynamic and central access control and audit policy check out the deep dive session I did with A.N. Ananth of EventTracker: File Access Auditing in Windows Server 2012.

Two classes of cyber threat to critical infrastructure

Dan Villasenor describes two classes of cyber threat confronting critical infrastructure. Some, like the power grid, are viewed by everyone as critical, and the number of people who might credibly target them is correspondingly smaller. Others, like the internal networks in the Pentagon, are viewed as a target by a much larger number of people. Providing a high level of protection to those systems is extremely challenging, but feasible. Securing them completely is not.

While I would agree that fewer people are interested/able to hack the power grid, it reminds me of the “insider threat” problem that enterprises face. When an empowered insider who has legitimate access goes rogue, the threat can be very hard to locate and the damage can be incredibly high. Most defense techniques for insider threat depend on monitoring and behavior anomaly detection. Adding to the problem is that systems like the power grid are harder to upgrade and harden. The basic methods to restrict access and enforce authentication and activity monitoring would be applicable. No doubt, this was all true for the Natanz processing plant in Iran and it still got hacked by Stuxnet. That system was apparently infected by a USB device carried in by an external contractor, so it would seem that restricting access and activity monitoring may have helped detect it sooner.

In the second class of threat, exemplified by the internal networks at the Pentagon, one assumes that all classic protection methods are enforced. Situational awareness in such cases becomes important. A local administrator who relies entirely on some central IT team to patrol, detect and inform him in time is expecting too much. It is said that God helps those who help themselves.

Villasenor also says: “There is one number that matters most in cybersecurity. No, it’s not the amount of money you’ve spent beefing up your information technology systems. And no, it’s not the number of PowerPoint slides needed to describe the sophisticated security measures protecting those systems, or the length of the encryption keys used to encode the data they hold. It’s really much simpler than that. The most important number in cybersecurity is how many people are mad at you.”

Perhaps we should also consider those interested in cybercrime? The malware industrial complex is booming and the average price for renting botnets to launch DDoS is plummeting.

The Post Breach Boom

A basic requirement for security is that systems be patched and that security products like antivirus be updated as frequently as possible. However, there are practical reasons which limit the application of updates to production systems. This is often why the most active attacks are ones that have been known about for many months.

A new report from the Ponemon Institute polled 3,529 IT and IT security professionals in the U.S., Canada, the UK, Australia, Brazil, Japan, Singapore and the United Arab Emirates to understand the steps they are taking in the aftermath of malicious and non-malicious data breaches. Here are some highlights:

On average, it is taking companies nearly three months (80 days) to discover a malicious breach and then more than four months (123 days) to resolve it.

    • One third of malicious breaches are not being caught by any of the companies’ defenses – they are instead discovered when companies are notified by a third party, either law enforcement, a partner, customer or other party – or discovered by accident. Meanwhile, more than one third of non-malicious breaches (34 percent) are discovered accidentally.
    • Nearly half of malicious breaches (42 percent) targeted applications and more than one third (36 percent) targeted user accounts.
    • On average, malicious breaches ($840,000) are significantly more costly than non-malicious data breaches ($470,000). For non-malicious breaches, lost reputation, brand value and image were reported as the most serious consequences by participants. For malicious breaches, organizations suffered lost time and productivity followed by loss of reputation.

Want an effective defense but wondering where to start? Consider SIEM Simplified.

Cyber Attacks: Why are they attacking us?

The news sites are abuzz with reports of Chinese cyber attacks on Washington DC institutions, both government and NGOs. Are you a possible target? It depends. Attackers funded by nation states have specific objectives and they will pursue them. So if you are a dissident or enabling one, or have secrets that the attacker wants, then you may be a target. A law firm with access to intellectual property may be a target, but an individual has much more reason to fear cyber criminals who seek credit card details than a Chinese attack.

As Sun Tzu noted in the Art of War, "Know your enemy and know yourself, and you need not fear the result of a hundred battles."

So what are the Chinese after? Ezra Klein has a great piece in the Washington Post. He outlines three reasons:

1)      Asymmetric warfare – the US defense budget is larger than the next 13 countries combined and has been that way for a long, long time. In any conventional or atomic war, no conceivable adversary has any chance. An attack on critical infrastructure may help level the playing field. Operators of critical infrastructure and of course US DoD locations are at risk and should shore up defenses.

2)      Intellectual property theft – China and Russia want to steal the intellectual property (IP) of American companies, and much of that property now lies in the cloud or on an employee’s hard drive. Stealing those blueprints and plans and ideas is an easy way to cut the costs of product development. Law firms or employees with IP need protection.

3)      Chinese intelligence services [are] eager to understand how Washington works. Hackers often are searching for the unseen forces that might explain how the administration approaches an issue, experts say, with many Chinese officials presuming that reports by think tanks or news organizations are secretly the work of government officials — much as they would be in Beijing. This is the most interesting explanation but the least relevant to the security practitioner.

If none of these apply to you, then you should be worried about cyber criminals who are out for financial gain. Classic money-making things like credit cards or Social Security numbers that are used to defraud Visa/Mastercard or perpetrate Medicare fraud. This is by far much more widespread than any other type of hacking.

It turns out that many of the tools and tactics used by all these enemies are the same. Commodity attacks tend to be opportunistic and high volume. Persistent attacks tend to be low-and-slow. This in turn means the defenses for one apply to the other, and often the most basic approaches are also the most effective. Effective approaches require discipline and dedication above all. Sadly, this is the hardest commitment for the small and medium enterprises that are most vulnerable. If this is you, then consider a service like SIEM Simplified as an alternative to doing nothing.

Detecting Persistent Attacks with SIEM


As you read this, attackers are working to infiltrate your network and ex-filtrate valuable information like trade secrets and credit card numbers. In this newsletter featuring research from Gartner, we discuss advanced persistent threats and how SIEM can help detect such attacks.  We also discuss how you can quickly get on the road to deflecting persistent attacks. Read the entire newsletter here.

Industry News:

Pentagon cancels divisive Distinguished Warfare Medal for cyber ops, drone strikes

Washington Post

The special medal for the Pentagon’s drone operators and cyberwarriors didn’t last long. Two months after the military rolled out the Distinguished Warfare Medal for troops who don’t set foot on the battlefield, Defense Secretary Chuck Hagel has concluded it was a bad idea. Some veterans and some lawmakers spoke out against the award, arguing that it was unfair to make the medal a higher honor than some issued for valor on the battlefield.

Be sure to read EventTracker’s blog post discussing the creation and withdrawal of the award.

DDoS: What to Expect from Next Attacks

BankInfo Security

U.S. banking institutions are now in the fifth week of distributed-denial-of-service attacks waged against them as part of Izz ad-Din al-Qassam’s third phase. What lessons has the industry learned, and what actions do security and DDoS experts anticipate next from the hacktivists?

 IT security: Luxury or commodity in these uncertain times?

SC Magazine

Written by EventTracker CEO, A.N. Ananth

Those who attended the recent World Economic Forum in Davos, Switzerland reported that the prevailing mood was “circumspect.” Though there was relief that a global financial crisis may have been averted, both companies and countries continue to experience significant economic challenges. To be sure, there is a sense that the worst has passed, but uncertainty hovers as declining tax revenues are forcing many government agencies into spending cuts. In the United States, the threat of across-the-board cuts to agency budgets (called “sequestration”) looms in the air. Companies are hesitant to use cash on the balance sheet to fuel expansion, wondering if demand exists.

EventTracker News:

EventTracker Enterprise is the only “Recommended” Product of 2013 in SC Magazine SIEM Category

EventTracker, a leading provider of comprehensive SIEM solutions announced today that SC Magazine, the information security industry’s leading news and product evaluation publication, has named EventTracker Enterprise v7.3 its only “Recommended” product and awarded it a perfect 5-Star rating in the SIEM Group Test for 2013. The full product review appears in the April issue of SC Magazine and online.

EventTracker Enterprise Wins Certificate of Networthiness from the U.S. Army

EventTracker, a leading provider of comprehensive SIEM solutions announced today that its EventTracker Enterprise v7.3 security information and event management (SIEM) solution has been awarded a Certificate of Networthiness (CoN) by the U.S. Army Network Enterprise Technology Command (NETCOM). Previously, EventTracker’s Enterprise v7.0 also achieved this distinction.

 Featured Webinar:

 EventTracker Enterprise v7.3 – “A big leap forward in SIEM technology”

Tuesday, April 23 at 2:00 p.m. (EDT)

 Dive into the latest features and capabilities of EventTracker Enterprise v7.3 and see why SC Magazine says EventTracker “hits all of the benchmarks for a top-tier SIEM and is money well spent.”

CEO, A.N. Ananth will also go over the features highlighted in EventTracker’s recent 5-star review by SC Magazine.

One lucky webinar attendee will win a Microsoft Surface tablet, so be sure to register!

Check out a recent EventTracker blog post: Interpreting logs, the Tesla story. You can read all of EventTracker's blogs at http://www.eventtracker.com/resources/blog/.

The current version of EventTracker is 7.3 b59. Click here for release notes. 

Watch EventTracker’s latest video “SIEM Simplified” here. Or view some of our other new videos here.

Distinguished Warfare Medal for cyber warriors

In what probably was his last move as defense secretary, Leon E. Panetta announced on February 13, 2013 the creation of a new type of medal for troops engaged in cyber-operations and drone strikes, saying the move "recognizes the changing face of warfare." The official description said that it "may not be awarded for valor in combat under any circumstances," which is unique. The idea was to recognize accomplishments that are exceptional and outstanding, but not bounded in any geographic or chronological manner – that is, not taking place in the combat zone. This recognized that people can now do extraordinary things because of the new technologies that are used in war.

On April 16, 2013, barely two months later, incoming Defense Secretary Chuck Hagel withdrew the medal. The medal had been the first combat-related award created since the Bronze Star in 1944.

Why was it thought to be necessary? Consider the case of the mission that got the leader of al-Qaida in Iraq, Abu Musab al-Zarqawi, in June 2006. Reporting showed that U.S. warplanes dropped two 500-pound bombs on a house in which Zarqawi was meeting with other insurgent leaders. A U.S. military spokesman said coalition forces pinpointed Zarqawi's location after weeks of tracking the movements of his spiritual adviser, Sheik Abdul Rahman, who also was killed in the blast. A team of unmanned aerial system (drone) operators tracked him down. It took over 600 hours of mission operational work to finally pinpoint him. They put the laser target on the compound that the terrorist leader was in, and then an F-16 pilot flew six minutes, facing no enemy fire, and dropped the bombs – computer-guided of course – on that laser. The pilot was awarded the Distinguished Flying Cross.

The idea behind the medal was that drone operators can be recognized as well. The Distinguished Warfare Medal was to rank just below the Distinguished Flying Cross. It was to have precedence over — and be worn on a uniform above — the Bronze Star with “V” device, a medal awarded to troops for specific heroic acts performed under fire in combat. It was intended to recognize the magnitude of the achievement, not the personal risk taken by the recipient.

The decision to cancel the medal reflects uneasiness about the extent to which UAVs are being used in war rather than any questioning of the skill and dedication of the operators. In announcing the move, Secretary Hagel said a "device" will be affixed to existing medals to recognize those who fly and operate drones, whom he described as "critical to our military's mission of safeguarding the nation." It also did not help that the medal had a higher precedence than the Purple Heart or the Bronze Star.

There is no getting away from it, warfare in the 21st Century is increasingly in the cyber domain.

Interpreting logs, the Tesla story

Did you see the NY Times review by John Broder, which was critical of the Tesla Model S? Tesla CEO Elon Musk was not pleased. They are not arguing over interpretations or anecdotal recollections of experiences; instead they are arguing over basic facts — things that are supposed to be indisputable in an environment with cameras, sensors and instantly searchable logs.

The conflicting accounts — both described in detail — carry a lesson for those of us involved in log interpretation. Data is supposed to be the authoritative alternative to memory, which is selective in its recollection. As Bianca Bosker said, “In Tesla-gate, Big Data hasn’t made good on its promise to deliver a Big Truth. It’s only fueled a Big Fight.”

This is a familiar scenario if you have picked through logs as a forensic exercise. We can (within limitations) try to answer four of the five W questions – Who, What, When and Where – but the fifth one – Why – is elusive and brings the analyst into the realm of guesswork.

The Tesla story is interesting because interested observers are trying to deduce why the reporter was driving around the parking lot – to find the charger receptacle, or to deliberately drain the battery and set up a bad review? Alas, the data alone cannot answer this question.

In other words, relying on data alone, big data included, to plumb human intention is fraught with difficulty. An analyst needs context.

What is your risk appetite?

In Jacobellis v. Ohio (1964), Justice Potter Stewart famously declined to define hard-core pornography, saying, "I know it when I see it." This is not dissimilar to the way many business leaders confront the concept of "risk".

When a business leader can describe and identify the risk they are willing to accept, then the security team can put appropriate controls in place. Easy to say, but so very hard to do. It’s because the quantification and definition of risk varies widely depending on the person, the business unit, the enterprise and also the vertical industry segment.

What is the downside of not being able to define risk? It leaves the security team guessing about what controls are appropriate. Inadequate controls expose the business to leakage and loss, whereas onerous controls are expen$ive and even offensive to users.

What do you do about it? Communication between the security team and business stakeholders is essential. We find that scenarios that demonstrate and personalize the impact of risk resonate best. It’s also useful to have a common vocabulary as the language divide between the security team and business stakeholders is a consistent problem. Where possible, use terminology that is already in use in the business instead of something from a standard or framework.

Happy Easter!

[Easter comic]

Five telltale signs that your data security is failing and what you can do about it


1) Security controls are not proportional to the business value of data

Protecting every bit of data as if it's gold bullion in Ft. Knox is not practical. Control complexity (and therefore cost) must be proportional to the value of the items under protection. Loose change belongs on the bedside table; the crown jewels belong in the Tower of London. If you haven't classified your data to know which is which, then the business stakeholders have no incentive to be involved in its protection.

2) Gaps between data owners and the security team

Data owners usually only understand business processes and activities and the related information – not the “data”. Security teams, on the other hand, understand “data” but usually not its relation to the business, and therefore its criticality to the enterprise. Each needs to take a half step into the others’ domain.

3) The company has never been penalized

Far too often, toothless regulation encourages a wait-and-see approach. Show me an organization that has failed an audit and I’ll show you one that is now motivated to make investments in security.

4) Stakeholders only see value in sharing, not the risk of leakage

Data owners get upset and push back against involving security teams in the setup of access management. Open access encourages sharing and improves productivity, they say. It’s my data, why are you placing obstacles in its usage? Can your security team effectively communicate the risk of leakage in terms that the data owner can understand?

5) Security is viewed as a hurdle to be overcome

How large is the gap between the business leaders and the security team?  The farther apart they are, the harder it is to get support for security initiatives. It helps to have a champion, but over-dependence on a single person is not sustainable. You need buy-in from senior leadership.

Happy St. Patrick’s Day-Compliance

[St. Patrick's Day compliance comic]

How to Use Process Tracking Events in the Windows Security Log

I think one of the most underutilized features of Windows auditing and the Security Log is Process Tracking events.

In Windows 2003/XP you get these events by simply enabling the Process Tracking audit policy.  In Windows 7/2008+ you need to enable the Audit Process Creation and, optionally, the Audit Process Termination subcategories which you’ll find under Advanced Audit Policy Configuration in group policy objects.

These events are incredibly valuable because they give a comprehensive audit trail of every time any executable on the system is started as a process.  You can even determine how long the process ran by linking the process creation event to the process termination event using the Process ID found in both events.  Examples of both events are shown below.

Process Start (WinXP/2003 event 592; Win7/2008+ event 4688)
A new process has been created.

Subject:
Security ID: WIN-R9H529RIO4Y\Administrator
Account Name: Administrator
Account Domain: WIN-R9H529RIO4Y
Logon ID: 0x1fd23

Process Information:
New Process ID: 0xed0
New Process Name: C:\Windows\System32\notepad.exe
Token Elevation Type: TokenElevationTypeDefault (1)
Creator Process ID: 0x8c0

Process End (WinXP/2003 event 593; Win7/2008+ event 4689)
A process has exited.

Subject:
Security ID: WIN-R9H529RIO4Y\Administrator
Account Name: Administrator
Account Domain: WIN-R9H529RIO4Y
Logon ID: 0x1fd23

Process Information:
Process ID: 0xed0
Process Name: C:\Windows\System32\notepad.exe
Exit Status: 0x0

What a user did after logging on to Windows can be difficult to piece together.  These events are valuable on workstations because they are often the most granular trail of activity left by end-users: for example, you can tell that Bob opened Outlook, then Word, then Excel, and then closed Word.

The process start event tells you the name of the program and when it started.  It also tells you who ran the program and the ID of their logon session, which you can correlate backwards to the logon event. This allows you to determine the kind of logon session in which the program was run and, using the IP address and/or workstation name provided in the logon event, where the user (if remote) was on the network.

Process start events also document the process that started them, via the Creator Process ID, which can be correlated backwards to the process start event for the parent process.  This can be invaluable when trying to figure out how a suspect process was started.  If, after tracking down the parent’s process start event, the Creator Process ID turns out to point to Explorer.exe, then it’s likely that the user simply started the process from the Start menu or desktop.
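To make that correlation concrete, here is a minimal Python sketch. It assumes the relevant events have already been exported from the Security log into simple records; the field names, timestamps and values below are illustrative, not the exact log property names.

    from datetime import datetime

    # Illustrative event records; in practice these would be parsed from
    # exported Security log data (event IDs 592/4688 and 593/4689).
    events = [
        {"time": "2013-03-01 09:15:02", "id": 4688, "pid": "0xed0",
         "name": r"C:\Windows\System32\notepad.exe", "parent_pid": "0x8c0"},
        {"time": "2013-03-01 09:20:45", "id": 4689, "pid": "0xed0",
         "name": r"C:\Windows\System32\notepad.exe"},
    ]

    def correlate(events):
        open_procs = {}   # pid -> most recent start event (PIDs get reused)
        for ev in sorted(events, key=lambda e: e["time"]):
            if ev["id"] in (592, 4688):          # process creation
                open_procs[ev["pid"]] = ev
            elif ev["id"] in (593, 4689):        # process termination
                start = open_procs.pop(ev["pid"], None)
                if start:
                    t0 = datetime.strptime(start["time"], "%Y-%m-%d %H:%M:%S")
                    t1 = datetime.strptime(ev["time"], "%Y-%m-%d %H:%M:%S")
                    print(f'{start["name"]} ran for {t1 - t0} '
                          f'(parent PID {start.get("parent_pid", "?")})')

    correlate(events)

Resolving the parent’s name is the same idea one step further back: find the earlier start event whose New Process ID equals this event’s Creator Process ID.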

These same events, when logged on servers, also provide a degree of auditing over privileged users, but be aware that many Windows administrative functions will show up simply as process starts for mmc.exe, since all Microsoft Management Console apps run within mmc.exe.

But beyond privileged and end-user monitoring, process tracking events help you track possible change control issues and trap advanced persistent threats.  When new software executes for the first time on a given system, it’s important to know: it either implies a significant change to the system or it could alert you to a new, unauthorized and possibly malicious program running for the first time.

The key to seeing this kind of activity is to compare the executable name in a recent event 592/4688 to the executable names in a whitelist – and thereby recognize new executables.

Of course, this method isn’t foolproof because someone could replace an existing executable (on your whitelist) with a new program that has the same name and path as the old.  Such a change would “fly under the radar” with process tracking.  But my experience with unauthorized changes that bypass change control, and with APTs, indicates that while such evasion is certainly possible, the methods described herein will catch their share of offenders and attackers.

To do this kind of correlation you need to enable process tracking on applicable systems (all systems if possible, including workstations) and then you need a SIEM solution that can compare the executable name in the current event to a “whitelist” of executables.

How you build that whitelist is important because it determines whether your criterion for a new executable is unique to that system, based on a “golden” system, or based on your entire environment.  The more specific your whitelist is to each system or type of system, the better.  You can build the whitelist either by scanning for all the EXE files on a given system or by analyzing the 592/4688 events over some period of time.  I prefer the latter because there are many EXE files on Windows computers that are never actually executed, and I’d like to know the first time any new EXE is run – whether it came with Windows and the installed applications out of the box or whether it is a new EXE recently dropped onto the system.  On the other hand, if you only want to detect when EXEs run that were not present on the system at the time the whitelist was created, then a list built by simply running “dir *.exe /s” will suffice.
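As a rough sketch of the whitelist comparison (illustrative only – a SIEM would handle the collection and matching for you), the Python snippet below builds a per-system whitelist from a baseline set of 592/4688 process-start events and flags any executable not seen during the baseline. The host names and paths are made up.

    # Build a per-system whitelist of executables from a baseline period of
    # process-start events (592/4688), then flag anything not on the list.
    baseline_events = [
        ("HOST1", r"c:\windows\system32\svchost.exe"),
        ("HOST1", r"c:\windows\explorer.exe"),
    ]
    recent_events = [
        ("HOST1", r"c:\windows\explorer.exe"),
        ("HOST1", r"c:\users\bob\appdata\local\temp\dropper.exe"),
    ]

    whitelist = {}
    for host, exe in baseline_events:
        whitelist.setdefault(host, set()).add(exe.lower())

    for host, exe in recent_events:
        if exe.lower() not in whitelist.get(host, set()):
            print(f"NEW EXECUTABLE on {host}: {exe}")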

If you opt to analyze a period of system activity, make sure that the period is long enough to cover the full usage profile and business process profile for that system – usually a month will do it. Take some time to experiment with Process Tracking events and I think you’ll find they are valuable for knowing what’s running on your systems and who’s running it.

SIEM Simplified for the Security No Man’s Land

In this blog post, Mike Rothman described the quandary facing the midsize business. With a few hundred employees, they have information that hackers want and actively try to get, but not the budget or manpower to fund dedicated IT security staff, nor the volume of business to interest a large outsourcer. This puts them in no-man’s land with a bull’s-eye on their backs. Hackers are highly motivated to monetize their efforts and will therefore cheerfully pick the lowest-hanging fruit they can get. It’s a wicked problem to be sure, and one that we’ve been focused on addressing in our corner of the IT security universe for some years now.

Our solution to this quandary is called SIEM SimplifiedSM and stems from the acceptance that, as a vendor, we could keep adding all sorts of bells and whistles to our product offering only to see an ever-shrinking percentage of users actually use them in the manner they were designed. Why? Simply put, who has the time? Just as Mike says, our customers are people in mid-size businesses, wearing multiple hats, fighting fires and keeping things operational. SIEM Simplified adds an expert crew at the EventTracker Control Center in Columbia, MD that does the basic blocking and tackling – the core ingredient if you want to put points on the board. By sharing the crew across multiple customers, it reduces the cost for each customer and increases the likelihood of finding the needle in the haystack. And because it’s our bread and butter, we can’t afford to get tired, take a vacation, fall sick or fall behind.

A decade-long focus on this problem as it relates to mid-size businesses has allowed us to tailor the solution to such needs. We use the behavior module to quickly spot new or out-of-the-ordinary patterns, and a wealth of existing reports and knowledge to do the routine but essential legwork of log review. Mike was correct in pointing out that “folks in security no-man’s land need … an advisor to guide them … They need someone to help them prioritize what they need to do right now.” SIEM Simplified delivers.  More information here.

EventTracker Recommendation Engine

Online shopping continues to bring more and more business to “e-tailers.”  Comscore says there was a 16% increase in holiday shopping this past season over the previous season. Some of this is attributed to the “recommendations” helpfully shown by the giants of the game, such as Amazon.

Here is how Amazon describes its recommendation algorithm: “We determine your interests by examining the items you’ve purchased, items you’ve told us you own, items you’ve rated, and items you’ve told us you like. We then compare your activity on our site with that of other customers, and using this comparison, are able to recommend other items that may interest you.”

Did you know that EventTracker has its own recommendation engine? It’s called Behavior Correlation and is part of EventTracker Enterprise. Just as Amazon learns about your browsing and buying habits and uses them to “suggest” other items, so also EventTracker auto-learns what is “normal” in your enterprise during an adaptive learning period. This can be as short as 3 days or as long as 15 days, depending on the nature of your network. In this period, various items such as IP addresses, users, administrators, process names, machines, USB serial numbers, etc. are learned. Once learning is complete, data from the most recent period is compared to the learned behavior to pinpoint both unusual activities and those never seen before. EventTracker then “recommends” that you review these to determine if they point to trouble.

Learning never ends, so the baseline is adaptive, refreshing itself continuously. User-defined rules can also be implemented wherein the comparison periods are specified rather than learned, and comparisons are performed not once a day but as frequently as once a minute.
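The underlying baseline-and-compare idea can be sketched in a few lines of Python. This is a generic illustration of the concept, not EventTracker’s actual implementation; the categories and values are invented.

    # Generic baseline-and-compare: learn the set of observed values per
    # category during a learning window, then flag never-before-seen items.
    from collections import defaultdict

    def learn(observations):
        baseline = defaultdict(set)          # category -> known values
        for category, value in observations:
            baseline[category].add(value)
        return baseline

    def compare(baseline, recent):
        for category, value in recent:
            if value not in baseline[category]:
                print(f"Never seen before: {category} = {value}")

    learning_period = [("user", "alice"), ("ip", "10.1.1.5"), ("process", "outlook.exe")]
    latest_period   = [("user", "alice"), ("ip", "203.0.113.9")]

    compare(learn(learning_period), latest_period)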

If you shop online and feel drawn to a “recommendation,” pause to reflect on how the same concept, applied to your logs, can also improve your IT security.

Cyber Security Executive Order

Based on early media reports, the Cyber Security executive order would seem to portend voluntary compliance on the part of U.S.-based companies with security standards developed in concert with the federal government.  Setting aside the irony of an executive order to voluntarily comply with standards that are yet to be developed, how should private and public sector organizations approach cyber security given today’s exploding threatscape and limited information technology budgets?  How best to prepare for more bad guys, more threats and more imposed standards with fewer people, less time and less money?

Back to basics.  First let’s identify the broader challenges: of course you’re watching the perimeter with every flavor of firewall technology and multiple layers of IDS, IPS, AV and other security tools.  But don’t get too comfortable: every organization that has suffered a damaging breach had all those things too.  Since every IT asset is a potential target, every IT asset must be monitored.  Easy to describe, hard to implement. Why?

Challenge number one: massive volumes of log data.  Every organization running a network with more than 100 nodes is already generating millions of audit and event logs.  Those logs are generated by users, administrators, security systems, servers, network devices and other paraphernalia.  These sources produce the raw data that tracks everything going on, from innocent to evil, without prejudice.

Challenge number two: unstructured data. Despite talk and movement toward audit log standards, log data remains widely variable with no common format across platforms, systems and applications, and no universal glossary to define tokens and values.  Even if every major IT player from Microsoft to Oracle (and HP and Cisco), along with several thousand other IT organizations were to adopt uniform, universal log standards today, we would still have another decade or two of the dreaded “legacy data” with which to contend.

Challenge number three: cryptic or non-human-readable logs. Unstructured data is difficult enough, but adding further to the complexity is the fact that most log data content and structure are defined by developers for developers or administrators.  Don’t assume that security officers and analysts, senior management, help desk personnel or even tenured system administrators can glance at a log and quickly and accurately understand its relevance or, more importantly, what to do about it.

Solution?  Use what you already have more wisely.  Implement a log monitoring solution that will ingest all of the data you already generate (and largely ignore until after you discover there’s a real problem), process it in real time using built-in intelligence, and present the analysis immediately in the form of alerts, dashboards, reports and search capabilities.  Take a poorly designed and voluminous asset (audit logs) and turn it into actionable intelligence.  It isn’t as difficult as it sounds, though it requires rigorous discipline and a serious time commitment.
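As a toy illustration of that ingest-normalize-analyze idea in Python (the log formats, patterns and alert rule here are invented for the example, not drawn from any particular product):

    import re

    # A couple of made-up raw log formats, normalized into one schema.
    patterns = [
        ("syslog", re.compile(r"sshd\[\d+\]: Failed password for (?P<user>\S+) from (?P<ip>\S+)")),
        ("webapp", re.compile(r"LOGIN_FAIL user=(?P<user>\S+) src=(?P<ip>\S+)")),
    ]

    def normalize(line):
        for source, pat in patterns:
            m = pat.search(line)
            if m:
                return {"source": source, "event": "auth_failure", **m.groupdict()}
        return None

    raw = [
        "Mar 1 09:00:01 host1 sshd[402]: Failed password for root from 198.51.100.7",
        "2013-03-01T09:00:05 LOGIN_FAIL user=admin src=198.51.100.7",
    ]

    failures = {}
    for line in raw:
        ev = normalize(line)
        if ev:
            failures[ev["ip"]] = failures.get(ev["ip"], 0) + 1
            if failures[ev["ip"]] >= 2:          # trivial alert rule
                print(f"ALERT: repeated auth failures from {ev['ip']}")

A real deployment adds hundreds of formats, enrichment and correlation on top of this skeleton, but the ingest-normalize-alert flow is the same.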

Cyber criminals employ the digital equivalent of what our military refers to as an “asymmetrical tactic.” Consider a hostile emerging superpower in Asia that directly or indirectly funds a million cyber warriors at the U.S. equivalent of $10 a day; cheap labor in a global economy.  No organization, not even the federal government, the world’s largest bank or a 10-location retailer, has unlimited people, time and money to defend against millions of bad guys attacking on a much lower (asymmetrical) operational budget.

IT Operations: Problem-Solvers and Infrastructure Maintenance, or Solution Providers?

On a recent flight returning from an engagement with a client, my seating companion and I exchanged a few words as we settled in, before turning to the iPod music and games we use to distract ourselves from the hassles of travel. He was a cardiologist, and introduced himself as such, before quickly describing his job as basically ‘a glorified plumber’. We both chuckled, knowing that while the two fields share some basic concepts, there is much more to cardiology than managing and controlling flow. BTW, my own practical plumbing experiences convinced me of the value of a good plumber.

However, this set me off reflecting on how IT perceives and presents itself. There is no question that IT has progressed far from the days when a pundit launched his career asserting that “IT Doesn’t Matter”. IT operations, and the impact of the computer and communications technology they apply, are on display and felt everywhere around us – facilitating, speeding, complicating, escalating risk and changing our lives, professional and private. From pervasive monitoring to automated remote management and control over energy consumption, work habits, even purchasing, computers operate in and impact it all.

In the enterprise today, technology itself is recognized as playing a vital role in business operations and success. Recent surveys of business executives from CEOs to CFOs to CIOs document their view that the application of information technology is linked directly to enterprise operations and growth. Unfortunately, too many IT staff are still struggling to come to terms with that impact and, more worrisome, how to respond to that reality. That is a problem for both the IT staff and the enterprise.

All too often, IT staff see themselves primarily as providers and maintainers (or restrictors) of access to technology, all the while ignoring the role and potential of IT as a proactive and involved participant in activities that contribute to enterprise growth, profitability and revenue. IT isn’t simply maintenance, cost control and plumbing. IT is, more than ever before, a potential source of competitive advantage and growth. Yet many business staffs view IT as simply a source of cookie-cutter services which can easily, efficiently and even more effectively come from an outside organization.

Also familiar is the tension between IT as the ‘slow-to-respond’ gatekeeper for the introduction and adoption of new technologies and the business unit manager/sales/marketing professional ‘just trying to get the job done’. Neither is ‘wrong’; each has well-founded arguments that support their roles. However, the evolution in technology and in the enterprise, including the data center, raises the risks of such conflict substantially.

The litany of change – cloud, big data, infrastructure as code, mobility, workload-optimized infrastructure, deep analytics, etc. – is familiar. The very nature of the data center is changing as computing moves from ‘systems of record’, i.e. traditional operational environments with dedicated infrastructure where the infrastructure limited the applications, to ‘systems of engagement’, i.e. environments that are responsive and adaptive to the operating environment, demand and the specific service provided. The implications of this shift for IT are radical, exciting and still very much emerging. More fundamentally, these changes are revising how IT views, uses, applies and makes decisions about technology. IT must determine how to integrate, balance and effectively operate in an environment consisting of a combination of dynamic and fixed resources, infrastructure and assets.

The evolution of technology is also changing what IT solution providers bring to market. The emphasis is on products and solutions that are smarter, more integrated, simpler to use, more comprehensive in application, quicker to implement and able to deliver a larger and faster payback by whatever measure of success is current.

IT needs such solutions because they are the only way to meet the demands of its users while freeing resources for other activities. Non-technical business stakeholders want these solutions because they see the power of applied technology to resolve real problems. Risk arises when the business side fails to see the potential of its own IT staff to harness the power of technology, and when business professionals stop involving IT in their adoption, introduction and use of technology.

Our own interactions with clients and vendors indicate that a transition within IT from problem-solver/technology maintainer to solution provider-business driver is underway. Unfortunately, it is occurring at a pace that is much slower than is healthy for IT and the enterprise. IT has to be proactive in positioning itself as an active partner in and contributor to business success. Fortunately, many vendors recognize the challenge facing their IT clients and are making the changes in their product offerings, training and presentation to support IT in the transition.

SIEM in the Social Era

The value proposition of our SIEM Simplified offering is that you can leave the heavy lifting to us. What is undeniable is that getting value from SIEM solutions requires patiently sifting through millions of logs and dozens of reports and alerts to find nuggets of value. It’s quite similar to detective work.

But does that not mean you are somehow giving up power? Letting someone else get a claw hold in your domain?

Valid question, but consider this from Nilofer Merchant, who says: “In the Social Era, value will be (maybe even already is) no longer created primarily by people who work for you or your organization.”

Isn’t power about being the boss?
The Social Era has disrupted the traditional view of power, which has always been about your title, span of control and budget. Look at Wikipedia or Kickstarter, where being powerful is about championing an idea. With SIEM Simplified, you remain in control, notified as necessary, in charge of any remediation.

Aren’t I paid to know the answer?
Not really. Being the keeper of all the answers has become less important with the rise of fantastic search tools and the ease of sharing, as compared to say even 10 years ago. Merchant says “When an organization crowns a few people as chiefs of answers, it forces ideas to move slowly up and down the hierarchy, which makes the organization resistant to change and less competitive. The Social Era raises the pressure on leaders to move from knowing everything to knowing what needs to be addressed and then engaging many people in solving that, together.” Our staff does this every day, for many different environments. This allows us to see the commonalities and bring issues to the fore.

Does it mean blame if there is failure and no praise if it works?
In a crowd-sourcing environment, there are many more hands in every pie. In practice, this leads to more ownership from more people, not less. Consider Wikipedia as an example of this. It does require different skills: collaborating instead of commanding, sharing power rather than hoarding it. After all, we are only successful if you are. Indeed, as a provider of the service, we are always mindful that this applies to us more than it does to you.

As a provider of services, we see clearly that the most effective engagements are the ones where we can avoid the classic us/them paradigm and instead act as a badgeless team. The Hubble Space Telescope is an excellent example of this type of effort.

It’s a Brave New World, and it’s coming at you, ready or not.

Big Data and Information Inequality

Mike Wu, writing in TechCrunch, observed that in all realistic data sets (especially big data), the amount of information one can extract from the data is always much less than the data volume (see figure below).

[Figure: information extracted vs. data volume]

In his view, given the above, the value of big data is hugely exaggerated. He then goes on to argue that this is actually a strong argument for why we need even bigger data: because the amount of valuable insight we can derive from big data is so very tiny, we need to collect even more data and use more powerful analytics to increase our chances of finding it.

Now machine data (aka log data) is certainly big data, and it is certainly true that obtaining insights from such datasets is a painstaking (and often thankless) job, but I wonder if this means we need even more data. Methinks we need to be able to better interpret the big data set and its relevance to “events”.

Over the past two years, we have been deeply involved in “eating our own dog food,” as it were. At multiple EventTracker installations that are nationwide in scope and span thousands of log sources, we have been working to extract insights for presentation to the network owners. In some cases, this is done with a lot of cooperation from the network owner, and we have a good understanding of the IT assets and the actors who use or abuse them. We find that with such involvement we are better able to risk-prioritize what we observe in the data set and map it to business concerns. In other cases, where there is less interaction with the network owner and we know less about the actors or the relative criticality of assets, we fall back on past experience and/or vendor-provided information as to what constitutes an incident.  It is the same dataset in both cases, but there is more value in one case than the other.

To say it another way, to get more information from the same data we need other types of context to extract signal from noise. Enabling logging at a more granular level from the same devices, thereby generating an ever bigger dataset, won’t increase the signal level. EventTracker can merge change audit data, netflow information and vulnerability scan data to enable a greater signal-to-noise ratio. That is a big deal.
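A crude illustration of how context changes the signal: the same “new process” observation scores very differently depending on asset criticality and known vulnerabilities. The data and scoring in this Python sketch are invented for the example, not EventTracker internals.

    # Score the same raw observation differently depending on asset context.
    asset_context = {
        "db01":   {"criticality": "high", "open_vulns": 3},
        "kiosk7": {"criticality": "low",  "open_vulns": 0},
    }

    def score(event):
        ctx = asset_context.get(event["host"], {"criticality": "unknown", "open_vulns": 0})
        base = 5 if event["type"] == "new_process" else 1
        if ctx["criticality"] == "high":
            base *= 3                      # critical assets weigh more
        base += ctx["open_vulns"]          # known exposure adds to the score
        return base

    for ev in [{"host": "db01", "type": "new_process"},
               {"host": "kiosk7", "type": "new_process"}]:
        print(ev["host"], "risk score:", score(ev))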

Small Business: too small to care?

Small businesses around the world tend to be more innovative and cost-conscious. Most often, the owners tend to be younger and therefore more attuned to being online. The efficiencies that come from being computerized and connected are more obvious and attractive to them. But we know that if you are online, you are vulnerable to attack. Are these small businesses too small for hackers to care about?

Two recent reports say no.

The UK Information Security Breaches Survey 2012, published by PwC, shows:

  • 76% of small business had a security breach
  • 15% of small businesses were hit by a denial of service attack
  • 20% of small businesses lost confidential data and 80% of these breaches were serious
  • The average cost of a small business’s worst security breach was between 15K and 30K pounds
  • Only 8% of small businesses monitor what their staff post on social sites
  • 34% of small businesses allow smart phones and tablets to connect to their network but have done nothing about it
  • On average, IT security consumes 8% of IT spending, but 58% of small businesses make no attempt to evaluate the effectiveness of the expenditure

From the US, the 2012 Verizon data breach report shows:

  • Restaurant and POS systems are popular targets.
  • Companies with 11-100 employees from 36 countries had the maximum number of breaches.
  • Top threats to small businesses were external attacks against servers
  • 83% of the theft was by professional cybercriminals, for profit
  • Keyloggers designed to capture user input were present in 48% of breaches
  • The most common malware injection vector is installation by a remote attacker
  • Payment card info and authentication credentials were the most stolen data
  • The initial compromise required basic methods with no customization; automated scripts can do it
  • More than 79% of attacks were opportunistic; large-scale automated attacks are opportunistically attacking small to medium businesses, and POS systems frequently provide the opportunity
  • In 72% of cases, it took only minutes from initial attack to compromise but hours for data removal and days for detection
  • More than 55% of breaches remained undiscovered for months
  • More than 92% of the breaches were reported by an external party
  • Only 11% were monitoring access, which is called out in Requirement 10 of PCI-DSS

Lesson learned? Small may be beautiful, but in the interconnected world we live in, not too small to be hacked. Protect thyself – start simple by changing remote access credentials and enabling a firewall, and monitor and mine your logs. ‘Nuff said.

A smartphone named Desire

Is this true for you: that your smartphone has merged your private and work lives? Smartphones now contain—by accident or by design—a wealth of information about the businesses we work for.

If your phone is stolen, the chance of getting it back approaches zero. How about lost in an elevator or the back seat of a taxi? Will it be returned? More importantly, from our point of view, what about the info on it – the corporate info?

Earlier this year, the Symantec HoneyStick project conducted an experiment by “losing” 50 smartphones in five different cities: New York City; Washington D.C.; Los Angeles; San Francisco; and Ottawa, Canada. Each had a collection of simulated corporate and personal data on them, along with the capability to remotely monitor what happened to them once they were found. They were left in high traffic public places such as elevators, malls, food courts, and public transit stops.

Key findings:

  • 96% of lost smartphones were accessed by the finders of the devices
  • 89% of devices were accessed for personal related apps and information
  • 83% of devices were accessed for corporate related apps and information
  • 70% of devices were accessed for both business and personal related apps and information
  • 50% of smartphone finders contacted the owner and provided contact information

The corporate-related apps included remote access as well as email accounts. What is the lesson for corporate IT staff?

  • Take inventory of the mobile devices connecting to your company’s networks; you can’t protect and manage what you don’t know about.
  • Track resource access by mobile devices. For example, if you are using MS Exchange, ActiveSync logs can tell you a whole lot about such access (see the sketch after this list).
  • See our white paper on the subject
  • Track all remote login to critical servers
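For instance, here is a rough Python sketch of mining IIS logs for Exchange ActiveSync activity to inventory which devices each user syncs. The sample line and field layout are illustrative only, since the fields actually logged depend on your IIS logging configuration.

    import re
    from collections import defaultdict

    # Collect which devices each user has synced, from IIS W3C log lines
    # containing Exchange ActiveSync requests. This simply pattern-matches
    # the User and DeviceId parameters in the logged query string.
    pattern = re.compile(
        r"Microsoft-Server-ActiveSync.*?User=(?P<user>[^&\s]+).*?DeviceId=(?P<device>[^&\s]+)",
        re.I)

    devices_by_user = defaultdict(set)
    sample_lines = [
        "2013-03-01 09:00:01 GET /Microsoft-Server-ActiveSync Cmd=Sync&User=bob&DeviceId=ABC123&DeviceType=iPhone 443 ...",
    ]
    for line in sample_lines:          # in practice, read the IIS log files here
        m = pattern.search(line)
        if m:
            devices_by_user[m.group("user")].add(m.group("device"))

    for user, devices in devices_by_user.items():
        print(user, "->", sorted(devices))

An unexpected DeviceId against a known user, or a user syncing far more devices than policy allows, is exactly the kind of finding this inventory surfaces.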

See our webinar, ‘Using Logs to Deal With the Realities of Mobile Device Security and BYOD.’