Three trends in Enterprise Networks

There are three trends in Enterprise Networks:

1) Internet of Things Made Real. We’re all familiar with the challenge of big data: the volume, velocity and variety of data are overwhelming. Studies confirm the conclusion many of you have reached on your own: there’s more data crossing the internet every second than existed on the internet in total 20 years ago. And now, as customers deploy more sensors and devices in every part of their business, the data explosion is just beginning. This concept, called the “Internet of Things,” is a hot topic. Many businesses are uncovering efficiencies based on how connected devices drive decisions with more precision across their organizations.

2) “Reverse BYOD.” Most of us have seen firsthand how a mobile workplace can blur the line between our personal and professional lives. Today’s road warrior isn’t tethered to a PC in a traditional office setting. They move between multiple devices throughout their workdays with the expectation that they’ll be able to access their settings, data and applications. Forrester estimates that nearly 80 percent of workers spend at least some portion of their time working out of the office and 29 percent of the global workforce can be characterized as “anywhere, anytime” information workers. This trend was called “bring your own device” or “BYOD.” But now we’re seeing the reverse. Business-ready, secure devices are getting so good that organizations are centrally deploying mobility solutions that are equally effective at work and play.

3) Creating New Business Models with the Cloud. The conversation around cloud computing has moved from “if” to “when.” Initially driven by the need to reduce costs, many enterprises saw cloud computing as a way to move non-critical workloads such as messaging and storage to a more cost-efficient, cloud-based model. However, the larger benefit comes from customers who identify and grow new revenue models enabled by the cloud. The cloud provides a unique and sustainable way to enable business value, innovation and competitive differentiation, all of which are critical in a global marketplace that demands more mobility, flexibility, agility and better quality across the enterprise.

The 5 stages of SIEM Implementation

Are you familiar with the Kübler-Ross 5 Stages of Grief model?

SIEM implementations (and indeed most enterprise software installations) bear a striking resemblance.

  • Stage One: Denial – The frustration new users feel in learning the terminology and delivering on the “asks” of the implementation makes them question the time investment.
  • Stage Two: Despair – The self-doubt most implementation teams feel in delivering on the promises of a complex security technology with many moving parts.
  • Stage Three: Hopeful Performance – As they learn, and even begin using, the SIEM solution, partners build confidence in their ability to be recognized for competence and potential.
  • Stage Four: Soaring Execution – The exalted status of a “go-to” team member, connected at the hip to the vendor support team or service provider, earning accolades from management. The team member has delivered value to the organization, and the business is reaping the rewards. Personal relationships with vendor or service reps are genuine and mutually beneficial.
  • Stage Five: Devolution/Plateau – Complacency, born of a lack of vision or agility in embracing the next big thing, drags down the relationship. Other partners, hungrier for the customer’s attention, take over the mindshare once enjoyed.

Increasing Security and Driving Down Costs Using the DevOps Approach

The prevailing IT requirement tends toward doing more work faster, but with fewer resources to do that work, many companies must reconsider their traditional approaches to developing, deploying and maintaining software. One such approach, called DevOps, first gained traction as a viable software development and deployment strategy in Europe in the late 2000s. DevOps is a marriage of convenience between software development (Dev) and IT operations (Ops) that seeks to shorten the development/testing/deployment/maintenance lifecycle by moving developers closer to a production software environment, while simultaneously giving Ops folks more input and visibility into the development process. The resulting DevOps approach is also known as continuous delivery or continuous deployment, and relies on close collaboration and communication between Dev and Ops personnel.

Rather than traditional development lifecycles with distinct phases that must be completed sequentially, DevOps supports a process where different software builds are continuously moved through the development lifecycle in parallel, with builds in different phases of development at any given moment. The result is a number of frequent, smaller releases, versus months-long development cycles that produce large-scale software upgrades or deployments just once or twice a year. That’s a long time to wait to deliver critical features or bug fixes, especially if your competitors adopt shorter cycles. When implemented correctly, DevOps has the potential to transform software development and deployment by saving time and money while increasing application security. For the purposes of this discussion, we include QA as part of Dev, because that’s where QA functions traditionally reside.

Go Down to the Crossroads…and Automate

Thus, DevOps resides at the crossroads of software development, IT operations and software quality assurance (QA). Just as a three-legged stool requires all three legs to provide a stable foundation, the DevOps approach requires participation from Dev, Ops and QA to succeed. Automation is another typical requirement of the DevOps approach, simply because manually performing the myriad steps required to get software ready for release is not feasible when you deploy new releases weekly, daily, or even multiple times a day. Testing in a DevOps environment requires a serious investment in automation, from unit testing to integration testing to regression testing to QA testing to load testing to user acceptance testing. There is simply no way to adequately test an application without the capabilities and quick turnaround times that test automation software delivers. Note that most test automation software integrates easily with log file monitoring tools, and that the data garnered from those log files provides feedback critical to the DevOps process.

Bridging the Gap Between Dev and Ops

The whole point of DevOps is to shorten the time it takes for Dev and Ops teams to communicate. This collaboration cross-trains Dev and Ops personnel on the challenges and opportunities inherent to each team’s perspective. Developers gain important insights into how their software works—or doesn’t work—in the real world. Tight collaboration between Dev and Ops also gives the Ops team a channel to provide feedback directly to developers when they see a production issue with the application. Similarly, by asking developers to work side-by-side with Ops, those developers can learn far more about their software creations than they could ever learn while sitting in a cubicle somewhere, totally removed from a production environment. Such collaboration between teams that are equally responsible for the success of a software deployment reduces finger-pointing by putting Dev and Ops in the same boat. Cross-trained personnel, well-defined communication processes and a team spirit are some of the substantial and competitive advantages that come from a DevOps approach.

Increasing Security While Driving Down Costs

The shortened DevOps development cycle offers one further critical advantage, almost by accident: increased security for your applications. Shorter development feedback cycles mean that any security vulnerabilities discovered in testing or from user feedback can be addressed almost immediately. Microsoft, for example, typically issues bug fixes once a month on the aptly nicknamed Patch Tuesday. Have you ever thought about the exposure of users’ computers that might be vulnerable during the gaps between Patch Tuesdays? What about users who fail to download patches when they are released? DevOps avoids this conundrum by reducing the time it takes to fix critical security vulnerabilities in an application. Vulnerabilities discovered in production can be quickly routed to the proper Dev personnel so that the issue can be verified and addressed in a timely fashion. This quick-reaction capability is a key advantage for software developed using DevOps principles.

DevOps Focuses on Stakeholder Deliverables

Last but not least, a DevOps approach increases the focus of software teams on what really matters: delivering robust software on a compressed schedule that meets or exceeds the expectations of end-users, business teams and company management. With this in mind, remember that DevOps is not an end unto itself. Rather, DevOps is a useful tool that can provide visibility, agility, cost reductions, increased security and significant competitive advantages to a company developing software.

Earl Follis is a long-time IT professional who has worked as a technical trainer, technical evangelist, network administrator, and in other IT positions for a variety of companies that include IBM-Tivoli, Nimsoft (acquired by CA), Northrop Grumman, Thomas-Conrad (acquired by Compaq) and Dell. He’s also the co-author of numerous books, including …For Dummies titles on Windows Server and NetWare, and has written for many print and Web publications. His primary areas of technical interest include networking, operating systems, cloud computing and unified monitoring and management solutions.

Ed Tittel is a 30-plus year IT veteran who’s worked as a software developer, networking consultant, technical trainer, writer, and expert witness. Perhaps best known for creating the Exam Cram series in the late 1990s, Ed has contributed to over 100 books on a variety of computing topics, including numerous titles on information security and HTML. Ed also blogs regularly for Tech Target (Windows Enterprise Desktop), Tom’s IT Pro, GoCertify.com, and PearsonITCertification.com.

How much security is enough?

Ask a pragmatic CISO about achieving a state of complete organizational security and you’ll quickly be told that this is an unrealistic and financially imprudent goal. So then how much security is enough?

Rather than merely complying with regulations or implementing “best practice,” think in terms of optimizing the outcome of the security investment. Never mind the theoretical state of absolute security; think instead about determining and managing risk to critical business processes and assets.

Risk appetite is defined by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite influences the entity’s culture, operating style, strategies, resource allocation, and infrastructure. Risk appetite is not a constant; it is influenced by and must adapt to changes in the environment. Risk tolerance could be defined as the residual risk the organization is willing to accept after implementing risk-mitigation and monitoring processes and controls. One way to implement this is to define levels of residual risk, and therefore the levels of security that are “enough.”

Risk-Wall

The basic level of security is the diligent one, the staple of every business network: the organization is able to deal with known threats. The hardened level adds the ability to be proactive (with vulnerability scanning), supports compliance and adds the ability to perform forensic analysis. At the advanced level, predictive capabilities are introduced and the organization develops the ability to deal with unknown threats.

If it all sounds a bit overwhelming, take heart; managed security services can relieve your team of the heavy lifting that is a staple of IT Security.

Bottom line – determine your risk appetite to determine how much security is enough.

Top 6 uses for SIEM

Security Information and Event Management (SIEM) is a term coined by Gartner in 2005 to describe technology used to monitor and help manage user and service privileges, directory services and other system configuration changes, as well as to provide log auditing and review and incident response.

The core capabilities of SIEM technology are the broad scope of event collection and the ability to correlate and analyze events across disparate information sources. Simply put, SIEM technology collects log and security data from computers, network devices and applications on the network to enable alerting, archiving and reporting.

Once log and security data has been received, you can:

  • Discover external and internal threats

Logs from firewalls and IDS/IPS sensors are useful for uncovering external threats; logs from e-mail and proxy servers can help detect phishing attacks; logs from badge and thumbprint scanners are used to track physical access.

  • Monitor the activities of privileged users

Computer, network device and application logs are used to develop a trail of activity across the network for any user, but especially for users with high privileges.

  • Monitor server and database resource access

Most enterprises have critical data repositories in files, folders and databases, and these are attractive targets for attackers. Monitoring all server and database resource access improves security.

  • Monitor, correlate and analyze user activity across multiple systems and applications

With all logs and security data in one place, an especially useful benefit is the ability to correlate user activity across the network (a small sketch of this follows the list).

  • Provide compliance reporting

Compliance is often the source of funding for SIEM. When properly set up, a SIEM can reduce auditor on-site time by up to 90%; more importantly, compliance becomes true to the spirit of the law rather than merely a check-the-box exercise.

  • Provide analytics and workflow to support incident response

Answer the who, what, when and where questions. Such questions are at the heart of forensic activities and critical to drawing valuable lessons.
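To make the correlation idea concrete, here is a minimal sketch in Python (referenced from the user-activity item above). The event tuples, source names and the failures-then-success rule are illustrative assumptions, not EventTracker’s schema or logic; the point is only to show what correlating normalized events from disparate sources can look like.

from collections import defaultdict
from datetime import datetime, timedelta

# Normalized events: (timestamp, source, user, action). In practice these would
# come from firewall, VPN, Windows and Linux logs after field normalization.
events = [
    (datetime(2014, 1, 6, 9, 0), "vpn",     "jsmith", "login_failure"),
    (datetime(2014, 1, 6, 9, 2), "windows", "jsmith", "login_failure"),
    (datetime(2014, 1, 6, 9, 3), "linux",   "jsmith", "login_failure"),
    (datetime(2014, 1, 6, 9, 5), "windows", "jsmith", "login_success"),
]

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # failures on this many distinct sources before a success

by_user = defaultdict(list)
for ts, source, user, action in events:
    by_user[user].append((ts, source, action))

for user, items in by_user.items():
    items.sort()
    for ts, source, action in items:
        if action != "login_success":
            continue
        # Sources on which this user failed to log in during the preceding window.
        failed_sources = {s for t, s, a in items
                          if a == "login_failure" and ts - WINDOW <= t < ts}
        if len(failed_sources) >= THRESHOLD:
            print(f"ALERT: {user} logged on to {source} after failures on "
                  f"{sorted(failed_sources)} within {WINDOW}")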

SIEM technology is routinely cited as a basic best practice by regulatory standards, and its absence regularly shows up as a glaring weakness in data breach post mortems.

Want the benefit but not the hassle? Consider SIEM Simplified, our service where we do the disciplined blocking and tackling which forms the core of any security or compliance regime.

How to analyze login and pre-authentication failures for Windows Server 2003 R2 and below

Analyzing all the login and pre-authentication failures within your organization can be tedious. There are thousands of login failures generated for several reasons. Here we will discuss the different event IDs and error codes and how you can simplify the login failure review process.

First you need to know the event IDs related to login and pre-authentication failures.

The login failure event IDs are 529, 530, 531, 532, 533, 534, 535, 536, 537 and 539. You can learn about the other logon event IDs here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.

A sample event description for event 529 is:

Logon Failure
Reason: Unknown user name or bad password
User Name: %1
Domain: %2
Logon Type: %3
Logon Process: %4
Authentication Package: %5
Workstation Name: %6

Windows Server 2003 adds some extra fields to the event description:

Caller User Name:-
Caller Domain:-
Caller Logon ID:-
Caller Process ID:-
Transited Services:-
Source Network Address:10.42.42.180
Source Port:0

NOTE: The only difference in event IDs 529, 530, 531, 532, 533, 534, 535, 536, 537 and 539 is the reason for failure. See below:

Event ID  Failure Reason
529       Unknown user name or bad password
530       Account logon time restriction violation
531       Account currently disabled
532       The specified user account has expired
533       User not allowed to log on at this computer
534       The user has not been granted the requested logon type at this machine
535       The specified account's password has expired
536       The NetLogon component is not active
537       An unexpected error occurred during logon
539       Account locked out

So how do we analyze these events efficiently and effectively? You need to look within the event description. In the login failure event description we only care about the failure reason, user name, logon type, workstation name and source network address. The rest is all noise.

If you create a flex report with EventTracker, it will display only the required fields instead of the entire event description. It also provides a summary based on the total number of events for each failure type and user name. See the sample below:

Summary

Instead of going through hundreds of pages of a lengthy report, the report below provides a quick analysis of login failures based on failure reason and user name. This allows you to efficiently and effectively analyze login failures in your environment.

Details

Details
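For readers who want to reproduce the spirit of that summary outside the product, here is a minimal Python sketch. The input format (a list of dicts with event_id and user keys) is an illustrative stand-in for exported event data, not EventTracker’s native format; the failure reasons are the well-known meanings of event IDs 529-539 listed in the table above.

from collections import Counter

FAILURE_REASON = {
    529: "Unknown user name or bad password",
    530: "Account logon time restriction violation",
    531: "Account currently disabled",
    532: "The specified user account has expired",
    533: "User not allowed to log on at this computer",
    534: "The user has not been granted the requested logon type at this machine",
    535: "The specified account's password has expired",
    536: "The NetLogon component is not active",
    537: "An unexpected error occurred during logon",
    539: "Account locked out",
}

def summarize(events):
    """events: iterable of dicts with 'event_id' and 'user' keys."""
    counts = Counter()
    for ev in events:
        reason = FAILURE_REASON.get(ev["event_id"])
        if reason:  # skip anything that is not a logon-failure event
            counts[(reason, ev["user"])] += 1
    return counts

sample = [
    {"event_id": 529, "user": "jsmith"},
    {"event_id": 529, "user": "jsmith"},
    {"event_id": 539, "user": "svc_backup"},
]
for (reason, user), n in summarize(sample).most_common():
    print(f"{n:5d}  {reason:45s}  {user}")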

Now let’s discuss the pre-authentication failure event.

The pre-authentication failure event ID is 675. Its event description contains:
User Name: %1
User ID:  %2
Service Name: %3
Pre-Authentication Type: %4
Failure Code: %5
Client Address: %6

Here it is very important to analyze the failure codes. There are many failure codes; I would recommend concentrating on the ones below:

Analyze Failure Codes

I would recommend using the same flex report format that we did above to get the summary counts based on failure code and user name.

Flex Report

Details

Details
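The same summarization works for event 675; here is a minimal Python sketch. The record format is again an illustrative stand-in, and only a handful of commonly seen Kerberos failure codes are mapped, so extend the table with the codes that matter in your environment.

from collections import Counter

COMMON_FAILURE_CODES = {
    "0x6":  "Client not found in Kerberos database (bad user name)",
    "0x12": "Account disabled, expired, or locked out",
    "0x17": "Password has expired",
    "0x18": "Pre-authentication failed (bad password)",
    "0x25": "Clock skew too great",
}

def summarize_preauth(events):
    """events: iterable of dicts with 'failure_code' and 'user' keys."""
    counts = Counter()
    for ev in events:
        code = ev["failure_code"]
        label = COMMON_FAILURE_CODES.get(code, f"other ({code})")
        counts[(label, ev["user"])] += 1
    return counts

sample = [
    {"failure_code": "0x18", "user": "jsmith"},
    {"failure_code": "0x18", "user": "jsmith"},
    {"failure_code": "0x25", "user": "kiosk01$"},
]
for (label, user), n in summarize_preauth(sample).most_common():
    print(f"{n:5d}  {label:50s}  {user}")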

This method is far more efficient and effective for analyzing pre-authentication failures than the traditional approach, which doesn’t tell you how many failures were associated with each user name, each failure code (summary counts, etc.), or a particular workstation. Wading through a traditional report covering hundreds of events every day quickly becomes nearly impossible.

TMI, Too Little Analysis

The typical SIEM implementation suffers from TMI, TLA (Too Much Information, Too Little Analysis). And if any organization that’s recently been in the news knows this, it’s the National Security Agency (NSA). The Wall Street Journal carried a story quoting William Binney, who rose through the ranks at the NSA over a 30-year career, retiring in 2001. “The NSA knows so much it cannot understand what it has,” Binney said. “What they are doing is making themselves dysfunctional by taking all this data.”

Most SIEM implementations start from this premise: open the floodgates and gather everything, because we are not sure what we are specifically looking for; and, more importantly, the auditors don’t help and the regulations are vague and poorly worded.

Lt. Gen. Clarence E. McKnight, former head of the Signal Corps, opined that “The issue is a straightforward one of simple ability to manage data effectively in order to provide our leaders with actionable information. Too much raw data compromises that ability. That is all there is to it.”

A presidential panel recently recommended the NSA shut down its bulk collection of telephone call records of all Americans. It also recommended creation of “smart software” to sort data as it is collected, rather than accumulate vast troves of information for sorting out later. The reality is that the collection becomes an end in itself, and the sorting out never gets done.

The NSA may be a large, powerful bureaucracy, intrinsically resistant to change, but how about your organization? If you are seeking a way to get real value out of SIEM data, consider co-sourcing that problem to a team that does that for a living. SIEM Simplified was created for just that purpose. Switch from TMI, TLA (Too Much Information, Too Little Analysis) to JEI, JEA (Just Enough Information, Just Enough Analysis).

EventTracker and Heartbleed

Summary:

The usage of OpenSSL in EventTracker v7.5 is NOT vulnerable to Heartbleed.

Details:

A lot of attention has focused on CVE-2014-0160, the Heartbleed vulnerability in OpenSSL. According to http://heartbleed.com, OpenSSL 0.9.8 is NOT vulnerable.

The EventTracker Windows Agent uses OpenSSL indirectly if the following options are enabled and used:

1) Send Windows events as syslog messages AND use the FTP server option to transfer non-real-time events to an FTP server. To support this mode of operation, WinSCP.exe v4.2.9 is distributed as part of the EventTracker Windows Agent. This version of WinSCP.exe is compiled with OpenSSL 0.9.8, as documented in http://winscp.net/eng/docs/history_old (v4.2.6 onwards). Accordingly, the EventTracker Windows Agent is NOT vulnerable.

2) Configuration Assessment (SCAP). This optional feature uses ovaldi.exe v5.8 Build 2, which in turn includes OpenLDAP v2.3.27 as documented in the OVALDI-README distributed with the EventTracker install package. This version of OpenLDAP uses OpenSSL v0.9.8c, which is NOT vulnerable.

Notes:

  • EventTracker Agent uses Microsoft secure channel (Schannel) for transferring syslog over SSL/TLS. This package is NOT vulnerable as noted here.
  • We recommend that all customers who may be vulnerable follow the guidance from their software distribution provider.  For more information and corrective action guidance, please see the information from US Cert here.

Top 5 reasons IT Admins love logs

Top 5 reasons IT Admins love logs:

1) Answer the ‘W’ questions

Who, what, where and when; critical files, logins, USB inserts, downloads…see it all

2) Cut ’em off at the pass, ke-mo sah-bee

Get an early warning before the train jumps the track. It’s what IT admins do.

3) Demonstrate compliance

Don’t even try to demonstrate compliance until you get a log management solution in place. Reduce on-site auditor time by 90%.

4) Get a life

Want to go home on time and enjoy the weekend? How about getting proactive instead of reactive?

5) Logs tell you what users don’t

“It wasn’t me. I didn’t do it.” Have you heard this before? Logs don’t lie.

Avenues to Compromise: Credential Theft

After an attacker has compromised a target infrastructure, the typical next step is credential theft. The objective is to propagate compromise across additional systems, and eventually target Active Directory and domain controllers to obtain complete control of the network.

Attractive Accounts for Credential Theft
Credential theft attacks are those in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts.

Activities that Increase the Likelihood of Compromise
Because the targets of credential theft are usually highly privileged domain accounts and “very important person” (VIP) accounts, it is important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.

These activities are:

  • Logging on to unsecured computers with privileged accounts
  • Browsing the Internet with a highly privileged account
  • Configuring local privileged accounts with the same credentials across systems
  • Overpopulation and overuse of privileged domain groups
  • Insufficient management of the security of domain controllers.

Privilege Elevation and Propagation
Specific accounts, servers, and infrastructure components are usually the primary targets of attacks against Active Directory.

These accounts are:

  • Permanently privileged accounts
  • VIP accounts
  • “Privilege-Attached” Active Directory accounts
  • Domain controllers
  • Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers

Although pass-the-hash (PtH) and other credential theft attacks are ubiquitous today, it is because there is freely available tooling that makes it simple and easy to extract the credentials of other privileged accounts when an attacker has gained Administrator- or SYSTEM-level access to a computer. Even without tooling that allows harvesting of credentials from logon sessions, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots, and clipboard contents. An attacker with privileged access to a computer can disable antimalware software, install rootkits, modify protected files, or install malware on the computer that automates attacks or turns a server into a drive-by download host.

The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce not only the attack surface of that computer, but also the likelihood of an attacker harvesting valuable credentials from it.

A white paper from Microsoft, “Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft Techniques,” provides detailed guidance on the subject. Highly effective mitigation steps, in order of the effort required to implement, are:

  • Restrict and protect local accounts with administrative privilege
  • Restrict and protect high privileged domain accounts
  • Restrict inbound traffic using Windows Firewall
  • Remove standard users from the local administrators group

Top 5 reasons Sys Admins hate logs

Top 5 Reasons Sys Admins hate logs:

1) Logs multiply – the volume problem

A single server easily generates 0.25 million log entries every day, even when operating normally. How many servers do you have? Plus you have workstations and applications, not to mention network devices.

2) Log obscurity – what does it mean?

Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800

Do what now? Go where? ‘Nuff said. (A small parsing sketch follows this list.)

3) Real hackers don’t get logged

If your purpose of logging is, for example, to review logs to “identify and proactively address unauthorized access to cardholder data” for PCI-DSS, how do you know what you don’t know?

4) How can I tell you logged in? Let me count the ways

This is a simple question with a complex answer. It depends on where you logged in. Linux? Solaris? Cisco? Windows 2003? Windows 2008? Application? VMware? Amazon EC2?

5) Compliance forced down your throat, but no specific guidance

Have you ever been in the rainforest with no map, creepy crawlies everywhere, low on supplies and a day’s trek to the nearest settlement? That’s how IT guys feel when management drops a 100+ page compliance standard on their desk.
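As promised in item 2, here is a small Python sketch that at least makes the raw Linux audit record readable by splitting out its key=value fields. It only surfaces what the record itself states; interpreting the syscall arguments (a0, a1, …) still takes auditd documentation or a log management tool.

import re

raw = ("Jan 2 19:03:22  r37s9p2 oesaudit: type=SYSCALL "
       "msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open "
       "success=yes exit=3 a0=80e3f08 a1=18800")

# Pull out every key=value pair in the record.
fields = dict(re.findall(r"(\w+)=(\S+)", raw))
print(f"syscall={fields.get('syscall')} success={fields.get('success')} "
      f"arch={fields.get('arch')} exit={fields.get('exit')}")
# -> syscall=open success=yes arch=i386 exit=3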

Big Data: Lessons from the 2012 election

The US Presidential election of 2012 confounded many pundits. The Republican candidate, Gov. Mitt Romney, put together a strong campaign, and polls leading into the final week suggested a close race. The final results were not so close, and Barack Obama handily won a second term.

Antony Young explains how the Obama campaign used big data, analytics and micro-targeting to mobilize key voter blocs, giving Obama the numbers needed to put him over the top.

“The Obama camp in preparing for this election, established a huge Analytics group that comprised of behavioral scientists, data technologists and mathematicians. They worked tirelessly to gather and interpret data to inform every part of the campaign. They built up a voter file that included voter history, demographic profiles, but also collected numerous other data points around interests … for example, did they give to charitable organizations or which magazines did they read to help them better understand who they were and better identify the group of ‘persuadables‘ to target.”

That data was able to be drilled down to zip codes, individual households and in many cases individuals within those households.”

“However it is how they deployed this data in activating their campaign that translated the insight they garnered into killer tactics for the Obama campaign.

“Volunteers canvassing door to door or calling constituents were able to access these profiles via an app accessed on an iPad, iPhone or Android mobile device to provide an instant transcript to help them steer their conversations. They were also able to input new data from their conversation back into the database real time.

“The profiles informed their direct and email fundraising efforts. They used issues such Obama’s support for gay marriage or Romney’s missteps in his portrayal of women to directly target more liberal and professional women on their database, with messages that “Obama is for women,” using that opportunity to solicit contributions to his campaign.

“Marketers need to take heed of how the Obama campaign transformed their marketing approach centered around data. They demonstrated incredible discipline to capture data across multiple sources and then to inform every element of the marketing – direct to consumer, on the ground efforts, unpaid and paid media. Their ability to dissect potential prospects into narrow segments or even at an individual level and develop specific relevant messaging created highly persuasive communications. And finally their approach to tap their committed fans was hugely powerful. The Obama campaign provides a compelling case for companies to build their marketing expertise around big data and micro-targeting. How ready is your organization to do the same?”

Old dogs, new tricks

Doris Lessing passed away at the end of last year. The freewheeling Nobel Prize-winning writer on racism, colonialism, feminism and communism, who died November 17 at the age of 94, was prolific for most of her life. But five years ago, she said the writing had dried up. “Don’t imagine you’ll have it forever,” she said, according to one obituary. “Use it while you’ve got it because it’ll go; it’s sliding away like water down a plug hole.”

In the very fast-changing world of IT, it is common to feel like an old fogey. Everything changes at bewildering speed, from hardware specs to programming languages to user interfaces. We hear of wunderkinds whose innovations transform our very culture. Think Mozart or Zuckerberg, to name two.

Tara Bahrampour examined the idea and quoted author Mark Walton: “What’s really interesting from the neuroscience point of view is that we are hard-wired for creativity for as long as we stay at it, as long as nothing bad happens to our brain.”

The field also matters.

Howard Gardner, professor of cognition and education at the Harvard Graduate School of Education says, “Large creative breakthroughs are more likely to occur with younger scientists and mathematicians, and with lyric poets, than with individuals who create longer forms.”

In fields like law, psychoanalysis and perhaps history and philosophy, on the other hand, “you need a much longer lead time, and so your best work is likely to occur in the latter years. You should start when you are young, but there is no reason whatsoever to assume that you will stop being creative just because you have grey hair,” Gardner said.

Old dogs take heart; you can learn new tricks as long as you stay open to new ideas.

Fail How To: Top 3 SIEM implementation mistakes

Over the years, we have had the chance to witness a large number of SIEM implementations, with results ranging from the superb to the colossal failure. What do the failures have in common? This blog by Keith Strier nails it:

1) Design Democracy: Find all internal stakeholders and grant all of them veto power. The result is inevitably a mediocre mess. The collective wisdom of the masses is not the best thing here. A super-empowered individual is usually found at the center of a successful implementation. If multiple stakeholders are involved, this person builds consensus, but nobody else has veto power.

2) Ignore the little things: A great implementation is a set of micro-experiences that add up to make the whole. Think of the Apple iPhone: every detail from the shape, size and appearance to every icon, gesture and feature converges to enhance the user experience. The path to failure is to focus only on the big picture, ignore the little things from authentication to navigation, and launch just to meet the deadline.

3) Avoid Passion: View the implementation as non-strategic overhead; implement and deploy without passion. The result? At best, requirements are fulfilled but users are unlikely to be empowered. Milestones may be met but business sponsors still complain. Prioritizing deadlines, linking IT staff bonuses to delivery metrics, and squashing creativity is a sure way to launch technology failures that crush morale.

Monitoring File Permission Changes with the Windows Security Log

Unstructured data access governance is a big compliance concern. Unstructured data is difficult to secure because there’s so much of it, it’s growing so fast, and it is user-created, so it doesn’t automatically get categorized and controlled like structured data in databases. Moreover, unstructured data is usually a treasure trove of sensitive and confidential information in a format that bad guys can consume and understand without reverse-engineering the relationships of tables in a relational database.

Most of this unstructured data is still found on file shares throughout the network, and file system permissions are the main control over this information. Therefore, knowing when permissions change on unstructured data is critical to governance and control. File permissions should normally be fairly static, but end users are (by default) the owners of files and subfolders they create and can therefore change permissions on those files. And of course, administrators can change permissions on any object. Either way you need to know when this happens. Here’s how to do it with the Windows Security Log.

First we need to enable the File System audit subcategory. You’ll find this in any group policy object under Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\Object Access. Enable File System for success. (By the way, make sure you also enable Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Audit: Force audit policy subcategory settings to override audit policy category settings to make sure your audit policy takes effect.) Now you need to enable object-level auditing on the root folders containing your unstructured data. For example, if you have a shared folder called c:\files, go to that folder in Windows Explorer, open the Security tab of the folder’s properties, click Advanced and select the Auditing tab. Now add an entry for Everyone that enables successful use of the Change permissions right, as shown below.

File permission change

At this point Windows will begin generating two events each time you change permissions on this folder or any of its subfolders or files.  One event is the standard event ID 4663, “An attempt was made to access an object”, which is logged for any kind of audited file access like read, write, delete, etc.  That event will show WRITE_DAC under the Access Request Information but it doesn’t tell you what the actual permission change was.  So instead, use event ID 4670, “Permissions on an object were changed”, which provides the before and after permissions of the object under Permissions Change as shown in the example below.

File permission change

“What does D:AI(A;ID;FA;;;AU)(A;ID;FA;;;WD)(A;ID;FA;;;BA)(A;ID;FA;;;SY)(A;ID;0x1200a9;;;BU) mean?” This is the original access control list of asdf.txt but in the very cryptic Security Descriptor Definition Language (SDDL).  SDDL definitely isn’t something you want to manually parse and translate on a regular basis, but you can when necessary.

Look for the “D:” which is close to the beginning of the string or even the very beginning in this case.  “D:” means Discretionary Access Control List (DACL) which are the actual permissions on the object as opposed to other things that show up in a security descriptor – like owner, primary group and the audit policy (aka SACL).  Until you hit another letter-colon combination like “S:” you are looking at the object’s permissions.  An ACL is made up of Access Control Entries which correspond to each item in the list you see in the Permissions tab of an object’s properties dialog.  But in SDDL before listing the ACEs comprising the ACL you will see any flags that affect the entire ACL as a whole.  In the example above you see AI as the first element after D:.  AI stands for SDDL_AUTO_INHERITED which means permissions on parent objects are allowed to propagate down to this object.

Now come the ACEs.  In SDDL, each ACE is surrounded by parentheses and the fields within it are delimited by semicolons.  The first ACE in the event above is (A;ID;FA;;;AU).  The first field tells you what type of ACE it is – either A for allow or D for deny.  The next field lists any ACE flags that specify whether this ACE is an inherited ACE propagated down from a parent object, and if and how this ACE should propagate down to child objects.  The only flag in this ACE is ID, which means the ACE is in fact inherited.  The next field lists the permissions this ACE allows or denies; in this example FA stands for all file access rights.  The next two fields, Object Type and Inherited Object Type, are always blank on file system permissions (hence the three semicolons in a row); they are only used in places like Active Directory where there are different types of objects (user, group, computer, etc.) that you can define permissions for.  Finally, the last field is Trustee and identifies the user, group or special principal being allowed or denied access.  Here you will see either the SID of the user or group, or, if the ACE applies to a so-called “well-known” SID, the corresponding acronym.  In this example AU stands for Authenticated Users.
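If you do need to translate SDDL more than occasionally, the splitting itself is easy to script. The Python sketch below parses a simple file-system DACL, such as the one in this event, into its ACL flags and ACE fields; the trustee table covers only the acronyms seen here, and full rights decoding and conditional ACEs are deliberately out of scope.

import re

ACE_TYPES = {"A": "Allow", "D": "Deny"}
TRUSTEES = {"AU": "Authenticated Users", "WD": "Everyone",
            "BA": "Builtin Administrators", "SY": "Local System",
            "BU": "Builtin Users"}

def parse_dacl(sddl):
    dacl = sddl.split("D:", 1)[1]                        # everything after the DACL marker
    dacl = re.split(r"(?<=\))S:", dacl, maxsplit=1)[0]   # stop at a SACL section, if any
    acl_flags = re.match(r"[^(]*", dacl).group(0)        # ACL-wide flags, e.g. AI
    aces = []
    for ace in re.findall(r"\(([^)]*)\)", dacl):
        ace_type, ace_flags, rights, _obj, _inherit_obj, trustee = ace.split(";")
        aces.append({
            "type": ACE_TYPES.get(ace_type, ace_type),
            "inherited": "ID" in ace_flags,
            "rights": rights,                            # e.g. FA = all file access rights
            "trustee": TRUSTEES.get(trustee, trustee),
        })
    return acl_flags, aces

flags, aces = parse_dacl("D:AI(A;ID;FA;;;AU)(A;ID;FA;;;WD)(A;ID;FA;;;BA)"
                         "(A;ID;FA;;;SY)(A;ID;0x1200a9;;;BU)")
print("ACL flags:", flags)
for entry in aces:
    print(entry)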

Event ID 4670 does a great job of alerting you when permissions change on an object, telling you which object was affected and who did it.  To go further and understand what permissions were actually changed, you have to dive into SDDL.  I recommend Ned Pyle’s 2-part TechNet blog, The Security Descriptor Definition Language of Love, for more information on SDDL.

Digital detox: Learning from Luke Skywalker

For any working professional in 2013, multiple screens, devices and apps are integral instruments for success. The multitasking can be overwhelming and dependence on gadgets and Internet connectivity can become a full-blown addiction.

There are digital detox facilities for those whose careers and relationships have been ruined by extreme gadget use. Shambhalah Ranch in Northern California has a three-day retreat for people who feel addicted to their gadgets. For 72 hours, the participants eat vegan food, practice yoga, swim in a nearby creek, take long walks in the woods, and keep a journal about being offline. Participants have one thing in common: they’re driven to distraction by the Internet.

Is this you? Checking e-mail in the bathroom and sleeping with your cell phone by your bed are now considered normal. According to the Pew Research Center, in 2007 only 58 percent of people used their phones to text; last year it was 80 percent. More than half of all cell phone users have smartphones, giving them Internet access all the time. As a result, the number of hours Americans spend collectively online has almost doubled since 2010, according to ComScore, a digital analytics company.

Teens and twentysomethings are the most wired. In 2011, Diana Rehling and Wendy Bjorklund, communications professors at St. Cloud State University in Minnesota, surveyed their undergraduates and found that the average college student checks Facebook 20 times an hour.

So what can Luke Skywalker teach you? Shane O’Neill says it well:

“The climactic Death Star battle scene is the centerpiece of the movie’s nature vs. technology motif, a reminder to today’s viewers about the perils of relying too much on gadgets and not enough on human intuition. You’ll recall that Luke and his team of X-Wing fighters are attacking Darth Vader’s planet-size command center. Pilots are relying on a navigation and targeting system displayed through a small screen (using gloriously outdated computer graphics) to try to drop torpedoes into the belly of the Death Star. No pilot has succeeded, and a few have been blown to bits.

“Luke, an apprentice still learning the ways of The Force from the wise — but now dead — Obi-Wan Kenobi, decides to put The Force to work in the heat of battle. He pushes the navigation screen away from his face, shuts off his “targeting computer” and lets The Force guide his mind and his jet’s torpedo to the precise target.

“Luke put down his gadget, blocked out the noise and found a quiet place of Zen-like focus. George Lucas was making an anti-technology statement 36 years ago that resonates today. The overarching message of Star Wars is to use technology for good. Use it to conquer evil, but don’t let it override your own human Force. Don’t let technology replace you.

“Take a lesson from a great Jedi warrior. Push the screen away from time to time and give your mind and personality a chance to shine. When it’s time to use the screen again, use it for good.”

Looking back: Operation Buckshot Yankee & agent.btz

It was the fall of 2008. A variant of a three-year-old, relatively benign worm began infecting U.S. military networks via thumb drives.

Deputy Defense Secretary William Lynn wrote nearly two years later that patient zero was traced to an infected flash drive that was inserted into a U.S. military laptop at a base in the Middle East. The flash drive’s malicious computer code uploaded itself onto a network run by the U.S. Central Command. That code spread undetected on both classified and unclassified systems, establishing what amounted to a digital beachhead, from which data could be transferred to servers under foreign control. It was a network administrator’s worst fear: a rogue program operating silently, poised to deliver operational plans into the hands of an unknown adversary.

The worm, dubbed agent.btz, caused the military’s network administrators major headaches. It took the Pentagon nearly 14 months of stop and go effort to clean out the worm — a process the military called Operation Buckshot Yankee. It was so hard to do that it led to a major reorganization of the information defenses of the armed forces, ultimately causing the new Cyber Command to come into being.

So what was agent.btz? It was a variant of the SillyFDC worm, which copies itself from removable drive to computer and back to drive again. Depending on how the worm is configured, it has the ability to scan computers for data, open backdoors, and send data through those backdoors to a remote command-and-control server.

To keep it from spreading across a network, the Pentagon banned thumb drives and the like from November 2008 to February 2010. You could also disable Windows’ “autorun” feature, which instantly starts any program loaded on a drive.

As Noah Shachtman noted, the havoc caused by agent.btz has little to do with the worm’s complexity or maliciousness — and everything to do with the military’s inability to cope with even a minor threat. “Exactly how much information was grabbed, whether it got out, and who got it — that was all unclear,” says an officer who participated in the operation. “The scary part was how fast it spread, and how hard it was to respond.”

Gen. Kevin Chilton of U.S. Strategic Command said, “I asked simple questions like how many computers do we have on the network in various flavor, what’s their configuration, and I couldn’t get an answer in over a month.” As a result, network defense has become a top-tier issue in the armed forces. “A year ago, cyberspace was not commanders’ business. Cyberspace was the sys-admin guy’s business or someone in your outer office when there’s a problem with machines business,” Chilton noted. “Today, we’ve seen the results of this command level focus, senior level focus.”

What can you learn from Operation Buckshot Yankee?
a) That denial is not a river in Egypt
b) There are well known ways to minimize (but not eliminate) threats
c) It requires command level, senior level focus; this is not a sys-admin business

Defense in Depth – The New York Times Case

In January 2013, the New York Times accused hackers from China, with connections to the Chinese military, of successfully penetrating its network and gaining access to the logins of 53 employees, including Shanghai bureau chief David Barboza, who last October published an embarrassing article on the vast secret wealth of China’s prime minister, Wen Jiabao.

This came to light when AT&T noticed unusual activity which it was unable to trace or deflect. A security firm was brought in to conduct a forensic investigation that uncovered the true extent of what had been going on.

Over four months starting in September 2012, the attackers had managed to install 45 pieces of targeted malware designed to probe for data such as emails after stealing credentials, only one of which was detected by the installed antivirus software from Symantec. Although the staff passwords were hashed, that doesn’t appear to have stopped the hackers in this instance. Perhaps, the newspaper suggests, because they were able to deploy rainbow tables to beat the relatively short passwords.

Symantec offered this statement: “Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats.”

Still think that basic AntiVirus and firewall is enough? Take it directly from Symantec – you need to monitor and analyze data from inside the enterprise for evidence of compromise. This is Security Information and Event Management (SIEM).

Cyber Pearl Harbor a myth?

Erik Gartzke, writing in International Security, argues that attackers don’t have much motive to stage a Pearl Harbor-type attack in cyberspace if they aren’t involved in an actual shooting war.

Here is his argument:

It isn’t going to accomplish any very useful goal. Attackers cannot easily use the threat of a cyber attack to blackmail the U.S. (or other states) into doing something they don’t want to do. If they provide enough information to make the threat credible, they instantly make the threat far more difficult to carry out. For example, if an attacker threatens to take down the New York Stock Exchange through a cyber attack, and provides enough information to show that she can indeed carry out this attack, she is also providing enough information for the NYSE and the U.S. Government to stop the attack.

Cyber attacks usually involve hidden vulnerabilities — if you reveal the vulnerability you are attacking, you probably make it possible for your target to patch the vulnerability. Nor does it make sense to carry out a cyber attack on its own, since the damage done by nearly any plausible cyber attack is likely to be temporary.

Points to ponder:

  • Most attacks occur against well-known vulnerabilities on systems that are unpatched
  • Most attacks are undetected and systems are “pwned” for weeks/months
  • The disruption caused when attacks are discovered is significant in both human and cost terms
  • There was little logic in the 9/11 attacks other than to cause havoc and fear (i.e., terrorists are not famous for logical well thought out reasoning)

Coming to commercial systems, attacks are usually for monetary gain. Attacks are often performed simply because “they can.” (Remember George Mallory, famously quoted as having replied to the question “Why do you want to climb Mount Everest?” with the retort “Because it’s there.”)

Did Big Data destroy the U.S. healthcare system?

The problem-plagued rollout of healthcare.gov has dominated the news in the USA. Proponents of the Affordable Care Act (ACA) argue that teething problems are inevitable and that’s all these are. In fact, President Obama has been at pains to say the ACA is more than just a website. Opponents of the law see the website failures as one more indicator that it is unworkable.

The premise of the ACA is that young, healthy persons will sign up in large numbers and help defray the costs expected from older persons, and thus provide a good deal for all. It has also been argued that the ACA is a good deal for young healthies. The debate between proponents of the ACA and opponents of the ACA hinges on this point. See, for example, the debate (shouting match?) between Dr. Zeke Emmanuel and James Capretta on Fox News Sunday. In this segment, Capretta says the free market will solve the problem (but it hasn’t so far, has it?) and so Emmanuel says it must be mandated.

So why then has the free market not solved the problem? Robert X. Cringely argues that big data is the culprit. Here’s his argument:

– In the years before Big Data was available, actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale, so health insurance companies wanted as many policyholders as possible, making a profit on most of them.

– Enter Big Data. The cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis.

– Result? The health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t much need the healthcare system. The goal went from making a profit on most enrollees to making a profit on all enrollees.

Information Security Officer Extraordinaire

IT Security cartoon

Industry News:

Lessons Learned From 4 Major Data Breaches In 2013
Dark Reading

Last year at this time, the running count already totaled approximately 27.8 million records compromised and 637 breaches reported. This year, that tally so far equals about 10.6 million records compromised and 483 breaches reported. It’s a testament to the progress the industry has made in the fundamentals of compliance and security best practices. But this year’s record is clearly far from perfect.

How Will NIST Framework Affect Banks?
BankInfoSecurity

The NIST cybersecurity framework will help U.S. banking institutions assess their security strategies, but some institutions fear the framework could trigger unnecessary regulations, says Bill Stewart of Booz Allen Hamilton.

Did you know that EventTracker is NIST certified for Configuration Assessment?

EventTracker News

EventTracker Wins Government Security News Homeland Security Award

EventTracker announced today that it has won the Security Incident/Event Management (SIEM) category for the 2013 Government Security News Homeland Security Awards.  EventTracker competed for the win among a group of solution providers that included LogRhythm, Solarwinds and RSA.

EventTracker and Secure Links Partner to Bring Better Network Visibility

EventTracker announced that Secure Links, a leading IT services company serving the Canadian market, has joined the Managed Security Service Provider (MSSP) Partner Program. Secure Links will provide and manage EventTracker’s comprehensive suite of log management and SIEM solutions which offer security, operational, and regulatory compliance monitoring.

The VAR’s tale

The Canterbury Tales is a collection of stories written by Geoffrey Chaucer at the end of the 14th century. The tales were part of a story-telling contest among pilgrims traveling to Canterbury Cathedral, with the prize being a free meal on their return. While the original is in Middle English, here is the VAR’s tale in modern-day English.

In the beginning, the Value Added Reseller (VAR) represented products to the channel and it was good. Software publishers of note always preferred the indirect sales model and took great pains to cultivate the VAR or channel, and it was good. The VAR maintained the relationship with the end user and understood the nuances of their needs. The VAR gained the trust of the end user by first understanding, then recommending and finally supporting their needs with quality, unbiased recommendations, and it was good. End users in turn, trusted their VAR to look out for their needs and present and recommend the most suitable products.

Then came the cloud which appeared white and fluffy and unthreatening to the end user. But dark and foreboding to the VAR, the cloud was. It threatened to disrupt the established business model. It allowed the software publisher to sell product directly to the end user and bypass the VAR. And it was bad for the VAR. Google started it with Office Apps. Microsoft countered with Office 365. And it was bad for the VAR. And then McAfee did the same for their suite of security products. Now even the security focused VARs took note. Woe is me, said the VAR. Now software publishers are selling directly to the end user and I am bypassed. Soon the day will come when cats and dogs are friends. What are we to do?

Enter Quentin Reynolds, who famously said, “If you can’t lick ‘em, join ‘em.” Can one roll back the cloud? No more than King Canute could stop the tide rolling in. This means what, then? It means a VAR must transition from being a reseller of products to a reseller of services or, better yet, a provider of services. In this way may the VAR regain relevance with the end user and cement the trust built up over the years between them.

Thus the VAR’s tale may have a happy ending, wherein the end user has a more secure network, the auditor, being satisfied, returns to his keep, and the VAR is relevant again.

Which service would suit, you ask? Well, consider one that is not a commodity, one that requires expertise, one that is valued by the end user, one that is not set-and-forget. IT Security leaps to mind; it satisfies these criteria. Even more so within this field are SIEM, log management, vulnerability scanning and intrusion detection, given their relevance to both security and regulatory compliance.

Auditing File Shares with the Windows Security Log

Over the years, security admins have repeatedly asked me how to audit file shares in Windows.  Until Windows Server 2008, there were no specific events for file shares.  The best we could do was to enable auditing of the registry key where shares are defined.  But in Windows Server 2008 and later, there are two new subcategories for share related events:

  • File Share
  • Detailed File Share

File Share Events

This subcategory allows you to track the creation, modification and deletion of shared folders, with a different event ID for each of those three operations (5142, 5143 and 5144, respectively).  The events indicate who made the change in the Subject fields, and provide the name that share users see when browsing the network and the path to the file system folder made available by the share.  See the example of event ID 5142 below.

A network share object was added.

Subject:
Security ID:  W8R2\wsmith
Account Name:  wsmith
Account Domain:  W8R2
Logon ID:  0x475b7

Share Information:
Share Name:  \\*\AcmeAccounting
Share Path:  C:\AcmeAccounting

The bad news is that the subcategory also produces event ID 5140 every time a user connects to a share.  The data logged, including who accessed the share and their client IP address, is nice, but the event is logged much too frequently.  Since Windows doesn’t keep network logon sessions active if no files are held open, you will tend to see this event frequently if you enable the “File Share” audit subcategory.  There is no way to configure Windows to produce just the share change events and not this access event as well.  Of course that’s the point of a log management solution like EventTracker, which can be configured to filter out the noise.
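A filter of that sort is straightforward. The Python sketch below keeps the share change events and drops the per-connection 5140s; event IDs 5142, 5143 and 5144 are the documented add/modify/delete events for this subcategory, while the dictionary-based record format is just an illustrative stand-in for whatever your collector produces.

SHARE_CHANGE_EVENTS = {5142: "share added", 5143: "share modified", 5144: "share deleted"}

def interesting(event):
    # Keep creation/modification/deletion; drop the noisy 5140 access events.
    return event["event_id"] in SHARE_CHANGE_EVENTS

stream = [
    {"event_id": 5140, "user": "wsmith", "share": r"\\*\AcmeAccounting"},
    {"event_id": 5142, "user": "wsmith", "share": r"\\*\AcmeAccounting"},
    {"event_id": 5140, "user": "jdoe",   "share": r"\\*\AcmeAccounting"},
]
for ev in filter(interesting, stream):
    print(f"{SHARE_CHANGE_EVENTS[ev['event_id']]}: {ev['share']} by {ev['user']}")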

Detailed File Share Events

Event ID 5140, as discussed above, is intended to document each connection to a network share, and as such it does not log the names of the files accessed through that share connection.  The “Detailed File Share” audit subcategory provides this lower level of information with just one event ID – 5145 – which is shown below.

A network share object was checked to see whether client can be granted desired access.

Subject:
Security ID:  SYSTEM
Account Name:  WIN-KOSWZXC03L0$
Account Domain:  W8R2
Logon ID:  0x86d584

Network Information:
Object Type:  File
Source Address:  fe80::507a:5bf7:2a72:c046
Source Port:  55490

Share Information:
Share Name:  \\*\SYSVOL
Share Path:  \??\C:\Windows\SYSVOL\sysvol
Relative Target Name: w8r2.com\Policies\{6AC1786C-016F-11D2-945F-00C04fB984F9}\Machine\Microsoft\Windows NT\Audit\audit.csv

Access Request Information:
Access Mask:  0x120089
Accesses:  READ_CONTROL
SYNCHRONIZE
ReadData (or ListDirectory)
ReadEA
ReadAttributes

Access Check Results:
READ_CONTROL: Granted by Ownership
SYNCHRONIZE: Granted by D:(A;;0x1200a9;;;WD)
ReadData (or ListDirectory): Granted by D:(A;;0x1200a9;;;WD)
ReadEA: Granted by D:(A;;0x1200a9;;;WD)
ReadAttributes: Granted by D:(A;;0x1200a9;;;WD)

This event identifies the user (Subject fields), the user's IP address (Network Information), the share and the actual file accessed via the share (Share Information), and then provides the permissions requested and the results of the access check. Because the access attempt itself is logged, you can see failure versions of this event as well as success events.
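The Access Mask field is simply a bit field of the standard Windows access rights, so the hexadecimal value can be decoded programmatically. The short Python sketch below lists only the documented constants needed for the example above (0x120089); it is an illustration, not a complete map of every access right.

# Decode the Access Mask of event 5145 into readable rights.
# Only a subset of the standard Windows access right constants is listed.
FILE_ACCESS_RIGHTS = {
    0x00000001: "ReadData (or ListDirectory)",
    0x00000002: "WriteData (or AddFile)",
    0x00000008: "ReadEA",
    0x00000080: "ReadAttributes",
    0x00010000: "DELETE",
    0x00020000: "READ_CONTROL",
    0x00100000: "SYNCHRONIZE",
}

def decode_access_mask(mask):
    """Return the names of the rights set in an Access Mask value."""
    return [name for bit, name in FILE_ACCESS_RIGHTS.items() if mask & bit]

print(decode_access_mask(0x120089))
# ['ReadData (or ListDirectory)', 'ReadEA', 'ReadAttributes',
#  'READ_CONTROL', 'SYNCHRONIZE']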

Be careful about enabling this audit subcategory, because you will get an event for every file accessed through network shares each time the application opens the file. This can be more frequent than imagined for some applications, like Microsoft Office. Conversely, remember that this subcategory won't catch access attempts on the same files if a locally executing application opens the file via the local path (e.g. c:\docs\file.txt) instead of via a share.

You might also want to consider enabling auditing on individual folders containing critical files and using the File System subcategory. This method lets you be much more selective about whose access, to which files, and which types of access get audited.

For most organizations: enable the File Share subcategory if it's important to you to know when new folders are shared, and filter out the 5140 occurrences. Then, if you have file-level audit needs, turn on the File System subcategory, identify the exact folders containing the relevant files and enable auditing on those folders for the specific operations (e.g. Read, Write, Delete) needed to meet your audit requirements. Don't enable the Detailed File Share audit subcategory unless you really want events for every access to every file via network shares.
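If you want to script these settings, the built-in auditpol utility can switch the subcategories on. The Python sketch below is a hedged example: the /set syntax is auditpol's documented form, but the wrapper function and its defaults are my own and should be tested in your environment (run from an elevated prompt, and note that subcategory names are localized).

# enable_share_audit.py -- hedged sketch; run as Administrator.
# Subcategory names below are the English display names.
import subprocess

def set_audit(subcategory, success=True, failure=True):
    """Enable or disable success/failure auditing for one subcategory."""
    subprocess.run(
        [
            "auditpol", "/set", f"/subcategory:{subcategory}",
            f"/success:{'enable' if success else 'disable'}",
            f"/failure:{'enable' if failure else 'disable'}",
        ],
        check=True,
    )

# Know when shares are created, changed or deleted (filter 5140 downstream).
set_audit("File Share")
# Per-folder auditing also needs SACLs on the folders themselves; the
# subcategory must be enabled for those SACLs to generate events.
set_audit("File System")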

The air gap myth

As we work with various networks to implement IT Security in general and SIEM, Log Management and Vulnerability scanning in particular, we sometimes meet with teams that inform us that they have air-gapped networks. An air gap is a network security measure that consists of ensuring physical isolation from unsecured networks (like the Internet, for example). The premise is that harmful packets cannot "leap" across the air gap. This type of measure is most often seen in utility and defense installations. Is it really effective in improving security?

A study by the Idaho National Laboratory shows that in the utility industry, while an air gap may provide some defense, there are many more points of vulnerability in older networks. Critical industrial equipment is often of an older vintage, built when insecure coding practices were the norm. Over the years, such systems have had web front ends grafted onto them to ease configuration and management, which makes them very vulnerable indeed. In addition, these older systems are often missing key controls such as encryption. When automation is added to such systems (to improve reliability or reduce operations cost), the potential for damage is high.

In a recent interview, Eugene Kaspersky stated that the ultimate air gap had been compromised: the International Space Station, he said, suffered from virus epidemics. Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS), nor did he give details about when it took place, but it appears to have been prior to May of this year, when the United Space Alliance, the group which oversees the operation of the ISS, moved all systems entirely to Linux to make them more "stable and reliable."

Prior to this move, the "dozens of laptops" used on board the space station had been running Windows XP. According to Kaspersky, the infections occurred on laptops used by scientists who used Windows as their main platform and carried USB sticks into space when visiting the ISS. A 2008 report on ExtremeTech said that a Windows XP laptop infected with the W32.Gammima.AG worm was brought onto the ISS by a Russian astronaut, and the worm quickly spread to other laptops on the station – all of which were running Windows XP.

If the Stuxnet infection from June 2010 wasn’t enough evidence, this should lay the air gap myth to rest.

End(er’s) game: Compliance or Security?

Who do you fear more – The Auditor or The Attacker? The former plays by well-established rules, gives plenty of prior notice before arriving on your doorstep and is usually prepared to accept a Plan of Action with Milestones (POAM) in case of deficiencies. The latter gives no notice, never plays fair and will gleefully exploit any deficiencies. Notwithstanding this, most small enterprises actually fear the auditor more and will jump through hoops to minimize their interaction. It's ironic, because the auditor is really there to help; the attacker, obviously, is not.

While it is true that 100% compliance is not achievable (or, for that matter, desirable), it is also true that even the most basic steps toward compliance go a long way to deterring attackers. The comparison to the merits of physical exercise is an easy one: how often have you heard it said that even mild physical exercise (taking the stairs instead of the elevator) does you good? You don't have to be a gym rat pumping iron for hours every day.

And so, to answer the question of what comes first, Compliance or Security: it's Security, really, because Compliance is a set of guidelines to help you get there with the help of an Auditor. Not convinced? The news is rife with accounts of breaches, many of them at organizations that had been certified compliant. Obviously there is no such thing as being completely secure, but will you allow the perfect to be the enemy of the good?

The National Institute of Standards and Technology (NIST) released Rev 4 of its seminal publication 800-53, which applies to US Government IT systems. As budgets (time, money, people) are always limited, it all begins with risk classification: applying scarce resources in order of value. There are other guidelines, such as the SANS Institute Consensus Audit Guidelines, to help you make the most of limited resources.

You may not have trained like Ender Wiggin from a very young age through increasingly difficult games, but it doesn't take a tactical genius to recognize the "Buggers" as attackers and auditors as the frenemies.

Looking for assistance with your IT Security needs? Click here for our newest publication and learn how you can simplify with services.

Simplifying SIEM

Since its inception, SIEM has been something for the well-to-do IT Department; the one that can spend tens or hundreds of thousands of dollars on a capital acquisition of the technology and then afford the luxury of qualified staff to use it in the intended manner. In some cases, they hire experts from the SIEM vendor to “man the barricades.”

In the real world of a typical IT Department in the medium enterprise or small business, this is a ride in Fantasy Land. Budgets simply do not allow capital expenditures of multiple six or even five figures; expert staff, to the extent they exist, are hardly idling and available to work the SIEM console; and as for hiring outside experts – the less said, the better. And so, SIEM has remained the province of the well-heeled.

In the meantime, the security and compliance pressures continue to mount. PCI-DSS compliance in particular, but also HIPAA/HITECH, continues to push these requirements down to smaller organizations.

Question: How do we square this circle where budgets are tight and IT Security expertise is rare?
Answer: By delivering value as a service, that is, as a MSP/MSSP.

At EventTracker, we've obsessed over this problem for a dozen years, first powering and then simplifying the implementation, and with v7.5 that trend continues. Let me count the ways:

  • EventTracker is implemented as a virtual appliance. This means it can be right-sized for the environment. Scale up to very large networks of tens of thousands of nodes; scale down to a site with only a handful of sources.
  • The Collection Point/Master model allows you to "divide and conquer." Locate a Collection Point per geographic or logical group; roll up to a single pane of glass at a central Collection Master. Enjoy local control with global oversight.
  • Consolidate all incident data, prioritized by risk, at both the Collection Point and the Collection Master. An MSP SOC operator can now watch for incidents at a Collection Master being fed from any number of underlying Collection Points. After-hours coverage at a single pane of glass? No problem.
  • Archive data at either Collection Point or Collection Master or both with different retention periods. Don’t want data replication? Not interested in operating a SAS-70 or FISMA certified datacenter? No problem. Retain data at customer premises, subject to their access control.
  • Aggregated licensing – enjoy the best possible price point by rolling up all log sources or volume.
  • Flexible licensing models – buy by the node with unlimited log volume, or by log volume with unlimited nodes.

For MSPs and MSSPs looking to drive greater revenue or customer loyalty, EventTracker 7.5 helps with both by satisfying the customer’s compliance and security needs. For the medium enterprise or small business looking to meet these needs without breaking the bank – now there is a way.

SIEM Simplified, it’s what we do.

Three common SMB mistakes

Small and medium business (SMB) owners/managers understand that IT plays a vital role within their companies. However, many SMBs are still making simple mistakes with the management of their IT systems, which are costing them money.

1) Open Source Solutions. In a bid to reduce overall costs, many SMBs look to open source applications and platforms. While such solutions appear attractive because of low or no license costs, the effort required for installation, configuration, operation, maintenance and ongoing upgrades should be factored in. The total cost of ownership of such systems is generally ignored or poorly understood. In many cases, they may require a more sophisticated (and therefore more expensive and harder to replace) user to drive them.

2) Migrating to the Cloud. Cloud-based services promise great savings, which is always music to an SMB manager/owner's ears, and the entire SaaS market has exploded in recent years. However, the cost savings are not always obvious or tangible. The Amazon EC2 service is often touted as an example of cost savings, but it very much depends on how you use the resource. See this blog for an example. More appropriate might be a hybrid approach that keeps some data and services in-house while moving others to the cloud.

3) The Knowledge Gap. Simply buying technology, be it servers or software, does not provide any tangible benefit. You have to integrate it into the day-to-day business operation, and that takes expertise both with the technology and with your particular business.

In the SIEM space, these buying objections have often stymied SMBs from adopting the technology, despite its benefits and repeated advice from experts. To overcome them, we provide a managed SIEM service called SIEM Simplified.

The Holy Grail of SIEM

Merriam-Webster defines "holy grail" as "a goal that is sought after for its great significance." Mike Rothman of Securosis has described a twofold answer to what the "holy grail" is for a security practitioner:

  1. A single alert specifying exactly what is broken, with relevant details and the ability to learn the extent of the damage
  2. Make the auditor go away, as quickly and painlessly as possible

How do you achieve the first goal? Here are the steps (a toy sketch follows the list):

  • Collect log information from every asset on the enterprise network,
  • Filter it through vendor provided intelligence on its significance
  • Filter it through local configuration to determine its significance
  • Gather and package related, relevant information – the so-called 5 Ws (Who, What, Where, When and Why)
  • Alert the appropriate person in the notification method they prefer (email, dashboard, ticket etc.)
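To make the flow of those steps concrete, here is a deliberately tiny Python sketch of the same collect, filter, package and notify idea. Everything in it (the rule names, the critical-host list, the notification address) is hypothetical, and it is not how EventTracker or any other SIEM is actually implemented.

# Toy alert pipeline mirroring the steps above -- illustrative only.
import re
from dataclasses import dataclass

# "Vendor intelligence": patterns considered significant everywhere.
VENDOR_RULES = {
    "failed_logon": re.compile(r"4625|authentication failure", re.I),
}
# "Local configuration": which assets matter here, and who to tell.
LOCAL_CONFIG = {
    "critical_hosts": {"dc01", "sql01"},
    "notify": "secops@example.com",   # hypothetical address
}

@dataclass
class Alert:          # the 5 Ws, packaged for the responder
    who: str
    what: str
    where: str
    when: str
    why: str

def notify(alert):
    # Stand-in for email / dashboard / ticket delivery.
    print(f"[{LOCAL_CONFIG['notify']}] {alert}")

def process(record):
    """record: dict with host, user, timestamp and message keys."""
    for rule_name, pattern in VENDOR_RULES.items():               # vendor filter
        if not pattern.search(record["message"]):
            continue
        if record["host"] not in LOCAL_CONFIG["critical_hosts"]:  # local filter
            continue
        notify(Alert(who=record["user"], what=rule_name, where=record["host"],
                     when=record["timestamp"], why=record["message"]))

process({"host": "dc01", "user": "wsmith",
         "timestamp": "2013-11-01T09:30:00Z",
         "message": "Event 4625: An account failed to log on"})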

This is a fundamental goal for SIEM systems like EventTracker, and over the ten-plus years of working on this problem, we've built a huge collection of intelligence to draw on to help configure and tune the system to your needs. Even so, there is an element of luck in having it all work out for you just when you need it, and Murphy's Law says that luck is not on your side. So now what?

One answer we have found is anomalous behavior detection: learn "normal" behavior during a baseline period and draw the attention of a knowledgeable user to out-of-the-ordinary or new items. When you join these two approaches, you get coverage for known-knowns as well as unknown-unknowns.
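For illustration only, here is a minimal Python sketch of the baseline idea, assuming you already have daily event counts per category. A real behavior module uses far richer models, but the essence is the same: flag what is new, and flag what falls far outside its usual range.

# Minimal sketch of baseline-then-flag anomaly detection (illustrative only).
from statistics import mean, pstdev

baseline = {                       # counts observed per day during baselining
    "logons":        [410, 395, 402, 388, 420],
    "share_changes": [2, 1, 0, 3, 1],
}

def anomalies(today_counts, sigmas=3.0):
    flagged = []
    for category, count in today_counts.items():
        history = baseline.get(category)
        if history is None:
            flagged.append((category, count, "new behavior"))  # unknown-unknowns
            continue
        mu, sd = mean(history), pstdev(history)
        if abs(count - mu) > sigmas * max(sd, 1.0):            # unusual volume
            flagged.append((category, count, f"expected ~{mu:.0f}"))
    return flagged

print(anomalies({"logons": 6100, "share_changes": 2, "vpn_from_new_geo": 1}))
# [('logons', 6100, 'expected ~403'), ('vpn_from_new_geo', 1, 'new behavior')]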

The second goal involves more discipline and less black magic. If you are familiar with the audit process, then you know it's all about preparation and presentation. The Duke of Wellington famously remarked that the "Battle of Waterloo was won on the playing fields of Eton," another testament to winning through preparation. Here again, to enable that diligence, EventTracker Enterprise offers several features, including report/alert annotation, a summary report on reports, incident acknowledgement and an electronic logbook to record mitigation and incident handling actions.

Of course, all this requires staff with the time and training to use these features. Lacking the time and resources, you say? We've got you covered with SIEM Simplified, a co-sourcing option where we do the heavy lifting, leaving you to sip from the Cup of Jamshid.

Have neither the time, nor the tools, nor budget? Then the story might unfold like this.

SIEM vs Search Engine

The pervasiveness of Google in the tech world has placed the search function at the center of our daily routine. Indeed, many of the most popular apps we use every day are specialized forms of search. For example:

  • E-mail is a search over incoming messages: by sender, by topic, by key phrase, by thread
  • Voice calling or texting is preceded by a search for a contact
  • Yelp is really searching for a restaurant
  • The browser address bar is in reality a search box

And the list goes on.

In the SIEM space, the rise of Splunk, especially when coupled with the promise of "big data," has led to speculation that SIEM is going to be eclipsed by the search function. Let's examine this a little more closely, especially from the viewpoint of an expert-constrained Small and Medium Enterprise (SME), where data scientists are not sitting around idle.

Big data and its accompanying technologies are, at present, developer-level components that require assembly with application code, or intricate setup and configuration, before they can be used by typical system administrators, much less mid-level managers. To leverage the big-data value proposition of such platforms, the core skill required of developers is thinking in terms of distributed computing, where processing is performed in batches across multiple nodes. This is not a common skill set in the SME.

Assuming the assembly problem is somehow overcome, can you rejoice in your big data set and reduce the problems that SIEM solves to search queries? Well, maybe, if you are a data scientist and know how to use advanced analytics. However, SIEM functions include things like detecting cyber-attacks, insider threats and operational conditions such as application errors – all pesky real-time requirements that are not well served by a search over yesterday's archived and indexed data. So now the data scientist must also have infosec skills and understand the IT infrastructure.
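To make the real-time point concrete, here is a toy Python sketch that raises an alert while failed-logon records are still streaming in, rather than surfacing the burst later from an archived index. The record format and thresholds are invented for the example.

# Toy streaming detection: alert as events arrive (illustrative only).
from collections import defaultdict, deque

WINDOW_SECONDS = 300
THRESHOLD = 5                         # failed logons per user within the window

def detect_bruteforce(stream):
    recent = defaultdict(deque)       # user -> timestamps of recent failures
    for ts, user, outcome in stream:  # records arrive in time order
        if outcome != "failure":
            continue
        window = recent[user]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            yield ts, user, len(window)   # alert as it happens

events = [(t, "wsmith", "failure") for t in range(0, 120, 20)]
for alert in detect_bruteforce(events):
    print("possible brute force:", alert)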

You can probably appreciate that solid infosec skills (network security, host security, data protection, security event interpretation, knowledge of attack vectors) do not abound in the SME. There is no reason to think that the shortage of cyber-security professionals, and the even greater shortage of data scientists and experienced Big Data programmers, will disappear anytime soon.

So how can an SME leverage the promise of big data now? Frankly, EventTracker had been grappling with the challenges of massive, diverse, fast data for many years before it became popularly known as Big Data. In testing on COTS hardware, our recent 7.4 release showed up to a 450% increase in receiver/archiver performance over the previous 7.3 release on the same hardware. This is not an accident; we have been thinking about and working on this problem continuously for the last 10 years. It's what we do. This version also has advanced data-science methods built right into EventVault Explorer, our data-mart engine, so that security analysts don't need to be data scientists. Our behavior module incorporates data visualization capabilities to help users recognize hidden patterns and relations in the security data: the so-called "Jeopardy" problem, wherein the answers are present in the data set and the challenge is asking the right questions.

Last but not least, we recognize that, notwithstanding all the chest-thumping above, many (most?) SMEs are so resource constrained that a disciplined SOC-style approach to log review and incident handling is out of reach. Thus we offer SIEM Simplified, a service where we do the heavy lifting, leaving the remediation to you.

Search engines are no doubt a remarkably useful innovation that has transformed our approach to many problems. However, SIEM satisfies specific needs in today’s threat, compliance and operations environment that cannot be satisfied effectively or efficiently with a raw big-data platform.

Resistance is futile

The Borg are a fictional alien race and a terrifying antagonist in the Star Trek franchise. The phrase "Resistance is futile" is best delivered by Patrick Stewart in the episode "The Best of Both Worlds."

When IBM demonstrated the power of Watson in 2011 by defeating two of the best humans ever to play Jeopardy, Ken Jennings, who had won 74 games in a row, admitted in defeat, "I, for one, welcome our new computer overlords."

As the Edward Snowden revelations about the collection of metadata for phone calls became known, the first reaction was that it would be technically impossible to store data for every single phone call – the cost would be prohibitive. Then Brewster Kahle, one of the engineers behind the Internet Archive, made this spreadsheet to calculate the storage cost to record and store one year's worth of all U.S. calls. He works the cost out to about $30M, which is non-trivial but by no means out of reach for a large US government agency.

The next thought was: OK, so maybe it's technically feasible to record every phone call, but how could anyone possibly listen to every call? Obviously this is not possible, but can search terms be applied to locate "interesting" calls? Again, we didn't think so, until another N.S.A. document, cited by The Guardian, showed a "global heat map" that appeared to represent how much data the N.S.A. sweeps up around the world. If it is possible to efficiently mine metadata (data about who is calling or e-mailing), then the pressure for wiretapping and eavesdropping on the content of communications becomes secondary.

This study in Nature shows that just four data points about the location and time of a mobile phone call make it possible to identify the caller 95 percent of the time.

IBM estimates that, thanks to smartphones, tablets, social media sites, e-mail and other forms of digital communication, the world creates 2.5 quintillion bytes of new data daily. Searching through this archive of information is humanly impossible, but it is precisely what a Watson-like artificial intelligence is designed to do. Isn't that exactly what Watson demonstrated in 2011 by winning Jeopardy?