Once More Unto the Data (Breach), Dear Friends

As I reflect on this year, a Shakespearean quote plays out in my mind – when King Henry the Fifth is rallying his troops to attack a breach, or gap, in the wall of a city: "Once more unto the breach, dear friends." Sadly, this has become the new normal. But even more so, 2017 has felt like Lemony Snicket's A Series of Unfortunate Events. There were massive data breaches, unintended exposures of sensitive information on the internet, and other unfortunate tech incidents.
Here are five that illustrate the variety:
  1. Dallas Emergency Sirens: Just before midnight on a Friday in early April, all 156 of the emergency sirens in Dallas started sounding simultaneously for no apparent reason. The hubbub lasted a full 90 minutes before the sirens could be manually overridden and shut down, during which time panicked residents flooded 911 with calls. Dispatchers who typically pick up within 10 seconds were so overwhelmed that the wait time hit six minutes. Officials blamed hackers for the intrusion into their emergency alert system. Nobody had ever thought this could happen.
  2. WannaCry: The National Security Agency has for years been diligently finding major weaknesses in commonly used pieces of software. Instead of alerting the affected companies about the vulnerabilities, however, it’s been hiding those aces up its sleeve for future use. This year, a group of hackers calling themselves the Shadow Brokers stole a bunch of those exploits and turned them loose on the internet. North Korea used one such NSA-developed hacking technique to target Windows, resulting in a piece of ransomware called “WannaCry” that crippled an estimated 230,000 computers around the world. Brad Smith, Microsoft’s Chief Legal Officer, remarked, "An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.”
  3. State Election Systems: Russian hackers targeted election systems in 21 states during the 2016 presidential election (to say nothing of their activity on Facebook, Twitter, Reddit, etc.), as part of what the Department of Homeland Security called “a decade-long campaign of cyber-enabled operations directed at the U.S. Government and its citizens.” Jeanette Manfra, acting assistant secretary for the Office of Cybersecurity and Communications, told the Senate Select Committee on Intelligence that "the cyberattacks were intended or used to undermine public confidence in electoral processes.”
  4. Equifax: In September, consumer credit reporting agency Equifax revealed hackers had stolen the personal details of roughly half of all Americans – 143 million people. Equifax waited six weeks after discovering the breach to tell anyone and then bungled its response, initially forcing those affected to sign a legal document prohibiting them from joining a class-action suit, then inadvertently directing potential victims to a phishing site that proceeded to steal yet more information.
  5. Deep Root Analytics: This summer, a Republican data analysis company called Deep Root Analytics left exposed a 1.1-terabyte online database containing the personal information of 200 million American voters. Beyond birthdays and addresses, the leak included deeply personal information about individual voters, including their likely stance on abortion, gun control, stem cell research, environmental issues, and 44 other categories.
Will 2018 be better? 
There is the promise of advancements in fields like AI and machine learning. And we could learn from our mistakes, but nah, not really. I don't mean to be a nattering nabob of negativism. But given the increasing penetration of IT into every facet of life, so long as those tasked with administering these increasingly complex systems are equipped with weaponry from the last war, it’s hard to see improvement.

Still bringing a knife to a gunfight? SIEMphonic can help level the odds.

For of all sad words of tongue or pen, the saddest are these: 'We weren't logging'

It doesn't rhyme and it's not what Whittier said, but it's true. If you don't log it when it happens, the evidence is gone forever. I know personally of many times when the decision was made not to enable logging, and it was later regretted when something happened that could have been explained, attributed, or proven had the logs been there. On the bright side, there are plenty of opposite situations where, thankfully, the logs were there when needed. In fact, in a recent investigation we happened to enable a certain type of logging hours before the offender sent a crucial email that became the smoking gun in the case, thanks to our ability to correlate key identifying information between the email and the logs.

Why don't we always enable auditing everywhere? Sometimes it's simple oversight but more often the justification is:

  • We can't afford to analyze it with our SIEM
  • We don't have a way to collect it
  • It will bog down our system

Let's deal with each of those in turn and show why they aren't valid.

We can't afford to analyze it with our SIEM

Whether because of hardware resources, scalability constraints, or volume-based licensing, organizations limit what logging they enable. Let's just assume you really can't upgrade your SIEM for whatever reason. That doesn't stop you from at least enabling the logging. Maybe it doesn't get analyzed for intrusion detection, but at least it's there (the most recent activity, anyway) when you need it. Sure, audit logs aren't safe and shouldn't be left on the system where they are generated, but I'd still rather have logging turned on even if it just sits there being overwritten. Many times, that's been enough to explain, attribute, or prove what happened. But here's something else to consider: even if you can't analyze it "live" in your SIEM, that doesn't mean you have to leave it on the system where it's generated, where it's vulnerable to deletion or to overwriting as it ages out. At least collect the logs into a central, searchable archive like open-source Elastic.
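Even without a SIEM upgrade, getting logs into such an archive is a small amount of plumbing. As a minimal sketch (the index name and field names here are illustrative assumptions, not a prescribed schema), Elasticsearch ingests batches of documents through its _bulk API, whose request body is simply newline-delimited JSON:

```python
import json
from datetime import datetime, timezone

def bulk_payload(events, index="logs-archive"):
    """Build an Elasticsearch _bulk request body (NDJSON) from parsed
    log events. Each event gets an action line naming the target index,
    followed by the document itself. In a real deployment you would POST
    this body to the cluster's /_bulk endpoint, or more likely let a
    shipper such as Filebeat do this for you."""
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(event))                         # source document
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

# A single illustrative Windows logon-failure event:
events = [
    {"@timestamp": datetime(2017, 9, 1, tzinfo=timezone.utc).isoformat(),
     "host": "dc01", "event_id": 4625,
     "message": "An account failed to log on."},
]
body = bulk_payload(events)
```

Even this crude approach beats leaving logs to age out on the box where they were generated.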

We don't have a way to collect it

That excuse doesn't hold up either. If your server admins or workstation admins push back against installing an agent, you don't have to resort to remote polling-based log collection. On Windows, use native Windows Event Forwarding; on Linux, use syslog. Both technologies are agentless and efficient, and Windows Event Forwarding is resilient. You can even define noise filters so that you don't clog your network and other resources with junk events.
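That filtering lives in the WEF subscription itself, using the standard event query syntax. A sketch (the suppressed event IDs are examples of commonly noisy events, chosen for illustration, not a recommended policy):

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <!-- Collect everything from the Security log... -->
    <Select Path="Security">*</Select>
    <!-- ...but suppress high-volume noise at the source. Example IDs:
         4662 (operation performed on an object, chatty on domain
         controllers) and 5156 (Windows Filtering Platform permitted
         a connection, often thousands per minute). -->
    <Suppress Path="Security">*[System[(EventID=4662 or EventID=5156)]]</Suppress>
  </Query>
</QueryList>
```

Because the filter runs on the forwarding machine, suppressed events never cross the network at all.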

Logging will bog down our system

This bogeyman is still active, but it's just not based on fact. I've never encountered a technology or installation where properly configured auditing made a material impact on performance. Today storage is cheap, and you only need to worry about scheduling and compression on the tiniest of network pipes – like maybe a ship with a satellite IP link. Windows auditing is highly configurable, and as noted earlier you can further reduce volume by filtering noise at the source. SQL Server auditing, introduced in SQL Server 2008, is even more configurable and efficient. If management is serious, they will require this push-back be proven in tests, and – if you carefully configure your audit policy and output destination – the tests will likely show that auditing has negligible impact.
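As a sketch of how narrowly SQL Server auditing can be scoped (the audit names and file path here are illustrative), the following T-SQL captures only failed logins into a small rolling set of local files, rather than auditing everything:

```sql
-- Define the audit destination: a capped, rolling file set.
CREATE SERVER AUDIT FailedLoginAudit
    TO FILE (FILEPATH = 'D:\Audit\', MAXSIZE = 256 MB, MAX_ROLLOVER_FILES = 10);
GO
-- Scope the audit to a single action group: failed login attempts.
CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
    FOR SERVER AUDIT FailedLoginAudit
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO
-- Enable the audit.
ALTER SERVER AUDIT FailedLoginAudit WITH (STATE = ON);
GO
```

An audit this targeted generates a trickle of events, which is exactly the kind of configuration that makes the "it will bog down our system" objection collapse under testing.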

When it comes down to it, you can't afford not to log. Even if today you can't collect and analyze all your logs in real-time at least turn on logging in each system and application. And keep working to expand collection and analysis. You won't regret it.

True Cost of Data Breaches

The Cisco 2017 Annual Cybersecurity Report provides insights based on threat intelligence gathered by Cisco's security experts, combined with input from nearly 3,000 Chief Security Officers (CSOs) and other security operations leaders from businesses in 13 countries.
Here are some takeaways:
  • Data breaches have repercussions: More than 50 percent of organizations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention.
    Lesson: Is sunlight the best disinfectant?
  • Repercussions are expen$ive: For organizations that suffered a breach, the effect was substantial: 22% of breached organizations lost customers – 40% of them lost more than a fifth of their customer base and 29% lost revenue, with 38% of that group losing more than a fifth of their revenue. In addition, 23% of breached organizations lost business opportunities, with 42% of them losing more than a fifth of such opportunities.
    Lesson: There's a bad moon rising.
  • Complexity and skill shortage drive risk: CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security postures. Security leaders also reveal that their security departments are increasingly complex environments with nearly two-thirds of organizations using six or more security products – some with even more than 50 – increasing the potential for security effectiveness gaps and mistakes.
    Lesson: Calculate asset risk to prioritize spending; co-sourcing can help.
  • It’s the basics: Criminals are leveraging "classic" attack mechanisms such as adware and email spam in an effort to easily exploit the gaps that such complexity can create. Old-fashioned adware software that downloads advertising without user permission continues to prove successful, infecting 75% of organizations polled.
    Lesson: Security laggards, beware. Here are "some stories that never happened" from "files that do not exist".
  • Spam works: Spam is now at a level not seen since 2010, and accounts for nearly two-thirds of all email – with 8-10% of it being outright malicious. Global spam volume is rising, often spread by large and thriving botnets.
    Lesson: Spam is easy and effective, so a mix of technology and awareness is needed.
  • Data is everywhere; not much actionable intelligence: Just 56% of security alerts are investigated and less than half of legitimate alerts are actually remediated. Defenders, while confident in their tools, are undermined by complexity and manpower challenges. Criminals are exploiting the inability of organizations to handle all important security matters in a timely fashion.
    Lesson: Look for ease of use; get access to expertise via co-sourcing.
What can/should you do?
  1. Improve threat defense technologies and processes after attacks by separating IT and security functions 
  2. Increase security awareness training for employees 
  3. Implement risk mitigation techniques

The Perimeter is Dead: Long Live the Perimeter

In 2005, the Department of Homeland Security commissioned Livermore National Labs to produce a kind of pre-emptive post-mortem report. Rather than wait for a vengeful ex-KGB hacker agent to ignite an American pipeline until it could be seen from space, the report issued recommendations for preventing an incursion that had not yet happened from ever happening again.
Recommendation Number 1: Know your perimeter.
"The perimeter model is dead," pronounced Bruce Schneier, author of the New York Times best seller Data and Goliath and CTO of IBM Resilient. "But there are personal perimeters. It doesn't mean there exists no perimeters. It just means it's not your underlying metaphor any more. So, I wouldn't say to anyone running a corporate network: There are no perimeters, zero."

"The traditional fixed perimeter model is rapidly becoming obsolete," stated the CSA's December 2013 white paper, "because of BYOD and phishing attacks providing untrusted access inside the perimeter, and SaaS and IaaS changing the location of the perimeter. Software defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the traditional model's value of invisibility and inaccessibility to ‘outsiders’, but can be deployed anywhere – on the internet, in the cloud, at a hosting center, on the private corporate network, or across some or all of these locations."

This reality invalidates the fortress model of safeguarding the corporate network – one where all assets sit inside a well-defined perimeter that can be defended. Instead, each asset requires a micro-fortress around it, regardless of where it is located. The EventTracker sensor enables a micro-fortress around and near the endpoint on which it operates. It provides host-based intrusion detection, data leak protection, and endpoint threat detection. While the sensor itself operates on any Windows platform, it can also act as a forwarder for any local syslog sources, relaying logs over an encrypted connection.
Welcome to your software defined perimeter.