October 02, 2014
On April 16, 2013, a sniper took a hundred shots at Pacific Gas and Electric’s (PG&E) Metcalf transmission substation. The utility was able to reroute power on the grid and avert a blackout. The whole ordeal took nineteen tension-filled minutes. The event added muscle to the regulatory grip of the North American Electric Reliability Corporation (NERC), a not-for-profit entity whose mission is to ensure the reliability of the bulk power system in North America. A terrorist attack, domestic or otherwise, could bring a state’s power grid down.
October 01, 2014
An essential part of any IT security program is to hunt for unusual patterns in sensor (or log) data to uncover attacks. Aside from tools that gather and collate this data (for example, SIEM solutions like EventTracker), a smart pair of eyeballs is needed to sift through the data warehouse. In modern parlance, this person is called a data scientist: one who extracts knowledge from data. This requires a deep understanding of the available data and a feel for pattern recognition and visualization.
As Michael Schrage notes in the HBR Blog network “…the opportunities for data-science-enabled efficiencies and innovation are too important to defer or deny. Big organizations can afford — or think they can afford — to throw money at the problem by hiring laid-off Wall Street quants or hiring big-budget analytics boutiques. More frugal and prudent enterprises seem to be taking alternate approaches.”
Starting up a “center of excellence” or addressing a “grand challenge” is not practical for most organizations. Instead, how about an effort to deliver tangible and data-driven benefits in a short time frame?
Interestingly, Schrage notes “Without exception, every team I ran across or worked with hired outside expertise. They knew when a technical challenge and/or statistical technique was beyond the capability…the relationship was less of an RFP box-ticking exercise than a shared space…”
What does any of this have to do with SIEM, you ask?
Well, for the typical small/medium enterprise (SME), this is a familiar dilemma. Data, data everywhere and not a drop (of intelligence) to drink. Either a “data scientist” is not on the employee roster or does not have time available. How then do you square this circle? Look for outside expertise, as Schrage notes.
SIEM Simplified service
SMEs looking for expertise to extract value from the existing mountain of security data within their enterprise can leverage our SIEM Simplified service.
Unicorns don’t exist, but that doesn’t mean that doing nothing is a valid option.
September 29, 2014
What’s your thought on Shellshock? EventTracker CEO A.N. Ananth weighs in.
Shellshock (also known as Bashdoor) CVE-2014-6271 is a security bug in the broadly used Unix Bash shell. Bash is used to process certain commands across many internet daemons. It is a program that is used by various Unix-based systems to execute command scripts and command lines. Often it is installed as the system’s default command line interface.
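If you want to check whether a local bash is affected, the standard probe is to smuggle a command after a function definition in an environment variable; a patched bash ignores the trailing command. A minimal sketch, wrapping bash from Python (assumes bash is on the PATH):

```python
import os
import subprocess

def check_shellshock(bash="bash"):
    """Probe CVE-2014-6271: a vulnerable bash executes the command
    smuggled in after the function definition in an env variable."""
    env = dict(os.environ, testvar="() { :;}; echo vulnerable")
    out = subprocess.run([bash, "-c", "echo probe done"],
                         env=env, capture_output=True, text=True).stdout
    return "vulnerable" if "vulnerable" in out else "patched"

# Any bash updated after September 2014 should report "patched"
print(check_shellshock())
```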
September 24, 2014
#36 on the American Film Institute list of Top Movie Quotes is “Badges? We don’t need no stinkin badges” which has been used often (e.g., Blazing Saddles). The equivalent of this in the log management universe is a “Connector”. We are often asked how many “Connectors” we have readily available or how long it takes to develop a Connector.
These questions stem from a model used by programs such as ArcSight, which depend on Early Binding. In an earlier era of computing, Early Binding was necessary because the compiler had to resolve each call and create its entry in the virtual method table at compile time. It has the advantage of being efficient, an important consideration when CPU and memory were in very short supply, as they were years ago.
Just-in-time compiled languages such as .NET or Java adopt Late Binding, where the v-table is computed at run time. Years ago, Late Binding had negative connotations in terms of performance, but that hasn’t been true for at least 20 years.
Early Binding requires a fixed schema to be mandated for all possible entries and all input to be “normalized” to this schema. The benefit of the fixed plan is efficiency at output, since the data is already normalized. While that may make sense for compilers, whose input is expressed in formalized language grammars, it makes almost no sense in the log management universe, where the input is log data from sources that do not adopt any standardization at all. The downside of such an approach is that it requires a “Connector” to normalize each new log source to the fixed schema. Another consideration is that outputs can vary greatly depending on usage; there are many possible uses for the data, the only limitation being the user’s imagination. The Early Binding model, however, is designed with fixed outputs in mind. These disadvantages limit such designs.
In contrast, EventTracker uses Late Binding, where the meaning of tokens can be assigned at output (run) time rather than being fixed at receive time. Thus new log formats do not need a “Connector” to be available at ingest time. The desired output format can be specified at search or report time for easy viewing. This requires somewhat greater computing capacity, with Moore’s Law coming to the rescue. Late Binding is the primary advantage of EventTracker’s “Fast In, Smart Out” architecture.
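The contrast can be sketched in a few lines of code. In a Late Binding design, raw events are stored untouched at ingest, and tokens acquire meaning only when a search runs. This is a hypothetical illustration of the idea, not EventTracker’s actual internals:

```python
import re

# Ingest: store raw log lines untouched -- no schema, no Connector needed.
raw_store = []

def ingest(line):
    raw_store.append(line)

# Search time: bind meaning to tokens only when the question is asked.
def search(pattern, extract):
    """Return fields extracted from raw events matching a regex."""
    results = []
    for line in raw_store:
        m = re.search(pattern, line)
        if m:
            results.append(extract(m))
    return results

ingest("2014-09-24 10:01:02 sshd[991]: Failed password for admin from 10.0.0.7")
ingest("2014-09-24 10:01:05 sshd[991]: Accepted password for susan from 10.0.0.9")

# The "schema" (user, source IP) is chosen at query time, not at ingest time.
failures = search(r"Failed password for (\w+) from ([\d.]+)",
                  lambda m: {"user": m.group(1), "ip": m.group(2)})
print(failures)  # -> [{'user': 'admin', 'ip': '10.0.0.7'}]
```

A new log format only requires a new pattern at search time; nothing about the stored data has to change.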
September 10, 2014
If you spend any time at all looking at log data from any server that is accessible to the Internet, you will be shocked at the brazen attempts to knock the castle over. They begin within minutes of the server becoming available and most commonly include port scans, login attempts using default username/password pairs, and the web server attacks described by OWASP.
How can this possibly be, given the sheer number of machines that are visible on the Internet? Don’t these guys have anything better to do?
The answer is automation and scripted attacks, also known as spray and pray. The bad guys are capitalists too (regardless of country of origin!) and need to maximize the return on their effort, computing capacity and network bandwidth. Accordingly, they use automation to “knock on all available doors in a wealthy neighborhood” as efficiently and regularly as possible. Why pick on servers in developed countries? Because that’s where the payoff is likely to be higher. It’s Risk vs. Reward all the way.
The automated (first) wave of these attacks serves to identify vulnerable machines and establish presence. Following waves may be staffed, depending on the location and identity of the victim and thus the potential value to be obtained by a greater investment of (scarce) expertise by the attacker.
Such attacks can be deterred quite simply by using secure (non-default) configuration, system patching and basic security defenses such as firewall and anti-virus. This explains the repeated exhortations of security pundits on “best practice” and also the rationale behind compliance standards and auditors trying to enforce basic minimum safeguards.
The 80/20 rule applies to attackers just as it does to defenders. Attackers are trying to cover 80% of the ground at 20% of the cost, so as to at least identify soft high-value targets and at most steal from them. Defenders are trying to deter 80% of the attackers at 20% of the cost by using basic best practices.
Guidance such as the SANS Critical Controls or lessons from Verizon’s annual Data Breach studies can help you prioritize your actions. Attackers depend on the fact that the majority of users do not follow basic security hygiene, don’t collect the logs which would expose the attacker’s actions, and certainly never actually look at the logs.
Defeating “spray and pray” attacks requires basic tooling and discipline. The easy way to do this? We call it SIEM Simplified. Drop us a shout; it beats being a victim.
September 03, 2014
Most hackers go after critical data by way of credential theft. A credential theft attack is one in which an attacker initially gains privileged access to a computer on a network and then uses freely available tooling to extract credentials from the sessions of other logged-on accounts. The most prevalent target for credential theft is a “VIP account”: an account with access to highly sensitive data and systems that many others within the organization probably don’t have.
It’s very important for administrators to be conscious of activities that increase the likelihood of a successful credential-theft attack.
These activities are:
• Logging on to unsecured computers with privileged accounts
• Browsing the Internet with a highly privileged account
• Configuring local privileged accounts with the same credentials across systems
• Overpopulation and overuse of privileged domain groups
• Insufficient management of the security of domain controllers.
There are specific accounts, servers, and infrastructure components that are the usual primary targets of attacks against Active Directory.
These accounts are:
• Permanently privileged accounts
• VIP accounts
• “Privilege-Attached” Active Directory accounts
• Domain controllers
• Other infrastructure services that affect identity, access, and configuration management, such as public key infrastructure (PKI) servers and systems management servers
Pass-the-hash (PtH) and other credential theft attacks are ubiquitous today because freely available tooling makes it simple and easy to extract the credentials of other privileged accounts once an attacker has gained Administrator- or SYSTEM-level access to a computer.
Even without such tools, an attacker with privileged access to a computer can just as easily install keystroke loggers that capture keystrokes, screenshots, and clipboard contents. An attacker with privileged access to a computer can disable anti-malware software, install rootkits, modify protected files, or install malware on the computer that automates attacks or turns a server into a drive-by download host.
The tactics used to extend a breach beyond a single computer vary, but the key to propagating compromise is the acquisition of highly privileged access to additional systems. By reducing the number of accounts with privileged access to any system, you reduce the attack surface not only of that computer, but the likelihood of an attacker harvesting valuable credentials from the computer.
August 22, 2014
I often get asked how to audit the deletion of objects in Active Directory. It’s pretty easy to do this with the Windows Security Log – especially for tracking the deletion of users and groups, which I’ll show you first. All you have to do is enable “Audit user account management” and “Audit security group management” in the Default Domain Controllers Policy GPO.
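Once those categories are enabled, deletions surface as specific event IDs – 4726 when a user account is deleted, 4730 when a security group is deleted. A rough sketch of the filtering step, with event records shown as plain dicts (the field names are illustrative, not a specific API):

```python
# Deletion event IDs in the Windows Security Log (Server 2008 and later):
# 4726 = user account deleted, 4730 = security group deleted.
DELETION_IDS = {4726: "user deleted", 4730: "group deleted"}

def deletions(events):
    """Yield (what happened, who did it, what was deleted) for deletion events."""
    for e in events:
        label = DELETION_IDS.get(e["EventID"])
        if label:
            yield label, e["SubjectUserName"], e["TargetUserName"]

sample = [
    {"EventID": 4726, "SubjectUserName": "admin1", "TargetUserName": "jsmith"},
    {"EventID": 4624, "SubjectUserName": "jsmith", "TargetUserName": "-"},
]
print(list(deletions(sample)))
# -> [('user deleted', 'admin1', 'jsmith')]
```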
August 20, 2014
Nikunj Shah, team lead of the EventTracker SIEM Simplified team, provides some practical tips on analyzing login and pre-authentication failures:
1) Learn and know how to identify login events and their descriptions. A great resource to find event IDs is here: http://technet.microsoft.com/en-us/library/cc787567(v=ws.10).aspx.
2) Identify and look into the event description. To analyze events efficiently and effectively, you must read the event description. Within a login failure description, details like the failure reason, user name, logon type, workstation name and source network address are critical to your investigation and analysis. By knowing which details to pay attention to, you will easily eliminate the noise.
When using a system like EventTracker, the display of just the required fields eliminates the noise and shows you the error results immediately. EventTracker provides a summary based on the total number of events for each failure type and user name, automating the surfacing of your systems’ critical information.
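The summary step – counts per failure reason and user – is simple once the fields are extracted. A rough sketch over parsed 4625 (login failure) records, with illustrative field names:

```python
from collections import Counter

def summarize_failures(events):
    """Count login failures (event ID 4625) by (failure reason, user)."""
    return Counter(
        (e["FailureReason"], e["TargetUserName"])
        for e in events if e["EventID"] == 4625
    )

sample = [
    {"EventID": 4625, "FailureReason": "Bad password", "TargetUserName": "susan"},
    {"EventID": 4625, "FailureReason": "Bad password", "TargetUserName": "susan"},
    {"EventID": 4625, "FailureReason": "Unknown user", "TargetUserName": "admin"},
    {"EventID": 4624, "FailureReason": "-",            "TargetUserName": "susan"},
]

# Most frequent failure types bubble to the top; successes (4624) are ignored.
for (reason, user), n in summarize_failures(sample).most_common():
    print(f"{n:>3}  {reason:<14}{user}")
```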
This kind of analysis helps your enterprise run more efficiently and effectively than wading through traditional reports of the hundreds of events that happen every day. Doing it without the help of a management and monitoring tool is nearly impossible.
Please reference here for detailed charts.
August 14, 2014
To support security, compliance and operational requirements, specific and fast answers to the 4 W questions (Who, What, When, Where) are very desirable. These requirements drive the need for Security Information and Event Management (SIEM) solutions that provide detailed, single-pane-of-glass visibility into this data, which is constantly generated within your information ecosystem. This visibility and the attendant effectiveness are made possible by centralizing the collection, analysis and storage of log and other security data from sources throughout the enterprise network.
To obtain value from your SIEM solution, it must be watered and fed. This is an eternal commitment, whether your team chooses to do it yourself or get someone to do it for you. This new white paper from EventTracker examines the pros and cons of using a specialist external service provider.
“Think about this for a second: a lot more people will engage professional services to help them RUN, not just DEPLOY, a SIEM. However, this is not the same as managed services, as those organizations will continue to own their SIEM tools.” –Anton Chuvakin, Gartner Analyst
August 06, 2014
“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. ”
–Donald Rumsfeld, Secretary of Defense
In the SIEM world, the known knowns are alerts. We configure rules to look at security data for threats/problems that we find interesting and bring them to the operators’ attention. This is a huge step up the SIEM maturity scale from log ignorance. The Department of Homeland Security refers to this as “If you see something, say something.” What do you do when you see something? You “do something,” better known as alert-driven workflow. In the early stages of a SIEM implementation, a lot of time is spent refining alert definitions in order to reduce “noise.”
While this approach addresses the “known knowns,” it does nothing for the “unknown unknowns.” To identify the unknown, you must stop waiting for alerts and instead search for insights. This approach starts with a question rather than a reaction to an alert. Notice that often enough, it’s non-IT persons asking the questions, e.g., Who changed this file? Which systems did “Susan” access on Saturday?
This approach results in interactive investigation rather than the traditional drill down. For example:
– Show me all successful logins over the weekend
– Filter these to show only those on server3
– Why did “Susan” login here? Show all “Susan” activity over the weekend…
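That drill-down can be expressed as successive filters over the same result set; a toy illustration with logins as plain dicts:

```python
logins = [
    {"user": "susan", "host": "server3", "day": "Sat", "ok": True},
    {"user": "bob",   "host": "server1", "day": "Sat", "ok": True},
    {"user": "susan", "host": "server3", "day": "Sun", "ok": True},
]

# Step 1: all successful logins over the weekend
weekend = [e for e in logins if e["ok"] and e["day"] in ("Sat", "Sun")]
# Step 2: narrow to server3 only
server3 = [e for e in weekend if e["host"] == "server3"]
# Step 3: pivot -- everything "susan" did that weekend, on any host
susan = [e for e in weekend if e["user"] == "susan"]

print(len(weekend), len(server3), len(susan))  # -> 3 2 2
```

The third step is the interesting one: instead of drilling deeper, the investigator pivots sideways to a new question over the same data.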
This form of active data exploration requires a certain degree of expertise in log management tools, along with enough experience and knowledge of the data set to spot a thread that looks out of place. Once you get used to the idea, it is incredible how visible these patterns become. This is essential to “running a tight ship” and being aware of out-of-the-ordinary patterns given the baseline. When staffing the EventTracker SIEM Simplified service team, we are constantly looking for “insight hunters” instead of mere “alert responders.” Alert responding is so 2013…
July 30, 2014
The cliché goes, “When you assume, you make an ass out of u and me.” When implementing a SIEM solution, these five assumptions have the potential to get us in trouble. They stand in the way of organizational and personal success and thus are best avoided.
5. Security by obscurity or my network is too unimportant to be attacked
Small businesses tend to be more innovative and cost-conscious. Is there such a thing as too small for hackers to care? In this blog post we outlined why this is almost never the case. As the Verizon Data Breach Report shows year in and year out, companies with 11-100 employees from 36 countries had the maximum number of breaches.
4. I’ve got to do it myself to get it right
Charles de Gaulle observed, on humility, “The graveyards are full of indispensable men.” Everyone tries to demonstrate multifaceted skill, but it’s neither effective nor efficient. Corporations farm out work all the time. Tom Friedman explains it in “The World is Flat.”
3. Compliance = Security
This is only true if your auditor is your only threat actor. We tend to fear the known more than the unknown so it is often the case that we fear the (known) auditor more than we fear the (unknown) attacker. Among the myriad lessons from the Target breach, perhaps the most important is that “Compliance” does NOT equal Security.
2. All I have to do is plug it in; the rest happens by magic
Marketing departments of every security vendor would have you believe this of their magic appliance or software. When has this ever been true? Self-propelling lawn mower anyone?
1. It’s all about buying the most expen$ive technology
Kivas Fajo in “The Most Toys,” the 70th episode of Star Trek TNG, believed this. If you negotiated a 90% discount on a $200K solution and then parked it as shelfware, what did you get? A wasted $20K is what. It’s always about using what you have.
Bad assumptions = bad decisions.
July 24, 2014
Return on investment (ROI) — it is the Achilles heel of IT management. Nobody minds spending money to avoid costs, prevent disasters, and ultimately yield more than the initial investment outlay. But is the investment justified? It is challenging to calculate the ROI for any IT investment, and security information and event management (SIEM) tools are no exception. We recently explored some basic precepts or “pillars” of the ROI of SIEM tools and technology. These pillars provide some sensible groundwork for the difficult endeavor to justify intangible costs of SIEM tools and technology.
July 16, 2014
The three sides of the security triangle are People, Processes and Technology.
None of this is particularly new to CIOs and CSOs, yet how often have you seen six- or seven-digit “investments” sitting on datacenter racks, or sometimes on actual storage shelves, unused or heavily underused? Organizations throw away massive amounts of money, then complain about a “lack of security funds” and “being insecure.” For many organizations, buying security technologies is far too often an easier task than utilizing and “operationalizing” them. SIEM technology suffers from this problem, as do many other monitoring technologies.
Compliance and “checkbox mentality” makes this problem worse as people read the mandates and only pay attention to sections that refer to buying boxes.
Despite all this rhetoric, many managers equate information security with technology, completely ignoring the proper order of people, process, then technology. In reality, a skilled engineer with a so-so tool but a good process is more valuable than an untrained person equipped with the best of tools.
As Gartner analyst Anton Chuvakin notes, “…if you got a $200,000 security appliance for $20,000 (i.e. at a steep 90% discount), but never used it, you didn’t save $180k – you only wasted $20,000!”
Security is not something you BUY, but something you DO.
July 09, 2014
As we deal with forensic reviews of log data, our SIEM Simplified team is called upon to piece together a trail showing the four Ws: Who, What, When and Where. Logs can be your friend; if collected, centralized and indexed, they can get you answers very quickly.
There is a catch though. The “Where” question is usually answered by supplying either a system name or an IP Address which at the time in question was associated with that system name.
Is that good enough for the law? i.e., will the legal system accept that you are your IP Address?
Florida District Court Judge Ursula Ungaro says no.
Judge Ungaro was presented with a case brought by Malibu Media, who accused IP-address “188.8.131.52″ of sharing one of their films using BitTorrent without their permission. The Judge, however, was reluctant to issue a subpoena, and asked the company to explain how they could identify the actual infringer.
Responding to this order to show cause, Malibu Media gave an overview of their data gathering techniques. Among other things they explained that geo-location software was used to pinpoint the right location, and how they made sure that it was a residential address, and not a public hotspot.
Judge Ungaro welcomed the additional details, but saw nothing that actually proves that the account holder is the person who downloaded the file.
“Plaintiff has shown that the geolocation software can provide a location for an infringing IP address; however, Plaintiff has not shown how this geolocation software can establish the identity of the Defendant,” Ungaro wrote in an order last week.
“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” she adds.
As a side note, on April 26, 2012, Judge Ungaro ruled that an order issued by Florida Governor Rick Scott to randomly drug test 80,000 Florida state workers was unconstitutional. Ungaro found that Scott had not demonstrated that there was a compelling reason for the tests and that, as a result, they were an unreasonable search in violation of the Constitution.
July 02, 2014
There are three trends in Enterprise Networks:
1) Internet of Things Made Real. We’re all familiar with the challenge of big data: the volume, velocity and variety of data is overwhelming. Studies confirm the conclusion many of you have reached on your own: there’s more data crossing the internet every second than existed on the internet in total 20 years ago. And now, as customers deploy more sensors and devices in every part of their business, the data explosion is just beginning. This concept, called the “Internet of Things,” is a hot topic. Many businesses are uncovering efficiencies based on how connected devices drive decisions with more precision in their organizations.
2) “Reverse BYOD.” Most of us have seen firsthand how a mobile workplace can blur the line between our personal and professional lives. Today’s road warrior isn’t tethered to a PC in a traditional office setting. They move between multiple devices throughout their workdays with the expectation that they’ll be able to access their settings, data and applications. Forrester estimates that nearly 80 percent of workers spend at least some portion of their time working out of the office and 29 percent of the global workforce can be characterized as “anywhere, anytime” information workers. This trend was called “bring your own device” or “BYOD.” But now we’re seeing the reverse. Business-ready, secure devices are getting so good that organizations are centrally deploying mobility solutions that are equally effective at work and play.
3) Creating New Business Models with the Cloud. The conversation around cloud computing has moved from “if” to “when.” Initially driven by the need to reduce costs, many enterprises saw cloud computing as a way to move non-critical workloads such as messaging and storage to a more cost-efficient, cloud-based model. However, the larger benefit comes from customers who identify and grow new revenue models enabled by the cloud. The cloud provides a unique and sustainable way to enable business value, innovation and competitive differentiation, all of which are critical in a global marketplace that demands more mobility, flexibility, agility and better quality across the enterprise.
June 18, 2014
Are you familiar with the Kübler-Ross 5 Stages of Grief model?
SIEM implementations (and indeed most enterprise software installations) bear a striking resemblance.
May 20, 2014
The prevailing IT requirement tends toward doing more work faster, but with fewer resources available to do such work, many companies must reconsider their traditional approaches to developing, deploying and maintaining software. One such approach, called DevOps, first gained traction as a viable software development and deployment strategy in Europe in the late 2000s. DevOps is a marriage of convenience between development and operations teams.
May 12, 2014
Ask a pragmatic CISO about achieving a state of complete organizational security and you’ll quickly be told that this is an unrealistic and financially imprudent goal. So then, how much security is enough?
More than merely complying with regulations or implementing “best practice”, think in terms of optimizing the outcome of the security investment. So never mind the theoretical state of absolute security, think instead of determining and managing risk to critical business processes and assets.
Risk appetite is defined by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as “… the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (and its mission).” Risk appetite influences the entity’s culture, operating style, strategies, resource allocation, and infrastructure. Risk appetite is not a constant; it is influenced by and must adapt to changes in the environment. Risk tolerance could be defined as the residual risk the organization is willing to accept after implementing risk-mitigation and monitoring processes and controls. One way to implement this is to define levels of residual risk and, therefore, the level of security that is “enough.”
The basic level of security is the diligent one which is the staple of every business network; the organization is able to deal with known threats. The hardened level adds the ability to be proactive (with vulnerability scanning), compliant and gives the ability to perform forensic analysis. At the advanced level, predictive capabilities are introduced and the organization develops the ability to deal with unknown threats.
If it all sounds a bit overwhelming, take heart; managed security services can relieve your team of the heavy lifting that is a staple of IT Security.
Bottom line – determine your risk appetite to determine how much security is enough.
April 28, 2014
Security Information and Event Management (SIEM) is a term coined by Gartner in 2005 to describe technology used to monitor and help manage user and service privileges, directory services and other system configuration changes; as well as providing log auditing and review and incident response.
The core capabilities of SIEM technology are the broad scope of event collection and the ability to correlate and analyze events across disparate information sources. Simply put, SIEM technology collects log and security data from computers, network devices and applications on the network to enable alerting, archiving and reporting.
Once log and security data has been received, you can:
• Uncover threats: logs from firewalls and IDS/IPS sensors are useful to uncover external threats; logs from e-mail servers and proxy servers can help detect phishing attacks; logs from badge and thumbprint scanners are used to detect physical access
• Trail user activity: computer, network device and application logs are used to develop a trail of activity across the network by any user, but especially users with high privileges
• Protect critical data: most enterprises have critical data repositories in files/folders/databases, and these are attractive targets for attackers; by monitoring all server and database resource access, security is improved
• Correlate across sources: with all logs and security data in one place, an especially useful benefit is the ability to correlate user activity across the network
• Ease compliance: often the source of funding for SIEM; when properly set up, auditor on-site time can be reduced by up to 90%, and, more importantly, compliance follows the spirit of the law rather than being merely a check-the-box exercise
• Answer Who, What, When, Where questions: such questions are the heart of forensic activities and critical for drawing valuable lessons
SIEM technology is routinely cited as a basic best practice by every regulatory standard, and its absence has regularly been shown as a glaring weakness in data breach post-mortems.
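As a concrete example of cross-source correlation, a classic rule flags a burst of failed logins followed by a success from the same source address. A simplified sketch (the event shape and threshold are illustrative):

```python
def brute_force_then_success(events, threshold=3):
    """Flag source IPs with >= threshold failures followed by a success.

    Events are assumed to arrive in time order, each with an
    'ip' and an 'outcome' ('failure' or 'success') field.
    """
    failures = {}
    alerts = []
    for e in events:
        ip = e["ip"]
        if e["outcome"] == "failure":
            failures[ip] = failures.get(ip, 0) + 1
        elif e["outcome"] == "success" and failures.get(ip, 0) >= threshold:
            alerts.append(ip)      # likely brute force that finally worked
            failures[ip] = 0
    return alerts

stream = [{"ip": "10.0.0.7", "outcome": "failure"}] * 4 + \
         [{"ip": "10.0.0.7", "outcome": "success"}]
print(brute_force_then_success(stream))  # -> ['10.0.0.7']
```

Neither the failures nor the success is suspicious on its own; only the correlated sequence is, which is why the data has to live in one place.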
Want the benefit but not the hassle? Consider SIEM Simplified, our service where we do the disciplined blocking and tackling which forms the core of any security or compliance regime.
April 16, 2014
Analyzing all the login and pre-authentication failures within your organization can be tedious. There are thousands of login failures generated for several reasons. Here we will discuss the different event IDs and error codes and how you can simplify the login failure review process.
April 15, 2014
The typical SIEM implementation suffers from TMI, TLA (Too Much Information, Too Little Analysis). And if any organization that’s recently been in the news knows this, it’s the National Security Agency (NSA). The Wall Street Journal carried this story quoting William Binney, who rose through the ranks at the NSA over a 30-year career, retiring in 2001. “The NSA knows so much it cannot understand what it has,” Binney said. “What they are doing is making themselves dysfunctional by taking all this data.”
Most SIEM implementations start at this premise – open the floodgates, gather everything because we are not sure what we are specifically looking for, and more importantly, the auditors don’t help and the regulations are vague and poorly worded.
Lt. Gen. Clarence E. McKnight, the former head of the Signal Corps, opined: “The issue is a straightforward one of simple ability to manage data effectively in order to provide our leaders with actionable information. Too much raw data compromises that ability. That is all there is to it.”
A presidential panel recently recommended the NSA shut down its bulk collection of telephone call records of all Americans. It also recommended creation of “smart software” to sort data as it is collected, rather than accumulate vast troves of information for sorting out later. The reality is that the collection becomes an end in itself, and the sorting out never gets done.
The NSA may be a large, powerful bureaucracy, intrinsically resistant to change, but how about your organization? If you are seeking a way to get real value out of SIEM data, consider co-sourcing that problem to a team that does that for a living. SIEM Simplified was created for just that purpose. Switch from TMI, TLA (Too Much Information, Too Little Analysis) to JEI, JEA (Just Enough Information, Just Enough Analysis).
April 11, 2014
The usage of OpenSSL in EventTracker v7.5 is NOT vulnerable to heartbleed.
A lot of attention has focused on CVE-2014-0160, the Heartbleed vulnerability in OpenSSL. According to http://heartbleed.com, OpenSSL 0.9.8 is NOT vulnerable.
The EventTracker Windows Agent uses OpenSSL indirectly if the following options are enabled and used:
1) Send Windows events as syslog messages AND use the FTP server option to transfer non-real-time events to an FTP server. To support this mode of operation, WinSCP.exe v4.2.9 is distributed as part of the EventTracker Windows Agent. This version of WinSCP.exe is compiled with OpenSSL 0.9.8, as documented in http://winscp.net/eng/docs/history_old (v4.2.6 onwards). Accordingly, the EventTracker Windows Agent is NOT vulnerable.
2) Configuration Assessment (SCAP). This optional feature uses ovaldi.exe v5.8 Build 2 which in turn includes OpenLDAP v2.3.27 as documented in the OVALDI-README distributed with the EventTracker install package. This version of OpenLDAP uses OpenSSL v0.9.8c which is NOT vulnerable.
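Checking exposure to Heartbleed comes down to the OpenSSL version: releases 1.0.1 through 1.0.1f are vulnerable, while the 0.9.8 and 1.0.0 branches never were (1.0.2 betas were also affected, but are ignored in this sketch). A small version classifier:

```python
import re

def heartbleed_vulnerable(version):
    """True if an OpenSSL version string falls in the CVE-2014-0160
    range: 1.0.1 through 1.0.1f. Fixed in 1.0.1g."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)([a-z]?)", version)
    if not m:
        raise ValueError(f"unparseable version: {version}")
    major, minor, fix, letter = int(m[1]), int(m[2]), int(m[3]), m[4]
    if (major, minor, fix) != (1, 0, 1):
        return False        # 0.9.8 and 1.0.0 branches were never affected
    return letter <= "f"    # bare 1.0.1 up to 1.0.1f vulnerable

print(heartbleed_vulnerable("0.9.8c"))  # -> False (the version cited above)
print(heartbleed_vulnerable("1.0.1e"))  # -> True
```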
April 10, 2014
Top 5 reasons IT Admins love logs:
1) Answer the ‘W’ questions
Who, what, where and when; critical files, logins, USB inserts, downloads…see it all
2) Cut ’em off at the pass, ke-mo sah-bee
Get an early warning before the train jumps the track. It’s what IT Admins do.
3) Demonstrate compliance
Don’t even try to demonstrate compliance until you get a log management solution in place. Reduce on-site auditor time by 90%.
4) Get a life
Want to go home on time and enjoy the weekend? How about getting proactive instead of reactive?
5) Logs tell you what users don’t
“It wasn’t me. I didn’t do it.” Have you heard this before? Logs don’t lie.
March 26, 2014
After an attacker has compromised a target infrastructure, the typical next step is credential theft. The objective is to propagate compromise across additional systems, and eventually target Active Directory and domain controllers to obtain complete control of the network.
March 20, 2014
Top 5 Reasons Sys Admins hate logs:
1) Logs multiply – the volume problem
A single server easily generates 0.25M logs every day, even when operating normally. How many servers do you have? Then add workstations, applications and network devices on top of that.
2) Log obscurity – what does it mean?
Jan 2 19:03:22 r37s9p2 oesaudit: type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800
Do what now? Go where? ‘Nuff said.
3) Real hackers don’t get logged
If your purpose of logging is, for example, to review logs to “identify and proactively address unauthorized access to cardholder data” for PCI-DSS, how do you know what you don’t know?
4) How can I tell you logged in? Let me count the ways
This is a simple question with a complex answer. It depends on where you logged in. Linux? Solaris? Cisco? Windows 2003? Windows 2008? Application? VMware? Amazon EC2?
5) Compliance forced down your throat, but no specific guidance
Have you ever been in the rainforest with no map, creepy crawlies everywhere, low on supplies and a day’s trek to the nearest settlement? That’s how IT guys feel when management drops a 100+ page compliance standard on their desk.
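On the obscurity problem in item 2: audit records like the one shown are mostly space-separated key=value pairs, so even a crude parser goes a long way toward making them readable. A minimal sketch in Python (the function name and the summary format are illustrative assumptions, not any particular tool's output):

```python
import re

# The cryptic Linux audit SYSCALL record from item 2 above.
RECORD = ("type=SYSCALL msg=audit(01/02/13 19:03:22.683:318) : "
          "arch=i386 syscall=open success=yes exit=3 a0=80e3f08 a1=18800")

def parse_audit(record: str) -> dict:
    """Pull out the key=value pairs from a raw audit record."""
    return dict(re.findall(r"(\w+)=(\S+)", record))

fields = parse_audit(RECORD)
print(f"{fields['syscall']} syscall, success={fields['success']}, "
      f"exit={fields['exit']}")
# -> open syscall, success=yes, exit=3
```

This is exactly the kind of normalization a log management solution does at scale, across every format in item 4's list.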
March 06, 2014
The US Presidential elections of 2012 confounded many pundits. The Republican candidate, Gov. Mitt Romney, put together a strong campaign, and polls leading into the final week suggested a close race. The final results were not so close: Barack Obama handily won a second term.
Antony Young explains how the Obama campaign used big data, analytics and micro-targeting to mobilize key voter blocs, giving Obama the numbers he needed to push him over the edge.
“The Obama camp in preparing for this election, established a huge Analytics group that comprised of behavioral scientists, data technologists and mathematicians. They worked tirelessly to gather and interpret data to inform every part of the campaign. They built up a voter file that included voter history, demographic profiles, but also collected numerous other data points around interests … for example, did they give to charitable organizations or which magazines did they read to help them better understand who they were and better identify the group of ‘persuadables‘ to target.”
“That data was able to be drilled down to zip codes, individual households and in many cases individuals within those households.”
“However it is how they deployed this data in activating their campaign that translated the insight they garnered into killer tactics for the Obama campaign.
“Volunteers canvassing door to door or calling constituents were able to access these profiles via an app accessed on an iPad, iPhone or Android mobile device to provide an instant transcript to help them steer their conversations. They were also able to input new data from their conversation back into the database real time.
“The profiles informed their direct and email fundraising efforts. They used issues such Obama’s support for gay marriage or Romney’s missteps in his portrayal of women to directly target more liberal and professional women on their database, with messages that “Obama is for women,” using that opportunity to solicit contributions to his campaign.
“Marketers need to take heed of how the Obama campaign transformed their marketing approach centered around data. They demonstrated incredible discipline to capture data across multiple sources and then to inform every element of the marketing – direct to consumer, on the ground efforts, unpaid and paid media. Their ability to dissect potential prospects into narrow segments or even at an individual level and develop specific relevant messaging created highly persuasive communications. And finally their approach to tap their committed fans was hugely powerful. The Obama campaign provides a compelling case for companies to build their marketing expertise around big data and micro-targeting. How ready is your organization to do the same?”
February 26, 2014
Doris Lessing passed away at the end of last year. The freewheeling Nobel Prize-winning writer on racism, colonialism, feminism and communism died November 17 at the age of 94, and was prolific for most of her life. But five years ago, she said the writing had dried up. “Don’t imagine you’ll have it forever,” she said, according to one obituary. “Use it while you’ve got it because it’ll go; it’s sliding away like water down a plug hole.”
In the very fast-changing world of IT, it is common to feel like an old fogey. Everything changes at bewildering speed, from hardware specs to programming languages to user interfaces. We hear of wunderkinds whose innovations transform our very culture. Think Mozart or Zuckerberg, to name two.
Tara Bahrampour examined the idea, and quotes author Mark Walton, “What’s really interesting from the neuroscience point of view is that we are hard-wired for creativity for as long as we stay at it, as long as nothing bad happens to our brain.”
The field also matters.
Howard Gardner, professor of cognition and education at the Harvard Graduate School of Education says, “Large creative breakthroughs are more likely to occur with younger scientists and mathematicians, and with lyric poets, than with individuals who create longer forms.”
In fields like law, psychoanalysis and perhaps history and philosophy, on the other hand, “you need a much longer lead time, and so your best work is likely to occur in the latter years. You should start when you are young, but there is no reason whatsoever to assume that you will stop being creative just because you have grey hair,” Gardner said.
Old dogs take heart; you can learn new tricks as long as you stay open to new ideas.
February 20, 2014
Over the years, we have had a chance to witness a large number of SIEM implementations, with results ranging from superb to colossal failure. What is common to the failures? This blog by Keith Strier nails it:
“1) Design Democracy: Find all internal stakeholders and grant all of them veto power. The result is inevitably a mediocre mess. The collective wisdom of the masses is not the best thing here. A super empowered individual is usually found at the center of the successful implementation. If multiple stakeholders are involved, this person builds consensus but nobody else has veto power.
“2) Ignore the little things: A great implementation is a set of micro-experiences that add up to make the whole. Think of the Apple iPhone, every detail from the shape, size, appearance to every icon and gesture and feature converges to enhance the user experience. The path to failure is just focus on the big picture, ignore the little things from authentication to navigation and just launch to meet deadline.
“3) Avoid Passion: View the implementation as non-strategic overhead; implement and deploy without passion. Result? At best, requirements are fulfilled but users are unlikely to be empowered. Milestones may be met but business sponsors still complain. Prioritizing deadlines, linking IT staff bonuses to delivery metrics, squashing creativity is a sure way to launch technology failures that crush morale.”
February 19, 2014
Unstructured data access governance is a big compliance concern. Unstructured data is difficult to secure because there is so much of it, it grows so fast, and it is user-created, so it does not automatically get categorized and controlled like structured data in databases. Moreover, unstructured data is usually a treasure trove of sensitive and confidential information, in a format that bad guys can consume and understand without reverse engineering the relationships between tables in a relational database.
January 30, 2014
For any working professional in 2013, multiple screens, devices and apps are integral instruments for success. The multitasking can be overwhelming and dependence on gadgets and Internet connectivity can become a full-blown addiction.
There are digital detox facilities for those whose careers and relationships have been ruined by extreme gadget use. Shambhalah Ranch in Northern California has a three-day retreat for people who feel addicted to their gadgets. For 72 hours, the participants eat vegan food, practice yoga, swim in a nearby creek, take long walks in the woods, and keep a journal about being offline. Participants have one thing in common: they’re driven to distraction by the Internet.
Is this you? Checking e-mail in the bathroom and sleeping with your cell phone by your bed are now considered normal. According to the Pew Research Center, in 2007 only 58 percent of people used their phones to text; last year it was 80 percent. More than half of all cell phone users have smartphones, giving them Internet access all the time. As a result, the number of hours Americans spend collectively online has almost doubled since 2010, according to ComScore, a digital analytics company.
Teens and twentysomethings are the most wired. In 2011, Diana Rehling and Wendy Bjorklund, communications professors at St. Cloud State University in Minnesota, surveyed their undergraduates and found that the average college student checks Facebook 20 times an hour.
So what can Luke Skywalker teach you? Shane O’Neill says it well:
“The climactic Death Star battle scene is the centerpiece of the movie’s nature vs. technology motif, a reminder to today’s viewers about the perils of relying too much on gadgets and not enough on human intuition. You’ll recall that Luke and his team of X-Wing fighters are attacking Darth Vader’s planet-size command center. Pilots are relying on a navigation and targeting system displayed through a small screen (using gloriously outdated computer graphics) to try to drop torpedoes into the belly of the Death Star. No pilot has succeeded, and a few have been blown to bits.
“Luke, an apprentice still learning the ways of The Force from the wise — but now dead — Obi-Wan Kenobi, decides to put The Force to work in the heat of battle. He pushes the navigation screen away from his face, shuts off his “targeting computer” and lets The Force guide his mind and his jet’s torpedo to the precise target.
“Luke put down his gadget, blocked out the noise and found a quiet place of Zen-like focus. George Lucas was making an anti-technology statement 36 years ago that resonates today. The overarching message of Star Wars is to use technology for good. Use it to conquer evil, but don’t let it override your own human Force. Don’t let technology replace you.
“Take a lesson from a great Jedi warrior. Push the screen away from time to time and give your mind and personality a chance to shine. When it’s time to use the screen again, use it for good.”