July 24, 2012
The single most revealing moment in the coverage of JPMorgan’s multibillion dollar debacle can be found in this take-your-breath-away passage from The Wall Street Journal: On April 30, associates who were gathered in a conference room handed Mr. Dimon summaries and analyses of the losses. But there were no details about the trades themselves. “I want to see the positions!” he barked, throwing down the papers, according to attendees. “Now! I want to see everything!”
When Mr. Dimon saw the numbers, these people say, he couldn’t breathe.
Only when he saw the actual trades — the raw data — did Mr. Dimon realize the full magnitude of his company’s situation. The horrible irony: The very detail-oriented systems (and people) Dimon had put in place had obscured rather than surfaced his bank’s horrible hedge.
This underscores the new trust versus due diligence dilemma outlined by Michael Schrage. Raw data can have enormous impact on executive perceptions that pre-chewed analytics lack. This is not to minimize or marginalize the importance of analysis and interpretation; but nothing creates situational awareness faster than seeing with your own eyes what your experts are trying to synthesize and summarize.
There’s a reason why great chefs visit the farms and markets that source their restaurants: the raw ingredients are critical to success — or failure.
We have spent a lot of energy building dashboards for critical log data, and we recognize the value of these summaries; but while we should trust our data, we also need to do our due diligence.
June 19, 2012
Previously, we discussed looking for opportunities to apply analytics to the data in your own backyard. The focus on ‘Big Data’ and sophisticated analytics tends to cause business and IT staff to overlook the in-house data already abundantly present and available for analysis. As the cost of data acquisition and storage has dropped along with the cost of computing, the amount of data available, as well as the opportunity and ability to analyze it extensively, has exploded. The task is to discover and unlock the information hidden in all the available data.
May 23, 2012
In information technology, big data consists of data sets that grow so large that they become awkward to work with using whatever database management tools are on hand. For that matter, how big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 GB, in others, 100 TB. So, following up on our earlier post about big data and insight, there is one more important consideration:
Does insight equal decision?
The foregone conclusion from big data proponents is that each nugget of “insight” uncovered by data mining will somehow be implicitly actionable and the end user (or management) will gush with excitement and praise.
The first problem is how can you assume that “insight” is actionable? It very well may not be, so what do you do then? The next problem is how can you convince the decision maker that the evidence constitutes an imperative to act? Absent action, the “insight” remains simply a nugget of information.
Note that management typically responds to “insight” with skepticism, seeing the message bearer as yet another purveyor of information (“insight”) insisting that this new method is the silver bullet, thereby adding to the workload.
Being in management myself, my team often comes to me with their little nuggets … some are gold, but some are chicken. Rather than purvey insight, think about a recommendation backed up by evidence.
May 09, 2012
In information technology, big data consists of data sets that grow so large they become unwieldy to work with using available database management tools. How big is big? It depends on when you need to reconsider data management options – in some cases it may be 100 Gigabytes, in others, as great as 100 Terabytes.
Does more data necessarily mean more insight?
The pro-argument is that larger data sets allow for a greater incidence of patterns, facts, and insights. Moreover, with enough data, you can discover trends using simple counting that would otherwise be undiscoverable in small data sets even with sophisticated statistical methods.
On the other hand, while this is perfectly valid in theory, for many businesses the key barrier is not the ability to draw insights from large volumes of data; it is asking the right questions for which insight is needed.
The ability to provide answers does depend on the question being asked and the relevance of the big-data set to that question. How can one generalize to an assumption that more data will always mean more insight? It isn’t always the answer that’s important, but the questions that are key.
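To make the “simple counting” point concrete, here is a minimal Python sketch – an illustration only, assuming log records are already available as (user, event) pairs. With a large enough stream, plain frequency counting surfaces the dominant patterns without any statistical modeling; but it still answers only the question you chose to count.

from collections import Counter

def top_patterns(records, n=10):
    # Count (user, event) pairs across a large record stream.
    # With enough data, plain counting surfaces recurring patterns
    # (e.g., one account generating most of the failed logons).
    counts = Counter((user, event) for user, event in records)
    return counts.most_common(n)

if __name__ == "__main__":
    sample = [
        ("svc_backup", "LOGON_FAILURE"),
        ("jdoe", "LOGON_SUCCESS"),
        ("svc_backup", "LOGON_FAILURE"),
        ("svc_backup", "LOGON_FAILURE"),
    ]
    for (user, event), hits in top_patterns(sample, n=3):
        print(user, event, hits)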
May 02, 2012
Here is an anecdote from a recent interaction with an enterprise application in the electric power industry:
1. Dave the developer logs all kinds of events. Since he is the primary consumer of the log, the format is optimized for human-readability. For example:
02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59
Apparently this makes perfect sense to Dave: each line includes a timestamp and some text.
2. Sam from the Security team needs to determine the number of daily unique users. Dave quickly writes a parser script for the log and schedules it. He also builds a little Web interface so that Sam can query the parsed data on his own. Peace reigns.
3. A few weeks later, Sam complains that the web interface is broken. Dave takes a look at the logs, only to realize that someone else has added an extra field to each line, breaking his custom parser. He fixes the parser, pushes the change, and tells Sam that everything is okay again. Instead of writing a new feature, Dave has to go back and fill in the missing data.
4. Every 3 weeks or so, repeat Step 3 as others add logs.
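For illustration, here is a Python sketch (with hypothetical field names) of the kind of parser Dave might have written against the format shown above. The hard-coded assumption that the user ID is the token right after the timestamp is exactly what breaks – or, worse, silently captures the wrong value – when someone inserts a new field.

import re

# Assumes the exact layout shown above: timestamp, then the user ID, then free text.
LINE_RE = re.compile(
    r"^(?P<ts>\d{2}-[A-Z]{3}-\d{4} \d{2}:\d{2}:\d{2}) "
    r"(?P<user>\S+) "
    r"(?P<rest>.*)$"
)

def unique_users(lines):
    # Return the set of user IDs seen in a batch of log lines.
    users = set()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # unparsed lines vanish without a trace
        # If someone adds a field between the timestamp and the user ID,
        # this silently captures the new field instead of the user.
        users.add(m.group("user"))
    return users

if __name__ == "__main__":
    sample = [
        "02-APR-2012 01:34:03 USER49 CMD MOD0053: ERROR RETURN FROM MOD0052 RETCODE 59",
    ]
    print(sorted(unique_users(sample)))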
April 18, 2012
Back in January, I said that the use of sophisticated analytics as a business and competitive tool would become widespread. Since then, the number of articles, blogs and announcements relating to analytics has increased dramatically: an internet search for the term ‘Business Analytics’ using Bing yields over 47 million hits. Smart Analytics (an IBM term) shrinks that number to approximately 12.3 million hits. If we change the search term to ‘Applied Analytics,’ the number decreases to a little less than 7 million hits.
March 14, 2012
Prism Microsystems’ founders decided early on that their goal and reason for the company’s existence was to design, develop and deliver SIEM services. As executives with a successful history in entrepreneurship, product development and enterprise management, they knew the risk and seductive promise of distracting diversification in pursuit of expanded revenues. They committed to concentrating specifically on the SIEM functions of monitoring, discovery and warning about threats to security, compliance (in its multiple modes) and operational commitments.
March 07, 2012
In The Information Diet, Clay Johnson wrote, “The modern human animal spends upwards of 11 hours out of every 24 in a state of constant consumption. Not eating, but gorging on information … We’re all battling a storm of distractions, buffeted with notifications and tempted by tasty tidbits of information. And just as too much junk food can lead to obesity, too much junk information can lead to cluelessness.”
Audit yourself and you may be surprised to find that you get more than 10 notifications per hour; they can be disruptive to your attention. I find myself trying hard (and often failing) to ignore the smartphone as it beeps softly to indicate a new distraction. I struggle to remain focused on the person in my office as the desktop tinkles for attention.
Should you kill off notifications though? Clay argues that you should and offers tools to help.
When designing EventTracker v7, minimizing notifications was a major goal. On Christmas Day in 2008, nobody was stirring, but the “alerts” console rang up over 180 items demanding review. It was obvious these were not “alerts.” This led to the “risk” score, which dramatically reduces notifications.
We know that all “alerts” are not equal: some merit attention before going to lunch, some before the end of the day, and some by the end of the quarter, budget permitting. There are a very rare few that require us to drop the coffee mug and attend instantly. Accordingly, a properly configured EventTracker installation will rarely “notify” you; but when you need to know — that alert will come screaming for your attention.
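As a rough illustration of the idea – not EventTracker’s actual scoring logic – a risk score can be sketched in Python as event severity weighted by the criticality of the affected asset, with notification reserved for scores above a threshold; the names and numbers below are assumptions for the example only.

NOTIFY_THRESHOLD = 60  # assumed cut-off for "drop the coffee mug" alerts

def risk_score(severity, asset_criticality, confidence=1.0):
    # severity and asset_criticality on a 1-10 scale; confidence 0..1
    return severity * asset_criticality * confidence

def triage(alert):
    score = risk_score(alert["severity"], alert["criticality"], alert.get("confidence", 1.0))
    if score >= NOTIFY_THRESHOLD:
        return "notify now"
    if score >= 30:
        return "review by end of day"
    return "roll into the weekly report"

if __name__ == "__main__":
    alerts = [
        {"name": "AV signature update failed", "severity": 3, "criticality": 2},
        {"name": "Admin logon from unknown host", "severity": 9, "criticality": 9},
    ]
    for a in alerts:
        print(a["name"], "->", triage(a))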
I am frequently asked about the maximum events per second that can be managed. I think I’ll begin to ask how many notifications per hour (NPH) the questioner can handle. I think Clay Johnson would approve.
February 29, 2012
The sailor in The Rime of the Ancient Mariner relates his experiences after a long sea voyage when his ship is blown off course:
“Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.”
An albatross appears and leads them out, but is shot by the Mariner and the ship winds up in unknown waters. His shipmates blame the Mariner and force him to wear the dead albatross around his neck.
Replace water with data, boards with disk space, and drink with value and the lament would apply to the modern IT infrastructure. We are all drowning in data, but not so much in value. “Big data” are datasets that grow so large that managing them with on-hand tools is awkward. They are seen as the next frontier in innovation, competition, and productivity.
Log management is not immune to this trend. As the basic log collection problem (different sources, different protocols and different formats) has been resolved, we’re now collecting even larger datasets of logs. Many years ago we refuted the argument that log data belonged in an RDBMS, precisely because we saw the side problem of efficient data archival beginning to overwhelm the true problem of extracting value from the data. As log data volumes continue to explode, that decision continues to be validated.
However, while storing raw logs in a database was not sensible, the power of databases in extracting patterns and value from data is well established. Recognizing this, EventVault Explorer was released in 2011. Users can extract selected datasets to their choice of external RDBMS (a datamart) for fuzzy searching, pivot tables and so on. As was noted here, the key to managing big data is to personalize the results for maximum impact.
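As an illustration of what an extracted dataset enables downstream – a sketch only, assuming the extract has been loaded into a local SQLite datamart with a logs(event_time, user, event_type) table; this is not the EventVault Explorer interface itself – a pivot-style summary is a simple query away:

import sqlite3

def daily_event_pivot(db_path):
    # Pivot-style summary: event counts per user per day.
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT date(event_time) AS day, user, COUNT(*) AS events "
        "FROM logs GROUP BY day, user ORDER BY day, events DESC"
    ).fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    for day, user, events in daily_event_pivot("eventvault_extract.db"):
        print(day, user, events)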
As you look under the covers of SIEM technology, pay attention to that albatross called log archives. It can lead you out of trouble, but you don’t want it around your neck.
February 22, 2012
5. Overdoing compensating controls
When a legitimate technological or documented business constraint prevents you from satisfying a requirement, a compensating control can be the answer after a risk analysis is performed. Compensating controls are not specifically defined inside PCI, but are instead defined by you (as a self-certifying merchant) or your QSA. They are specifically not an excuse to push PCI compliance initiatives through to completion at minimal cost to your company. In reality, most compensating controls are harder to implement and cost more money in the long run than actually fixing or addressing the original issue or vulnerability. See this article for a clear picture on the topic.
4. Separation of duty
Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost/effort required. Both PCI DSS Requirements 3.4.1 and 3.5 mention separation of duties as an obligation for organizations, and yet many still do not do it right, usually because they lack staff.
3. Principle of Least privilege
PCI Requirement 2.2.3 says organizations should “configure system security parameters to prevent misuse.” This requires organizations to drill down into user roles to ensure they’re following the rule of least privilege wherever PCI regulations apply. This is easier said than done; more often it’s “easier” to grant all possible privileges than to determine and assign just the correct set. Convenience is the enemy of security.
2. Fixating on excluding systems from scope
When you make the process of getting things out of scope a higher priority than addressing real risk, you get in trouble. Risk mitigation must come first and foremost. In far too many cases, out-of-scope becomes out-of-mind. This may make your CFO happy, but a hacker will get past weak security and not care if the system is in scope or not.
And drum roll …
1. Ignoring virtualization
Many organizations have embraced virtualization wholeheartedly, given its efficiency gains. In some cases, virtualized machines are now off-premises and co-located at a service provider like Rackspace. This is a trend at federal government facilities. However, “off-premises” does not mean “off-your-list”. Regardless of the location of the cardholder data, such systems are within scope, as is the hypervisor. In fact, PCI DSS 2.0 says that if cardholder data is present on even one VM, then the entire VM infrastructure is “in scope.”
February 15, 2012
While there are still some who question the ‘relevance’ of IT to the enterprise, and others who question the ‘future’ of IT, those involved in day-to-day business activities recognize and acknowledge that IT operations is integral to business success, and this is unlikely to change in the immediate future. Today’s IT staffer with security incident and event management (SIEM) responsibility must be able not only to detect, identify and respond to anomalies in infrastructure performance and operations, but also to build processes, make decisions and take action based on the business impact of the incidents and events recorded in ubiquitous logs.
February 14, 2012
Since every cause needs “Awareness,” here are my picks for management speak to camouflage the bloody obvious:
5. Events per second
Log Management vendors are still trying to “differentiate” with this tired and meaningless metric as we pointed out in The EPS Myth.
4. Thought leadership
Mitch McCrimmon describes it best.
Now here is a term that means all things to all people.
2. Does that make sense?
The new “to be honest.” Jerry Weismann discusses it in the Harvard Business Review.
During the recent SOPA debate, so many self-described “country boys” wanted to get the “nerds” to explain the issue to them; as Jon Stewart pointed out, the word they were looking for was “expert.”
February 08, 2012
The Appalachian Trail is a marked hiking trail in the eastern United States extending between Georgia and Maine. It is approximately 2,181 miles long and takes about six months to complete. It is not a particularly difficult journey from start to finish; yet even so, completing the trail requires more from the hiker than just enthusiasm, endurance and will.
Likewise, SIEM implementation can take from one to six months to complete (depending on the level of customization) and like the Trail, appears deceptively simple. It too, can be filled with challenges that reduce even the most experienced IT manager to despair, and there is no shortage of implementations that have been abandoned or uncompleted. As with the Trail, SIEM implementation requires thoughtful consideration.
1) The Reasons Why
It doesn’t take too many nights scurrying to find shelter in a lightning storm, or days walking in adverse conditions before a hiker wonders: Why am I doing this again? Similarly, when implementing any IT project, SIEM included, it doesn’t take too many inter-departmental meetings, technical gotchas, or budget discussions before this same question presents itself: Why are we doing this again?
All too often, we don’t have a compelling answer, or we have forgotten it. If you are considering a half-year-long backpacking trip through the woods, there is a really good reason for it. In the same way, one embarks on a SIEM project with specific goals, such as regulatory compliance, IT security improvement or controlling operating costs. Define the answer to this question before you begin the project and refer to it when the implementation appears to be derailing. This is the compass that should guide your way. Make adjustments as necessary.
2) The Virginia Blues
Daily trials can include anything from broken bones to homesickness, a low point that tends to hit about four to eight weeks into the journey on the Appalachian Trail, within the state lines of Virginia. Getting through requires not just perseverance but also an ability to adapt.
For a SIEM project, staff turnover, false positives, misconfigurations or unplanned explosions of data can potentially derail the project. But pushing harder in the face of distress is a recipe for failure. Step back, remind yourself of the reasons why this project is underway, and look at the problems from a fresh perspective. Can you be flexible? Can you find new avenues to go around the problems?
3) A Fresh Perspective
In the beginning, every day is chock full of excitement; every summit view or wild animal encounter is thrilling. But life in the woods eventually becomes routine, and exhilaration fades into frustration.
In much the same way, after the initial thrill of installation and its challenges, the SIEM project devolves into a routine of discipline and daily observation across the infrastructure for signs of something amiss.
This is where boredom can set in, but the best defense against the lull that comes with the end of the implementation is to expect it. The installation is going to end, and completing it is not the finish line. Rather, when the installation is done, the real journey and the hard work begin.
February 01, 2012
Among InfoSec and IT staff, there is a lot of behind-the-scenes hand wringing that users are the weakest link. But are InfoSec staff that much stronger?
While automation does have a place, Dan Geer of the CIA-backed venture fund In-Q-Tel properly notes that while ”…humans can build structures more complex” than they can operate, the question remains: ”…Are humans in the loop a failsafe or a liability? Is fully automated security to be desired or to be feared?”
We’ve considered this question before at Prism, when “automated remediation” was being heavily touted as a solution for mid-market enterprises, where IT staff is not abundant. We’ve found that human intervention is not just a fail-safe, but a necessity; the interdependencies, even in medium-sized networks, are far too complex to automate fully. We introduced the feature a couple of years back and, in reviewing its usage, concluded that such “automated remediation” does have a role to play in the modern enterprise. Use cases include changes to group membership in Active Directory, unrecognized processes, account creation where the naming convention is not followed, or honeypot access. In other words, when the condition can be well defined and narrowly focused, humans in the loop will only slow things down. However, for every such “rule” there are hundreds more situations that will be obvious to a human but missed by the narrow rule.
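To make “well defined and narrowly focused” concrete, here is a hypothetical Python sketch of one such rule, automating only the account-creation naming-convention case mentioned above. The convention, the event shape and the disable/notify hooks are all assumptions for illustration, not an EventTracker or Active Directory API.

import re

# Assumed naming convention for the example: "dept-firstname.lastname"
NAMING_CONVENTION = re.compile(r"^[a-z]{2,5}-[a-z]+\.[a-z]+$")

def on_account_created(event, disable_account, notify_admin):
    # event: dict with 'account' and 'created_by'
    # disable_account / notify_admin: hooks supplied by the environment
    account = event["account"]
    if NAMING_CONVENTION.match(account):
        return "ok"  # convention followed; nothing to do
    disable_account(account)  # narrow, reversible action
    notify_admin("Account '%s' created by %s violates the naming convention "
                 "and was disabled pending review." % (account, event["created_by"]))
    return "remediated"

if __name__ == "__main__":
    actions = []
    result = on_account_created(
        {"account": "Temp123", "created_by": "helpdesk01"},
        disable_account=lambda a: actions.append("disabled " + a),
        notify_admin=actions.append,
    )
    print(result, actions)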
So are humans in the loop a failsafe or a liability? It depends on the scenario.
What’s your thought?
January 25, 2012
Nearly every analyst has made aggressive predictions that outsourcing to the cloud will continue to grow rapidly. It’s clear that servers and applications are migrating to the cloud as fast as possible, but according to an article in The Economist, the tradeoff is efficiency vs. sovereignty. The White House announced that the federal government will shut down 178 duplicative data centers in 2012, adding to the 195 that will be closed by the end of this year.
Businesses need motivation and capability to recognize business problems, solutions that can improve the enterprise, and ways to implement those solutions. There is clearly a role for outsourced solutions and it is one that enterprises are embracing.
For an engineer, however, the response to outsourcing can be one of frustration, and concerns about short-sighted decisions by management that focus on short term gains at the risk of long term security. But there is also an argument why in-sourcing isn’t necessarily the better business decision: a recent Gartner report noted that IT departments often center too much of their attention on technology and not enough on business needs, resulting in a “veritable Tower of Babel, where the language between the IT organization and the business has been confounded, and they no longer understand each other.”
Despite increased migration to cloud services, it does not appear that there is an immediate impact on InfoSec-related jobs. Among the 12 computer-related job classifications tracked by the Department of Labor’s Bureau of Labor Statistics (BLS), information security analysts, along with computer and information research scientists, were among those that reported no unemployment during the first two quarters of 2011.
John Reed, executive director at IT staffing firm Robert Half Technology, attributes the high growth to the increasing organizational awareness of the need for security and hands-on IT security teams to ensure appropriate security controls are in place to safeguard digital files and vital electronic infrastructure, as well as respond to computer security breaches and viruses.
Simply put: the convenience of using cloud services does not replace the skills needed to analyze and interpret the data to protect the enterprise. Outsourcing to a cloud may provide immediate efficiencies, but it’s the IT security staff delivering business value who ensure long-term security.
January 18, 2012
The past year has been a hair-raising series of IT security breakdowns and headline events, reaching as high as RSA itself falling victim to a phishing attack. But as the sun set on 2011, the hacker group Anonymous remained busy, providing a sobering reminder that IT security can never rest.
It turned out that attackers sent two different targeted phishing e-mails to four workers at its parent company, EMC. The e-mails contained a malicious attachment that was identified in the subject line as “2011 Recruitment plan.xls” which was the point of attack.
Back to Basics:
Using administrative controls such as security awareness training, and technical controls such as firewalls, anti-virus and IPS, to stop attacks from penetrating the network. Most industry and government experts agree that security configuration management is probably the best way to ensure the best security configuration allowable, along with automated patch management and keeping anti-virus software up to date.
Employing a blend of technical controls such as anti-virus, IPS, intrusion detection systems (IDS), system monitoring, file integrity monitoring, change control, log management and incident alerting can help to track how and when system intrusions are being attempted.
Applying operating system upgrades, backup and restore procedures, vulnerability mitigation and other controls to make sure systems are configured correctly and to prevent the irretrievable loss of data.
January 17, 2012
The beginning of a new year marks a time of reflection on the past and anticipation of the future. The result for analysts, pundits and authors is a near irresistible urge to identify important trends in their areas of expertise (real or imagined). I am no exception, so here are my thoughts on what we’ll see in the next year in the areas of application and evolution of Information Technology.
January 11, 2012
In the InfoSec industry, there is an abundance of familiar flaws and copycat theories and approaches. We repeat ourselves and recommend the same approaches. But what has really changed in the last year?
The emergence of hacking groups like Anonymous, LulzSec, and TeaMp0isoN.
In 2011, these groups brought the fight to corporate America, crippling firms both small (HBGary Federal) and large (Stratfor, Sony). As the year drew to a close, these groups shifted from prank-oriented hacks for laughs (or “lulz”) to aligning themselves with political movements like Occupy Wall Street, and hacking firms like Stratfor, an Austin, Texas-based security “think tank” that releases a daily newsletter on security and intelligence matters all over the world. After HBGary Federal CEO Aaron Barr publicly bragged that he was going to identify some members of the group during a talk at RSA Conference week in San Francisco, Anonymous members responded by dumping a huge cache of his personal emails and those of other HBGary Federal executives online, eventually leading to Barr’s resignation. Anonymous and LulzSec then spent several months targeting various retailers, public figures and members of the security community. Their Operation AntiSec aimed to expose alleged hypocrisies and sins by members of the security community. They targeted a number of federal contractors, including IRC Federal and Booz Allen Hamilton, exposing personal data in the process. Congress got involved in July when Sen. John McCain urged Senate leaders to form a select committee to address the threat posed by Anonymous/LulzSec/WikiLeaks.
The attack on RSA SecurId was another watershed event. The first public news of the compromise came from RSA itself, when it published a blog post explaining that an attacker had been able to gain access to the company’s network through a “sophisticated” attack. Officials said the attacker had compromised some resources related to the RSA SecurID product, which set off major alarm bells throughout the industry. SecurID is used for two-factor authentication by a huge number of large enterprises, including banks, financial services companies, government agencies and defense contractors. Within months of the RSA attack, there were attacks on SecurID customers, including Lockheed Martin, and the current working theory espoused by experts is that the still-unidentified attackers were interested in LM and other RSA customers all along and, having run into trouble compromising them directly, went after the SecurID technology to loop back to the customers.
The specifics of the attack were depressingly mundane (targeted phishing email with a malicious Excel file attached).
Then too, several certificate authorities were compromised throughout the year. Comodo was the first to fall when it was revealed in March that an attacker (apparently an Iranian national) had been able to compromise the CA infrastructure and issue himself a pile of valid certificates for domains belonging to Google, Yahoo, Skype and others. The attacker bragged about his accomplishments in Pastebin posts and later posted evidence of his forged certificate for Mozilla. Later in the year, the same person targeted the Dutch CA DigiNotar. The details of the attack were slightly different, but the end result was the same: he was able to issue himself several hundred valid certificates and this time went after domains owned by, among others, the Central Intelligence Agency. In the end, all of the major browser manufacturers had to revoke trust in the DigiNotar root CA. The damage to the company was so bad that the Dutch government eventually took it over and later declared it bankrupt. Staggering, isn’t it? A lone attacker not only forced Microsoft, Apple and Mozilla to yank a root CA from their list of trusted roots, but he was also responsible for forcing a certificate authority out of business.
What has changed in our industry? Nothing, really. It’s not a question of “if” but “when” the attack will arrive on your assets.
Plus ça change, plus c'est la même chose, I suppose.
December 09, 2011
Changes in end-user behavior and the resulting “consumerization” of IT have contributed to the changing and expanding definition of Application Performance Management (“APM”). APM can no longer focus just on the application or the optimization of infrastructure against abstract limits; APM must now view performance from the end-user’s access point back across all infrastructure involved in the delivery of the service.
November 21, 2011
The commercialization of Cloud-based IT services, along with market and economic challenges are changing the way business services are conceived, created, delivered and consumed. This change is reflected in the growing interest in alternative delivery models and solutions.
November 17, 2011
October 25, 2011
On September 22, the Senate Judiciary Committee approved Sen. Richard Blumenthal’s (D-Conn.) bill, the “Personal Data Protection and Breach Accountability Act of 2011,” sending it to the Senate floor. The bill would penalize companies for online data breaches and was introduced on the heels of several high-profile security breaches and hacks that affected millions of consumers. These included the Sony breach, which compromised the data of 77 million customers, and the DigiNotar breach, which resulted in 300,000 Google Gmail account holders having their mail hacked and read. The measure addresses companies that hold the personal information of more than 10,000 customers and requires them to put privacy and security programs in place to protect the information, and to respond quickly in the event of a security failure.
The bill proposes that companies be fined $5,000 per day per violation, with a maximum of $20 million per infringement. Additionally, companies that fail to comply with the data protection law (if it is passed) may be required to pay for credit monitoring services and be subject to civil litigation by the affected consumers. The bill also aims to increase criminal penalties for identity theft, as well as for crimes including installing a data collection program on someone’s computer and concealing a security breach in which personal data is compromised.
Key provisions in the bill include a process to help companies establish appropriate minimum security standards, notification requirements, information sharing after a breach, and company accountability.
While the intent of the bill is admirable, the problem is not a lack of laws to deter breaches, but the insufficient enforcement of these laws. Many of the requirements espoused in this new legislation already exist in many different forms.
SANS is the largest source for information security training and security certification, and its position is that we don’t need an extension to the Federal Information Security Management Act of 2002 (FISMA) or other compliance regulations, which have essentially encouraged a checkbox mentality: “I checked it off, so we are good.” This is the wrong approach to security, but companies get rewarded for checking off criteria lists. Compliance regulations do not drive improvement. Organizations need to focus instead on the actual costs that can result from not being compliant.
The gap does not exist in the laws, but rather, in the enforcement of those laws. Until there is enforcement any legislation or requirements are hollow threats.
October 24, 2011
October 13, 2011
Those in IT operations responsible for service delivery or infrastructure operations know what it’s like: we collect and store a growing amount of the data necessary to do our jobs, but at a rate that drives up cost. The problem with infinite detail is not much different from trying to organize and analyze noise; there’s plenty of it, but finding the signal underneath is the difficult, and critical, part.
September 21, 2011
It’s a dirty secret: many IT projects fail, maybe even as many as 30% of all IT projects.
Amazing, given the time, money and mojo spent on them, and the seriously smart people working in IT.
As a vendor, we find it painful to see this. We see it from time to time (often helplessly from the sidelines), we think about it a lot, and we’d like to see it eliminated along with malaria, cancer and other “nasties.”
They fail for a lot of reasons, many of them unrelated to software.
At EventTracker we’ve helped save a number of nearly-failed implementations, and we have noticed some consistency in why they fail.
From the home office in Columbia MD, here are the top 10 reasons IT projects fail:
This is the “if you don’t do it right, don’t do it at all” belief system. With this viewpoint, the project lead person believes that the solution must perfectly fit existing or new business processes. The result is a massive, overly complicated implementation that is extremely expensive. By the time it’s all done, the business environment has changed and an enormous investment is wasted.
Lesson: Value does not mean perfection. Make sure the solution delivers value early and often, and let perfection happen as it may.
In almost every IT shop, “seamless integration with everything” is the mantra. Vendors tout it, management believes it, and users demand it. In other words, to be all things to all people, an IT project cannot exist in isolation. Integration has become a key component of many IT projects; a solution can’t exist alone anymore.
Lesson: Examine your needs for integration before you start the project. Find out if there are pre-built tools to accomplish this. Plan accordingly if there aren’t.
This is the classic “committee” problem. The CIO or IT Manager decides the company needs an IT solution, so they assign the task of getting it done to a group. No one is accountable, no one is in charge. So they deliberate and discuss forever. Nothing gets done, and when it does, no one makes sure it gets driven into the organization. Failure is imminent.
Lesson: Make sure someone is accountable in the organization for success. If you are using a contractor, give that contractor enough power to make it happen.
This is a tough problem to foresee because employees don’t usually broadcast their departure or disinterest before bailing. The bottom line is that if the project lead leaves, the project will suffer. It might kill the project if no one else is up to speed. It’s a risk that should be taken seriously.
Lesson: Make sure that more than just one person is involved, and keep an interim project manager shadowing and up to date.
IT projects are often as much about people and processes as they are about technology. If the project doesn’t have consistent management support, it will fail. After all, if no one knows how or why to use the solution, no one will.
Lesson: Make sure you and your team have allocated time to define, test, and use your new solution as it is rolled out.
One day someone realized, “hey we need a good solution to address the compliance regulations and these security gaps.” The next day someone started looking at packages, and a month later you buy one. Then you realized that there were a lot of things this solution affects, including core systems, router, applications and operations processes. But you’re way too far down the road on a package and have spent too much money to switch to something else. So you keep investing until you realize you are dumping money down a hole. It’s a bad place to be.
Lesson: Make sure you think it all through before you buy. Get support. Get input. Then take the plunge. You’ll be glad you did.
In this all-too-common example, half way through a complex project, someone says “we actually want to rework our processes to fit X.” The project guys look at what they have done, realize it won’t work, and completely redesign the system. It takes 3 months. The project goes over budget. The key stakeholder says “hey this project is expensive, and we’ve seen nothing of value.” The budget vanishes. The project ends.
Lesson: Make sure you know what you want before you start building it. If you don’t know, build the pieces you do, then build the rest later. Don’t build what you don’t understand.
This relates to #4 above. Sometimes requirements are defined, but they don’t match good processes, because those processes don’t exist. Or no one follows them. Or they are outdated. Or not well understood. The point is that the solution is computer software: it does exactly what you tell it, the same way every time, and it’s expensive to change. Sloppy processes are impossible to encode in software, making the solution more of a hindrance than a help.
Lesson: Only implement and automate processes that are well understood and followed. If they are not well understood, implement them in a minimal way and do not automate until they are well understood and followed.
Any solution with no users is a very lonely piece of software. It’s also a very expensive use of 500 MB on your server. Most IT projects fail because they just aren’t used by anyone. They become a giant database of old information and spotty data. That’s a failure.
Lesson: Focus on end user adoption. Buy training. Talk about the value that it brings your customers, your employees, and your shareholders. Make usage a part of your employee review process. Incentivize usage. Make it make sense to use it.
This is by far the most prevalent problem in implementing IT solutions: Businesses don’t take time to define what they want out of their implementation, so it doesn’t do what they want. This goes further than just defining requirements. It’s about defining what value the new software will deliver for the business. By focusing on the nuts and bolts, the business doesn’t figure out what they want from the system as a whole.
Lesson: Instead of starting with “hey, I need something to accomplish X,” the organization should be asking “how can this software help us bring value to our security posture, to our internal costs, to our compliance requirements?”
This list is not exhaustive – there are many more ways to kill your implementation. However if your organization is aware of the pitfalls listed above, you have a very high chance of success.
September 20, 2011
I have two rules of thumb when it comes to audit logging: first, if it has a log, enable it. Second, if you can collect the log and archive it with your log management/SIEM solution, do it – even if you don’t set up any alert rules or reports.
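In the spirit of the second rule, here is a minimal Python sketch of collect-and-archive with no parsing, alerting or reporting at all. It assumes plain UDP syslog and uses port 5514 so the example can run without root privileges; it is an illustration, not a substitute for a real log management agent.

import gzip
import socket
from datetime import datetime, timezone

LISTEN_ADDR = ("0.0.0.0", 5514)  # assumed non-privileged port for the example

def archive_forever():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        data, peer = sock.recvfrom(65535)
        now = datetime.now(timezone.utc)
        # One compressed archive per day; no parsing, no filtering, no alerts.
        with gzip.open("syslog-%s.log.gz" % now.strftime("%Y%m%d"), "ab") as archive:
            archive.write(now.isoformat().encode() + b" " + peer[0].encode() + b" " + data + b"\n")

if __name__ == "__main__":
    archive_forever()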
August 30, 2011
Columbia, MD, August 30, 2011 — Prism Microsystems, a leading provider of comprehensive security and compliance software for the US Department of Defense (DoD) and US Federal Government agencies, today announced the release of EventTracker DriveShield, an easy-to-deploy solution designed to provide visibility to files copied to USB devices or burned to CD/DVD-W drives.
August 24, 2011
No one needs to be convinced that monitoring Domain Controller security logs is important; member servers are equally important: most people understand that member servers are where “our data” is located. But I often face an uphill battle helping people understand why workstation security logs are so critical. Frequently I hear IT administrators tell me they have policies that forbid storing confidential information locally. But the truth is, workstations and laptops always have sensitive information on them – there’s no way to prevent it. Besides applications like Outlook, Offline Files and SharePoint Workspace that cache server information locally, there’s also the page file, which can contain content from any document or other information at any time.
August 17, 2011
Security and Compliance at Talbots
Talbots is a leading multi-channel retailer and direct marketer of women’s apparel, shoes and accessories, based in Tampa, Florida. Talbots is well known for its stellar reputation in classic fashion. Everyone knows to look to Talbots when it is time to buy the perfect jacket or a timeless skirt. Talbots customers are women in the 35+ demographic who shop at its 568 stores in 47 states, through its catalogs, and online at www.talbots.com. Approximate sales for Talbots in 2010 were $991 million.
July 20, 2011
An area of audit logging that is often confusing is the difference between two categories in the Windows security log: Account Logon events and Logon/Logoff events. These two categories are related but distinct, and the similarity in the naming convention contributes to the confusion.
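As a practical aid, the grouping below sketches in Python how the two categories map to common Windows Server 2008-era security event IDs when reviewing exported events. The ID lists reflect my reading of the standard documentation and should be verified against your environment; pre-Vista systems use different IDs.

# Account Logon: credential validation, logged on the authenticating system
# (the domain controller for domain accounts).
ACCOUNT_LOGON = {4768, 4769, 4771, 4776}
# Logon/Logoff: session activity on the system actually being accessed.
LOGON_LOGOFF = {4624, 4625, 4634, 4647, 4648}

def categorize(event_id):
    if event_id in ACCOUNT_LOGON:
        return "Account Logon (authentication)"
    if event_id in LOGON_LOGOFF:
        return "Logon/Logoff (session activity)"
    return "other"

if __name__ == "__main__":
    for eid in (4624, 4776, 4688):
        print(eid, "->", categorize(eid))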