Geeks.com (which sounds like a dating site for programmers but is actually an online discounter of computer equipment) got hit by the US Federal Trade Commission last week. (For international readers: the FTC is a US government agency primarily concerned with consumer protection.)
The complaint and settlement make for a brief and interesting read. The FTC doesn't seem to think much of Geeks.com's security, but takes even less kindly to their apparent misrepresentation of the security measures they do have in place. Note to CISOs - make sure you know what the marketing department is saying about your security to the outside world. And make sure that your security policy actually reflects what's going on in your organization. As any lawyer will tell you, it is better to have no policy in place than a policy you haven't actually implemented.
Getting hit by the FTC is no fun. The settlement will force Geeks.com to subject itself to ongoing audits for many years to come. The overall costs of this action are enormous - hiring outside counsel to deal with the FTC, the bureaucratic overhead of maintaining all the newly required paperwork, and so forth.
I have posted in the past on justifying security spending. A joe-average data breach seems to have lost its shock value and in some instances may even, ironically, provide lesser-known companies with some brand recognition. But FTC actions like the one against Geeks.com carry real costs, imposing huge administrative burdens and damaging the brand, if not in the eyes of consumers then at least in the eyes of investors. (The New York Law Journal has a good overview of the overall costs of an FTC investigation.)
Is a post-breach investigation by the FTC something that companies should be worried about? A back-of-the-napkin calculation suggests that the answer is probably not. There were hundreds of public data breaches last year, and yet scanning the FTC website for actions in 2008 shows there were only a few dozen investigations of any kind in any given month, and very few of those were information security related.
It doesn't take a genius to predict that greater regulation is forthcoming as a result of the new administration and the colossal failure of current institutions like the SEC to prevent Madoff-like frauds. This will affect not only financial accounting but also seemingly unrelated areas like information security. Although the current risk of investigation by the FTC is very low, security is about an overall narrative that can be used to address a wide range of upcoming regulations.
One final noteworthy point about the FTC judgment. It specifically lists SQL injection as a form of attack that Geeks.com should have taken measures to prevent. This is part of an ongoing development in requiring companies to take reasonable steps to prevent well-known attacks. PCI references (an albeit outdated version of) the OWASP Top 10, but I know of few cases in which a specific technical vulnerability has been named in an FTC action. I suspect we will be seeing more of this in the future.
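To make the SQL injection point concrete, here is a minimal sketch - using Python's built-in sqlite3 and a made-up users table, not anything from the Geeks.com case - of the vulnerable pattern and the parameterized-query fix that regulators now effectively expect:

```python
import sqlite3

# In-memory demo database (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DON'T do this: attacker-controlled input is spliced into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: the driver treats inputs as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# The classic injection payload bypasses the check in the vulnerable version...
assert login_vulnerable("alice", "' OR '1'='1") != []
# ...but is harmless when bound as a parameter.
assert login_safe("alice", "' OR '1'='1") == []
```

The fix costs nothing to buy - it's a coding practice, which is exactly why "reasonable steps to prevent well-known attacks" is a plausible legal standard.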
Wednesday, February 11, 2009
Friday, February 6, 2009
Justifying Security Spending
I just finished skimming through "The Business Justification for Data Security", a new report by the folks at Securosis sponsored by McAfee. At 38 pages it's a bit on the long side, but well worth the read for its exploration of the different models for justifying security spending.
This whole Recession thing has businesses cutting back on everything and security is no exception. This hasn't stopped people from promoting the misguided notion that security somehow pays for itself or is a revenue generator. I like the way the Securosis report dismisses this fallacy - "When applying ROI to data security, you attempt to quantify loss, and then substitute loss as revenue". When you buy an alarm system for your house, the alarm doesn't "pay for itself". It simply makes an unlikely event (your house getting burglarized) even less likely.
Moving from home security to data security, one can claim that security spending will prevent future losses, but invoking a revenue argument (as though a security investment is actually earning money in the same way that a new sale does) just doesn't fit with the way businesses think about revenues and losses. The very subjective valuations that go into measuring data loss further weaken the ROI argument to the point of irrelevance.
As I have written before, network security is relatively well understood and the security portion of the total IT spending pie is broadly accepted to be in the neighborhood of 10%. This is the security tax, and a CISO's job is to manage and spend that tax in the most efficient way. Justifying a particular security expenditure outside of the context of total security costs doesn't make sense.
Another big problem with any sort of quantitative loss prevention model is the vagueness of what exactly constitutes security spending. The days of buying a "security product" to address security issues are fast disappearing. Most of the major IT vendors these days have been building their own security features or purchasing smaller security vendors and integrating their functionality. Process change and leveraging existing technologies - not buying security products - is in most cases the path to a more secure business.
An example of this is database auditing. The required investment is not a monetary one but rather an organizational one. Enabling auditing has many internal costs (testing, etc.) and must be weighed against actual product enhancements that could be done instead.
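As a toy illustration of the "process, not product" point, auditing can often be switched on with facilities already at hand. Here is a minimal sketch using Python's sqlite3 trace hook - a real deployment would use the DBMS's native audit features, and the schema below is made up:

```python
import sqlite3

# Every SQL statement the connection executes gets appended to this log.
audit_log = []

conn = sqlite3.connect(":memory:")
conn.set_trace_callback(audit_log.append)  # built-in hook, no product needed

conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")

for statement in audit_log:
    print(statement)  # each executed statement, in order
```

The dollar cost here is zero; the real costs are the organizational ones the paragraph above describes - testing, storage, and deciding who reviews the log.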
There are of course examples of security spending that do follow the pattern of a simple dollar investment/loss prevention analysis. An example given in the Securosis report is that of a lost laptop, where the security measures have clear monetary costs (namely full disk encryption) and the losses have easily quantifiable costs (notifying customers in the event of a breach). But laptop encryption is the exception, not the rule. Very few security costs can be quantified in this way.
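The laptop case can be sketched as a simple expected-loss calculation. All the numbers below are made up for illustration and are not from the Securosis report:

```python
# Back-of-the-napkin expected-loss model for full disk encryption.
# Every figure here is an illustrative assumption.
laptops = 500
annual_loss_rate = 0.05            # fraction of laptops lost or stolen per year
records_per_laptop = 10_000        # customer records on a typical laptop
notification_cost_per_record = 15  # hard cost per breached record

encryption_cost = laptops * 60     # assumed per-seat license + rollout cost

expected_breach_cost = (laptops * annual_loss_rate
                        * records_per_laptop
                        * notification_cost_per_record)

print(f"Expected annual breach cost: ${expected_breach_cost:,.0f}")
print(f"One-time encryption cost:    ${encryption_cost:,.0f}")
```

Both sides of the ledger are concrete dollar figures, which is precisely why this example works - and why, as noted above, it is the exception rather than the rule.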
As I have written before, I believe that the major driver of security spending in the coming years will be compliance. Almost all compliance is less about using system X vs system Y and more about having an overall security narrative. The CISO's job is to be the owner of that narrative and to make it happen within an industry acceptable budget. Every security dollar spent (and for that matter every hour of someone else's time committed to security) should serve to advance that narrative.
Wednesday, February 4, 2009
Cost of Data Breaches
The Ponemon Institute has been tracking the cost of data breaches over the last few years. Together with PGP they just released their most recent report after polling 43 companies that experienced data breaches. You can view a summary of their report here (you need to fill out your personal data for the full report).
The central finding is that data breaches cost organizations $200 per breached record (well, actually $202...). I hear variants of this $200 figure at almost every security conference I attend. Security vendors have also incorporated it into their sales pitches, and I often hear this number as part of the ROI angle when I am evaluating vendors. But how accurate is it? And should it motivate companies to spend on security?
Data breaches have hard costs and soft costs associated with them. Hard costs like notifying customers account for only $15 of the $202 figure in the Ponemon study and are at the lowest point in the last four years. Soft costs like "lost business" account for $139 or 69%. I have serious doubts whether it is even possible to estimate lost business in a meaningful way. But even if it is, do so many customers really leave companies because of data breaches?
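Those proportions are easy to sanity-check:

```python
# Per-record cost breakdown from the Ponemon figures quoted above.
total_per_record = 202
hard_costs = 15        # e.g. customer notification
lost_business = 139    # the "soft" lost-business component

soft_share = lost_business / total_per_record
hard_share = hard_costs / total_per_record
print(f"Lost business: {soft_share:.0%} of the per-record cost")
print(f"Hard costs:    {hard_share:.0%} of the per-record cost")
```

So over two thirds of the headline number rests on the softest, hardest-to-measure component.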
For small companies in non-critical industries this might be possible, but I find some of the figures on lost business very improbable, to say the least. The highest rate of post-data-breach customer churn in the report is 6.5%, in the healthcare industry. I don't know about you, but when I choose a health provider the most important thing to me is medical credentials. When you're sitting in that waiting room and feeling like *%$&, the last thing you are worried about is the security of the router configurations at the doctor's office.
But let's say for the sake of argument that the $202 figure is generally correct. To me that indicates that companies are spending too much, not too little, to prevent data breaches, to the detriment of reducing other forms of risk.
The Identity Theft Resource Center counts just under 36 million breached records in the United States in 656 reported incidents for 2008. A back-of-the-napkin calculation (well, more of a Google calculation) yields a $7.2 billion dollar cost from publicly announced data breaches in the US. That's a drop in the bucket in a GDP of over 14 trillion dollars, and less than the $10 billion dollars a year that businesses suffer in cheque fraud alone. Those $7.2 billion can't possibly justify the multi-billion dollar security industry, especially given that most published data breaches are the result of human error that would not have been prevented by technology.
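The arithmetic behind that figure is easy to reproduce (using a round 36 million records; the $7.2 billion above reflects the slightly lower exact ITRC count):

```python
# Back-of-the-napkin aggregate cost of 2008's publicly reported US breaches.
breached_records = 36_000_000   # ITRC count for 2008 (just under 36 million)
cost_per_record = 202           # Ponemon per-record estimate
incidents = 656                 # reported incidents in 2008

total_cost = breached_records * cost_per_record
print(f"Aggregate cost: ${total_cost / 1e9:.1f} billion")
print(f"Average per incident: ${total_cost / incidents / 1e6:.1f} million")
```

Even taking the $202 figure at face value, the aggregate is small next to a $14 trillion economy.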
I've said before that security spending will be primarily compliance driven in the future. The relatively low cost of a data breach to an organization is yet another reason to ditch loss-prevention approaches to justifying security spending.
Monday, February 2, 2009
Ma.gnolia disappears in the Cloud
Yikes. Seems like Ma.gnolia has disappeared in the cloud.
This weekend Ma.gnolia experienced what seems to be a catastrophic data loss - their website is down and they have no idea when it will be back up.
I have never used Ma.gnolia, a service that lets you move your bookmarks and favorites around with you. Before I read about their data outage I had barely heard of them, but they seem to be a popular alternative to del.icio.us amongst the bookmarking crowd.
Do stories like this prove that the cloud is a dangerous place? The anti-cloud perimeter types love this stuff. But of course you are much more likely to lose your laptop than to end up in a Ma.gnolia type situation. In the bigger picture, disaster recovery is a great reason to move towards the cloud, not away from it (anyone who has lost their personal laptop wishes they had more, not less, data in the cloud).
It makes sense that this kind of situation happens a lot less often than data breaches. There are many reasons that vendors cut corners on data security, but it doesn't make much business sense not to have some disaster recovery plan. After all, total data loss and prolonged unavailability of services are far more fatal to a business than security breaches. Unless you have a rabid fan base or locked-in customers, your customers will defect when faced with protracted outages (those two exceptions are the only way to explain Apple's MobileMe's continued use).
It doesn't work that way with data security. I don't buy the high rates of supposed customer churn after a data breach (measured at anywhere from 1 to 6.5 percent by the Ponemon Institute). Data breaches occur frequently enough that customers are growing immune to them. If Ma.gnolia had accidentally leaked users' bookmarks, it would be met with a shrug. But now their entire business is at serious risk and users are defecting to del.icio.us in droves.
At the risk of a vast generalization, information security is something you do because the law requires it even though your users probably don't care, whereas disaster recovery is something you do because your customers care even though the law probably doesn't require it.
The moral of the Ma.gnolia story for CISOs is to make sure the right safeguards and processes are put in place for using SaaS and web apps. A few simple actions can limit the risk these pose to your enterprise:
1) Get the contractual language right - be specific on DR, security, privacy, etc.
2) Ask some basic questions of the vendors (like whether they have a DR plan).
3) Define precisely what data and business functions can and can't be mixed in with the service.
4) Document the use of the service and the associated credentials.