Hiding behind the firewall just got a bit tougher as Google announced its new Secure Data Connector, which gives Google Apps even more access to your corporate goodies.
A lot of security folks have a what-the-$*%#@ knee-jerk reaction when they hear about stuff like this. But the truth is that if you are using Google Apps, you're already in pretty deep with Google. Skirting the firewall isn't really going to change things one way or the other.
Secure Data Connector is another step in what is probably an inevitable move to cloud computing. I hate to use such a nebulous - pun intended - term, so let me put it another way - the old days of companies owning their own firewalled data centers and only working off their own equipment are clearly numbered. Using services like Google Apps has always been first a business decision, then a contractual one, and only lastly a security one.
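To make the pattern concrete, here is a minimal sketch of the general idea behind agents like Secure Data Connector: a process inside the firewall dials *out* to the cloud provider (so no inbound holes are opened), and only serves tunneled requests whose target URL matches an allow-list of resource rules. The rule syntax, hostnames, and function names below are invented for illustration; they are not Google's actual configuration format.

```python
import fnmatch

# Hypothetical allow-list: only these internal resources may be reached
# through the tunnel. Patterns and hostnames are made up for illustration.
RESOURCE_RULES = [
    "http://intranet.example.com/reports/*",
    "http://crm.example.com/api/contacts*",
]

def is_allowed(url: str) -> bool:
    """Return True if a tunneled request targets a whitelisted resource."""
    return any(fnmatch.fnmatch(url, rule) for rule in RESOURCE_RULES)

print(is_allowed("http://intranet.example.com/reports/q1.csv"))   # True
print(is_allowed("http://intranet.example.com/payroll/all.csv"))  # False
```

The security-relevant design choice is that the perimeter hole is outbound-only and the exposure is whatever the allow-list says it is - which is exactly why "you're already in pretty deep with Google" once such an agent is running.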
Cloud computing is really nothing more than outsourcing non-essential IT functions. The reason outsourcing happens is because it is cheaper and more efficient for others to provide a service than to do it in house. Why would running a data center or an operating system be any different?
The main difference, of course, is legal. There are decades of law and precedents that govern what happens when a company has a fire in the building they lease. There is very little law or precedent to govern what happens when a company's Google Apps application is hacked. And because you have very little ability to audit (much less enforce) security measures on a third party vendor, the legalese becomes all the more important.
Moving to the cloud (a.k.a. owning and managing less of your non-workstation infrastructure) also requires a serious change in a company's entire security narrative. Most organizations still have a network-centric way of thinking about security. This is reflected not only in their security spending priorities but also in their strategic approach to security - for example, many companies have relatively open internal systems that rely on the inability of an intruder to get onto certain parts of the network. The prevalence of network-centric vs. host-centric or data-centric security is clearly visible in the prioritization of security requirements that PCI recently published.
Part of this network-centric approach is justified because it reflects the real world legal importance of owning and defending your data. There is also a self-enforcing cycle at play here - as long as network-centric security remains the norm, it is by definition the best practice/commonly used mechanism that is referenced in so many contracts and regulations. You may have some explaining to do to the CEO if your novel Web 2.0/cloud computing/ (insert more buzzwords here) security model was hacked. If a defense-in-depth network with an expensive IDS and lots of pricey Cisco gear gets hacked, well stuff happens. To paraphrase an overused expression, no one ever got fired for installing too many network security products.
Monday, April 13, 2009
Wednesday, April 8, 2009
Scareware and the Digital Divide
Today Microsoft's Security Intelligence Report came out with the news that "rogue security software" is on the rise. This will come as absolutely no surprise to those of us who have spent a Sunday afternoon trying to rid a friend/grandmother/brother-in-law/neighbor/cousin's PC of the latest AntiSpyware1-ish rogue security software.
For those of you who haven't had the pleasure, these programs infect a PC either by stealth or by inducing a user to click on a pop-up ("Your computer is not running at optimal speed. Click here to fix this issue"). Once installed, scareware basically holds your computer hostage with gazillions of warnings and pop-ups until you buy its "security" product. Depending on your definitions, scareware can be seen as a special case of phishing.
These rogue anti-virus programs have an enormous indirect cost to society by cementing the digital divide. Users who were already wary of computers are tempted to throw in the towel when confronted with persistent security warnings that they neither understand nor can do anything about. Scareware is only a nuisance to advanced users but is a real show stopper for the least technical and disenfranchised users.
The Microsoft report underscores this fact. The results imply that keeping your computer and applications updated and exercising some caution with your surfing and downloading are a fairly strong defense against getting rogue security software. Unfortunately, the less tech-savvy a user is, the less likely they are to be able to do either of these things. Advanced users usually have properly configured System Restore options on their computer, which can address most (though not all) of these programs. Less advanced users either do not have these configured or don't know how to use them.
Going after scareware is tough for all the usual reasons that fraud can thrive on the Internet - the ability for perpetrators to cover their tracks, jurisdiction problems, cost of investigation, and so forth. But to prosecute scareware peddlers you also need to prove that the product is actually a fraud. I haven't seen much case law on this topic, but I can imagine that a lot of it falls more under the FTC-deceptive-practice umbrella than total criminality. After all, anyone who has tried to remove some of the older versions of Norton anti-virus from their computer knows that they don't go down without a fight. Where is the line between aggressive market positioning and fraud?
All of this does not bode well for the fight against scareware. Unlike traditional spam, scareware is amazingly effective, with response rates in the high single digits. And because it mainly victimizes the end user - unlike, say, click-through fraud, which eventually costs everyone money - we are unlikely to see any particular industry move to seriously address this issue.
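The economics here are worth making explicit. The post's only hard number is "high single digits" for scareware response rates; the spam response rate and campaign size below are illustrative assumptions, chosen only to show the order-of-magnitude gap.

```python
# Back-of-the-envelope comparison of campaign effectiveness.
# Only the ~7% scareware figure comes from the post ("high single digits");
# the spam rate and campaign size are assumptions for illustration.
def expected_conversions(impressions: int, response_rate: float) -> float:
    """Expected number of users who respond (e.g., pay up) per campaign."""
    return impressions * response_rate

N = 1_000_000                                # messages or pop-ups delivered
spam = expected_conversions(N, 0.0001)       # assume ~0.01% for classic spam
scareware = expected_conversions(N, 0.07)    # ~7%: "high single digits"

print(f"spam: {spam:,.0f} responses; scareware: {scareware:,.0f} responses")
```

Under these assumptions a scareware campaign converts hundreds of times more victims per million impressions than spam does, which is why the economics favor the scareware peddler even before you consider that each "sale" nets a full fake-product purchase rather than a penny-stock click.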
Back in the Middle Ages of the Internet in the early 2000s, I was a member of the eEurope 2005 Advisory Group that advised the European Commission on Internet policy. The roughly 30 members of this group included a motley crew of former government ministers, professors, subject-matter experts and CEOs. Back then physical access and so-called eInclusion were the primary focus of this group, as they were seen as the prime barriers to participation. Today physical access has for the most part been commoditized in the western world. The overall effect of the Internet has been overwhelmingly inclusive for previously disadvantaged groups. But the so-called security tax has fallen disproportionately on people who lack basic Internet skills to begin with.
Monday, April 6, 2009
PCI Hearing in Congress
Last week the Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology held a hearing on the effectiveness of PCI.
Industry tends to be wary of government regulation, and often with good reason. The congressional hearing included the statement that the only way to protect networks is by continuously pen testing them. This may or may not be true, but I am not sure that the government is best positioned to mandate one approach over another. A further indication of opinions being accepted as fact was one congressman's repetition of the oft-quoted yet ridiculous figure of $1 trillion in cybercrime losses.
PCI is in very early days, and the one thing that wasn't even mentioned at the hearing (unless I missed something...) was the issue of assessor liability. If a PCI-compliant company is breached, shouldn't the finger first be pointed at the assessor to justify why they certified the company in the first place? Until assessors are somehow on the hook for the quality of their assessments (in the same way that an accountant is), one can't really blame the standard itself for failing to enforce itself.
You know that PCI has hit primetime when Congress is taking a look at it. But PCI's Washington debut didn't go as smoothly as the Council would have liked. As Anton Chuvakin points out, PCI was ripped by both the government and the merchants for opposite reasons - the feds think it's too little, and the merchants say it's too much.
I don't think that PCI is a perfect standard, but it is wrong to assume that every data breach of a PCI-certified entity signifies the complete failure of the standard. PCI has taken positive steps recently, including laying out a "Prioritized Approach" which should help make the standard more digestible for smaller organizations.
On the whole the problem with PCI is not so much that companies declared compliant are suffering breaches, but that companies are being declared PCI compliant too readily. Oh, and no one seems to know who is liable when this happens.
But the congressional hearing didn't really focus on liability and enforcement issues. The main theme from government was that PCI was broken and that the bar needs to be raised. Chairwoman Yvette Clarke's prepared statement also singles out eliminating terrorist financing as a major reason - perhaps the most important reason - to eliminate the hacking of companies housing credit card data.
This is interesting because there is a big difference between preventing data breaches in general and preventing data breaches that benefit terrorists. Let's assume for the sake of argument that preventing terrorists from committing credit card fraud is a major priority (although somehow I fail to see why credit card fraud - a crime with many digital footprints - would be their first choice). Doesn't that mean that the standard should focus on preventing the specific types of fraud that terrorists are most likely to commit? (For example, war driving is not really a concern if we are worried about people committing fraud from foreign countries.)
In practice I don't think that it makes sense to mix national security into the PCI discussion. I think the real debate about PCI is whether having a technology-specific standard reduces the number of data breaches. As I have written in the past, most compliance - whether SOX, HIPAA, GLBA, or others - is so non-technical as to really not require companies to do anything specific. On the other hand, while government regulation is good for establishing general principles, it has done a poor job when it starts to mandate specific technological solutions. So the government would probably do a worse job at PCI than the PCI Council does.