I came across this post about ethical hacking, and I felt the need to respond to it publicly, since (I feel that) the article offers a skewed view and does not present the counter-arguments:
First of all, I would like to stress that discovering and writing exploits for certain types of flaws (and I'm not referring to XSS :) ) does require serious knowledge and skills, which 99.9% of programmers do not possess (and I'm saying this as a malware analyst who does reverse engineering as part of his daily job). While humility is a good thing, the fact of the matter is that these people are part of a select group. Also, as a sidenote: a large percentage of programmers (I don't want to guess, but certainly more than half) do not understand even the basics of the security risks which may affect their products (in the case of a web application this may be SQL injection; in the case of a binary product, something like stack overflows).
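To make the SQL injection point concrete, here is a minimal sketch (table names, data, and the payload are all hypothetical, just for illustration) of how pasting user input into a query string goes wrong, and how a parameterized query fixes it:

```python
# Hypothetical sketch of an SQL injection flaw, using Python's sqlite3.
# The schema and user names are illustrative, not from any real product.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_unsafe(name):
    # Vulnerable: user input is concatenated directly into the SQL string.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return [row[0] for row in conn.execute(query)]

def find_user_safe(name):
    # Fixed: a parameterized query treats the input as data, never as SQL.
    return [row[0] for row in conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,))]

# A crafted input breaks out of the string literal and widens the query
# to "... WHERE name = 'x' OR '1'='1'", matching every row:
payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every user leaks, not just 'x'
print(find_user_safe(payload))    # no user actually has that name
```

The point is exactly the one above: the vulnerable version "works fine" for every normal input a customer would ever type, which is why such bugs survive testing and why vendors feel no pressure to fix them.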
Second of all, from an economic point of view: software vendors have no financial reason to fix bugs (and by bugs I mean problems which wouldn't come up in real life - i.e., they wouldn't bother the customers - but which, under exceptional circumstances - like a specially crafted query or input file - could lead to information disclosure, arbitrary code execution, etc.). Fixed bugs don't sell products. New features sell products. And most of the time the client isn't knowledgeable enough to assess how secure the product s/he buys really is. One might argue that competitors could disclose the bugs, but this rarely happens, because competing companies know that their own products are equally buggy: if they disclose a bug in a competitor's product, the competitor will try (and most probably succeed) to find bugs in theirs, and the whole thing will come crashing down. In this sense, "ethical hacking" and the threat of full disclosure play a role in keeping the players (at least somewhat) honest.
Where the ethics part comes in (in my opinion) is thinking about the customer. As I see it there are two extremes:
- the "bad guys" discover the vulnerability and use it to take advantage of the users of the product, without anybody knowing
- the vendor discovers it and patches the problem (hopefully before anybody else discovers it)
Of course (as with everything) there are many shades of gray in between (like customers not deploying the patch right away, and the "bad guys" reverse engineering the patch to find the flaw it fixes and then exploiting it against the customers who didn't apply it), but I didn't want to complicate the description.
The "ethical hacker" approach falls somewhere in the middle: after discovery, let the vendor know, and if it doesn't care (doesn't communicate with you, doesn't promise to release a fix within a reasonable time frame), release the vulnerability publicly, preferably with methods for affected customers to mitigate it. Why should it be released? Because as time passes, the probability that the "bad guys" find it increases! As an independent security researcher you don't have any choice other than to follow this path, because I don't think that many companies will admit that they screwed up and bring you in to help them - that would mean admitting failure, which would result in many management types losing their bonus packages, something they certainly don't want.
There are many bad apples in the "research community" who place personal pride above the interest of the customers, but they are not practicing ethical hacking! Examples:
- Publishing more information than needed in a place where the affected customers have very little chance to discover it: http://hype-free.blogspot.com/2007/02/full-disclosure-gone-bad.html
- The selling of discovered bugs to anyone willing to pay: http://www.matousec.com/purchase.php
However, the example you cited does not apply. A big vendor cannot disregard a serious vulnerability just because of the style of the communication. Do you think that just because I write "I'm the king of the world and you know s**** about software development" in an e-mail to MS in which I disclose a remotely exploitable flaw in Vista, they should disregard it? If the vulnerability is genuine and the vendor really doesn't communicate (doesn't even acknowledge receiving the mail), there is no possibility other than going public (again: preferably with a mitigation method for clients) - the alternative being to wait until the "bad guys" discover the vulnerability and exploitation becomes widespread enough that the company is forced to do something about it. Here are some examples which you should consider:
- Apple trying to discredit security researchers who found exploitable code in their wireless drivers
- A person being arrested and prosecuted because he discovered an information disclosure vulnerability in the website of a university and tried to notify them!
- Amazon not fixing a bug for one year (!) which allowed arbitrary websites to manipulate your shopping cart if you were logged in to the Amazon website
- Oracle not fixing vulnerabilities for years
- Microsoft knowing about the recent .ANI vulnerability since December (by their own admission) and not releasing a patch sooner (exactly how many months does it take to test a patch?!), and when they finally released it, it broke software (the latter could of course be the fault of poorly written software, but in this case that's not probable)
- And finally a personal story: I discovered that using some CSS and JScript I could crash IE at will. This was several years ago. I tried to notify MS several times and got no response. The vulnerability persisted (a couple of months ago I last verified that my fully patched XP box with IE 6.0 was still vulnerable). Now, I'm not going to disclose the code to anybody (because I migrated away from MS products), but after such a long period of time, don't you think I would be justified in doing so?
You can't rely on companies to try to make the most secure products. They will make the products which generate the most revenue. Cars didn't have safety belts until manufacturers were forced to add them. In the same way, software vendors won't place security first (or at least in the top three priorities) until they are forced to.