There is no easy way to distinguish between a human and a spambot. It’s an arms race in which we’ll always be behind. I’m talking about spam more generally here — not only e-mail spam, but also spam on, for instance, Wikipedia or on blog posts. And even if we had a perfect test for whether there is a human behind something, we’d still have to combat cheap labour: real people spamming “by hand”.
I think the solution is a Web of Trust similar to that of PGP. An identity (not necessarily a person) publishes whom it trusts (not to be spammy or phishy) and whom it distrusts. Ideally everyone would be in one big web. Only someone whom my blog trusts, directly or via-via, may post.
Obviously, one may still gather trust from people over time, enter the web of trust, and then start spamming with that identity. However, that identity will then be marked as untrusted by others, and the people who initially marked it as trusted will themselves become less trusted. Also, there are far more sophisticated measures of establishment in the web of trust to conceive of than simply being trusted via-via by a single identity.
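To make the idea concrete, here is a minimal sketch of such a trust lookup. Everything in it is hypothetical — the identity names, the hop limit, and the rule that a distrust flag from anyone already on a trusted path blocks an identity are my own illustrative assumptions, not part of any existing system:

```python
from collections import deque

# Hypothetical published trust statements: each identity lists whom it
# trusts and whom it distrusts. In a real system these would be signed.
trusts = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "bob":   {"dave"},
    "carol": {"spammer"},
}
distrusts = {
    "bob": {"spammer"},  # bob has flagged this identity as spammy
}

def is_trusted(root, target, max_hops=3):
    """Breadth-first search over trust statements: is `target` reachable
    from `root` within `max_hops`, without being distrusted by anyone
    we already reached?"""
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        identity, hops = queue.popleft()
        if identity == target:
            return True
        if hops == max_hops:
            continue
        for nxt in trusts.get(identity, set()):
            # Skip anyone distrusted by an identity we already trust.
            if any(nxt in distrusts.get(t, set()) for t in seen):
                continue
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return False
```

So carol, trusted via alice, may post on my blog, while the identity carol vouches for is blocked because bob distrusts it — a crude version of the “less trusted via-via” idea above.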
There is no way to prevent spam perfectly, but the amount of work that has to go into making an identity trusted and established in the web is several orders of magnitude greater than what any other protection we have demands. The big problem: we don’t have such a ubiquitous web of trust yet. (Yes, it’ll be in SINP if I get around to working on it.)
This section came into effect over the weekend. It makes it illegal to create, possess, obtain, supply, distribute or otherwise provide access to many widespread tools that can be used to breach security. Take nmap, for instance.
This law not only impedes our freedom (of speech) and our research, decreases security, and opens the door to misuse; more importantly, it won’t even stop the real criminals.
Stefan of the Month of PHP Bugs Project writes:
The law does not affect our freedom of speech to report and inform about security vulnerabilities and how to exploit them.
We are just not allowed to create/distribute/use software that could be used as “hacking tools”.
Where would they draw the line between reporting on and informing about a vulnerability and how to exploit it, and the actual source code that does it? Would pseudocode be illegal? Would literate code be illegal? Also, there would be no way for security researchers to test their own work.
The worst case, if similar laws are accepted and enforced in other countries, is that vendors will use them to cover up vulnerabilities rather than fix them. That there are lots of ready-to-use exploits around is a good thing: it’s a very strong incentive to actually secure software.
This law won’t change the fact that there will always be a leak in some piece of software that someone can find on their own. Nor will it stop the real criminals from creating and distributing tools underground. Right now, at least, everyone knows what kinds of tools are around and what to expect.
I just got back from Wacken Open Air 2007 (it was great!) and noticed that my firstname.lastname@example.org account had been activated :).