
Considering that they get requests for something like 8 million URLs to be removed every week, your ire seems a bit misdirected.

Safe harbor has enabled the modern web in many ways, but the DMCA takedown process is still heavily, heavily weighted toward the claimants.

It looks more like the URLs not taken down were mostly malformed or duplicates of earlier claims, and they took action on the rest. Purposefully declining to act on a DMCA complaint because it's obviously bogus doesn't happen very often, because you generally have to be really sure of what you're doing; that's hard when you have 8 million URLs a week to sift through to find the problematic ones.



> your ire seems a bit misdirected.

Ire? You are misreading my comment. I merely sought to correct the parent poster’s view that Google would not remove anything partly because they have lawyers who used to work at the EFF. The fact is that Google did remove almost everything, and merely employing people who used to work for the EFF does not cause Google to be the EFF.


Yes, but you can certainly automate the process to a large extent. For example, many sites simply don't have the functionality to allow copyright infringement. Also, many sites will have >99% false positives, whereas torrent sites will have >99% true positives.

They could hire a few people to get through as many computer-sorted reviews as possible, and let the rest fall through the cracks.
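The triage described above could be sketched roughly like this: auto-act on claims against sites with a near-perfect historical track record, auto-reject claims against sites that almost never host infringement, and send the ambiguous middle to human reviewers. This is a minimal illustration, not how Google actually does it; the site names, rates, and thresholds are all made up.

```python
# Hypothetical takedown-claim triage. All site names, historical rates,
# and thresholds below are illustrative assumptions, not real data.
from dataclasses import dataclass

@dataclass
class Claim:
    url: str
    site: str

# Assumed historical fraction of valid claims per site (made-up numbers).
SITE_VALID_RATE = {
    "torrent-site.example": 0.995,  # >99% true positives -> auto-remove
    "photo-blog.example": 0.005,    # >99% false positives -> auto-reject
    "forum.example": 0.60,          # ambiguous -> human review
}

def triage(claim: Claim) -> str:
    # Unknown sites default to 0.5 so they always get a human look.
    rate = SITE_VALID_RATE.get(claim.site, 0.5)
    if rate > 0.99:
        return "auto-remove"
    if rate < 0.01:
        return "auto-reject"
    return "human-review"

claims = [
    Claim("http://torrent-site.example/x", "torrent-site.example"),
    Claim("http://photo-blog.example/y", "photo-blog.example"),
    Claim("http://forum.example/z", "forum.example"),
]
for c in claims:
    print(c.url, "->", triage(c))
```

With millions of URLs a week, even this crude sort would shrink the human-review queue to whatever falls in the ambiguous band, which is the point the comment is making.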



