
I'm not sure I fully understand the proposed fix here. How does it differ from the application simply including random chunks of data in the response?

This area isn't my strong suit, but assuming this is analogous to just adding random data to the response, I believe it can be worked around by making more requests and using statistics to factor out the noise introduced.

If my understanding is wrong then excuse me :)



Yes, it's functionally the same.

The difference is that it is extremely easy to add at the HTTP proxy or load-balancer level, and could potentially turn a 30-second attack into hours.

I would love to work out the math on how many bytes of random response-length padding it takes to change the number of requests needed.
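
A rough back-of-the-envelope, under the assumption that the padding is uniform over 0..R bytes and the attacker just averages repeated measurements (my numbers, not from the article): uniform padding has a standard deviation of R/sqrt(12), averaging n requests shrinks that by sqrt(n), so resolving a one-byte length difference takes on the order of R^2 requests per guess. Sketch (function name is mine, purely illustrative):

    import math

    def requests_per_guess(pad_range_bytes, resolve_bytes=0.5):
        # std dev of uniform 0..R padding is R / sqrt(12)
        sigma = pad_range_bytes / math.sqrt(12)
        # need the standard error sigma/sqrt(n) below resolve_bytes,
        # so n >= (sigma / resolve_bytes)^2
        return math.ceil((sigma / resolve_bytes) ** 2)

    for r in (16, 64, 256, 1024):
        print(r, requests_per_guess(r))   # ~86, ~1366, ~21846, ~349526

So a few hundred bytes of random padding turns each one-request guess into tens of thousands of requests, which fits the "30 seconds into hours" estimate above, but it only multiplies the attacker's cost rather than eliminating the attack.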


I have seen randomness-based workarounds at the app level as well, where the app adds a random-length HTML comment at the end of the page.

But if random padding can be statistically removed, then they shouldn't add a random amount. Instead, track the maximum size of the returned response and always pad up to that maximum, so the lengths of all responses for a page stay the same. This is still better than turning off compression completely. A typical maximum for a detail page might just be the size of the page plus 256 bytes per output field.
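
A minimal sketch of that idea as I read it (names are mine, not from any real framework): remember the largest body ever served for a page and pad every later response up to that size with constant filler inside an HTML comment, so the served length stops varying with the content.

    _max_seen = {}   # page key -> largest body length seen so far

    def pad_to_max(page_key, body):
        # grow the tracked maximum if this body is the biggest yet
        target = _max_seen[page_key] = max(_max_seen.get(page_key, 0), len(body))
        filler = b"x" * (target - len(body))
        # constant-size comment wrapper keeps the total length identical
        # for every response of this page (target + 7 bytes)
        return body + b"<!--" + filler + b"-->"

Caveat: the tracked maximum still creeps upward whenever a bigger page shows up, and if the filler is added before gzip it will itself compress, so to really fix the on-wire length the padding has to be applied after compression, which is where the proxy-level approach helps.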


You can basically add as many or as few bytes as you like, by abusing the chunk-extension in chunked encoding:

http://tools.ietf.org/html/rfc2616#section-3.6.1

So you could round every HTTP response up to a 128-byte boundary by appending 1 to 128 bytes to the end of each response.

Effectively it gives you length hiding at the HTTP layer; still attackable.
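
A rough sketch of that trick (my own code, just following the chunk-extension grammar from the RFC above; the function and extension name are illustrative): emit the body as a single chunk and stuff filler into a ";name=value" extension after the chunk size, so the wire length rounds up to the next 128-byte boundary while compliant clients ignore the extra bytes.

    def pad_chunked(body, block=128):
        size_hex = format(len(body), "x").encode("ascii")
        tail = b"\r\n" + body + b"\r\n0\r\n\r\n"          # rest of the chunked encoding
        fixed = len(size_hex) + len(b";p=") + len(tail)   # on-wire bytes before any filler
        pad = (-fixed) % block or block                    # 1..block filler bytes
        # chunk-extension ";p=xxx..." sits after the size; clients must ignore it
        return size_hex + b";p=" + b"x" * pad + b"\r\n" + body + b"\r\n0\r\n\r\n"

The filler never reaches the application: a client that follows the RFC just skips extensions it doesn't understand, which is what makes this easy to bolt onto a proxy or load balancer.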



