One could argue that security is always about making things harder to breach, not about foolproof protection.
I'm not sure how adding a fixed 32 bytes of "good randomness" would help: random data is essentially incompressible, so it shifts every response by roughly the same amount, and the slight size variation caused by the secret is still visible.
However, adding between 1 and 32 bytes of randomness might be a pretty good counter! E.g. if you request the page with guess letter "A", then request the same page with the same guess, and the encrypted response comes back up to 32 bytes bigger or smaller each time, it's very hard to tell whether one guess compressed better than another.
The cool thing about that is that it's fairly trivial to do in most CSRF implementations. Thoughts?
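For what it's worth, here's a minimal sketch of that idea in Python (purely illustrative; pad_html_response and the comment format are made up, not from any framework): inject a random-length, high-entropy pad into the plaintext before compression, so two identical requests yield ciphertexts of different lengths.

    import secrets

    def pad_html_response(body: bytes) -> bytes:
        # Append a random-length, high-entropy pad inside an HTML
        # comment: the browser ignores it, but because random data
        # is essentially incompressible, the pad's varying length
        # makes the compressed (and hence encrypted) size jitter
        # from request to request.
        pad_len = secrets.randbelow(32) + 1              # 1..32
        pad = secrets.token_bytes(pad_len).hex().encode()
        return body + b"<!-- pad:" + pad + b" -->"

The key detail is that the pad must go into the plaintext before compression, and its length must vary; a constant-length pad (like the fixed 32 bytes above) would just shift every response equally.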
The attacker just needs to work out the average size of the normal response, then the average size of the response with the extra character. Send enough requests and the difference will be noticeable. It slows them down, but doesn't fix the problem.
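That's easy to demonstrate with a toy simulation (made-up response sizes, uniform 1-32 byte pad): the one-byte compression signal re-emerges once you average enough samples.

    import random
    import statistics

    def observed_size(true_size: int) -> int:
        # Compressed response size plus a uniform 1..32 byte pad.
        return true_size + random.randint(1, 32)

    def average_size(true_size: int, samples: int) -> float:
        return statistics.mean(observed_size(true_size) for _ in range(samples))

    random.seed(0)
    wrong, right = 1000, 999   # a right guess compresses 1 byte better
    for n in (10, 100, 10_000):
        print(n, round(average_size(wrong, n) - average_size(right, n), 2))
    # The difference converges to ~1.0 as n grows: the pad only
    # multiplies the number of requests the attacker needs.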