Sort of, but you vary the weighting to fade the blocks into each other.
Or, technically speaking, you multiply the blocks by a window function before applying the Fourier-related transform. If you choose the window carefully, you can even set it up so that all you have to do at reconstruction time is add the overlapping areas together.
This is especially easy to understand in one dimension, which is more or less how MP3, Vorbis and AAC do it. Block boundary effects are so noticeable with audio that unless they are corrected very robustly, the quality is unacceptably choppy.
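To make the 1D case concrete, here's a minimal NumPy sketch of the idea: an MDCT with a sine window, which satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1. The block size and function names are just illustrative, and a real codec layers window switching, quantization and entropy coding on top of this, but it shows that synthesis really is nothing more than "window the inverse transform and add the overlaps":

```python
import numpy as np

def sine_window(two_n):
    # Symmetric window with w[n]^2 + w[n + N]^2 == 1 (Princen-Bradley),
    # which is the property that lets plain overlap-add do the blending.
    n = np.arange(two_n)
    return np.sin(np.pi / two_n * (n + 0.5))

def mdct(block):
    # Forward MDCT: 2N windowed samples -> N coefficients.
    two_n = block.shape[0]
    half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(half)
    basis = np.cos(np.pi / half * np.outer(k + 0.5, n + 0.5 + half / 2))
    return basis @ block

def imdct(coeffs):
    # Inverse MDCT: N coefficients -> 2N samples that still contain
    # time-domain aliasing; overlap-adding with the neighbours cancels it.
    half = coeffs.shape[0]
    two_n = 2 * half
    n = np.arange(two_n)
    k = np.arange(half)
    basis = np.cos(np.pi / half * np.outer(n + 0.5 + half / 2, k + 0.5))
    return (2.0 / half) * (basis @ coeffs)

def analyze(signal, half):
    # Cut the signal into 50%-overlapping blocks of length 2N,
    # window each block, then transform it.
    w = sine_window(2 * half)
    return [mdct(w * signal[s:s + 2 * half])
            for s in range(0, len(signal) - 2 * half + 1, half)]

def synthesize(blocks, half):
    # Inverse-transform, window again, and simply add the overlaps.
    w = sine_window(2 * half)
    out = np.zeros(half * (len(blocks) + 1))
    for i, coeffs in enumerate(blocks):
        out[i * half:i * half + 2 * half] += w * imdct(coeffs)
    return out

if __name__ == "__main__":
    half = 64                      # N: 128-sample blocks, 64-sample hop
    rng = np.random.default_rng(0)
    x = rng.standard_normal(20 * half)
    y = synthesize(analyze(x, half), half)
    # Interior samples come back exactly; the first/last N samples are
    # covered by only one block, so they don't reconstruct.
    print(np.max(np.abs(x[half:-half] - y[half:-half])))   # ~1e-13
```

The sine window is symmetric and serves as both analysis and synthesis window, which is exactly what makes the time-domain aliasing from adjacent blocks cancel when you add them.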
The technique generalizes basically without alteration to two-dimensional data, but I've never seen an image or video algorithm that uses it. JPEG just ignores the blocking issue entirely, and video codecs seem to rely exclusively on deblocking filters. As I said, I think this has to do with the fact that TDAC (time-domain aliasing cancellation) doesn't trivially generalize to motion compensation.