Except resumed downloads are broken when used in conjunction with compression, because it's not clear whether the byte range refers to the compressed or the uncompressed resource.
The HTTP spec solved this problem elegantly: it has the concept of the identity of a resource, and it defines two headers to declare compression: Content-Encoding (the resource itself is compressed, like a tar.gz, so byte ranges refer to the compressed bytes) and Transfer-Encoding (the resource is compressed only for transfer; the uncompressed form is the real thing).
As of 2023, this distinction has not been implemented, and the Content-Encoding header is used for both semantics. So resuming a download through a proxy has a good chance of corrupting your file; I have also had source tarballs decompressed on the fly and then failing their checksums.
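A minimal sketch of the failure mode, simulated in Python rather than over a real network: the client saved the first part of the uncompressed file, then resumes with a Range offset that the server applies to the gzip-compressed representation, and the stitched-together result no longer matches the original. The file contents and the 500-byte offset are invented for illustration.

```python
import gzip
import hashlib
import random

# The "file" the server stores, uncompressed. Random bytes stand in
# for incompressible content like a tarball.
random.seed(0)
original = bytes(random.getrandbits(8) for _ in range(2000))

# The server transmits it with Content-Encoding: gzip, so the bytes
# on the wire are the compressed representation.
compressed = gzip.compress(original)

# The first download was interrupted; the client kept the first 500
# bytes of the *uncompressed* file on disk.
partial_on_disk = original[:500]

# The client resumes with "Range: bytes=500-", but the server applies
# that offset to the *compressed* bytes.
resumed_tail = compressed[500:]

# Stitching the uncompressed head onto the compressed tail yields a
# corrupted file whose checksum no longer matches the original.
corrupted = partial_on_disk + resumed_tail
assert hashlib.sha256(corrupted).hexdigest() != hashlib.sha256(original).hexdigest()
```

The same mismatch happens in reverse when a proxy decompresses on the fly: the client's saved offset refers to compressed bytes, while the resumed body is uncompressed.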