Shortly after I submitted a PR, the code went through major surgery, and my patch then needed a similar amount of surgery. Oracle then whacked most of the Solaris org, and I don't think this was ever updated to work with the current pigz.
You can grab the version from the Solaris userland repo I linked and use it without me completing a homework assignment. Just take the pigz-2.3.4 source and apply the patches from [1] in the proper order. Some of them may not be needed outside Solaris.
I thought I had opened a PR for that a while ago, but it doesn't show up on GitHub these days. In any case, I did ask Mark Adler to review it. It was never a priority, and the code has since changed in ways I don't really want to deal with.
While looking through the PRs, I noticed one for Blocked GZip Format (BGZF) [2]. That's very interesting, and perhaps suggests that bgzip is a tool you'd be interested in.
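The format makes parallel work easy: every BGZF block is a self-contained gzip member whose FEXTRA field records the block's compressed size, so a reader can enumerate block boundaries without inflating anything. Here's a minimal sketch of that walk, going by the BGZF layout in the SAM spec (the helper name is mine):

```python
# Minimal sketch of walking BGZF block boundaries without decompressing.
# Per the SAM spec, each block is a gzip member with FEXTRA set, carrying
# a "BC" subfield whose BSIZE value equals the block length minus 1.
import struct

def bgzf_blocks(path):
    """Yield (offset, length) for each BGZF block in the file."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            header = f.read(12)
            if len(header) < 12:          # clean EOF (after the empty EOF block)
                return
            assert header[:2] == b"\x1f\x8b" and header[3] & 4, "not BGZF"
            xlen = struct.unpack("<H", header[10:12])[0]
            extra = f.read(xlen)
            bsize, pos = None, 0
            while pos + 4 <= xlen:        # scan extra subfields for 'B','C'
                si1, si2 = extra[pos], extra[pos + 1]
                slen = struct.unpack("<H", extra[pos + 2:pos + 4])[0]
                if (si1, si2, slen) == (66, 67, 2):
                    bsize = struct.unpack("<H", extra[pos + 4:pos + 6])[0]
                pos += 4 + slen
            assert bsize is not None, "missing BC subfield"
            yield offset, bsize + 1
            offset += bsize + 1
            f.seek(offset)                # hop straight to the next block
```

Since each block also carries its own CRC32 and uncompressed size in the gzip trailer, the blocks can be handed to worker threads and inflated independently.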
Nice! You should be able to do it without an index by periodically resetting the dictionary during compression and then scanning for the resulting reset markers on decompression, right?
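Something like this toy sketch, using zlib's Z_FULL_FLUSH (which resets the dictionary and byte-aligns the output); this is my illustration of the idea, not pigz's actual code:

```python
# Toy sketch: Z_FULL_FLUSH resets the deflate dictionary and byte-aligns
# the stream, so decompression can restart at any flush point found later
# by scanning for the marker.
import zlib

data = bytes(range(256)) * 4096                  # ~1 MiB of sample input
CHUNK = 64 * 1024

comp = zlib.compressobj(9, zlib.DEFLATED, -15)   # raw deflate, no gzip header
stream = b""
for i in range(0, len(data), CHUNK):
    stream += comp.compress(data[i:i + CHUNK])
    stream += comp.flush(zlib.Z_FULL_FLUSH)      # dictionary reset + 00 00 FF FF
stream += comp.flush(zlib.Z_FINISH)

# Without an index, scan for the empty-stored-block marker. Beware: the
# 4-byte pattern can also occur by chance inside compressed data, so a
# real tool has to verify candidates (e.g. by trying to inflate from them).
marker = b"\x00\x00\xff\xff"
mid = stream.index(marker, len(stream) // 2) + len(marker)

# A fresh raw inflater can pick up right after the marker.
tail = zlib.decompressobj(-15).decompress(stream[mid:])
assert len(tail) > 0 and data.endswith(tail)
```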
I have implemented not only parallel decompression but also random access to offsets in the stream with https://github.com/mxmlnkn/pragzip. I did some benchmarks on some really beefy machines with 128 cores and was able to reach over 10 GB/s decompression bandwidth. This works without any additional metadata, but if an index file with such metadata exists, it can double the decompression bandwidth and reduce memory usage. The single-core decoder still has lots of potential for optimization, though, because I had to write it from scratch.
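To show in miniature what such an index buys you: with recorded (uncompressed offset, compressed offset) checkpoints at the dictionary resets, a reader can seek without scanning or carrying window state. This is a generic sketch of the idea, not pragzip's on-disk index format:

```python
# Generic sketch of index-based random access: record checkpoints at each
# full-flush restart point, then serve a read by inflating only from the
# nearest checkpoint at or before the requested offset.
import bisect
import zlib

def compress_with_index(data, chunk=64 * 1024):
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)
    out, index = bytearray(), [(0, 0)]           # (uncompressed, compressed)
    for i in range(0, len(data), chunk):
        out += comp.compress(data[i:i + chunk])
        out += comp.flush(zlib.Z_FULL_FLUSH)     # restart point
        index.append((i + len(data[i:i + chunk]), len(out)))
    out += comp.flush(zlib.Z_FINISH)
    return bytes(out), index

def read_at(stream, index, offset, size):
    k = bisect.bisect_right([u for u, _ in index], offset) - 1
    ustart, cstart = index[k]
    skip = offset - ustart
    out = zlib.decompressobj(-15).decompress(stream[cstart:], skip + size)
    return out[skip:skip + size]

data = b"0123456789" * 100_000
stream, index = compress_with_index(data)
assert read_at(stream, index, 123_456, 20) == data[123_456:123_476]
```

The sketch assumes the compressor cooperated with flush points; pragzip doesn't need that, because it finds deflate block boundaries in arbitrary gzip streams on the fly.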
This is of course more a problem of the gzip format than of pigz, although last time I looked, hacks to parallelize decompression were possible.