Modern filesystems like ZFS can do such compression transparently and should be used for that instead.
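For example, with OpenZFS you could enable transparent Zstandard compression on the dataset holding the feeds state directory (the dataset name here is hypothetical):
# hypothetical dataset name; compression=zstd requires OpenZFS 2.0+
zfs set compression=zstd tank/feeder
# lz4 is a widely available alternative on older releases
zfs set compression=lz4 tank/feeder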
CURL="${CURL:-curl}"
-ZSTD="${ZSTD:-zstdmt -19}"
WGET="${WGET:-wget}"
PARALLEL="${PARALLEL:-parallel --bar --shuf}"
[ -s download.hash ] && hash_their="`cat download.hash`" || :
[ "$hash_our" != "$hash_their" ] || exit 0
[ -s max ] && max=`cat max` || max=$FEEDER_MAX_ITEMS
-$ZSTD -d < feed.zst | $cmds/feed2mdir/feed2mdir -max-entries $max . > title.tmp
+$cmds/feed2mdir/feed2mdir -max-entries $max . <feed >title.tmp
mv title.tmp title
echo "$hash_their" >parse.hash
@code{mtime} (for @code{If-Modified-Since} header generation),
@code{ETag} and response headers for debugging.
-@item feed.zst
-It contains the content itself. Compressed with
-@url{https://facebook.github.io/zstd/, Zstandard}.
+@item feed
+It contains the content itself.
@item download.hash, parse.hash
-SHA-512 hash of the @file{feed.zst}, used to determine if feed was
+SHA-512 hash of the @file{feed}, used to determine whether the feed was
updated and the parser has to do its job.
@item title
There are a few configuration options defined in @file{cmd/env.rc}. You can
override them either with environment variables or by editing that file
directly: the @command{curl}, @command{wget},
-@command{zstd}, @command{parallel} command invocations,
+@command{parallel} command invocations,
the @code{User-Agent}, the number of download/parse jobs run in parallel, and so on.
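For example, to override some of these for a single run (the entry-point script name is hypothetical; the variable names come from @file{cmd/env.rc}):

@example
# hypothetical invocation overriding env.rc defaults for one run
FEEDER_MAX_ITEMS=100 \
CURL="curl --silent --show-error" \
PARALLEL="parallel --bar --shuf -j4" \
sh cmd/download-and-parse.sh
@end example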