tar gz (single core) -> tar pigz (multi core)

tar might be old, but it is still a great program for archiving files

manpage: tar.man.txt

to get multi-core compression (faster), tell tar to use an external compression program with multi-core support, such as pigz (parallel gzip) or multi-threaded xz.

# install pigz
apt install pigz
# create tar archive using pigz as "compressor"
time tar -c --use-compress-program=pigz -f file.tar.gz directory
# time measures how long the run took using all cores
# Lenovo T440, i5, SSD:
# took 6 min to compress a 29 GByte home directory
# the home directory shrank by ~70% to 8.5 GByte
# multi-threaded decompression is not implemented in pigz
# decompression uses far less CPU than compression
# (~25 MByte/sec during compression, over 100 MByte/sec during decompression)

# time to compress without pigz:
# real 13m13.490s

# time to compress with pigz:
# real 5m57.117s
# ;) nice
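The pigz thread count and compression level can also be passed explicitly through tar's --use-compress-program option. A minimal sketch, assuming GNU tar; the directory and archive names (demo_dir, demo.tar.gz) are made up for illustration, and the block falls back to plain gzip if pigz is not installed:

```shell
# pick the compressor: pigz with 4 processes (-p 4) if available, else gzip
GZ="gzip"
command -v pigz >/dev/null 2>&1 && GZ="pigz -p 4"
# create a tiny example directory to compress
mkdir -p demo_dir && echo "hello" > demo_dir/file.txt
# -6 is the compression level (default for both gzip and pigz)
tar -c --use-compress-program="$GZ -6" -f demo.tar.gz demo_dir
# list the archive contents to verify it was written
tar -t -f demo.tar.gz
```

GNU tar invokes the compress program via the shell when the string contains spaces, so options can be appended directly.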

# time to decompress (pigz does not support multi-threaded decompression):
# real 3m37s
tar fxvz file.tar.gz
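A gzip archive can still be handed to pigz for decompression with pigz -d; the main inflate stays single-threaded (pigz only spawns helper threads for I/O and checksumming), so the gain is small. A sketch, again with hypothetical names and a gzip fallback:

```shell
# pick the decompressor: pigz -d if available, else gzip -d
UNGZ="gzip -d"
command -v pigz >/dev/null 2>&1 && UNGZ="pigz -d"
# build a small test archive, then remove the source directory
mkdir -p demo_dir && echo "hello" > demo_dir/file.txt
tar -czf demo.tar.gz demo_dir && rm -rf demo_dir
# extract using the chosen decompressor
tar -x --use-compress-program="$UNGZ" -f demo.tar.gz
cat demo_dir/file.txt
```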

manpage: pigz.man.txt

want to save disk space and have got the time? xz!

xz takes far more CPU time but compresses much better than gzip or pigz

# to use more than 4 CPU threads/cores, just increase the number
# compress "folder" into archive.tar.xz
XZ_DEFAULTS="--threads=4"; export XZ_DEFAULTS; tar fcvJ /path/to/backup/archive.tar.xz /compress/this/folder

# decompress archive.tar.xz into the current directory
XZ_DEFAULTS="--threads=4"; export XZ_DEFAULTS; tar fxvJ /path/to/backup/archive.tar.xz
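Instead of exporting XZ_DEFAULTS for the whole session, the thread count can be set per command via the XZ_OPT environment variable (which overrides XZ_DEFAULTS), and -T0 lets xz pick one thread per available core. A sketch, assuming xz-utils >= 5.2; the names demo_dir and demo.tar.xz are made up for illustration:

```shell
# create a tiny example directory
mkdir -p demo_dir && echo "hello" > demo_dir/file.txt
# compress: -T0 = auto-detect thread count, set only for this command
XZ_OPT="-T0" tar -cJf demo.tar.xz demo_dir
rm -rf demo_dir
# decompress the same way
XZ_OPT="-T0" tar -xJf demo.tar.xz
cat demo_dir/file.txt
```

Note: threaded xz splits the input into independent blocks, so tiny inputs still end up on one thread and multi-threaded output can be slightly larger than single-threaded output.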

manpage: xz.man.txt

real world example:

a Core2Quad Q6600 @ 2.4 GHz (4 cores) compressed 62 GBytes of mixed data (including binary) down to 26 GBytes (41% of the original size) in 152 min (2.53 hours).