Room for improvement?

In my continuing hunt for the bug in the standalone version of Calamares, I wondered whether the problem had anything to do with squashfs. After a bit of hunting around I came across this page: Squashfs Performance Testing – Jonathan Carter. It contains some simple scripts to test the efficiency of the various compression types available in mksquashfs, as well as the effect of block size. I have been wondering whether all the troubles with the install part of the ISO may come down to the slowness of unsquashfs, since adding delays in the scripts did seem to help.
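For anyone who wants to reproduce this, the sweep behind the table below can be sketched roughly as follows. This is my own reconstruction, not the exact script from the linked post; `SRC` and the output names are assumptions. It just prints the commands so you can see the 18 combinations; on a build host you would run each pair wrapped in `time` to get the CTIME and UTIME columns.

```shell
#!/bin/sh
# Rough sketch of the compressor/blocksize sweep -- SRC and output names
# are assumptions, not the exact script from the linked post.
SRC=./rootfs   # unpacked filesystem tree to compress
n=0
for comp in gzip lzo xz; do
    for bs in 4096 8192 16384 32768 65536 131072; do
        out="squashfs-$comp-$bs.squashfs"
        # On a build host, run (and time) something like:
        echo "mksquashfs $SRC $out -comp $comp -b $bs -noappend"
        echo "unsquashfs -d /tmp/unsq-test $out"
        n=$((n + 1))
    done
done
echo "$n compressor/blocksize combinations"
```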

The results of the tests are very revealing, hence this post; they would argue for a change in how we create the squashfs filesystems for our ISOs.
The left-hand column shows both the compression type and the block size; the rest is self-explanatory.
The script decompresses the original filesystem and then recompresses it, hence the varying initial sizes.
For reference, our current settings are the ones in bold italic.
These tests were run on my local 8-CPU AMD machine with 16 GiB of RAM.
The memory throughput is:
dd if=/dev/zero of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 7.46942 s, 14.0 GB/s

Filename Size Ratio CTIME (mksquashfs) UTIME (unsquashfs)
squashfs-gzip-4096.squashfs 2408360 42.17% 0m44.844s 1m26.626s
squashfs-gzip-8192.squashfs 2299960 40.29% 0m52.296s 1m11.806s
squashfs-gzip-16384.squashfs 2222788 38.95% 1m10.460s 1m11.559s
squashfs-gzip-32768.squashfs 2164848 37.94% 1m32.189s 1m46.050s
squashfs-gzip-65536.squashfs 2124288 37.23% 1m52.192s 1m48.464s
squashfs-gzip-131072.squashfs 2102304 36.85% 2m45.725s 1m50.582s
squashfs-lzo-4096.squashfs 2704312 47.35% 4m44.273s 1m10.986s
squashfs-lzo-8192.squashfs 2580616 45.21% 7m42.889s 0m53.958s
squashfs-lzo-16384.squashfs 2483156 43.51% 4m5.703s 0m55.145s
squashfs-lzo-32768.squashfs 2402196 42.10% 5m41.550s 1m8.597s
squashfs-lzo-65536.squashfs 2335468 40.94% 6m18.788s 1m1.768s
squashfs-lzo-131072.squashfs 2295204 40.23% 3m56.797s 1m0.462s
squashfs-xz-4096.squashfs 2334384 40.88% 7m14.503s 3m55.037s
squashfs-xz-8192.squashfs 2162352 37.88% 6m26.875s 3m44.205s
squashfs-xz-16384.squashfs 2039460 35.74% 5m9.183s 3m41.578s
squashfs-xz-32768.squashfs 1947760 34.14% 7m4.122s 3m55.465s
squashfs-xz-65536.squashfs 1877668 32.91% 5m34.739s 4m13.721s
squashfs-xz-131072.squashfs 1822024 31.94% 4m50.316s 5m13.156s

We had a rather interesting project in Linaro where we hacked squashfs to do Zstd compression - that should give us quite a nice speedup.
I’ll add the needed patchset to our next kernel build.


@bero I’ve just added zstd patches to kernel-release and squashfs-tools.
The kernel is building now, and squashfs-tools with zstd support is already published.

According to the benchmarks, zstd may give us a real speed-up when booting the ISO.

Should it speed up installation too? I would think so.

zstd decompresses roughly 4–5 times faster than xz, but it compresses somewhat less tightly.
This means an ISO whose squashfs image is compressed with zstd will boot faster (the image decompresses quicker), but the ISO itself will be bigger.
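If we want to try it, the build change itself should be small. Something like the following invocation; the paths and the compression level are placeholders of mine, and it needs squashfs-tools built with zstd support:

```shell
# Hypothetical invocation -- ./rootfs and livecd.squashfs are placeholders.
# -Xcompression-level ranges 1-22 for zstd in mksquashfs.
cmd="mksquashfs ./rootfs livecd.squashfs -comp zstd -Xcompression-level 19 -b 131072 -noappend"
echo "$cmd"   # run this on a build host with zstd-enabled squashfs-tools
```

A lower `-Xcompression-level` would trade a bit more ISO size for a faster build.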

The current ISO, with the squashfs compressed with xz, is around 2 GB.
With zstd it would be around 2.3 GB (just a guess :slight_smile: )

I ran a local experiment with lzo compression and ended up with an ISO of around 2.3 GB. In VirtualBox you certainly notice the difference, but I’m not sure you’d see the same improvement from a USB 2 memory stick, as read speed may well be the limiting factor there. UFS storage and USB 3 may see an improvement though.

The current ISO boot time in live mode is something that needs improvement.
We are quite close to giving zstd compression a test.

I’ll try to add zstd support to dracut.
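On the dracut side the switch could end up being just a config drop-in. This is a sketch assuming a dracut build that recognises zstd; the drop-in file name is my invention:

```shell
# /etc/dracut.conf.d/99-zstd.conf -- hypothetical drop-in; requires dracut
# (and the kernel's initramfs decompressor) to support zstd
compress="zstd"
```

After that, regenerating the initramfs with `dracut -f` and test-booting should tell us whether it works.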
