I wanted to talk about some of the optimisations I’ve made to this site, to give some ideas on how you can take your Hugo or static HTML sites to another level. Whilst I’m not claiming this is the most optimised the site can be, there has certainly been a lot of effort put in to make it more performant. I’ll start with the simplest methods and move on to the more complicated ones later on.
Optimisation One - no JavaScript
Seems obvious, but JavaScript is a horribly slow language that only adds unnecessary complexity to your site. There are tricks to making JavaScript faster, but the best optimisation you can do is to avoid using it entirely.
Optimisation Two - pure CSS and no libraries
My CSS is in one single main.css file, and no libraries like Bootstrap or Tailwind are used. These libraries add unnecessary bloat, and I only need minimal CSS, which I can optimise myself if required.
Optimisation Three - minify HTML/CSS
A good little trick that comes with Hugo by default (minus enabling it in your config file): minifying your HTML and CSS, resulting in smaller files and thus smaller network transfers.
One note I will mention: minifying HTML will strip out what it thinks is unnecessary whitespace, meaning if you have text that requires its whitespace to remain untouched, you will have to remember to wrap it with the <pre> tag. (I found this out the hard way.)
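For reference, enabling minification in Hugo’s config looks something like this (a sketch; option names can vary between Hugo versions, so check the docs for yours):

```toml
# config.toml
[minify]
  # minify the generated HTML/CSS/JS output
  minifyOutput = true
```

Alternatively, building with hugo --minify achieves the same thing without touching the config.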
Optimisation Four - Critical CSS
If you inline your CSS in the <head> of the page, instead of using a <link> referencing an external CSS file, the browser loads the CSS before the contents of the HTML page. This is called Critical CSS. It avoids unstyled HTML pages showing to the user, which is particularly noticeable on slow connections.
Also, my theory (admittedly not actually tested) is that it’s probably faster to load a single HTML file than one HTML file and one CSS file separately.
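In a Hugo template the inlining can be done with asset pipes, roughly like this (a sketch; it assumes the stylesheet lives at assets/css/main.css):

```html
<head>
  <!-- inline the minified CSS so no separate request is needed -->
  {{ $css := resources.Get "css/main.css" | minify }}
  <style>{{ $css.Content | safeCSS }}</style>
</head>
```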
Optimisation Five - use of system fonts
I just use monospace for the font, which avoids loading fonts from the server/cross-domain and having to do optimisations like preloads.
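The font setup then amounts to a single declaration (a sketch), with no font files to fetch:

```css
/* the generic keyword resolves to whatever monospace font the system has */
body {
  font-family: monospace;
}
```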
Optimisation Six - togif script
I recorded an mkv video with OBS testing out my new bfpprint program so I could put it up as an example on the GitHub README.md. I converted that file to a GIF and its size was 1.4MB; I initially got it down to ~500KB myself, but then I snooped around online and found someone’s solution.
My version of their solution looks like this:
ffmpeg -y -i "$1" -filter_complex \
"fps=12,scale=480:-1:flags=lanczos,split[s0][s1];\
[s0]palettegen=max_colors=24[p];[s1][p]paletteuse=dither=bayer" "$2"
Differences are:
- fps=5 -> 12
- max_colors=32 -> 24
It takes two parameters, $1 and $2: the former is the input file, the latter the output file.
The max_colors change removes the transparency byte, saving one byte per pixel.
Running the script converted the file from 1.4MB to 126kB for an 11 second clip with no real noticeable difference. (Maybe I can optimise this even more in future.)
Also, because the GIF is hosted on GitHub, my web-server doesn’t have to serve it, which is always a big plus, and GitHub is probably happier not having to serve my 1.4MB GIFs anymore.
REVISION
I’ve now converted from 'togif' to 'towebp' and have tweaked the script slightly to this:
ffmpeg -y -i "$1" -filter_complex \
"fps=12,scale=480:-1:flags=lanczos,split[s0][s1];\
[s0]palettegen=max_colors=24[p];[s1][p]paletteuse=dither=bayer" \
-lossless 1 -compression_level 6 -loop 0 "$2"
Additions include:
- -lossless 1
- -compression_level 6
- -loop 0
This takes the previous 125kB GIF and improves it to a 66kB WebP.
Optimisation Seven - apache2 mod_deflate
This compresses files before transfer, meaning fewer bytes are sent over the network. For instance, as of writing, the home-page for this site only transfers 2.70kB (2.3kB + header) when the actual size is 6.35kB.
I can’t remember whether this was turned on by default when I was setting up my apache server, so make sure you check if setting up your own.
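On a Debian-style apache2 install, the module’s config looks roughly like this (the exact MIME-type list here is an assumption; check mods-available/deflate.conf on your box):

```apache
# /etc/apache2/mods-available/deflate.conf (enable with: a2enmod deflate)
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript
```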
REVISION
Enabled the brotli module on apache2 and set BrotliCompressionQuality to 11 (the max), so the home-page now compresses to 2.34kB (1.95kB + header.)
I’m not noticing any real slowdown at the max compression level, maybe 1ms more wait time at worst.
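The brotli setup is only a couple of directives (mod_brotli ships with Apache 2.4.26+; the MIME-type list here is an assumption):

```apache
# enable with: a2enmod brotli
AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/css application/javascript
# 0-11; 11 is the best compression but the slowest
BrotliCompressionQuality 11
```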
Optimisation Eight - apache2 mod_http2
I recently found out my web-server was responding with HTTP/1.1. I enabled mod_http2 and edited the apache config to include
Protocols h2 h2c http/1.1
so it now responds with HTTP/2 and falls back to HTTP/1.1 if that’s not available.
HTTP/2 allows header compression and something called multiplexing, which as far as I can gather allows multiple requests and responses over a single TCP connection, making for faster service.
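For completeness, the full change was roughly this (note that mod_http2 won’t negotiate h2 under the old prefork MPM; you need mpm_event or similar):

```apache
# enable with: a2enmod http2
# then in apache2.conf or the relevant vhost:
Protocols h2 h2c http/1.1
```

You can check what a client actually negotiates with curl -sI -o /dev/null -w '%{http_version}\n' https://yoursite/ which prints 2 when HTTP/2 is in use.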
Optimisation Nine - ensuring your webpages are < 15kB (TCP slow start)
I won’t do this topic justice myself, so I will link this article on why it is vital for fast page loading. The short version: a typical initial TCP congestion window is around 10 segments (roughly 14kB), so a page that fits inside it can arrive in the first round trip.
Future optimisations
I will probably look into figuring out how to automatically preload Hugo images/GIFs for faster page responses.
That’s about all.