I’ve noticed that Google Chrome says that this Discourse site is using “obsolete cryptography”, specifically RC4 128 bit. @raphael is access to this installation proxied through Jasper, or no? I feel this is something we should address, but I’m not entirely sure where the necessary changes must be made.
Also, the initial SSL handshake required to access this site seems to take a rather long time from my location. Is there any way we can speed this up? For starters, I'll increase the SSL keep-alive length once I know where to make this change.
It says the same thing for all our sites. Our Nginx setup seems to default to RC4 for this version of Chrome (41.0) for some reason. I haven't tested it with any other browsers at this stage, though.
@john I not only increased the TLS keepalive and session cache, but also added OCSP stapling for faster load times. On top of that, I set Nginx to use session tickets as a handshake shortcut, and even enabled the experimental SPDY protocol to reduce latency.
At the same time, to improve security, I set up ephemeral Diffie-Hellman key exchange, an HSTS policy (the latter only works in modern browsers) and public key pinning to prevent MITM attacks.
I did not use Mozilla's full cipher list but tuned it to favour forward secrecy. Unfortunately, this penalizes old environments (such as IE6, Java 6, etc.).
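For the record, the relevant directives look roughly like this; the paths, pin values and exact cipher string below are placeholders for illustration, not our live config:

```nginx
# Session cache and tickets: reuse TLS state across connections
ssl_session_cache    shared:SSL:10m;
ssl_session_timeout  10m;
ssl_session_tickets  on;

# OCSP stapling: attach the revocation response to the handshake
ssl_stapling         on;
ssl_stapling_verify  on;
ssl_trusted_certificate  /etc/nginx/ssl/ca-chain.pem;   # illustrative path
resolver             8.8.8.8;

# Ephemeral Diffie-Hellman parameters for forward secrecy
# (generated with e.g.: openssl dhparam -out dhparam.pem 2048)
ssl_dhparam  /etc/nginx/ssl/dhparam.pem;                # illustrative path

# No SSLv3, no RC4; prefer ECDHE/DHE suites (cipher string is an example)
ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers  on;
ssl_ciphers  'ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:!RC4:!aNULL:!MD5';

# HSTS (honoured by modern browsers only) and public key pinning
add_header  Strict-Transport-Security  "max-age=31536000; includeSubDomains";
add_header  Public-Key-Pins  'pin-sha256="PRIMARY_PIN_BASE64="; pin-sha256="BACKUP_PIN_BASE64="; max-age=5184000';

# SPDY goes on the listen directive (requires nginx built with --with-http_spdy_module)
# listen 443 ssl spdy;
```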
I've also made some improvements in Nginx. They're not totally finished, but overall access to the websites should be faster (Discourse is a different story; I need to tune the reverse proxy for it).
However, I'm also interested in your TTFB @john. Since you connect from quite far from France, maybe we could improve access to content by using a free CDN service (Cloudflare?).
I was seeing a TTFB of 5.5 seconds here. If I open Chrome’s developer tools and refresh the page with caching disabled, it takes up to 26 seconds to load the main discourse page fully from scratch, which is a 5.3 MB download spread across 31 requests.
I've spent a couple of hours tweaking Nginx's config file. It now caches any requests for images so Discourse doesn't have to deal with them, and adds expires headers to them. I also enabled level 9 gzip compression on all CSS, JS and HTM files, which reduced the total size of the home page to 3.3 MB. Then I noticed a whole bunch of .map files (no idea what they're for) were being downloaded on the home page, so I told Nginx to compress those as well. The home page is now a 1.3 MB download.
Oh, and I also recompressed our site asset png images with super tight compression.
(if we have load issues, we could look at turning the gzip down from level 9 to something more like level 7).
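Roughly, the directives involved look like this; the location pattern and values are a sketch of the idea rather than the exact config:

```nginx
# Long-lived caching for images, answered by Nginx so Discourse never sees them
location ~* \.(png|jpe?g|gif|ico)$ {
    expires     30d;
    add_header  Cache-Control "public";
}

# Compress text assets; HTML is gzipped by default, so only extra types are listed
gzip             on;
gzip_comp_level  9;        # could be lowered to ~7 under load
gzip_min_length  1024;     # skip tiny responses where gzip gains nothing
gzip_types       text/css application/javascript application/json;
# .map source maps are typically served as application/json or
# application/octet-stream; add whichever type the server actually emits
```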
15 seconds is still quite a long time for a new visitor to wait…
The CPU consumption of Nginx hasn't risen at this stage. However, I do agree with your article. My focus yesterday was moving load from Discourse to Jasper; today I'll try to figure out how Discourse's CSS and JS pipelines work so I can compress them at the source using the technique in your article.
If we’re only doing it once, I see no reason not to -9 them, though, as it makes hardly any difference to client side CPU requirements.
Nginx isn't configured directly in the Discourse VM; it's built by the Docker image, and everything is recompiled and reconfigured during a rebuild. If there is something to do, it's rather on Jasper; on the other hand, a CDN seems to give much more of a speed-up.
Is there a way to get Nginx to cache files already zipped, instead of re-zipping them from the cache every time?
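For what it's worth, Nginx's `ngx_http_gzip_static_module` (a compile-time option, `--with-http_gzip_static_module`) does something close to this: if a pre-compressed `foo.js.gz` sits next to `foo.js`, it's served directly instead of being compressed on the fly. A sketch, assuming the assets live under `/assets/` and were compressed ahead of time:

```nginx
location /assets/ {
    gzip_static  on;   # serve foo.js.gz directly if the client accepts gzip
    gzip         on;   # fall back to on-the-fly compression otherwise
}
```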
A CDN won't do much good if our origin isn't telling it to cache anything, so all these optimisations need to happen at some point anyway. Performance is already far better from my location.