Discourse SSL setup

I’ve noticed that Google Chrome says that this Discourse site is using “obsolete cryptography”, specifically RC4 128 bit. @raphael is access to this installation proxied through Jasper, or no? I feel this is something we should address, but I’m not entirely sure where the necessary changes must be made.

Also, the initial SSL handshake required to access this site seems to take rather a long time from my location. Is there any way we can speed this up? For starters, I’ll increase the SSL keepalive length once I know where to make the change.

Do you have the same message for other websites? Like wiki or pad.

It says the same thing for all our sites. Our Nginx setup seems to default to RC4 for this version of Chrome (41.0) for some reason. I haven’t tested it with any other browsers at this stage, though.

@john better now?

I’ve also added OCSP stapling for shorter access times.

:thumbsup: all the sites now use AES_128_GCM by default on this browser with our server.
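For anyone who wants to verify this from the command line, a standard openssl check along these lines shows the negotiated protocol and cipher:

openssl s_client -connect openmandriva.org:443 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'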


@john not only did I improve the TLS keepalive and session cache, I also added OCSP stapling for faster load times. If that’s still not enough, I have also set Nginx to use session tickets as a handshake shortcut, and even enabled the experimental SPDY protocol to reduce latency.
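For reference, the nginx directives involved look roughly like this (the timings, the certificate path and the resolver here are illustrative, not necessarily the exact values deployed):

server {
    listen 443 ssl spdy;                  # SPDY is still experimental in this nginx branch
    keepalive_timeout       65;           # keep client connections open for reuse
    ssl_session_cache       shared:SSL:10m;   # server-side TLS session cache
    ssl_session_timeout     10m;
    ssl_session_tickets     on;           # session tickets allow abbreviated handshakes
    ssl_stapling            on;           # staple the OCSP response to the handshake
    ssl_stapling_verify     on;
    ssl_trusted_certificate /etc/nginx/ssl/AddTrustExternalCARoot.crt;   # placeholder path
    resolver                8.8.8.8;      # nginx needs a resolver to reach the OCSP responder
    # (ssl_certificate / ssl_certificate_key omitted from this sketch)
}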

[merlin@localhost Documents]$ openssl s_client -connect openmandriva.org:443 -CAfile AddTrustExternalCARoot.crt -tls1  -tlsextdebug  -status
[snip 8<--]
OCSP response: 
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: B390A7D8C9AF4ECD613C9F7CAD5D7F41FD6930EA
    Produced At: Apr  9 09:24:24 2015 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: A5E2344EF5763A9CE2F31E9B9807B0075727A5F9
      Issuer Key Hash: B390A7D8C9AF4ECD613C9F7CAD5D7F41FD6930EA
      Serial Number: 78F9AE13F95A81B685727394887DC747
    Cert Status: good
    This Update: Apr  9 09:24:24 2015 GMT
    Next Update: Apr 13 09:24:24 2015 GMT

    Signature Algorithm: sha256WithRSAEncryption
[snip 8<--]
======================================

At the same time, to improve security, I have set up an ephemeral Diffie-Hellman cipher, an HSTS policy (the latter only works on modern browsers) and public key pinning to prevent MITM attacks.
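Roughly the directives this corresponds to in nginx (the dhparam path and the pin hashes below are placeholders, not our real values):

ssl_dhparam /etc/nginx/ssl/dhparam.pem;   # e.g. generated once with: openssl dhparam -out dhparam.pem 2048

# HSTS: only honoured by modern browsers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

# HPKP: the pin-sha256 values here are dummies
add_header Public-Key-Pins 'pin-sha256="PRIMARYKEYHASHplaceholder="; pin-sha256="BACKUPKEYHASHplaceholder="; max-age=5184000';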

I did not use Mozilla’s full cipher list but tuned it to favour forward secrecy. Unfortunately, this penalizes older environments (such as IE6, Java 6, etc.).
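For illustration, a forward-secrecy-first setup looks something like this (the exact list deployed may differ):

ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# ECDHE/DHE suites first so forward secrecy wins; RC4 and anonymous ciphers excluded
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:!aNULL:!eNULL:!MD5:!RC4';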

You can see here that the global configuration is rated as not too bad: SSL Server Test: openmandriva.org (Powered by Qualys SSL Labs) :smile:

I’ve also made some improvements in Nginx, not totally finished yet, but overall access to the websites should be faster (for Discourse it’s different, I need to tune the reverse proxy).

However, I’m also interested in your TTFB @john; as you connect from quite far from France, maybe we could improve access to content by using a free CDN service (Cloudflare?).

I was seeing a TTFB of 5.5 seconds here. If I open Chrome’s developer tools and refresh the page with caching disabled, it takes up to 26 seconds to load the main discourse page fully from scratch, which is a 5.3 MB download spread across 31 requests.

I’ve spent a couple of hours tweaking Nginx’s config file. It now caches any requests for images so Discourse doesn’t have to deal with them, and adds expires headers to them. I also enabled level 9 gzip compression on all CSS, JS and HTML files, which reduced the total size of the home page to 3.3 MB. Then I noticed a whole bunch of .map files (no idea what they’re for) were being downloaded on the home page, so I told Nginx to compress these as well. Now, the home page is a 1.3 MB download (a rough sketch of the relevant directives is below).

Oh, and I also recompressed our site asset png images with super tight compression.

(if we have load issues, we could look at turning the gzip down from level 9 to something more like level 7).
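The changes are along these lines (the paths, cache zone name and upstream name are illustrative placeholders, not the exact Jasper config):

# cache zone would be declared elsewhere with proxy_cache_path; "discourse" is a placeholder upstream
location ~* \.(png|jpg|jpeg|gif|ico|svg)$ {
    proxy_pass        http://discourse;
    proxy_cache       static_cache;
    proxy_cache_valid 200 7d;
    expires           7d;            # browsers get an Expires header too
}

gzip            on;
gzip_comp_level 9;                   # see below about possibly lowering this
# text/html is gzipped by default; .map files tend to be served as json or octet-stream
gzip_types      text/css application/javascript application/json application/octet-stream;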

15 seconds is still quite a long time for a new visitor to wait…

OK, I’ve seen your tuning; it’s good work. I did not activate the cache for all websites as I wanted to check CPU consumption first.

For compression, I think we should stay around level 3 rather than 9 or even 7; see here: http://www.media-division.com/generation-of-gzip-files-for-nginx/

What will also improve your access is the use of a CDN.

BTW, using only gzip_static with a cron’ed compression seems an even better solution :slight_smile:
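Something like this, assuming the static assets live somewhere like /var/www (the path and the cron schedule are just an illustration):

# nginx: serve a pre-built foo.css.gz next to foo.css instead of compressing per request
gzip_static on;

# illustrative cron entry: pre-compress CSS/JS at the maximum level once, off the request path
0 3 * * * find /var/www -type f \( -name '*.css' -o -name '*.js' \) -exec sh -c 'gzip -9 -c "$0" > "$0.gz"' {} \;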

Nginx’s CPU consumption hasn’t risen at this stage. However, I do agree with your article. My focus yesterday was moving load from Discourse to Jasper; today I’ll try and figure out how Discourse’s CSS and JS systems work so I can compress them at the source using the technique in your article.
If we’re only doing it once, I see no reason not to -9 them, though, as it makes hardly any difference to client-side CPU requirements.

Nginx is not fixed in the Discourse VM; it uses the Docker build and everything is recompiled and reconfigured during a rebuild. If there is something to do, it’s rather on Jasper, but on the other hand, a CDN seems to give much more of a speed-up.

Another field to explore is PageSpeed.

Is there a way to get Nginx to cache files already zipped instead of zipping them from the cache every time?

A CDN won’t do much good if our origin isn’t telling it to cache anything :wink: so all these optimisations need to happen at some point anyway. Performance is already far better from my location.

Discourse already provides gzip-compressed files.