This weekend I planned to clean up a mess I had created by allowing my Let's Encrypt security certificate to expire. It had been a challenge initially even to get it set up, and there was a distinct lack of information on how to renew these certificates, especially in an automated fashion. Luckily I found an open-source repository my provider had adopted from another member that did the job in a much more automated fashion than my original setup. Soon my site was once again viewable without scary security warnings.
I then decided to try scanning the website with Mozilla's Observatory project and see how it scored. My website had basically no good security settings whatsoever. The majority of the fixes were simple: adding headers to the web server's responses. Many web servers set a header describing the specific server software in use. This is an attack vector, since anyone attempting an intrusion now knows which exploits to target the web server with. The other headers relate to different classes of attacks.
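As a sketch of hiding that server-identifying header on Apache: note that Apache sets its Server header late in the response cycle, so unsetting it from .htaccess often has limited effect; the reliable route is the main server config, which I do not control on shared hosting.

```apache
# Sketch, assuming Apache with mod_headers. Attempt to strip the Server header:
<IfModule mod_headers.c>
    Header unset Server
</IfModule>
# If you control the main server config (not possible from .htaccess),
# these directives are the dependable fix:
#   ServerTokens Prod     # send only "Server: Apache", no version details
#   ServerSignature Off   # remove version info from error pages
```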
X-Frame-Options: This header tells the browser whether the webpage may be iframed into another webpage. In some cases you want to allow this, if a page should be able to include content from another page. Otherwise framing should be denied, or the header set to 'SAMEORIGIN' so that only pages on your own domain can iframe the content.
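In .htaccess terms, the two common values look like this (a sketch; pick one):

```apache
<IfModule mod_headers.c>
    # Block all framing of this site's pages:
    # Header set X-Frame-Options "DENY"

    # Or allow framing only by pages on the same origin:
    Header set X-Frame-Options "SAMEORIGIN"
</IfModule>
```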
X-XSS-Protection: Browsers nowadays generally will not execute scripts injected from other domains into a webpage, but some older browsers relied on this header. So it's generally safe to toggle this on, but be aware that if you are loading scripts from other webpages on your site, this might stop that from working.
Strict-Transport-Security: This tells the browser that, for a set duration, it should only fetch the webpage over HTTPS and never over plain HTTP, preventing man-in-the-middle attacks.
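A minimal sketch of the directive: max-age is the duration in seconds (31536000 is one year). The optional "preload" token is worth holding back until you are certain, since getting removed from browser preload lists is slow.

```apache
<IfModule mod_headers.c>
    # Force HTTPS for one year, including all subdomains:
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</IfModule>
```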
X-Content-Type-Options: Disables the browser's ability to sniff content types. I.e. if the webpage owner declares some content as text or JSON, the browser cannot try other types; only the declared type works. The scenario this guards against seems to involve a user uploading content to a shared web service and other users downloading that content in a way that executes it instead of displaying it as data.
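The directive itself is simple; "nosniff" is the only defined value:

```apache
<IfModule mod_headers.c>
    # Tell browsers to trust the declared Content-Type and never guess:
    Header set X-Content-Type-Options "nosniff"
</IfModule>
```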
Content-Security-Policy: This is a very strict header governing what types of content can be loaded from other websites. Great care and caution should be used with this one, as it is easy to break your site's functionality. If you are used to pulling scripts, CSS, or libraries from CDNs, then this will cause some headaches or a lot of tweaking. The header makes it very explicit which external content you allow to render on your own website. This largely neutralizes cross-site scripting attacks, at the cost of a sysadmin probably needing to tweak a lot of settings or serve resources locally.
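A hypothetical starting point is to forbid everything by default and whitelist sources one directive at a time; the cdn.example.com host below is a placeholder, not a real dependency of this site:

```apache
<IfModule mod_headers.c>
    # Deny all sources by default, then allow only what the site actually uses:
    Header set Content-Security-Policy "default-src 'none'; script-src 'self' cdn.example.com; style-src 'self' cdn.example.com; img-src 'self'; connect-src 'self'"
</IfModule>
```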
Since I do not administer the web server my website runs on, I edited the .htaccess file to add these rules.
```apache
<IfModule mod_headers.c>
    Header unset Server
    Header append X-Frame-Options "SAMEORIGIN"
    Header append Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    Header append X-XSS-Protection "1; mode=block"
    Header append X-Content-Type-Options "nosniff"
    Header set Content-Security-Policy "default-src 'none'; script-src 'self' 'unsafe-inline' cdnjs.cloudflare.com; connect-src 'self'; img-src 'self' s3-ap-northeast-1.amazonaws.com; style-src 'self' 'unsafe-inline' cdnjs.cloudflare.com https://fonts.googleapis.com; font-src 'self' https://fonts.googleapis.com data: https://fonts.gstatic.com cdnjs.cloudflare.com;"
</IfModule>
```
Since this website uses some fonts and scripts from Google and Cloudflare, I had to do quite a bit of whitelisting, and I am considering removing these external dependencies.
Public-Key-Pins: This was one security header I chose not to set. While reading about how it all works, I learned there is a real risk of doing it incorrectly and making your website inaccessible. If you set this up, you are basically signalling which certificates and certificate authorities you trust to vouch for your website's identity.