If you have already started using HSTS to force users to your HTTPS website, adding ‘preload’ is another simple step, as it only requires appending the keyword to the header.
Once done, you can either wait for your site to be identified (which can take a long time, or may never happen for less popular websites) or, ideally, submit your hostname to be added to the lists preloaded in many modern browsers. The advantage here is that your users will never make a single request to your HTTP website and will automatically be directed to HTTPS.
An HTTP Header example:
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
Apache2 configuration example:
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
A relatively new HTTP header that is supported by most modern browsers (except MSIE) is the “Referrer-Policy” header. There have been previous attempts to implement similar protections through use of the ‘rel’ (or ‘rev’) attributes on links to external websites. The Referrer-Policy header takes a different approach and prevents leaking of internal URLs, and in some cases parameters, to external websites. This is important from a security perspective, as you might maintain sensitive information in your page URLs that would otherwise be inadvertently shared with an external website.
Clearly, you’ll need to determine your own level of security based upon your needs. For example, ‘no-referrer’ is the most strict and prevents the browser from sending the ‘Referer’ (sic) header even to your own website’s pages.
Example header values:
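The values defined by the Referrer Policy specification are:
- no-referrer
- no-referrer-when-downgrade
- same-origin
- origin
- strict-origin
- origin-when-cross-origin
- strict-origin-when-cross-origin
- unsafe-url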
Implementation can be accomplished in many ways, the simplest being an addition to your HTTP server configuration similar to the one shown below for Apache 2.x:
Header always set Referrer-Policy strict-origin
If you are running a secure website, it’s a good idea to prevent non-secure assets from being included in your pages. Mixed content can creep in through the use of a content management system, or even through website vulnerabilities, and a simple change in HTTP headers (sketched below) will help browsers to defend against it.
Most modern browsers, except MSIE, currently support this approach.
- Firefox 48+
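The header example appears to be missing from this post; given the Firefox 48+ support note, the directive in question is most likely CSP’s block-all-mixed-content (my assumption, not a confirmed detail from the original). In Apache 2.x that would look like:
Header always set Content-Security-Policy "block-all-mixed-content"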
In most cases, the use of document.write() can be replaced by inserting a script element into the DOM asynchronously.
Google has recently changed Chrome’s default behavior for scripts injected via document.write() when the user is on a slow (currently 2G) connection, and discussions have also leaned toward including any slow connection.
As such, right now, the following will occur on slow (2G) connections:
- Chrome 53+ (warning displayed in debugger console)
- Chrome 55+ (blocked – code will not execute, warning message will appear in debugger console)
For users on slow connections, such as 2G, external scripts dynamically injected via document.write() can delay the display of main page content for tens of seconds, or cause pages to either fail to load or take so long that the user just gives up. Based on instrumentation in Chrome, the Chrome team has learned that pages featuring third-party scripts inserted via document.write() are typically twice as slow to load as other pages on 2G.
My advice: remove all use of document.write() for required content in your code now, as your users MAY NOT see that content if you do not.
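As a rough sketch of the replacement pattern (the script URL here is hypothetical), instead of:
document.write('<script src="https://third-party.example.com/widget.js"><\/script>');
you can create the element and append it to the DOM:
var script = document.createElement('script');
script.src = 'https://third-party.example.com/widget.js';
script.async = true; // load without blocking the parser
document.head.appendChild(script);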
With a few simple steps, Google Chrome can be installed on Ubuntu.
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update
sudo apt-get install google-chrome-stable
sudo apt-get install google-chrome-beta
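Once installed, you can confirm which version you received:
google-chrome-stable --version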
I’ve been using Selenium for testing for several years, though I noticed that some classes related to the HtmlUnit WebDriver were missing after upgrading from 2.52.0 to 2.53.0. After some research, I discovered that it is now a separate dependency, allowing for a separate release cycle. Additionally, if you don’t use this (relatively generic) WebDriver, you will no longer need to carry it in your binaries.
Here’s all you need to do to add it to your Maven projects for testing.
In your pom.xml file:
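The dependency snippet appears to be missing from this post; the artifact is org.seleniumhq.selenium:htmlunit-driver (2.21 was the release paired with Selenium 2.53.0, so check for the latest version):
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>htmlunit-driver</artifactId>
    <version>2.21</version>
    <scope>test</scope>
</dependency>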
Link prefetching is used to identify a resource that might be required by the next navigation, and that the user agent SHOULD fetch, such that the user agent can deliver a faster response once the resource is requested in the future.
<link rel="prefetch" href="http://www.example.com/images/sprite.png" />
<link rel="prefetch" href="/images/sprite.png" />
- MSIE 11+/Edge
- Firefox 3.5+ (for HTTPS)
In addition to dns-prefetch, you can take browser performance one step further by actually creating a new connection to a resource.
By initiating an early connection, which includes the DNS lookup, TCP handshake, and optional TLS negotiation, you allow the user agent to mask the high latency costs of establishing a connection.
- Firefox 39+ (Firefox 41 for crossorigin)
- Chrome 46+
<link rel="preconnect" href="//example.com" />
<link rel="preconnect" href="//cdn.example.com" crossorigin />
I often get into some fringe areas of website performance micro-optimization; DNS prefetching is another of those topics.
To understand how this can help, you must first understand the underlying concepts that are used within the communications used to build your web page.
The first of these is a “DNS Lookup”, where the domain name (www.example.com) is converted into a numerical address, the IP address of the server that contains the file(s).
In many websites, content is included from other domains for performance or security purposes.
When the domain names are known in advance, this approach can save time on the connection, as the lookup can be performed in advance, before it is required to retrieve assets on the page.
This can be particularly useful for users with slow connections, such as those on mobile browsers.
<link rel="dns-prefetch" href="//www.example.com" />
- MSIE 9+ (MSIE 10+ as dns-prefetch)/Edge
If you look at HTTP headers as often as I do, you’ve likely noticed something different in Firefox 44 and Chrome 49. In addition to the usual ‘gzip’, ‘deflate’ and ‘sdch’, a new value ‘br’ has started to appear for HTTPS connections.
Compared to gzip, Brotli claims to have significantly better (26% smaller) compression density with comparable decompression speed.
The smaller compressed size allows for better space utilization and faster page loads. We hope that this format will be supported by major browsers in the near future, as the smaller compressed size would give additional benefits to mobile users, such as lower data transfer fees and reduced battery use.
- Brotli outperforms gzip for typical web assets (e.g. css, html, js) by 17-25%.
- Brotli -11 density compared to gzip -9:
  - html (multi-language corpus): 25% savings
  - js (alexa top 10k): 17% savings
  - minified js (alexa top 10k): 17% savings
  - css (alexa top 10k): 20% savings
NOTE: Brotli is not currently supported by the Apache HTTPd server (as of 2016feb10), but will likely be added in an upcoming release.
Until there is native support, you can pre-compress files by following instructions here…
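As a rough sketch of that approach (my own, not the linked instructions), assuming the Brotli command-line tool is installed (the 2016-era tool was named ‘bro’; current releases install as ‘brotli’), you can pre-compress assets and have Apache serve the .br variants to clients that advertise support:
# pre-compress at maximum quality; produces e.g. app.js.br alongside app.js
brotli -q 11 app.js
# Apache 2.x: map .br to the 'br' content encoding, then rewrite
# requests to the pre-compressed file when one exists
AddEncoding br .br
RewriteEngine On
RewriteCond %{HTTP:Accept-Encoding} \bbr\b
RewriteCond %{REQUEST_FILENAME}.br -f
RewriteRule ^(.*)$ $1.br [L]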