Minify .js files during Maven builds

Minifying files for use on the web is essential to improving performance: it reduces network overhead and can even provide a slight bump in execution speed.

Long ago I used the YUICompressor plugin for both JS and CSS files. Unfortunately, that project appears to have been abandoned many years ago and no longer works well for JS files that make use of modern language features.

For JS files, I’ve found that most of its capabilities can be replicated with the following in the pom.xml:

<plugin>
  <groupId>com.github.blutorange</groupId>
  <artifactId>closure-compiler-maven-plugin</artifactId>
  <version>2.21.0</version>
  <executions>
    <execution>
      <id>default-minify-js</id>
      <phase>generate-resources</phase>
      <configuration>
        <!-- not supported (always uses .min) <suffix>-min</suffix> -->
        <encoding>UTF-8</encoding>
        <baseSourceDir>${basedir}/${webapp-folder}</baseSourceDir>
        <baseTargetDir>${webapp.path}/</baseTargetDir>
        <sourceDir>js</sourceDir>
        <targetDir>js</targetDir>
        <skipMerge>true</skipMerge>
        <includes>
          <include>**/*.js</include>
        </includes>
        <excludes>
          <exclude>**/webjars-requirejs.js</exclude>
          <exclude>**/bootstrap*.*</exclude>
          <exclude>**/jasmine*.*</exclude>
          <exclude>**/*-min.js</exclude>
          <exclude>**/*.min.js</exclude>
        </excludes>
      </configuration>
      <goals>
        <goal>minify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
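
The ${webapp-folder} and ${webapp.path} values above are Maven properties from my own build; if you do not already define them, a minimal sketch (the paths shown are only assumptions) would be:

<properties>
  <!-- assumption: the source webapp folder, relative to ${basedir} -->
  <webapp-folder>src/main/webapp</webapp-folder>
  <!-- assumption: the exploded webapp output used as the minification target -->
  <webapp.path>${project.build.directory}/${project.build.finalName}</webapp.path>
</properties>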

NOTE: the only feature I have not yet been able to match is the suffix, as it appears to always use *.min.js (where I used to prefer *-min.js).

An additional advantage of using the plugin is that many common syntax errors will be identified at build time, before they cause user problems… but they will also break your build!

REFERENCES:

Minify .css files during Maven builds

Minifying files for use on the web is essential to improving performance: it reduces network overhead and can even provide a slight bump in execution speed.

Long ago I used the YUICompressor plugin for both JS and CSS files. Unfortunately, that project appears to have been abandoned many years ago, but it is still effective for CSS even if it is less useful for modern JS.

NOTE: the default suffix is -min; to change it, set the property ‘maven.yuicompressor.suffix’ to the desired value, such as .min to match the Google Closure Compiler (GCC) used for JS.
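
For example (a minimal sketch, using the property named in the note above), the suffix can be set in the pom.xml properties:

<properties>
  <!-- switch the YUI compressor output suffix from the default -min to .min -->
  <maven.yuicompressor.suffix>.min</maven.yuicompressor.suffix>
</properties>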

For CSS files, you can use the following in the pom.xml (JS is also possible if you are not using ES6 features):

<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>yuicompressor-maven-plugin</artifactId>
  <version>1.5.1</version>
  <executions>
    <execution>
      <id>compressyui-min</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>jslint</goal>
        <goal>compress</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <nosuffix>false</nosuffix><!-- false will create -min versions, true does not -->
    <warSourceDirectory>${basedir}/${webapp-folder}</warSourceDirectory>
    <webappDirectory>${webapp.path}/</webappDirectory>
    <jswarn>false</jswarn>
    <gzip>false</gzip><!-- create .min.gz files -->
    <nocompress>false</nocompress>
    <force>true</force>
    <excludes>
      <exclude>**/*.js</exclude><!-- YUI cannot handle ES6 const/let, use closure-compiler instead -->
      <exclude>${webjars-output.path}*.*</exclude>
      <exclude>**/webjars-requirejs.js</exclude>
      <exclude>**/bootstrap*.*</exclude>
      <exclude>**/jasmine*.*</exclude>
      <exclude>**/*-min.css</exclude>
      <exclude>**/*.min.css</exclude>
    </excludes>
  </configuration>
</plugin>

REFERENCES:

Content-Security-Policy: upgrade-insecure-requests;

As the web has been shifting to HTTPS for security and performance reasons, there are many methods to migrate users. One simple method is via the use of the Content-Security-Policy header.


Content-Security-Policy: upgrade-insecure-requests;

Most modern browsers, except MSIE, currently support this approach.
  • Chrome 43+
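
On the server side, the header can be added in whatever way your stack supports; as a minimal sketch for Apache HTTPd (assuming mod_headers is enabled):

# Apache HTTPd sketch (requires mod_headers)
Header set Content-Security-Policy "upgrade-insecure-requests;"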

REFERENCES:

document.write() Intervention!

The use of document.write() has always been a bad “code smell” in JavaScript. Most web performance guides, such as WebPageTest and Yahoo’s Exceptional Performance rules, have warned against this practice.

In most cases, document.write() can be replaced by inserting innerHTML into an empty element after the rest of the page loads. This approach also allows the developer to “think” about how the page might react in cases where JavaScript is disabled or not available on the client.
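
As an illustration (the element id and markup here are hypothetical), the same content can be injected after the page loads instead of being written during parsing:

<div id="banner-slot"></div>
<script>
  // hypothetical example: inject markup after load instead of using document.write()
  window.addEventListener('load', function () {
    document.getElementById('banner-slot').innerHTML =
        '<img src="/images/banner.png" alt="Banner">';
  });
</script>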

Google has recently changed Chrome’s default behavior for slow (currently 2G) connections, and discussions have also leaned toward including any slow connection.
As such, right now, the following will occur on slow (2G) connections:

  • Chrome 53+ (warning displayed in debugger console)
  • Chrome 55+ (blocked – code will not execute, warning message will appear in debugger console)

For users on slow connections, such as 2G, external scripts dynamically injected via document.write() can delay the display of main page content for tens of seconds, or cause pages to either fail to load or take so long that the user just gives up. Based on instrumentation in Chrome, we’ve learned that pages featuring third-party scripts inserted via document.write() are typically twice as slow to load as other pages on 2G.


My advice – remove all use of document.write() for required content in your code now, as your users MAY NOT see that content if you do not.

REFERENCES:

Modify Ubuntu Swappiness for performance

Sometimes, it is possible to improve the performance of Ubuntu on older hardware by modifying the disk swapping behavior.

Check your current setting:

cat /proc/sys/vm/swappiness

To modify the behavior, just change the value and reboot. Most documentation recommends trying a value of 10.

sudo vi /etc/sysctl.conf

Add (or change):

# Decrease swappiness value (default:60)
vm.swappiness=10
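
To apply the new value immediately without a reboot, you can also set it at runtime (verify afterwards with the cat command above):

sudo sysctl vm.swappiness=10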

REFERENCES:

Squid3 Proxy on Ubuntu

Using a personal proxy server can be helpful for a variety of reasons, such as:

  • Performance – network speed and bandwidth
  • Security – filtering and monitoring
  • Debugging – to trace activity

Here are some simple steps to get you started; obviously you will need to further “harden” security to make it production-ready!


sudo apt-get install squid3


cd /etc/squid3/
sudo mv squid.conf squid.orig
sudo vi squid.conf

NOTE: the following configuration works, but will likely need to be adapted for your specific usage.


http_port 3128
visible_hostname proxy.EXAMPLE.com
auth_param digest program /usr/lib/squid3/digest_file_auth -c /etc/squid3/passwords
#auth_param digest program /usr/lib/squid3/digest_pw_auth -c /etc/squid3/passwords
auth_param digest realm proxy
auth_param basic credentialsttl 4 hours
acl authenticated proxy_auth REQUIRED
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
#acl SSL_ports port 443
#http_access deny to_localhost
#http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow authenticated
via on
forwarded_for transparent

Create the users and passwords:

sudo apt-get install apache2-utils   # required for htdigest
sudo htdigest -c /etc/squid3/passwords proxy user1
sudo htdigest /etc/squid3/passwords proxy user2

Open up firewall port (if enabled):

sudo ufw allow 3128

Restart the server and tail the logs:

sudo service squid3 restart
sudo tail -f /var/log/squid3/access.log
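
To verify the proxy is working, you can test it from a client machine with curl (using the hostname, port and user created above; curl will prompt for the password):

curl -x proxy.EXAMPLE.com:3128 --proxy-digest -U user1 http://www.example.com/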

OTHER FILE LOCATIONS:

/var/spool/squid3
/etc/squid3

MONITORING with Splunk…

sudo /opt/splunkforwarder/bin/splunk add monitor /var/log/squid3/access.log -index main -sourcetype Squid3
sudo /opt/splunkforwarder/bin/splunk add monitor /var/log/squid3/cache.log -index main -sourcetype Squid3

REFERENCES:

HTML5 Link Prefetching

Link prefetching is used to identify a resource that might be required by the next navigation, and that the user agent SHOULD fetch, such that the user agent can deliver a faster response once the resource is requested in the future.


<link rel="prefetch" href="http://www.example.com/images/sprite.png" />

<link rel="prefetch" href="/images/sprite.png" />

Supported in:

  • MSIE 11+/Edge
  • Firefox 3.5+ (for HTTPS)
  • Chrome
  • Opera

REFERENCES:

HTML5 preconnect

In addition to dns-prefetch, you can take browser performance one step further by actually creating a new connection to a resource.

By initiating an early connection, which includes the DNS lookup, TCP handshake, and optional TLS negotiation, you allow the user agent to mask the high latency costs of establishing a connection.

Supported in:

  • Firefox 39+ (Firefox 41 for crossorigin)
  • Chrome 46+
  • Opera


<link rel="preconnect" href="//example.com" />
<link rel="preconnect" href="//cdn.example.com" crossorigin />

REFERENCES:

HTML5 DNS prefetch

I often get into some fringe areas of website performance micro-optimization; DNS prefetching is another one of those topics.

To understand how this can help, you must first understand the underlying steps involved in the communication used to build your web page.

The first of these is a “DNS Lookup”, where the domain name (www.example.com) is converted into a numerical address, the IP address of the server that contains the file(s).

In many websites, content is included from other domains for performance or security purposes.

When the domain names are known in advance, this approach can save time on the connection, as the lookup can be performed in advance, before it is required to retrieve assets for the page.

This can be particularly useful for users with slow connections, such as those on mobile browsers.


<link rel="dns-prefetch" href="//www.example.com" />

Supported in:

  • MSIE9+ (MSIE10+ as dns-prefetch)/Edge
  • Firefox
  • Chrome
  • Safari
  • Opera

REFERENCES:

Brotli Compression

If you look at HTTP headers as often as I do, you’ve likely noticed something different in Firefox 44 and Chrome 49. In addition to the usual ‘gzip’, ‘deflate’ and ‘sdch’, a new value ‘br’ has started to appear for HTTPS connections.

Request:

Accept-Encoding:br

Response:

Content-Encoding:br

Compared to gzip, Brotli claims to have significantly better (26% smaller) compression density with comparable decompression speed.

The smaller compressed size allows for better space utilization and faster page loads. We hope that this format will be supported by major browsers in the near future, as the smaller compressed size would give additional benefits to mobile users, such as lower data transfer fees and reduced battery use.

Advantages:

  • Brotli outperforms gzip for typical web assets (e.g. css, html, js) by 17–25%.
  • Brotli -11 density compared to gzip -9:
      • html (multi-language corpus): 25% savings
      • js (alexa top 10k): 17% savings
      • minified js (alexa top 10k): 17% savings
      • css (alexa top 10k): 20% savings


NOTE: Brotli is not currently supported by the Apache HTTPd server (as of 2016feb10), but will likely be added in an upcoming release.

http://mail-archives.apache.org/mod_mbox/httpd-users/201601.mbox/%[email protected]%3E

Until there is native support, you can pre-compress files by following instructions here…
https://lyncd.com/2015/11/brotli-support-apache/
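
As a rough sketch of that approach (the file pattern here is an assumption, and mod_rewrite plus mod_headers must be enabled), pre-compressed .br files can be served to supporting clients like this:

# Sketch only: serve style.css.br when the client advertises 'br' support
RewriteEngine On
RewriteCond %{HTTP:Accept-Encoding} br
RewriteCond %{REQUEST_FILENAME}.br -f
RewriteRule ^(.*)\.css$ $1.css.br [L]

<FilesMatch "\.css\.br$">
  ForceType text/css
  Header set Content-Encoding br
  Header append Vary Accept-Encoding
</FilesMatch>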

REFERENCES: