SameParty cookie attribute

While Google has made strides toward removing third-party cookies, there was a recent addition to the Chromium project upon which Chrome, Edge, and other browsers are based.

I saw this written up as the following:

The SameParty attribute takes no value, and requires that the cookie also specify the “Secure” attribute, and not specify “SameSite=Strict”. If either of these constraints is violated, the cookie will be considered invalid, and will not be set. “SameSite=Strict” is not supported because “SameSite=Strict” is intended as a security boundary, rather than a privacy boundary (which First-Party Sets aims to establish). Valid use-cases of “SameSite=Strict” in cross-site contexts should not be loosened even when the sites are same-party.

Better stated…

  • The SameParty attribute is specified without a value (as are Secure and HttpOnly) as ;SameParty;
  • The Secure attribute is required in order to use the SameParty attribute. Any cookie specifying SameParty without Secure will be rejected as invalid.
  • Additionally, any cookie specifying SameParty in the presence of SameSite=Strict will be rejected as invalid.
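
For example, a hypothetical Set-Cookie header using SameParty (the cookie name and value here are placeholders) might look like:

Set-Cookie: session=abc123; Secure; SameSite=Lax; SameParty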

While I’ve seen this implemented in versions of Chrome 89+, it is not yet adopted in Firefox (and may never be).


Cookie Priority attribute

While Google has made strides toward removing cookies, this feature was recently added in Chrome 81+ and appears to be a method for developers to better manage cookie eviction when the browser's cookie limits are being reached. The value can be seen in DevTools, and it appears that some cookies are given an elevated priority even when the attribute is not specified.

This is added to the cookie string like any other attribute with a value:
;Priority=High;
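
For example, a hypothetical Set-Cookie header using Priority (the cookie name and value are placeholders) might look like:

Set-Cookie: prefs=dark; Secure; Priority=High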

NOTE: this appears to be implemented only in Google Chrome.


Google Federated Learning of Cohorts (FLoC) – opt-out

Google Chrome 89 and other browsers based upon it, such as Chromium-based Edge, have introduced a new capability known as FLoC. This approach aims to remove the need for third-party cookies by exposing a group (cohort) identifier to websites rather than identifying the individual user. While FLoC should allow users to remain more anonymous, since advertisers only receive a group identifier, it would not be difficult to combine that identifier with an IP address or other device-fingerprinting signals to track the individual.

As a web user, you would need to take one or more of the following approaches to avoid this:
1. Use a browser without FLoC support. Hopefully, an opt-out will be added to browser configuration menus, similar to DNT.
2. Use a browser plugin (or other software/proxy) to block FLoC.

As a web developer, you can opt your site out of FLoC cohort calculation by sending the following HTTP response header:


Permissions-Policy: interest-cohort=()
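
If you manage your own Apache 2.x configuration, a minimal sketch of setting this header (assuming mod_headers is enabled) would be:

Header always set Permissions-Policy "interest-cohort=()"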

If you really want to see the data, the following JavaScript will expose it:

// document.interestCohort() is only available in browsers with FLoC enabled
(async () => {
  const { id, version } = await document.interestCohort();
  console.log('FLoC ID:', id);
  console.log('FLoC version:', version);
})();


Enabling HTTP/3 (QUIC) in browsers for improved network performance

Back in 2009, Google introduced SPDY as a way to improve HTTP over TCP, and it became the basis for HTTP/2, standardized in 2015. HTTP/3 improves upon this further by replacing TCP with QUIC over UDP, removing TCP head-of-line blocking.

Firefox: currently disabled by default in version 85; to enable it, open about:config and set network.http.http3.enabled = true

iOS Safari 14+: currently disabled by default, but can be enabled under Settings > Safari > Advanced > Experimental Features > HTTP/3

Chrome/Chromium: versions 88+ enable it by default.

Chromium Edge: as new versions are based upon Chromium, support should follow Chrome.

MSIE: HTTP/3 was never implemented and never will be.
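
To check whether a given server negotiates HTTP/3 from the command line, a curl build compiled with HTTP/3 support can be used (the hostname below is a placeholder):

curl --http3 -sI https://www.example.com/ | head -n 1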


HSTS preload

If you have already started using HSTS to force users onto your HTTPS website, 'preload' is a simple next step, as it only requires adding the keyword to the existing header.

Once done, you can either wait for your site to be discovered (which can take a long time, or may never happen for less popular websites) or, ideally, submit your hostname for inclusion in the preload lists shipped with many modern browsers. The advantage is that your users will never make a single request to your HTTP site; they will be directed to HTTPS automatically.

An HTTP Header example:

Strict-Transport-Security: max-age=63072000; includeSubDomains; preload

Apache2 configuration example:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
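
If you are using nginx instead, an equivalent sketch (placed inside the relevant server block, assuming your build supports the always parameter) would be:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;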


“Referrer-Policy” HTTP Header

A relatively new HTTP header that is supported by most modern browsers (except MSIE) is "Referrer-Policy". There have been previous attempts to implement similar protections through the 'rel' (or 'rev') attributes on links to external websites. This header takes a different approach and prevents leaking of internal URLs, and in some cases parameters, to external websites. This is important from a security perspective, as your page URLs might contain sensitive information that would otherwise be inadvertently shared with an external site.

Clearly, you’ll need to determine your own level of security based upon your needs. For example, ‘no-referrer’ is the strictest value and prevents the browser from sending the ‘Referer’ (sic) header even to your own website’s pages.

Example header values:

Referrer-Policy: no-referrer
Referrer-Policy: no-referrer-when-downgrade
Referrer-Policy: origin
Referrer-Policy: origin-when-cross-origin
Referrer-Policy: same-origin
Referrer-Policy: strict-origin
Referrer-Policy: strict-origin-when-cross-origin
Referrer-Policy: unsafe-url

Implementation can be accomplished in many ways, the simplest being an addition to your HTTP server configuration similar to the one shown below for Apache 2.x:

Header always set Referrer-Policy strict-origin
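
The policy can also be set per document with an HTML meta tag, for example:

<meta name="referrer" content="strict-origin">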


Content-Security-Policy: block-all-mixed-content

If you are running a secure website, it’s a good idea to prevent non-secure assets from being included on your pages. Mixed content can creep in through the use of a content management system, or even through website vulnerabilities. A simple change to your HTTP headers will help browsers defend against it.


Content-Security-Policy: block-all-mixed-content
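
For Apache 2.x, a minimal sketch of setting this header (assuming mod_headers is enabled) would be:

Header always set Content-Security-Policy "block-all-mixed-content"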

Most modern browsers, except MSIE, currently support this approach.
– Firefox 48+


Content-Security-Policy: upgrade-insecure-requests;

As the web has been shifting to HTTPS for security and performance reasons, there are many methods to migrate users. One simple method is the Content-Security-Policy header.


Content-Security-Policy: upgrade-insecure-requests;
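
As above, a minimal Apache 2.x sketch (assuming mod_headers is enabled) would be:

Header always set Content-Security-Policy "upgrade-insecure-requests"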

Most modern browsers, except MSIE, currently support this approach.
– Chrome 43+


Squid3 Proxy on Ubuntu

Using a personal proxy server can be helpful for a variety of reasons, such as:

  • Performance – network speed and bandwidth
  • Security – filtering and monitoring
  • Debugging – to trace activity

Here are some simple steps to get you started; obviously, you will need to further “harden” the configuration to make it production-ready!


sudo apt-get install squid3


cd /etc/squid3/
sudo mv squid.conf squid.orig
sudo vi squid.conf

NOTE: the following configuration works, but will likely need to be adapted for your specific usage.


http_port 3128
visible_hostname proxy.EXAMPLE.com
auth_param digest program /usr/lib/squid3/digest_file_auth -c /etc/squid3/passwords
#auth_param digest program /usr/lib/squid3/digest_pw_auth -c /etc/squid3/passwords
auth_param digest realm proxy
auth_param basic credentialsttl 4 hours
acl authenticated proxy_auth REQUIRED
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
#acl SSL_ports port 443
#http_access deny to_localhost
#http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow authenticated
via on
forwarded_for transparent

Create the users and passwords:

sudo apt-get install apache2-utils   # required for htdigest
sudo htdigest -c /etc/squid3/passwords proxy user1
sudo htdigest /etc/squid3/passwords proxy user2

Open up firewall port (if enabled):

sudo ufw allow 3128

Restart the server and tail the logs:

sudo service squid3 restart
sudo tail -f /var/log/squid3/access.log
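
To verify the proxy and digest authentication from another machine, a quick check with curl (the hostname and credentials below are placeholders) might look like:

curl -x http://proxy.EXAMPLE.com:3128 --proxy-digest -U user1 -I https://www.example.com/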

OTHER FILE LOCATIONS:

/var/spool/squid3
/etc/squid3

MONITORING with Splunk…

sudo /opt/splunkforwarder/bin/splunk add monitor /var/log/squid3/access.log -index main -sourcetype Squid3
sudo /opt/splunkforwarder/bin/splunk add monitor /var/log/squid3/cache.log -index main -sourcetype Squid3


HTML5 Link Prefetching

Link prefetching is used to identify a resource that might be required by the next navigation, and that the user agent SHOULD fetch, such that the user agent can deliver a faster response once the resource is requested in the future.


<link rel="prefetch" href="http://www.example.com/images/sprite.png" />

<link rel="prefetch" href="/images/sprite.png" />

Supported in:

  • MSIE 11+/Edge
  • Firefox 3.5+ (for HTTPS)
  • Chrome
  • Opera
