Caching Forged Alliance Forever downloads

In addition to this post here about caching Steam games…

I’ve run a few LAN parties where downloading updates from the FAF servers can take a while. This is no fault of the FAF servers; it’s just that I’ll have up to 10 people hitting the same mod, and we all sit there watching progress bars as it downloads.

Solution: modify the steamcache container image to also cache FAF downloads.
Added benefit: it takes load off the FAF servers.

If you want to give it a go, here’s all you need to do (you’ll need a bit of experience with Docker/containers):

  • download and run the steamcache/steamcache container. This is effectively a reverse proxy running nginx. You can download the image from here: https://hub.docker.com/r/steamcache/steamcache/ (there’s a rough sketch of all the steps after this list)
  • modify the /etc/nginx/sites-available/steamcache.conf file in the container. Add the following directly below the existing “location /depot/” entry (for Steam) to cache all /faf/ URI requests:
location /faf/ {
    try_files $uri @mirror;
    access_log /data/logs/access.log steamcache-local;
}
  • redirect local DNS queries for content.faforever.com to your local server (above). I entered a host override on my local DNS (pfSense: DNS Resolver)
  • profit
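For reference, here’s a rough sketch of those steps end to end. The ports, host paths and IP address are assumptions for illustration – adjust them for your own network and check the steamcache image docs for the current options.

# 1. Run the steamcache container, persisting cache and logs to the host
docker run -d --name steamcache \
  -p 80:80 \
  -v /srv/steamcache/cache:/data/cache \
  -v /srv/steamcache/logs:/data/logs \
  steamcache/steamcache:latest

# 2. Add the /faf/ location block inside the container, then reload nginx
docker exec -it steamcache /bin/bash
#    ...edit /etc/nginx/sites-available/steamcache.conf as above, then:
nginx -s reload

# 3. Point clients at the cache. On pfSense this is a Host Override under
#    Services > DNS Resolver; the equivalent raw unbound entry would be:
#    local-data: "content.faforever.com. A 192.168.1.50"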

There is a built-in script, /scripts/watchlog.sh, that tails access.log and highlights any cache HITs in green (which is why I’ve left the logging classified as “steamcache-local”).
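If you’d rather roll your own, something like this one-liner gives a similar effect (assuming GNU grep is available; the log path matches the access_log directive above):

tail -f /data/logs/access.log | grep --color=always -E 'HIT|$'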

SSL caching and redirects in Chrome

While setting up an SSL reverse proxy using Let’s Encrypt and nginx, I had a few troubles testing via Google’s Chrome browser.

  • Chrome caches some SSL responses, which can be cleared by deleting your browsing data via Settings or Ctrl+Shift+Del.
  • Chrome also caches HTTP -> HTTPS redirects. You can see these by going to chrome://net-internals and selecting “HSTS” from the drop-down; enter the domain name under “Delete domain” and press the Delete button.

The easiest thing to do during testing is to use incognito mode; that way you won’t need to clear the cache every time you change config or re-issue certificates.

bye bye http – hello letsencrypt

I’ve been a fan of HTTP and caching since my dial-up modem.

Fast-forward to today, where Google returns HTTPS sites higher in its search results. It could also be argued that you can’t trust a man-in-the-middle HTTP cache any longer either, right? :)

SSL certs have traditionally been expensive, but say welcome to Let’s Encrypt, which provides a free way of securing all of your websites. If you haven’t heard of it, check it out here – https://letsencrypt.org/

As always, I’ve implemented my Let’s Encrypt trial via Docker. The container image I have been using has been put together by the linuxserver guys – https://hub.docker.com/r/linuxserver/letsencrypt/
(I use a few of their container images; they seem legit.)
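For the record, spinning it up looks roughly like the following. The environment variables and paths below are illustrative; the exact variable names and validation options are documented on the image’s Docker Hub page, so check there before copying.

docker run -d --name letsencrypt \
  --cap-add=NET_ADMIN \
  -p 80:80 -p 443:443 \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e EMAIL=you@example.com \
  -e URL=example.com \
  -e SUBDOMAINS=www \
  -v /srv/letsencrypt/config:/config \
  linuxserver/letsencrypt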

This container image comes ready to roll with nginx built in, which can act as a reverse proxy to your unsecured websites at the back-end. I’ll be testing it for the next few days to see how it stacks up, but so far so good. Nginx is fast, so it’s a good transition if only to offload all my SSL traffic. If all goes well, it will be the end of the squid reverse proxy I have happily used for many years.
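The reverse-proxy side is just a normal nginx server block in the container’s /config tree. A minimal sketch, assuming a back-end site at 192.168.1.20 and the certificate paths the container uses (verify both against your own setup):

server {
    listen 443 ssl;
    server_name www.example.com;

    # certificates issued by the container's Let's Encrypt client
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    # hand the decrypted traffic to the unsecured back-end site
    location / {
        proxy_pass http://192.168.1.20:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}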

In the past, HTTP had the performance, certs were too difficult (but are they really?) and expensive to implement, and I was a fan of my sites being cached. New times are here: SSL (TLS) rules supreme.

On another note, HTTP/1 sites are dwindling. SPDY didn’t last long, but apparently some of it has been built into HTTP/2 – exciting!

Check out these links for some interesting reading on performance –
https://samrueby.com/2015/01/26/why-is-https-faster-than-http/
http://www.httpvshttps.com/

Handy link for testing whether your site is using HTTP/2 – https://www.ssllabs.com/ssltest/

zfs : accidentally adding cache drive to raidz zpool

http://forums.freebsd.org/showthread.php?t=23127

Unfortunately, if you have accidentally added a single drive to your raidz pool at the top level, there is no way to simply remove the non-redundant disk. Your pool is now dependent on this disk.

If you want your pool to contain just raidz vdevs, you will need to back up your data, destroy the pool, create a new pool, and restore your data.

There is no current way to remove a top-level vdev from a pool.
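To illustrate the difference (the pool and device names here are made up): the intended command adds the disk as a removable L2ARC cache device, while the accidental one grafts it in as a permanent top-level vdev.

# intended: add the disk as an L2ARC cache device (removable later)
zpool add tank cache ada4

# accidental: adds ada4 as a new single-disk top-level vdev;
# zpool warns about the mismatched replication level, but -f overrides it
zpool add -f tank ada4

# cache and log devices can be removed, top-level vdevs cannot
zpool remove tank ada4    # only works while ada4 is a cache/log device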

Squid – optimizing cache hits

The first place to look for increasing your hit ratio is the refresh_pattern parameter within the squid.conf file.

Note: the following applies to squid 3.0 and higher only…

I have found a really good page here – http://linux.com/archive/feature/153221 – that explains some good setups if you wish to increase your cache hits…

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 40% 40320
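As a quick reminder of what those fields mean (all times are in minutes):

# refresh_pattern [-i] regex  min  percent  max  [options]
#   min     - minimum time an object without an explicit expiry stays fresh
#   percent - freshness as a percentage of the object's age since last modification
#   max     - upper bound on freshness regardless of percent
#   -i      - makes the regex case-insensitive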

This is an example of a site that you may wish to heavily cache…

refresh_pattern -i youtube.com/.* 10080 90% 43200

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube

# REMOVE these lines from squid.conf

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY


# Break HTTP standard for flash videos. Keep them in cache even if asked not to.
refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expire ignore-private

# Apparently youtube.com use 'Range' requests
# - not seen, but presumably when a video is stopped for a long while then resumed, (or fast-forwarded).
# - convert range requests into a full-file request, so squid can cache it
# NP: BUT slows down their _first_ load time.
quick_abort_min -1 KB

# Also videos are LARGE; make sure you aren't killing them as 'too big to save'
# - squid defaults to 4MB, which is too small for videos and even some sound files
maximum_object_size 4 GB

# Let the clients favorite video site through with full caching
# - they can come from any of a number of youtube.com subdomains.
# - this is NOT ideal, the 'merging' of identical content is really needed here
acl youtube dstdomain .youtube.com
cache allow youtube

# kept to demonstrate that the refresh_patterns involved above go before this.
# You may be missing the CGI pattern, it will need to be added if so.
refresh_pattern -i (/cgi-bin/|\?)   0   0%      0
refresh_pattern .                   0   0%   4320