bye bye http – hello letsencrypt

I’ve been a fan of HTTP and caching since my dial-up modem days.

Skip to today, where Google ranks HTTPS sites higher in its search results. It’s also arguable that you can no longer trust a man-in-the-middle HTTP cache either, right? :)

SSL certs have traditionally been expensive, but say welcome to Let’s Encrypt, which provides a free way of securing all of your websites. If you haven’t heard of it, check it out here – https://letsencrypt.org/

As always, I’ve implemented my Let’s Encrypt trial via Docker. The container image I have been using has been put together by the LinuxServer guys – https://hub.docker.com/r/linuxserver/letsencrypt/
(I use a few of their container images, they seem legit)

This container image comes ready to roll with Nginx built in, which can act as a reverse proxy to your unsecured websites at the back-end. I’ll be testing it for the next few days to see how it stacks up, but so far so good. Nginx is fast, so it’s a good transition if only to offload all my SSL traffic. If all goes well it will be the end of my Squid reverse proxy, which I have used happily for many years.
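For reference, here’s roughly how I spin it up – a minimal sketch only. The email address, domain and config path below are placeholders, and the image’s environment variables (EMAIL, URL, SUBDOMAINS etc.) may change over time, so check the Docker Hub page for the current docs:

docker run -d \
--name=letsencrypt \
--cap-add=NET_ADMIN \
-e PUID=99 -e PGID=100 \
-e TZ=Pacific/Auckland \
-e EMAIL=you@example.com \
-e URL=example.com \
-e SUBDOMAINS=www \
-p 80:80 -p 443:443 \
-v /mnt/appdata/letsencrypt:/config \
linuxserver/letsencrypt

Once it’s up, the Nginx configs land under the /config volume, which is where you add the reverse proxy entries for your back-end sites.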

In the past HTTP had the performance edge, certs were too difficult (but are they really?) and too expensive to implement, and I was a fan of my sites being cached. New times are here: SSL (TLS) reigns supreme.

On another note, HTTP/1.x sites are dwindling, and while SPDY didn’t last long, apparently some of it has been built into HTTP/2 – exciting!

Check out these links for some interesting reading on performance –
https://samrueby.com/2015/01/26/why-is-https-faster-than-http/
http://www.httpvshttps.com/

Handy link for testing if your site is using HTTP/2 – https://www.ssllabs.com/ssltest/
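You can also check from the command line if your curl build supports HTTP/2 (the domain below is a placeholder):

curl -sI --http2 https://example.com | head -1

If the first response line comes back as HTTP/2 200, you’re good.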

steamcache for gaming

Assuming you have Docker running at home, check out these three Docker projects – one is the cache (powered by Nginx), one is the DNS service (which intercepts Steam calls), and one is an SNI proxy (which passes HTTPS traffic through). There’s a quick sketch of spinning them up after the list.

  1. https://hub.docker.com/r/steamcache/steamcache
  2. https://hub.docker.com/r/steamcache/steamcache-dns
  3. https://hub.docker.com/r/steamcache/sniproxy
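Roughly how I run them – a sketch only, based on the environment variables documented on the Docker Hub pages at the time; the IP address and cache paths are placeholders for your own setup:

# the cache itself (HTTP)
docker run -d --name steamcache -p 80:80 \
-v /mnt/cache/data:/data/cache \
-v /mnt/cache/logs:/data/logs \
steamcache/steamcache

# the DNS service, pointing Steam hostnames at the cache box
docker run -d --name steamcache-dns -p 53:53/udp \
-e STEAMCACHE_IP=192.168.1.10 \
steamcache/steamcache-dns

# SNI proxy to pass HTTPS traffic straight through
docker run -d --name sniproxy -p 443:443 steamcache/sniproxy

Point your LAN clients’ DNS at the box running steamcache-dns and you’re away.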

When you have all three up and running, you can confirm HITs to the cache by running the following against the steamcache container:

docker exec -it steamcache tail -f /data/logs/access.log

This is great if you have a gaming café or the occasional LAN at your house – all Steam games will be cached to local disk so that your internet pipe gets a break. ;)

Update 15/10/2018

  • Added SNI-Proxy. More and more HTTPS requests break the setup above if it isn’t in place – Steam is pushing some images / videos via HTTPS now.
  • Replaced steamcache/steamcache with steamcache/generic – it seems to have more active development around it.
    • watchlog.sh does not appear to be in the generic cache image yet.

Update 1/11/2018

  • Switched back to steamcache/steamcache. steamcache/generic was much slower (it re-validated downloads etc.), which isn’t needed for my small network. I’m after performance! :)

Unifi Video Controller NVR for UNRAID

If you run UNRAID at home and you have a UniFi camera system, then check out my latest container…

https://hub.docker.com/r/superd/unifi-nvr/

A dockerised UniFi NVR

Ubiquiti UniFi-Video-Controller (NVR) — Docker Container

Ubuntu 16.04, UniFi-Video-Controller 3.8.3

Setup / Quick Start

docker run \
--net=host \
-v <YOUR DATA DIR>:/var/lib/unifi-video/ \
-v <YOUR LOG DIR>:/var/log/unifi-video/ \
superd/unifi-nvr

(Note the -v syntax is host-path:container-path – your directories go on the left.)

Troubleshooting

UNRAID – If you have issues with MongoDB continually restarting, please check your data mapping. I have seen cases where user shares do not work correctly. Try mapping directly to a single disk or to the cache drive to ensure smooth operation.

e.g. instead of /mnt/user/usershare/nvr/data  use  /mnt/disk1/usershare/nvr/data

Update 4/02/2020:

I would recommend using this Docker image – https://hub.docker.com/r/pducharme/unifi-video-controller/

I have found that I no longer require a direct disk mapping as stated above; the built-in DB seems to work fine on user shares (Unraid 6.8.2).

New command line as follows:

docker create --name='unifi-video-controller' \
--net='host' \
-e TZ="Pacific/Auckland" \
-e HOST_OS="Unraid" \
-e 'PUID'='99' \
-e 'PGID'='100' \
-v '/mnt/user/appdata-unifi/unifi-video/':'/var/lib/unifi-video':'rw' \
-v '/mnt/user/appdata-unifi/unifi-video/videos/':'/usr/lib/unifi-video/data/videos':'rw' \
'pducharme/unifi-video-controller'

I use the ‘host’ network mapping just because there are a ton of ports and I’m lazy – use bridge mode with explicit port mappings if you want to reduce the footprint.

my first few docker containers

This is my first dabble at creating my OWN GitHub repo (for the code) and Docker Hub repo (for the orchestration / build).

https://hub.docker.com/u/superd/

I have created containers for the UniFi-Video NVR and Storj. I have yet to update the documentation on the Storj container.

I’m currently working on building containers for a news indexer, either Newznab or nZEDb. There is another Docker project called pynab which was an interesting idea, but it seems to have gone stale over time. It used to be an almost hands-free indexer that ran reasonably efficiently. I’m hoping to re-create something similar soon.

https://github.com/Murodese/pynab

Docker – Running Ubiquiti NVR and Plex

Bye bye virtual machines and their inherent OS bloat. Docker and containerization are here…

The trick to containerization is picking the right workload (as with most things). Think about your data, its state, where it lives, and whether there are any benefits to running it as a container.

Both Ubiquiti’s NVR and Plex’s media server run a base application; that app, within its own container, then maps to consistent data (which can live outside the instance).

The fun continues when you can update a container (updating the running application) while keeping the data intact at another location. This can really help with version control, where you can often just point the new container at the data and turn off the old instance. Rollback? Easy. Turn off the new container and bring the old one back.
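As a sketch of the pattern (the container names and data path here are placeholders for illustration):

# run the current version against persistent data on the host
docker run -d --name nvr-old --net=host \
-v /mnt/disk1/nvr/data:/var/lib/unifi-video \
superd/unifi-nvr

# upgrade: pull the new image, stop the old container, point a new one at the same data
docker pull superd/unifi-nvr
docker stop nvr-old
docker run -d --name nvr-new --net=host \
-v /mnt/disk1/nvr/data:/var/lib/unifi-video \
superd/unifi-nvr

# rollback: stop the new container and start the old one again
docker stop nvr-new
docker start nvr-old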

Of course, things are easier if you are running applications that do not change the data. Both the NVR and Plex only index and capture new data (in a consistent format), which makes moving between application versions much easier.

The nature of containerization means that the full power of the host is available to every container. This is different from regular virtualization, where each guest is limited to the virtual hardware it is assigned. There are of course challenges where resources are contended, but that can also happen in the latter (CPU scheduling, under- / over-allocation of resources).
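If contention does become a problem, Docker can cap a container’s share of the host. A quick sketch using the standard docker run flags – the limits here are arbitrary examples, shown against the official Plex image:

# limit the Plex container to 2 CPUs and 4GB of RAM
docker run -d --name plex --cpus=2 --memory=4g plexinc/pms-docker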

Availability also has to be built with containers in mind, with load balancers and instances across multiple hosts.

Update: this site has now been migrated from a VM to 2 x Docker containers – one for the MySQL back-end and one for the WordPress front-end. Containers can be linked, so the WordPress container can access the MySQL container via its own local port. Very cool.
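Something along these lines – a sketch using the official images and legacy container links (the passwords, names and paths are placeholders; these days a user-defined Docker network is the tidier option):

# MySQL back-end with its data kept on the host
docker run -d --name wp-db \
-e MYSQL_ROOT_PASSWORD=changeme \
-e MYSQL_DATABASE=wordpress \
-v /mnt/appdata/wp-db:/var/lib/mysql \
mysql:5.7

# WordPress front-end, linked to the database container
docker run -d --name wp-front \
--link wp-db:mysql \
-p 80:80 \
-v /mnt/appdata/wp-html:/var/www/html \
wordpress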