zfs share for Time Machine backups

I’ve tried using a network share for Time Machine before, but due to the way Time Machine works it eventually consumed all of the spare space available on that share.

bring on zfs quotas… (the drive in my Mac mini is 100GB, so 200GB should be enough for a few variations)

zfs set quota=200G unprotected/timemachine

This adds an artificial limit to your ZFS filesystem, making sure that Time Machine does not consume more than 200 gigabytes of space.
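If you want to confirm the limit took effect, something like this should do it (same unprotected/timemachine dataset as above):

zfs get quota unprotected/timemachine
zfs list -o name,used,avail,quota unprotected/timemachine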

NFS is your best bet; I’ve created an NFS share like so…

zfs create unprotected/timemachine

zfs set sharenfs=on unprotected/timemachine
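On the Mac side you then need to tell Time Machine it may use network volumes and mount the share. Roughly like the following, though this is a sketch: the server name is a placeholder, it assumes the dataset is exported at its default mountpoint of /unprotected/timemachine, and the defaults key only applies to older versions of OS X…

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
sudo mkdir /Volumes/timemachine
sudo mount -t nfs yourserver:/unprotected/timemachine /Volumes/timemachine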



Why is UNRAID cool for home?

I’ve played with many storage technologies at home, ZFS being one of my favourites when it comes to performance. But I’ve been looking for something that suits a typical home environment, where power usage and capacity are usually more important than performance. That’s where UNRAID comes in…

UNRAID gives me these advantages:

  • Different sized disks in a single pool (only requires the largest disk as parity)
  • Files are distributed over all the disks, so even if you lost more than a single drive you would still have some of your data. Note: with the parity drive you can handle a single drive failing without losing anything, sorta like RAID4 (non-distributed parity).
  • Lower power usage: since files are stored on specific disks, not all the disks need to spin up to give you your file.
  • Runs off a USB stick, so no large operating system install is required.
  • A CrashPlan module can be installed to provide backup options.

http://lime-technology.com/

solaris – storage server

http://p2v.it/2010/04/15/a-next-generation-multiprotocol-storage-for-your-homelab-on-a-budget-part-1/

Set up the iSCSI target using COMSTAR:
pkg install -v storage-server
pkg install pkg:/SUNWiscsit
svcadm enable stmf
svcadm enable svc:/network/iscsi/target:default
itadm create-target

reboot -- -r
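After the reboot it’s worth checking that the STMF framework and target service actually came online and that the target exists; these are the standard SMF/COMSTAR status queries:

svcs stmf svc:/network/iscsi/target:default
itadm list-target -v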

Create the iSCSI store:
zfs create -V 100G DataPool/TestDatastore1
sbdadm create-lu /dev/zvol/rdsk/DataPool/TestDatastore1
stmfadm create-hg ESX4-group
stmfadm add-hg-member -g ESX4-group wwn.2100001b329711bd wwn.2101001b32b711bd
stmfadm add-view -h ESX4-group -n 0 600144F03EBEC50000004BA86E460001
stmfadm list-lu -v
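To sanity-check the mapping before pointing the ESX hosts at it, list the host group and the view back (the GUID is the one sbdadm reported above):

stmfadm list-hg -v
stmfadm list-view -l 600144F03EBEC50000004BA86E460001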

zfs: accidentally adding cache drive to raidz zpool

http://forums.freebsd.org/showthread.php?t=23127

Unfortunately, if you have accidentally added a single drive to your raidz pool at the top level, there is no way to just remove the non-redundant disk. Your pool is now dependent on this disk.

If you want your pool to be just raidz vdevs, then you will need to back up your data, destroy the pool, create a new pool, and restore your data.
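A rough sketch of that migration using zfs send/receive; pool, dataset and device names here are made up for illustration, and you obviously need a second pool with enough room for the intermediate copy:

# snapshot everything and copy it to a holding pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive backuppool/tank-copy
# recreate the pool with only the raidz vdev (leave the stray disk out)
zpool destroy tank
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
# bring the data back
zfs send -R backuppool/tank-copy@migrate | zfs receive tank/restored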

There is no current way to remove a top-level vdev from a pool.
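For reference, the mistake is usually just a missing “cache” keyword (zpool normally warns about the mismatched replication level, so -f tends to be involved). Device names below are placeholders:

# what probably happened: 'cache' left off, -f overriding the replication warning,
# creating a new non-redundant top-level vdev
zpool add -f tank c2t0d0
# what was intended: an L2ARC cache device, which can be removed again later
zpool add tank cache c2t0d0
# -n does a dry run and prints the resulting layout without touching the pool
zpool add -n tank cache c2t0d0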

4k sector hard drives and zfs

I hit this problem recently. One of my disks died in my raidz, so I ran down to the store and grabbed a replacement WD10EARS (Western Digital 1TB Green) drive.

BUT…

The one thing the store didn’t mention to me is the new 4K sector size on the drive. I guess they assume most people run Windows (though the alignment issue also affects XP). See these posts…

http://blog.temeletry.co.uk/2010/05/wd-green-wd10ears/

Unfortunately they really don’t work as well as you’d like in a server :(

  • They come with a 5-second head-park setting that causes them to park their heads if they have been left idle for more than 5 seconds. As it takes a second or two to recover from this, it can result in a very laggy experience during interactive sessions.
  • They do not have NCQ or any form of command queueing/optimisation. This means that (on FreeBSD at least) you are stuck with the LOOK elevator. In particular this was noticed when doing sequential reads and writes (think dump|restore, tar|untar, etc.) and interactive tasks simultaneously.
  • They really suck with FreeBSD and ZFS…

http://community.wdc.com/t5/Desktop/Poor-performace-in-OpenSolaris-with-4K-sector-drive-WD10EARS-in/m-p/21132

While the other 512-byte sector HDDs were reading/writing at 30MB/s sustained, this EARS model did not exceed the 1MB/s barrier.

I know for sure that this is related to the 512-byte sector firmware emulation, because the disk works perfectly well if I partition it in a 4k-sector alignment.

The thing is that even aligned that way, performance in a ZFS RAIDZ configuration is still very poor, because RAIDZ uses a dynamic stripe size.

The bottom line here is that folks like me, that use different versions of Unix, need the firmware to present the disk as a 4K-sector disk to unleash the full potential of the technology. The OS is already prepared to support that sector size, no need for emulation here.

http://opensolaris.org/jive/thread.jspa?threadID=125702

From some preliminary testing that I have done… the WD20EARS (2TB Advanced Format drive) actually presents emulated 512-byte sectors to the host OS.

The drive documentation indicates that jumpers 7-8 should be enabled if the OS does not support Advanced Format drives – the drive still presents 512-byte sectors.

I have attempted to raise a support ticket querying this, and asking how one can disable 512-byte sector emulation in the drive (perhaps through a firmware upgrade), but I have not received any response to date.

Hopefully, if enough people raise support tickets, WD may release firmware that allows the drive to natively present 4K blocks. Other documentation mentions several other jumper combinations – none of them seem to make the drive present 4K blocks either.

Perhaps someone internal to Sun who has a relationship with WD may be able to shed some light on this? It would be fantastic to find out that I was just doing something wrong -> then I could get the drives to be seen on 32-bit systems (i.e. our embedded kit for OpenSolaris, velitium).

Tested using b133 (64-bit Intel).

Try to avoid the Green drives with ZFS for now. Remember to do your research before you buy a bunch of disks. I was caught off guard by this small change (it works fine in Win7 etc.), and it kills performance in ZFS. Ouch.
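If you do end up stuck with one of these drives, the common workaround at the time, on FreeBSD at least, was to force the pool onto 4K alignment when it is created. A rough sketch using the gnop trick (device names are placeholders; the .nop device only needs to exist while the pool is created):

# make the drive advertise 4096-byte sectors to ZFS
gnop create -S 4096 /dev/ada1
# create the pool against the .nop device so it picks ashift=12
zpool create tank raidz ada1.nop ada2 ada3
# confirm the pool ended up with ashift=12
zdb | grep ashift
# the nop wrapper can then be dropped and the pool re-imported
zpool export tank
gnop destroy ada1.nop
zpool import tank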