Citrix XenServer 5.5 – First impressions

I had decided to try out Citrix XenServer at home, since I work a lot with VMware during my working week and felt like a change. It all seemed well… that is, until I had to deal with snapshots. I suppose I have taken for granted that almost all other virtual host software provides a simple “revert to snapshot” option. From what I can tell this is totally absent from Citrix XenServer 5.5.

There are comments from within Citrix that they are working on this feature, but it has yet to come to fruition. Unfortunately, with the type of work I do (testing, proof of concept, etc.) this is a deal breaker. Looks like I’m going to have to try out vSphere at home (currently only using 3.5 at work).

At least vSphere has thin provisioning now, so there’s nothing (feature-wise) I’ll be missing from Citrix’s XenServer.

Update: 28/07/2009

I’ve actually got no choice but to stay with Citrix Xen for now; it looks like the SATA controller and network chip on my motherboard are not supported by either 3.5 U4 or vSphere. Doh! (I should have checked the HCL, but sometimes I just like to try my luck.)

Troubleshooting – Time Slider (ZFS snapshots)

1. Snapshot complains about no access to cron

I came across this problem after playing with crontab. It looks like the ZFS snapshot service uses an account called “zfssnap”, and if it doesn’t have access to cron it will have issues creating/checking snapshots. Check the file /etc/cron.d/cron.allow and ensure that “zfssnap” is listed in there. The issue I had looked like this in the log… (check the logs via the log file viewer)

Checking for non-recursive missed // snapshots  rpool

Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected

crontab: you are not authorized to use cron.  Sorry.

crontab: you are not authorized to use cron.  Sorry.

Error: Unable to add cron job!

Moving service to maintenance mode.

The actual crontab lives in the /var/spool/cron/crontabs/zfssnap file. (Don’t edit this manually.)
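If you want to script the cron.allow check, something like the following sketch works. (The allowed_in_cron helper is just a name I’ve made up for illustration; /etc/cron.d/cron.allow is the Solaris default path.)

```shell
# Hypothetical helper: returns success if the given user appears
# in the given cron.allow file (one username per line).
allowed_in_cron() {
  # $1 = username, $2 = path to the cron.allow file
  grep "^$1\$" "$2" >/dev/null 2>&1
}

# Check the zfssnap account against the default cron.allow path.
if allowed_in_cron zfssnap /etc/cron.d/cron.allow; then
  echo "zfssnap may use cron"
else
  echo "zfssnap is missing - add a 'zfssnap' line to /etc/cron.d/cron.allow"
fi
```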

Restart the services by clearing the maintenance status, then enable or restart them if required, like so…

svcadm clear auto-snapshot:frequent

svcadm enable auto-snapshot:frequent

Check that all the ZFS snapshot services are running as expected…

svcs -a | grep snapshot

online         22:26:12 svc:/system/filesystem/zfs/auto-snapshot:weekly

online          9:06:36 svc:/system/filesystem/zfs/auto-snapshot:monthly

online          9:11:23 svc:/system/filesystem/zfs/auto-snapshot:daily

online          9:12:00 svc:/system/filesystem/zfs/auto-snapshot:hourly

online          9:23:57 svc:/system/filesystem/zfs/auto-snapshot:frequent

2. Snapshot fails with a “dataset is busy” error

Seen something similar to this in the logs? …

Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected

Last snapshot for svc:/system/filesystem/zfs/auto-snapshot:frequent taken on Sun Mar 15 22:26 2009

which was greater than the 15 minutes schedule. Taking snapshot now.

cannot create snapshot ‘rpool/ROOT/opensolaris@zfs-auto-snap:frequent-2009-03-16-09:06’: dataset is busy

no snapshots were created

Error: Unable to take recursive snapshots of rpool/ROOT@zfs-auto-snap:frequent-2009-03-16-09:06.

Moving service svc:/system/filesystem/zfs/auto-snapshot:frequent to maintenance mode.

Here is a bit from this site – “This problem is being caused by the old (IE: read non-active) boot environments not being mounted and it is trying to snapshot them. You can’t ‘svcadm clear’ or ‘svcadm enable’ them because they will still fail.”

Apparently this is a bug with the ZFS snapshots affecting rpool/ROOT/opensolaris-style datasets. Anyhow, to fix it I’ve just used a custom setup in Time Slider: clear all the services set to “maintenance”, then launch time-slider-setup and configure it to exclude the problem datasets.

Update: As per John’s comment below, you can disable the snapshots on the offending ZFS file system using the following command…

zfs set com.sun:auto-snapshot=false rpool/ROOT

As above, to clear the “maintenance” status on the affected services run the following commands…

svcadm clear auto-snapshot:hourly

svcadm clear auto-snapshot:frequent

Now run this to ensure all the SMF services are running without issue…

svcs -x

If all is well you will get no output.

ZFS – Creating snapshots

There are some funky ways of modifying the default “Time Slider” services to do the work for you, but I like to be a bit more hands-on, generally so I know what is happening in the background. Time Slider can also be overkill, creating snapshots every 15 minutes if not configured properly.

On a side note, I’ve yet to get my head around the SMF stuff properly… Anyhow, onto creating snapshots.

I’ve decided to snapshot both my unprotected and protected zpools.

I’ve created three scripts; this is what my snapdaily.sh script looks like:

#!/bin/sh
# Remove the previous rolling snapshots, then recreate them.
zfs destroy -r protected@daily
zfs destroy -r unprotected@daily
zfs snapshot -r protected@daily
zfs snapshot -r unprotected@daily
echo "Daily ZFS snapshot done" # output saved as part of the crontab job

The other two scripts are similar, but for weekly and monthly snapshots. The name of the snapshot is the part after the @ symbol, as above, and the -r switch makes it recursive, so all ZFS file systems beneath the named one also have snapshots created. The destroy commands remove the previous rolling snapshot so it can be recreated (they will complain harmlessly on the very first run, when no snapshot exists yet).
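As a sketch, the weekly version could be written out like this. The file name snapweekly.sh and the /tmp location are assumptions for illustration; the zfs lines simply mirror the daily script with @weekly instead of @daily.

```shell
# Write out a hypothetical snapweekly.sh that mirrors snapdaily.sh,
# using a rolling @weekly snapshot name instead of @daily.
cat > /tmp/snapweekly.sh <<'EOF'
#!/bin/sh
# Remove last week's rolling snapshots, then recreate them.
zfs destroy -r protected@weekly
zfs destroy -r unprotected@weekly
zfs snapshot -r protected@weekly
zfs snapshot -r unprotected@weekly
echo "Weekly ZFS snapshot done"
EOF
chmod +x /tmp/snapweekly.sh
```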

Next I’ve saved this script and added it to crontab (as root, since zfs commands are usually restricted):

su – enter the root password; you are now root.

crontab -e – edit root’s crontab file (use vi to insert the following line).

0 5 * * * /protected/snapdaily.sh – this will run every day at 5am.

Run the script manually first to see that it works, then check with this command:

zfs list -t snapshot – you should see the above snapshots.

Repeat for the weekly and monthly scripts as above…
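Pulled together, the crontab could end up looking something like this. Only the daily line comes from above; the weekly/monthly times and script names are my guesses, so adjust to taste.

```shell
# Illustrative root crontab entries: daily at 5am, weekly on Sunday
# at 5:30am, monthly on the 1st at 6am (times/paths are assumptions).
ENTRIES='0 5 * * * /protected/snapdaily.sh
30 5 * * 0 /protected/snapweekly.sh
0 6 1 * * /protected/snapmonthly.sh'
echo "$ENTRIES"
```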

I have also disabled all the automatic snapshots…

svcs -a | grep snapshot – should show you all the ZFS snapshot services.

svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily – repeat for each schedule to disable them.

You can also turn off time slider via the GUI if you have turned it on.