1. snapshot complains about no access to cron
I came across this problem after playing with crontab. It looks like the ZFS snapshot service runs under an account called “zfssnap”, and if that account doesn’t have access to cron it will have trouble creating / checking snapshots. Check the file /etc/cron.d/cron.allow and ensure that “zfssnap” is listed in there. The issues I had looked like this in the log… (check the logs via the log file viewer)
Checking for non-recursive missed // snapshots rpool
Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected
crontab: you are not authorized to use cron. Sorry.
crontab: you are not authorized to use cron. Sorry.
Error: Unable to add cron job! Moving service to maintenance mode.
The actual crontab lives in the /var/spool/cron/crontabs/zfssnap file. (don’t edit this manually)
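To confirm the entry from a root shell, something like this should do the trick (just a quick sketch, assuming the standard /etc/cron.d/cron.allow location mentioned above)…

grep zfssnap /etc/cron.d/cron.allow || echo zfssnap >> /etc/cron.d/cron.allow

If the grep finds nothing, the echo appends the account to the file.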
Restart the services by clearing the maintenance status, then if required enable or restart them like so…
svcadm clear auto-snapshot:frequent
svcadm enable auto-snapshot:frequent
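If more than one instance has dropped into maintenance, a quick loop over the standard instance names saves some typing (just a sketch; svcadm clear will complain harmlessly about any instance that isn’t actually in maintenance)…

for i in frequent hourly daily weekly monthly; do
  svcadm clear auto-snapshot:$i
  svcadm enable auto-snapshot:$i
done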
Check that all the ZFS snapshot services are running as expected…
svcs -a | grep snapshot
online 22:26:12 svc:/system/filesystem/zfs/auto-snapshot:weekly
online  9:06:36 svc:/system/filesystem/zfs/auto-snapshot:monthly
online  9:11:23 svc:/system/filesystem/zfs/auto-snapshot:daily
online  9:12:00 svc:/system/filesystem/zfs/auto-snapshot:hourly
online  9:23:57 svc:/system/filesystem/zfs/auto-snapshot:frequent
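If one of them still shows “maintenance”, the SMF log for that instance usually explains why. svcs -L prints the log file path, so something like this will show the tail end of it…

tail "$(svcs -L auto-snapshot:frequent)"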
2. snapshot fails with dataset busy error
Seen something similar to this in the logs? …
Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected
Last snapshot for svc:/system/filesystem/zfs/auto-snapshot:frequent taken on Sun Mar 15 22:26 2009
which was greater than the 15 minutes schedule. Taking snapshot now.
cannot create snapshot 'rpool/ROOT/opensolaris@zfs-auto-snap:frequent-2009-03-16-09:06': dataset is busy
no snapshots were created
Error: Unable to take recursive snapshots of rpool/ROOT@zfs-auto-snap:frequent-2009-03-16-09:06.
Moving service svc:/system/filesystem/zfs/auto-snapshot:frequent to maintenance mode.
Here is a bit from this site – “This problem is being caused by the old (IE: read non-active) boot environments not being mounted and it is trying to snapshot them. You can’t ‘svcadm clear’ or ‘svcadm enable’ them because they will still fail.”
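To see which old boot environments are sitting under rpool/ROOT and whether they are actually mounted, a quick zfs list helps (rpool/ROOT as in the log above)…

zfs list -r -o name,mounted,mountpoint rpool/ROOT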
Apparently this is a bug with the ZFS auto-snapshots on rpool/ROOT/opensolaris type datasets. Anyhow, to fix it I’ve just used a custom setup in Time Slider: clear all the services set to “maintenance”, then launch time-slider-setup and configure it to exclude the problem pools.
Update: As per John’s comments below, you can disable the snapshots on the offending ZFS filesystem using the following command…
zfs set com.sun:auto-snapshot=false rpool/ROOT
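To double-check that the property took effect (and which child datasets inherit it), zfs get can list it recursively; rpool/ROOT here matches the example above…

zfs get -r com.sun:auto-snapshot rpool/ROOT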
As above, to clear the “maintenance” status on the affected services run the following commands…
svcadm clear auto-snapshot:hourly
svcadm clear auto-snapshot:frequent
Now run this to ensure all the SMF services are running without issue…
svcs -x
If all is well you will get no output.