OpenSolaris – Managing Boot Environments with beadm

To get a list of all of your boot environments within OpenSolaris, use beadm like so…

beadm list 

Assuming you know the name of the boot environment that is causing issues, you can use the following command (with its name) to remove the ones you think are suspect. Note: you need to boot into the environment that you wish to keep first.

beadm destroy opensolaris-2

If you destroy a boot environment that is active on boot, the active-on-boot flag will be moved to another BE that is available. You can use the following command to set it to a specific boot environment:

beadm activate opensolaris-1
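Putting the steps above together, here is a sketch of the whole cycle (run via pfexec or as root; the BE names are the examples from this post, substitute your own):

```shell
# Sketch of the safe order of operations when removing a suspect BE.
beadm list                      # see what you have and which BE is active
beadm activate opensolaris-1    # make the known-good BE active on boot
# ...reboot into opensolaris-1, then remove the suspect BE:
beadm destroy opensolaris-2
```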

Remember to go into Package Manager and set the default repository as the preferred one again. Update Manager may still recommend updates from the dev repository; remove that repository to prevent this.

It is best not to delete a boot environment unless you have to.

OpenSolaris – where has my memory gone?

Use this command in 2008.11 to get details on where your memory is currently being used…

echo ::memstat | pfexec mdb -k

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     263992              1031   34%
ZFS File Data               91917               359   12%
Anon                       376867              1472   48%
Exec and libs               11484                44    1%
Page cache                   3387                13    0%
Free (cachelist)             9766                38    1%
Free (freelist)             24807                96    3%

Total                      782220              3055
Physical                   782219              3055

Note: ZFS will eat up most of your remaining RAM after a bit of use (its file cache grows into free memory).

“ZFS File Data” is the one to look at – if it is low, then most of your RAM is being eaten up in other areas of the system.

From the output above you can see that I have 3 GB installed. I have a few VirtualBox VMs running on my server, which show up as “Anon”; they are consuming almost half of my RAM.
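If you just want one figure out of that table, awk does the job. A small sketch: the here-doc below is the sample output from above; on a live system you would pipe `echo ::memstat | pfexec mdb -k` into awk instead.

```shell
# Pull the "ZFS File Data" row out of memstat output.
# Fields on that row: ZFS(1) File(2) Data(3) Pages(4) MB(5) %Tot(6).
awk '/^ZFS File Data/ { print "ZFS cache: " $5 " MB (" $6 ")" }' <<'EOF'
Kernel                     263992              1031   34%
ZFS File Data               91917               359   12%
Anon                       376867              1472   48%
EOF
```

With the sample table this prints `ZFS cache: 359 MB (12%)`.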

OpenSolaris – Updating OpenSolaris with the development repository

Want to keep your OpenSolaris on the “edge” of development? In my experience the “dev” repository mostly contains stable releases that are being held back until the next official release (every six months).

This is how you do it…

Launch Package Manager, choose “Settings”, then “Manage Repositories”.

Add the dev repository URL and give it a name.

NOTE: The name cannot contain any spaces!

Name : dev.opensolaris.org
URL : http://pkg.opensolaris.org/dev

If you make it the preferred repository, Update Manager will inform you of the new updates that are available.
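The same thing can be done from the command line. A sketch, assuming the pkg(1) `set-authority` subcommand from this era of OpenSolaris (`-O` sets the origin URL, `-P` makes it the preferred authority):

```shell
# Add the dev repository and make it preferred in one step.
# Remember: the name must not contain any spaces.
pfexec pkg set-authority -P -O http://pkg.opensolaris.org/dev dev.opensolaris.org
```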

Update: I have had a lockup with the CIFS service, as well as a problem with the keyboard working in VNC (if it wasn’t plugged in at boot), so I have decided to revert back to the non-development repository. See the beadm section above for how to roll back.

ZFS – Java management GUI

It hasn’t made it into OpenSolaris yet, but from what I’ve heard it should be making the jump over from Solaris 10 soon. Here is a screenshot of what it looks like…

[screenshot: the ZFS Java management GUI]

It should make managing ZFS a bit easier – though it is already quite easy. Perhaps if you have a lot of zpools / ZFS file systems it will look prettier. ;)

Troubleshooting – Time Slider (zfs snapshots)

1. snapshot complains about no access to cron

I came across this problem after playing with crontab. It looks like the ZFS snapshot service uses an account called “zfssnap”, and if it doesn’t have access to cron it will have issues creating / checking snapshots. Check the file /etc/cron.d/cron.allow and ensure that “zfssnap” is in there. The issues I had looked like this in the log (check the logs via the Log File Viewer)…

Checking for non-recursive missed // snapshots  rpool

Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected

crontab: you are not authorized to use cron.  Sorry.

crontab: you are not authorized to use cron.  Sorry.

Error: Unable to add cron job!

Moving service to maintenance mode.

The actual crontab lives in /var/spool/cron/crontabs/zfssnap (don’t edit this file manually).
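The cron.allow check can be scripted so it only appends the entry when it is missing. A sketch, run against a scratch copy so it can be tried without root; for real, point ALLOW at /etc/cron.d/cron.allow and run the append under pfexec:

```shell
# Ensure "zfssnap" is present in cron.allow, without duplicating it.
ALLOW=/tmp/cron.allow.demo
printf 'root\nadm\n' > "$ALLOW"                       # stand-in contents
grep -qx zfssnap "$ALLOW" || echo zfssnap >> "$ALLOW" # append only if missing
grep -x zfssnap "$ALLOW"                              # confirm the entry is there
```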

Restart the service by clearing the maintenance status, then, if required, enable or restart it like so…

svcadm clear auto-snapshot:frequent

svcadm enable auto-snapshot:frequent

Check that all the ZFS snapshot services are running as expected…

svcs -a | grep snapshot

online         22:26:12 svc:/system/filesystem/zfs/auto-snapshot:weekly

online          9:06:36 svc:/system/filesystem/zfs/auto-snapshot:monthly

online          9:11:23 svc:/system/filesystem/zfs/auto-snapshot:daily

online          9:12:00 svc:/system/filesystem/zfs/auto-snapshot:hourly

online          9:23:57 svc:/system/filesystem/zfs/auto-snapshot:frequent

2. snapshot fails with dataset busy error

Seen something similar to this in the logs? …

Checking for recursive missed // snapshots protected rpool/backup rpool/export rpool/ROOT unprotected

Last snapshot for svc:/system/filesystem/zfs/auto-snapshot:frequent taken on Sun Mar 15 22:26 2009

which was greater than the 15 minutes schedule. Taking snapshot now.

cannot create snapshot 'rpool/ROOT/opensolaris@zfs-auto-snap:frequent-2009-03-16-09:06': dataset is busy

no snapshots were created

Error: Unable to take recursive snapshots of rpool/ROOT@zfs-auto-snap:frequent-2009-03-16-09:06.

Moving service svc:/system/filesystem/zfs/auto-snapshot:frequent to maintenance mode.

Here is a quote from another site: “This problem is being caused by the old (i.e. non-active) boot environments not being mounted while the service tries to snapshot them. You can’t ‘svcadm clear’ or ‘svcadm enable’ them because they will still fail.”

Apparently this is a bug with ZFS snapshots of rpool/ROOT/opensolaris-style datasets. Anyhow, to fix it I’ve just used a custom setup in Time Slider: clear all the services set to “maintenance”, then launch time-slider-setup and configure it to exclude the problem datasets.

Update: as per John’s comment below, you can disable snapshots on the offending ZFS file system using the following command…

zfs set com.sun:auto-snapshot=false rpool/ROOT

As above, to clear the “maintenance” status on the affected services, run the following commands…

svcadm clear auto-snapshot:hourly

svcadm clear auto-snapshot:frequent
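If several instances are stuck, you can generate the clear commands instead of typing them one by one. A sketch: the here-doc stands in for real `svcs -a` output so the pipeline can be tried anywhere; on a live box you would pipe `svcs -a` in and feed the result to sh (via pfexec).

```shell
# Find auto-snapshot instances in "maintenance" and print the matching
# "svcadm clear" command for each (fields: state, time, FMRI).
awk '$1 == "maintenance" && $3 ~ /auto-snapshot/ { print "svcadm clear " $3 }' <<'EOF'
maintenance     9:06:36 svc:/system/filesystem/zfs/auto-snapshot:hourly
online          9:23:57 svc:/system/filesystem/zfs/auto-snapshot:frequent
EOF
```

With the sample input this prints one line, `svcadm clear svc:/system/filesystem/zfs/auto-snapshot:hourly`, and skips the healthy instance.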

Now run this to ensure all the SMF services are running without issue…

svcs -x

If all is well you will get no output.