opensolaris – smbd issues?

Hmm… I’ve been having problems with CIFS since the 2009.06 (snv_111b) update.

Can’t pin it down exactly, as it could be “load” related… hmmm.

Found this: http://opensolaris.org/jive/thread.jspa?threadID=107681. This may also be a clue: http://opensolaris.org/jive/thread.jspa?threadID=92472&tstart=75

imapd? Might have to go back to 2008.11.

You might get better performance if you enable oplocks. There are known issues with it, but you can enable it just to see if you notice any difference:

svccfg -s smb/server setprop smbd/oplock_enable=boolean: true

So far, running the above command has fixed things for me. I’ll update if the problem returns.

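One thing worth noting: SMF property changes aren’t picked up until the service re-reads its configuration, so if the setting doesn’t seem to take, a refresh (and, failing that, a restart) of smb/server should do it:

svcadm refresh smb/server
svcadm restart smb/server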

Updated : 27/07/2009

The problem came back, so I’m updating to snv_117 as per the comments below.
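For reference, the usual way to get onto a dev build like snv_117 at the time was to point pkg at the dev repository and run an image update (this assumes the standard /dev publisher; your repository may differ):

pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pkg image-update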

Opensolaris : Citrix XenServer / ESX – Hooking into ZFS

To share your ZFS pool via NFS (in a way that works with Citrix XenServer / ESX) to a host called “esxhost”:

zfs set sharenfs=rw,nosuid,root=esxhost tank/nfs
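To confirm the property took and the dataset is actually being exported, something like this should do (using the same tank/nfs dataset as above):

zfs get sharenfs tank/nfs
showmount -e localhost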

Note: you MUST have a name that resolves from the OpenSolaris box, i.e. you should be able to ping it. I have tried with IPs only and it will fail. I have edited the /etc/hosts file to include the following line for my config:

# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "%Z%%M% %I% %E% SMI"
#
# Internet host table
#
192.168.9.120 esxhost
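A quick sanity check that the name resolves from the OpenSolaris box (using the esxhost entry above):

getent hosts esxhost
ping esxhost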

This also requires that you are using both files and DNS in your /etc/nsswitch.conf. You should have lines like these:

# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts: files dns mdns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files dns mdns
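As the comments above mention, DNS lookups also need /etc/resolv.conf populated; a minimal example with a hypothetical nameserver address:

nameserver 192.168.9.1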

I’ve also run this beforehand to allow full access:

chmod -R 777 /tank/nfs
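On the ESX side the share is then attached as an NFS datastore. From the classic service console that looked roughly like this (hypothetical server name and datastore label, ESX 3.5/4.x-era syntax; the VI client works too):

esxcfg-nas -a -o opensolarisbox -s /tank/nfs zfs-nfs-store
esxcfg-nas -l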

Update: check out this guide: http://blog.laspina.ca/ubiquitous/running-zfs-over-nfs-as-a-vmware-store

Update 2: there are known issues with waiting on synchronous writes when using NFS and ZFS together… There are reasons why you shouldn’t do this, but in a test environment disabling sync at the ZFS level may help performance (zfs set sync=disabled).
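For completeness, that looks like this against the same tank/nfs dataset (test/lab only; be aware of the data-loss risk on power failure, and sync=standard puts the default behaviour back):

zfs set sync=disabled tank/nfs
zfs set sync=standard tank/nfs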

I like this idea of splitting up your SSD too… again, no problems in a test environment, but in production I would dedicate entire drives to those tasks: https://blogs.oracle.com/ds/entry/make_the_most_of_your
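As a rough sketch (hypothetical device and slice names), carving one SSD into a log slice and a cache slice would look something like this; in production I’d give each role a whole device:

zpool add tank log c7t1d0s0
zpool add tank cache c7t1d0s1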

opensolaris / zfs – whitebox build

I’ve built a little server for home use, but it pales in comparison to this beast… This type of setup would be perfect for a lab / test environment that requires lots of fast and reliable disk. SCSI drives are fading out, and SATA can perform if it’s set up right. When you look at the price of the entire build, you wonder why corporations continue to spend the big bucks on the big storage names.

Check out this build (a very nice, clear guide): http://www.stringliterals.com/?p=77

[Image: RPC-4020B chassis]

Awesome piece of work.

zfs – playing with various configs

If you don’t have the disks available to build up a zpool and have a play with ZFS, you can actually just use files created with the mkfile command… The zpool commands are exactly the same.

mkfile 64m disk1
mkfile 64m disk2
mkfile 64m disk3
mkfile 100m disk4
mkfile 100m disk5
mkfile 100m disk6

Now you can create a zpool using the above files… (I’m using raidz for this setup.)

zpool create test raidz /fullpath/disk1 /fullpath/disk2 /fullpath/disk3

If you now want to expand this pool using another three drives (files), you can run this command:

zpool add test raidz /fullpath/disk4 /fullpath/disk5 /fullpath/disk6

Check the status of the zpool:

zpool status test

NAME                        STATE   READ WRITE CKSUM
test                        ONLINE     0     0     0
  raidz1                    ONLINE     0     0     0
    /export/home/daz/disk1  ONLINE     0     0     0
    /export/home/daz/disk2  ONLINE     0     0     0
    /export/home/daz/disk3  ONLINE     0     0     0
  raidz1                    ONLINE     0     0     0
    /export/home/daz/disk4  ONLINE     0     0     0
    /export/home/daz/disk5  ONLINE     0     0     0
    /export/home/daz/disk6  ONLINE     0     0     0

errors: No known data errors


Now it’s time to replace some drives (perhaps you wish to slowly increase your space). Note: all drives in a particular raidz vdev need to be replaced with larger drives before the additional space is shown.

mkfile 200m disk7
mkfile 200m disk8
mkfile 200m disk9

Check the size of the zpool first:

zpool list test

NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
test   464M  349M   115M  75%  ONLINE  -

Now replace all of the smaller drives with the larger ones…

zpool replace test /export/home/daz/disk1 /export/home/daz/disk7
zpool replace test /export/home/daz/disk2 /export/home/daz/disk8
zpool replace test /export/home/daz/disk3 /export/home/daz/disk9

The space will show up if you bounce the box. I’ve heard that sometimes you may need to export and import the pool, but I’ve never had to do that.
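If the extra space still doesn’t show up after a reboot, the export/import route looks like this; the -d flag is needed here because the vdevs are plain files (path as per the example above):

zpool export test
zpool import -d /export/home/daz test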