opensolaris v134 – CIFS has gone walkies

Errr, I can't find the CIFS service in the 134 build.

I know it was renamed as per below, but I still can't see it anywhere:

> system/file-system/smb (was SUNWsmbfs*)
> service/file-system/smb (was SUNWsmbs*)

Anyone have any ideas?

pkg search *smb gets me these… but I can't install them.

require depend service/file-system/smb@0.5.11-0.134 pkg:/redistributable@0.1-0.134
require depend service/file-system/smb@0.5.11-0.134 pkg:/storage/storage-server@0.1-0.134
require depend service/file-system/smb@0.5.11-0.134 pkg:/system/security/kerberos-5@0.5.11-0.134
require depend service/file-system/smb@0.5.11-0.134 pkg:/storage/storage-nas@0.1-0.134
require depend system/file-system/smb@0.5.11-0.134 pkg:/redistributable@0.1-0.134
require depend system/file-system/smb@0.5.11-0.134 pkg:/storage/storage-server@0.1-0.134
require depend system/file-system/smb@0.5.11-0.134 pkg:/slim_install@0.1-0.134

pkg install system/file-system/smb

No updates necessary for this image.

pkg install service/file-system/smb

Creating Plan
pkg: The following pattern(s) did not match any packages in the current catalog.
Try relaxing the pattern, refreshing and/or examining the catalogs:
service/file-system/smb

Update 23/05/2010
Problem was due to errors within package manager — see this post
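For anyone hitting the same thing, a full catalog refresh before retrying is worth a shot. A sketch only, assuming the default publisher is configured:

```shell
# Force a full refresh of the package catalogs, then retry the install.
pfexec pkg refresh --full
pfexec pkg install service/file-system/smb

# Once installed, the CIFS server is enabled via SMF:
pfexec svcadm enable -r smb/server
```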

OpenSolaris – iSCSI

Want iSCSI in opensolaris?

Grab SUNWiscsitgt via the package manager.

enable the service via svcadm:

svcadm enable iscsitgt

create your zfs volume for iscsi (the -V 500G here creates a 500GB zvol, which caps the size of the iscsi drive):

zfs create -V 500G tank/iscsi

set iscsi sharing on via the zfs command:

zfs set shareiscsi=on tank/iscsi

check that the target is up and running:

iscsitadm list target -v

Done. You should be able to connect via IP from another machine. I have not covered CHAP or any client-side configuration; I'm assuming an isolated LAN.
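For the curious, the client side from another OpenSolaris box looks roughly like this. A sketch only — the target IP 192.168.1.10 is a placeholder for your server's address:

```shell
# Enable the iSCSI initiator service.
pfexec svcadm enable network/iscsi/initiator

# Point the initiator at the target box and enable SendTargets discovery.
pfexec iscsiadm add discovery-address 192.168.1.10:3260
pfexec iscsiadm modify discovery --sendtargets enable

# Create device nodes for the discovered LUN, then look for the new disk.
pfexec devfsadm -i iscsi
format
```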

[HD Tune benchmark screenshot: SUN SOLARIS iSCSI disk]

Opensolaris – white screen on logon

I had this problem when I enabled 3D effects on my server. The screen just goes white and you cannot see anything. Even after a reboot, as soon as you log on to GNOME the screen goes white.

To fix it, you'll need to log on using a "failsafe terminal" session, then run gnome-cleanup from the command line. This will blow away your GNOME settings (although they are actually backed up to a file), but it means you'll be able to log on again without issue.
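If gnome-cleanup isn't on your path for some reason, moving the GNOME dotfiles aside from the failsafe terminal does roughly the same job. A sketch, assuming the default config locations in your home directory:

```shell
# Move (rather than delete) the GNOME session/config state out of the way,
# so a fresh default config is generated at next logon.
cd ~
mkdir -p gnome-settings-backup
mv .gnome2 .gconf .gconfd gnome-settings-backup/ 2>/dev/null
```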

VirtualBox – crashing / freezing

I’ve had some problems since my upgrade to VirtualBox 2.2.0 on OpenSolaris. After some time, all of my Linux boxes seem to just die; the virtual machine stops responding. Strangely, there was no problem with my Windows VMs after the update.

From what I can tell, the upgrade turned off “IO APIC”, and this seems to be what caused the problem. Re-enabling it on all of my Linux boxes appears to have fixed the problem. I’ll continue testing for another week and update this post if any problems re-occur.
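Re-enabling it can be done in the GUI (the VM's Settings → System page), or from the command line with VBoxManage. The VM name "linux-vm" below is just a placeholder, and the VM must be powered off first:

```shell
# Turn the IO APIC back on for a given VM (VM must be powered off).
VBoxManage modifyvm "linux-vm" --ioapic on

# Confirm the setting took.
VBoxManage showvminfo "linux-vm" | grep -i ioapic
```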

Updated : 01/09/2009

Here is a bit more on IO APIC from the VirtualBox wiki… (from a Windows perspective):
http://www.virtualbox.org/wiki/Migrate_Windows

The hardware dependent portion of the Windows kernel is dubbed “Hardware Abstraction Layer” (HAL). While hardware vendor specific HALs have become very rare, there are still a number of HALs shipped by Microsoft. Here are the most common HALs (for more information, refer to this article: http://support.microsoft.com/kb/309283):

Hal.dll (Standard PC)
Halacpi.dll (ACPI HAL)
Halaacpi.dll (ACPI HAL with IO APIC)

If you perform a Windows installation with default settings in VirtualBox, Halacpi.dll will be chosen as VirtualBox enables ACPI by default but disables the IO APIC by default. A standard installation on a modern physical PC or VMware will usually result in Halaacpi.dll being chosen as most systems nowadays have an IO APIC and VMware chose to virtualize it by default (VirtualBox disables the IO APIC because it is more expensive to virtualize than a standard PIC). So as a first step, you either have to enable IO APIC support in VirtualBox or replace the HAL. Replacing the HAL can be done by booting the VM from the Windows CD and performing a repair installation.

Updated : 5/09/2009

I’ve had even more problems with OpenSolaris crashing completely after upgrading to the newer versions of VirtualBox (3.0.4), and have since reverted back to 2.2.0, which has fixed a lot of the hanging issues I encountered.

opensolaris – zfs PCI-e sata controller

Time for me to add some more SATA ports to my OpenSolaris build. I’ve been using SiI 3114 PCI cards (4x SATA) up until now without any issue, but they are limited by the bandwidth of the PCI slot. Time to upgrade and boost my performance.

At the moment I’m looking at grabbing one of these UIO cards:

AOC-USAS-L8i

http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

From what I’ve been reading, these cards will work fine in a PCI-e slot (8x / 16x) after a bit of modding, and present the drives straight to OpenSolaris without any additional drivers etc. (the same chipset is used in various Sun servers).

The backplate on a UIO card is essentially on backwards; when you remove the backplate and put the card into the PCI-e slot, all the components appear on the opposite side compared to a normal card. If you have a spare PCI-e backplate, you can attach it to this card (just unscrew the current backplate and replace it).

And the required mini-SAS to SATA cables from DealExtreme:

http://www.dealextreme.com/details.dx/sku.18023

Done.

Updated : 02/09/2009

Put this card in and bingo, no problems. I had to export and re-import the zpool as it had problems with the drives being on a different controller (hadn’t seen that before), but after that everything worked very well, as expected. Cool!
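For reference, the export/re-import dance is just the following — "tank" here is a placeholder for your pool name:

```shell
# Export the pool, then re-import it so ZFS re-scans the devices
# on their new controller paths.
pfexec zpool export tank
pfexec zpool import tank

# Sanity-check the pool afterwards.
pfexec zpool status tank
```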