Linux – crontab

Time to schedule some tasks!

First you need to make sure you're listed in the /etc/cron.d/cron.allow file. If you are not, su to root and add yourself to it.

Now to create your new crontab file;

crontab -e

You are now in vi, editing your newly created crontab file (note: if you do not have access to do this it will say so)

Now, I'm no expert at using vi, but pressing i will put you into insert mode. Do this, then type the cron job details. A typical cron job will look like this;

0 4 * * * /export/home/user/backup.sh

So there are five fields before the command you want to run. They are: minute, hour, day of month, month, and day of week. So the example above will run at 4am every day. See below for more detail.
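To make the fields a bit more concrete, here are a few hypothetical entries (the script paths are made up for illustration);

```shell
# min hour dom month dow  command
30   2    *   *     0     /export/home/user/backup.sh   # 2:30am every Sunday
0    *    *   *     *     /export/home/user/hourly.sh   # on the hour, every hour
15   9    1   *     *     /export/home/user/report.sh   # 9:15am on the 1st of each month
```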

Once you have entered the line, press ESC then type :wq to save and quit.

To confirm the job has been saved successfully in cron type this command;

crontab -l

This will show you your current scheduled tasks; it should output the above job.

In OpenSolaris the crontab files are located at /var/spool/cron/crontabs/ and are named after the specific user. Sometimes it's easier to go there than to use vi, but make sure your permissions are all set correctly before editing any system files.

In some distros there are folders like /etc/cron.daily/ which run any scripts inside them on a schedule matching the folder name.
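A script dropped into one of those folders is just a normal shell script. As a quick hypothetical example (the log file path is made up);

```shell
#!/bin/sh
# /etc/cron.daily/logstamp - appends a timestamp to a log file once a day
LOGFILE="${LOGFILE:-/tmp/logstamp.log}"
date "+%Y-%m-%d %H:%M:%S ran daily job" >> "$LOGFILE"
```

Make sure the script is executable (chmod +x) or it will usually be skipped.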

Here is a bit more detail on the format of the 5 timing fields within the crontab file;

*     *     *     *     *     command to be executed
|     |     |     |     |
|     |     |     |     +----- day of week (0 - 6) (Sunday=0)
|     |     |     +----------- month (1 - 12)
|     |     +----------------- day of month (1 - 31)
|     +----------------------- hour (0 - 23)
+----------------------------- min (0 - 59)

Update

@reboot is another very handy cron schedule keyword. I use the following quite a lot;

@reboot /bin/sleep 600 ; /path/to/your/your_program
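On crons that support the Vixie-style shortcuts, @reboot has some siblings too; a few hedged examples (the script paths are placeholders);

```shell
# run a script 10 minutes after every boot
@reboot /bin/sleep 600 ; /path/to/your/your_program
# other shortcuts on the same crons
@daily   /path/to/your/cleanup_script
@weekly  /path/to/your/report_script
```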

ZFS – Error 16 : Inconsistent filesystem structure

Oh no. I've managed to get this error before, right after I applied compression to the rpool zfs filesystem. Upon the next reboot I was greeted with this error message;

Error 16 : Inconsistent filesystem structure

For me it was a show stopper and I had to go into recovery. So I've learnt my lesson – not to touch the rpool zpool.

I was lucky enough to have put my data within a zfs filesystem (rpool/virtual) I created within the default rpool zpool. I use this spot for my VirtualBox virtual machines.

Recovery for my rpool/virtual zfs filesystem;

  1. Boot the live CD
  2. Open a terminal, type su and enter the default password as of 2008.11: "opensolaris"
  3. zpool import rpool – brings rpool and its associated zfs filesystems back online
  4. Type nautilus &, then copy the data from rpool/virtual to another drive (I mounted another disk by also importing another zpool – zpool import with no arguments will list the available zpools)
  5. zpool export rpool, then re-run the installation program. Note: you need to export rpool or the install will fail and stop.
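The steps above, sketched as a terminal session (this assumes the live CD environment, so treat it as a guide rather than something to paste in blindly);

```shell
su                   # default live CD password as of 2008.11: opensolaris
zpool import         # no arguments - lists importable pools
zpool import rpool   # bring rpool and its zfs filesystems back online
nautilus &           # copy the rpool/virtual data off to another drive
zpool export rpool   # must be exported before re-running the installer
```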

Worked for me, usual disclaimer though.

 

ZFS compression types

ZFS compression as of OpenSolaris 2008.11 has a few types to choose from.

lzjb (default) | gzip | gzip-[1-9]

They are used via the zfs set compression=gzip poolname command.

The following test was quickly done out of personal interest – and is in no way scientific!

I have an AMD CPU with 3 cores (2.4GHz). The data I copied to each of the shares consisted of video, documents, pictures and music. The first test I did is based on compression only (I have not measured throughput).

Original Data Size : 412MB

lzjb : 312MB Compression ratio : 1.32

gzip : 293MB Compression ratio : 1.41

gzip-9 : 292MB Compression ratio : 1.41

gzip is the winner on compression. With this small sample of data it is unclear whether the extra CPU overhead of a gzip-9 zfs file system is worth it – from these results I would say it isn't.

Again – gzip may be the winner on compression, but this does not reflect an improvement on throughput (untested).

Update: I've done a quick test on CPU load and throughput and I wouldn't recommend using gzip unless you are really limited on disk space – or have plenty of CPU to spare. lzjb is much faster (less load on the CPU) and does a pretty good job of compressing on the fly.

08/06/2011 Update

If you want to check the compression on a particular file you can use a combination of ls (true file size) and du (size after compression) like so;

Actual size

ls -lh file*

Compressed size

du -hs file*
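You can see the same apparent-size vs on-disk-size split without ZFS by using a sparse file – this is a Linux-flavoured sketch (GNU truncate and a filesystem with sparse file support assumed), but the ls/du principle is identical;

```shell
#!/bin/sh
# make a 10MB sparse file - no data blocks are actually allocated
truncate -s 10M /tmp/sparsefile

ls -lh /tmp/sparsefile   # reports the apparent size: 10M
du -h  /tmp/sparsefile   # reports blocks actually on disk: far less
```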

 

ZFS basics

I've had a play with WHS but eventually got annoyed with its lack of performance. Yes, I know it's not built for performance and is typically used just as a backup / simple store with duplication as redundancy, but I couldn't stand the speed of the thing. If you were ever unlucky enough (even post Power Pack 1) to do a copy during the "data moving" the performance was even worse. On a positive note, nothing beats it if you have a heap of non-similar sized disks that you want to put together (with redundancy) as a single shared storage pool.

Welcome to ZFS performance bliss….

Grab yourself OpenSolaris (I'm using 2008.11)

The tools of the trade are ;

zpool – this manages the zfs pools

zfs – this manages the zfs file systems

I was lucky enough to have 3 x 250GB drives, which I set up in raidz1 (similar to RAID5 – single drive redundancy). The rest of my drives were just set up as a striped volume containing mainly things I can afford to lose if a drive dies. I used a separate 500GB disk as the system disk.

After I had built the server I put in only the disks I wanted to work with next. So first I installed the 3 x 250GB disks and booted the machine. Running format then Ctrl-C showed me the device names. From deduction you can figure out the names of the 3 new drives. Now it's time to create a new raidz1 zpool with the following command;

zpool create poolname raidz1 dev2 dev3 dev4

Done – you should now have a mountable (and usable) file system at /poolname. If you don't want any redundancy, just drop "raidz1" out of the above command and you will get essentially a striped pool. Check the status of your zpool with this command;

zpool status poolname
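Putting those two commands together, a hypothetical session looks like this (tank and the c1t*d0 device names are placeholders – take yours from the format output);

```shell
# device names below are made up - substitute the ones format showed you
zpool create tank raidz1 c1t1d0 c1t2d0 c1t3d0
zpool status tank   # health and layout of the new pool
zpool list          # capacity and usage across all pools
```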

Another thing I like to modify at the root of the new zpool is compression. So I usually run this command;

zfs set compression=on poolname – enables compression (note: this does not typically slow down your file server if you have the spare CPU). See this post for further details on zfs compression.

To check the settings currently applied to your pool run;

zfs get all poolname

If you want to create some additional zfs file systems within the zpool, use the following command;

zfs create -o casesensitivity=mixed -o nbmand=on poolname/share

casesensitivity=mixed – allows Windows to access files (via SMB) even when the name is not specified exactly in its original case (this has to be set at creation). nbmand=on enables cross-protocol locking.

The future of ZFS…

ZFS is adding more and more features as time goes on. I have heard rumors about some kind of de-duplication (single instance storage) technology being implemented at some point. Also a data merger? – assuming it moves data across the pool more evenly.

Removing a device from a pool is also on the cards. I'm unsure if this will cover both striped and redundant pools though.


OpenSolaris – Autologon

There are two ways to setup the auto login for OpenSolaris (2008.11);

One – You can fire up the GUI using gdmsetup. Update: this didn't actually work when I tried it recently – it might not be in this release of OpenSolaris.

Two – Go to /etc/X11/gdm/custom.conf (in newer releases it's just /etc/gdm/custom.conf) and manually modify the required fields in the [daemon] section, adding the following;

[daemon]
AutomaticLoginEnable=true
AutomaticLogin=username

Note: You cannot auto-logon as root.

Bear in mind that I'm not concerned about security at this point, as the box is not exposed to the open world and is somewhat protected by the fact it lives on a "private" LAN (i.e. my home LAN).

If you want to run something automatically on logon you can put the commands into the /etc/profile file. Since I've been looking for a way to autostart VirtualBox machines, this is where I'll do it (mainly because it is one of the easiest ways to do it).

I have two machines, "Test" and "Test2", which I start by adding the following commands to the end of the /etc/profile file (VRDP ports set to 3395 and 3396);

sleep 10 – just to give the logon a bit of time to recover (all services to start etc). It might be easier to use timed login instead for this one, as all terminal (ssh) sessions also incur the delay.

gnome-terminal -e “VBoxHeadless -s Test -p 3395”

sleep 2 – delay between starting VMs

gnome-terminal -e “VBoxHeadless -s Test2 -p 3396”

touch ~/imadeit – I use this just to check it makes it to the end of the script OK.
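Putting the pieces above together, the tail of my /etc/profile ends up looking something like this (VM names and ports as above – treat it as a sketch of my setup rather than a recipe);

```shell
# give the logon time to settle - all services started etc.
sleep 10
gnome-terminal -e "VBoxHeadless -s Test -p 3395"
sleep 2    # delay between starting VMs
gnome-terminal -e "VBoxHeadless -s Test2 -p 3396"
touch ~/imadeit    # marker to confirm the script ran to the end
```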

I'm just looking to see if there is a nice way to initiate a clean shutdown of the VMs when the box is rebooted.