This is quite a good diagnostic for checking your disk throughput. Try copying data to and from your zpool while you're running this command on the host (there are some example dd commands further down if you need something to generate the load)…
zpool iostat -v unprotected 2
               capacity     operations    bandwidth
pool          used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
unprotected  1.39T   668G     18      7  1.35M   161K
  c7d0        696G   403M      1      2  55.1K  21.3K
  c9d0        584G   112G      8      2   631K  69.3K
  c7d1        141G   555G      8      2   697K  70.0K
-----------  -----  -----  -----  -----  -----  -----
The zpool iostat command above will keep refreshing this output every 2 seconds (the figures are averaged over each interval). I've used it a few times to make sure all the disks are being used for write operations where they should be. Of course, reads won't necessarily be spread across all the disks, as that depends on where the data actually sits…
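As a side note (going from memory of the man page, so double-check on your own box), if you leave the interval off entirely you get a single set of figures averaged since boot / since the pool was imported, which makes a handy baseline to compare the live numbers against…
zpool iostat -v unprotected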
As you can see in the output from my “unprotected” zpool, my disk “c7d0” is nearly full, so fewer write operations land on that disk. In my case most of my reads also come from this disk, because I copied most of the data into the zpool when it contained only this single disk.
I’ve heard rumours of a future ZFS feature that will re-balance the data across all the disks (I'm unsure whether it would be live or run on a set schedule).
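If you need something to generate load while you watch, a couple of dd copies are enough. The path below is just an example (it assumes the pool is mounted at the default /unprotected mountpoint and ddtest is a throwaway file name). Bear in mind that if compression is enabled on the dataset a stream of zeros compresses away to nearly nothing, and reads straight after a write may be served from the ARC cache rather than the disks…
# write load: push roughly 10GB of zeros into the pool (adjust path and count to suit)
dd if=/dev/zero of=/unprotected/ddtest bs=1024k count=10240
# read load: stream the same file back out again
dd if=/unprotected/ddtest of=/dev/null bs=1024k
# tidy up afterwards
rm /unprotected/ddtest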
Another way to show some disk throughput figures is to run the iostat command like so…
iostat -exn 10
                 extended device statistics
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk17    1.0    0.0   71.5    0.0  0.0  0.0   10.9   0   1
cmdk18    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
cmdk19    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
cmdk20    0.8    0.0   33.5    0.0  0.0  0.0   13.5   0   1
cmdk21    0.4    0.0    0.5    0.0  0.0  0.0   15.5   0   1
cmdk22    0.8    0.0   66.3    0.0  0.0  0.0    9.0   0   1
cmdk23    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
cmdk24    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c11d1
    0.0    7.7    0.0   25.8  0.0  0.0    2.3    4.9   0   3   0   0   0   0 c8d0
    0.0   17.6    0.0  238.0  0.0  0.0    0.0    0.3   0   0   0   0   0   0 c9d0
    0.0    1.0    0.0    0.8  0.0  0.0    0.0    0.3   0   0   0   0   0   0 c7t0d0
    0.0    1.0    0.0    0.8  0.0  0.0    0.0    0.2   0   0   0   0   0   0 c7t2d0
    0.0    1.0    0.0    0.8  0.0  0.0    0.0    0.3   0   0   0   0   0   0 c7t3d0
    0.7   21.1   29.9  315.0  0.0  0.0    0.0    1.1   0   1   0   0   0   0 c7t4d0
    0.7   20.9   29.8  314.9  0.0  0.0    0.0    1.7   0   2   0   0   0   0 c7t5d0
    0.8   21.0   34.1  315.0  0.0  0.0    0.0    1.2   0   1   0   0   0   0 c7t6d0
    0.5   20.8   21.3  314.8  0.0  0.0    0.0    1.1   0   1   0   0   0   0 c7t7d0
This should show you all your disks and update every 10 seconds (the interval given on the command line). Copying data back and forth to your drives will show the stats changing.
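A couple of extra iostat tricks that I find handy (these are standard options as far as I know, but check your man page): you can list specific disks on the command line and add a sample count so it stops by itself, and iostat -E prints a one-off per-device summary of the same soft/hard/transport error counters that the -e columns above show…
# watch just two of the disks, 10 second samples, 6 samples then exit
iostat -exn c7t4d0 c7t5d0 10 6
# one-off per-device error and identity summary
iostat -En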