Monday, May 23, 2011

big fat storage in an extreme hurry!

This morning I posted a "tweet" and a picture [http://jcuff.us/mjic9f] of a box o' disk drives that arrived at the office:


Later in the day, our team email list was updated:

from      Chris Walker 
to        Research Computing Operations List
date      Mon, May 23, 2011 at 4:53 PM
subject   Re: [Rcops-list] storage *cough*

Mike, Jerry, and I just replaced all but the systems disks, no problem.  



Sure enough, it was no fluff: they happily took the 80 lbs of disk, put it in the back of a car, drove to our data center, and updated a legacy 48-drive array for the least money, and in the shortest time, I've ever seen!

By the power of Puppet and our automated systems, they had the whole thing up in about 3 hours from loading the back of the car. If you have ever driven in Boston you will know that travel time dominates this TTL heuristic ;-)
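For anyone who has never done a live swap on an md array, the per-drive dance goes roughly like this. A sketch only, with an illustrative array and device name rather than our exact sequence:

# illustrative only: retire the old member, seat the new 3 TB drive,
# and hand it back to md, which rebuilds onto it automatically
mdadm /dev/md10 --fail /dev/sdd
mdadm /dev/md10 --remove /dev/sdd
# (physically swap the drive in its hot-swap bay)
mdadm /dev/md10 --add /dev/sdd

Multiply that by 46 drives and you can see why automation, and a team that has done this before, matters.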

As a $6K gamble it has worked out so far, and at even less money than we have spent on other "projects":

http://blog.jcuff.net/2011/03/diy-tb-and-rc-playing-catch-up-quick.html

This is going to be a great system to test and see what we can do with it.

I LOVE working with this group!!!

[root@h1 ~]# fdisk -l | grep dev | grep bytes | grep Disk
Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdd: 3000.5 GB, 3000592982016 bytes
Disk /dev/sde: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdf: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdg: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdh: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdi: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdj: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdk: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdl: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdm: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdn: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdo: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdp: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdq: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdr: 3000.5 GB, 3000592982016 bytes
Disk /dev/sds: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdt: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdu: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdv: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdw: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdx: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdy: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdz: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdaa: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdab: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdac: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdad: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdae: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdaf: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdag: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdah: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdai: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdaj: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdak: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdal: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdam: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdan: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdao: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdap: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdaq: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdar: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdas: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdat: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdau: 3000.5 GB, 3000592982016 bytes
Disk /dev/sdav: 3000.5 GB, 3000592982016 bytes
Disk /dev/md2: 15.7 GB, 15726608384 bytes
Disk /dev/md5: 2097 MB, 2097348608 bytes
Disk /dev/md3: 4293 MB, 4293525504 bytes
Disk /dev/md1: 205 MB, 205520896 bytes

-bash-3.2$ df -H .
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/sol02_vg-sol02_lv
                       124T    11G   124T   1% /mnt
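That 124T filesystem is just LVM layered over the four big md sets shown below. Here is a sketch of the layering, under assumptions: the exact flags and the filesystem type are my guesses (I would reach for XFS at this size, since ext4 topped out well short of it in 2011), and only the sol02 names come from the df output above:

pvcreate /dev/md10 /dev/md11 /dev/md12 /dev/md13
vgcreate sol02_vg /dev/md10 /dev/md11 /dev/md12 /dev/md13
lvcreate -l 100%FREE -n sol02_lv sol02_vg
mkfs.xfs /dev/sol02_vg/sol02_lv
mount /dev/sol02_vg/sol02_lv /mnt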

-bash-3.2$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md10 : active raid5 sdd[12] sdl[10] sdaj[9] sdar[8] sdt[7] sdab[6] sdh[5] sdp[4] sdan[3] sdav[2] sdx[1] sdaf[0]
      32232930752 blocks super 1.0 level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUUUUUU_]
      [=============>.......]  recovery = 68.7% (2015969424/2930266432) finish=1050.1min speed=14507K/sec

md12 : active raid5 sdf[11] sdn[9] sdal[8] sdat[7] sdv[6] sdad[5] sdj[4] sdah[3] sdap[2] sdr[1] sdz[0]
      29302664320 blocks super 1.0 level 5, 64k chunk, algorithm 2 [11/10] [UUUUUUUUUU_]
      [==============>......]  recovery = 71.7% (2101048920/2930266432) finish=618.2min speed=22352K/sec

md11 : active raid5 sdc[12] sdk[10] sdai[9] sdaq[8] sds[7] sdaa[6] sdg[5] sdo[4] sdam[3] sdau[2] sdw[1] sdae[0]
      32232930752 blocks super 1.0 level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUUUUUU_]
      [=============>.......]  recovery = 69.1% (2024975536/2930266432) finish=1065.6min speed=14156K/sec

md13 : active raid5 sda[10] sdi[8] sdag[7] sdao[6] sdq[5] sde[4] sdm[3] sdak[2] sdas[1] sdu[0]
      26372397888 blocks super 1.0 level 5, 64k chunk, algorithm 2 [10/9] [UUUUUUUUU_]
      [===========>.........]  recovery = 55.0% (1612084144/2930266432) finish=1847.6min speed=11889K/sec
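Incidentally, that [12/11] [UUUUUUUUUUU_] pattern is the signature of a brand-new RAID5: mdadm starts the array degraded and syncs the last member in the background, which is exactly the "recovery" you see above. Something like this matches md10's geometry, though the invocation is my reconstruction (devices listed in the slot order mdstat reports), not the team's actual build script:

mdadm --create /dev/md10 --metadata=1.0 --level=5 --chunk=64 \
      --raid-devices=12 \
      /dev/sdaf /dev/sdx /dev/sdav /dev/sdan /dev/sdp /dev/sdh \
      /dev/sdab /dev/sdt /dev/sdar /dev/sdaj /dev/sdl /dev/sdd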

Once the sync finishes, it's time to start some serious soak-testing abuse!
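Nothing exotic needed for a first pass; even a few parallel streaming writes with a read-back will shake out flaky cables and expanders. Something along these lines (file names and sizes purely an example):

cd /mnt
# eight parallel ~100 GB streaming writes, bypassing the page cache
for i in $(seq 1 8); do
  dd if=/dev/zero of=soak.$i bs=1M count=100000 oflag=direct &
done
wait
# ...and read them all back the same way
for i in $(seq 1 8); do
  dd if=soak.$i of=/dev/null bs=1M iflag=direct &
done
wait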

For now all disks are present, and all ~140 raw TB (46 new 3 TB drives, plus the two 1 TB system disks) accounted for!
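If you want to check my arithmetic against the fdisk output above (46 x 3000592982016 bytes plus 2 x 1000204886016 bytes):

[root@h1 ~]# fdisk -l | awk '/^Disk \/dev\/sd/ {sum += $5} END {printf "%.1f TB raw\n", sum/1e12}'
140.0 TB raw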



[any opinions here are all mine, and have absolutely nothing to do with my employer]
(c) 2011 James Cuff