Thursday, March 15, 2012

glusterfs @ 80TB in seconds flat

tl;dr:
[root@rcss1 /]# df -H /rcss_gfs
Filesystem             Size   Used  Avail Use% Mounted on
rcss1:/rcss             80T   137M    80T   1% /rcss_gfs

It was all so very, very complicated!

We started off with 4 C2100s, each with 20TB of local disk:
[root@rcss1 /rcss_gfs]# df -H /rcss1
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdb                20T    11G    20T   1% /rcss1

Then off we went... first, install the code:
[root@rcss1 /]# yum install glusterfs-server


[root@rcss1 /]#  /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]

[root@rcss1 /]# chkconfig glusterd on
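
We did the same install/start/chkconfig dance on rcss2, rcss3 and rcss4. Something like this would do it in one shot, assuming passwordless root ssh between the boxes (a sketch, not what we actually typed):

# Repeat the install/start/enable steps on every node:
for h in rcss1 rcss2 rcss3 rcss4; do
    ssh root@$h 'yum -y install glusterfs-server &&
                 /etc/init.d/glusterd start &&
                 chkconfig glusterd on'
done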

Probe the peers, and connect them:
[root@rcss1 /]# gluster peer probe rcss2
Probe successful
[root@rcss1 /]# gluster peer probe rcss3
Probe successful
[root@rcss1 /]# gluster peer probe rcss4
Probe successful

[root@rcss1 /]# gluster peer status
Number of Peers: 3

Hostname: rcss2
Uuid: 0d1170c3-b16b-4084-9ecf-c22865fd2fa8
State: Peer in Cluster (Connected)

Hostname: rcss3
Uuid: 171362a0-d767-4fd6-a3f6-561a43fd2b69
State: Peer in Cluster (Connected)

Hostname: rcss4
Uuid: 899be5de-f560-4cc2-ac58-10a4bdaa5574
State: Peer in Cluster (Connected)
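
Worth noting: you only have to probe from one box; the rest of the pool picks up the full peer list by itself. A quick optional check from another node (not from the original session):

# The peer list should already show all four nodes from any peer:
ssh root@rcss2 'gluster peer status'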

Make the volume itself; we are striping across all four bricks here:
[root@rcss1 /]# gluster volume create rcss stripe 4 transport tcp rcss1:/rcss1 rcss2:/rcss2 rcss3:/rcss3 rcss4:/rcss4
Creation of volume rcss has been successful. 
Please start the volume to access data.

[root@rcss1 /]# gluster volume info

Volume Name: rcss
Type: Stripe
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: rcss1:/rcss1
Brick2: rcss2:/rcss2
Brick3: rcss3:/rcss3
Brick4: rcss4:/rcss4


[root@rcss1 /]# gluster volume start rcss
Starting volume rcss has been successful


[root@rcss1 /]# mkdir /rcss_gfs

[root@rcss1 /]# mount -t glusterfs rcss1:/rcss /rcss_gfs/
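
To make that mount survive a reboot, an /etc/fstab line along these lines should do it (a sketch, untested here; _netdev holds the mount until the network is up):

# /etc/fstab -- assumption, not part of the original run:
rcss1:/rcss  /rcss_gfs  glusterfs  defaults,_netdev  0 0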

[root@rcss1 /]# df -H /rcss_gfs
Filesystem             Size   Used  Avail Use% Mounted on
rcss1:/rcss             80T   137M    80T   1% /rcss_gfs

Test it:
[root@rcss1 /rcss_gfs]# dd if=/dev/zero of=test2.dat bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 77.4103 s, 135 MB/s
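
A fun sanity check we could have run at this point (a hedged sketch, not from the original session): on a 4-way stripe each brick should hold a sparse slice of the file, with the full 10GB apparent size but only about a quarter of the blocks actually allocated. ls -ls on each brick tells the story:

# Each brick should show roughly 2.5GB of allocated blocks for the 10GB file:
for h in rcss1 rcss2 rcss3 rcss4; do
    ssh root@$h "ls -ls /$h/test2.dat"
done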
It took longer to write this posting than to build a clustered filesystem.

Update... client testing: six clients manage to fill the inter tubes ;-)
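
For the record, the client test was nothing exotic; roughly this sort of thing, with hypothetical client hostnames and assuming each client already has the volume mounted at /rcss_gfs:

# Hypothetical reconstruction: six concurrent 10GB writers, one per client.
for c in client1 client2 client3 client4 client5 client6; do
    ssh root@$c 'dd if=/dev/zero of=/rcss_gfs/test_$(hostname).dat bs=1024k count=10000' &
done
wait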





[any opinions here are all mine, and have absolutely nothing to do with my employer]
(c) 2011 James Cuff