Wednesday, October 26, 2011

simple quicktime screen capture and recording

I spent some time at a great workshop over the last two days, mainly talking about communication. The topic of how to grab screencasts came up briefly, and there was some consternation. However, I just stumbled across this simple method that works on any OS X 10.6.8 or later machine:

Applications -> QuickTime Player -> File -> New Screen Recording


Awesome!

http://msrworkshop.tumblr.com/

http://research.microsoft.com/en-us/events/escience2011-scholarly-communications/agenda.aspx


Wednesday, October 19, 2011

try not to be so worried about xfs >100T

There has been a lot of chatter about >100TB file systems with XFS:

http://blog.jcuff.net/2011/06/xfsrepair-testing-double-disk-raid5.html

http://blog.jcuff.net/2011/05/big-fat-storage-in-extreme-hurry.html

http://scalability.org/?p=3192 (joe's write up is awesome!)

and the thing that started it:

http://www.enterprisestorageforum.com/storage-hardware/the-state-of-file-systems-technology-problem-statement.html

We had this one system start to show very odd behavior yesterday: multiple drive failures, plus some self-inflicted injury, caused the inode table to get out of sync for some of our hardlinked directories. We use the awesomeness of rsnapshot to take snaps of some of our filesystems onto large 100T+ XFS filesystems, a bit like this one:

[root@emcbackup9 emcback9]# df -H .
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/emcbackup9_vg-emcbackup9_lv
                       124T    32T    92T  26% /mnt/emcback9
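
For the curious, the rsnapshot side of this is nothing exotic; here is a minimal sketch of the kind of config involved (paths and intervals are purely illustrative, not our production file, and remember rsnapshot wants tabs between fields):

# illustrative only -- not the real config
snapshot_root   /mnt/emcback9/snaps/
cmd_rsync       /usr/bin/rsync
link_dest       1
retain          daily   7
retain          weekly  4
backup          /some/filesystem/       emcback9/

It is all of those hardlinked snapshots that make the link counts interesting when things go sideways.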

We thought we would have to get into it in a big way to repair it, but a quick read of http://oss.sgi.com/projects/xfs/training/xfs_slides_11_repair.pdf and a bit of experience had the filesystem up and sorted in about 10 hours.

The filesystem was not actually all that poorly, and phase 7 swiftly fixed the missing inode metadata: these were all hardlinks with a simple reference count problem, which you can see below. The filesystem was showing one interesting symptom though: "rm -rf directory" would give "directory not empty":

[root@emcbackup9 Lab]# rm -rf Landscape/
rm: cannot remove directory `Landscape/': Directory not empty

but it was clearly empty:

[root@emcbackup9 Lab]# ls -ltra Landscape/
total 0
drwxrwxrwx 1 root root 6 Oct 18 16:02 .
drwxrwxrwx 3 root root 22 Oct 18 16:16 ..

You can clearly see the link count of (1) from stat below; something is definitely wrong here ;-)

[root@emcbackup9 Lab]# stat Landscape/
File: `Landscape/'
Size: 6 Blocks: 0 IO Block: 4096 directory
Device: fd00h/64768d Inode: 5577705 Links: 1
Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2011-10-18 16:17:02.511914105 -0400
Modify: 2011-10-18 16:02:55.424823531 -0400
Change: 2011-10-18 16:16:23.872022080 -0400
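
(For comparison, a healthy empty directory on XFS or ext should show a link count of 2: one for its entry in the parent and one for its own "." entry. Easy enough to sanity check with a throwaway directory:)

[root@emcbackup9 Lab]# mkdir healthy && stat -c %h healthy
2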

An interesting failure: not a show stopper, just rather annoying when you are trying to keep the filesystem neat and tidy. The eventual magic we used was:
xfs_repair -P -o bhash=1024 /dev/mapper/emcbackup9_vg-emcbackup9_lv
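
If you are nervous at this scale (and you should be), xfs_repair also has a no-modify mode (-n) that only reports what it thinks is wrong; running something along these lines against the unmounted device first is cheap insurance:

umount /mnt/emcback9
xfs_repair -n -P -o bhash=1024 /dev/mapper/emcbackup9_vg-emcbackup9_lv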

Here is the tail end of the repair, including the phase 7 sweep:
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
resetting inode 3527544 nlinks from 2 to 23787
resetting inode 4215328 nlinks from 1 to 2
resetting inode 4215348 nlinks from 1 to 2
resetting inode 4215354 nlinks from 1 to 2
resetting inode 4586590 nlinks from 1 to 2
resetting inode 4649378 nlinks from 1 to 2

So although my tweet:

https://twitter.com/#!/jamesdotcuff/status/126400580972326912

looked pretty ominous, in the end it was all smiles and sunshine. It was particularly interesting because I was asked about this *exact issue* on Twitter yesterday:

https://twitter.com/#!/jamesdotcuff/status/126351096938639360

This is a box with 16G of memory and 4,805,636,416 files on a filesystem that is only 26% full. The xfs_repair bhash flag is your friend here; the repair used 100% of the available memory.
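
(If you want a rough idea of how many inodes you are dealing with before committing to a repair, df -i on the still-mounted filesystem is the quick and dirty way to find out:)

df -i /mnt/emcback9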

Mind you, as always, your mileage can and will vary!


Tuesday, October 18, 2011

paranoid about your cloud provider? why not raid1 mirror two cloud services?

Usual disclaimer, this is nuts, don't try this at home, your mileage will vary, things will no doubt fail #exercisetoreader etc. ;-)

Summary: A RAID mirror of box.net and dropbox.com - just for fun!

Following on from yesterday's post, I was sitting on the red line this morning thinking about whether you could RAID-mirror two discrete cloud storage providers to get an n+1 file system. Turns out that you can! This is just a proof of concept, no more, no less; no one was injured in the making of this motion picture!

So here we go. First, follow the FUSE setup from yesterday's post to mount your box.net account. Then grab and set up dropbox.com like this:

jcuff@shuttle:~$ wget https://www.dropbox.com/download?dl=packages/nautilus-dropbox_0.6.9_i386.deb
jcuff@shuttle:~$ sudo dpkg -i ./download\?dl\=packages%2Fnautilus-dropbox_0.6.9_i386.deb 
jcuff@shuttle:~$ dropbox status
Dropbox isn't running!
jcuff@shuttle:~$ dropbox start -i
Starting Dropbox...

and create a backing file, just like we did yesterday:
jcuff@shuttle:~/Dropbox$ dd if=/dev/zero of=test.dat bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0464433 s, 226 MB/s

Now you can simply set up your loopback devices. Remember to use the FUSE mount we described yesterday for box.net; WebDAV (davfs) will not take the loopback and fails with an I/O error, but the FUSE one passes muster. Basically you are setting up two block devices backed by the two remote cloud offerings.
jcuff@shuttle:~/Dropbox$ sudo losetup /dev/loop3 ~/Dropbox/test.dat
jcuff@shuttle:~/Dropbox$ sudo losetup /dev/loop4 /tmp/boxed/test.dat

jcuff@shuttle:~$ sudo losetup -a
/dev/loop3: [0900]:18352720 (/home/jcuff/Dropbox/test.dat)
/dev/loop4: [0017]:6 (/tmp/boxed/test.dat)

OK, now let's assemble the RAID1, make a filesystem, and mount it:
root@shuttle:/home/jcuff# mdadm --create /dev/md2 --level=1 --raid-devices=2 --auto=part /dev/loop[34]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.

root@shuttle:/home/jcuff# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 loop4[1] loop3[0]
      10228 blocks super 1.2 [2/2] [UU]

root@shuttle:/home/jcuff# mkfs.ext4 /dev/md2
mke2fs 1.41.14 (22-Dec-2010)

root@shuttle:/home/jcuff# mount /dev/md2 /mnt/cloudraid/

root@shuttle:/home/jcuff# df -H /mnt/cloudraid/
Filesystem             Size   Used  Avail Use% Mounted on
/dev/md2                11M   1.2M   8.5M  12% /mnt/cloudraid

Does it work?
root@shuttle:/home/jcuff# echo hellojames > /mnt/cloudraid/hellothere

root@shuttle:/home/jcuff# strings /dev/md2 
lost+found
hellothere
lost+found
hellothere
hellojames
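
Tearing the experiment back down is just the assembly in reverse (using the same device names as above):

root@shuttle:/home/jcuff# umount /mnt/cloudraid
root@shuttle:/home/jcuff# mdadm --stop /dev/md2
root@shuttle:/home/jcuff# losetup -d /dev/loop3
root@shuttle:/home/jcuff# losetup -d /dev/loop4
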
Again, please don't try this at home ;-)


Monday, October 17, 2011

boxfs: 50G from the cli

Box.net has 50G of free space!

Here's how you can get the most out of it from the command line via FUSE. Totally awesome!

This was all executed on DISTRIB_DESCRIPTION="Ubuntu 11.04"

http://code.google.com/p/boxfs/wiki/Compiling

First fetch the code:
jcuff@shuttle:~/box$ svn checkout http://boxfs.googlecode.com/svn/trunk/ boxfs-read-only
A    boxfs-read-only/boxopts.c
A    boxfs-read-only/boxfs.c
A    boxfs-read-only/mkrel.sh
A    boxfs-read-only/boxapi.c
A    boxfs-read-only/boxopts.h
A    boxfs-read-only/boxpath.c
A    boxfs-read-only/COPYING
A    boxfs-read-only/boxapi.h
A    boxfs-read-only/boxhttp.c
A    boxfs-read-only/boxpath.h
A    boxfs-read-only/README
A    boxfs-read-only/Makefile
A    boxfs-read-only/boxhttp.h
U   boxfs-read-only
Checked out revision 84.

Next, some simple prereqs:
jcuff@shuttle:~/$ sudo apt-get install libxml2-dev libfuse-dev libcurl4-gnutls-dev libzip-dev

and libapp:

jcuff@shuttle:~/$ git clone http://github.com/drotiro/libapp.git
jcuff@shuttle:~/$ cd libapp
jcuff@shuttle:~/libapp$ make
jcuff@shuttle:~/libapp$ sudo make install

Then simply add this to the boxfs Makefile before running make:
FLAGS = -D_FILE_OFFSET_BITS=64

Remember to run ldconfig or you will get:
jcuff@shuttle:~/$ boxfs --help
boxfs: error while loading shared libraries: libapp.so: cannot open shared object file: No such file or directory
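
The fix is simply to refresh the linker cache after installing libapp:
jcuff@shuttle:~/$ sudo ldconfig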

Then make a config file:
jcuff@shuttle:~/$ cat config
username = you@youremail.org
mountpoint = /home/jcuff/boxed
verbose = yes
secure = yes
password = secritpasscodez
largefiles = yes
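
That config holds your password in plain text, so it is worth locking it down to just you:
jcuff@shuttle:~/$ chmod 600 ./config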

Then you are pretty much all set to fire it up:
jcuff@shuttle:~/$ boxfs -f ./config

Tada!
jcuff@shuttle:~/$ ls -ltra ~/boxed
total 4
drwxr-xr-x  3 root  root  21390149 1969-12-31 19:00 .
-r--r--r--  1 root  root  19023676 2011-10-17 12:41 Box App Overview.mp4
-r--r--r--  1 root  root    439630 2011-10-17 12:41 Box Overview.pdf
-r--r--r--  1 root  root   1563857 2011-10-17 12:41 Box for iPhone.pdf
drwxr-xr-x  2 root  root    362986 2011-10-17 12:46 Private

jcuff@shuttle:~/$ df -H ~/boxed
Filesystem             Size   Used  Avail Use% Mounted on
boxfs                  2.2G    22M   2.2G   1% /home/jcuff/boxed

The system uses delayed writes (hence the suspiciously quick dd below), but it works nicely!
jcuff@shuttle:~/$ time dd if=/dev/zero of=~/boxed/test.dat bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.178957 s, 58.6 MB/s

jcuff@shuttle:~/$ ls -ltra ~/boxed/test.dat 
-r--r--r-- 1 root root 10485760 2011-10-17 13:12 /home/jcuff/boxed/test.dat
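
When you are done, it unmounts like any other FUSE filesystem:
jcuff@shuttle:~/$ fusermount -u ~/boxed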

And as they say "wooph, there it is!"

update! remember that box.net is also a WebDAV provider!

jcuff@shuttle:~$ sudo apt-get install davfs2

jcuff@shuttle:~$ sudo mount -t davfs https://www.box.net/dav ./tt
Please enter the username to authenticate with server
https://www.box.net/dav or hit enter for none.
  Username: you@youremail.com
Please enter the password to authenticate user you@youremail.com with server
https://www.box.net/dav or hit enter for none.
  Password:  

jcuff@shuttle:~$ df -H /home/jcuff/tt
Filesystem             Size   Used  Avail Use% Mounted on
https://www.box.net/dav
                        28G    14G    14G  50% /home/jcuff/tt
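
If you get tired of typing credentials, davfs2 can also be driven from /etc/fstab with the password stashed away in ~/.davfs2/secrets, roughly like this (a sketch, adjust paths and options to taste):

# /etc/fstab
https://www.box.net/dav  /home/jcuff/tt  davfs  rw,user,noauto  0  0

# ~/.davfs2/secrets (must be chmod 600)
https://www.box.net/dav  you@youremail.com  secritpasscodez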

It is even easier on OS X or Windows!

Top menu -> "Go" -> "Connect to Server":

https://www.box.net/dav



[any opinions here are all mine, and have absolutely nothing to do with my employer]
(c) 2011 James Cuff