Sunday, January 9, 2011

how openstack stacks up: private cloud on top of cloud action

There is a *lot* of documentation for nova/openstack etc. out there, and it took me a while to realize that there is actually a rather simple way to install it all. I decided to take on something I've been meaning to do for a while: pull down a UEC image directly from http://uec-images.ubuntu.com/ and see if I could boot it from a local ubuntu KVM instance. This one is fairly unique, so it was certainly worth writing up.

Cloud meets cloud on top of cloud so to speak!


Part one: UEC cloud container install


Nova hooks in pretty well to an existing system, so I thought we would use an existing UEC image (the same images Ubuntu publishes as AWS AMIs) to encapsulate it, in case we need it later on again. You need a Maverick release to do this, since that is what ships the UEC floppy image to boot from (I grabbed the 64bit one - make sure if you are on a 64bit host, you are using 64bit images, or the later virtual-inside-of-virtual will hork):
jcuff@srv:~$ wget http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-amd64.tar.gz
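(the tarball unpacks with plain old tar - the file names further down assume this 10.10 release:)
jcuff@srv:~$ tar zxvf ubuntu-10.10-server-uec-amd64.tar.gz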

With that unpacked, apply some hot local kvm action via console redirection; this writes directly to the current console (really good for headless installs!):
jcuff@srv:~$ sudo kvm -fda maverick-server-uec-amd64-floppy -drive if=virtio,file=maverick-server-uec-amd64.img -m 4096 -smp 4 -boot a -monitor pty -nographic -net nic,model=virtio -net "user,hostfwd=tcp::5555-:22"
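(for the record, my reading of the interesting flags here - the rest is standard kvm fare:)
# -fda ...-floppy with -boot a : boot the UEC metadata floppy from the tarball
# -drive if=virtio,file=...    : attach the UEC root image as a virtio disk
# -monitor pty -nographic      : keep everything on the serial console
# -net "user,hostfwd=tcp::5555-:22" : forward host port 5555 to the guest's ssh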

I used the simple ubuntu/ubuntu login GRUB selection:
GNU GRUB version 1.98+20100722-1ubuntu1

+------------------------------------------------+
|uec-image with random ubuntu password           |
|***uec-image with ubuntu:ubuntu***              |
|uec-image seeded from a url [must be edited]    |
+------------------------------------------------+

(wait a bit)

Ubuntu 10.10 ubuntuhost ttyS0

ubuntuhost login: ubuntu

You can also reach the console via ssh (password is ubuntu):
root@srv:~# ssh -p 5555 ubuntu@localhost

Ok - so that's part one done!

We have our UEC running locally under KVM. Nice!


Now read on for how to get openstack running inside of your new shiny machine...


Part two: Openstack install


First, remember to set up /etc/hosts on your fresh new ubuntuhost, or your services will hork on configure and startup - rabbitmq in particular needs to know what the local machine's ip addy is:
echo -n `ifconfig eth0 | grep "inet addr" | awk '{print $2}' | sed 's/addr\://'` >> /etc/hosts ; echo " ubuntuhost" >> /etc/hosts
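That should leave something like this at the end of /etc/hosts (kvm user-mode networking usually hands the guest 10.0.2.15, but check yours):
10.0.2.15 ubuntuhost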

now time to grab git:
root@ubuntuhost:~# apt-get install git

And you will need this awesome wrapper; it pulls in all the deps you need to be successful, and doing it any other way will lead to hair loss:
root@ubuntuhost:~# git clone https://github.com/vishvananda/novascript.git

now you can fetch all your deps from inside the directory you just cloned:
root@ubuntuhost:~# cd novascript
root@ubuntuhost:~/novascript# ./nova.sh branch
root@ubuntuhost:~/novascript# ./nova.sh install

So now nova is available in your new local kvm / UEC image. I then shut down and set up a local disk for volumes etc. You don't have to do this, but it is worth it for later on.
jcuff@srv:~$ qemu-img create novavol 5G
Formatting 'novavol', fmt=raw size=5368709120
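I made a second image the same way for /home, where nova stashes its images (the 10G is just a guess on my part - size it to taste):
jcuff@srv:~$ qemu-img create homenovavol 10G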

I then restarted the kvm with that slightly larger home directory where nova stores images etc., and made double sure I had at least 1024M of memory - if not, your qemu devices will ABORT because they default to 512 (that one had me going for a *fair* old while!):
jcuff@srv:~$ sudo /usr/bin/qemu-system-x86_64 -fda maverick-server-uec-amd64-floppy -drive if=virtio,file=maverick-server-uec-amd64.img -m 4096 -boot a -monitor pty -nographic -net nic,model=virtio -net "user,hostfwd=tcp::5555-:22" -drive if=virtio,file=novavol -smp 4 -cpu host -drive if=virtio,file=homenovavol

then log in to your vm and do:
root@ubuntuhost:~# vgcreate nova-volumes /dev/vdb
No physical volume label read from /dev/vdb
Physical volume "/dev/vdb" successfully created
Volume group "nova-volumes" successfully created

(set up a test lv to make sure nova can find it ok)
root@ubuntuhost:~# lvcreate -L 1G --name test nova-volumes
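(and a quick sanity check that the volume group and test lv are where nova will look for them:)
root@ubuntuhost:~# lvs nova-volumes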

I also synced /home over to the new vdc1 disk to give me some room for the images. Again, this part is optional.
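(roughly what that looks like, assuming the /home disk shows up as /dev/vdc and you give it a single partition with fdisk first - your device letters may well differ:)
root@ubuntuhost:~# mkfs.ext4 /dev/vdc1
root@ubuntuhost:~# mount /dev/vdc1 /mnt
root@ubuntuhost:~# rsync -a /home/ /mnt/
root@ubuntuhost:~# umount /mnt && mount /dev/vdc1 /home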

Then you can finally get to the cloud controller piece:
root@ubuntuhost:~# ./nova.sh run

This sets up a set of gnu screen windows. Hit CTRL-A " to list them and jump to the test window (number 7); the other windows are fab in case you botch anything up, as the debug output for each component drops into its own window - nice touch:
Num Name Flags

0 nova
1 api
2 objectstore
3 compute
4 network
5 scheduler
6 volume
7 test

root@ubuntuhost:~# euca-add-keypair test > test.pem

root@ubuntuhost:~# euca-run-instances -k test -t m1.tiny ami-tiny
RESERVATION r-mm3lc9sb admin
INSTANCE i-1 ami-tiny scheduling test (admin, None) 0 m1.tiny 2011-01-09 19:13:32.732532

root@ubuntuhost:~# euca-describe-instances
RESERVATION r-rbl9mxvq admin
INSTANCE i-1 ami-tiny 10.0.0.3 10.0.0.3 launching test (admin, ubuntuhost) 0 m1.tiny 2011-01-09 19:27:36.667119
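(the instance takes a short while to flip from launching to running; something like this keeps an eye on it:)
root@ubuntuhost:~# watch -n 5 euca-describe-instances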



TADA! All set!


root@ubuntuhost:~# chmod 600 test.pem
root@ubuntuhost:~# ssh -i test.pem root@10.0.0.3
--
-- This lightweight software stack was created with FastScale Stack Manager
-- For information on the FastScale Stack Manager product,
-- please visit www.fastscale.com
--
-bash-3.2# uname -a
Linux localhost.localdomain 2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux


So that is pretty awesome - there you have it: an openstack cloud manager inside a UEC cloud server image, all ready to head out to the cloud I guess!


[any opinions here are all mine, and have absolutely nothing to do with my employer]
(c) 2011 James Cuff