Wednesday, October 26, 2011

Half an hour for a Eucalyptus Dream

I hope you are enjoying the recently released Ubuntu 11.10 (Oneiric Ocelot). I recently took it for a spin, and this blog is about my experience. Eucalyptus has been part of Ubuntu since Jaunty, and I wanted to see how well the latest available version of Eucalyptus integrates with Oneiric.

And I am pleased to say that in less than half an hour, you can have your Eucalyptus cloud running in Oneiric! You don't believe me? Let's do it:

10:17: popped the CD for the server install into two desktop machines we had lying around:

I just added the OpenSSH server to the basic install;

10:31: install is done, machines rebooted. At this point I lost the monitor I was borrowing, so I switched to ssh from the comfort of my office, in front of a cup of coffee. I did a quick update and upgrade to ensure all the security patches were applied;


10:34: Time to get to the Eucalyptus install. There are nice instructions on help.ubuntu.com, which I partly followed: my engineering background prevents me from following nicely written instructions ... So, on the first desktop I did
     apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-sc eucalyptus-walrus
On the second desktop I installed the node controller
     apt-get install eucalyptus-nc


10:43: got to the familiar web UI

and downloaded the admin credentials. The configure tab was missing some components. I assume the network I was using confused the nice autoregistration mechanism of UEC a bit, so I switched to manual mode. I first de-registered all the components (the install used the default cluster1 for the cluster name)
     euca_conf --deregister-cluster cluster1
     euca_conf --deregister-sc cluster1
and re-registered them with a cluster name I'm more familiar with
     euca_conf --register-cluster pippo 192.168.7.246
     euca_conf --register-sc pippo 192.168.7.246

10:46: I registered the node controller
     euca_conf --register-nodes 192.168.4.7
(yep, our network is not a /24), waited a few seconds to allow the node to report in, and got

Phew, just under half an hour, as I promised. In a future blog I will talk about the images I uploaded to the cloud and tested: you can check the work we are doing on projects.eucalyptus.com.

Monday, October 17, 2011

Planet Eucalyptus

You already know our planet, since quite a few of you visit it regularly and have requested feeds to be added (let us know if you want your feed on our planet). When you visit it, you will find a somewhat ordinary planet site, perhaps too plain if you will, but what I want to mention in this blog is how we are handling it and how it is running 'in the cloud'.

In the previous drinking champagne blog, I mentioned that we have quite a few applications and services in our internal production cloud (of course powered by Eucalyptus), and planet is one of them. What we did with planet was to make it very simple to deploy and customize: the original work was done by Mark Atwood. In order to take advantage of the cloud, we relied heavily on the meta-data service.

The meta-data service allows instances started in the cloud (private and public clouds that follow the AWS API) to learn data pertinent to the instance itself (hence the name meta-data). Public IP, ssh keys, storage information, and instance IDs (EMIs, kernel and ramdisk) are examples of what can be retrieved. All this data is accessible at http://169.254.169.254/latest/meta-data, and is easily reachable from within the instance with a browser or, more likely, with wget or curl.
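As a concrete illustration, here are two tiny helpers (hypothetical, not part of euca2ools or any other tool) for fetching meta-data keys from inside an instance:

```shell
MD_BASE="http://169.254.169.254/latest/meta-data"

md_url() {
    # Build the full meta-data URL for a key such as public-ipv4
    echo "$MD_BASE/$1"
}

md() {
    # Fetch the value; this only works from inside a running instance
    curl -s "$(md_url "$1")"
}

# From inside an instance:
#   md public-ipv4                  # the instance's public IP
#   md instance-id
#   md public-keys/0/openssh-key
```

Hitting the base URL with no key lists the available keys, so it is easy to explore the tree interactively.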

Among this data, the user is allowed to pass a few kB of data to the instance. To do so one can use the euca2ools, in particular euca-run-instances with the -f or -d option. The instance can then access this data at http://169.254.169.254/latest/user-data: cloud-init uses it to run scripts at boot time or otherwise customize the instance, and the official Amazon Linux AMI uses a port of cloud-init. Cloud-init is not yet available on all distros, and earlier versions of Eucalyptus suffered from a bug which prevented cloud-init from working properly (instances would be delayed at boot time), so we decided to use a much simpler rc.local script to provide a subset of its functionality. You can find information on the rc.local script we use, and other information about images, on projects.eucalyptus.com. With these starter images in our production cloud, we set out to host planet in an instance.
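The essence of such an rc.local hook can be sketched as follows: pull the user-data at boot and execute it if it looks like a shell script. This is illustrative only; the script we actually use is on projects.eucalyptus.com.

```shell
#!/bin/sh
USER_DATA_URL="http://169.254.169.254/latest/user-data"

run_user_data() {
    # $1: file holding the user-data payload; execute it only if it
    # starts with a shebang
    head -c 2 "$1" | grep -q '^#!' && sh "$1"
}

# In rc.local one would do something like:
#   curl -s -o /tmp/user-data "$USER_DATA_URL" && run_user_data /tmp/user-data
```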

Our first attempt to run a service in the cloud emulated the boot-from-EBS capability (see our issue tracker in the cloud), but this time we changed tack completely. We pushed all of planet's configuration into a Walrus bucket, then we created a script to be used when starting an instance.

You can inspect the script on projects.eucalyptus.com under the Cloud Application Architect area. The script gets all the css, ini and other needed files from a Walrus bucket, installs nginx and other needed packages, and sets up a cron job to re-read the configuration files at set intervals, thus allowing for the dynamic configuration of planet. When we changed the logo, it was just a matter of uploading the new css and png files, and presto! Planet got a new skin.
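Condensed to its essentials, a bootstrap script along these lines might look like this sketch (the bucket URL is the one from the examples later in this post; file names, paths and the refresh interval are illustrative, not our exact values):

```shell
BUCKET="http://173.205.188.8:8773/services/Walrus/planet"

fetch_config() {
    # Pull the planet configuration files out of the Walrus bucket
    for f in planet.ini planet.css logo.png; do
        wget -q -O "/srv/planet/$f" "$BUCKET/$f"
    done
}

install_cron() {
    # Re-read the configuration at set intervals, so that uploads to
    # the bucket show up without touching the instance
    target="${1:-/etc/cron.d/planet-refresh}"
    echo "*/15 * * * * root /usr/local/bin/planet-refresh" > "$target"
}

# apt-get install -y nginx            # plus the other needed packages
# fetch_config && install_cron
```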

While this setup seems complicated at first glance, it is fairly easy: consider that it's all done with a few lines of a shell script. It also allows for easy failure recovery, since restarting planet is a matter of two euca2ools commands. If the instance were to fail we can issue:

euca-run-instances -f planet.sh -k my-ssh-key emi-F3DF1488
euca-associate-address -i i-xxxxx 173.205.188.124

and since there is no persistent data on the instance, that's all we need to do. To apply a security update, we simply spin up a new instance with the same script (the script we use upgrades to all the latest packages at start up), disassociate the public IP from the old instance, and associate it with the new instance. And of course we can terminate the old planet once the elastic IP has been moved.
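That whole rotation fits in a hypothetical dry-run helper like the one below: it only prints the euca2ools commands one would run (EIP and EMI taken from the example above, instance IDs made up):

```shell
EIP=173.205.188.124
EMI=emi-F3DF1488

rotate_plan() {
    # $1: old instance id, $2: new instance id (from euca-run-instances)
    cat <<EOF
euca-run-instances -f planet.sh -k my-ssh-key $EMI
euca-disassociate-address $EIP
euca-associate-address -i $2 $EIP
euca-terminate-instances $1
EOF
}

# rotate_plan i-old1234 i-new5678
```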

To customize the planet we upload the new version of the specific file to the Walrus bucket, and to do so we use a version of s3curl modified to allow for endpoints other than S3. For example, to add a new feed we first get the current planet.ini:

wget http://173.205.188.8:8773/services/Walrus/planet/planet.ini

modify it to add the new feed, and upload it back into the bucket

s3curl --id graziano --acl public-read --put planet.ini -- http://173.205.188.8:8773/services/Walrus/planet/planet.ini


and wait for the cron-job to execute. Easy, isn't it?
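The round trip lends itself to a tiny wrapper. The helper below is hypothetical; it assumes planet's usual ini layout, where each feed is a [feed-url] section with a name = ... line:

```shell
add_feed() {
    # $1: ini file, $2: feed URL, $3: display name
    printf '\n[%s]\nname = %s\n' "$2" "$3" >> "$1"
}

# wget http://173.205.188.8:8773/services/Walrus/planet/planet.ini
# add_feed planet.ini http://example.com/blog/feed "Example Blog"
# s3curl --id graziano --acl public-read --put planet.ini -- \
#     http://173.205.188.8:8773/services/Walrus/planet/planet.ini
```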

We have a few more scripts which we use for our production services on projects.eucalyptus.com, and if you have a similar script you want to share, let us know and we'll add it, or send a GitHub merge request. The recipes are ready to go, just add Eucalyptus.