
Posts Tagged ‘openmosix’

Proof of Concept: Mixing Clustering with Virtualization #openmosix #linux #vserver #cluster


Cluster built from old and donated hardware

For over 7 years I’ve been thinking about the possibilities of mixing clustered computing with virtualization. Distributed computing is in essence combining multiple physical computers to create one big virtual computer; virtualization is in essence creating multiple smaller computers within one physical computer.

I originally thought of creating a Beowulf cluster, for which I had ordered a CD back in 1998, although my experience with Beowulf was practically zero. So I decided to start a new homegrown project and investigated the existing tools which could be used to implement this. After due diligence I took two platforms which I liked for their potential and FOSS nature, openMosix and Linux-VServer, and integrated their kernel patches to make it possible to run both together. I was primarily going to use Gentoo for both the hosts and the guest servers, with the exception of one Red Hat, one Mandrake (Mandriva) and one Debian guest for the compiler packaging farm. My plan was to have a heterogeneous cluster with an underlying clustered filesystem.
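To give an idea of what that integration involved, here is a minimal sketch of applying both patch sets to a single kernel tree. The kernel and patch versions are assumptions for illustration, not necessarily the exact ones I used; in practice the two patches touched some of the same files, and the rejects had to be merged by hand.

    # Hypothetical sketch: merging the openMosix and Linux-VServer
    # patches into one 2.4-era kernel tree (versions illustrative).
    cd /usr/src
    tar xjf linux-2.4.26.tar.bz2
    cd linux-2.4.26
    zcat ../openMosix-2.4.26-1.gz | patch -p1
    patch -p1 < ../patch-2.4.26-vs1.28.diff   # expect rejects; merge *.rej by hand
    make menuconfig
    make dep bzImage modules modules_install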

At the time distributed filesystems were not my forte, and I wanted a working proof of concept which I could use without spending too much time on getting the filesystem working. So I chose NFS, with locking disabled. This did mean that I needed a master server which could do the primary updating of the NFS share. This master server would double as my Gentoo package server and my syslog server, which avoided me needing to make any changes in the VServers themselves. A drawback of a heterogeneous cluster is the need to compile for the lowest-common-denominator CPU, but that was made up for by the gain of sharing binaries, the guarantee that the whole cluster would be able to run the distributed threads, and the investment cost, which was close to zero as I’d saved most of the computers from the garbage.
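Roughly, the setup looked like this; the export path, subnet, hostname and compiler flags are illustrative, not the real ones:

    # On the master, /etc/exports: share the cluster filesystem.
    /srv/cluster  192.168.0.0/24(rw,no_root_squash,sync)

    # On each node, /etc/fstab: mount it with NFS locking disabled.
    master:/srv/cluster  /srv/cluster  nfs  rw,nolock,hard,intr  0 0

    # Gentoo make.conf on the package server: build for the oldest CPU
    # in the cluster so the shared binaries run everywhere.
    CFLAGS="-O2 -march=i586 -pipe"
    CXXFLAGS="${CFLAGS}"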

In the end I was running 6 hosts, each with between 2 and 5 guests.

Image source: Martinez Zea


Written by Daniël W. Crompton (webhat)

February 16, 2011 at 12:17 pm

Rebuilding a kernel on a remote host #vserver #kernel #linux


I prefer Linux-VServer, and consider that I have a reasonable amount of knowledge of the codebase. Enough at least that I’ve written a few patches for a VServer-openMosix kernel. There’s just one thing that I haven’t been able to do yet: set up a VServer on a remote host. So how can I do that?
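For context, the end goal was roughly the following, in the util-vserver syntax of that era; the guest name, address and distribution are placeholders:

    # Hypothetical target: build a Debian guest on the remote host.
    vserver guest1 build -m debootstrap --hostname guest1 \
        --interface eth0:192.168.0.42/24 -- -d sarge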

Firstly I had to have a remote machine, so I couldn’t cheat. It’s easier to cheat if you have physical access to the machine. So I borrowed a virtual machine (VMware) running on the server of a friend, who started up a vanilla Ubuntu Edgy 6.10, which is based on Debian. That is where I was faced with my first problem: Linux Logical Volume Manager (LVM). It’s not that something like that would usually be such a problem; I just hadn’t built one before, so I was unsure how to configure and use it. Luckily I found an article on O’Reilly‘s LinuxDevCenter called “Managing Disk Space with LVM”.
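The basic workflow from that article comes down to a handful of commands; the device, group and volume names here are illustrative:

    # Mark a spare partition as a physical volume, pool it into a
    # volume group, then carve out a logical volume and use it.
    pvcreate /dev/hda3
    vgcreate vg0 /dev/hda3
    lvcreate -L 4G -n vservers vg0
    mkfs.ext3 /dev/vg0/vservers
    mount /dev/vg0/vservers /vservers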

I must say LVM was slightly difficult to get set up; just recompiling a vanilla Ubuntu kernel with “Multiple devices driver support (RAID and LVM)” and “Device mapper support” in addition to the VServer patches wasn’t enough. I soon found out that I couldn’t patch the Debian kernel, as it produced too many errors. And the vanilla kernel was giving me problems, as I just wasn’t able to mount the “[…] several nicely named logical volumes […]”. Even the article “Linux-Vserver With LVM And Quotas” wasn’t helping me.
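For reference, those two options correspond to the following configuration symbols (as they are named in 2.6-era kernel trees):

    # In the kernel .config:
    CONFIG_MD=y          # Multiple devices driver support (RAID and LVM)
    CONFIG_BLK_DEV_DM=y  # Device mapper support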

My main problem was that it didn’t boot from the LVM partition. I’ll have to explain: Debian boots from the LVM partition by default and uses an initrd, which means you have to create an image which is loaded during boot. This image contains a root filesystem with the LVM tools and BusyBox to supply the mount and boot programs. The initrd was the actual problem; I was just getting the following message:

Unpacking initramfs... <0>Kernel panic - not syncing: no cpio magic
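“No cpio magic” means the kernel could not recognise the image as a gzip-compressed cpio archive. One way to check, and to rebuild the image with Ubuntu’s initramfs-tools; the filename and version are placeholders (Edgy shipped a 2.6.17 kernel):

    # The image should be a gzip'd cpio archive on a 2.6 kernel.
    file /boot/initrd.img-2.6.17
    zcat /boot/initrd.img-2.6.17 | cpio -it | head

    # Rebuild it for the given kernel version.
    mkinitramfs -o /boot/initrd.img-2.6.17 2.6.17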

I asked my friend to reinstall without LVM. And obviously the vanilla kernel worked fine when compiled, although I got some errors with my VServer partition:

Checking file systems...
fsck 1.39
fsck.ext3: Bad magic number in super-block while trying to open /dev/hda6
/dev/hda6:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

fsck died with exit status 8

* File system check failed.
  A log is being saved in /var/log/fsck/checkfs if that location is writable.
  Please repair the file system manually.

* A maintenance shell will now be started.
  CONTROL-D will terminate this shell and resume system boot.

That turned out to be just a spelling mistake in /etc/fstab.
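As an illustration, a one-character mistake in the device field is enough to produce exactly this failure; both lines below are hypothetical:

    # /etc/fstab: pointing the entry at the wrong partition makes
    # fsck.ext3 report "Bad magic number in super-block" at boot.
    /dev/hda6  /vservers  ext3  defaults  0 2   # hda6 was not the ext3 partition
    /dev/hda7  /vservers  ext3  defaults  0 2   # corrected entry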

As I look back – this was in December – I should have spent more time on the initrd. I would have loved to get it working, but under the pressure of time it wasn’t possible. I wanted to use a second machine to create a serial connection to the first. (Remote Serial Console HOWTO)
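The serial console setup from that HOWTO amounts to roughly this; the port and speed shown are the usual defaults, assumed for illustration:

    # Bootloader: send the kernel console to the serial port as well
    # as the local display.
    #   console=tty0 console=ttyS0,9600n8

    # /etc/inittab on the target: run a login getty on the serial port.
    #   T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100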

Originally posted here.


Written by Daniël W. Crompton (webhat)

June 15, 2010 at 8:43 am
