Posts Tagged ‘linux’
Today I spent my day at the Red Hat Open Cloud Tour; this is what happened:
Just heard the opening by Rajiv Sodhi, who is here despite having a baby due any moment.
Margaret J. Rimmler’s keynote was interesting. One of the key takeaways was openness: Red Hat customers should have the choice to remain portable and replace Red Hat, if that is what they want.
For over 7 years I’ve been thinking about the possibilities of mixing clustered computing with virtualization. Distributed computing is in essence combining multiple physical computers to create one big virtual computer; virtualization is in essence creating multiple smaller computers in the context of one physical computer.
I originally thought of creating a Beowulf cluster, for which I had ordered a CD in 1998, although my experience with Beowulf was practically zero. So I decided to start a new homegrown project and investigated the existing tools which could be used to implement this. After due diligence I took two platforms which I liked for their potential and FOSS nature, openMosix and Linux-VServer, and integrated their kernel patches to make it possible to run both together. I was primarily going to use Gentoo for both the hosts and the guest servers, with the exception of one Red Hat, one Mandrake (Mandriva) and one Debian guest for the compiler packaging farm. My plan was to have a heterogeneous cluster with an underlying clustered filesystem.
At the time distributed filesystems were not my forte, and I wanted a working proof of concept which I could use without spending too much time on getting the filesystem working. So I chose NFS, with locking disabled. This did mean that I needed a master server which could do the primary updating of the NFS share. This master server would double as my Gentoo package server and my syslog server, which avoided me having to make any changes in the VServers themselves. A drawback of a heterogeneous cluster is the need to compile for the lowest-common-denominator CPU, although the gain of sharing binaries, the guarantee that the whole cluster would be able to run the distributed threads, and the investment cost, which was close to zero as I’d saved most of the computers from the garbage, made up for that.
In the end I was running 6 hosts, each with between 2 and 5 guests.
Image source: Martinez Zea
“What I did for a project I was working on was I created an LD_PRELOAD library which overloaded the I/O operations and used gz and bz2. This could easily be adapted to overload with encryption library functions rather than compression libraries. You can also use this to keep the bash history in memory using a shared memory location.”
What I did which inspired the message above was to replace a number of functions – including read, write and lseek – with custom versions. The underlying custom code fingerprinted the file – using the magic file – to discover which compression mechanism an existing file was using, and when creating a new file it would pick the compression based on the value of an environment variable. The file was never extracted to disk and was only held in memory, as these files were mostly streamed to and from disk compressed, which means that with a little tweaking these could include a stream cipher, provided the key is long enough to avoid stream-cipher attacks.
For completeness I’ll add here that the code supported the formats listed below, and a number of other historic formats and others that I don’t recall:
- pkzip (deflate)
Somebody else’s LD_PRELOAD examples can be found here: LD_PRELOAD fun
Image source: John Davey
I prefer Linux-VServer, and consider that I have a reasonable amount of knowledge of the codebase – enough at least that I’ve written a few patches for a VServer-openMosix kernel. There’s just one thing that I haven’t been able to do yet: set up a VServer on a remote host. So how can I do that?
Firstly I had to have a remote machine, so I couldn’t cheat – it’s easier to cheat if you have physical access to the machine. So I borrowed a virtual machine (VMware) running on the server of a friend; he started up a vanilla Ubuntu Edgy 6.10, which is based on Debian. That is where I was faced with my first problem: the Linux Logical Volume Manager (LVM). It’s not that something like that would usually be such a problem – I just hadn’t built one before, so I was unsure how to configure and use it. Luckily I found an article on O’Reilly’s LinuxDevCenter called “Managing Disk Space with LVM.”
I must say LVM was slightly difficult to get set up; just recompiling a vanilla Ubuntu kernel with “Multiple devices driver support (RAID and LVM)” and “Device mapper support” in addition to the VServer patches wasn’t enough. I soon found out that I couldn’t patch the Debian kernel, as it produced too many errors. And the vanilla kernel was giving me problems, as I just wasn’t able to mount the “[…] several nicely named logical volumes […]”. Even the article “Linux-VServer With LVM And Quotas” wasn’t helping me.
My main problem was that it didn’t boot from the LVM partition. I’ll have to explain: Debian boots from the LVM partition by default and uses an initrd to do so, which means that you have to create an image which is loaded during boot. This image contains a root filesystem with the LVM tools and BusyBox to supply the mount and boot programs. The initrd was the actual problem; I was just getting the following message:
unpacking initramfs... <0> Kernel panic - not syncing: no cpio magic
I asked my friend to reinstall without LVM, and obviously the vanilla kernel worked fine when compiled, although I got some errors with my VServer partition:
Checking file systems...
fsck 1.39
fsck.ext3: Bad magic number in super-block while trying to open /dev/hda6
The superblock could not be read or does not describe a correct ext2 filesystem.
If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
fsck died with exit status 8
* File system check failed.
A log is being saved in /var/log/fsck/checkfs if that location is writable.
Please repair the file system manually.
* A maintenance shell will now be started.
* CONTROL-D will terminate this shell and resume system boot.
That was just a spelling mistake in
As I look back – this was in December – I should have spent more time on the initrd. I would have loved to get it working, but under the pressure of time it wasn’t possible. I wanted to use a second machine to create a serial connection to the first. (Remote Serial Console HOWTO)
Originally posted here.
On Dark Reading, site editor Tim Wilson attacks Linus Torvalds for making the comment “To me, security is important. But it’s no less important than everything else that is also important!” He is correct in his arguments against Linus’ point of view, with the exception of this statement:
If I build a house that is unsafe, it threatens the inhabitants. If I build a bank that is insecure, it threatens not only the welfare of the business, but the lives of thousands of customers.
This is a fallacious statement; an insecure bank is far less of a problem than he thinks.
Consider the current banking crisis and the number of times security problems have dumped large quantities of credit card numbers on the street: most reasonable banks have a number of backups, and when they didn’t, national banks have bailed them out. Much of the risk a bank’s customers face is mitigated, transferred, and even budgeted for. How can you transfer the risk of shoddy construction once the building has collapsed on top of you? An unsafe house is almost always more of a hazard than bank insecurity – just ask the Chinese earthquake victims.
Tim, if you are going to use a metaphor, try to use one that isn’t so obviously flawed.