Cymen Vig

Software Craftsman

Debian testing, Xen and "Error: Boot loader didn't return any data!"

I wanted to use Debian testing in a Xen DomU, but after upgrading (including grub), `xm create` no longer worked. It failed with the error:

Error: Boot loader didn't return any data!

I followed the suggestion of running `pygrub /path/to/xen/disk`, but each of my DomUs gets a chunk of LVM disk. Within the LVM logical volume, the DomU OS partitions the disk however it wants. I needed to get at one of the partitions inside the LVM volume from the Dom0 (Xen host) OS. To do this, get kpartx and run it like so:

kpartx -a /dev/physical_volume_name/logical_volume_name

After running that, I found my disk partitions from the DomU logical volume at /dev/mapper/logical_volume_name1, 2, 3, etc. So I could then run:

pygrub /dev/mapper/logical_volume_name1

That errored out again, this time at line 68 of GrubConf.py:

def set_disk(self, val):
    val = val.replace("(", "").replace(")", "")
    self._disk = int(val[2:])

I added a couple of Python print statements to dump the value of val and found that it was being set to an empty string. Then I simply mounted the now-accessible root partition:

mount -t ext3 /dev/mapper/logical_volume_name1 /mnt/tmp

Editing …/boot/grub/menu.lst revealed that the Debian upgrade had spewed some cruft into the file. Cleaning that up resulted in a working DomU. The upgrade notes probably mentioned this but I glazed over them. I suspect I'm not the only one, so it's worth a check.

Finding kpartx made it worthwhile…

Comments and Reactions

Reading Update

I’ve been on a reading spurt lately. I’m going to jot down some quick notes without going very deep.

“Maus” by Art Spiegelman: Great.

“Spook Country” by William Gibson: Better than “Pattern Recognition” but I suspect that is due to getting used to the characters. I began to see more in “Pattern Recognition” after reading “Spook Country.” I still think Gibson must have struggled a lot with Pattern Recognition (and it shows in the book) but after reading Spook, I’m more optimistic.

“Marooned in Realtime” by Vernor Vinge: I found another favorite author and it’s Vinge. Really enjoyable. “A Fire Upon the Deep” was great but I enjoyed Marooned even more. Currently reading Vinge’s “A Deepness in the Sky” and finding it excellent.

“Quicksilver” by Neal Stephenson: I ran out of library time due to poor planning. Going to return to it shortly.

Comments and Reactions

Published PHPGatewayInterface to github...

I wrote a hopefully generic class in PHP to proxy CGI scripts. The idea is that sometimes one wants to get the output of a CGI script and do various things with it before displaying it. In my particular case, I wanted to wrap CVSWeb so the output could be put into an HTML DIV.
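The same idea can be sketched in a few lines of Python (for illustration only – this is not the PHP class, and `proxy_cgi` is a name I made up): run the CGI-style program, capture its stdout, and transform it before serving.

```python
# Illustration of the proxy-a-CGI-script idea, not the actual
# PHPGatewayInterface code. proxy_cgi and css_class are hypothetical names.
import subprocess

def proxy_cgi(argv, css_class="cvsweb"):
    """Run a CGI-style command, capture its output, and wrap it in a DIV."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return f'<div class="{css_class}">{result.stdout}</div>'

# Using `printf` as a stand-in for a real CGI script:
print(proxy_cgi(["printf", "hello from cgi"]))
# → <div class="cvsweb">hello from cgi</div>
```

A real version also has to separate the CGI headers from the body before wrapping, which is exactly the sort of detail a generic class needs to handle.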

The code is published on github as PHPGatewayInterface.

Comments and Reactions

MythBuntu and Pinnacle 800i DVB card...

  1. Add dvb-fe-xc5000-1.6.114.fw to /lib/firmware

  2. Install Mercurial (apt-get install mercurial)

  3. Compile and install v4l-dvb (directions)

Comments and Reactions

MediaWiki and Google Search Appliance (GSA)

The Google Search Appliance advertises, via the Accept-Encoding part of the HTTP request header, that it can handle gzip content. However, this does not appear to work in practice, at least with gzip-encoded content coming from MediaWiki.

The HTTP request header looks like this:

GET
HOST: www.xyz.com
ACCEPT: text/html,text/plain,application/*
FROM:
USER-AGENT: gsa-crawler (Enterprise; … ; …)
ACCEPT-ENCODING: gzip

The solution is to remove the gzip option from Accept-Encoding, which can be done as follows:

  1. Go to the GSA admin interface.

  2. Crawl and Index->HTTP Headers

  3. Set the field “Additional HTTP Headers for Crawler” to “Accept-Encoding:” (i.e. with an empty value).

The HTTP request header now looks like this:

GET
HOST: www.xyz.com
ACCEPT: text/html,text/plain,application/*
FROM:
USER-AGENT: gsa-crawler (Enterprise; … ; …)
ACCEPT-ENCODING:

Solution source: a posting in the Google Search Appliance/Google Mini group, which suggested a value like “Accept-Encoding: foo”. I found that simply setting the field to “Accept-Encoding:” worked just fine – no need to include “foo”.

Comments and Reactions

Minimizing Windows Server 2003 CPU usage as a DomU

This is a work in progress and I’m still testing some things but here are a few ideas:

  1. Use the GPL Paravirtualization drivers

  2. Disable the login screen saver

Comments and Reactions

Xen and DomU Clone

This basic approach works:

  1. Power down target system.

  2. Create another logical volume the same size as the target system.

  3. dd if=/dev/$PV/$TARGET of=/dev/$PV/$CLONE bs=1M

  4. Copy Xen config and modify with clone details

  5. Power up clone, change hostname and any other relevant details

  6. Power up target

For a 32 GB LV, dd took just over 11 minutes to dump the data from the target volume to the clone volume on my hardware. This was for a Windows Server 2003 clone using DHCP, so the only thing I changed was the host name (so far).
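A quick back-of-envelope check of what that timing implies (assuming "just over 11 minutes" is roughly 660 seconds and 32 GB means 32 × 1024 MiB, since dd ran with bs=1M):

```python
# Rough sequential throughput implied by the dd timing quoted above.
# Assumption: 32 GB ~= 32 * 1024 MiB and 11 minutes = 660 seconds.
size_mib = 32 * 1024
seconds = 11 * 60
throughput = size_mib / seconds
print(f"{throughput:.1f} MiB/s")  # → 49.6 MiB/s
```

So roughly 50 MiB/s, a plausible sequential rate for spinning disks of that vintage.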

I’d think the LVM snapshot method might be able to do this more intelligently but I couldn’t quite grok it right away and this method worked.

Comments and Reactions

Xen and Windows Server (2003)

I got a Windows Server 2003 Xen DomU up and running within a couple of hours. The configuration was fairly simple, but the issue I ran into with Debian Lenny was that Xen defaults to xenbr0 as the network bridge interface while eth0 is the correct one to use. After modifying /etc/xen/xend-config.sxp (actually, I first stated the bridge explicitly in my DomU configuration), networking worked and the VM fired up with the regular install screens. The integration with VNC works very well.

The next step, after all the OS updates, was to install the Windows paravirtual drivers. I used gplpv_fre_wnet_x86_0.10.0.69.msi and the VM rebooted after the install without issue, except for the IntelIde driver/service failing to load (my guess is it just needs to be disabled – not critical as far as I can tell).

So far, with my DomU installs, I’ve been doing it the hard way by which I mean I’ve been reinstalling from scratch with each new DomU. Next on the list is to clone a VM using LVM snapshots which should be much easier (and reduce all that update downloading and configuration).

Update: Indeed, the IntelIde startup error is apparently fixed (hidden properly) in Xen 3.4!

Comments and Reactions

Xen, LVM and DomU ext3 resizing...

I’m using a Debian 5 (Lenny) host with Xen 3.2 from packages with software RAID and LVM. At the moment, all of my guests (DomU) are also Debian 5. To resize a guest filesystem, in Dom0 resize the logical volume:

lvextend -L +100G /dev/$PV/$LV

In the guest (DomU), resize the partition by deleting it and creating it again. The trick is to make sure the first block stays where it was. With the default steps of the Debian netboot installer’s “all in one” option, these partitions are created:

/dev/xvda1 as / (first primary partition)
/dev/xvda2 as extended partition
/dev/xvda5 as swap

So I unmounted the swap (swapoff /dev/xvda5), deleted all of the partitions, recreated /dev/xvda1 at the new size (marked bootable, leaving just enough room for the swap partition), and added the small swap partition back. Then I edited /etc/fstab to point to the new device (I changed swap to be on /dev/xvda2), rebooted, and ran “resize2fs /dev/xvda1”. I tried doing this without the reboot but resize2fs didn’t want to see the new partition sizes. I used cfdisk for the partitioning, so maybe it would work with fdisk or parted.

I did forget one detail – I rebooted the DomU after resizing the LVM volume in Dom0. I mention it just in case that is required. I’ll find out next time I need to add more space to a logical volume.

Comments and Reactions

MySQL and Round Robin Database (RRD)

While looking for a MySQL RRD storage engine, I came across Round-Robin Database Storage Engine (RRD) (pdf), which describes how to set up a MySQL table to act as an RRD. The PDF appears to have been created in February of 2007, but the benchmark result at the end – 600 inserts/second on a 1350 MHz AMD CPU – suggests the article may be older.

I replicated the configuration and tested it on my laptop (5400 RPM disk, 2.4 GHz Intel T7700 CPU). With the MyISAM engine, a brief test of about 50k inserts resulted in ~7000 inserts/second. But the 25m max rows meant the trigger functionality (the part that makes the table behave like an RRD) wasn’t really exercised.

To make a more interesting test, I recreated the database with max rows set to 25920 (or one sample every 5 minutes for 90 days):
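That row count is easy to verify (a tiny arithmetic aside):

```python
# One sample every 5 minutes for 90 days.
samples_per_day = 24 * 60 // 5   # 288 samples/day
print(samples_per_day * 90)       # → 25920
```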

CREATE TABLE statistic_rrd (
    rrd_key INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
    , attribute_key INT UNSIGNED NOT NULL DEFAULT '0'
    , start_utime INT UNSIGNED NOT NULL DEFAULT '0'
    , end_utime INT UNSIGNED DEFAULT NULL
    , logging_interval INT UNSIGNED NOT NULL DEFAULT '0'
    , value BIGINT UNSIGNED NOT NULL DEFAULT '0'
    , UNIQUE KEY (attribute_key, start_utime)
    , KEY start_time (start_utime)
    ) ROW_FORMAT = FIXED
    , MAX_ROWS = 25920
;

CREATE TABLE statistic_rrd_key (
    rrd_key BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
)
;
INSERT INTO statistic_rrd_key VALUES (0);

DROP TRIGGER IF EXISTS statistic_rrd_ins;
DELIMITER $$
CREATE TRIGGER statistic_rrd_ins
BEFORE INSERT ON statistic_rrd
FOR EACH ROW
BEGIN
    SET @rrd_key = 0;
    SET @rows = 25920;
    -- PK not supplied (comes through as 0)
    IF NEW.rrd_key = 0 THEN
        SELECT rrd_key + 1
            FROM statistic_rrd_key
            INTO @rrd_key;
        SET NEW.rrd_key = @rrd_key;
    END IF;
    IF (NEW.rrd_key % @rows) THEN
        SET NEW.rrd_key = NEW.rrd_key % @rows;
    ELSE
        SET NEW.rrd_key = @rows;
    END IF;
    UPDATE statistic_rrd_key SET rrd_key = NEW.rrd_key;
END;
$$
DELIMITER ;

Results:

25920      ~7k inserts/second (on empty database)
25920 * 5  ~6k inserts/second
25920 * 10 ~6k inserts/second

The insert rate was CPU limited, with one of the two cores at 100% and the hard drive rarely being written to. The total number of rows in the table was 25,243 at the end, which suggests my idea of capping at 25920 didn’t work out exactly as planned (I didn’t examine the trigger to determine exactly how it works).
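One way to see why the table can never grow past MAX_ROWS is to simulate the trigger’s key arithmetic in isolation (a rough Python translation of the IF/ELSE above, not a byte-for-byte port):

```python
# Simulate the rrd_key wraparound from the statistic_rrd_ins trigger.
ROWS = 25920  # matches MAX_ROWS in the table definition

def next_rrd_key(last_key):
    """Advance the counter and wrap it into the range 1..ROWS."""
    key = last_key + 1
    return key % ROWS if key % ROWS else ROWS

last = 0
seen = set()
for _ in range(ROWS * 3):   # "insert" three times the table's capacity
    last = next_rrd_key(last)
    seen.add(last)

print(len(seen), min(seen), max(seen))  # → 25920 1 25920
```

Every insert past the 25,920th lands on an existing primary key, so REPLACE overwrites an old slot instead of growing the table. The shortfall to 25,243 rows would then come from REPLACEs that removed a row before the counter wrapped a full cycle – e.g. collisions on the (attribute_key, start_utime) unique key – though that is speculation on my part.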

After converting both tables to InnoDB, the results were:

25920      ~630 inserts/second (empty db)
25920 * 5  ~660 inserts/second

With the hard drive thrashing, the insert rate was clearly limited by the transaction log. This time, though, both cores hovered around 30% utilization.

Enabling some sane InnoDB performance options per Innodb Performance Optimization Basics at the MySQL Performance Blog:

[mysqld]
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
innodb_log_buffer_size = 4M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_file_per_table

25920      ~1580 inserts/second
25920 * 5  ~1580 inserts/second

Now the disk wasn’t thrashing as much, but the CPU cores were switching back and forth between 80-90% and 60-70%, which suggested contention on the rrd_key counter. One approach would be to make that table (with just a single row) use the MEMORY engine. I made that change and:

25920 * 10  ~1730 inserts/second

But the insertion code is also naive in that it makes a SQL call for each insertion. A more realistic scenario would batch roughly 500 inserts per call. But that doesn’t play well with the trigger…
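The batching itself is straightforward – the hard part is the trigger. A sketch of building one multi-row REPLACE (the table and column names come from the schema above; `build_batch_replace` is my own hypothetical helper):

```python
# Build a single multi-row REPLACE instead of one SQL call per sample.
# Table/column names come from the statistic_rrd schema; the helper is
# hypothetical. Values are inlined for illustration only -- real code
# should use placeholders / executemany.
def build_batch_replace(samples, table="statistic_rrd"):
    """samples: iterable of (attribute_key, start_utime, logging_interval, value)."""
    rows = ", ".join(
        f"({attr}, {start}, NULL, {interval}, {value})"
        for attr, start, interval, value in samples
    )
    return (
        f"REPLACE INTO {table} "
        "(attribute_key, start_utime, end_utime, logging_interval, value) "
        f"VALUES {rows};"
    )

batch = [(i, 1234567890, 100, 42) for i in range(500)]
sql = build_batch_replace(batch)
print(sql.count("(") - 1)  # → 500 value tuples (plus the column list)
```

Note that the BEFORE INSERT trigger still fires once per row, so the shared-counter bookkeeping is unchanged – which is presumably where the doesn’t-play-well-with-the-trigger problem lives.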

According to hdparm, the write speed of

insert.php v1:

<?php

try {
    $db = new PDO('mysql:dbname=rrd;host=localhost', 'username', 'password');
    echo "PDO connection object created\n";
} catch (PDOException $e) {
    echo $e->getMessage() ."\n";
    exit(1); // no connection; nothing more to do
}

$sql = <<<EOD
REPLACE INTO statistic_rrd
    (attribute_key, start_utime, end_utime, logging_interval, value)
VALUES
    (ROUND(RAND()*100000), UNIX_TIMESTAMP(NOW()), NULL, 100, 123456789)
;
EOD;

try {
    $rows_max = 25920*10;
    $start_time = microtime(true);
    $count = $rows_max;
    while ($count) {
        --$count;
        $db->exec($sql);
    }
    $end_time = microtime(true);
    $total_time = $end_time - $start_time;
    echo ($rows_max / $total_time) ." inserts/sec\n";
    echo "rows_max: $rows_max\n";
    echo "total_time: $total_time\n";
} catch (PDOException $e) {
    echo $e->getMessage() ."\n";
}

if ($db) $db = null;
?>
Comments and Reactions

Regular Expression Tools


Comments and Reactions

William Gibson's "Pattern Recognition"

In short, this book falls flat. My impression is that Gibson wants to jettison his past audience for the literary establishment. The end result is prose that is wooden instead of the sharp metal of his earlier works. Jack Womack should have let Gibson put it to sleep. Some observations:

  • After Cayce had the locks replaced, Gibson mentions she forgets to lock the rear door. Why interrupt the flow to dangle a tidbit that is never used later? And no, I don’t buy the idea that this is meant to interrupt the reader’s own pattern recognition. Slack editing?

  • Gibson has 356 pages to introduce the reader to Cayce. At the end of those 356 pages, the reader is left with little more than a shell of a character. We know about her dad, we know about her phobias, we know about her preferences in clothes. But Cayce never quite becomes familiar.

  • The book reads as one beginning, a bunch of jumping around in the middle, and one ending. The flow of the plot is regular but lacking. If I were to take a guess, I’d say (as Gibson hints in the end note) that this book was a struggle. The narrative flow of a book written in a relatively short period of time with intense energy is missing.

  • Gibson trots out the usual characterizations of geeks. Can we please have something more insightful? This supports my argument that he no longer appreciates the members of the audience that began reading his books long ago.

The reason I bother to write this review is that Gibson can write a good novel and has done so a number of times. I hope he listens to his inner critic on future works. If you don’t have a vision to share with the reader, put down the pen – at least until the vision comes back! It was frustrating to write this review as it was enjoyable to read.

Comments and Reactions