Installing the BSDs: impressions

I have recently had the chance to play again with the three BSDs, on the pretext of testing the portability of a piece of software. I was able to install all of them in separate virtual machines under VMWare Server. I'm summarizing my impressions below. Chiefly, I was surprised at the number of (admittedly minor) annoyances that NetBSD gave me (I recall having no such problems a couple of years ago when I first tried this OS).


FreeBSD

  • Most desktop-friendly, easiest to install
  • The XFree86-to-Xorg transition still causes problems in 6.3 (something the Linux distros solved long ago)


NetBSD

  • Less friendly installer than FreeBSD's, but still quite usable
  • Make sure to install the text and man sets
  • pkg_add doesn't quite work with some mirrors (e.g. ftp.at.netbsd.org) because it insists on looking for packages with a complex wildcard pattern (package-*.t[bg]z). Some FTP servers don't support this pattern and return no results. Bottom line: pkg_add bash sometimes fails, but pkg_add bash-3.2.25 always works.
  • IPv6 is, sadly, enabled in most applications, and for the most part causes only delays and error messages; most annoyingly so with pkg_add
  • Furthermore, the community seems to think IPv6 is the best thing since sliced bread. Search for "disable ipv6 netbsd": you will find mostly unanswered forum messages, and when they are answered, the answer is along the lines of "why would you need to do this?"
  • Under VMWare, you must wait until NetBSD finishes booting before pressing CTRL+ALT to release the mouse (and do other things in the host OS); otherwise, the console dies.
  • Seems the easiest to port to an embedded system (and runs on the largest number of platforms)
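A workaround for the wildcard problem above is to point PKG_PATH at a mirror and ask for fully-versioned package names. The mirror URL below is only illustrative; substitute your NetBSD release and architecture:

```shell
# Illustrative mirror path: adjust the release and architecture to your install.
export PKG_PATH="ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/4.0/All/"

pkg_add bash           # may fail: some FTP servers reject the package-*.t[bg]z wildcard
pkg_add bash-3.2.25    # a fully-versioned name avoids the wildcard entirely
```

The fully-versioned form is worth remembering even on well-behaved mirrors, since it also pins exactly which binary you get.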


OpenBSD

  • Somewhat resembles NetBSD
  • Least friendly installer ("dumb terminal" style, not even curses-enhanced)
  • You must install the xbase set (even if you're not planning to use X) or most packages (including bash) won't install later
  • The default GCC is the oldest of the three (3.3.5); newer versions are available separately, but seem to come with different features (e.g. the ProPolice stack protector is not enabled)
  • Large number of security features that I did not try out

Ubuntu: make the world a better place by holding users hostage?

Note: to the many people who just want to fix their problems and don't care about politics — scroll to the end of this post.

As I was having trouble getting the VMWare MUI to work on Ubuntu, I came upon a bugzilla thread that solved my original problems, but made me very concerned about the Ubuntu developer team. The discussion highlights serious problems with their mentality, priorities, and attitude.

The controversy centers around the default Bourne shell, /bin/sh, which executes scripts in Linux (expert readers may skip this paragraph and the next two). For as long as anyone can remember, Linux distros have provided GNU Bash, an "embrace and extend" version of the original sh (the behaviour of sh is actually standardised in POSIX 1003.2). So /bin/sh was a symlink to /bin/bash, even though bash has extensions that would not work in a standards-compliant sh.

Now, scripts get to choose which shell they run under: the first line in any shell script must read something like #!/path/to/shell. But authors want their scripts to run on as many systems as possible, and the only cross-UNIX shell is /bin/sh — if you required /bin/bash, your script might not run on Solaris.

The problem is that many scripts only see actual usage on Linux, and since there has never really been a "bare" sh around, many scripts inadvertently rely on bash-only features. Everything worked though, and no one complained — until last year, that is.
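To make this concrete, here is an example of my own (not taken from the bug thread): bash accepts `==` inside [ ], but POSIX test only defines `=`, so dash rejects the doubled form with an "unexpected operator" error. The portable comparison looks like this:

```shell
#!/bin/sh
# Portable POSIX string comparison uses a single '=' inside [ ].
# The bash-only form  [ "$1" == "hello" ]  breaks under dash.
greet() {
    if [ "$1" = "hello" ]; then
        echo "greeting"
    else
        echo "other"
    fi
}

greet hello      # prints "greeting"
greet goodbye    # prints "other"
```

A quick smoke test for a script is to run it explicitly under both interpreters (`bash script.sh`, then `dash script.sh`); Debian's devscripts package also ships a checkbashisms tool that flags bash-only constructs in scripts claiming #!/bin/sh.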

In June 2006, Ubuntu registered a "feature specification" to use dash rather than bash as /bin/sh. Apparently, dash is faster and needs less memory, and mostly for these reasons the change was approved for Edgy. But dash also strives to be "more catholic" than bash (though it has its sins too), so not every bash script runs under dash. Since Debian had previously conducted a shell-script audit to rid its packages of bashisms, the problem wasn't immediately noticed. However, outside packages were never reviewed, and complaints started piling up as new users (and upgraders) flocked to Ubuntu Edgy after its final release.

At that point, a previously obscure bug started gaining entries and visibility. The developers' response was not what you'd expect for a distro backed by Canonical and self-styled "Linux for human beings":

there are no plans to change the default configuration back to bash [...] If vendors are distributing software that expects /bin/sh to be bash, then that software is broken. Please take it up with them.

So the users are supposed to notice the breakage, carefully debug the scripts to learn that the bug is due to bash-isms, complain to the authors and wait for the fix to arrive. If the users are not programmers, they're out of luck. All this for software that ran just fine previously, mind you.

Of course, Ubuntu could easily fix this bug, retaining the speed improvement without inconveniencing users: revert sh to bash, and change Ubuntu packages to use dash. But I suppose that would mean conceding users were right from the start (and thus losing face).

Is this going to be a Jeff Johnson moment? What really scared me were comments by someone who claims to be a non-developer (strangely enough, the only non-developer to support the official policy):

Bashisms are bad. They need to be fixed [...] Sometimes you have to do things the hard way to make the world a better place. I think we have begun down a slippery slope towards eradication of bashisms. They never would have gone away if it was just 'the right thing to do', but now if you write broken scripts you give up support for a major distro.

So, making the world a better place involves taking the userbase hostage, wasting anywhere from 30 minutes to a couple of hours of thousands of people's time, and expecting them to do your bidding (i.e. persuade third parties to conform to some lousy standard that sported incompatible changes several times in a decade)? I really hope this is not what the developer team is secretly thinking, but the fact that there are exactly two replies from a single developer, in spite of the mounting frustration expressed in dozens of comments, doesn't look good. In any case, causing lost productivity that ranges somewhere into the hundreds of thousands of dollars is a remarkable accomplishment, only not one to be proud of.

Update: to those who just want to fix this problem without downgrading Ubuntu, either run

dpkg-reconfigure dash

or, more brutally,

ln -sf /bin/bash /bin/sh

VMWare: when two OSs access the same partition

Probably the most convenient way to run Windows under Linux is to start with a dual-boot setup, then create (in Linux) a VMWare Server virtual machine based on the physical Windows partition. This ensures that you don't have to re-install Windows and your favorite applications.

But with great convenience comes great danger. When you power on the virtual machine, it will boot into GRUB (or LILO), which will ask which OS you want to run. No problem, you'll say: select Windows, and it's just a small inconvenience. Until the day your fingers err. Or, if GRUB has a timeout, the day you run to get a cup of water and come back to witness Linux booting. That means the virtual machine and the host OS are now accessing the same partitions simultaneously.

The various VMWare tutorials strongly caution you to avoid this situation, which will likely result in data loss. But maybe you are wondering just how bad things can get (at least I always have). Well, about a month ago, facing a complete Linux re-install, I found the perfect opportunity to experiment. I had two Linux partitions (a JFS root and an EXT3 volume). So I powered up the virtual machine into Linux, let it run its course, and then rebooted.

The results? Surprisingly, the root JFS partition came out of fsck unscathed. That's right: there were no errors, and nothing in /lost+found. The EXT3 partition, by contrast, was destroyed beyond repair (it started with a bad superblock, and went downhill from there as I tried to recover). Unfazed, I decided to try again (after reformatting the EXT3 partition). The same thing happened. I have no idea why, and I wouldn't necessarily conclude that JFS is safer, but if you ever have the chance (or misfortune) to experiment, let me know how it goes...

And now, on to something more useful: how do you prevent such disasters? The answer is to force the virtual machine to boot from a virtual floppy disk that makes the correct OS choice automatically (it could be GRUB with a single-item boot menu, or an NTLDR-based solution). Scott Bronson's VMWare tutorial shows how to do this. Unfortunately, his method is rather inconvenient, requiring several reboots. So what follows is a simpler solution that replaces steps 3-10 from his "Set up the Boot Disk" section:

dd if=/dev/zero of=bootdisk.img bs=1k count=512
mke2fs -F bootdisk.img
mount -oloop bootdisk.img /mnt
mkdir -p /mnt/boot/grub
cp /boot/grub/stage[12] /mnt/boot/grub/
cat >/mnt/boot/grub/grub.conf <<EOF
default         0
timeout         0

title           Windows
root            (hd0,0)
chainloader     +1
EOF
umount /mnt
grub --device-map=/dev/null <<EOF
device (fd0) bootdisk.img
root (fd0)
setup (fd0)
quit
EOF
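For reference, attaching the image to the virtual machine comes down to a few lines in its .vmx file (option names as used by VMWare Server; the file name is whatever you called the image above):

```
floppy0.present = "TRUE"
floppy0.fileType = "file"
floppy0.fileName = "bootdisk.img"
```

You still need the VM's BIOS to try the floppy before the hard disk, which the remaining steps of Scott's tutorial cover.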

The rest of Scott's tutorial still applies — in particular, setting up different hardware profiles is important. How important? I'll let you know next time I'm stuck with a complete Windows reinstall...