Tuesday, October 03, 2006

Slackware 11.0 - first experiences and upgrade process

Well, Slackware 11.0 is officially out after an exhaustive batch of release candidates and (thanks to some hideously attentive monitoring of the ChangeLog in recent weeks) I've already upgraded to it. In fact, I've been downloading each new package over the last few weeks as and when the ChangeLog changed, and about once every few days I'd burn a DVD and install it onto a copy of my primary partition to find problems before I took the plunge and actually started using it as my primary desktop (replacing an up-to-date, clean Slackware 10.2).

The upgrade, as ever, goes like a dream so long as you follow the instructions *very* carefully - don't omit any steps. We won't mention my moment of idiotic forgetfulness where I forgot to upgrade the rc.udev file once I'd installed the new package, or my failure to copy the initial /dev from my existing partition to the "mirror" partition that I was upgrading, both of which caused Slackware to fail to boot... One of those is mentioned in the upgrade.txt (transfer ALL the .new files across! In a moment of blindness, I omitted rc.udev.new); the other was just common sense if you intend to work from an accurate copy of your existing system. Those installing onto a clean partition should have no problem at all.

After many, many tests (I was, after all, performing a major operating system upgrade on a system that was still being used for "real" work), I copied my Slackware 10.2 main partition to a blank space, freed up 1.5Gb on it (as leeway for new packages, upgrades, temporary files etc.) and installed the upgrades over the top of this copy. Once I'd followed the upgrade.txt (which, at the time, was the 10.1->10.2 upgrade.txt, but the principle is the same), all I had to do was recompile my kernel (Slackware 11.0 now ships with GCC 3.4, which means you also have to recompile any custom kernel or kernel modules you may have carried over from Slackware 10.2, which used GCC 3.3), reinstall lilo and it all just worked.
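
For what it's worth, the kernel-and-bootloader step boiled down to something like the sketch below. Treat it as a rough outline rather than gospel - it assumes a 2.6 kernel source tree that's already configured, and the image name and paths are just the ones I happen to use:

    cd /usr/src/linux
    make clean && make bzImage modules     # rebuild kernel and modules with the new GCC 3.4
    make modules_install
    cp arch/i386/boot/bzImage /boot/vmlinuz-custom
    # point lilo.conf at the new image if it isn't already, then:
    lilo -v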

However, be very careful if you have extra modules in your kernel (e.g. nVidia/ATI drivers, out-of-tree wireless drivers etc.) as they *will* need recompiling. When two machines share the same compiler and kernel, modules are usually transferable between them, but Slackware has changed the default compiler, so this time-saving trick no longer works. Failing to recompile them *will* crash your machine - maybe not immediately. For instance, in my initial testing on a blank partition, failing to recompile the nVidia modules and instead "copying" them from a previous installation crashed the machine hard as soon as OpenGL was used, even though everything had functioned perfectly until then (including accelerated X and video).
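
The general shape of rebuilding an out-of-tree module is usually the same whatever the driver, so here's a hedged, generic sketch - the directory and module names are made up, and the real procedure is whatever the driver's own documentation says (the nVidia installer, for example, does most of this for you):

    cd /usr/src/some-driver-1.2.3          # illustrative name - use the driver's own source directory
    make clean && make                     # rebuild against the running kernel's headers and compiler
    make install                           # typically drops the .ko under /lib/modules/$(uname -r)
    depmod -a                              # refresh the module dependency lists
    modprobe some_driver                   # load the freshly built module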

The crash was so hard that the (journalled) filesystem stopped halfway through a write and corrupted the partition - proof, if ever it were needed, that the use of proprietary modules removes any guarantees of stability, and also that journalled filesystems and RAID are no substitute for adequate backups. That's also why you should always back up and/or test on a copy of your primary partition before you do stuff like this - I did it out of academic interest and was surprised that X and the kernel didn't throw up more warnings.

Applications, however, should not require any recompilation at all, unless they are very tightly integrated with the kernel or statically built against the supplied libraries (something they shouldn't do for most purposes). I haven't found anything that I've needed to recompile except for kernel modules, but I'm sure I will find something that has stopped working - a lot of software treats the kernel headers as the definitive source of information on things like kernel structures.

KDE was neatly upgraded to 3.5.4 in the process, and all my old settings just ported over without any hassle (although a few KDE-specific tweaks, such as the taskbar's appearance and how the multiple-desktop thumbnails work, had reverted to new defaults - easily changed, and they were the exception rather than the rule). And it still runs like a dream.

Given a 1GHz, 512Mb RAM machine, there is no significant detrimental performance difference between Slackware 10.2 and 11.0. In fact, because of both the KDE and X.org upgrades, programs under X run noticeably smoother - this is an old machine, and small optimisations make a big difference when you don't use eye-candy like translucency and anti-aliasing. Even a static-Qt installation of Opera 9.0 is more responsive than before. Given that Slackware is not designed as a desktop OS, it functions admirably under such circumstances and the system requirements are minimal.

Because it's Slackware, most desktop software will require, at some point, extra libraries or installations (for instance, mplayer codecs and OpenOffice.org are not included), but everything will usually compile cleanly from source without any patches, and there is always LinuxPackages.net for packages of any extra software you may need. Slackware's main install only comes to around the 3-4Gb mark (depending on filesystem and block size), so on a modern hard disk there's plenty of room for extra software. My main partition (not including personal files in /home directories) rides comfortably around the 10Gb mark, and there's always plenty of space for a full install of Slackware plus Wine, CrossOver Office, Microsoft Office, OpenOffice and many, many other large pieces of software.

One word of advice - take upgrade.txt's suggestion to just "install the rest of the packages" lightly. In fact the best method, especially if you are short of disk space, is to go through each package directory one by one, e.g. upgradepkg --install-new /root/slackware/a/*.tgz and so on. Not only does this make it easier to omit the KDE/KOffice internationalisation packages for languages you don't speak (e.g. upgradepkg --install-new /root/slackware/kdei/*en_GB*.tgz), or to omit those packages that you don't need installed anyway (TeX or emacs, for example), it saves a lot of time and disk space and prevents you from upgrading to a 2.4 kernel (hiding in slackware/k) and then having to reinstall the 2.6 kernel packages from /extra.
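
Something like the following is what I mean - a hedged sketch rather than a recipe, so pick the series and the language packs that suit your own setup:

    cd /root/slackware
    for series in a ap d l n tcl x xap kde; do    # add or drop series to taste
        upgradepkg --install-new $series/*.tgz
    done
    upgradepkg --install-new kdei/*en_GB*.tgz     # only the language packs you actually want
    # deliberately skip k/ (the 2.4 kernel) if you plan to keep the 2.6 packages
    # from /extra, and skip anything else you never use (TeX, emacs...)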

In terms of the final product, hotplug/udev is greatly improved and detects most peripherals and uses them automatically - nothing new or exciting unless you've not run any other recent Linux distro, but being able to plug in a USB drive, joystick or mouse and have it instantly recognised, and have X/KDE start using it, is a welcome return to the ease of a typical Windows installation. This does require a relatively modern 2.6 kernel, but the scripts are still designed to take account of older kernels (2.4, or earlier 2.6 releases) that can't do this. And yes, 2.4 kernels are still the default for the time being.

One other change to the install process is that the sata.i bootdisk is now the default for any bootable CD or DVD (even the "how to boot in an emergency" text on the boot screen reflects this), allowing direct installation onto an old-style PATA or shiny-new SATA hard drive without having to select a different bootdisk - apparently a code conflict between the modules for the different hardware has now been resolved, making this possible. It's a welcome simplification of the install process.

All in all, the installation was fairly flawless, but be careful about your kernel - unless you stick with the default 2.4 or 2.6 kernels (which will be out of date within a week or so, if not already), you are going to have to recompile the kernel and any modules and then reinstall LILO or GRUB. The only other thing to remember (and it's in upgrade.txt) is to ensure that all the .new files that appear on your computer have your settings transferred into them, and then rename *them* to replace your original configuration files - this way you won't miss any new config options that might have appeared.
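
For completeness, dealing with the .new files amounts to something like this sketch (use whichever diff or merge tool you prefer - the loop below assumes the usual spaceless /etc filenames):

    for new in $(find /etc -name "*.new"); do
        old="${new%.new}"
        diff -u "$old" "$new"       # eyeball what's changed and merge my settings into the .new file
        # once the .new file carries my settings, let it replace the original:
        # mv "$new" "$old"
    done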

I suppose the biggest disappointment will be for Gnome users - there isn't a sign of Gnome left in Slackware (the difficulty of compiling and packaging it was cited as the reason for its removal in the last release), although you can still get Slackware packages for it from various third-party sites. To me, this went unnoticed: when I first started installing Linux with X desktops, I tried both of the major desktop environments of the time and KDE came off best every time. Gnome felt clunky, old, out-of-place, like the Borland Windows dialogs used to back in the early days of Windows... nothing WRONG with them, they just didn't fit.

Both are now skinnable and either can be made to look like the other, so the choice is no longer clear-cut either way - but, by the same token, there's little reason to claim Gnome's loss is devastating... KDE can be made to work just the same, and the two projects are collaborating on just about everything these days. The GTK libraries etc. are still installed by default and, in fact, some ancient Gnome-based software left over from my setup's previous Slackware upgrades still functions perfectly.

People say that KDE is full of bloat but, I'm sorry, 3Gb for an entire OS including X and an office suite? That's well within the realms of convenience on a modern computer, and most packages can be omitted if you really want (you can get an X installation down to less than a Gb if you really try and omit all the rubbish - I'd hate to imagine what the absolute minimum would be; I should think it would be amazingly small). And a 512Mb system showing only 100-200Mb in use when I have several applications open (and a few dozen background processes, including Apache) under X/KDE is perfectly acceptable. And with KDE 4 currently in development, the introduction of Qt 4 is supposed to make everything so much faster and leaner. But let's not get ahead of ourselves.

Altogether, Slackware 11.0 is another flawless upgrade of a "clean source" distribution - there are very few patches to the software included on the disks and the kernel is always "pure"... making upgrades, recompiles and troubleshooting simplicity itself. I should also imagine that it makes the lives of the software developers much easier, as the bug reports are directly relevant to their software rather than to patches that the distro has added itself.

Thursday, September 07, 2006

Computers, gadgets, modern life and control.

It occurs to me that I've discovered a major theme in my life - control. Everything that I own has to be under my control. This obviously starts with my PC. Starting out on DOS, I was used to controlling most aspects of the computer's operation; I could control what programs it ran, whether they stayed resident or just ran once, when they ran, and so on. Windows 3.1 also allowed such control, but moving towards the more modern Windows OSes, I gradually realised that I was losing it. I didn't know what programs were running, when or why. I couldn't change settings, I couldn't rename stupidly-named menus or folders, I didn't get a choice over where software installed itself and a lot of the time I couldn't change it later.

So I switched - to Linux, of course - and now I have my control back. I get to decide what runs and when. Programs don't mysteriously insert themselves into the depths of my computer without me a) knowing, b) being able to revert the changes or c) having some way to prevent it. Software doesn't taunt me with promises of being able to do something, only to stop me doing that exact thing until I pay more money, take out a subscription, click on an advert etc. If it does, there's always an alternative somewhere that I can freely use. This is what used to make Windows more tolerable - if I couldn't control some aspect, such as which programs were allowed to connect to the Internet, I could always find some shareware or freeware that would let me control it.

Unfortunately, this modern epidemic of taking control of electronics out of the user's hands (the person who PAYS for these products or services and enables them to be produced) and putting it into the hands of the company (which receives the money, could not be in business without the user and which, without checks, would dictate when, where and how much upgrades and further products will cost) has reached even the humble television.

Televisions and VCRs were always quite complex for the inexperienced user and rather difficult to exercise control over. I've seen at least one TV that, at a single keypress, will wipe out all its stored channel information, seek out every frequency and insert only the stations it discovers back into the list - in the order it finds them rather than any sensible ordering (such as by channel number) - wiping out, in the process, any channels for VCRs, games consoles or camcorders that might have been painstakingly set up.

The programming of the average VCR was legendary, but towards the end of its life it became greatly simplified - they were no longer used as tuners but just as plain recorders, eliminating one level of complexity in setting up recordings. VideoPlus codes enabled simple future recording of programmes on any channel for the exact time required to capture a particular programme, even in some cases accounting for last-minute re-scheduling. Now most of that same functionality has moved into the era of the DVD recorder and HD recorder, and if anything the complexity has been reduced even further - integration with digital TV programme guides enables one-touch recording of future programmes just from advertisements or trailers on any of a thousand channels. Programme guides also make browsing and recording any programme on any channel on any day a breeze.

However, DVD has brought with it restrictions - restrictions which remove our control. You can't play an American DVD on a British DVD player - why? Not because of a technical difference, not because of an incompatibility, but because the DVD inventors and distributors don't want to let you. Why? So they can sting you for more money if you happen to live in a closed market. The solution? Most DVD players sold now are either multi-region by default or multi-region capable. However, problems still remain with this format - UOPs (User Operation Prohibitions - those restrictions that prevent you skipping trailers, adverts, copyright warnings etc.). Again, these are an in-built mechanism designed to do nothing else but ensure that you see those messages.

When you are browsing your DVDs or waiting for the film to start, UOPs are mostly applied to trailers and commercial content that the user just DOES NOT WANT TO SEE, ever - and after seeing it once, why would they want to watch any of it again? Sure, give them the option to see it again, but let them skip it if they want as well. What's so wrong with a menu option on the DVD that lets me see trailers and other notices IF I decide to? I am perfectly aware of copyright law, and if I wish to skip that inevitable 20 seconds of static screen I should be able to. Few DVD players have anything that lets you bypass a disc's UOPs - however, most PC DVD-playing software can be patched, or supplemented with utilities, to make them bypassable.

In fact, the libraries and media players that I have installed on my Linux computer just to be able to watch DVDs in the first place do this for me automatically - I just press skip and it skips forward, no matter what the DVD says I can or can't do. My control is restored - just not in my own front room, where instead I make a point of noting which DVDs have excessive UOP control and either copy them, removing it in the process, or label them so I know to put them on five minutes before my tea is done so I can be out of the room while they go through their rigmarole. If neither is possible or practical, I merely make a mental note never to buy any of the things advertised and to try not to buy from the same distributor again, or at least not until they sort their act out. I honestly carry a product/manufacturer blacklist in my head whenever I purchase anything.

My PC speakers don't insist on locking themselves to full volume when a banner advert plays sound, because that's intrusive, obnoxious and counter-productive (I will not buy those speakers, or will stop visiting those websites), so I don't see why I should be forced to sit through ten minutes of adverts just to get to the film that I'VE PAID TO SEE. The same principle applies to the cinema - how many people don't go in until the main feature has started, i.e. ten minutes after the time printed on the ticket?

My car takes a lot of my control out of my hands but for very important reasons (my life, my safety) I still have control over some important points. I don't expect to be able to modify the algorithm that controls the activation of an airbag by any setting or by modifying any software or hardware - I wouldn't want to and there is no reason to. However, I would expect to be able to disable it, for instance, in the case of an emergency or if someone was working on the steering rack. This particular device is something installed to save my life - I don't expect it to be at all tinkerable or for it to be disabled without major interference with the car and major warnings (i.e. my airbag light staying lit on the dashboard).

Additionally, I want to be able to sue the airbag manufacturer if it fails to deploy in an emergency (or for my relatives to be able to, though ideally **I** would be the person suing the company!). So I don't need, or want, to tinker with the details of its operation, but I still have the overall say over whether it's turned on or off. Relatively speaking, then, I have more control over my airbag than over my DVD player, which has no reason at all to stop me turning off an unnecessary feature on hardware that I have purchased.

Would people tolerate a TV that locked you onto a single channel when you had chosen to watch a movie and wouldn't let you change channel or switch off until you had sat through ten minutes of trailers? (The strange thing is that such TVs probably exist, or are at least feasible to manufacture today!) Would people tolerate vacuum cleaners that refused to suck until they had noticed your carpet was dirty, or because you used a competitor's dust-bag? (Again, a likely occurrence if the current state of inkjet printing is anything to go by.) Would people tolerate telephones that auto-answered calls from marketers who had paid a fee to the telephone company and put them straight onto speakerphone?

One of the most popular features on telephones in the UK is a "Do Not Call" list for marketers, with severe penalties for companies that do not take account of it. My telephone has Caller ID functionality - I even get to control WHO I answer the phone to. My front door has a CCTV system - I get to control WHO I answer the door to (and leave marketers standing there in the cold and the wet). My oven and microwave do what I tell them to (within certain set parameters to ensure safety) and don't try to override me.

Most control-restrictions boil down to copyright control and advertising. Now, copyright infringement is something that no form of protection has ever stopped or will ever stop. Cinemas and DVD manufacturers are usually the source of any leak of pre-release films; professional copyright infringers have the knowledge, equipment and ability to bypass anything that the manufacturer may put in their way; and those infringing copyright will not be stopped by such petty restrictions. Laws like the DMCA and its international equivalents fail to take account of one point - if someone is willing to break the law by pirating a movie, they will not think twice about breaking a law that prevents them from buying a device, or about using the "analogue hole" (the fact that if I can view it in any way, there will be ways to record that viewing, even if they are as primitive as using a camcorder to record the image on-screen) to copy the movie in the first place.

Additionally, it's impossible for a device alone to determine whether a copy infringes copyright - I am allowed to copy my DVDs for backup purposes in many countries. I may make one copy, but then if the original disc breaks or is lost, the backup copy that saved me will also need to be copied, or I will have to obtain a second copy from somewhere. No chip in the world can currently decide accurately (or even just "well enough") whether or not I'm infringing copyright in doing either of those. In fact, in many cases, such backups are necessary. If you have ever let children loose with DVDs, you will know that the discs scratch easily, and many companies even refuse to provide replacement copies if this accidental damage happens to one you own (in the days of the ZX Spectrum, almost every company that distributed tapes would replace them free of charge if they stopped working).

The device has no way of knowing whether I am copying a DVD that I personally own, one I've rented or one I've filmed myself - and in many cases this is exactly how modern copyright control systems end up being bypassed: any copyright flags or restrictions are simply removed so that the device "believes" the copy is genuine. Proof of purchase should be all I need to be able to copy the disc and/or view copies of it. Even in court I'm innocent until proven guilty, so it would be up to the device to prove, legally, that what I was doing was illegal. It couldn't. Ever.

That leaves advertising. Advertising is not the only way to make money or to make your product known. Obtrusive advertising is a guaranteed way to lose money and make your products, and even your own company, infamous. If I wish to buy a product then I will research it. Not only will I take no notice of brand names or memories of a company's previous advertisements, but I will absolutely blacklist a particular product, manufacturer or even an entire line of products (such as DVDs or Blu-ray or similar) based purely on how much control they expect to have over me. When I'm buying a product, I base my decision on the information available to me, not on what advertisements I've seen in the past. When I go to the cinema, I base my decision on what is showing, not on every trailer I've seen since I last went to the movies and the ones before that. Only at the point of purchase do I NEED to know about any product.

As the years go by and copyright protection gets ever more restrictive and more and more control is taken out of my hands, my mental blacklist grows. HDTV with its amazing copy protection? Blacklisted. (And my TV is five feet from my sofa; I don't see how it will improve my perception of the picture at that distance, or why it's worth paying out for a mere resolution enhancement.) Operating systems that want to dictate what audio/video streams I can play, where I can output them to or how many copies I can have installed? Blacklisted. (Although I would like to point out that every OS I've ever installed has been properly licensed.) Cars that would only accept parts produced by their original manufacturer? Blacklisted.

And when enough of these blacklisted items are from a particular manufacturer? That manufacturer is blacklisted in its entirety.

The future, with its gadgets and gizmos which are "vital" for modern life, holds immense interest for me. As gadgets get more complicated, there has to be a certain amount of internal abstraction. We don't need to know how or why they work, just so long as we control what they do. My PC is the most complex appliance in my house and yet I exercise full control over what it does. My VCR has immensely complex integrated circuitry - nowhere near that of my PC - that I don't need to understand in order to use it, and still it doesn't remove my control of its operations. Manufacturers would have you believe that "you can't" change what things do because they are such complex machines. However, we have to keep in mind what motives they have for removing the controls we have - for instance, DVD region control and UOPs. There is no technical, functional or legal reason (other than enforcing compliance with what the manufacturers believe we should be paying for their DVDs in our particular country, whether or not we are legally able to import cheaper versions of the same DVD) why these two features should even exist, and they can only interfere with our use of a product that we pay for in the first place and continue to fund by buying DVDs.

Are DVD's a product or a service? Is legally-bought downloaded music a product or a service? Is the computer hardware or software I buy a product or a service? If I stop paying for subscriptions and upgrades, or stop watching the advertisements, should things that I've "bought" be taken away from me? Everyone who sells these things wants them to be a service - they can charge more in the long run and revoke your use if ever you change your mind about paying for them. Everyone who buys these things wants them to be a product - they can control them and do what they want with them inside the confines of their law-abiding homes. Somewhere along the way, someone's going to have to come up with a reasonable compromise.

Tuesday, August 15, 2006

Slackware 11.0 RC-1

The next version of my favourite Linux distribution is on the verge of being released. Yes, Slackware has its first 11.0 release candidate.

Normally, I don't chase the very latest versions of software until someone's tested them for me beforehand - for instance, my latest foray into the world of Opera 9.0 was a bit dismal... most of the computers I installed it on had no problems at all, but at least two showed severe random crashes that I could not track down for weeks. The funny thing was that the two problematic installs were on quite different hardware, and yet on two otherwise very similar machines the same version of Opera 9.0 worked flawlessly on one but not on the other.

Anyway, unusually for me, I've been closely tracking Slackware 11.0 since the last stable release, 10.2, which is currently powering my main desktop and a number of my servers and hobby machines (I always track updates to stable versions so my software is never left vulnerable, but I rarely use "beta" software of any kind). I've actually got an up-to-date mirror of the bleeding-edge -current version of Slackware (that will become 11.0), which I update every time the ChangeLog changes. I'm hoping in this way to avoid the traffic-lock when 11.0 is finally released - at worst, a tiny update of a handful of packages should be all that's needed to create my own DVD-R, instead of having to fight the thousands of people trying to download all 4Gb of software and swamping every mirror with traffic.

I have even gone to the effort of a test install of the RC-1 version on a separate partition. Linux, and Slackware in particular, demonstrated its flexibility and user-focus once again - by booting from a Slackware DVD, I was able to do a full install to a blank partition without doing any more than a very basic check of the partition name (which I am always very careful to double-check by mounting the partition in question - never take partitioning or formatting of anything on a PC full of data lightly). Once it was installed on my spare partition, I was able to copy the kernel and modules directory from my "stable" partition to the new "current" partition and, with a little LILO magic, boot into the new version of Slackware without touching my previous installation in any way, but with the very latest stable 2.6 kernel and all the software of 11.0.
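
The "little LILO magic" amounts to nothing more than an extra stanza along these lines - the device names and labels here are made up, so check them against your own partition table:

    image = /boot/vmlinuz            # the kernel copied over from the stable partition
      root = /dev/hda3               # the spare partition holding -current
      label = Slack-current
      read-only
    # then re-run lilo so the new entry actually appears in the boot menu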

I have to say that it's not spectacular but it's not spectacular because it WORKS. It just does what you tell it. You boot it on your PC, it detects all your gear, you set a few options and you have a full desktop. You port your old settings and files over and everything just works again.

The simple fact was that, within three minutes of the installation completing, I was sitting at a fully kitted-out Linux desktop with drivers for all my hardware, without having to compile a single package - installation consisted of nothing more than an automated decompression of the packages onto the partition in question and minor copies or edits of my previous configuration files (such as re-doing alsamixer settings, configuring X etc.). My old software worked (at worst requiring a recompile against the latest headers), my settings transferred, and my computer didn't crash or have to reboot seven zillion times.
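
By "minor copies or edits" I mean the sort of thing sketched below - written from memory, so treat the exact commands as illustrative rather than definitive:

    alsamixer            # unmute the channels and set sensible levels...
    alsactl store        # ...then save them so they survive a reboot
    xorgsetup            # Slackware's quick way to generate an xorg.conf
                         # (or just copy a known-good xorg.conf across from the old partition)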

New in Slackware 11.0 RC-1:

* Updated kernels (although the default still looks set to be a 2.4 kernel)

Obviously - although I would say that the kernel is the one thing not worth waiting for a Slackware package for: just install the latest stable 2.4 or 2.6, depending on your tastes... Slackware supports either seamlessly without needing any special setup (although you may find it convenient to stick with one of the two for compiling anything that reads from kernel headers). Don't forget that Slackware always comes with a .config for its kernel that's fully modularised and ideal for "make oldconfig" when a new kernel comes out that you need to compile.
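
In practice, reusing that shipped config with a brand-new kernel.org release looks something like this - the version number and the location of the config are just examples, so substitute whatever you actually have:

    cd /usr/src
    tar xjf linux-2.6.17.tar.bz2             # whichever stable release takes your fancy
    cd linux-2.6.17
    cp /boot/config .config                  # or grab the shipped .config from the Slackware discs
    make oldconfig                           # only asks about options added since that config was made
    make && make modules_install             # then install the image and re-run lilo as usual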

* Lots of init script fixes and features

* Updated hotplug / udev support

* X.org 6.9.0

* KDE 3.5.4

No more Gnome in Slackware unless you get it from somewhere else. Not a bad thing for me, given that Gnome always reminds me of the old Borland dialogs in Windows - it always looked clunky and out of place. You can still get Gnome for Slackware from many places but it was removed in the previous version because of an apparently horrid compilation rigmarole.

* Updated versions of just about everything else (Samba, Apache, MySQL, Java etc.)


The changes aren't massive - it's not even as if the previous version has software which is currently vulnerable (despite what some checkers may tell you if they only go by software version number rather than whether they've actually been patched!). The software isn't the very latest (but it is almost guaranteed to be the best tradeoff between features, security and code stability) but it's clean, it's quick, it's simple, it works and it's been designed to run on as many computers as possible by default.

Although not designed as a desktop distribution, it's easily subverted to that purpose by installing the right "extra" software but the fact is that you know what you are getting - a stable, safe system that works and is flexible.

I get to choose and keep my own firewall package - one I've grown to love and have integrated lots of my other scripts into. I get to keep my choice of kernels, and even whether to go 2.4 or 2.6. Every piece of software has been updated, but all my old settings port over easily (at worst requiring a diff of some sort). Every piece of hardware has modules ready-prepared for it, so there's no need to keep recompiling to get support. The kernel is bog-standard kernel.org fare, so there are no vendor patches or compatibility problems to worry about. Everything is controlled by human-readable scripts, which upgrade cleanly over previous versions.

I'm planning on building some kind of workhorse headless server and it looks like Slackware 11.0 is going to be my choice, given its proximity to release and the ease with which it can be installed without X or other useless software. This server will be performing lots of tasks which I'm hoping to come to rely on - CCTV monitoring and other household security tasks, intranet web serving, firewall, NAT, printer server, transparent HTTP proxy, automated network antivirus scanner, email scanner proxy, Caller ID announcer, wireless gateway, network boot server and all manner of other custom projects. It will use relatively modern hardware, will require stability (as it will be expected to be running all day long), will not need any sort of graphics capability and will have to be secure against attack. I don't want to have to compile anything from scratch or find out that I've forgotten package X, so a full install and then a prune will be in order.

Slackware's reputation means that I'm quite happy to have waited nearly a year for this release - I haven't had a vulnerable system in that time due to strictly-monitored security fixes for the -stable version, I haven't had to fight with half-new features in things like udev and hotplug which would have caused me a lot of trouble and I'm going to a system that's just as stable albeit further updated.

Tuesday, May 09, 2006

A fascination with recompiling kernels

Okay, another trend for you:

Why do people who use Linux INSIST on recompiling their kernels over and over again? I'm a regular on several Linux forums and I see a lot of questions being asked along the lines of "I've recompiled my kernel and now something doesn't work". They are usually not talking about upgrading to a new kernel, but about recompiling their already-working kernel with different options (usually to take everything that is modularised and build it into a single, fat kernel).

Now, there are several very good reasons for customising a kernel in such a way. If you are using PXE boots, USB keys, bootable CDs, (going back a bit now) single-floppy distributions or something else with filesystem limitations (whether they are direct technical limitations or, with PXE for instance, something like bandwidth), then a single-file, small, customised kernel is ideal. However, most people asking these sorts of questions are home desktop or laptop users.

When you have Gigabytes of space in which to store hundreds of unnecessary modules, recompiling the kernel ultimately costs you lots of time. Time in preparation, configuration, compilation, testing and, ultimately, problems brought about by missing a vital checkbox when configuring the new kernel.

One oft-touted advantage is speed. Now recompiling a kernel to target a different architecture may or may not give you a speed advantage (there are benchmarks which suggest that there isn't much difference at all between any of the x86 architectures). For general use, you wouldn't even notice the difference. The current theory is that compiling a kernel with only the modules you require built-in somehow makes it faster.

I can't see this myself. A module is ONLY ever loaded when you (or your hotplug programs) tell it to load. Otherwise, it's not even in RAM, let alone taking any of your CPU time. Loading a module is a minuscule and rare cost... maybe ten or twenty times each time your computer is booted, if that? A few milliseconds each time once properly cached? And that's usually done on bootup, where things can hang around forever detecting hardware anyway. After that, it only happens when you actually insert or remove some piece of hardware (and even then the module may decide to stick around, or already be loaded for something else). Compared to most other optimisations that are possible, it's an abysmally small one. Building this stuff into a monolithic kernel every time doesn't save you anything in terms of processor time.

However, having a module PERMANENTLY in RAM even when you are not using it does not seem an efficient use of such a vital resource. By loading modules only on demand, surely you are saving enough RAM to cache anything you may want to load off your disk (e.g. the modules themselves), thereby increasing the system's speed overall.
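
That on-demand behaviour is easy enough to watch happening - usb-storage below is just a common example of a module that comes and goes with the hardware:

    lsmod                        # see what's actually resident right now
    modprobe usb-storage         # pulled in only when something needs it...
    modprobe -r usb-storage      # ...and dropped again when it doesn't
    free -m                      # the RAM figures barely notice either way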

Additionally, hand-configuring a kernel is an enormously time-consuming task, especially for the unskilled, who may omit several vital options. And when you next purchase hardware, you're going to have to do it all over again. When you insert a USB device, the right options had better be ticked or you won't see ANYTHING. Borrow your friend's USB hub, which uses slightly different modules, and you'll have to recompile the ENTIRE kernel all over again. Insert a USB device that uses a slightly different module to anything in your kernel and, guess what, you have to recompile.

Why not just use a modularised, standard kernel that has modules for anything you could EVER use (but which just lounge around in a few tens of megabytes of disk space), save yourself several hours of configuration, compilation, testing and frustration, and sacrifice some minuscule, negligible, unnoticeable optimisations for the sake of more time actually USING the machine?

There are a lot of people on these forums complaining that something didn't work, whose eventual solution is to recompile the kernel to incorporate some module that they didn't have the foresight to include. Add up the time taken to discover the problem, the time to diagnose it (which must be non-trivial if they have to post on a forum for help with it), the time to recompile the kernel and reinstall it in the right locations, and the time to reboot and then test.

Is anyone seriously telling me that that is going to be LESS time than the productive time they would lose by having a single module loaded automatically from disk under their previous, standard kernel configuration? I use hardware which, on the whole, is considered obsolete (it's cheaper and it does everything I need it to) and yet I still don't notice any significant improvement from not using modules.

Kernels take on the order of minutes to compile on modern machines, but the human element - configuring them correctly, installing, rebooting and so on - is where the real time gets wasted.

Also, when troubleshooting, omitting problematic modules can be necessary. With a modularised kernel this is simply a question of blacklisting them in the right software, editing a simple script to prevent insmod or modprobe from loading them, or even just changing the permissions on the module file itself. Doing the same with an all-in-one kernel involves, yes, yet another compile, install, reboot, etc...
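
By way of illustration - the file locations vary between distributions, the module name and path here are made up, and I've used renaming rather than permissions as the blunt-instrument option:

    # keep hotplug/udev from ever loading it automatically:
    echo "blacklist dodgy_driver" >> /etc/modprobe.d/blacklist
    # or tell modprobe to do nothing whenever anything asks for it:
    echo "install dodgy_driver /bin/true" >> /etc/modprobe.conf
    # or the blunt instrument - move the module file itself out of the way:
    mv /lib/modules/$(uname -r)/kernel/drivers/net/dodgy_driver.ko{,.disabled}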

I use Slackware, which comes with the kernel config file used to compile the main Slackware kernel. It incorporates a lot of junk, most of it as modules, but it works on virtually ANY x86 machine (even 64-bit ones) with little to no performance loss. I gladly sacrifice 50Mb of hard disk space to be able to buy or borrow ANY Linux-compatible hardware, stick it in the machine (which doesn't always involve shutting it down, if you include USB, Firewire, PCMCIA etc.) and have it work without me having to TOUCH the software side of things.

People lend me their USB keys, their portable CD-RWs, their scanners, their printers etc. so that I can diagnose them. So long as my software is up-to-date (which takes two commands at most), I know that anything Linux-compatible will run without me having to play with my machine or reboot it. The time I save in not recompiling the kernel each time more than makes up for any imaginary performance saving.

And when a new kernel comes out, the same config STILL works (make oldconfig). I get asked if I want to include X or Y in the new kernel and I always select the best option - the one that lets me use it if ever I need to (though I don't personally think I will have a Gigabit card, etc. in my personal machine any time soon) but doesn't allow it to get in my way - build as module.

Now let's pretend that my motherboard blows up tomorrow. So what? I move the hard drive to a spare machine, it boots up just the same but takes account of the new hardware without panicking, hanging, not having the modules required, or requiring me to play about with kernels and LILO/GRUB just to get the damn thing to boot up.

It's like defragging - there is a purpose to it, but for most tasks the trade-off just doesn't add up. Spend three hours defragging to save milliseconds of disk access time over the course of a month or so until the filesystem fragments again? There are scenarios where it may well be worth it, or may have been worth it in the past, but most of the time you're just sitting at a screen staring when you could be getting something done.

Friday, March 17, 2006

Kororaa Xgl Live CD

I've been reading about it everywhere and, on the whole, dismissing it, so I thought I may as well have a look anyway. Yes, Novell have released this "next generation" X server which combines OpenGL and hardware acceleration into a bog-standard X desktop.

Newsforge have an article about it and they're not the only ones.

Firstly, this is exactly the sort of thing LiveCDs were designed for - testing software that you wouldn't even want to try installing, that someone else has done the hard work of setting up and that may well screw things up, all without having to install it yourself.

Unfortunately, the demo AVI linked to in the above article shows you all it can do. And I have to say, I don't see what all the fuss is about. It's a minor point that all this is only supported on certain chipsets, almost all of which rely on a third-party binary-only driver (don't get me started again). If everyone in the world wants this, you'd better start complaining to nVidia and ATI now to open-source those drivers because otherwise it ain't gonna happen.

If a distribution (like, say, Suse) wants everyone to start running these sorts of desktops, there had better be support and backing from someone who's going to keep it running once everyone in the world has got XGL... is that support going to come from Novell? I bet not, or at least not without a hefty price. Bang goes advantage one of open-source operating systems. And what happens when nVidia change their OpenGL support? Bye bye window manager.

Anyway, skirting that minor issue - XGL looks very pretty. I've seen better effects in a few PC games menus, though. It's pretty but gimmicky. Having video overlaid over OpenGL graphics on a desktop that can spin around - very nifty. Now what could I use it for?

I've been able to have video play in the corner of my desktop for a while - it's called having a TV-card or a media player. My computer is also capable of transparencies (a feature which, incidentally, I turn off no matter what the operating system as there is very little legitimate use of it).

But personally, if I want something showing on the screen, it's because I'm watching it. If I don't, I don't want to have to squint to distinguish between different windows, or be distracted by a moving image somewhere else. Whether I'm working or just browsing, the normal use of my desktop ranges between one app full-screen with the rest minimised, and another app full-screen with the rest minimised.

Before the advent of digital TV, I used to have a small window overlaying the bottom-right of my screen, in a position where it did not obscure the scrollbars in my most common apps. However, I cannot remember a single occasion on which I paid it any attention like that. Occasionally, I would be waiting for a programme to come on and would full-screen the TV when it did, but there was nothing I did that would have been aided by transparencies, hardware accelerated or not.

The only task I do where the apps aren't used full-screen is janitorial work on my files. I'll have several windows open on the filesystem and drag-and-drop between them. Overlaps are eliminated by the simple expedient of moving windows about. Granted, a shortcut key to arrange/zoom windows would help, but that's nothing that cannot be done in standard X.

Additionally, moving stuff between desktops is hardly a chore, and a rare event - I use virtual desktops for different tasks. If something's not on the virtual desktop I'm using, it's because it gets in the way or is part of a completely different task, e.g. games on a work desktop.

I frequently have my virtual desktops arranged by task: Internet, Work, Entertainment. The organisation of virtual desktops actually negates any need for transparency or fancy 3D - I don't need to overlap windows if I can just throw them onto another, more relevant desktop. It also means that I can't be distracted while I'm working by stuff that shouldn't be on the screen at all.

Novell's XGL demo also demonstrates a lot of other useless cruft - we won't even mention the bouncy windows or "snap to"... this stuff was done years ago and ignored, because it's pure gimmick. It's the sort of stuff that appears in menus to games. Even there, it's only ever fun for the first minute and then you never dabble with it again.

Switching between virtual desktops as a cube - good metaphor but you could just as easily scrap it as an actual cube and just have four "sides" and still get work done without having to remember horrible keyboard combinations or have lots of OpenGL technology in place to be able to switch between desktops. It provides no advantage over much simpler, much less demanding methods.

Now, it seems that the best bit of the XGL demos is, in fact, the hardware acceleration. That's a feature I won't knock, except for the fact that it only works on binary drivers at the moment. If they can resolve the driver issues and have hardware-accelerated graphics on the desktop, maybe we can finally catch up with the 90's along with the rest of the world's operating systems! (And that's not a bash at Linux/X. It's a bash at the driver manufacturers who are holding us in that horrible position.)

Novell, it seems, has spent two years making a stretchy GUI. To quote their website:

"Novell is announcing its contribution of the Xgl graphics subsystem and the 'Compiz ' compositing manager to the X.org project. These enhancements open up a whole world of hardware acceleration, fancy animation, separating hardware resolution from software resolution, and more. As a result, Linux desktops will become more usable, end-user productivity will increase, and Linux is firmly positioned at the forefront of client computing technology."

I'm sure that bouncy windows and video over a cube are so going to increase our productivity and make X more usable. I'm sure that manually installing a binary-only driver is so phenomenally easy for your average potential Linux end-user that they'll even be able to do it in a bouncy window while watching Harry Potter playing over the top of Ice Age 2. I'm sure that those two years of adding to the configuration options of xorg.conf are going to have us all blatting out our code twice as fast.

"Under the leadership of Novell's David Reveman, Novell has sponsored and led the development of this powerful new graphics subsystem for Linux since late 2004. Xgl is the X server architecture layered on top of OpenGL and takes advantage of available accelerated 3D rendering hardware. It is designed to integrate well with the composite extension and performs best when a compositing manager is running. 'Compiz' is the new OpenGL compositing manager from Novell and is the framework that enables the development of graphical plug-ins."

Let's get this straight - hardware acceleration is a good thing. I like to be able to run 3D apps as fast as possible. Rarely, however, does a working desktop require 3D acceleration. You might be a 3D designer all day long but your desktop really doesn't need it. If you have power to spare, a few bouncy windows would probably look lovely but otherwise all they've created is an add-on to window transparency, a feature most people have turned off or don't even notice that they can use.

Novell should have put their efforts into something a bit more practical. An OpenGL compositing manager, I'm sure, has one, maybe two uses where it's absolutely indispensable. What would have been much more useful would have been simplified X configuration options, maybe even on-the-fly configuration for most options, tighter integration with layers like HAL, or even a single damn hardware-accelerated open-source driver.

XGL is a gimmick. It may convince some eight-year-old sap somewhere that "Linux is better than Windows" but that's about it. Just wait until he's thrown into the world of having to separately compile a kernel-specific driver every time he wants to try out the next kernel - assuming, that is, that nVidia has supported it at all yet. XGL brings nothing new to the table: no work that people couldn't get done before can now get done. No time is saved, no money is saved, no problem has been solved. All it does is make my computer even damn slower just to show me a file listing.

Tuesday, February 28, 2006

Windows Vista

Windows Vista (or whatever it will change name to seven times before they ever release the thing) is approaching and a lot of people are focusing on what it can do and what it can't do. What they don't seem to take account of is the history. People who complain that MS has given them a bad run in the past are told they are pessimists and that this is the time when everything will be perfect. I've heard that at least five times now, so will I be upgrading to Vista?

Would you buy again from a butcher that, five times, has sold you a bit of what he assures you is "the best beef in the world" only to discover that when you get home it tastes like old boots?

Not a chance.

DOS I adored. It worked. It was powerful. It was simple. It was fast. It did the job.

Windows 3.1 I gladly bought into and began to love. It was small, simple, worked and worked well. It was easy to use and just pretty enough without needing too much from the hardware (386 with 2Mb RAM).

Windows '95 was then thrust onto me by peer pressure; it was okay but nothing special. There were a lot of frills on an OS that was basically a 32-bit version of Windows 3.1. It was also very buggy. '95 OSR2 didn't help matters at all.

Windows '98 was pushed into my hands because '95 was such a disastrous attempt at trying to push a '98-style OS out too early. It improved next-to-nothing. '98SE came out and you were asked to pay again for it. Yeah. That's going to happen. It fixed a few of the problems and introduced a few new features, but nothing that anybody actually NEEDED at the time.

Windows ME I looked at, and it quickly became a disaster (things like major components of Windows supporting '98, 2000 and XP but not ME... .NET, anyone?). It was '98SE-with-knobs-on and didn't manage to do anything particularly exciting.

So from '95 to ME there were very few, very small improvements in an OS that incorporated at least three major paid-for upgrades (as in major release versions, not just updates) over and above the base price. It was then that I told myself that I would not upgrade any further without good reason. My computers worked, it had cost an awful lot of money to get that far and I hadn't seen much improvement in my actual productivity over previous versions. They all ran the same software, at the same speed, with the same features, with little or no improvement in the areas that mattered - stability, compatibility with older software, and hardware support.

Okay, '98SE+ incorporated USB mass storage properly and a nicer driver model, but in essence it also killed ISA cards stone dead without giving people a say in the matter... is it really that hard to support a standard that was still in use at the time, that had stabilised and standardised itself on things like hardware autodetection, and that still works to this day in the Linux kernel - which has a much stricter requirement on what stays in the kernel? To stay in Linux, hardware support has to be actually used, have several programmers willing to change their code constantly, keep working in the kernel at all times and get updated in line with everything else - by people who are not getting paid to do that job. Anyway...

I stuck to '98SE and I spent most of those years chasing Windows Updates, free antivirus, utilities to manage the computers, anti-spyware, etc.etc.etc. Windows 2000 I skipped entirely - it removed support for a lot of my then-current hardware and only provided a small stability bonus. XP was a "necessity" to run one single game that I wanted on one single machine and has also turned into more hassle than it's worth. XP I see as basically a games console - a bloody complicated and annoying one at that.

I didn't pay for XP - it came with a second-hand computer I was given, which was one thing I was glad of. XP offered me nothing over 98 except more restrictions, more problems and much less system transparency - even the filesystem was relatively unreadable outside of XP without expensive utilities (that's not so much of a problem now, but it's still hard to write to NTFS correctly without buggering something up).

Despite several methods of recovery in case of system problems (Recovery Console, Safe Mode, System Restore), it was still perfectly possible to total a machine by installing an official update that would take more hours to fix than the computer was worth. Suddenly, I needed Ghost around constantly whereas before I'd only ever re-installed Windows '98SE from scratch once (and I later found out how I could have fixed that too). It wasn't something the average Windows '98SE user could do but I brought that OS back from numerous permanent blue-screens, booting problems etc. without having to worry that I wouldn't get the system back up and running.

There isn't going to be another chance for MS. This isn't blind MS-bashing, I've just had enough. There are posts on this blog telling you how I kept my own personal '98SE machine in tip-top condition from its release to mid-2005, and even recommending that people stay with it.

I'd always noticed it without giving it a second thought, but now I can see the trend in MS OSes:

- More new features that I won't ever use and that just get in my way. I end up turning half of them off within the first few days, and the rest as time goes by and I discover they are causing me problems. I end up setting half the settings to Classic or some other sort of compatibility or failsafe mode because that's how I liked it. Control Panel was a prime candidate in XP, along with Autorun. Also the disabling of things like power-save settings and screen blanking.

- More restrictions, barriers and brick walls, each of which stops me doing something I WANT to do and can CURRENTLY do. Connection limits, raw sockets, driver signing, having to activate - the list goes on.

- More time and money, not just on the OS but on the supporting programs needed to get it into a vaguely useable state. Anti-spyware, anti-virus, a firewall (because, from experience, I won't trust the MS ones to be any good, and they may well be the subject of the next anti-competition case against MS), startup controls, Ghost (because, again from experience, the chances of any type of system restore working as it should are extremely minimal). Again, the list could be endless.

- More integration with stuff I don't want (starting with IE and WMP). I don't want stuff connecting to the net unless I SAY so and unless it's ABSOLUTELY necessary (i.e. it's a web browser which has been asked to connect to a website by me personally, or an autoupdate that I'VE scheduled). I don't even LISTEN to music, and I certainly don't want rubbish trying to fetch album covers and other nonsense from the internet just because I'm testing a drive with an audio CD. I don't want my browser to even be ABLE to execute code directly in a webpage, or to choose a search engine without asking me which one I want to use.

- Nothing that I absolutely *need* when it comes to upgrade time. My computer does lots of stuff already. What can I do in Vista that's totally 100% impossible in 98SE, XP or Linux? If you discount hard-coded restrictions and programming laziness, nothing. Vista is not a quantum computer conversion - it still does the same old stuff the same old way.

- Missing, or only just starting to introduce, a lot of obvious stuff that SHOULD already be in the OS (**why** do I need a completely separate, non-MS utility to tell me everything that loads at Windows startup? Why have I gone from Windows 3.1 to Windows XP without MS incorporating such a simple, useful utility? Why can I not also click a button that LOCKS anything else from inserting itself into startup and kill half the spyware/viruses in one fell swoop? And yet they are bundling rubbish like media players and internet browsers that I DON'T want at all and have never even used.)

- Still playing catch-up to other systems. A database of your files that updates in the background and that you can use to locate your files quickly? Got it - except mine doesn't slow the system down when I'm using it, like Find Fast and the other MS "inventions" do. Admittedly, MS may well be ahead in terms of hardware driver support, but considering my Linux machine doesn't NEED half that new hardware and won't do until it's properly supported under Linux anyway... where's the incentive?

I quit Windows about a year ago, hopefully forever. I was tired of my computer not doing what I tell it to. This is my biggest, absolute killer reason for not running Windows... if I say shutdown, you will shut down; if I say delete that file, just delete the damn thing... I'm not an idiot, I know what I'm doing. The chances are that if I force a shutdown, there's a reason for it. It may not be an important one - I may be rushing to go out for the evening and want to make sure it's off - but that's not for you to decide. Unless I'm going to do permanent, irreparable damage, just do what I say, and even then just make sure I'm AWARE of that. My OS of choice will *not* argue or crash or wait for every program on earth to voluntarily allow me to shut down unless I ask it to.

I'm tired of having to be at the forefront of technology just to browse a simple web page at a decent speed. I'm tired of "limitations" like XP Home's connection limits, raw socket restrictions etc. when there is no technical or practical reason why they have to exist. If my OS is capable of it, it should offer it. It should not say "I COULD, but... I'm not going to let you until you pay me money". It's like running a shareware operating system, except I've already paid for it.

For many years I've been the sole support for a few hundred XP, 2000, 2003 and older machines, and yet I've only ever used XP personally on one laptop (my "games" machine) and on my girlfriend's computer (it came supplied with the computer and it was easier just to leave it on there for her... she had to "learn" Windows 2 years ago, so learning a Linux desktop isn't a big problem at all... it's just easier for when she wants to play The Sims and other rubbish). Windows is "easy" until you need to maintain the thing, and then it becomes a nightmare. My choice of OS at home reflects just how good Windows is - I work with Windows all day long, I even recommend Windows systems, and yet I won't touch it with a bargepole at home any more. On another note, the more broken Windows is, the more money I make, because numerous schools then have to pay me to fix it for them. And I get paid by the hour. ;-)

I've lost count of the number of computers I've brought "back from the dead" by removing viruses, spyware, too many startup programs, etc. When a user can sit at a new, fully-patched machine with antivirus and antispyware installed and, without intent and within a matter of minutes, infect it so badly that it barely loads in half an hour and takes hours to fix, that's when I give up on that machine. What a user does SHOULD NOT affect the machine as a whole, only that user... yet even as a "limited" account on Windows you can wreak havoc.

Windows has an after-the-event method of fixing problems - once the virus is on there, and lots of people have also got it, some company might send out an update that may or may not catch all variants and won't help control the damage the virus has already caused. Vista even includes special integration for antivirus apps. Do people not realise how ironic it is that the OS that "invented" the problems with modern-day viruses and spyware even has a special place where you can install anti-virus so that it will integrate nicely? It's like having a car that comes with an easily accessible tool specially designed with the sole purpose of putting the wheels back on should they fall off on the motorway. So reassuring.

(Yes, DOS had viruses. DOS was back in the era of one-user full-admin home computers without sharing of disks or internet access and was a design disaster from the start... at least it bloody worked though. Sensible people had worked out in the 70's that that was just a stupid idea for multiple-users or internet-facing machines. Windows caught up with them in Windows XP/2003.)

There is actually a page on a website belonging to a Linux security enhancement package called SysMask that allows you to upload ANY bash, C or perl script. When you do, it compiles it, runs it and shows you the output! It will voluntarily and automatically run ANY code that ANYONE asks of it, as an ordinary user, because it's so sure of its security - just to prove how good it is. This is on the same bloody server that runs their own website, where you can download this code for free. It's never been taken down.

Like that site, I want before-the-event fixing - even IF someone deliberately runs some dangerous software, researching the latest holes, it can't affect the machine as a whole, can't destroy other people's files, can't put me in a state where I have to hope I have a recent image/backup. I don't trust Vista to do this... Windows 2000 was supposed to stop this. As was XP. As was 2003. Backups are for restoring files after unavoidable hardware damage - nothing else.

Now, on Linux, the damn computer actually bloody does what I ask of it. I don't have to be too careful about checking licensing for the software I install because it's *all* GPL or free (yes, I still check that it's GPL or otherwise free, though)... I'm not distributing my changes so it's all free for however many computers I want. No more license-counting, no more fighting activation systems that think they know better, no more serial codes, no more.

I used to spend HOURS on Windows hunting down decent freeware to get stuff done without having to shell out even more money but now I don't have to fill every system I own to the hilt with third-party freeware just to get the damn thing into a usable, secure state. It actually comes with everything I need, by default, installed securely.

At absolute worst, an automated update command (one that WORKS, does it when it's convenient FOR ME, doesn't force updates that are dangerous and doesn't kill one machine or another on a regular basis) keeps me up to date. Rollback? How about a plain TAR archive of every update I've ever installed, each of which can be cleanly uninstalled, along with a copy of every single package ever installed on the machine? Any package I want, I install. I don't have to con the software into thinking it's NOT installing over a later version, or hasn't already been uninstalled, or doesn't need the original setup disk, etc.

It's also quite difficult (without doing something incredibly stupid and deliberate while logged in as root) to ruin the actual software on the machine. Windows relies on so much being intact just to boot; Linux only needs any half-recent kernel boot disk to get to a fully functioning command line and repair system (including uninstalling/reinstalling/upgrading/downgrading any single software package on the entire machine, individually).

I get to choose what software runs, without some arcane registry entry loading up something I'm not aware of and am not even sure I need. Same for "services". Additionally, if I want a ten-second boot, I can have one. If I want flashy graphics, I can have them. If I WANT to boot into a command-line-only environment, I can. I have that choice available. And you know what? From that environment I can control every single setting that I could control within the GUI, if I wanted to. For every user. Without learning hexadecimal or which arcane GUID in the registry it's stored under.

I can actually TRUST Linux, from its filesystems to its hardware support to the individual software components to the firewall. I know that someone isn't going to say "well... we COULD let you have five users connected to your shares BUT we're not going to LET you". If something said that, the source code wouldn't know what had hit it after I'd put it back the way **I** want it. You're *my* computer, you can only do what you are told to do and **I** am the one in ultimate control of every single piece of software on my machine. If that means editing source, so be it. If that means I want to voluntarily install some binary (and therefore risk incompatibility, forced upgrades and undiagnosable problems) to get my job done, that's fine.

I don't have to feel like a criminal because I want to use one OS on two computers. I don't have to check in with the mothership every time my motherboard changes (which is quite often, because the only thing that's constant about my machine is its data - the drives change, the hardware changes all the time; I've still got data from my DOS days on my current hard drives).

There's very little hardware I own that Linux doesn't support, and all of that is non-essential and easily replaceable (one USB IrDA adaptor, one 56k Winmodem out of eight). I don't need to have drivers on hand for each and every part of it, or a checklist of which manufacturers bothered to pay MS to get their drivers certified and which didn't. I don't need to worry about the drivers interfering or only being able to run them with the most horribly annoying pieces of GUI software known to man (HP printer drivers, some of the arcane school-specific hardware I have etc).

If I get a crash, there is something real, something productive that **I** can do about it. Someone, somewhere will be vaguely interested in finding out why my machine crashed and, hopefully, fixing it. There are constantly new free upgrades to try out, there are config files to play with, there is source to look through, there's one of the most complex debugging systems known to man already sitting on my computer waiting to find the exact spot where something crashed and why, there are many unique, discrete components that can be eliminated one at a time to diagnose a problem, and I can even single-step individual changes to the kernel to find out which one caused my problem (git bisect, etc.).
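
For what it's worth, that kernel bisection isn't anything exotic - a rough sketch of the process (the version numbers are just examples, and it assumes a local clone of the kernel tree and a reproducible test):

    git bisect start
    git bisect bad v2.6.18      # a kernel that shows the problem
    git bisect good v2.6.17     # the last kernel known to work
    # build and boot the revision git checks out, test it, then report:
    git bisect good             # or "git bisect bad", as appropriate
    # repeat until git names the single commit that introduced the bug
    git bisect reset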

I don't get obscure problems like a certificate in a JAR file, associated with a famous piece of UPS monitoring software, expiring and thus killing the entire system without warning or a single error message, taking 100% CPU and stopping approximately 50% of programs from running at all (and if I did, I could actually track the cause down).

Who knows, I may even be able to code a fix myself without having to wait a year for the manufacturer to even acknowledge my problem.

And at the end of the day, there's nothing I can't do on my machine that I ever wanted to do on Windows. In fact, most of the tools I use now are so much more powerful it's saddening to think of the time that I've wasted trying to find Windows programs that could perform the same tasks. I **liked** batch files; I **wanted** to tweak every entry in my AUTOEXEC.BAT and CONFIG.SYS to get the most out of my very expensive hardware. I want to be able to choose and change between using my RAM for virtual storage, caching my drives when I organise all 500Gb of data on them, displaying a GUI so that I can get work done, etc.

My hardware is, to put it bluntly, crap yet expensive (to me). A 1GHz machine serves all my needs, but it may well have cost me two years' worth of donated/disposed-of hardware (which means several "free" jobs fixing other people's computers and a lot of effort and petrol), plus several hundred pounds of my hard-earned money, plus the time and effort to get it working how I want it.

When £1000's worth of hardware is sitting there telling ME that it won't do something because I haven't phoned Microsoft or haven't bought the right version, I find it diabolical that the most expensive appliance in my house is not controlled by me.

Windows 3.1 I bought into; 95/98 I used and tolerated for a LONG time, getting many useful hours out of it. By the time '98 was obsolete I'd fallen for MS's spiel far too many times and was getting tired of computers. An OS actually nearly put me, a computer fanatic, off computers. I didn't believe in or buy 2000, or XP, or 2003, and Vista will be no different.

I'll still have to use it, in work if nowhere else, but I'm hoping that I've made the right move here by moving away from cash-driven OS's to ones that are driven by a yearning for freedom, control, pride in their work and technical prowess - not by whether they've got a new glass interface that looks cool.

Wednesday, February 22, 2006

Linux desktop update

First off, the computer is still running fine. Problems encountered since the last post - umm... none? I updated a load of software (in keeping with my usual habits), everything from K3B to PHP, even though I hardly use some of them. K3B is my primary CD writing app so that obviously had to be updated; the rest were just for my peace of mind. I very nearly downgraded K3B by several revisions after Swaret found a "new" version on a Slackware mirror, when I had already installed a much higher revision from LinuxPackages that Swaret didn't seem to pick up on. Fortunately I was watching out, and I have confirmation turned on for every package upgrade Swaret tries.

Even if I had gone wrong, a simple upgradepkg command would solve the problem. I keep a directory full of packages that are installed on the machine (from LinuxPackages, my own, elsewhere etc.) separate from the official Slackware packages so that I can upgrade, revert or remove such software. This is again kept separate from software which I've had to compile manually to install on the machine, so that I can always find either the original source code or a package for anything I find on the machine.
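
In practice that boils down to nothing more than the standard Slackware package tools - a minimal sketch of the sort of thing I mean (the package names, versions and path are just examples, not the actual contents of my directory):

    installpkg  ~/packages/foo-1.0-i486-1.tgz              # install a third-party package
    upgradepkg  ~/packages/foo-1.1-i486-1.tgz              # move to a newer build
    upgradepkg --reinstall ~/packages/foo-1.1-i486-1.tgz   # put the same version back
    upgradepkg  ~/packages/foo-1.0-i486-1.tgz              # "upgrade" to the older build, i.e. revert
    removepkg   foo                                        # take it off the machine entirely

Because the package tools don't try to second-guess version numbers, "upgrading" to the older build is all a revert takes.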

If you remember, I did a full Slackware installation and that's EVERYTHING. I've got things like LaTeX installed which I haven't used since my university days but seeing that even with everything installed Slackware is still smaller than an equivalent Windows partition, I haven't bothered to remove anything (it's not like they are running as a background service or anything and I keep them up to date anyway so it's not a security risk).

I've been doing a lot of converting/copying/writing Video DVD's just lately which means that I've had to hunt down a suitable program. In the end a few choice command-lines did pretty much everything I needed them to.

Generally, I need to be able to convert anything (DivX, RealMedia, WMV, ASF, Quicktime, etc.) to MPEG-1 or MPEG-2 for putting onto a VideoCD or DVD-R for playing in ordinary DVD players. I also sometimes needed to copy a DVD when I didn't have any DVD-R's, which meant MPEG-2 DVD to MPEG-1 VCD conversion. We're talking home movies and web clips here, so there were no subtitles, chapters, multiple audio tracks or menus to worry about, just straight film clips. I'm sending them to Kuwait for my girlfriend's dad, so they have to work in any region's DVD player, his laptop, his school's machines, etc. without worrying about extra software, codec compatibility, regions or anything else.
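
The "choice command-lines" amount to little more than this sort of thing - a sketch using ffmpeg's VCD/DVD presets (the filenames are examples; mencoder or transcode can do the same job):

    ffmpeg -i clip.avi -target pal-vcd clip-vcd.mpg   # MPEG-1, ready for a Video CD
    ffmpeg -i clip.avi -target pal-dvd clip-dvd.mpg   # MPEG-2, ready for DVD authoring
    # swap pal- for ntsc- depending on what the destination player expects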

In the process, I spent days looking for a program that could write correctly-formed MPEG's onto a CD in Video CD format (something which was never that easy in Windows anyway as you needed to have stuff not only in the right MPEG format but also a strict filesystem layout) until I found out that, if you don't want menus or anything, K3B can do it for you. I'd been using it for months without even knowing it did that!

K3B handles writing to DVD just the same once the data is in the correct VOB etc. formats and I've got QDVDAuthor to do that for me.
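
QDVDAuthor is really just a front-end for dvdauthor underneath, so the underlying steps look roughly like this (a sketch with example filenames; K3B can take over for the actual burn instead of growisofs):

    dvdauthor -o dvd/ -t clip-dvd.mpg                  # build the VIDEO_TS/VOB structure
    dvdauthor -o dvd/ -T                               # write the table of contents
    growisofs -dvd-compat -Z /dev/dvd -dvd-video dvd/  # burn it, or hand dvd/ to K3B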

I solved a minor problem to do with clipboard contents transferring between TightVNC remote sessions and remote Windows computers (which I needed quite badly, since I've logged into the machine via VNC every day since I installed x11vnc). Installing autocutsel solved that problem instantly. I like the idea of having separate selection and clipboard buffers on Linux/Unix, but if you haven't been brought up on them, they don't get used properly. Autocutsel synchronises the two and lets you just have a "normal" clipboard.
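
Autocutsel just needs starting with the X session - a minimal sketch of the sort of lines involved (dropped into something like ~/.xinitrc; which startup file you use is your own choice):

    autocutsel -fork                      # keep the CLIPBOARD buffer synchronised
    autocutsel -selection PRIMARY -fork   # and do the same for the PRIMARY selection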

I also got NTP time synchronisation working after a "doh!" moment when I realised it needed UDP port 123 inbound to be open to the servers I wanted to use, not just outbound. A few good servers and it's ticking along nicely.
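
The missing rule was nothing more exotic than this (iptables syntax; the address is a placeholder, not one of the servers I actually use):

    # allow replies back in from a time server I query
    iptables -A INPUT -p udp -s 192.0.2.1 --sport 123 --dport 123 -j ACCEPT
    # or, statefully, without listing each server:
    iptables -A INPUT -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT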

I've rewritten all of my firewall scripts so that now I can open ports on demand (for stuff like bittorrent, to help it go faster), forward them to my girlfriend's machine, etc. In the process I "homogenised" all the scripts so that the same ones are used on startup, from my rc.firewall, from my portknock daemon and from the command line. This means that I only have to maintain one script for all actions, so opening the SSH port to my work IP on bootup uses the same script as when I portknock from somewhere else, or when I need to open a port to external access for remote VNC connections so that I can fix people's PC's. I can use a remote portknock to close a port that I opened locally from the command line without worrying about whether the rules will be applied in the right order, whether the correct rules will be removed, unintentional doubling-up of iptables rules, etc.
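
The homogenised script is basically a thin wrapper around iptables. A minimal sketch of the idea (the script name, argument layout and TCP-only assumption are my illustration here, not the real thing):

    #!/bin/sh
    # openport.sh open|close PORT [SOURCE]
    # called identically from rc.firewall, the portknock daemon and the command line
    ACTION="$1"
    PORT="$2"
    SRC="${3:-0.0.0.0/0}"

    case "$ACTION" in
      open)  iptables -I INPUT -p tcp -s "$SRC" --dport "$PORT" -j ACCEPT ;;
      close) iptables -D INPUT -p tcp -s "$SRC" --dport "$PORT" -j ACCEPT ;;
      *)     echo "usage: $0 open|close port [source]" >&2; exit 1 ;;
    esac

Because "close" deletes exactly the rule that "open" inserted, it doesn't matter which caller opened the port in the first place - boot script, portknock or me at a shell.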

Because of the VNC setup, I am able to sit at a remote machine, securely access my network and take over my girlfriend's computer to do the bulk of things like MPEG conversions (her computer is three times faster than anything I use, as I don't generally need CPU speed). That computer reads from, and saves its results to, a Samba share on a journalled filesystem on the Linux machine (which has the most disk space and which I can also control simultaneously, to put those same files onto a DVD or VCD when they have finished converting).

I've also got UltraVNC running under Wine so that I can accept UltraVNC SC connections to my machine. So if a school has a problem, they can log in and double-click an icon that I've left on some of their servers, which initiates an encrypted reverse connection to my machine and lets me take over theirs and fix whatever the problem is. When I'm done, I close the connection and the software at their end returns control. The beauty is that with UltraVNC SC, it's a single executable on the remote end that needs no configuration or installation and cleans up after itself when I'm done, so it's the sort of thing that I can tell people to download on the spur of the moment and, if they have a broadband connection, easily fix their machines without leaving the sofa.

Because UltraVNC is Windows-only and uses non-standard VNC extensions, I had to use the Windows client for it under Wine. I've already got Crossover Office but I was hearing interesting things coming out of the main Wine releases so I decided to install Wine too. It ended up being easier than I thought and they didn't interfere at all after a bit of PATH-juggling, so now if I type wine, I get wine but my icons for Word etc. still use Crossover Office for which I can get support. Hence, the UltraVNC icon now uses Wine while the supported Office apps use Crossover. (I did try Word in Wine and it seemed fine but I'd rather stick with something that I know works, is a supported configuration and has someone I've paid money to on the other end so that I can shout at them if it goes wrong).

The computer consistently achieves 40-50 days of uptime and would be permanently on barring hardware failure were it not for my insistence on playing about with scripts that load at login so that, when I do next have to reboot, I don't have to worry about whether I enabled x11vnc on startup or configured the firewall to let through SSH connections from my work IP. I also upgrade the kernel to the latest stable release whenever I can, which means LILO changes and reboots, so a reboot once a month or so is no big deal, especially seeing as it is ME deciding that it needs a reboot (I still can't believe the number of times a Windows machine has to reboot from initial purchase through to a working system with all your software).
[On a side note - I noted the other day that my print server achieved over 380 days' uptime while being used quite a lot EVERY SINGLE DAY by myself and my girlfriend. Considering that the lights go dim and the UPS switches to battery about twice a year, that's quite impressive, and it's not even running through the UPS.]
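
Back to those kernel updates: the routine is short enough that the occasional reboot really is the only cost. Roughly (the version number and image name are examples, not a recommendation):

    make bzImage modules && make modules_install
    cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.x
    # point /etc/lilo.conf at the new image, keeping the old one as a fallback, then:
    lilo
    shutdown -r now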

I've configured stuff like SMART and motherboard sensor logging using lm_sensors (a long time ago) and now have more peace of mind than I did with Windows, as I can see the exact factors that affect the values - this is very useful for hard disk temperatures and fan speeds. I can actually see which components produce the heat, which are cooled if I open a side panel, which ones are more sensitive to CD-writers spinning up, etc. My case is crammed full of hardware and cables, so this is quite vital, as there is no room for proper airflow in the case and I can't personally afford to upgrade when this system already works well within safe parameters.
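
The monitoring itself is just the standard tools being read and logged periodically - a quick sketch of the commands involved (device names will obviously differ on other machines):

    sensors                                      # lm_sensors: temperatures, fan speeds, voltages
    smartctl -A /dev/hda | grep -i temperature   # current drive temperature from the SMART attributes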

I already have a hardware temperature/fan monitor which is separate from the motherboard sensors, and it throws an absolute wobbler if a fan does not start when the machine is powered on. This happens sometimes (the fan seems to have trouble on startup on occasion - about once every 20 or so boots) and the computer is actually quite happy without that fan spinning at all, but it's much nicer for me to know when it's not spinning and to power down again. The hardware monitor was cheap but works on a very simple system (thermistors and fan connectors connected to an external chip powered by a drive power cable) and doesn't rely on my ageing BIOS having to notice the problem (which it generally doesn't with fan speeds) to shut down the machine.

That same hardware monitor will also beep like hell and shut the power off if the power supply goes over-temperature (I use a fanless power supply so this was just another piece of paranoia). Additionally I now have motherboard monitoring and SMART monitoring (including disk temperature) which gives me peace of mind, especially considering the age of most of my hardware.

If any major component of the computer overheats, goes overvoltage or stops working, I KNOW for sure that either Linux, the hardware monitors or the UPS will shut the computer down. This is very important to me given that this computer runs 24-7 in a household environment. I doubt that Windows would shut you down if your drives started to fail or went over temperature, unless you spent a lot of time and effort getting some software to do it for you.

SMART also runs self-tests on the drives overnight (when things like slocate also do their business and update the filesystem search indexes for me) and constantly updates me on every performance change that occurs (for some reason one drive flickers back and forth between two consecutive values for Seek Time Performance which I assume is just natural variation) so hopefully I would catch most serious drive problems early enough to replace and restore the drive.
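
The overnight self-tests are driven by smartd - a minimal sketch of the sort of /etc/smartd.conf line that does it (the device, mail address and schedule here are examples, not my exact configuration):

    # monitor all attributes, mail root on problems, long self-test in the early hours on Sundays
    /dev/hda -a -m root -s L/../../7/03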

My girlfriend (someone who didn't know what Windows was until she had to use it on her law course a few years ago) is quite capable of turning the machine on or logging into it, doing whatever she needs to in Opera (web, email, etc.) and logging off again. When her computer's down and she needs to enter results for work, the Linux computer is always there and just works for her.

I've got OpenOffice installed now too, as an office backup and also to use for spreadsheets, as I have a licensed copy of Word for the Linux machine but nothing else (yes, I actually have a hologrammed original MS copy of just Word 2000 on CD). I'm also using Portable OpenOffice.org on my USB key, so having it on the desktop helps with compatibility and lets me familiarise myself with it. Something that's quite funny is that the OpenOffice.org spreadsheet program manages to handle the complex XLS spreadsheet I use for my invoicing with the same functionality and without any of the weird "out of resources" errors I get with Excel (despite following every piece of advice known to man on combatting that error in Excel). It's not even THAT complex a spreadsheet; it's just got a lot of conditional formatting to highlight monies owed to me, etc.

Wireless works, when I need it to, and I'm thinking of having it permanently on now that I'm sure of the firewalling. This would be primarily so that I can set up an old relic of a computer in our spare room to form SSH tunnels over the wireless to the main Linux machine so that guests can check email etc. without me having to run cables upstairs. The setup works, I've tested it, but I've just got to shrink the machine a bit as it's only a small spare room and a big chunky desktop case is over the top for a remote-access port. If I had unlimited funds, I'd get a mini-ITX computer up there and I'd also fit it with a WinTV card and a security camera so that it can feed the signal back to the other computer in the house that's running a security camera and motion detection software.
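
The spare-room box only has to run something like this over the wireless (addresses, ports and the username are examples, not my real setup):

    # forward a local port to a proxy/service on the main Linux machine
    ssh -N -L 3128:localhost:3128 guest@192.168.1.2
    # or give guests a general-purpose SOCKS proxy instead
    ssh -N -D 1080 guest@192.168.1.2

Either way, everything that crosses the wireless is inside the SSH tunnel, which is the whole point of the exercise.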

Still no show-stoppers. In fact, if anything, my lack of disk space is my greatest problem at the moment, mainly because Slackware only uses a single 10Gb partition and I've filled the rest up with junk just because it was convenient - stuff like Gb's of DVD VOB's and source MPEG's that I've already converted and have elsewhere, but just haven't got around to deleting yet.

Saturday, January 14, 2006

My own piece of Linux "evangelism"

It's no surprise that I like to sing the praises of Linux. I've been using it, in one form or another, since the day I discovered its existence.

Increasingly, however, when I read articles about Linux I am constantly annoyed at people's frustration that it's not how they want it. They read the part that said that Linux is Open Source (I don't usually capitalise those last two words but I feel I should start doing it) and can be customised and therefore they expect it to automatically do whatever they want.

I don't know when my annoyance at the lack of understanding of this particular "mantra" started but it has been recently exacerbated by articles on binary kernel drivers among others. I regularly contribute to several forums on Linux and School IT in general and a lot of people just don't seem to "get" Linux at all.

The binary kernel drivers argument was one of the first arguments I've had online where I've been so annoyed at the lack of understanding that I've pursued the question but only after going away for a while to make sure that my reply was calm enough. The discussion centred on the fact that Linux does not allow a stable kernel interface for binary drivers.

Now, the entire legality of binary drivers within Linux is an issue which hasn't surfaced properly yet, and I can see that one day someone is going to have some nasty jaws appear out of the water and take them by surprise because they've misread the GPL. Leaving that aside for one moment, binary drivers are the bane of any Linux supporter.

Binary drivers are those without source code, like almost every hardware driver that exists for Windows. Companies like nVidia release drivers for their hardware in this form to avoid losing their precious patents etc. to a bunch of people who have bought their hardware and just want to use it. Fair enough, they have to make a living and if part of that living involves never releasing source code, that's up to them.

As a case in point, there do exist Open Source drivers for nVidia cards, but they just don't feature the 3D acceleration that the binary drivers do (so you can use nVidia cards in a Linux system, but they won't have the same speed when playing games; the average desktop, however, would run just the same). Therefore, in this case, the binary drivers are really only needed by people who have spare 3D cards lying around in Linux machines that they are using at least some of the time for gaming. Professional users of 3D would probably not be using the nVidia binaries, or even Linux.

nVidia achieved this marvel of modern technology (running 3D applications on a 3D-compatible system with an nVidia card, where someone else has already written 99% of the other necessary supporting code for them) by using binary drivers, which plug into the kernel at certain points. They evaded most of the technical and (possibly) legal limitations of interfacing with various versions of the Linux kernel by releasing an Open Source kernel "wrapper" which lets the binary driver load itself without caring about what kernel it's actually running under, and without the driver having to be rewritten for every kernel change.

The argument I had on Slashdot centred on people releasing binary drivers for Linux. The general overview was that Linux people were hostile and unhelpful to people writing binary drivers, that the manufacturers constantly had to keep updating the drivers for various kernels and that a stable binary driver API would help matters.

Needless to say, my reply was less than assistive in getting support for binary drivers in an Open Source kernel ("First, I think you're missing the fact that, overall, Linux doesn't care that you can't put your binary-only drivers on it").

The argument centred on the fact that companies will usually only release drivers as binary modules because they spend so long developing and testing their drivers that they are wasting an extraordinary amount of money if they then throw all that work out for anyone to copy it.

They don't seem to take into account that Linux is entirely built on years of work that people, including many large corporations such as IBM, routinely give away for "nothing". Do you really think that the trade secrets in your hardware design are so fantastic that a) nobody has thought of them or b) nobody who has your binary drivers and hardware could reverse engineer them, legally, anyway?

Many of the network card drivers in Linux, for example, are reverse-engineered or written from open specifications and then placed under the GPL or another Open Source license. I can't see where this could pose a problem for the hardware manufacturer, as they are effectively getting "free" drivers written for them, at little or no expense, as well as having those drivers supported and maintained for the foreseeable future (at least until the hardware is considered obsolete, and possibly even after that).

To quote Donald Becker's page (although this quote was written at an indeterminable date): "Linux now supports almost every current-production PCI Fast and Gigabit Ethernet chip!".

So every manufacturer of Ethernet cards has effectively had Open Source drivers written for them and distributed worldwide for free. I don't see any network card companies complaining about the fact that pretty much any network card inserted into a Linux machine is detected and used without having to download and install any driver, binary or otherwise.

It may be that the patents and trade secrets covering a network card are far fewer or of less importance than those covering a 3D graphics acceleration card. However, given the number of patents on items like TCP offload engines and the like, it seems unlikely.

If you are writing drivers for hardware, surely you'd be glad that someone is willing to ask you for the specifications of the hardware so that they can write and maintain such a massive, difficult piece of code as a hardware driver for another platform that you probably will never be able to support as well as your main platform?

The main misunderstanding comes from the binary driver "API" idea, the vision of a single standard interface to ANY piece of hardware within a computer and all of the associated kernel functions that will NEVER change. That sounds almost easy, no?

The Linux kernel is never easy. It changes every single day of its life. Unlike the major desktop operating system of Microsoft Windows, Linux is updated almost every minute by someone, somewhere. When the Linux IDE code was considered obsolete, unmaintainable and unsustainable, it was rewritten from the ground up. When that effort failed to stabilise quickly enough, it was restarted again on a smaller scale.

When the scheduler started experiencing problems on systems with hundreds of CPU's, it was ripped out, modularised and put back in. When USB and Firewire were introduced, they were bolted on to the existing frameworks and then rewritten time and time again to obtain a set of code that was maintainable and extendible for new standards like USB2.

At each point, everything was redesigned, not just re-engineered. People went back to the drawing board and said "Why are we still doing things for hardware we no longer support?", "Why are we bodging CD-writing by making an IDE-SCSI hybrid module?", "Why can't we use the SCSI code we already have to support these new fangled USB mass storage devices as well?"

Each time, any stable ABI would have broken. Each time, a new version of the stable ABI would have had to be released. It's not a stable ABI if it keeps breaking. On the other hand, if someone had said "SCSI works this way and you must not change it", then many things would not have been possible, or would have meant reinventing the wheel for them to be supported.

When you consider the number of hardware interfaces that Linux supports (PCMCIA, PATA, SATA, PCI, ISA, MCA, PCI-E, PCI-X, AGP, USB, Firewire, I2C - the list goes on, and that's just for the x86 platform) and take into account the number of drivers for each style of interface (some of which share something like 99% of their code, most of which are completely individual), fixing a stable interface gets harder and harder without bringing the source code to an unmanageable level.

Yes, Windows does it to an extent. However, try to run a Windows 98 scanner driver in Windows XP. Most of them won't let you do it. The interfaces changed and are no longer compatible. Try and install an ISA card in a machine running XP, it won't recognise it. Microsoft obsoleted ISA cards because they felt like it. That's another issue, but what if Linux were to obsolete a major subsystem? First, there would be outcry, secondly, the code would be around so that people who WANTED it could still use it.

Try and get USB Mass Storage devices working on 98. You usually cannot without a specialised driver and even then, only on 98 Second Edition because the USB in 95/98 did not support it properly. The standard interface they chose in 95/98 WAS NOT COMPREHENSIVE ENOUGH to support Mass Storage Devices and had to be changed. 98SE made it possible by introducing new access methods but the drivers are usually totally different to those used for the same hardware under Windows 2000 or above.

Try and get most older games working in Windows 2000 or above. It's possible for the vast majority but far from easy and it's usually easier to run an emulator like DOSBOX or QEmu to do it because the interfaces and standards used in the DOS, Windows 3.1, Windows 95, 98 etc. era were obsoleted and changed and updated and even removed because they were incompatible with the "new" ideas going into later versions of Windows (e.g. early Windows versions had no real concept of multiple users on a single machine, early DOS games expected complete control of a processor in order to run and exact timings which aren't practical in a modern multi-threaded operating system).

Throughout the history of any operating system, and sometimes even of applications, the set standards that seemed so perfect 10 years ago are never used properly, or have to be worked around to make them work at all (consider things like 48-bit LBA drive access), and usually that means having to change the interface or corrupting it to your purposes.

Linux does not want to spend the next twenty years supporting its own dreadful mistakes and misjudgements. Being Open Source, it does not need to. If something's wrong, the developers can change anything they like, because no part of the system has to stay as it is. Linux is a liquid concept. However, should such changes occur, many of the current binary drivers for Linux are likely to need substantial support in order to continue working properly, if at all. That support can only come from people with the driver's source code, i.e. the manufacturer.

When a kernel interface change stops PCI-Express cards from being accessed the way they used to be, nVidia may be able to bodge something in their kernel wrappers or they may have to recompile their binary drivers to take account of it. Either way, they would have to spend a lot of time and money to support a change that is completely outside of their core business. People would be moaning at them for not doing their job. They would have to keep up, as they do today, with every change. And ten years from now, when they go bust, none of us will be able to use nVidia 3D acceleration on anything but the last kernel they supported.

Then again, they may just decide that it's too much bother and stop producing drivers for Linux. Were they Open Source or, ideally, in the kernel, the updating would *probably* be done for them automatically and without charge. They would be tested, without charge, by far more people than any beta test could summon up, with far more exotic configurations. Every time the kernel changed, they would be kept working until the day that there wasn't a single competent person in the world who wanted to keep supporting their hardware.

When IPv6 came along, Windows and Linux supported it by redesigning every piece of their networking code to take account of it. When IPv7, or whatever is next planned, comes along, you're going to have to redesign everything all over again or put in some horrible backwards-compatibility kludge to help older programs use it. That means that you will forever have to carry your older systems with you and all their backwards-compatibility layers, or you could just redesign the networking code to take account of it all for you, so that old programs don't need to change and new programs can use the new features. They may be entirely separate systems, but why introduce a whole new layer when you could just slightly redefine one that's been working for years?

Binary drivers die a death as soon as the manufacturer stops updating them, and are wounded by every kernel upgrade. Open Source drivers live for as long as there is a single person in the world willing to support them, barely feel a bump on a kernel upgrade and will stay in hibernation for as long as their source code exists, ready to be resurrected by anyone who wants to try them out (say, a computer museum curator who wants to run some ancient hardware card that has to be soldered onto a modern connector and have its drivers tweaked to support the new, bodgeful interface).

A stable binary interface is impossible and totally against the idea of having a system that anyone can submit an idea for improvement to. When someone invents a better way of running hardware, all the internals HAVE to change or you end up with a mess of code that nobody in their right mind wants to touch and certainly not one which you would want people to be learning *from*.

Linux is bigger than an operating system. It's bigger than the companies that use it for commercial gain. It's bigger than the millions of people around the globe that use it every single day, whether they know it or not. Linux (and Open Source in general) is about making things work, making them work well, making them work for the foreseeable future, letting anyone see how they work, letting people come up with ideas for how to make them work better and keeping them working. None of those goals can be reliably met by using junk like source-less binary drivers or "stable" binary interfaces.

The annoying part is not that people disagree with the above, it's that they demand that Linux should change and not themselves. If Linux does not do what they want, Linux is in the wrong. How often do they also swear at how stupid the design of some internal Windows API is? Linux is an emotional creature. It does not care about people who don't care about it.

If you want to rip Linux off and sell it with a thousand binary components and you can find a way to do it legally, even if you sell a unit to every single person on the planet, then you are on your own and Linux won't care that they break your system in their next upgrade. If you decide to Open your drivers and do the same, the chances are that someone will come along and help you to keep your stuff working, especially if it means that they get to play about with your system, ask questions, try new things, fix problems that only they have and can go off and learn from your code.

Open Source encourages software evolution. The better-written and better-performing code wins: its source gets incorporated into more projects, it gets learned by more people, who spread it to yet more code. Before long, every project needs this code to work properly, ensuring its own long life. However, if something EVEN BETTER comes along after that, it will be usurped for the greater good. If it's really better, it will take you over and smother you. If it's not, it will just linger and die and you will remain on your throne.

Binary drivers are one thing that I'd like to see smothered quite quickly. They are not necessarily better written or better performing but are kept there by corporations that are trying to gain money and recognition from Open Source without giving anything back. They legally, ethically and practically hinder alternatives from cropping up to usurp them as they know that they would be quickly smothered and left for dead. They need to maintain their little monopolies over their precious property. There is no analogy in nature for such a beast.

The fact that you can't manage to create a driver for the OS of my choice just means I won't buy your gear, or recommend it, or maybe even consider it. Whining about the lack of co-operation from Linux people, when I am happily running the product of years of their freely-given hard graft, is not going to get you any sympathy from me. If the drivers for your device came from your company along with the full co-operation of your company, I'd sing your praises and buy your gear. If you just want to cling onto the back of this Linux thing that people seem to be installing more of nowadays but not give anything back, don't expect Linux or its users to do you any favours either. I will only pile my money into something that I know will last me a long time and give me good value for money.

And now an admission... I own an nVidia graphics card - a weird one. It's a PCI Geforce4 440MX. Yes PCI. Not AGP or PCI-E. Bog-standard, old-fashioned PCI. I want to use it (it's my most expensive graphics card purchase ever at £50) and I don't want to replace it. That's not much money but the card is CAPABLE of doing everything I need. If I needed PCI-E levels of performance, I would have a PCI-E card.

I have it in my Linux desktop machine primarily because that's what the machine used in its previous Windows incarnation. I used to play Counter-Strike and the motherboard does not have an AGP slot, so the GeForce fitted the bill nicely. In Linux I have little or no use for its 3D features (besides possibly the occasional game of TuxRacer), but it runs faster than the motherboard's onboard graphics.

I voluntarily use the nVidia binary drivers. The reason is that they provide better performance playing video, 3D etc. They prove that the card is capable of doing what I want. The binary drivers are not too much of a bind for me to recompile every time I change the kernel. They don't cause any crashes at all and the card works perfectly. If I need to diagnose a problem, rebooting without the nVidia driver but with the Open Source nv driver is not a big deal.
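
In practice, "recompiling" just means re-running nVidia's installer after each kernel change so that it rebuilds its kernel module against the new source tree - roughly this (the version in the filename is a placeholder, not a specific release I'm recommending):

    sh NVIDIA-Linux-x86-1.0-xxxx-pkg1.run   # installer version is a placeholder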

However, if nVidia updates their drivers to a version that doesn't support my card, introduces bugs, etc., I will not be upgrading to those drivers. If the Linux kernel people manage the technical/legal/ethical feat of making sure that nVidia cannot distribute any drivers but GPL ones, I will instantly revert to the Open Source nv driver, unless nVidia DO release a GPL one.

I won't be petitioning the Linux kernel people, I won't be rushing out to buy a new card that has got OS drivers, I won't buy nVidia's newest card that does run on Open Source drivers. I WILL be complaining to nVidia for not releasing the type of driver they should have released in the first place. I will use whatever works best for my current hardware to legally and technically interoperate with the rest of my machine's software.

If that means that, ultimately, my performance is reduced to poor levels because of having to use an inferior OS driver, I will be blaming nVidia for not bothering to contribute to that driver or to enable features that their hardware is perfectly capable of, and I will adjust my next purchase accordingly - towards a company that does support OS drivers and does not artificially limit the capabilities of a piece of hardware by refusing to openly publish code or specifications for it.

The nv driver already has what I need to run the card. Anything that isn't in the nv driver is due to nVidia not being co-operative.

I will not stop updating my kernel to the latest stable version, even if that means I break the nVidia card... an up-to-date kernel is worth much more than a single, replaceable driver.

I will not allow the kernel maintainers to be blamed for nVidia's lack of assistance. They do not care and never have cared about binary drivers and have stuck to their word on that.

I will not allow the nv maintainers to be blamed for nVidia's lack of assistance. They tried their best to get SOMETHING out of a company that wanted to give NOTHING.

I accept that I always knew I would eventually run into this problem, because I chose to use binary drivers for that component.

I will not be surprised if those same binary drivers stop working or fall into decay one day.

At least if I had open-source drivers which were capable of driving the card to its full capabilities, I could keep running them through whatever legal or ethical turmoil Linux or nVidia goes through - that's the point of the GPL. If all else fails, I can still change the code myself (and I am capable of doing that) to make it work again. I don't have to rely on company X to keep my card working for me, breaking god-knows-what-else in the process. And I know that my hardware isn't part of some secret cover-up of something that I really don't care about when all I want to do is play TuxRacer.