Saturday, December 31, 2005

Laptop security

Recently, what with Christmas being seen as an ideal time for theft, I've been in meetings concerning the security of computer hardware, most notably laptops and projectors. Apparently I work in the second-worst location in the UK for thefts from schools.

As some of these meetings were sprung upon me without warning, I wasn't able to think them through as much as I'd have liked. As a result, I've been double-checking my advice to the schools to see if I can come up with any better ideas.

Current advice from the police includes chemical marking of property, securing the building, displaying signs and implementing CCTV (although the latter is not really pushed as a solution, more as a deterrent).

According to the second-hand feedback I've been hearing from the schools, the local police are having a tough time: schools are having lots of laptops and projectors stolen, the thieves are filing off the serial numbers, and the police are then unable to confirm who the property belongs to. There are even stories of the stolen property having to be returned to the thief after a while because the police are unable to prove that it isn't the thief's.

CCTV is proving all but useless as the thieves come prepared and cover all identifying parts of their body or clothing. Chemical marks are easily discovered with UV lamps and removed, even if it means damaging the property. There's also a growing black market in projector bulbs, as these are not serialised and are therefore almost impossible to trace back to a source, as well as being an easily removable, high-value commodity.

All this led me to thinking about laptop security. Currently, the only physical way of securing a laptop is the so-called "Kensington lock", a small standardised hole in the chassis into which a lock can be placed and also easily removed, sometimes without any damage at all to the laptop.

So if you can't prevent them being stolen, is there something else you could do? Each computer processor has a unique serial number burned into the silicon of the chip itself. However, there is usually no way to read this number because most manufacturers disable the option by default, and a thief can easily disable the same option too. This means that not only is it time-consuming to read the number from a laptop on purchase, it's easily disabled as well. Although the number could be checked if the laptop were physically recovered, there's no way to read it remotely.

Lots of software packages exist to "phone home". That is, every time the machine is connected to the internet, the software sends a small packet describing its location/phone number/other identifying pieces of information to a central server. If the laptop is ever reported stolen, this information is passed to the police so the thief is "caught" as soon as they go online.

The major flaw here is that a thief is going to be aware of such tricks and any professional would probably blank the hard drive upon receipt or even replace the entire drive unit and then install a clean version of the operating system. Software piracy would not be a big deal to a laptop thief.

Additionally, any hardware means of doing the same would also be detected and removed/circumvented. Or would it?

Why doesn't someone add a "call-home" modem/network card to standard laptop chipsets? Most laptops have built-in modems/network cards nowadays and they are the devices that will eventually, physically connect to the Internet (I'm assuming that any stolen laptop in use today will most probably go on the Internet at some point in its life, which is not an unreasonable assumption).

Obviously, the modem/network card would have to call home without the thief knowing. Let's assume, therefore, that the software driver for the modem/network card comes in two parts. On the one hand, it identifies itself as a standard modem/network card, supported by the built-in Windows drivers or the same drivers as a non-call-home device, so it does not give away its purpose. However, the driver originally supplied with the hardware would also include an option to send a series of innocent-looking AT commands, or even packets to localhost. These packets would set a hardware password, and perhaps other information such as an IP address or email address, all stored inside the chipset firmware itself.

Once the password is set, every time the device connects to the Internet (something the hardware itself can detect and act upon without software assistance), the device is "activated". From then on, if the device driver does not send the password via that series of special packets/AT commands, the hardware itself injects packets with the intent of sending a call-home packet/email to a central server.
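
To make the idea concrete, here is a very rough sketch, in Python rather than firmware, of the decision logic I have in mind. Everything in it is hypothetical (the command names, the server address, the serial number are all invented), since no such chipset exists; it's the shape of the behaviour, not an implementation.

```python
# Hypothetical sketch only: models the proposed chipset behaviour, not real hardware.

STORED_PASSWORD = "etched-into-firmware"   # set once by the owner's "secure" driver
CALL_HOME_HOST = "recovery.example.com"    # manufacturer's server, or the owner's own address
SERIAL_NUMBER = "LAPTOP-0001"              # burned in at manufacture

class CallHomeChipset:
    def __init__(self):
        self.unlocked = False   # has the driver presented the password this session?

    def receive_driver_command(self, command, argument=None):
        # The "secure" driver sends an innocent-looking command carrying the password.
        # A stock driver never sends it, so unlocked stays False.
        if command == "UNLOCK" and argument == STORED_PASSWORD:
            self.unlocked = True

    def on_internet_connection(self, details):
        # Called whenever the hardware notices it has gone online.
        if not self.unlocked:
            self.inject_packet(CALL_HOME_HOST, {"serial": SERIAL_NUMBER, "details": details})

    def inject_packet(self, host, payload):
        # In real silicon this would happen below the operating system's view of the traffic.
        print(f"call-home to {host}: {payload}")
```

The point is that the default state is "call home"; only the correct password, presented every session, switches that behaviour off.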

This central server would most probably be set up by the hardware manufacturer, but it could also be set by customers themselves to be an email address of their own. Whenever a standard, non-password driver is used for the device (such as you would get after a reinstallation of the operating system), it would attempt to send this packet/email, which would include details such as the phone number called, the external IP address or even a short history of phone numbers dialled.

However, even with the "correct" password-driven drivers installed, you would HAVE to know the password in order for the device to activate normally (or even activate at all) without sending such call-home information. If the thief was wise enough to know that the laptop contained such hardware, they might try to install the specialised drivers. However, without the password that is etched into the chipset firmware by the manufacturer/owner, there is no way the thief could disable the call-home functionality or change the password. This won't have stopped them stealing the laptop, but it will seriously limit its resale value; a laptop without Internet access is severely limited in its capabilities.

You could even add functionality to the "secure" drivers (the ones that ordinary customers will have pre-installed for them) so that the device won't initialise the modem/network unless it receives the correct password from the user. This would prevent the thief from just using the pre-installed drivers, effectively forcing you to "log on" to the modem/network card before you can use it.

With such controls in untouchable silicon on the device that controls the modem, network card, wireless card, etc. a thief would be left with a crippled laptop, unable to go online for fear of being caught.

Even wiping the entire disk would do nothing: the specialised drivers would be gone, so the chipset would "know" that it was being used on a machine that may have been stolen and wiped. If the device runs on a standardised driver (e.g. a plain 56k AT command set or an NE2000-compatible network card), then a thief reinstalling the system would be unaware that, by using the standard Windows driver, they are advertising to the chipset that the system has been stolen. Only the NE2000 driver which also sends the correct password (most probably obtained from the user at boot-time) would be able to circumvent the call-home functionality.

The original owner would, of course, be perfectly capable of reinstalling their operating system, as they know the password to the device and are in possession of the drivers that send it. Even if the original owner sold the laptop, the person they legitimately passed it on to could still use non-secure drivers. The laptop could handshake with the central server to see if it has been reported stolen before sending such a packet or, at worst, send an email to an address whenever it connects. This might even be a good audit tool for companies to see just how much their laptops get used.

Combine this with the fact that the concept is cross-platform and operating-system independent (so long as two drivers exist: a standard one that can use the hardware normally and a specialised driver that sends the special commands to the device upon initialisation) and you have a pretty foolproof system. You could ask for the password on boot (most corporate laptops have boot-time passwords anyway, and the functionality could be implemented in the BIOS rather than the OS drivers), on login or on use of the device. Inexperienced thieves would be caught the second they used the laptop online; experienced ones would be deterred, or at least know that the value of a laptop with such a system would be severely limited.

Just an idea I had ticking away in the back of my mind.

Sunday, November 20, 2005

The World's Best Computer Games #1 - Quake 1

I thought I would try my hand at saying not only which computer games I think were the best, but also how they could have been made better. Firstly, the list of games. Although everybody else in the world is likely to disagree, the list is based on what games I find myself wanting to play when I have exhausted whatever the most recent game I installed was. I'm not a huge games player but I have played an awful lot over the years and have found an unfortunate trend for modern games to be mostly devoid of gameplay. This list should, hopefully, point people at games that are fun to play, engaging, have lasting appeal and are not sold as an advert for a CGI workstation.

I'll quickly list a few of the candidates; however, this list is not in order and is almost certain to change and be updated as I describe each game in detail, along with where I think their sequels or extension packs should have gone.

The Settlers (PC)
Chaos / Lords of Chaos (Spectrum)
Syndicate (PC)
F29 Retaliator (PC)
Final Fight (Arcade)
Spy Hunter (Arcade)
Quake (PC)

First, though, I'll start with Quake. For the sake of simplicity, I'm going to take Quake 1, GLQuake and both "official" Mission Packs together.

Quake was (I believe) the first ever truly three-dimensional, fully textured game on a standard computer. My first exposure to it was on a Pentium 133MHz. To set the scene, 3D graphics cards did not really exist except in the most extreme of PCs, the Internet was there but mostly limited to specialist uses, and the OS of choice was DOS or Windows 3.1.

At the time, Doom and some other minor shareware offerings were the only games to have any sort of 3D. The best 3D offering up until Quake had, in fact, been Doom, with its textured walls that didn't scale well, 2D-only movement (the 3D was faked... you could see out of windows and go up stairs but it was all a clever trick... you could never look up or down), blocky sprites and an atmosphere that no game before it had ever shown.

Doom was no more than a more convincing version of the "3D" techniques used in previous titles of a similar nature, such as Wolfenstein 3D and Spear of Destiny. Quake, however, managed to invent a whole new level of gameplay. Fully 3D environments, introducing full mouse-look, had most people spending the first hour with their new game just looking at the scenery and eagerly hunting out things to look at.

Quake was beautiful for its time. 3D cards showed off their power and uses like never before, and most people who bought a 3D card at this time bought it so they could play Quake. A 3D card in this era meant a 3DFX card - no set standard of 3D acceleration in DOS existed until 3DFX's Glide arrived, thereby making itself the standard for years to come. Now virtually unheard of, 3DFX and Glide set the pace for home 3D graphics. Without them, the nVidias and ATIs of today wouldn't exist.

3DFX cards were also able to use SLI (Scan Line Interleave). PCs at the time were pretty puny for any serious 3D work, so being able to install two 3DFX cards in one PC and have them share the drawing work helped considerably, at a price point where there wasn't a faster processor available to just brute-force the calculations.

Quake showed off Glide's capabilities to the point where, ever since, 3D cards have been basically an essential part of a gaming PC. Modern computers are approaching a point where 3D and 2D are integrated so tightly, processors are so over-powered and consumers so demanding that most PCs are capable of running even the most powerful games without "specialist" 3D hardware, just an ordinary graphics card. Back in Quake's day, making people go out and buy an expensive component was the game's one main drawback, but it easily overcame it by making such good use of every processor tick it could.

The 3D environments were used well. Monsters jumped out of shadows, attacked from every conceivable angle, poured out of secret doorways that you wouldn't notice if you weren't paying attention. Secret areas were difficult to get to, but many players found them just by trying out things that they'd never been able to do before... jumping across obstacles, leaping onto tiny platforms, looking for inconsistencies in room sizes, hunting for secret buttons above and below them.

Lighting was used to good effect: rooms going dark and strange noises coming from behind you were guaranteed to make you panic, critical areas were visible from the rest of the map, making you fight your way through to get there, and slits of light under walls indicated secret areas.

The sounds were incredible, though no scarier than the noises of Doom, except that now full stereo and an environment to support your movements made them even more realistic. But what made the game was its easy introduction to such new environments, the balance of the game, the level design and the relatively new multiplayer facilities.

Multiplayer was only used in relatively basic games up until the likes of Quake and Doom. Quake supported local network play (traditional IPX and newer TCP/IP), effectively creating the first LAN parties, as well as modem and serial connections. I can remember playing Quake with certain mods over a 56,000 bps serial connection (I remember distinctly because we didn't have any sort of connection between the machines beforehand and had to bodge a 9-pin to 9-pin cable using a variety of other cables and adaptors. The final cable ended up over 6 metres in length with numerous adaptors but worked at full speed without any issues - I still have the cable and still use it because, by disconnecting it at certain points, you can form ANY serial connection you're ever likely to need, with each end being male or female, null-modem or plain, 9 or 25 pin.)

Talking about mods... the fact that people could make maps and modify the game added greatly to its use. I spent many hours combining the best parts of my favourite two addons (both of which changed everything from the weapons to the monsters to the AI) and removing features they had that I didn't like (things like instantaneous weapon respawning) while adding in other stuff that I wanted. This wasn't tweaking or playing about; you were actually given a fully functional programming language (QuakeC) in which to do whatever you wanted. It was a little buggy and pernickety but once you got the hang of it you could do almost anything. In fact, that's how the addon packs came about - the better AI in the official addons (codenamed hipnotic and rogue after the companies that made them) worked purely by using new code in QuakeC, with new models and new maps. The Quake executable never really changed at all, except to support loading from different filesets.

Quake was my first exposure to multiplayer deathmatch - years after our first foray with "the ultimate serial cable", we were still playing Quake mods like Nehara and Wyrm with friends over our 10base2 network and the internet, and they still worked just as you'd expect today, sometimes miles better than many modern games' multiplayer implementations!

Even the basic single player game would last you ages, though. You could probably force your way through it in several days of gaming but you never wanted to. Much more fun to actually play the game, find the secrets, kill every monster on the map. Your path was always pretty linear but it was actually intriguing, wondering what new monsters you'd have to deal with around the next corner. Quake was never easy, it wasn't a game to pile through just to say you'd completed it, you could take your time over it and enjoy it.

From a technical side, the system requirements were pretty minimal, even adjusting for relative price increases, and it single-handedly sparked the entire 3D gaming hardware industry - and the compatibility was phenomenal. It ran under DOS originally, but then Windows came out in force and Quake still ran under Windows; then a Windows-native version was released and, when the game was over-sequelled, the source code (which was already being licensed into games for things like the Half-Life engine etc.) was GPL'd and Unix-native versions came out, among all sorts of strange conversions. It also ran on Macs from early on in its life.

Gamesaves were simple - one key to save and restore from wherever you were. Without that sort of gamesave, the game would have taken months to complete. Video options could be ramped up or down to suit your hardware and platform. Modern computers just laugh at Quake now as they can all run it at its top resolution without any sort of strain, but back in its day some of the options seemed unreachable. The fact that it still scales and plays well on modern machines (and modern OS's) is only testament to its high technical quality. You could actually play with just the keyboard if you wanted to, but having mouse-look on made it the exact equivalent of the modern FPS's that it spawned.

Network play was smooth, single play was great fun and engaging, the maps seemed enormous and endless in number, and downloads (if you had an Internet connection back then) would only extend the amount of things you could do and the time you could spend on it. There wasn't really much of a community of multiplayer gamers back then, and yet there was still a major source of new content coming out for it, from official mission packs just as large as the original game to entirely new ways to play the game appearing on FTP sites.

There was no complex plot; it needed none, it was sheer atmosphere. You could jump, swim, drown (!), burn, electrocute yourself, get squished, use lifts and stairs, dodge flying buttresses and collapsing rocks, watch lava balls leap into the air in front of you, see zombies rise from the water, tear off their arms and throw them at you. The new concepts that arose were just fascinating and that's where the gameplay came from. Mods added things like low-gravity levels, grappling hooks, heat-seeking weapons, trap-laying, capture-the-flag games, even primitive physics.

It was like nothing that had been seen before by the average gamer. And all from your home PC. Unfortunately, the number of new features introduced across a dozen modern games is seriously dwarfed by those introduced just in Quake. We might have higher-tech games now but, short of more realistic physics, there is little new in them that wasn't present or possible with the now-ancient Quake engine. In fact, many modern games use engines which are based, somewhere along the line, on actual Quake source code.

Tuesday, November 01, 2005

Trust 445a Router Recovery

A few years back, when I first went to ADSL from 56k, I bought a router from my favourite company, Trust. The router was a Trust 445A Speedlink xDSL Web Station, a bog-standard Conexant-based router pretty much indistinguishable from a technical point of view from my previous AMX-CA64E router.

After about 6 or 8 months, however, I tried to change a setting in the router's config without thinking about it (I switched the router to PPP half-bridge mode, as I was pretty much using it in a half-bridge configuration anyway) and managed to make the admin interface inaccessible. At the same time, it stopped giving out DHCP, connecting via ADSL or working at all.

I couldn't find any firmware to restore it from (the one time Trust has let me down by not having the correct drivers) and it seemed beyond repair. I threw it into a back cupboard to use as an emergency four-port Ethernet switch in case I ever needed one. At the time I was a bit short of money but still went out and bought a replacement, so there really must have been no way to recover this particular router back then; I never consign anything to the bin if there's the slightest chance I can recover it.

Digging it up a few weeks ago when clearing out some of my older computer cables, I thought that it was worth another shot. ADSL doesn't seem to be going anywhere any time soon so it was worth getting a backup router or using it if I have to fix someone's computer.

Details were sketchy, to say the least. Google only turned up Origo Repair as a likely candidate (which would probably work if you knew how to fiddle it, and probably works wonders for people whose routers can take most firmwares and who are willing to try any number of firmwares first).

Instructions for the 445A were much harder to track down until I spotted an old forum entry, hidden away in the depths of a long conversation:

"I have Trust 445a too, and it seems it uses the same chipset CX82310-14 ARM940T Processor as Billion BIPAC-711CE and others. Italians secceded flashing Trust 445a to Billion BIPAC-711CE."

The website linked to contained an instructive PDF, a driver set and a firmware image of a Billion router. The PDF itself was in Italian but with that, the firmware and an online translation service, I was able to get the approximate gist of the plan.

1) Open up the router and short jumper JP1.
2) Turn on router.
3) Connect via USB between the router and a computer with an Intel chipset motherboard (the flashing utility requires certain chipsets, otherwise it can't recognise the USB ports). I found this out on the Origo Repair page which seems to use similar flash utilities: "The only limitation is the Flash program that is used. This seems to be very fussy about the chipset in your machine. It is known to work with VIA & Intel but not with SIS, nForce2 & KT266."
4) Boot DOS (I used the Ultimate Boot CD's version of Freedos)
5) Run the flash utility with the /e switch to erase the current firmware.
6) Run the flash utility again, this time supplying the firmware on that website.
7) Wait until it's uploaded and the modem settles (quite worrying that for several tens of minutes it just looks completely inert but eventually it all completes successfully)
8) Remove the jumper, reboot and test the router.

Additionally, there's a further step to then go on to upgrade the onboard files on the router to support UPnP and various other minor fixes, but that's just the easy part.

After a few false starts (a laptop and a desktop with only SiS chipsets whose USB ports the DOS utility couldn't use, having it plugged into a USB port which the DOS flash utility did not see as the FIRST USB port, etc.) I managed to flash the firmware perfectly.

A quick reboot of the router showed it working just as it should, albeit with another company's branded version of the admin interface. Basically, from a blank EEPROM, it had restored the 445A to perfect working health and even added UPnP support as a bonus (the 445A didn't support UPnP with its original firmware).

I stood little chance of finding this stuff out on my own and am eternally grateful to the forum poster, the Italian website authors and Google for restoring a dead router back to full health, thereby saving me about £50 in the future.

Now to go and burn that firmware, the drivers and my own instructions onto a million and a half CDs, tapes, DVDs, floppies, USB disks and hard drives just in case the website goes down and I need it again!

Monday, October 17, 2005

SSH port probes and port knocking

On the topic of people receiving unwanted SSH port probes, I have previously posted a script to catch such requests and blacklist the source IPs. Though this strategy is effective, it lacks a certain finesse and it also doesn't prevent what I saw as the major problem with such probes - log flooding.

An average Internet-facing server has room for hundreds of megs of logs and an admin capable and willing to go through the logs en masse and blacklist troublesome IPs. Unfortunately for me, the only real factor in these potential attacks is the annoyance that a large log or repeated visible attempts generate.

I am always security conscious and this hasn't changed; however, I think it's time for my home machine to stop storing logs of every little script-kiddie that pings my port 22 with their tools and instead just ignore them. That was previously much harder for me to do, especially when you see the same IP try and try again. It's just so tempting to blacklist them and imagine their reaction on the other end.

Before this article, my port 22 was open to the world (but no other ports). The control of who got in was left in the more-than-capable hands of a religiously updated OpenSSH, with only one working login allowed to actually come in, passwordless logins denied and a large private key that's kept secure, protected also by a long random passphrase. That allows access to an extremely limited account whose only purpose is to allow me to su with the right password and become root.

I use SSH to log in from machines when I am at work (where it's incredibly useful to bypass over-zealous school internet filters, with a little help from a proxy on my home machine) and when I need to fix other people's machines. All the schools I work for are NAT'd across the internet through a single IP. Thus, it would be simple to allow SSH through just that IP. However, when I am fixing someone's machine (which could be at a moment's notice), I need SSH to still let me in. This means that I would have a basically random IP trying to connect to my home machine and it would have to deal with it accordingly. This is why my port 22 was left open up until now.

Previously, I just watched the world and his brother bounce standard logins and passwords off my home machines in a vain attempt to see if I had left some stupid login or password on the machine that might give them some sort of platform to springboard onto higher access. I was secure in the knowledge that not only would they never get the only working account name by brute force, they would never have the private key, nor the passphrase which went with it. The worst case was that they exploited some kind of flaw in OpenSSH itself, which I consider highly unlikely and easily fixable. In that case, it wouldn't be just me who'd be in trouble but half the servers on the Internet, and someone would come up with a fix VERY quickly.

These dumb brute-force attacks left a bit of a bad taste, though. My logs were constantly flooded by "invalid user", "potential break-in attempt" and so on, all through the day. The machine that controls port 22 is a desktop and toy machine, selectively exposed to the internet for convenience and for trying out new systems, and therefore it's quite annoying to have its logs flooded by such futile attempts at breaking in.

I have put on this blog a script that I used to use; it basically watches the logs via a regular cron job and blocks any IP that tries too often. This works wonderfully and is the setup I would use if I needed random people from random IPs (e.g. employees etc.) to be able to access the SSH on that machine.

Fortunately, I don't need to. The only person to ever use the machine will be myself and therefore it's much easier for me to use port-knocking.

Port-knocking is the Internet equivalent of a secret handshake. As far as anyone on the Internet is concerned the machine is not responding to any connection requests whatsoever. What it is secretly doing, though, is monitoring all such requests for a particular, pre-determined pattern. When it sees that pattern of requests, it modifies the firewall to open a particular port to that particular IP.

For example, it's possible to set it up so that when it sees connection requests on ports 1000, 2000 and then 3000, all from the same IP address, in that order and without any other port requests in between, it opens up port 22 for the IP that tried to connect. Thus, from any computer in the world, I could take a small utility or even just a batch file and a telnet command, hit those secret-handshake ports in the right order and as if by magic SSH would suddenly be available for me to use from that computer.

A similar "knock" of some different ports would also close the SSH port behind me, too. Using this, it's perfectly possible to rap on the internet port of my home computer, get it to let me in, SSH and do whatever I need to do, then rap a different tattoo and it would close SSH. All the time this is happening, every other computer in the world could be trying to get in and all they would see would be a closed or stealthed port (depending on whatever firewall config you have). Being closed or stealthed would also mean that SSH doesn't even have to wake up to deal with such requests, so bypassing any possibility of having some SSH exploit being randomly tried out on the machine. It also means that people can't even GET to the stage where they can try to log in, so instantly cutting out quite a lot of log-spam.

The utility that does the knocking can be as simple as a batch or shell script with certain parameters passed to telnet, a custom-made tiny C program or a generic port-knocking utility to which you supply the port list. The ports can be UDP or TCP, they can be any numbers in any order, they could even be encoded versions of a particular passphrase or password. Depending on the port-knock listen server in use, the knocks can also be one-time passwords chosen from a predefined list (so that even if someone sniffs your secret handshake going across the internet, they wouldn't be able to replay it).
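
As a rough illustration (not the utility I actually use), here's what the client side might look like as a few lines of Python. The host name and the 1000/2000/3000 sequence are just the placeholder example from above; a real setup would obviously use its own secret sequence.

```python
#!/usr/bin/env python3
# Minimal port-knock client sketch: fire a connection attempt at each port in turn.
import socket

KNOCK_SEQUENCE = [1000, 2000, 3000]   # must match the listener's secret sequence
HOST = "home.example.org"             # placeholder for the machine being knocked

for port in KNOCK_SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)                 # the ports look closed, so don't hang around
    try:
        s.connect((HOST, port))       # the SYN is all the listener needs to see
    except OSError:
        pass                          # refused/timed-out is expected and harmless
    finally:
        s.close()

print("Knock sent - port 22 should now be open to this IP.")
```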

I have now switched over to a port-knock system (a good one seems to be Knock, from the author of Arch Linux), if only to make my logs easier to read, and have noticed a myriad of advantages to this system.

1) It doesn't make my system any less secure - the only required software is a port-knock daemon that's trivial in implementation and quite secure by design (all it does is listen to connection attempts at the IP level, extract the source IP and destination port, and then execute a shell script of your choice which does all the opening and closing of ports etc.; a rough sketch of such a listener appears after this list).

2) It makes my system more painful for someone unscrupulous to try to access - ports appear closed or non-existent and there is absolutely no indication that anything "strange" is going on. Knock-opened ports are only opened to the IP that knocked correctly, thus keeping everyone else in the world in the dark.

3) It means that the software behind the port-knock is protected from random, "dumb" brute-force attacks and new exploits. Though this isn't 100% effective (as every security analyst has argued when confronted with port-knocking), attacking is made another step harder, to the point where you really need to be able to sniff all traffic in and out to defeat even the simplest configuration of port-knocking. If someone can do that, you're already in trouble.

4) The knock acts as a magic key. It can be a simple numeric list, it can be TCP or UDP or even (theoretically) ICMP. Equally, however, it can be a one-time password that expires after use (so combatting sniffing/replay techniques to open the port), an encrypted sequence, a time-dependent sequence, an IP-dependent sequence, or a combination of those and many other possible sequences (one based on the last time the admin logged in, for example?). Until the magic key is used, nobody can access any port or tell that anything screwy is going on. As far as the casual "hacker" is concerned, there's no way to tell if the machine is switched on and looking for a portknock or switched off entirely.

5) Anybody trying knocks can be blacklisted in a similar way to before, but using the failure logs of the portknocker rather than millions of attempts at usernames.

6) The knocks are immune to every-port scans, like an nmap scan, as the knock has to be done in the right sequence. At no point in the sequence is any clue given to a potential brute-forcer about how far through the sequence they are or which individual knocks were successful, because of the multi-stage nature of portknocking.

Knocking port 1000 in the example above leaves everything still looking closed down. Knock port 1001 afterwards (or indeed any port but 2000) and you have to start the sequence again to be allowed in. However, you get absolutely no feedback about whether or not each stage was successful.

Therefore the chances of brute-forcing the knock have decreased dramatically. Say you have a three-knock, TCP-only rule. Then you would have to try EVERY single combination of three port numbers in the range 1-65535. That's 65,535 x 65,535 x 65,535 = 281,462,092,005,375 combinations, over 281 million million, each consisting of a single packet. And in between each one you would have to scan every port to see if it had opened anything that wasn't open before. That's going to need a VERY long time and be more than a little obvious, even on the fastest connection in the world.

But now imagine you have mixtures of TCP, UDP and ICMP, four-knock and five-knock rules, time-based rules and other clever trickery to determine what the secret handshake is. It would be all but impossible, with a decent, well-thought-out port-knock system, to get anything to open up at all, thus stopping any sort of brute-force attempt before a login dialog had even been found.

7) The minimal overhead of such a system is valuable too. Possible brute-force attackers take no more bandwidth than random port hitters. No resources are required to track hundreds of parallel connections other than the standard TCP/IP stack. The CPU strain is kept to an absolute minimum and the worst case would be a DoS attack, requiring substantial resources behind the attackers to initiate. Again, you're already in trouble at that point, but at least they're bouncing off your TCP/IP stack rather than trying to guess your passwords.

Multiple parallel attempts from different IP's would be no more help, either. The knock would have to be completed from a single IP correctly. Additionally, it may well be that the knock is IP dependent or that some IP's are blacklisted from any sort of port-knocking anyway!

Additionally, it's one piece of software on the server and one trivial piece of software on your USB key or in your normal remote kit. Currently I always have PuTTY, my private key and TightVNC as my standard remote kit anyway, so adding a tiny portknock utility is a trivial extra. The system is simple and completely follows TCP/IP rules and standards. To people who are allowed access, the overhead is just one more command, one more password that may even be linked to the main SSH password.

8) The system is completely customisable, from what the port-knock actually consists of down to what it does. In the end, each knock sequence runs a shell script and so can do anything that the user it's running as can do.

9) At the end of the day, even if someone manages to sniff your network, steal the time-dependence equations or even completely guess your port-knocks, you're then left in no less secure a position than you were in before. You can still blacklist, you're still password/public-key protected and no changes have to be made to any of the application software on either the client or the server. Everything is just as it was before.
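
For completeness, here's the kind of listener sketched in point 1 above, again in Python rather than the real daemon I use. It assumes Linux, root privileges for the raw socket, and a hypothetical firewall script at /usr/local/sbin/open-ssh-for.sh that takes an IP address; Knock and friends do the same job far more robustly.

```python
#!/usr/bin/env python3
# Rough port-knock listener sketch: watch raw TCP traffic, track each source IP's
# progress through the secret sequence, run a firewall script when it completes.
import socket
import struct
import subprocess

SEQUENCE = [1000, 2000, 3000]                       # the secret handshake
OPEN_SCRIPT = "/usr/local/sbin/open-ssh-for.sh"     # hypothetical "open port 22 for this IP" script

progress = {}   # source IP -> how far through the sequence it has got

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)

while True:
    packet, _ = sock.recvfrom(65535)
    ihl = (packet[0] & 0x0F) * 4                    # IP header length in bytes
    flags = packet[ihl + 13]
    if not (flags & 0x02) or (flags & 0x10):        # only fresh SYNs count as knocks
        continue
    src_ip = socket.inet_ntoa(packet[12:16])        # who is knocking
    dest_port = struct.unpack("!H", packet[ihl + 2:ihl + 4])[0]

    if dest_port == SEQUENCE[progress.get(src_ip, 0)]:
        progress[src_ip] = progress.get(src_ip, 0) + 1
        if progress[src_ip] == len(SEQUENCE):       # full sequence seen, in order
            progress[src_ip] = 0
            subprocess.run([OPEN_SCRIPT, src_ip])
    else:
        progress[src_ip] = 0                        # any wrong knock resets the sequence
```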


This system is for the ultra-paranoid as well. As an example, consider trying to attack a system with these layers:

1) No ports are opened to anyone without the special knock. The entire world sees nothing but a blank reply, identical to one which a dead connection or turned-off machine would give.
2) The knock is a hash of source IP, time, secret password and other variables. Nobody would see the knock without sniffing every bit of traffic into the computer.
3) Even when the knock is sniffed, replaying it does not open any ports. With no ports open, nobody can attack.
4) Say someone steals the sheet with all the necessary portknock passwords and equations to open the port (an extremely unlikely, but still not very dangerous, scenario).

This is where any open SSH server on the Internet currently stands... SSH, properly implemented, is considered uncrackable even when all communication is sniffed.

5) Now they have to find a login that is allowed in via SSH.
6) There are no passwords on any logins, only public keys, so they also need the private key of that user (which never crosses the network and cannot be sniffed in any way).
7) The private key is also protected by a passphrase. They now need that as well.
8) They have all the above and managed to get into SSH and login as that user. Now they could (possibly) do some damage by exploiting vulnerabilities in the underlying system.

What would you do if you were faced with such a system? Personally, I think it would be at least 281 million million times easier to just break into the actual building that held that server rather than even try to access it.

But port-knocking is not just for SSH. As hinted above, because the result of a successful port-knock is the execution of a specific shell script, opening holes in a firewall for a particular IP is the most basic of uses. With the right software or hardware it could do anything you wanted. The right port-knock could do anything from turn your machine off to waking up every machine on the LAN, initiating backups to rebooting, putting the kettle on to sending out alerts to every technician.

Friday, September 16, 2005

CCTV, Motion Detection and Linux

Over the summer, I found myself with quite a bit of free time on my hands. I have also, for some time, been eager to install a small CCTV camera at my front door to see who's at the door (Quick, hide behind the sofa!). This is partly for my girlfriend and partly for my gadget obsession.

I read on BBC News and also separately on The Register about a burglar who was caught when he stole the PC which was monitoring a house's security camera. The PC emailed every image of any movement detected on the camera to a remote email address which, obviously, was quite theft-proof.

I thought it was a marvellous idea: a visual record of any event on a security camera sent to a remote email account (far from where the event is happening and also very secure), which needs no authentication to send to (and therefore leaves no passwords on anything that could be stolen) but does require authentication (which an intruder/thief could not gain from the stolen computer) to view or delete the images in question. Not only that, but the ISP logs and emailed images would provide quite substantial legal proof in any case coming to court, in terms of verifying times, dates, tampering etc.

Properly set up, only a pre-emptive phone line cut would be any use against it. Even then, however, there's always the possibility of having a mobile phone, possibly even inside the case of the computer or as one of those PCMCIA GPRS cards, dialling up to an ISP, or even more complicated setups like wireless links between friendly neighbours or to a wireless ISP.

Short of covering up from head to toe, cutting the whole neighbourhood's phone lines beforehand (an event certain to attract an unwanted amount of attention), jamming the 2.4GHz band that most wireless networks run on, making sure to steal the PCs and any video recording equipment which was running the camera, and then wiping that PC with tools secure enough to obliterate any history of any images being written to the drive, there's not much a burglar could do about some sort of information about themselves being sent out.

I loved the idea of such a system and also that it actually works in practice, as the above story shows. Some months before this news story I had seen a piece of software that did this and apparently that was the one used to catch this particular burglar. This renewed my interest in Motion.

It didn't hurt that the software was Linux-based, free to use, easy to customise and very powerful. Any camera input (USB webcams, networked or wireless PC-compatible cameras, BT848-based TV cards or, indeed, any video equipment with a Linux BTTV driver) could be fed into the system (in fact multiple feeds are trivially possible) and have complex motion detection algorithms run on it, with still images, short movies and even the audio being recorded whenever motion was detected.

These images and sounds could then be stored, transferred, archived or emailed anywhere (I suppose that FTP or SSH is also easy to do; basically the software writes a JPG/MPG and then runs a shell script of your choice on it whenever it detects motion). Additionally, it would be possible to watch through the cameras at any time by using suitable authentication on the web-based interface, showing real-time images to whatever computer on whichever continent you happen to be using.
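
As a sketch of what that shell-script hook might do (assuming the motion software is configured to call it with the path of the saved image as its first argument; the addresses and SMTP server below are made up), something like this would be enough to get each capture off the machine:

```python
#!/usr/bin/env python3
# Hook script sketch: email whatever image file we are given to an off-site address.
import smtplib
import sys
from email.message import EmailMessage
from pathlib import Path

SMTP_SERVER = "smtp.example-isp.net"           # placeholder ISP relay
FROM_ADDR = "camera@home.example.org"          # placeholder addresses
TO_ADDR = "offsite-evidence@example.com"

image_path = Path(sys.argv[1])                 # path passed in by the capture software

msg = EmailMessage()
msg["Subject"] = f"Motion detected: {image_path.name}"
msg["From"] = FROM_ADDR
msg["To"] = TO_ADDR
msg.set_content("Image captured by the front-door camera.")
msg.add_attachment(image_path.read_bytes(), maintype="image",
                   subtype="jpeg", filename=image_path.name)

with smtplib.SMTP(SMTP_SERVER) as smtp:        # no login details stored on the box
    smtp.send_message(msg)
```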

Over the summer, I invested in a cheap CCTV kit with remote 8" monitor. This monitor could not only supply power to and read video and audio from two different cameras, it would also output one of their composite outputs again without the need for further adaptors or cables. This seemed the perfect setup... a camera wired to the monitor so that I can see what's happening in real time, with the output being simultaneously fed straight into a motion-detecting PC setup.

The setup was a cinch, just a matter of dusting off some WinTV cards and adding one short cable to the CCTV monitor. The software compiled and installed, and within about 10 minutes I had it emailing images to an email account whenever my willing volunteer waved their hand across the camera. With some fine-tuning of settings and image masks over the next few days, to take care of the timed external lights interfering with the setup, the hanging baskets outside moving in the wind etc., I had the perfect test.

We went to Scotland for a week, leaving the cat at home. We arranged for someone to come feed her during the week and I thought it would be a good test. While we were away, I would dial up to a cheap ISP, log into my home machine and watch the images live. I could also browse through my email account and find all the images of movement. I saw the neighbours walk past at 9.58 a.m. I saw the postman come at 7.00 a.m. and could even see the three items of junk mail in his hands as he walked up to the door. I saw our friend come in to feed the cat as promised. I also got one or two images of a plant pot falling over in the garden.

Now intrigued by the possibilities, I'm considering extending the system. We have a car in the private car park behind the house that can be seen from our spare room. I only wish my neighbours could all have identical systems so that when a car alarm goes off we all know whose it is without leaving the house and can ring up the offender and make them go and turn it off!

I plan to install another CCTV camera as a spyhole in the front door to capture full-frontal images of the person approaching the door and maybe as many more as I can find USB webcams for (I know I have at least two lying around). All this and it will cost me a little less than £20 extra per camera to cable and put in an old WinTV card. The computer appears capable of running at least two or three more cameras in terms of CPU speed (motion detection is quite expensive in terms of CPU power) and my test/development machines are old, obsolete things that people were throwing out.

I'm even looking into using a wireless setup so that, for example, the computer running the system could be wired *and* wirelessly connected to my main desktop, other cameras or a second system in a more secure location (the loft seemed an ideal place to run the show from, given that most burglars probably wouldn't bother to go up there).

I also remember having some cards that slot into a computer's rear slots and the power supply connectors. They supply 9V outputs on standard connectors that plug straight into most CCTV cameras. With those, I could have a cheap, ancient PC, or could even invest in a mini-ITX board, using only a single mains socket and maybe a piece of CAT5. That PC would then be connected directly to, and supplying power to, two or more cameras (some USB, some via PCI TV card) and even microphones, connecting to a central computer which could store and email the files.

It could even text me the pictures to my mobile or indeed ring me up with an automated message, set off an alarm system, email my neighbours to get them to have a quick look, or blast out MP3's of "Wanted", "Rescue Me" or "Stop in the name of love". It could even display the culprit's face on my home TV in full, glorious technicolour with the words "GOTCHA!" displayed over it, while playing a Wah, Wah, Wah, Wahhhhh sound over the speakers!



What an idea. Marvellous what you can do with technology.

Substrate screensaver

Normally screensavers are disabled the instant I set up a new system, and my recent upgrade to Slackware 10.2 was no exception. However, I was looking through the list of screensavers for one to test my 3D acceleration (to see if my NVidia drivers were working) and came across Substrate.

This is a wonderful little screensaver that I find quite beautiful. Some say it looks like the evolution of a city seen from above, others like cracks in a white rock, still others like a Picasso in progress.

Substrate example images

Either way, it's an impressive result from a simple mathematical algorithm. Pity that I can't get a Windows version for my girlfriend, as every time the screensaver activates on my machine, she sits and watches it grow.

Edit: I got in contact with the author of a port of XScreensaver, after many, many attempts at trying to find a suitable Win32 version or a version that would compile cleanly under Cygwin or a similar environment, and he happened to have a Windows binary of this screensaver just hanging about. So thanks, Darren Stone, for your help! The version of WinXScreenSaver on his software page didn't include it when I tried, but he pointed me to a version of just that screensaver that he had compiled some time ago: http://tron.lir.dk/software/w_substrate.zip

(As of late February 2008, the above website appears to be down. For now, there is a mirror of this file here: http://www.ledow.org.uk/windows/w_substrate.zip)

Another Linux Desktop Update (and Slackware 10.2)

The Linux desktop machine is not only still going well, but getting better all the time. In fact, Windows has been deleted from my main computer, the result I'd been hoping for and something I'd been wanting to do for years. As far as I'm concerned, Windows in all its variations is now just another console: good for games, not much use for anything else.

I recently upgraded to Slackware 10.2 on my main computer. Generally my computer is near the most up-to-date you can get without having Beta, Alpha or CVS-build software on a machine. When it comes to upgrades, I upgrade to the next version of a piece of software depending on:

1) Whether I can run it in tandem with the older version - While this is possible with most things (e.g. Opera, PuTTY, TightVNC), it's not always possible with major upgrades. I want to KNOW that I can run the new version but that if there is a single regression I can still use the version I was using before.

2) Whether I can always revert to the older version if I want - This is where a package management system beats the built-in Windows features hands down. Not only will Swaret find, download, install and check dependencies of any software I install using it, it will also make a backup of the previous version. A simple removepkg/installpkg will get me right back to where I was, at worst having to replace any tailored config files from a personal backup. Most Linux distributions have such facilities: RPM, DEB, etc.

3) Other people's experiences of upgrading to that software - Lots of confirmed reports found on Google, relevant forums etc. of no major problems is the best thing, lack of any reports of major problems is next best. The less information available, the less I trust the upgrade.

4) Reputation of and previous experience installing that software - a program which has never had an install problem, upgrades itself neatly and compactly, is able to import all of its old settings etc. is one I'm more likely to upgrade as soon as I can.

5) The severity of the upgrade - how important an upgrade it is will determine how quickly I will be upgrading to it. Serious security updates for critical flaws and serious bug fixes for dangerous bugs will be more likely to be installed than something that corrects a spelling mistake in a filename.

6) The size of the upgrade - minor upgrades are more likely to occur, major upgrades may be postponed until I can test them out fully.

Given the above, a Slackware 10.1 -> Slackware 10.2 upgrade snags on 4, 5 and 6. Numbers 1 and 2 are dependent on how carefully I think through the upgrade, and information for 3 wasn't available, although many people had been running the slackware-current version between the 10.1 and 10.2 releases (I hereby thank all the willing testers for ensuring my machine would be relatively safe by the time 10.2 came out).

Security wasn't a major problem - I was properly firewalled, my common desktop software (Opera etc.) was always at the latest stable version and nothing exposed to the internet was vulnerable. There were a few upgrades I was looking forward to, most notably a Ghostscript upgrade that made my printer (a Samsung ML-4500... possibly the only laser printer I've ever seen that you can open the specially-designed toner cartridge and just pour in certain toner without having to re-buy the entire cartridge) work much better under CUPS.

KDE upgrades were also on my agenda, but not something I was happy attempting on my primary desktop machine by myself. This upgrade fixed things like my icons jittering about the place between (fairly infrequent) reboots, Konqueror crashing while navigating the filesystem and numerous other little niggles.

It turned out in the end that I managed to clear out my old 10Gb Windows partition (so there really is no going back now!), after throwing a few lifebelts to things like documents, INI files and other stuff that might come in handy someday. With Slackware, unlike Windows, I literally formatted the drive as ext3 and copied the old Slackware install over to the blank drive using cp -a -x. This copied all the files, links, etc. over without modifying them.

A small oversight, discovered later, was that the -x (stay on a single filesystem) option for some reason excluded /dev, but even *that* couldn't stop Linux trying its best to boot (although a lot of drivers complained). That went into the rather short list of "Should I ever have to do this again, remember to"'s.

Once the filesystem was copied across, I booted from a boot disk making sure that the new partition was the linux root. I played with the lilo config to set it up (keeping some older entries to boot back into the original config should I ever need to), edited fstab and then reinstalled lilo.

[[ Side note: My favourite thing about Linux in general is that any kernel can boot any partition on any computer. I needn't have bothered with these backwards-compatibility entries in lilo; I could have just booted from any Slackware boot CD that I had lying around, told it which partition to use as root and fixed/run anything to get it back up and working.]]

Then I booted into this identical copy of my root and only once I was in and everything was working as if it was my old drive did I follow through the Slackware 10.1 -> 10.2 UPGRADE.TXT instructions.

The upgrade went very smoothly and once the package upgrades were complete, it was merely a matter of setting up lilo again to boot from the 10.2 2.6.13 generic kernel that had been installed (making my own initrd on the way) and then rebooting.

[[Side note: Although I think that initrds are a marvellous idea (a small mid-boot ramdisk environment in which to bung any strange drivers that may be needed at boot time, e.g. USB/Firewire Mass Storage, SCSI etc.), they can be a pain in the backside when you just want to add a new line to lilo with a slightly different config. That usually means rebuilding the initrd with different root parameters etc., which can soon become a file-management nightmare, having to keep kernels, configs and initrds for every different combination. Hell... it works and you can tweak it to do some complicated stuff, so I'll suffer it.]]

After that, it was simply a matter of seeing what needed recompiling to fit in with the numerous upgrades that were in 10.2 (kernel, KDE, QT, libraries etc.). I didn't even bother to check whether the proprietary NVidia drivers I use (the only low-level, non-source-code thing I have installed on this machine) would need recompiling, as I was certain they would. This went without a hitch and then I was able to boot into X-Windows on the new system and "see what else had broke".

That turned out to be the driver for my never-used wireless card (which is there for purely experimental purposes, as I don't generally trust wireless in any way, shape or form, ever since I demonstrated to myself just how easy it is to crack WEP and really interfere with most wireless networks whatever the encryption... the only access allowed over the wireless interface is a public-key authenticated SSH). That caused me some minor problems as it was complaining about the module config of the kernel (a missing file normally attributed to having kernel source lying around that wasn't actually used to compile the kernel in question). This turned out to be a bit of a false alarm as the module would load normally anyway and worked perfectly.

Other than minor issues and a bit of user stupidity, there were no problems. Strange, for an upgrade that's the equivalent of going from Windows 95 to 98, or NT4 to 2000, that there was so little upheaval and virtually zero compatibility problems. And yet, my primary partition was never in any danger and with a small boot from the CD and a tiny LILO tweak, I'd have been back as if I'd never touched the machine.

I have now also tried Knoppix 4.0 on my laptop, similarly impressed that it needs absolutely no prior knowledge of what system it's going to run on. I'm amazed at just how much of the stuff that I have bought or been given "just works" in Linux, at worst requiring me to hunt down a small GPL driver from somewhere:

The QX3 microscope I found at a boot sale.
A cheapy PSX->PC USB controller adaptor (though games are not the focus on this machine, it was just out of interest).
A cheapy USB hard drive enclosure, again found at a boot sale.
My Intel NetportExpress (from eBay) connected so that myself and my girlfriend can stop fighting over who gets to use what printer.
My cheapy graphics tablet that was bought on a whim and never really used.

Admittedly, some things I deliberately researched to ensure they would work on both operating systems: my wireless card, my USB key, my cheapy, ancient printer, my sound card (I went through two or three old, donated and junked sound cards which, while they all worked under Linux, all showed performance problems; in the end, I just bought a cheapy SoundBlaster Live off eBay to save me having to use software mixing).

I always knew that Linux had problems with certain pieces of hardware, back from when I first heard about Linux when I was just a lad. Now I think that 99% of the problems are sorted out and most systems "just work". Heck, even most winmodems can be persuaded to work in Linux, even if they do need a binary driver. So far, the only thing I've found that remains resolutely incompatible with Linux is a small USB-IrDA adaptor that was never bought to work with Linux and, though it's detected as a serial port, doesn't actually support all the IR protocols that it would under Windows.

Addendum: And now I'm being asked to install Linux systems for an unattended kiosk-style system and a few small specialised workstations running the QX3's inside some of the schools.

Wednesday, June 01, 2005

The perfect operating system

As in other articles, this list will grow as I think of new stuff to add to it.

--- Software for the operating system will come as a single, standardised file. This file will be openly readable by a suitable viewer/editor and be appropriately compressed.

--- Upon installation, each piece of software will be confined to its own folder/directory within a system-wide software repository and will not need to access anything outside that directory. Requests for system-wide configuration changes, such as file associations, notification of events, binding to internet ports etc., will be indicated by a specially-named file within a specially-named subfolder of the software's folder.

These files are seen by the OS as **requests** for configuration changes. The OS will handle detecting these files and letting the user choose which piece of software ACTUALLY gets to change the system in the way it asks. Facilities would be included to specifically BLOCK individual programs from making such requests, if necessary (to avoid spamming the user with bogus requests). Any changes to these files are notified to the user concerned so they can decide what to do.

--- No piece of software will be allowed any sort of access to another piece of software's folder. Each piece of software has a folder with its own subfolders, each with a unique cryptographic hash stored for the purposes of detecting changes.

These subfolders will include: Programs (for the program's executables and required libraries), Configuration (for program-wide configuration information and system-wide configuration requests) and Data (for additional data, such as user-created documents, frequently updated databases, temporary files etc.).
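
To make that a little more concrete, here's a rough sketch of how one program's corner of the repository might look; all the names here (the /software root and the example program "FooEdit") are made up purely for illustration:

# Sketch only - illustrative paths, not a real system
mkdir -p /software/FooEdit/Programs              # executables and bundled libraries
mkdir -p /software/FooEdit/Configuration         # program-wide settings
mkdir -p /software/FooEdit/Configuration/requests
mkdir -p /software/FooEdit/Data                  # documents, databases, temporary files

# A request for a system-wide change is just a specially-named file that the
# OS notices and puts in front of the user, e.g. asking for a file association:
echo "associate: .foo" > /software/FooEdit/Configuration/requests/file-association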

--- The OS will control software and the movement of data. Each user will have the same program folders present (although those seen by the users will depend on their security permissions) and each user can have their own unique configuration for each program (again, security policy allowing).

The OS will determine which user the software is running under and create links to the correct configuration for that user. This will be on a copy-on-write basis, with new users getting the basic system install and each user's changes stored separately. Any unchanged files will use the system's defaults.

Similarly for data, each user will have their own data directory for each piece of software.

The OS will also be responsible for detecting shared libraries between software (using names, internal information such as company, version number etc., cryptographic hashes and so on). Common libraries will be detected by the OS and links created to a system-wide libraries directory, after gaining permission from a suitable user to do so.
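
A crude sketch of how that detection could work, using nothing cleverer than checksums over the repository layout described above (paths are illustrative and md5sum is just one example of a suitable hash):

#!/bin/sh
# Sketch: find library files in the repository that are byte-for-byte identical
# by comparing their hashes; each group of duplicates is a candidate for being
# replaced with links to a single system-wide copy (with the user's permission).
find /software -type f -name "*.so*" -exec md5sum {} \; | sort | uniq -w32 -D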

--- There will be OS facilities for users to transfer data from one piece of software to another, but programs will NOT be able to do this themselves as they will NOT be able to access any other software's folder. Ideally, these mechanisms will be transparent and a user will see a "Home" folder which contains all of their data files from every program. These will be links to individual files within each piece of software's Data directory.

The user is free to arrange their Home folder how they wish... links will stay intact and unneeded files will be hidden from view. By selecting a file, they are able to open it in the software in which it currently resides or in another suitable piece of software. The OS will handle multiple programs opening the same file in the same way as current OSs use hard links; that is, the file is only present ONCE but its entry may appear in multiple software Data directories. The OS will only show a single copy of any file in the user's Home directory, though. Once a non-default program has finished with a certain item of data, the link is removed from its Home directory, unless there is no other link to that file.

--- As part of the "file presence" system-wide configuration request mechanism, programs may request the use of specialised libraries and other programs. If granted (either by security policy or by appropriate user permission), the software is allowed read and execute access to that program, its configuration and its associated data, again taking account of the user it is running under.

--- Uninstallation of software is as simple as removing its folder or moving it outside of the software repository. Orphaned configurations/data would remain present in each user's Home directory.

--- The OS will provide a security facility for all password/passkey entry. Whenever a password/key needs to be entered, the user will select the password/key field in the software requesting it. Upon this selection, the OS will clear the screen and place a special, unfakeable, unchangeable indicator in a position on the screen.

While the system password screen is active, no other program may display any component of itself on screen; all its drawing requests are handled off-screen. No piece of software other than the OS itself will be able to overlay or display this indicator. When this indicator is lit, all input device control is passed to the OS and to NO OTHER PROGRAM. The password is entered by the user (or the necessary file/device is selected), then sent directly to the software which provided the initial password/key field.

Additionally, the system security screen will clearly show the name of the software initiating the request for a password/key, the reason for its request and any username or other information required to provide the correct password/key.

Rules for software

A list of things that every piece of computer software should do. This list will grow as and when I think of stuff to add to it.

1) Context-sensitive help (such as the "What's This" style help), if implemented, should be defined for EVERY button/option, in much the same way as every single image on a website should have an ALT tag.

2) All software should include an "uninstall" option, whether by an operating-system-provided mechanism or by a simple program supplied with the software. The uninstall should NOT require a reboot (except on those OSs where it's the only way to do it), should kill any running instances of any part of the software, clean up after itself (removing itself and the directories it was contained in) and ask the user if they want to remove configuration information for the program, save games, other files it finds in its install directories, etc.

If the user does choose to keep this information, it should be put into a directory of the user's choice, labelled by default with the name of the software. This should include any and all registry entries and, at the user's option, the removal or reversal of any file association changes etc. that were made by the software.

3) If a program requires a certain file, library, etc., it should explicitly check for it at install time and at run time. If any required file isn't found at any time, a check should be made for all required files and a list printed of everything the program needs in order to run. Lower-numbered versions of required files should be treated as incompatible and make the software print the file AND version required. Websites and other sources of these files should be displayed if known. (A rough sketch of what such a check might look like follows this list.)

4) New versions of a program should ALWAYS come with a small utility, or a facility within the program itself, to convert an installation of an older version into a format compatible with the new version. Older config files are thus translated automatically into any required new format, either to the equivalent configuration or to a safer one. These "convertors" should always be run whenever a previous installation has been detected and should be installed no matter what. Users should be informed by the software of any regressions in the new configuration.

5) ALL software should co-exist with ANY version of itself without conflicts. On installation, it should not overwrite any previous installation unless asked to, although it may steal, for example, file associations from its previous versions.

6) Any installation changes with system-wide effect must gain the approval of the user before they are executed and provide an option to skip that step. Unless absolutely critical to the running of the program, the software should run whether the user chooses Yes or No. This covers file association changes, default printer changes, URL handler changes, default action changes (such as what happens on insertion of audio CDs etc.) and similar measures.
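
As promised under rule 3, here's a sketch of the kind of check a launcher script could do before starting the real program; the file names, program name and URL are invented for the example:

#!/bin/sh
# Sketch: verify every required file before starting, and report the whole
# list of missing items rather than dying on the first one.
REQUIRED="/usr/lib/libfoo.so.2 /usr/share/myprog/myprog.dat"
MISSING=""

for FILE in $REQUIRED
do
    if [ ! -e "$FILE" ]
    then
        MISSING="$MISSING $FILE"
    fi
done

if [ -n "$MISSING" ]
then
    echo "myprog cannot run. The following required files were not found:"
    for FILE in $MISSING
    do
        echo "  $FILE (see http://www.example.com/downloads for this file)"
    done
    exit 1
fi

exec /usr/bin/myprog "$@"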

Monday, May 30, 2005

Electronics

One of my interests has always been computers and I've always known the basic theory about how computers work, how the electronics operate etc. I know how electronic components work and what each does and even the formulae that are relevant but I've never been confident enough to actually dive in and start creating stuff from scratch.

I enjoy the Velleman kits from Maplins... ten minutes with a soldering iron and you have an electronic game or quiz buzzer or clock, but I could never follow how they worked properly. There was always some strange arrangement of components that confused me.

Recently, I was asked by my brother to make a little "disarming" pack for his Scout group for a camp they are going on. I made him one a few years ago that was basically a small circuit with no electronic components that the Scouts had to "disarm" without setting it off. It consisted of a buzzer and a battery. The buzzer was constantly short-circuited by a small resistor on a wire that stopped it sounding. When the wire was cut or the resistor removed, the short circuit disappeared and the buzzer sounded.

That was crude and simple and didn't really work very well apparently, and I was also concerned about how safe it was so had to test it a lot. A few weeks ago, my brother announced that he wants another one and I had enough warning this time to do it properly.

I found the Kelsey Park Electronics Club website, belonging to a school many miles from me, and it's been a lifesaver, explaining things that I've never seen explained anywhere else, in a simple way, with lovely printable PDFs for the projects. The author clearly knows his subject and it is a fantastic resource.

Using that website, analysing a few of the circuits and stealing a few of the ideas from there, I've come up with a brilliant circuit for this year's camp and have already spent a small fortune at Maplin's buying the breadboards and components and testing it out. Now, if only I could find a louder but lower-power buzzer than the one that cost me a tenner!

Another Linux Desktop progress update

It's been over a month and I'm sticking with Linux on the desktop. Too many things "just work" for me to go back, my data actually feels more secure than it did before and the computer does what I ask of it, poorly written software aside.

What poorly written software? Nothing too vital. I just wanted to play some movie trailers online in Opera but that's proving almost impossible, even with MPlayer-plugin, plugger et al. Nothing seems to make it work, whereas Firefox runs them just as it should with the same plugins.
I've followed every page I could find on getting these plugins to work with Opera but they just display a blank box, throw up lots of stdout errors, or both. That's no big deal, I could just use Firefox, but Opera feels faster for day-to-day browsing and is integrated with RSS, news, mail, etc.

The other program that was giving me hell was KPlayer. Some files it just would not display properly, showing what appeared to be vsync problems (a single tear at a certain point on the screen while displaying video). I first noticed this while playing a DVD. Putting KPlayer into X11 rather than XV mode solved it but took too much CPU. The MPlayer GUI showed no such problems (despite KPlayer using MPlayer to do the displaying) but I didn't like it or any of its skins.

I could NOT find any difference between how KPlayer and MPlayer were rendering the clips but KPlayer always looked different. Eventually, Xine emerged as a good middle ground, showing all the clips I want it to while not displaying any artifacts, using the computer's hardware acceleration to its full and having a usable GUI.

Using ALSA-only for sound is a great leap forward and the only problems I get are the soundcard-sharing issues that are well documented and easy to fix with dmix plugins. I've tried those and they work perfectly.
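
For reference, the dmix setup boils down to a few lines in ~/.asoundrc along these lines; this is a generic example assuming the first sound card (hw:0,0), not a copy of my exact file:

# ~/.asoundrc - send the default PCM through dmix so several programs can
# share the sound card at once (assumes card 0; adjust hw:0,0 to suit)
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        period_size 1024
        buffer_size 8192
        rate 48000
    }
}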

I investigated KDE 3.4 for a while, even downloading the KLAX live CD (a new version of KDE on a SLAX LiveCD, based on Slackware). 3.4 seemed faster, more responsive, cleaner and more bug-free but I'm wary of upgrading it until I've made a tar.gz of my system as it is at the moment. That's another thing that I love about Linux... the low-level config is all runtime-determined and the remaining config is hardware-independent.

I can transfer this disk image to any other machine in the world and it would boot up (after running lilo) and work just as before, with only a few minor tweaks to support non-detected hardware. The point is that, unlike with Windows, I could move this hard disk to any machine should this one go wrong, be up and running again in minutes rather than hours and use the exact same settings as I have now. You just CANNOT do that with Windows. Even changing your motherboard pretty much requires a reinstall there, whereas Linux hardly cares what motherboard you are running.
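
The sort of procedure I have in mind, very roughly (device names and mount points are examples only, and you'd do this from a boot CD rather than the running system):

# Rough sketch of moving the install to another disk/machine from a rescue CD
mkdir -p /mnt/old /mnt/new
mount /dev/hda2 /mnt/old                 # the existing root partition (example)
mount /dev/hdb2 /mnt/new                 # the new disk's root partition (example)
cp -a /mnt/old/. /mnt/new/               # copy everything, preserving permissions/links

# Check that boot= and root= in /mnt/new/etc/lilo.conf point at the new disk,
# then reinstall the boot loader on it:
chroot /mnt/new /sbin/lilo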

I've gone onto Linux 2.6 now, given the fact that it supports more hardware, has ALSA built-in at the kernel level, has lots of bugfixes and new features. It meant that I had to tweak the startup scripts somewhat to enable me to dual boot and compile software for whatever kernel I happened to be running (mainly just symlink magic) but that didn't take long and was hardly necessary anyway... I think I've only booted back into 2.4 once.
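
The "symlink magic" really is just a couple of lines in a startup script to point the kernel source link at whatever kernel was actually booted; something like this (the /usr/src/linux-<version> layout is the usual one, but treat it as a sketch):

# e.g. in /etc/rc.d/rc.local: make /usr/src/linux follow the running kernel
# so anything I compile picks up the right headers.
RUNNING=`uname -r`
if [ -d /usr/src/linux-$RUNNING ]
then
    rm -f /usr/src/linux
    ln -s linux-$RUNNING /usr/src/linux
fi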

KNemo has replaced my Zonealarm; now that I have the power of an iptables firewall, all I miss is the little flashing lights that indicated network activity. :-)

SSH logins are now working flawlessly and, following advice from a number of sites, I've got non-root passwordless SSH up and running, with an su providing all the power I need. I do see this as a little unnecessary, as root had his own private key which was the only way to log in, but it's working now. I can use PuTTY from work, log in to my home machine and laugh at all the login attempts I see bouncing off port 22:

203.73.40.40 only tried 2 times.
210.96.200.24 only tried 2 times.
212.19.84.19 only tried 2 times.
210.204.129.27 only tried 3 times.
211.160.17.13 only tried 3 times.
212.160.93.188 already blocked for 5 attempts.
61.16.165.148 already blocked for 5 attempts.
70.183.189.207 already blocked for 5 attempts.
210.178.215.221 already blocked for 6 attempts.
211.169.117.119 already blocked for 6 attempts.
211.248.77.194 already blocked for 6 attempts.
213.179.250.115 already blocked for 6 attempts.
218.108.89.205 already blocked for 6 attempts.
61.219.67.127 already blocked for 6 attempts.
66.14.195.102 already blocked for 6 attempts.
202.180.175.138 already blocked for 7 attempts.
61.218.8.110 already blocked for 7 attempts.
202.127.24.198 already blocked for 8 attempts.
211.112.77.28 already blocked for 9 attempts.
211.114.170.161 already blocked for 9 attempts.
203.199.69.160 already blocked for 10 attempts.
201.144.107.203 already blocked for 11 attempts.
213.21.187.186 already blocked for 11 attempts.
202.141.12.146 already blocked for 12 attempts.
84.45.142.57 already blocked for 12 attempts.
220.65.55.130 already blocked for 13 attempts.
202.201.0.246 already blocked for 15 attempts.
210.51.12.238 already blocked for 15 attempts.
211.114.177.138 already blocked for 15 attempts.
211.114.195.7 already blocked for 15 attempts.
62.76.207.201 already blocked for 15 attempts.
163.27.7.2 already blocked for 18 attempts.
81.190.223.110 already blocked for 21 attempts.
64.49.222.180 already blocked for 24 attempts.
222.53.117.195 already blocked for 28 attempts.
210.51.25.217 already blocked for 32 attempts.
202.183.229.200 already blocked for 34 attempts.
66.82.4.25 already blocked for 39 attempts.
211.147.7.88 already blocked for 43 attempts.
212.89.111.97 already blocked for 44 attempts.
62.65.85.126 already blocked for 44 attempts.
81.214.131.217 already blocked for 44 attempts.
61.220.76.58 already blocked for 57 attempts.
220.130.3.60 already blocked for 61 attempts.
212.251.61.243 already blocked for 77 attempts.
211.250.189.5 already blocked for 89 attempts.
217.160.130.112 already blocked for 89 attempts.
24.10.130.62 already blocked for 89 attempts.
80.190.249.210 already blocked for 89 attempts.
82.235.174.242 already blocked for 89 attempts.
12.26.192.66 already blocked for 108 attempts.
165.229.65.86 already blocked for 108 attempts.
64.208.57.92 already blocked for 108 attempts.
81.208.31.177 already blocked for 109 attempts.
203.177.36.178 already blocked for 114 attempts.
80.183.225.131 already blocked for 122 attempts.
219.101.165.151 already blocked for 154 attempts.
195.70.198.199 already blocked for 216 attempts.
66.236.248.139 already blocked for 216 attempts.
211.26.36.3 already blocked for 221 attempts.
128.252.171.31 already blocked for 348 attempts.
211.172.225.111 already blocked for 382 attempts.
211.144.101.203 already blocked for 413 attempts.
61.193.182.107 already blocked for 425 attempts.
218.149.85.18 already blocked for 467 attempts.
202.30.198.245 already blocked for 519 attempts.
137.195.182.25 already blocked for 701 attempts.
213.251.132.146 already blocked for 702 attempts.
218.179.255.249 already blocked for 1409 attempts.

That's the output from my custom script which I add new features to whenever I feel like it:

#!/bin/sh
#
# Script to search logs for SSH brute-force attempts and block IP's

# Log a message saying we have started.
logger -t SearchLogs -p cron.notice -- Starting Log Search...

# Search through /var/log/messages* for "Failed" lines from SSH.
# Strip out IP from each line.
cat /var/log/messages* |grep sshd |grep Failed\ password\ for | sed s/.*from\ // | sed s/\:\:ffff\:// |sed s/\ port\ .*// > /tmp/ssh_attempt_ips.txt

# Similarly for "Invalid" lines
cat /var/log/messages* |grep sshd |grep Invalid\ user | sed s/.*from\ // |sed s/\:\:ffff\:// >> /tmp/ssh_attempt_ips.txt

# Similarly for "Did not receive identification string" lines
cat /var/log/messages* |grep sshd |grep Did\ not\ receive\ identification\ string\ from | sed s/.*from\ // |sed s/\:\:ffff\:// >> /tmp/ssh_attempt_ips.txt

# Sort IP's and weed out any duplicate lines to rid us of multiple IP's.
# Also, add a count of each IP so that we can judge how many attempts
# they had and strip out some whitespace.
#
# Also, ignore any line containing 212.85.1.15 as that's our main IP for
# logging in from, similarly for 127.0.0.1.
cat /tmp/ssh_attempt_ips.txt |sort |uniq -c -d |sort |sed s/\ */\ / | sed '/.*212.85.1.15/d' | sed '/.*127.0.0.1/d' > /tmp/ssh_attempt_counts.txt

# Remove whitespace from beginning of line, place | in between counts and IP's.
cat /tmp/ssh_attempt_counts.txt |sed s/\^\ // | sed s/\ /\|/ > /tmp/ssh_prioritised_list.txt

if [ -f /tmp/ssh_prioritised_list.txt ]
then
# For each line...
for BAD_IP in `cat /tmp/ssh_prioritised_list.txt`
do
# Strip the count from the IP...
COUNT=`cat /tmp/ssh_prioritised_list.txt |grep $BAD_IP | sed s/\|.*//`
IP_TO_BLOCK=`cat /tmp/ssh_prioritised_list.txt |grep $BAD_IP | sed s/.*\|//`

# If a particular IP had more than 5 goes...
if [ "$COUNT" -gt "4" ]
then
EXISTING_LINE=`iptables -n -L INPUT |grep $IP_TO_BLOCK`

# Add to permanent blacklist.
echo $IP_TO_BLOCK >> /etc/ssh_blacklist.txt

# If it's not already on the firewall blocklist
if [ -z "$EXISTING_LINE" ]
then
# Print out a message and add to firewall.
echo Blocking $IP_TO_BLOCK for $COUNT attempts
logger -t SearchLogs -p cron.notice -- Blocking $IP_TO_BLOCK for $COUNT attempts
logger -t SearchLogs -- Blocking $IP_TO_BLOCK for $COUNT attempts
echo iptables -A INPUT -s $IP_TO_BLOCK -j DROP
iptables -A INPUT -s $IP_TO_BLOCK -j LOG
iptables -A INPUT -s $IP_TO_BLOCK -j DROP
else
echo $IP_TO_BLOCK already blocked for $COUNT attempts.
logger -t SearchLogs -p cron.notice -- $IP_TO_BLOCK already blocked for $COUNT attempts
fi
else
# Just warn.
logger -t SearchLogs -p cron.notice -- $IP_TO_BLOCK only tried $COUNT times.
echo $IP_TO_BLOCK only tried $COUNT times.
fi
done
else
echo "Can't read /tmp/ssh_prioritised_list.txt"
fi

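# De-duplicate the permanent blacklist and copy the tidied version back into place.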
cat /etc/ssh_blacklist.txt |sort |uniq > /tmp/ssh_blacklist.txt
cp /tmp/ssh_blacklist.txt /etc/

# Log a message to say we've finished
logger -t SearchLogs -p cron.notice -- Log Search Ended.

#---------------------------------------

I have that running as a cron job and it produces the output you see above. It's amazing how many attempts you get. Looking up the IPs on DShield shows that I'm not alone in being attacked from some of these addresses. It's tempting to move the SSH port just to avoid the log spam.
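
If I do move it, it's only a couple of lines' work; 2222 below is just an example port:

# In /etc/ssh/sshd_config on the server (then restart sshd):
#   Port 2222

# And from the client side:
ssh -p 2222 user@my.home.machine
# (PuTTY has an equivalent port box on its connection screen.)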

Overall, thoroughly impressed. I wouldn't class it as a perfect system but it's a damn sight closer than Windows ever was and when you consider that people who are giving their time and effort away are doing a better job than the largest company in the world pumping billions into research, you have to ask yourself whether you've ever been right to give them money.

Some computer annoyances.

I've had a few pet peeves about how computers work for many years and I thought I'd air them here. Firstly, login procedures.

Why are password dialogs not more secure? How many times have you had some program pop up and ask you for a password while you were in the middle of typing, and you didn't notice and pressed some keys and Enter in the course of your work? Or how many times have you been in the middle of typing a password when something else popped up or shifted the cursor focus, so you ended up typing the password somewhere completely different? (Hotmail's login page comes to mind because sometimes it would change focus if you typed your password before the scripts on the page had finished loading, meaning you typed your password into the login name box for all to read.)

In the same way that Windows has the "press Ctrl-Alt-Del" to log on, in order to ensure that your password can only go to the one program not affected by Ctrl-Alt-Del, i.e. the login dialog, shouldn't we have some facility within the OS itself to prevent other passwords being scattered willy-nilly when they are entered?

Maybe a system-default password screen. If the user needs to enter a password for any purpose, they have to click on the password box itself, which is just a large rectangular link. When that is clicked, the screen is blanked, the keyboard is locked to every program but the login system (so preventing software key-loggers) and a password dialog appears asking you to "Type in the password for Opera 8.01 to access the website www.hotmail.com" or "Type in the password for Control Panel to access your network configuration" or "Type in the password for SSH to login as the user root on remote system server.house.com".

This screen would also be secured by the operating system somehow, to ensure fake screens could not be generated (even though they would require user interaction to do so). This could be done with some sort of visual cue which cannot be modified or overlaid by any program but the system itself (in a similar vein to the secure icon in a web browser, which SHOULD NOT be able to have a website overlay its image) or by using the clever tricks that people are suggesting web browsers should use, such as a customised screen which nobody but the system has access to... the program would therefore have to know what the user's customised login screen looked like. Any deviation from their picture of The Simpsons beating up a computer and the user would know that it was a fake login box.

[Incidentally, I once used a technique many years ago, before the web was popular, to obtain an admin login at my old school. Using Visual Basic and the PrintScreen button, I was able to replicate the look and functionality of the RM software login screen that the school was using at the time, even down to the help dialogs and pixel-perfect positioning, and got my sixth-form computer science teacher to log into the machine. It was set up so that it failed consistently with any login but wrote the username and password to a file.

The computers at the time had a problem that they would fail to log in with the exact same message if they had been disconnected from the network since they booted up, no matter if you reconnected them later on. The way to fix it was to reboot from the login screen. My program simulated this functionality and was run from within any non-privileged user account. Once the admin had typed their username and password, they got the error they thought meant the computer had to be rebooted, they would reboot, come to a GENUINE login screen and log in successfully. However, the username and password they had tried would be logged to a plaintext file on the user account I had run the program on.

Honestly, I let the person in question log in, then immediately told him what I'd done. Stupidly, he had challenged everyone in the sixth form to try to get admin access, seeing it as a learning experience, and had foiled several previous attempts to steal his username and password. The only other success was when someone read his password over his shoulder and then created dozens of admin accounts with it, under their own name; they were quickly spotted and caught. Mine wasn't, and never would have been, as multiple logins from the same username were allowed and I would have had time enough to circumvent the measures in place which showed up admin users on the system (an easily subvertible two-line "cron job" that displayed all admin users on a spare monitor in the server room).]

Ideally, there would be a fourth LED on your keyboard, indicating "secure" status which can ONLY be toggled by the operating system itself, not application software or device drivers. This would light only when the REAL login screen was up and people would be trained to realise that if the light's not on, people can read their password.

This should also stop the problem of accidentally writing your password at the end of the username box. Either a special button or key combination (even as simple as tabbing to the password field in question and pressing enter) would take you to the secure password screen. The light on the keyboard would be enough to show non-touch-typists that they are in the password box, the blanked screen for the login procedure enough to show touch-typists that they are in the right place.

I do consider passwords accidentally ending up on screen to be quite a weak link, though not an unstoppable one. For a busy user, though, especially around kids or teenagers, this would be a great way to stop unintentional password theft.

Friday, April 22, 2005

Another update.

Following on from my previous post, another update:

KPlayer & MPlayer are working perfectly now that I've got the codecs installed properly, have acceleration working with the right options and know to always use ALSA for sound wherever possible. I have one spurious AVI file which identifies itself as DivX and which won't play properly under any of the codecs except one, but that one's not automatically picked. Selecting that codec manually makes it work, and it's only the one file that does this.
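
"Selecting that codec manually" just means forcing it on the command line; the codec name below is only an example, and "mplayer -vc help" lists the ones actually available:

# Force a specific video codec for the one awkward AVI (codec name is an example)
mplayer -vc ffodivx /path/to/awkward.avi

# List the codecs MPlayer knows about:
mplayer -vc help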

I've associated most of the formats with their respective players. Upgrading to the just-released Opera 8 was simple enough, no glitches, and it's much faster and more stable, although it was too clever for its own good and tried to get its Java from the Win32 Opera directory left over from my previous Windows install.

I noticed this on the first Java site I visited. A quick whereis pointed me to a seemingly good Java directory, which Opera refused, but it then managed to find some sort of clue and suggested I try another. The suggestion worked and Java now runs better than ever inside the browser.

I've now bought Crossover Office because it does what I want and was cheap enough, though I'm not sure about the subscription model for updates. However, I was already paying for one piece of subscription-based software on Windows and at least if I stop paying for Crossover I get to "keep" the last version. Crossover runs my Word 2000 without problem, and also my Irfanview tool (at least all the bits I've tried that I would ever need). I'm going to try JASC Paint Shop Pro 7 on it at a later date.

I've configured the login screen the way I want it, and this way it has a simple "shutdown" button that my girlfriend can use to turn it off if she's the last to bed at night. Before, it needed a console login and a shutdown command, and she's not familiar with non-GUI systems.

With the KPlayer issues solved, I've set it to be the default player for every media file format, and I've also tested GSpot in Wine and it runs perfectly.

Sound was my biggest problem so far. SDL games were sometimes perfect; the next time they were run the sound would lag by over a second. That was unacceptable and annoying. Eventually it was cured, after much Googling, by an upgrade of SDL, a tweak of ALSA settings to enable basic hardware mixing (I'm not brave enough to attempt the dmix plugin yet and don't really have a need for it) and, most importantly, disabling the aRTs system. This hasn't lost me any sound that I'd like to keep and stops the sound I do want from lagging so inconsistently. It's something I hope upgrading to a later version (whether of ALSA, KDE, aRTs or even the Linux kernel) will fix, because it's plainly a problem.
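
One related knob worth knowing about, though I'm noting it as a general hint rather than claiming it was the whole of my fix: SDL can be told to talk to ALSA directly through an environment variable, bypassing any sound daemon:

# Ask SDL to output straight to ALSA rather than via a sound daemon
# ("some_sdl_game" is a placeholder for whatever game is being launched)
export SDL_AUDIODRIVER=alsa
some_sdl_game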

I've upgraded packages (simple) and also installed remote passwordless SSH (more difficult) by a combination of Googling and guessing. Already I've had a few SSH login attempts using basic passwords from various places in Taiwan and China, but I'm not paranoid enough to worry about them. Passwordless SSH was more of an educational distraction than a real need. And, yes, my ADSL router was forwarding unknown ports to a no-longer-existent local IP address instead of to my main machine (which used to have its own firewall). A quick tweak of the settings and SSH showed up externally as responding.
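
For anyone wanting the short version of the passwordless setup, it boils down to something like this (usernames, hostnames and the choice of key type are examples):

# On the client: generate a keypair (give it a passphrase if you want one)
ssh-keygen -t rsa

# Append the public key to the account on the home machine
cat ~/.ssh/id_rsa.pub | ssh me@my.home.machine "cat >> ~/.ssh/authorized_keys"

# Once key logins work, tighten /etc/ssh/sshd_config on the server:
#   PasswordAuthentication no
#   PermitRootLogin no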

Further on my quest for "work distractive technologies", I've played a bit with TuxRacer and it looks like 3D acceleration is working properly... :-)

Still fighting with the bloody KDE clock as it just insists on pissing about when set to a timezone that uses BST, and says it's applied changes that it hasn't. I wonder if KDE 3.4 has fixed that issue?

Still no problems or major disappointments as of yet, but rest assured that I'm hunting for some more show-stoppers.

Monday, April 18, 2005

Linux Conversion Update

Okay, so I've been running Linux for a few days now. I've managed to get all of my necessary apps and most of my "useful" apps working properly.

KDE provides most of the tools natively but I had to install some things like MPlayer (and KPlayer, a GUI for it), K3B and a few others. I've also downloaded Crossover Office, having been disappointed at the conversion capabilities of AbiWord and KOffice (through no fault of their authors, I want to add). I haven't got around to installing OpenOffice at the moment, and Word 2000 is about the only thing that I will admit Microsoft has done right, so I have no problem paying money to get that to work. It saves me time playing about, ensures compatibility and means I won't strip or corrupt important info in my DOC files. Similarly for Excel 97/Gnumeric.

Crossover + Word is working perfectly, not a sign of a glitch. Wine isn't up to running much else that I want at the moment, but luckily most of what I want has nothing to do with actual work. :-) It does run Irfanview, Dreamweaver and Paint Shop Pro, though, and DW and PSP I consider to be my vital apps. I have yet to test every feature of those programs.

I've also installed the nVidia drivers which sped up X's 2D drawing noticeably. That was the only time I needed to shut down X and restart. I'm still using a console login at the moment because I've never been afraid of a command line and I like to see what's been going on at boot before I start up a complex program like X.

Linux still hasn't crashed once so it's definitely better than my previous setup, and even X hasn't crashed out yet (which, from previous Linux experience, I was expecting, especially with a binary display driver).

I managed to download a Knoppix torrent and burn it to a CD within an hour of deciding to do it, without meeting any impassable shortfalls or annoyances along the way (I just installed BitTorrent and K3B from LinuxPackages and it all just worked). In fact, K3B is better than most CD-writing software I've used on Windows, and I've pretty much used them all.

KPlayer/MPlayer was a little more tricky, but hardly a vital program. The package I was using was missing a few symlinks, which were easily created, plus a little bit of configuration, and I'm now only having trouble with one MPEG-4 AVI that seems screwed in MPlayer but works in another, unaccelerated media player that came with Slackware. I'm assuming that the Win32 codecs are faster but not as well supported as the open-source codecs that came with Slackware, and that a slight override somewhere will cure this. While fixing this, I noticed that I quite miss GSpot, a codec-discoverer for any avi/mpg/etc. file. I'll have to find an open-source equivalent or get it running under Wine.

Installing new and missing software has been a breeze thanks to LinuxPackages.net, not that I've ever been scared of compiling my own, it just makes it so much easier to keep track of what program is where. Thanks to Slackware's plain tgz packages, if I lose track of where the "executable" went, I can just browse the package in Ark and look for it. That's worth its weight in gold and useful for those packages that didn't have quite as much attention paid to their creation as others on LinuxPackages and might be missing a KDE shortcut icon or similar.
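
The Slackware package tools themselves are as simple as the format; the filenames below are just examples of the sort of thing LinuxPackages provides:

# Install, upgrade and remove plain Slackware .tgz packages
installpkg k3b-0.11.20-i486-1.tgz
upgradepkg mplayer-1.0pre6-i486-1.tgz
removepkg kplayer

# Or unpack a package into the current directory just to poke around inside it
explodepkg some-package-1.0-i486-1.tgz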

I've been upgrading a few libs like SDL et al. and haven't run into any problems yet; in fact, it's only made things work better. I'm looking at the prospect of upgrading to the latest -current Slackware packages as they include the latest version of KDE. I'm not too worried about the rest of the updates as they are mainly bugfixes and security updates. The computer is firewalled and not running any internet-facing services, and the only bug I've run into is Konqueror crashing a bit more than it should. It's hardly a problem as it doesn't even take down other instances of itself, let alone X or the OS, and a click on the Konqueror icon gets it straight back up.

I haven't managed to test remote SSH yet and I'm a bit worried that my firewall might be being a little overzealous and blocking it as the port appears "stealthed" to any web port-scanner. I'll have to see if that's because of the firewall, the way I've set it up or the SSH config. I've noticed this same problem on another 10.1 machine with this firewall so I will have to look into this. Internet access from the computer itself, though, works just fine, even over P2P and torrenting.

I've been trying out a couple of Linux games and they've been quite fun so far, just the sort of games I like, small, fast, not too fancy but fun. Am missing my other games a bit but knew I would be and will have to wait for a new PC for gaming.

Have yet to try the DVD drive but I see no reason why it should not work: MPlayer can play MPEG-2 files off my hard disk and the computer can see the drive, so there shouldn't be any problems. I have updated packages for libdvdcss etc. on standby just in case.

Had a few teething troubles with the KDE clock as it really messes up when you select timezones, whacking hours on and off the clock a few seconds after selecting them and simultaneously resisting most attempts to stop it being too clever for its own good and adding hours for BST etc. In the end, I just set it to a non-timezone until I can be bothered to fight with it again.

Altogether, it's been pretty painless, I haven't lost any work, my main apps are up and running and there's not much functionality that I didn't have before. The computer is more stable, turns itself off EVERY time I ask it to, boots up without issues or any error messages and runs most of the existing hardware (with the only exception being the old scanner) without me having to do anything at all.

Also, I've noticed the benefit of using a local network, printer server and ADSL router again. Having a network where everything important, like printers and internet access, is handled by a standalone networked device is a real lifesaver, but then Slackware would handle my printer and any sort of NAT effortlessly anyway. I have also recently set up a Slackware machine for my brother which does all of the above in one box, so that he just plugs in any computer and it all just works (firewall, NAT, printer sharing, Samba storage, etc.) once it has an IP. This has greatly aided his transition from a 500MHz Windows 98 machine to a new mega-gaming XP machine.

My entire changeover has gone mostly unnoticed by the other user of my home network, namely my girlfriend. The only comments she had were made when she used the machine once to save turning hers on: first, that it didn't have a separate Opera icon for her like the old one did, with her bookmarks and emails. That was because I'd forgotten to transfer her settings across when I'd done mine. That's easily fixed, as all the old Windows drives are still present and mapped into the filesystem and, thanks again to Opera being cross-platform, it's simply a matter of copying them over to the right place.

The other comment was that she couldn't play PartyPoker on it, her favourite online game, because that's a Windows-only piece of software. I had tried that in Wine myself but it really didn't like it at all. She'll just have to stick to her own computer for that, which I knew she would.

Overall, quite happy with it so far and still waiting for the first showstopper.

Thursday, April 14, 2005

Plunge Taken...

Bandwagon firmly landed on with both feet...

Windows decided to play up. I thought to myself "I can fix this". Then I thought, "Why bother? This is my work machine and it should ALWAYS be up". Then I installed Linux. I now have Linux as my primary desktop.

Things I will miss:

- Games (but may well invest in a cheap XP machine for those)
- My plethora of "essential" programs, which have now become obsolete or unnecessary (no more Zonealarm icon flashing away reassuringly, no more need for specialist programs like NAT32, virus scanners, spyware detectors etc.).

Things I won't miss:
- Bugs
- Blue screens
- Viruses (Only ever had one, personally, from a respected PC Games magazine CD)
- Spyware (Never had any but always kept checking)
- Endless drivers

I've moved onto a Slackware 10.1 system running KDE and it's working just fine. I plan to use it mainly for work, and to provide a fault-free, stable system for the next few years. I'm already browsing the web, reading RSS, on IRC and emailing (having Opera be multi-platform is a lifesaver and greatly helped the transfer), on ICQ, MSN, YIM and AIM (thanks to Kopete, the Linux equivalent of Trillian without the ludicrous upgrades and skins), printing (via CUPS and the lovely people at linuxprinting.org for the PPD), with access to all my partitions, a firewall and a version of PuTTY (yes, I know it's just an SSH frontend but I liked it on Windows and I'm used to it now).

Considering it's running on a plain VESA driver for now, it's actually faster than even my finely-tuned 98. I still have to set up my CD-RW and DVD-ROM but don't see them being a problem, using K3B and MPlayer. My scanner is Linux-incompatible but I have two others sitting under the desk that are 100% compatible, so I just have to re-cable that. My "weird" hardware like my cheapy RAID card, cheapy USB stick, USB IrDA and Intel QX3 is already supported and auto-detected without me having to touch anything. I'll have a look at some point to see how hard it is to connect to my Nokia via IrDA and use my card reader, but that's hardly a priority.

Collateral damage is minimal so far, just a lilo change to boot Linux by default. All my flaky FAT drives are still there and accessible. I am considering investing in Crossover Office to run my Word 2000 and Excel 97 combo and possibly even things like Dreamweaver but for the moment, KOffice is holding the fort.
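
For the record, the lilo change amounts to a fragment of /etc/lilo.conf along these lines (partitions and labels are examples), followed by re-running /sbin/lilo:

# /etc/lilo.conf (fragment) - boot Linux by default, keep Windows on the menu
default = Linux

image = /boot/vmlinuz
  root = /dev/hda2       # example root partition
  label = Linux
  read-only

other = /dev/hda1        # example Windows partition
  label = Windows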

Considering that 90% of my use of the computer is web, email and IM, the impact has not been too bad; it took minutes to get up and running with the exact same version of Opera that I was running on Windows and to import all my stuff over. Problems with the scroll wheel on my mouse and the occasional segfault due to not having any swap were very quickly cured and I haven't managed to crash it since.

I need to switch on APM/ACPI but I haven't tried that yet. Normally a "modprobe apm" does all that for me but it appears to be missing, so I will try and track that down. When it didn't work, I was too busy trying out all the silly card games to care. :-) The worst-case scenario is that I recompile the stock kernel that Slackware provides into something a little more relevant. The only difference compared with my old Windows 98 is that now I don't have a pretty screen saying "Windows is shutting down..." when I have to turn it off. :-)

I've decided to allocate one month of time to it, to see how I get on with it. I've resigned myself to the fact that it will not run my games but I may well be able to find emulators for my favourite older systems (Spectrum etc.), use things like DOSBox to run some of my older titles, and anything DirectX/OpenGL I can use on some other computer. That should be enough to distract me and I can use an XP machine as a games-console only.

The programs I have yet to find a suitable replacement for are Paint Shop Pro 7 (nice, simplified interface around a powerful image manipulation program), Dreamweaver (nothing quite like it), and a few tiny utilities I like to use.

I'll see how it goes and see whether I can get hold of a nice games machine for myself. My ideal aim is to have a Linux desktop for work, browsing, email etc. and only power up a Windows XP machine for games, literally using it as a games console. Even then, what I want to do is make CD images of all my games and mount them over a Samba share via Daemon Tools on the Windows side so that I don't have to track down every CD for every single game every time I want to play it. The Samba share would be held on either the main Linux machine or on a small Linux storage server with a mini-RAID on it.