Monday, January 21, 2008
GP2X handheld Linux games console - A review
For Christmas this year, I managed to persuade my other half that a GP2X would be an ideal present to keep me quiet. For those who don't know, the GP2X is a handheld games console whose main selling point is that it runs Linux behind the scenes and has buckets of "homebrew" software, including ports of popular open-source games and emulators. As I've stated in a previous article, to me that makes it more valuable than any other games console I've ever owned - I can load it up with "fun" older games and play for thousands of hours rather than spend a fortune on a single modern game which I'd play for about a day before getting bored or completing it. (Incidentally, thank you Nintendo for the Wii and for bringing the fun back into gaming!)
Anyway, I am now the very proud owner of a "Mark 1" GP2X, the F-100. This is the black version with the original "joystick" rather than the touchscreen and digital joypad. To be honest, I didn't specify which version and, knowing what I know now, I'm glad that all the wife could afford was a second-hand F-100. It's a personal choice, but I can happily sacrifice the features of the successor F-200 for the features the earlier model has. Other people would disagree, depending on their usage.
Most of the people who are interested in the GP2X, or indeed its predecessor the GP32, will know about the handheld's features, but for a quick rundown...
It runs on two "off-the-shelf" ARM chips (a 940T and a 920T), with variable (and software-controlled) clocking between 60MHz and 260MHz each, the default being 200MHz. The overclocking is quite "safe" in that it doesn't cause permanent damage unless you run at high speeds for very long periods and overheat something. In fact, more demanding games can only run at the higher speeds and will "overclock" the device themselves - all that happens is that it either works or it crashes. All that's required to recover from a crash is a switch-off, switch-on to get it back to normal.
It comes with 64Mb of RAM (32Mb is directly accessible; the rest can be used with some trickery, and often is) and 64Mb of internal NAND permanent storage. The NAND contains the bootloader, the kernel, some built-in applications like the menu and maybe even some games depending on your particular purchase. This can all be replaced and customised, but you won't gain much. The NAND allows people who forgot to get an SD card to use the console straight away, and also provides an avenue for the vendors to pre-load certain games if you buy their bundles. And because it takes a deliberate act to touch the NAND bootloader or kernel, you aren't going to brick your GP2X accidentally. NAND firmware updates tend to come in the form of a bootable SD card.
For main storage, you can use an SD card up to 4Gb (32Gb in the later F-200 model) formatted as either ext2 or FAT32 - most cards come with FAT anyway, and if you want to use the connectivity features you're better off with the more prevalent FAT. This is where most of your games etc. will go, and it's quite easy to have hundreds or thousands of games on a single large SD card. And, let's be honest, SD cards are so tiny that you could easily carry a handful with you and fulfil every gaming need.
The firmware runs U-Boot to boot pure Linux 2.4 as the core OS (source and alternative firmwares are available, but you don't gain much because GamePark Holdings, the manufacturer, did a good job in the first place) in around ten seconds, with a nice splash screen and bingely-bingely-beep startup sound. All of the "menus", games and emulators are just ordinary Linux programs compiled with GCC for an ARM target. So everything is either open-source from the start or has many open-source equivalents - even the main menu, boot-up screens, built-in applications etc.
Because it relies on BusyBox internally, it's not even unusual to see bash wrapper scripts around games in the Games menu. It has a beautifully simple method of program execution: when the GP2X starts up, it boots and then runs the menu program (a standard Linux binary). That lets you select a game or application to run. On termination, each individual application is responsible for making sure it exec()'s the main menu before it cleans up. It's simple but it works, and it prevents the menu hogging RAM while you're playing a game. And if a program ever crashes really hard, you just switch off, switch on and it boots the menu back up again. Programs have full access to NAND and SD storage for savegames etc., but they tend to only place things in their own folders. This does, however, allow you to have a collective "roms" folder and use several different emulators with the same roms.
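To make that concrete, here's a minimal sketch of the kind of wrapper script you see around games - the paths are illustrative (the stock menu lives somewhere like /usr/gp2x/gp2xmenu, though the exact location depends on your firmware):

    #!/bin/sh
    # Hypothetical wrapper: run the game, then hand control back to the menu
    cd /mnt/sd/games/mygame      # the game's own folder on the SD card
    ./mygame.gpe                 # blocks until the player quits (or it crashes)
    cd /usr/gp2x                 # wherever the stock menu binary lives
    exec ./gp2xmenu              # replace this shell with the menu - nothing left behind

Because the script ends in an exec rather than a plain call, no shell hangs around in RAM while the menu (or the next game) runs.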
It operates off two AA batteries, although they HAVE to be high-power rechargeables (preferably 2800mAh) - what do you expect for a dual-200MHz portable machine?! You can get a good couple of hours out of a set of two depending on what you're doing. There is also a mains-adaptor port for static use and you can easily carry enough AA batteries to last you all day if need be. The only minor point here is that the mains adaptor doesn't charge the batteries, but you can't have everything.
The screen is full-colour, 320x240, and is very good in virtually any lighting. It is covered by a plastic protective screen about 2mm above the surface, which protects the expensive bits against that pen you keep in your pocket. There are two speaker grilles on the front (I believe only one is an actual speaker, though stereo sound is present in the headphones) and the SD slot sits in the very middle at the top.
It also has a headphone socket, power socket (3.3v regulated), mini-USB socket and EXT socket (we'll get to those last two in a minute). The two AAs sit comfortably in a rear "bump". The game controls are (on the F-100) a "mini-joystick" on the left of the screen which works surprisingly well; A, B, X, Y, Start and Select buttons in their usual places; Vol+ and Vol- just underneath the joystick; and L and R shoulder buttons. On the F-100, the joystick also "clicks" down to provide another button. Sadly this was removed from the F-200 model because the joystick was replaced with a 4-way D-pad and a touchscreen was introduced over the LCD.
The GP2X is comfortable to hold for long periods and the design is "flat" on the front, except for the joystick which can be controlled by a wiggly thumb or between two fingers for precision control. The batteries are tucked away from your fingers so it feels quite thin. You can't accidentally eject the SD card or knock the battery cover off while playing and all the other ports have rubber covers to stop you poking things in them accidentally. Headphones plug into the top, keeping the lead out of your way.
The mini-USB socket allows you to connect the supplied USB cable to access the GP2X from a PC - of any kind. No driver software is required and it appears as a standard mass storage device, so Linux, Windows and Mac can all "manage" the device's files. Installing a game can literally be a drag-and-drop. You select what content you would like the PC to access each time - either the SD card or (F-100 only) the internal NAND - and it just pops up as a removable disk.
You can copy your games to your GP2X without switching off by using this feature, or you can just eject the SD card and use an SD card reader in your PC (not supplied). There is also a plethora of USB options on the earlier model - the F-100 runs what is known as a USB gadget interface so that it can appear as a "device" to normal PCs. This allows it to be seen as a USB network card or a USB HID device (so you can control Windows games with its joypad, for example), and it has a built-in web server, telnet server and Samba server for access over the USB network. Sadly, these features are lacking from the later F-200 model, which I see as a great loss; they were instead replaced with a touchscreen interface in addition to the normal control methods.
Because it's all just Linux, you get fanatics doing things like plugging a wireless or Bluetooth USB adaptor into the socket and porting a driver for the device. Strangely, they often work, although the practicalities of a handheld limit their usefulness. There are ports of games designed especially for accessing a Nintendo Wiimote over a USB Bluetooth device, for example.
The EXT socket allows for a whole new range of options. First, TV-out. Yes, this little device can display on your TV! Some games can appear blocky in this mode but some make use of higher resolutions when they detect the TV-out cable. Either way it makes for much better "static" multi-player fun.
Additionally, the F-100 has a peripheral available called the Cradle - essentially a "break-out box" which connects to the EXT port to give you access to 4 USB ports, for connecting devices such as joypads, keyboards, mice, USB keys etc. Games have to support extra controllers, but most popular ones do. The GP2X also directly recognises USB mass storage devices connected to it. The break-out box features TV-out as well, plus JTAG programming ports (for hard-core tinkering and "un-bricking"), additional audio-out, a power-supply connector and RS232. It's safe to say that the break-out box isn't really that portable; it is intended as a home device for development, or for using the TV-out feature to turn the GP2X into a home console.
But the important thing is, how well does it play games? Well, the absolute best examples for "showing off" don't amount to much if you're looking for 3D power in your handheld, but considering the device's specifications they are very impressive. Payback is a GTA 1/2 clone (some might say a bit TOO close to the original) with 3D, dynamic lighting etc., and it plays really well. It has to be said that this is the showpiece of the GP2X, and little beats it in terms of hardware use, speed and visuals. On the homebrew side, a complete port (yes, port, not remake) of Quake is the best, in visual terms, that you will see - and it's compatible with virtually every Quake mod, including the official ones. The little beast should not be underestimated - Quake running at full speed on a device such as this is no mean feat when there is no dedicated 3D hardware.
The GP2X, it has to be understood, is not going to out-perform much at 3D. It's based firmly on 2D, from design to manufacture to software, and that's where it excels. There are ports of almost every GPL Linux game available - SuperTux, Crimson Fields, LBreakout, Liquid War, GNU Chess, Quake, Hexen, Clonk Planet... there are dozens. But that's NOT what the GP2X is for - I'm sorry, but it's not! Neither is it to be used for its built-in MP3 player, eBook reader or video player (DivX compatible). Nope. This thing is an emulation machine, pure and simple. The "official" archive is full of games, but emulators top the download charts every month.
ZX Spectrum, Amiga, Atari, Commodore 64, Gameboy, NES, SNES, Master System/Game Gear, Genesis/Megadrive, arcade games - they all have emulators that run on the GP2X. Only the most demanding tax the little workhorse, but for myself, that was more than enough. I can play all my old favourites, full speed, on a little portable device that I can put in my inside pocket comfortably. This is also the ultimate test of gameplay on the joystick - pulling off Ryu's special moves on a SNES emulator running Street Fighter 2 is flawless (I've heard the F-200 has more trouble because of its D-pad?). The button layout is very well thought out and lets you emulate SNES controllers virtually perfectly, and every other system comfortably. You never feel that there's a button mapped into an impossible place.
The speed, graphics and sound in the above-mentioned emulators are perfect for the vast majority of games at default settings - it's always the ones with the special chips that give you performance problems. Gameboy games feel perfect; SNES games work perfectly if they use the "basic" chips, so Super Mario World and Mario All-Stars are flawless, but things like Street Fighter Alpha, Starfox and Yoshi's Island will suffer. There is a port of MAME available with over a thousand games playable. Most '80s arcade games are fully playable and later ones are hit-and-miss depending on their specifications. I love Final Fight, Wonder Boy, Pang, Ikari Warriors etc. and was very glad to see that they all ran perfectly.
You can stretch the machine to higher-level games (there's even a PSX emulator for the very optimistic) using the built-in overclocking options in most emulators, but you rarely go from "Aw, it's unplayable" to "Yay, it's perfect" by doing so. Also, as with all overclocking (of which I am a massive opponent when it's used on PCs), results vary considerably based on the particular manufacturing that went into your particular device. Some people can overclock their GP2X to 270MHz and beyond without problems; others can't get much past the 200MHz default. Oh, and it can kill your batteries much more quickly, so in fact what you find yourself doing is finding "sweet-spot" under-clocking limits for every game, saving battery power without sacrificing gameplay. Most emulators allow you to do this on a per-game basis, which helps you save as much as possible.
Emulators are definitely leading software development on the GP2X - the RAM-timing and MMU hacks that vastly improve performance originated from a desire to get every ounce of power out of the GP2X, and they are now present in every emulator and in most homebrew games. MAME lets you use 4 USB joypads connected to the handheld for multiplayer action - a feature still rare in other GP2X games, but one that has been copied into most emulators for the platform. There is a smattering of commercial games, none really priced higher than about £10, and their quality does show through, but not many are able to compete with the "free" games coming out of the porting community. I thought Payback was well worth the money, but I'm not sure I'd fork out for some of the puzzle games. I'd much rather give a homebrew author the money for a particular favourite.
And with a 2Gb SD card, you can fit almost everything you'd want (including a couple of hundred MP3s and a video or two) onto a single card.
On top of emulation, homebrew would be the next software "feature". If you can compile against ARM targets (easy with GCC and the various devkits available for the GP2X), base the game on Allegro or SDL, or be prepared to write a little hardware code, you can get games up and running in minutes. The hardware was designed to be accessible - it's all just Linux. You can get the joypad showing up in /dev/joy, you get sound out of /dev/dsp, and you can do some memory-mapping tricks on /dev/mem and /dev/fb to create double-buffered video in the slightly trickier top 32Mb of RAM. Everything is just Linux 2.4 with some extra features here and there to let you tweak the LCD backlight, control the battery-low light, speed up either CPU etc.
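As a rough sketch of how low the barrier is (toolchain names and paths vary between devkits - I'm assuming one that installs an arm-linux-gcc cross-compiler and a GP2X build of SDL under /opt/gp2x):

    # Cross-compile an SDL game for the GP2X's ARM CPUs
    arm-linux-gcc -O2 -o mygame.gpe main.c \
        -I/opt/gp2x/include -L/opt/gp2x/lib -lSDL -lpthread
    # Drag mygame.gpe onto the SD card over USB and it shows up in the Games menu

The .gpe extension is just what the menu looks for; the file itself is an ordinary ARM Linux executable.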
There are versions of BASIC available which target the GP2X and are designed to create the sort of mini-games that can be more fun than commercial games - sites running nothing but Flash games are proof of this on the Internet, and on the GP2X you can knock up similar games in minutes using one of dozens of development packages - Fenix, Python, BASIC, all sorts of languages are available. There are hundreds of games available, some diabolical, some fantastic. There are even ports of ScummVM, Albion, Descent, Doom, Duke Nukem 3D, Ultima 7, Heretic, Hexen, Rise of the Triad, various DOSBox-based games, Frozen Bubble, Super Mario War, OpenTyrian and even the graphical version of NetHack! (I'm sorry, you can't leave that game out of the list!) Every year a competition is run for the best GP2X homebrew or ported game, and the winners can be very impressive.
At the moment, when I'm not playing the "oldies" on an emulator, I'm playing Ghostpix (a very polished Picross/Nonogram/whatever-you-want-to-call-it puzzle game), SuperTux, Liquid Wars, Kuoles, FreeDroid, Frontier2x (an Elite 2 port), Quake, and a million other "five-minute" games that are just fantastic for a handheld console.
And the most important thing - the GP2X makes gaming fun again. You plug it into your PC, download some stuff from the web or the official archive, throw it onto the SD card over the USB cable, then go to Games, GameName, GameExecutable and play. You can even use the eBook reader to read the instructions on the device itself (emulators tend to have a lot of instructions because, for instance, SNES emulation demands quite a lot of buttons to be mapped, so you need to know how to get back out to the menu - usually some combination like Vol+ and Vol- pressed simultaneously, or R+L+Start).
A lot of thought obviously went into its design. It's sleek, small, comfy, durable and practical. It has plenty of connectivity and hacking potential (even the F-200). It works well and is sturdy. Controls are obvious (Vol+ and Vol- work in just about EVERYTHING, even though they are software-controlled) and well thought out. The built-in applications are more than good enough and substitutes are easy to come by - there's even one that makes it look and work like the PSP interface.
It could benefit from an add-on battery pack like the PSP has, especially if you can get several hours out of such a thing, even on the highest demands. And it really needs an in-built charging circuit. But apart from that it's very, very good and it should really get more attention from hardware designers.
For the next model, a "hybrid" of the first and second models would sell much better - sacrificing old functionality for new functionality isn't a good choice to make. If you could upgrade its 3D capabilities without destroying backwards compatibility (the homebrew/porting scene is far too important to just discard and try to build another), that could only be a good thing. I don't see it being impossible, even if it's only in the form of a 3D accelerator chip and a custom OpenGL library to manage it. But on many fronts it's already perfect.
The capabilities are there for networked games (at least in the F-100), but it doesn't appear that many people have used them. Maybe a small wireless or Bluetooth chip could solve that problem in a backward-compatible way (especially with the Wiimote being Bluetooth-based, it looks set to be a standard for wireless games controllers) - it's not like the drivers for such things would be impossible to port. You could easily upgrade to a 2.6 kernel (some people already have!) for the next version and open up a whole new world of new drivers you could take advantage of. You would have to tweak it, though, to ensure power-use stayed as low as possible - embedded kernels aren't exactly rare, though.
But the best thing about the GP2X is the reputation that goes with it. Nobody knows what it is, so you get some strange looks when you produce it from a pocket on the train - and some even stranger ones when someone recognises the Mario "ting" coming from an obviously non-Nintendo device. It is absolutely fantastic for whiling away long journeys, it has to be said, purely because it's designed for a short attention span... listen to some music, read an ebook, play a SNES game, listen some more, play some Megadrive, listen some more, carry on that campaign in Crimson Fields, etc.
All in all, the GP2X has managed to do what a lot of the larger handheld console developers haven't. It turns a profit, based purely on hardware. Software is prevalent even though there were few "launch" titles. It's great fun and well designed. And it has a community effect that's unmatched.
Here's to a GP2X sequel that's even better!


Sunday, October 28, 2007
Why can't my computer...
There are a lot of things that annoy me about computers - or, more usually, about Windows in particular.
Why can't my computer:
1) Accurately gauge how long it will take to do something? (Prime Culprit: Windows)
File copies, program installations, downloads, no matter what it is, if it's got a progress bar it's not going to be accurate. Not only will it not be accurate, it won't even be close most of the time. Sometimes, granted, it'll be spot-on but that's just got statistical averaging written all over it. Why, in the middle of a file copy, does it suddenly decide that it's going to take 3 days, no, 10 seconds, no, wait, 18 hours, no, hold on, ... ?
Yes, not all things are predictable, but the computer could at least estimate within a reasonable margin of error, and not blindly spit out random and wildly varying estimates without even flinching. It'd be much better if, when it gets confused or held up, it just gave up guessing until it knew again! (KDE does do this, for instance, when copying files... it'll just say "Stalled" for a second.)
2) Know where the drivers for a bit of hardware will be? (Prime Culprit: Windows)
Why do I have to point it to a driver that I've either had to insert myself or, a lot of the time, had to go to www.randommanufacturerswebsite.com/techsupport/drivers/windows/xp/driver/thingamajig/v8/revision2/setup.exe and download myself before it will recognise the hardware I bought?
Why can't there be a standard for hardware so that everything USB- or PCI-based contains some information that tells the computer roughly WHERE a set of drivers will be? Or, failing that, some website where it can automatically look up what drivers exist and where, when given a PCI/USB ID, even if it gets "Not Supported" or "Unknown" some of the time?
3) Know what I mean? (Prime Culprit: All OS)
When I type www.fredc.om into my browser, why can't it just correct it for me (maybe with a Google-style "did you mean?")? Some mistakes are hard to catch, but it doesn't even catch the little ones - wwww.fred.com or www,fred,com, for instance. And when I give a file copy command, rather than just error at me because I mis-spelled or, in Linux, mis-capitalised a filename, have a small go at working out what I meant and ask me if it's right. (Yes, there are certain plugins etc., but I want it to be a standard feature. I can't be the only person who's made these silly typos!)
4) Protect me from others and myself? (Prime Culprit: Windows and Linux)
Now, I'd like to point out that this is almost exclusively a Windows problem for one part of the question (protecting me from others) and almost exclusively a Linux problem for the other (protecting me from myself). Windows just doesn't go far enough to stop other people getting into/onto/through my computer's defences. Not even after the 50th version which has promised to do so. But it has some fantastic features to protect me from myself, if they are enabled. File deletes are, on the whole, confirmed first and undoable later. Plus, it's quite hard to completely shoot yourself in the foot by, say, deleting your Windows directory accidentally. On the other hand, it's extremely easy, if you are not careful, to totally balls up your Windows installation just by clicking on the wrong website or the wrong email, or even by disabling the wrong service.
Linux, though, protects you from outside elements a lot more. And even if they do get through, it is quite easy to recover from them and, additionally, their impact will be limited to the user accounts that are affected. However, even as a normal user, you can wipe out your home directory in one command without any confirmations and with little chance of getting it back unless you have specifically put into place procedures to recover it (such as replacing commands with safer versions, configuring user accounts so that they can't do that sort of thing, or just having easy-to-restore backups in place).
So it seems that Linux could benefit from a bit of Shadow Copy, a bit of System Restore or some kind of filesystem rollback, and Windows could benefit from a bit more privilege separation, a bit better programming and a focus on non-virus software rather than anti-virus software (i.e. before-the-event practices that stop the viruses getting on there so easily in the first place).
And this doesn't just apply to desktop environments. Humans make mistakes. Operating systems should be designed to take account of this fact and help where possible.
5) At least give me a clue? (Prime Culprit: All OS)
"mplayer: error while loading shared libraries: liba52.so.0: cannot open shared object file: No such file or directory"
(Note that this is an example only - when you compile mplayer from source, it does in fact warn you or leave out support when pre-requisites are missing).
Well. Lovely. Fantastic. So you know that you NEED liba52. You won't run without it. You were obviously written with it in mind. So why can't I instead get:
"Mplayer: Error: You haven't installed liba52. You can download this from http://liba52.sourceforge.net/"
Now, with modern dependency checking this sort of thing is getting rarer, but even so, where's the error message that a human can parse easily? Windows does just the same with missing DLLs. And compile-time messages without run-time messages are another bit of a bind. Fine, tell me gently at compile time that I need libX and exit neatly. But why not do the same when I move that binary to another machine that doesn't HAVE libX, instead of erroring as above? More people RUN programs than COMPILE them. People compiling programs usually have the sense to sort such problems out for themselves (and such a trivial error is nothing compared to some of the doozies you can get when compiling software yourself!); ordinary users can't.
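Until that happens, the closest thing to a human-readable answer is asking the loader yourself - a standard diagnostic, nothing exotic (the binary path here is just an example):

    # List the shared libraries a binary wants and spot the missing ones
    ldd /usr/bin/mplayer | grep "not found"
    # e.g.  liba52.so.0 => not found   - at least now you know what to go and install

But my point stands: an ordinary user shouldn't have to know that trick in the first place.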
Similarly, dumbing down error messages too much is a major mistake:
"An error has occurred."
What error? Why? Whose error was it? Was it my fault? Was something wrong with the machine? Where did the error occur - in the program, in Windows, in something else? What can be done about it?
As a replacement, how about:
"An error has occurred in PartyPoker. [[Note the friendly program name]]This appears to be a problem with that program. You can try running it again, or checking for a PartyPoker update. If you still recieve errors with PartyPoker, the program gives the website address www.partypoker.com/problem as a source of help. Click below for a file which will help the program author to determine the cause of the problem."
Or:
"An out-of-disk-space error has occurred. Windows is showing you this error because it did not have enough room to create a 50Gb file on drive C: as requested by the program Nero Burning ROM. You have only 10Gb free space on drive C:. You can try:
- Clearing up 40Gb of space on drive C: and retrying the operation
- Instructing Nero Burning ROM to use a drive with more space (for example, D: currently has 100Gb free)"
6) Fix itself. (Prime Culprit: All OS)
Windows.
Windows Last Known Good Configuration.
Windows Safe Mode.
Windows Recovery Console.
Where's the "I need to get to my files" option - with a minimal desktop that uses NO programs, services or other information in common with the main Windows install and lets you copy your files off the computer before it dies completely? Where's the "Run Diagnostics" option to let Windows have a go at trying to find out what it actually wrong rather than blindly looping through a list of Windows "versions", each of which gets less and less drivers loaded? While we're at it, where's the "Check My Disks/Memory/Hardware" option in that list?
Where's the "Right, last time I crashed loading the graphics driver, according to these logs - this time I'll ignore graphics and just load a basic VESA driver and see if I can get further" logic?
And then we have the fantastic idea of including an option, which is usually the default, to automatically restart Windows on error (great, so you can't even SEE the BSOD when it whizzes past, and then Windows will blindly sit there trying to get into Windows every time it reboots until you come and fix it - it'd be better just to turn itself off!). Yes, there is sometimes a need for a watchdog on a high-availability server, but on an OS designed for home desktop use? And what's the point of it just infinitely restarting at the same place unless it LEARNS from that mistake, especially if that place is before it even gets to a desktop?
That's just the start of my list. Hopefully, I'll finish it off soon.


Friday, October 19, 2007
What does the Linux desktop need? Those who say "I want, I want..."
I've just read an article linked on LWN.net entitled "What does the Linux desktop really need?".
Let's veer slightly and ask a similar question: "What does my car need?". I'd say it NEEDED a lot less rust, a sunroof that isn't held in by parcel-tape, a new wheel-bearing (AGAIN) and something to make my lights turn on at least once in every ten tries.
Personally, I'd say it could also do with electric windows, heated front windscreen and a CD changer/MP3 stereo. But wait. Hold on. If we're saying that I can have anything I've seen on a car... I "need" braking-power reclamation, a hybrid engine, 0-60 in 5 seconds, a five-year warranty, finger-print recognition for starting the engine and GPS vehicle tracking.
Now, that's ridiculous, because my car doesn't NEED those last things, but that's basically the point the article was making. The Linux desktop doesn't NEED anything else. It's there. It's a viable alternative to Windows. It can do anything that Windows can do (given the developer time investment). Years of development on the Windows side are now recreated in a matter of months on the Linux side. Take drivers for things like newly-released wireless cards, some of which have to be reverse-engineered before a driver can be made; take some of the fancy graphical effects present in Vista, or some of the desktop "features" of MacOS and Vista - there are already equivalents and copies available for Linux that do just the same, most of which were started AFTER someone had seen those features elsewhere.
There isn't a type of application that can't be run natively, in theory. Given enough horsepower, we can even replicate the majority of Windows functions enough that high-power applications and 3D games can be run in a Windows-userspace recreation (Wine) at astonishing speed considering the technical problems of doing such things. Not only that, Linux can do virtually everything that Windows can do natively, and usually does a better job at it. There's nowhere to go from here apart from getting people to a) use the thing and b) develop for the thing, both of which are mutually dependent.
Reading the LWN comments on the article is even worse... it "needs" Photoshop, Office, games... No, it doesn't.
It's been proven - it's technically possible to write top-class 3D games and powerful image-editing programs for the Linux desktop. It's not even any "harder" than doing so for Windows. When Adobe want to do it, they can. In fact, Linux is more standardised for such things. You don't need to worry about ATI vs nVIDIA vs Intel - just let OpenGL sort things out for you.
The fact is that the desktop doesn't NEED anything, unless you are intent on recreating Windows on Linux. That's the problem - the Windows mentality isn't suitable for, or compatible with, the way Linux works. Windows people want firewalls that don't disrupt their games and let any application request an open port via UPnP. Windows people want antivirus because they think they need it. Windows people want perfect connection to the heap of junk that is Active Directory. Windows people don't want to enter passwords or manually configure their hardware in order to do dangerous things, like overclock their graphics card or turn their firewall off. You can't change those people. Not without a big stick.
The way to get Linux onto a desktop is not to perfectly emulate every undocumented Windows bug and quirk when connecting to an Active Directory server for login so that some poor sap can run Outlook the way he likes, but to build a Linux equivalent that has clear advantages - faster, smaller, easier to manage, more transparent, easily portable, easily extendible and which can do stuff not seen elsewhere. The people who are more likely to make decisions based on those criteria? Large organisations. Who use networks. Which are run by a poor sysadmin somewhere who "knows" Windows but only "plays" with Linux. They don't care that Linux can detect and use 99% of all PC hardware - they care that it takes an hour to set up a new type of PC to the way they want it to be, rather than a five-minute copy of a well-known model's hard drive.
Imagine a Linux distribution. You install it in a "server" mode via a menu. Then you install it on a client machine via the same menu. At no time did you have to install drivers for monitors or some such rubbish. You don't HAVE to license it. You don't HAVE to spend days setting up the user group structure and policies to a safe default. Yes, there'll be parts of the machine that won't work without proper drivers, but that's not important. Really. These sorts of places SPECIFY the machines. They say what hardware it will or will not come with, down to the individual components. Compatibility with some cheap winmodem is not their problem - they buy a different modem, especially if it affects their security or their technicians' free time.
Anyway, you've started a client and server from barebones. Then imagine that you have automatic, cross-network authentication to that server, client logon, desktop settings and "policies", which allow the network administrator to change every single setting and restriction on the clients in almost every program via one directory interface. Imagine it works just as well over wireless, VPN, a Bluetooth interface or a USB cable. Just as automatic. Just as simple. Just as fast.
You can throw software across the network by just clicking on a machine in a tree diagram on the server and deploying a package (so it'll be an RPM, not an MSI, but who cares?). Managing a thousand users on a hundred workstations becomes a cinch. And as a bonus, the machines automatically share work between them when they are idle. They automatically discover each other (with appropriate administrator control) and use each other's storage as a mass RAID for the network data, including encryption to stop people looking at other people's work. It does it all without needing a million ports open. It does it all without major security problems. It works just as well from outside the network, when one of your staff takes a client laptop home - they plug it into their broadband, maybe they have to click an option to connect remotely instead of locally, and bam! - it's just like they are at the office.
Now imagine that you can do all that on lower-end machines than Windows could. And you can do more, stuff that just isn't possible on Windows. You can plug four graphics cards into each PC, four USB mice and four USB keyboards and now four people can use the one machine without even knowing. And their CPU power is being shared across the network, with all the other four-man machines, maybe even with the server itself doing some CPU work on their behalf when it's not busy with other things. And you wouldn't even notice that was what was going on. We're *not* talking thin-client - but you can do that if you want, too. You just tick the "thin-client" option when you install the client and the system does the rest for you.
Now imagine that not only does it do all that, but you can also trust the server to back up those clients too, whether they are working locally or remotely. The server remembers individual machines and any quirks you've had to apply (that binary modem driver, that boot setting that prevents the ACPI problems etc.) and when you rebuild them you can re-include those quirks too. Saving data to the network is transparent, and not only does the server RAID all its data, it shares it out with the network. Server blows up? No probs. Stick the CD into any machine, create a server, maybe supply it with the old server's private key and bam - all the data feeds back from the clients to the server and the network rebuilds itself.
Well... the problem is that most of that stuff exists in one form or another. Certainly everything listed above is perfectly "do-able" and there are at least a few bits of software for every single component of that sort of system. They're not all tied into one distribution (that I know of), but they are there. The most "together" distributions are the paid-for ones, Red Hat etc. There is nothing there that isn't possible; it might take a few months' work, and you could probably do it all with nothing more than existing downloads, kernel patches and a bit of bash-glue. But it's not around. You can't actually get it. And most Windows admins won't even try it while it involves a lot of messing about. Have you seen some of the HOWTOs? Have you seen the number of steps needed to get Kerberos, LDAP, exotic filesystems, remote control, VPNs etc. all working your way? Windows is no easier, either, so you're left in the "what's in it for me" valley.
What's needed is not more and more replication of existing features but new and exotic uses. What's the most interesting part of Google? The Google Labs. What's the thing that people ALWAYS buy an OS for? The new, interesting features. Yes, when Samba can perfectly manage every aspect of AD integration, it'll be sought-after. But people scrambled to Vista "because". There wasn't anything complicated in it, there was little groundbreaking stuff and popular opinion now says that Vista is more of a pain in the backside to run for the average user than previous versions. But it was bought because it "could". It could do "new stuff" that Windows people hadn't seen before. Remember Windows 98SE that could "do USB".
People are already talking about the next version of Windows Server because of what it can do. Not about how well it does it. Not even about how easy it is to do, that's normally left until review copies appear in the hands of magazine reporters, but about what's new. And, stupidly, not even about what it doesn't do any more. The fact that every single version of Windows Server has had a hundred features announced that have never appeared is overlooked. The hype surrounding it by the time it comes out MAKES people want it. Vista was supposed to include database-style filesystems, a download manager, filesystem "filters" (Libraries), Palladium "trusted systems", integrated anti-spyware, UEFI support, PC-to-PC Sync, a full XPS implementation, to have a better UI, to perform better than Windows XP, and that's before you even get into all the capabilities that they physically removed from the OS that were there before in XP.
And the fact is that Windows Vista was just a small upgrade. If it had had ALL of those things, it could possibly be the best OS in the world. And Linux CAN have the majority, if not all, of those things. Most of them even exist for Linux right now. We just aren't using them.
People who "push" developers to make a Linux-Windows just don't get it - Linux is already in front in terms of features and technical details. We all know that. It wipes the floor with it's "main" competitor (although, to be fair, so do a lot of other operating systems). It's not that we're not "there". We are. Something else is holding Linux back. Firstly, ease of use. That's usually a big trade-off with not only compatibility and security but also with system performance. However, Linux has power to spare. And then it's just a matter of making things work without a million settings. I'm a big fan of command-lines and text-based configuration files - there is no reason to lose them. But they don't have to have vi as their only interface.
The main thing it's missing, however, is a short, simple, easy demonstration of powers that Vista and even future versions of Windows either can't or don't have. It needs a show-distro to turn up, either from the depths of one of the established ones or out of the blue. It doesn't need that distro to say "Look at me, I'm just like Windows, only slightly better"; it needs to say "Why on Earth would you bother to look at an OS that can't do X, Y and Z", where X, Y and Z are things that either have never been done before, or have always been "promised" or "desired" and never materialised. And I don't mean a flashy GUI. The nearest we ever got to that sort of hype was the Kororaa Xgl Live CD, and look at what it did - very little of any practical use. But it was NEW. It was even NEWER than what Windows could do at the time. So it got a lot of press.
Being able to access an AD domain isn't something new. It's not impressive to people. It's not even that innovative - there's a major OS that does it automatically and (fairly) reliably. What's needed is to play to Linux's strengths - flexibility, malleability, speed of development, freeform and accessible APIs. That means coding quickly, easily, without barriers, restrictions and expensive SDKs. Just get in there and write stuff. In half the time it's taken Windows to get where it is, Linux has replicated and/or surpassed every aspect of Windows. Now it needs to overtake - and you can't do that by blindly copying features from Windows, or even from other OSes.
Now, the article doesn't push Linux for anywhere near as much as the comments on LWN do. To them I say: just because Windows does something doesn't mean that Linux should follow suit. If that were the case, Linux would BE Windows. I don't WANT my Linux desktop to have a built-in GUI firewall that's difficult to configure the way I want. I don't WANT automatic update dialogs that are a pain to turn off. I don't WANT something to automatically detect all wireless networks the second it sees a wireless card.
On the software front, what would be the point of "getting" Exchange, Adobe or Office as Linux-native versions or equivalents? By doing that, you would have to integrate a significant portion of the Windows infrastructure, including Active Directory and DirectX. So what you've done is make a "free" version of Windows. Whoopee. Everyone who's currently using Linux is using it NOW, while it's not a version of Windows... why? Because it's BETTER. It isn't bound by some stupid corporate decision or two decades of backward-compatibility quirks.
Take a look at some edited highlights of Vista SP1:
- Performance improvements
- New security APIs
- A new version of Windows Installer, version 4.1
- Users will be able to change the default desktop search program
- Support for the exFAT file system (FAT extended for USB sticks, basically)
- Support for 802.11n
- Terminal Services can connect to an existing session
- IPv6 over VPN connections
- Support for booting using Extensible Firmware Interface on x64 systems
- An update to Direct3D, 10.1
- Support for the Secure Socket Tunneling Protocol
What's there that Linux won't have by the time it comes out, if it hasn't got it already? What's there that Linux couldn't do? Nothing. And to be honest, as a changelog for a major upgrade to even a stable release of an OS, that's pretty pathetic. What about Server 2008? It's all pretty much the same. There's nothing in there that Linux doesn't already or couldn't do with a year or so's work.
Let's stop faffing about asking Windows users what they think they need from a Linux machine. Let's SHOW them. Let's just get stuff done and forget emulating Windows. We all know that Windows has its death coming to it. The longer we give it credibility by attempting to copy everything it does, the more time we waste away from the interesting stuff, the stuff that will have people hooked. We have SELinux, we have file-server compatibility, we have directory management software, we have all of this - but nobody cares. We need to show stuff that Windows can't do.
We need a five-machine network that can outperform the best Windows servers and individual desktops when both are running 20 simultaneous clients (as in four people ACTUALLY WORKING on each of the five machines, locally). We need filesystems that "heal" (and not like self-healing NTFS in Server 2008, which is basically thread-safe Scandisk), and network filesystems that let Google do its job without worrying and with which small companies no longer need to worry about tape backup (although, obviously, they still could) - which adds 50% to the price of any server.
We need perfect, logical, simple directory systems that can do stuff Windows AD can't even dream of, in an easily editable/recoverable/backup-able format - it doesn't matter if it's Fedora Directory Server or Apache Directory Server, no-one cares. We need it all to run automatically but securely. We need automatic secure communications across a network to pick up new machines and integrate them directly into the directory. We need systems that (with proper admin control over the process) auto-patch underneath systems that are still running. We need one-click setup, trusts, backups and merges of entire domains.
We need client systems that can repair themselves from known-good images (which, hell, should be stored in the cross-network filesystem) while they are still working - no, we don't acquire viruses but you still need to Ghost stuff back sometimes. We need machines that detect faulty hardware and compensate automatically - memory just failed in the server? Fine. Isolate the memory areas responsible (BadRAM), alert the admin, allow them to work-around the problem temporarily until they can get a replacement, restart and check all services and then carry on like nothing happened. And all the time you spotted it where Windows would have just crashed.
We need systems that can tolerate as much failure as possible. Primary hard drive or RAID array failed? Warn the admin, carry on as normal, read what you need off the network. Network failed? Route around it, over a USB connection if the admin only has a USB cable left, or FireWire, or wireless, or Bluetooth, or Serial, or Parallel, or... We need a real, "intelligent" help system. When it sees that admin hunting through menus looking at DNS settings, it tries to (unobtrusively) help. It brings up a checklist and works through things one at a time by itself until it says to the admin "The DNS server is fine. But you forgot to point that client machine at it." or "The DNS server doesn't have a reverse-DNS for that IP, that's why what you're trying isn't working".
We need systems that collectively monitor, detect and shutdown other rogue systems within their sight, a kind of distributed-IDS built into the system. We need systems that do all this 100% securely, with full certificate-chains and verification and let the admin control exactly what's going on if he wants. And when someone breaks that particular method of encryption? Ah, just choose one of the thousand-and-one encryption methods and do a one-time re-encryption to change every server, client and software over. Well, yes, do pick up local Windows systems and tie into them as much as you can but forget making that a priority. Set NEW standards. Make people say "I absolutely NEED a system that can do that." Let the other OS manufacturers play catch-up for a change.
Let's stop playing catch-up. We already won that one, there's no competition there any more, there's no more fun to be had. Let's start wiping the floor. Let's get JUST ONE feature in that people decide they absolutely NEED. And let's do it before Windows can even get a sniff. Let's do it so that, when the time comes for Microsoft to replicate it, they want to be able to read OUR code in order to get it done well enough. Let's stop playing about asking 90-year-old grannies why they don't like Linux when they know nothing BUT Windows... their answer will always be some variant of "It's not like Windows".... either that or "That penguin is scary". Let's make the people that are really scared of the Penguin be Microsoft and Apple. Because, at last and for once, they can't keep up with Tux.
Friday, July 20, 2007
Essential Linux Utilities
Ever since setting up three Linux PCs in a row, I've realised that I've grown dependent on a few pieces of software for Linux, above and beyond what comes with a standard distro (or, at least, with Slackware).
Beep - a tiny util that can beep the PC speaker in a variety of ways, perfect for headless systems. I use it to give warning tones inside boot scripts, and also to provide a rising or falling tone at the start or end of certain tasks, such as booting or shutting down. Because it uses the PC speaker, it doesn't interfere with ALSA, works on even the oldest of PCs, doesn't require an external set of speakers etc. Beware using it, however, on multi-user installations - I tend to keep it restricted to the audio group of users only, to stop people messing about with it. A rising "boot finished" tone, for example, might look like the snippet below.
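(The frequencies are just my taste - beep's -f flag is the frequency in Hz, -l the length in milliseconds, and -n starts a new note.)

    # Three ascending notes from the PC speaker - audible even on a headless box
    beep -f 440 -l 120 -n -f 554 -l 120 -n -f 659 -l 200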
Ether-wake (available from various places, originally by Donald Becker) - the ultimate power-saving util... this is a Wake-on-LAN packet broadcaster to wake computers that support WoL from their deep sleep (i.e. turn them on, so long as they are plugged into the network and have a power cable in them). With this I keep my home network largely turned off and "wake up" (i.e. turn on) particular PCs as and when I need them. And larger-scale experiments have shown that there's nothing better than the sound of a room full of PCs all booting up simultaneously at the click of a single button / cron job.
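Usage is about as simple as it gets - all you need is the sleeping machine's MAC address (the one below is made up, obviously):

    # Send a Wake-on-LAN "magic packet" out of eth0 to the target machine
    ether-wake -i eth0 00:11:22:33:44:55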
htop - a better version of "top" that I find easier to use. It shows processes and RAM usage in a nice, controllable text-mode GUI that allows you to kill individual processes, scroll up and down etc.
rc.firewall (see this post for a mirror) - a perfect, simple, one-file iptables firewall that works well as rc.firewall in Slackware. It works for single computers, NAT'ing routers, multiple network cards, multiple-networks-on-a-single-card, and lots of other configurations. It uses a simple syntax for even multi-port port-forwards, has many simple options for things such as allowing or denying pings or cross-network traffic, has a very strong default configuration and can be reloaded at the drop of a hat, at which point all the detected network interfaces are re-firewalled. To give a flavour of what it saves you from typing by hand, a sketch of just one of its jobs (a NAT'd port-forward) is below.
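(This is not the script itself, just the sort of raw iptables rules it generates for you - addresses and ports invented for the example.)

    # Forward incoming TCP port 8080 on the router to a web server on the LAN
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
        -j DNAT --to-destination 192.168.1.10:80
    iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT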
x11vnc - this is one of those utilities that few people ever use. It's a VNC server for X. But it has a vital difference... it's a VNC server for EXISTING X sessions. Most people are familiar with Xvnc, which allows you to spawn an entire X Window System where each "screen" is actually a VNC session (thereby providing an instant VNC thin-client), but that's not much use to someone with a single-user Linux PC who wants to log onto their home PC and click on that link they left showing in their browser. x11vnc does just that - the command lines get horrid very quickly, and you have to pay close attention to the security of the thing (because now connecting to the PC on port 5900 is the equivalent of logging in as yourself on the local PC!), but it's a great piece of software. The author is also working hard to make VNC-wrapped-in-SSH a cinch, even from Windows PCs, by extending the TightVNC clients to incorporate SSL tunnelling. Yes, you can now do this with some things like KDE's Remote Desktop functionality, but I've been using this particular utility for so long that I have scripts which build on it, and it also has some features that just aren't present in other imitators. A minimal, reasonably safe invocation is sketched below.
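(A sketch only - the SSH tunnel is doing the real security work here, and the password file is whatever you created beforehand with x11vnc -storepasswd.)

    # On the home PC: share the existing desktop, but accept local connections only
    x11vnc -display :0 -localhost -rfbauth ~/.vnc/passwd
    # From elsewhere: tunnel port 5900 over SSH, then point a VNC viewer at localhost:0
    ssh -L 5900:localhost:5900 user@homepc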
knockd - a simple port-knocking daemon implementation which can be triggered remotely using either a tiny utility that works on Linux/Unix/Windows or simpler tools such as telnet. Perfect for securing a server for remote access (and incidentally the best way to stop random port probes on your machine - my SSH logs were filling up until I found this), as you can just put the portknock client on a USB disk or a website and download it from wherever you happen to be, or even "bodge" a knock in a real emergency. Also, the configuration basically consists of port sequences and the names of scripts to run. This means it's easy to configure it to see port-hits on ports X,Y,Z as an instruction to run an "open" script, and then hit ports Z,Y,X to run a "close" script. And because you can have multiple port sequences active, it's very easy to have all sorts of different things happening - a minimal configuration is sketched below. See my article here for a bit more background on my use of this utility.
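(Sequences and rules invented for the example, but this is knockd's real configuration format - %IP% is expanded by knockd to the knocker's own address.)

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn

    [closeSSH]
        sequence    = 9000,8000,7000
        seq_timeout = 5
        command     = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn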
Tuesday, July 10, 2007
Mirror of Projectfiles.com / lfw.sourceforge.net rc.firewall
Having just completed a set of instructions for a group of Linux newbies on how to set up a firewall, I then discovered that my favourite Linux iptables firewall script has all but gone from the Internet. I checked Google, both "official" websites (including the Freshmeat.net mirror) and archive.org. Still no joy. Luckily I had kept a copy of this GPL script, which I have mirrored.
For those people who have had trouble finding the script that's been hosted at both ProjectFiles.com and http://lfw.sourceforge.net you can download the rc.firewall script at the following address:
http://www.ledow.org.uk/linux/
This is the 2.0 "final" version. I have the documentation mirrored too. Oh, and I assume that the reason the archive.org site has no mirror is that the author wants no more to do with it. So be polite if you do need to contact them (the above file has their email address etc.) and don't bother me for support, either! (You probably couldn't afford me!)

