Tag Archives: Writers Perspective

No Nonsense XML Web Development with PHP, Thomas Myer

PHP doesn’t spring to mind when thinking about processing XML data, but it is a better solution than you might think. PHP is used to develop websites built on HTML, a markup language that shares its structural principles with XML, so it is a sensible choice. PHP also includes powerful tools for parsing and manipulating XML data, which we can use to our advantage to convert and manipulate XML information in our PHP-based web applications. XML-RPC and SOAP also use XML, so a web-focused language is an obvious choice for web services too.

All of these situations are covered in extensive detail by Thomas Myer in his new book, No Nonsense XML Web Development with PHP from publisher SitePoint, a long time source for articles and information on web applications and development.

No Nonsense XML Web Development with PHP

The contents

No Nonsense XML Web Development with PHP covers a gamut of topics, from an introduction to the basics of XML and its uses through to web services. Throughout, the straightforward and relaxed tone of the book helps you to pick up the background behind what Thomas is teaching you, as well as the specifics of each topic.

We start off with a simple examination of XML and the role of DTDs in keeping XML data consistent. Thomas is right to point out that DTDs are about consistency, rather than restriction, of the information we store in XML. He also covers the role DTDs play in validating information, which often simplifies the code required in our application to confirm the quality of the content.

Our first foray into the specifics of XML and PHP starts in Chapter 4, where the basics of XSLT transformations are covered. This is also the start of a recurring theme: the development of a content management system (CMS). The book uses the CMS as a hook to link together all the different elements of the XML/PHP content, and it is an approach that works well. This introduction is enhanced by a more detailed examination of XSLT before moving on to the manipulation of XML with JavaScript and the role of DHTML in web site development.

By Chapter 7 we are introduced to the full-blown techniques for parsing and manipulating XML data using PHP, with in-depth coverage of the different parsing techniques such as SAX and DOM. Thomas covers the fundamentals of parsing before covering the specifics of generating, and parsing, the RSS/RDF information used in the syndication of web site data. The book then wraps up with coverage of web services, primarily XML-RPC, and the role of databases in the use and storage of XML data.

Again, throughout, we get information and examples on how we can apply these different areas into our content management system. The entire CMS code is included in Appendix B of the book, with Appendix A holding information on the functions included in PHP for XML processing.


I like the conversational tone that Thomas uses - he doesn’t talk down to you, and the concepts are introduced effectively through a good progressive style and cross-references to other sections of the book. The use of a common goal project - the content management system - is also an excellent way to ensure that as you read through the contents, you pick up more of the detail and capabilities of PHP for XML.

The format of the book is good too - code samples are clearly defined (although the large font is a bit distracting) and each code extract is handily tagged with the file name and whether the fragment is the entire file or simply an extract. For each fragment there is usually a step-by-step examination of the code and a description of what is going on.


Very occasionally the theory of the topic being discussed seems a bit short, almost rushed. As a practical guide this isn’t a problem, but for some readers a better understanding of the theory would help with adapting the practical contents. This shouldn’t detract, though, from what is an excellent hands-on guide to PHP and XML applications.


If you do any form of XML processing within PHP then this is the book you should keep on the shelf. Not only will it give you the background theory you need, the practical examples will become invaluable.

Rickford Grant, Linux Made Easy

Getting users to try Linux is only half the battle. The other half is showing them what they can achieve when using it. Linux Made Easy by Rickford Grant uses a task based approach to show how you can use Linux to perform your daily tasks; email, browsing, letter writing, even scanning and printing are covered in detail. I spoke to Rickford Grant about the book, why he chose Xandros and how the look and feel of the computing environment are more important to acceptance than the name on the box.

Linux Made Easy

The book highlights how easy it is to do your everyday tasks - email, writing letters, scanning documents - using Linux. How key do you think this is to the wider adoption of Linux?

I can’t help but think that it is extremely important. Until now, the image of Linux has been of a system for people on the geekier side of the compu-user spectrum, and I’d say the majority of books out there on the subject bear this out with their focus on networks, commands, and so on.

One of the reasons I wrote my first book, ‘Linux for Non-Geeks,’ and now ‘Linux Made Easy,’ was that most of the Linux books out there are so focused on that more geekish facet of Linux that it was hard to imagine a mere mortal having any reason to use Linux, let alone being able to do so. They certainly had that effect on me when I first got started.

As it stands now, the people who use Linux for its usefulness in specific applications, or who are in fact geeks, are, for the most part, already in the Linux fray. That being the case, if you are talking about expanding the user base, then you are talking about business folks and typical Windows or Mac home users. If you want to attract those people, especially the latter group, then it is going to have to be clear to them that Linux is just as point-and-click sweet as whichever system it is they are considering abandoning in its favor.

This is where I see books, such as mine, focusing on those ‘everyday tasks’ you mention, as being of great benefit in expanding the Linux user base, or at least that segment of it.

Why Xandros? Are there any other distributions you would recommend?

I picked Xandros because it seemed so very simple to deal with from installation of the system itself to installing additional applications. If someone wanted to keep their Windows system and create a dual boot setup, that was easy too. It also seems to handle odd hardware configurations pretty well.

Of course, there are some other good distros out there, but each of them has a limitation or two that I see as potentially problematic for a true newbie who, while interested in getting out of the Windows world, is not particularly interested enough in Linux per se to start geeking around in order to use it. Fedora Core and Mandrake/Mandriva, for example, are fine distros in their own right, but they have a few points that the target audience for my book might not care to bother with.

I hear good things about Ubuntu too, but I haven’t tried it myself yet, and I am sure that the non-graphical installer I hear it has will scare some folks away. After all, one of the big shocks to Windows users the first time they actually install Windows themselves (since most people get it pre-installed when they buy their computers) is that the installation process isn’t a completely graphical experience, nor is it an exceptionally easy one. I’d actually go so far as to say that most of the good Linux installers are far easier to deal with than that for Windows, and I have to say that the Xandros installer is about as easy as they come.

Do you think the wide range of distributions is a barrier to wider adoption, because it confuses new users, or because they make the wrong decision and get scared off?

It’s hard to say, as it can actually work both ways. I know when I started out with Linux, I tried a few distros without any success. That acted to turn me off the idea of trying any further - for a while, anyway.

Part of the problem might just have been in the timing, in that Linux was not really all that non-geek friendly in the old days. After some time, when I finally did get my first distro up and running off one of those live CDs (Knoppix was the one, to be exact), I was then delighted that there were so many other varieties out there for me to move on to. I could just keep plugging away until I found one or two that I thought did the trick for me – sort of looking for the perfect pair of shoes for a person with big feet (size 13, if you’re interested).

The fact that there are so many distros out there allows users to escape the one-size-fits-all world that exists for other operating systems. Users can pick a distro that fits their needs best according to whatever criteria they may have. Of course, not everyone is interested in going through that diddle-and-dump process, which could be viewed as a negative. Fortunately, however, these days there are a lot of distros available that are pretty easy to deal with and can thus be recommended to those with less of an experimental bent.

Where do you think CD solutions, such as Knoppix fit into the role of encouraging more Linux users?

Well, as I just mentioned, Knoppix certainly did the trick for me. Unfortunately, not all of these CD solutions are created equal, and I had quite a few that refused to cooperate. They also run a bit slower than the real thing, and that can act to turn some folks off no matter how many times you tell them that it’s slow because it’s being run from CD. First impressions and all, you know.

Still, I think that the pros win out over the cons in terms of turning people on to Linux, as these live-CD distros allow people to get a feel for how normal and absolutely graphical Linux is without their having to fear ruining their Windows system – a sort of safe-and-sane way to give it all a try.

There’s been a lot of discussion about the usability of Linux on the desktop in comparison to Windows. Do you feel that the major hurdles to making Linux easier to use have been overcome?

I really do think so. I don’t see how Linux is any more difficult than Windows in terms of typical home or office tasks. In fact, in many ways, especially in terms of settings and such, I think it is definitely easier. Same with the actual installation process with most distros. Oh, but I’ve already mentioned that.

Are there any gaps in the Linux sphere for desktop users? For example, does Linux work as an effective replacement for mobile as well as desktop users?

There are a few gaps, but for the most part, they don’t have a major effect on most users. One of these gaps, of course, would be a lack of easy support for certain MS specific formats, such as streaming media designed for Windows Media Player. Users cannot, for example, go to MSNBC, click one of the video links there, and watch it. At least, they can’t do it without some tinkering.

There is also the problem of peripherals. Linux has to play a game of catch up when it comes to drivers for new devices, and thus it takes a while before support for such devices makes it to Linux. While this isn’t a problem for most people, it can be for those folks who like to go to the computer store, buy whatever odd device they happen to see on the shelf, and then use it. Users have to be a bit more concerned with hardware support than they do with Windows. Mac users might be a bit more familiar with the problem.

There is also the desire by the entities behind many distros to keep things as Open Source as possible, and to avoid any possible licensing violations, which is all quite reasonable. Unfortunately, the reason for all this is lost on end users. All they can see is that things such as MP3 support and encrypted DVD playback support are lacking in many distros, which can act as a turn off for some.

One of the key points about the book for me was that you spend all but one of the chapters working in KDE and GUI apps, rather than dropping to the ‘complex’ command line. Was this a key target of the book - showing just how user friendly Linux could be at all levels?

Yes. Those moving into Linux from Windows or Mac OS - or complete compu-newbies, for that matter - are sure to see commands as off-putting and archaic. In fact, I was considering excluding that chapter altogether, but then I figured it had its uses in regard to Java apps and installing support for encrypted DVDs. Commands also give some folks the feeling that they are really ‘using’ a computer. Other than that, however, there isn’t really much call to resort to the command line, at least not in Xandros - or at least for the targeted audience of the book.

I have nothing against using commands personally, but they can really act to scare newbies away. And, after all, if you don’t really need them, why bother. That said, I thought it best to keep my discussion of the command line limited and to keep it last.

I get the feeling that you were scratching a particular itch with this book. Did you have a specific person in mind when writing it?

Yes, my long-time friend Steve in Los Angeles. He read my first book, was definitely interested, but ultimately not all that interested in going through the motions required to set up his own Linux system. I realized that while Linux for Non-Geeks was really a book for people interested in getting into Linux easily, there were still others who didn’t care all that much about Linux per se, and instead just wanted a really easy way out of the costlier Windows world. Xandros struck me as an ideal candidate for such people, and thus that is the audience towards whom I targeted Linux Made Easy.

I also tried to address some of his specific concerns, as well as others voiced in Amazon.com reader reviews and other such venues for my first book. The result is more coverage of how to do things with the various applications available in Xandros (and most other distros, for that matter).

It also seemed that a lot of people, once they have all these great pieces of software on their machine, have no idea of what they might use them for. I thus included some projects that readers can work through, which, in addition to showing them how to use the various apps, also serve to give people some idea of what they might consider doing with them. OpenOffice Draw is a good example. Lots of people can’t see any particular need for it, but I try to show them how to use it as a simple page layout application. I also try to provide greater coverage on how to deal with the more common varieties of peripheral devices.

Linux Made Easy strikes me as the sort of book that IT pros should keep on a shelf for distribution at will to users asking ‘do you think I should switch to Linux?’ Is that an example of where you’d like to see the book being used?

Well, as the author of the book, I’d like to see it on every shelf from Scunthorpe to Singapore, but, yes, that would be a good example of where it would be very useful. A pro could just hand it to an interested party to let them see that Linux needn’t be some mysterious freakish system out of the reach of the common man. I think it would work well as a text for workshops and the like too. And of course, I would hope that it would be something that would appeal to someone browsing the shelves at the bookstore.

Are you an exclusive Linux user?

Yes - well, almost. I do have one machine set up as a Windows/Linux dual-booter. I use the Windows side of that for basically two things: 1. to check things I need to refer to when writing these Linux how-to books, and 2. to play my favorite game – the Austrian card game, Schnapsen, which has yet to be ported over to Linux. I’m still crossing my fingers on that one.

Anything else emanating from your keyboard that we should know about?

I have all sorts of projects started, but I’m not sure which I will follow through with straight off. I suppose it depends on what publishers are willing to print. I am currently working on a newbie-friendly command book and thinking of an update to Linux for Non-Geeks, to name a couple of things. It’s hard to say what those will actually end up as though, in that things sometimes morph into totally different end products. I am also always working on a Japan experience book, but that is another ball of wax altogether.

Do you have a favourite author?

Wolfgang Hildesheimer has been a long-time fave, but then I suppose you’re talking about computer book authors, aren’t you? In terms of computer books, I don’t have a particular author that I would call a fave, but there are a lot of books that I think are rather good. As far as Linux books go, I think ‘Linux in a Nutshell’ and ‘How Linux Works’ are well worth having, though I wouldn’t necessarily recommend them for newbies.

Author Bio

Rickford Grant, author of Linux for Non-Geeks, has been a computer operating system maniac for more than 20 years, from his early days with an Atari 600XL to his current Linux machines. Grant spent the past seven years as an Associate Professor at Toyama University of International Studies in Japan before relocating to Wilmington, North Carolina.

Linux Made Easy, Rickford Grant

Linux Made Easy by Rickford Grant is a companion to his original Linux for Non-Geeks. Where the two differ is that this book is about how easy Linux can be for performing a myriad of tasks using a simple, skill-based approach. In this book, Rickford describes how to use Linux to do what you need to do: web browsing, sending email, basic document creation and using external peripherals like your printer, USB flash drive and scanner. In short, this book is about using Linux, from the perspective of ‘Your Average Joe’.

The book covers, and indeed includes, Xandros Open Circulation Edition, a Debian based distribution that just happens to include a number of key components for the target market, including OpenOffice, a key part of the toolkit required by many users to provide word processing and spreadsheet facilities.

Linux Made Easy

The contents

In consideration of the target audience, the book is a meaty, but not imposing, 450 pages, making it look substantial enough to keep potential readers interested and yet not so large as to make them think twice about buying a ‘professional’ book.

The book starts off with the usual background to Linux and some considerations before moving straight on to the installation of Linux on your machine. Individual steps are split into ‘projects’, so in the installation chapter we have projects for booting your machine, creating a boot disk and the actual installation. Consideration is even given to retaining your Windows partition in the process. Each project is a combination of discussion of what you are about to do, followed by a very detailed step-by-step account of the process, and then some follow up notes and information. To round off the first part, we get a quick introduction to the main parts of your new operating system.

By Part II we start getting into the meat of the operating system, first looking at how to connect to the Internet (a required step in the modern computing world) before moving on to basic file management, removable media, the control and customization options available through the Xandros Command Center, and finally how to keep your machine up to date through the Xandros network.

The next section concentrates more on using your machine for those day to day tasks, including printing, scanning, and connecting your digital camera and PDA. There’s lots of information here, from getting the equipment talking (including what to do when it doesn’t want to), through to actually scanning images, exchanging photos and PDA data. By the end of this section you should be up to speed in terms of duplicating the basic set up of a Windows environment and ready to start working and using your Xandros installation.

Part IV is all about typical applications and projects: listening to CDs, browsing the Internet and sending email, using OpenOffice and other day-to-day tasks. As with the rest of the book, there’s more here than you will find through a casual glance. The chapter on OpenOffice, for example, doesn’t just tell you how to open the various applications; you also get information on using them. Basic spreadsheet mechanics, including formulas and referencing cells, make up one of the projects. Although switchers may already know this, it’s nice to see that the book covers more than just the ‘use this application for that task’ approach.

By the end of the book, and Part V, we are into the material which the book itself acknowledges is geeky: the command line. Although it contains the usual range of command line utilities and handy hints for making the best of a shell, it’s interesting to note that the previous 19 chapters have been entirely based on using the X Windows interface and KDE. Again, this simply helps to show that Linux is a credible alternative to Windows and that you don’t have to be a geek to use it.


Throughout, Rickford’s writing style is free and easy flowing. Despite the heavy step-by-step approach, you never feel like you are being treated like an idiot. Rickford assumes readers are going to be proficient in using a computer, just not proficient in Linux. To add to the lighter feel, chapters have interesting titles and subtitles and there’s a lot of humor in the book. This in turn makes the book incredibly easy to read while containing a lot of information. Even for a long-time Linux user there’s a lot that can be learned from the book.

The choice and range of topics is also a massive bonus. This book is aimed entirely, and squarely, at those people who want to try Linux, not with the aim of simply toying with it, but with the specific aim of actually using it to do their day-to-day tasks.

There’s a surprising chapter on Linux gaming which only covers the standard games provided as part of Xandros, but it helps to show to people that Linux is more than just an Internet and business application machine and really can be used as a full time replacement for Windows.


To be honest, there really isn’t a great deal to work on when it comes to problems with the book. There are a few formatting and stylistic issues, but nothing major. Occasionally there are a few too many screenshots, and some of those provided seem superfluous, but for a user new to Linux these additional screens would be reassuring rather than annoying.


This book is not for people already familiar with Linux, but it is a book that could easily be distributed to new users. Overall, the book would be an ideal item to keep on the shelf and hand over to the next person who asks you what to do when they get fed up with Windows. In fact, I’m tempted to keep piles of the book for just this purpose.

Peter Wainwright, Pro Apache

Apache has been a stalwart of the Internet for some time. Not only is it well known as a web serving platform, but it also forms a key part of the LAMP (Linux-Apache-MySQL-Perl/Python/PHP) stack and is one of the best known open source projects. Getting an Apache installation right, though, can be tricky. In Pro Apache, Peter Wainwright hopes to help readers by using a task-based, rather than feature-based, approach. I spoke to Peter about Apache, its supported platforms, the competition from IIS and his approach to writing such a mammoth tome.

Inflammatory questions first – Unix or Windows for Apache?

Unix. To be more precise, BSD, then Linux, then almost anything else (e.g., commercial Unixes), then Windows — if you must.

The usual technical arguments and security statistics against using Windows are readily available from a number of sources, so let me give a rather different perspective: it seems Microsoft was in discussion to buy Claria, creators of Gator (one of the more annoying strains of adware that infest Windows desktops). Coincidentally, Microsoft’s beta ‘AntiSpyware’ tool recently downgraded Claria’s products from quarantine to ignore. It seems that the deal fell through, but for reasons of bad PR rather than any concern for the customer. Call me cynical if you like, but I see little reason to place my faith in a closed-source operating system when the vendor is apparently willing to compromise the security of its customers for its own business purposes. Yes, plenty of us already knew that, but this is an example even non-technical business managers can grasp.

Having said that, yes, there are reasons why you might be required, or find it otherwise preferable, to run Apache on a Windows server. For example, you might need to make use of a Windows-specific module or extension. Apache on Windows is perfectly viable – but given a free choice, go the open source route.

Do you prefer the text-based configuration, or the GUI based configuration tools?

Text-based every time. I don’t object to the use of a GUI outright, but if I can’t easily understand the generated configuration files by direct inspection afterwards, or can’t modify the configuration without upsetting the tool, I’ve just built a needless dependency on a tool when I would have been better off maintaining the text-based configuration directly. Using a tool is not a substitute for understanding the underlying configuration.

Too many administrators, I think, use the default configuration file without considering whether it might be better to create a much simpler and more maintainable configuration from scratch. I find an effective strategy for maintaining an Apache configuration is to divide it into several simple configuration files according to function – virtual hosting, access control, SSL, proxies, and so on – and then include them into one master configuration file. If you know what your website (or websites) will be doing, you can configure only those features. A simpler configuration, in turn, generally means fewer security issues to deal with.

The default configuration file, if I make use of it at all, becomes just one of the files included into the master configuration file that takes its place. Customisations go into their own files and override the defaults as necessary. This makes it very easy to see what configuration came pre-supplied and what was applied locally. It also makes it easy to update the default configuration as new releases of Apache come out, because there are no modifications in the file to carry across.
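As a sketch of that strategy, a master configuration file might contain nothing but Include directives pulling in the function-specific pieces (the file names here are purely illustrative):

```apache
# httpd.conf - master configuration; all real settings live in the included files
Include conf/defaults.conf    # the distribution-supplied defaults, unmodified
Include conf/vhosts.conf      # virtual hosting
Include conf/access.conf      # access control
Include conf/ssl.conf         # SSL
Include conf/proxy.conf       # proxy settings
Include conf/local.conf       # local overrides, included last so they take precedence
```

Because the defaults file is never edited, upgrading Apache means simply dropping in the new version of that one file; the local customisations carry across untouched.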

Can you suggest any quick ways to improve performance for a static site?

There are two main strategies for performance-tuning a server for the delivery of static content: finding ways to deliver the content as efficiently as possible, and not delivering the content at all, where possible. But before embarking on a long session of tweaking, first determine whether the load on the server or the available bandwidth is the bottleneck. There’s no point tuning the server if it’s the volume of data traffic that’s limiting performance.

Simple static content performance can be improved in Apache by employing tricks like memory-mapping static files or by caching file handles and employing the operating system’s sendfile mechanism (the same trick employed by kernel HTTP servers) to efficiently transfer static data to the client. Modules like Apache 1.3’s mod_mmap_static and Apache 2’s mod_file_cache make this easy to configure.
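In Apache 2's mod_file_cache, this amounts to listing the files to pre-open or memory-map (the paths here are illustrative):

```apache
LoadModule file_cache_module modules/mod_file_cache.so

# Cache an open file handle across requests; Apache can then hand the
# descriptor to the operating system's sendfile mechanism
CacheFile /var/www/html/index.html

# Memory-map a frequently requested file into the server's address space
MMapFile /var/www/html/images/logo.png
```

One caveat: the files are opened or mapped when the server starts, so Apache must be restarted to pick up any changes to them.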

At the platform level, many operating systems provide features and defaults out of the box that are not useful for a dedicated webserver. Removing these can benefit performance at no cost and often improve security at the same time. For instance, always shut down the mail service if the server handles no mail. Other server performance improvements can be gained by reducing the amount of information written to log files, or disabling them entirely, or disabling last access-time updates (the noatime mount option for most Unix filesystems).

If the limiting factor is bandwidth, look to trade machine resources for reduced data volume with strategies like compressing server responses with mod_gzip. Also consider the simple but often-overlooked trick of reducing the byte size of images (which compression generally won’t help with) that Apache is serving.
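mod_gzip is a third-party module for Apache 1.3; Apache 2 ships mod_deflate as its built-in counterpart, which a minimal configuration might enable like this:

```apache
LoadModule deflate_module modules/mod_deflate.so

# Compress textual responses on the fly; images are already compressed
# formats, so compressing them again wastes CPU for little gain
AddOutputFilterByType DEFLATE text/html text/plain text/css
```

The trade-off is exactly the one described above: CPU time spent compressing in exchange for fewer bytes on the wire.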

Arranging not to deliver the content can actually be easier, and this reduces both server loading and bandwidth usage. Decide how often the static content will change over time, then configure caching and expiration headers with mod_cache (mod_proxy for Apache 1.3) and mod_expires, so that downstream proxies will deliver content instead of the server as often as possible.

To really understand how to do this well, there is no substitute for an understanding of HTTP and the features that it provides. RFC 2616, which defines HTTP 1.1, is concise and actually quite readable as RFCs go, so I recommend that all web server administrators have a copy on hand (get it from www.w3.org/Protocols/HTTP/1.1/rfc2616.pdf). That said, it is easy to set expiry criteria for different classes of data and different parts of a site even without a firm understanding of the machinery that makes it work. Doing so will enable the site to offload content delivery to proxies wherever possible. For example, tell proxies that all (or most) of the site’s images are static and can be cached, but the text can change and should never be cached. It may happen that most of the text is also static, but since images are generally far larger, marking them as static provides immediate benefits with a very small amount of configuration.
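The images-cacheable, text-not policy described above takes only a few lines of mod_expires configuration (the expiry periods here are illustrative):

```apache
LoadModule expires_module modules/mod_expires.so
ExpiresActive On

# Images rarely change: let proxies and browsers cache them for a month
ExpiresByType image/gif  "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png  "access plus 1 month"

# Text may change at any time: expire it immediately so it is always revalidated
ExpiresByType text/html  "access plus 0 seconds"
```

Placed inside a Directory or VirtualHost block, the same directives can apply different policies to different parts of a site.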

Security is a key issue. What are the main issues to consider with Apache?

Effective security starts with describing the desired services and behaviour of the server (which means both Apache and the hardware it is running on). Once you know that, it is much easier to control what you don’t want the server to do. It’s hard to protect a server from unwanted attention when you don’t have a clear idea of what kind of attention is wanted.

I find it useful to consider security from two standpoints, which are also reflected in the book by having separate chapters. First is securing Apache itself. This includes not only the security-specific modules that implement the desired security policies of the server, but also the various Apache features and directives that have (sometimes non-intuitive) security implications. By knowing what features are required, you can remove the modules you don’t need.

Second, but no less important, is securing the server that Apache is running on. The security checklist in Pro Apache attempts to address the main issues with server security in a reasonably concise way, to give administrators something to start from and get them thinking in the right direction. One that’s worth highlighting is ‘Have an Effective Backup and Restore Process’ — it’s vital to know how to get your server back to a known state after a break-in, and being able to do so quickly will also stand you in good stead if a calamity entirely unrelated to security occurs, like a hard disc failure or the server catching fire (this actually happened to me). The ssh and rsync tools are very effective for making secure network backups and restores. They are readily available and already installed on most Unixes, so there’s no reason not to have this angle covered.

With the increased use of dynamic sites using PHP and Perl, how important and useful are functions like SSIs and rewriting which is built into Apache?

When designing a web application, use the right tool for each part of the job. Apache is good at handling connectivity and HTTP-level operations, so abstract these details from the application as far as possible. URL rewriting, which is just one of many kinds of request mapping, is one aspect of this. Similarly, don’t make a web application handle all its own security. Use Apache to handle security up front as much as possible, because it is expert at that and, if used properly, will prevent insecure or malicious requests from reaching the application. Unfortunately, rather too many web application developers don’t really understand web protocols like HTTP, and so build logic into the application that properly belongs in the server. That makes it more likely that a malicious request can find a weakness in the application and exploit it. It also means the application designers are not making use of Apache to its fullest potential.

Bear in mind that it is possible, with scripting modules like mod_perl, to plug handlers into different parts of the request-response cycle. Clever use of this ability allows a flexible modular design that is easier to adapt and less likely to create hidden security issues. Apache 2 also provides new and interesting ways to construct web applications in a modular fashion using filters. These features are very powerful, so don’t be afraid to exploit them.

I’ll admit to a fondness for Server Side Includes (SSIs). Even though they have been largely superseded by more advanced technologies, they are easy to use and allow for simple templating of static and dynamic content. Apache’s mod_include also knows how to intelligently cache static includes, so SSI-based pages are a lot faster than their basic mechanism would suggest, and without requiring any complex configuration. They’re a good choice for sites that have a lot of static content and need to incorporate a few dynamic elements.
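As a sketch of how simple SSI templating is (the filenames are illustrative; for Apache 2 the directory also needs `Options +Includes` and an `AddOutputFilter INCLUDES .shtml` directive):

```html
<!-- page.shtml: a shared header and footer plus one dynamic element -->
<!--#include virtual="/includes/header.html" -->
<p>You are visiting from <!--#echo var="REMOTE_ADDR" --></p>
<!--#include virtual="/includes/footer.html" -->
```

The two static includes can be cached by mod_include, while the `#echo` element is evaluated per request.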

Apache is facing an increasing amount of competition from Microsoft’s IIS, especially with the improvements in IIS 6.0. Ignoring the cost implications, what are the main benefits of Apache over IIS?

Trust. One of the reasons that Apache is a reliable, secure, and high-performance web server is that the Apache developers have these as end objectives. They’re not trying to sell you something. Having total flexibility to add or remove features, or to inspect and modify the code if necessary, is almost a bonus by comparison.

On a more technical note, an Apache-based solution is of course readily portable to other platforms, which ties into the choice of platform we started out with. Although there are always exceptions, if you think there’s a feature that IIS provides that Apache cannot — bearing in mind you can always run Apache on Windows — chances are you haven’t looked hard enough.

Pro Apache is a mammoth title — where do you start with something as complex as Apache?

Too many books on computing subjects tend to orient themselves around the features of a language or application, rather than the problems that people actually face, which is not much help unless you already have some idea of the answer in order to look it up. I try hard in Pro Apache to start with the problems, and then illustrate the various directives and configuration possibilities in terms of different solutions to those problems.

Even though there are a bewildering number of directives available, many of them are complementary, or alternatives to each other, or are different implementations of the same basic idea. For example, take the various aliasing and redirection directives, all of which are essentially variations on the same basic theme even if they come from different modules (chiefly, but not exclusively, mod_alias and mod_rewrite). Understanding how different configuration choices relate to each other makes it easier to understand how to actually use them to solve problems in general terms. A list of recipes doesn’t provide the reader with the ability to adapt solutions to fit their own particular circumstances.

I also try to present several different solutions to the same problem in the same place, or where that wasn’t practical, provide pointers to alternative or complementary approaches in other chapters. There’s usually more than one way to achieve a given result, and it is pretty unlikely, for example, that an administrator trying to control access through directives like BrowserMatch and RewriteRule will discover that SSLRequire is actually a general-purpose access control directive that could be the perfect solution to their problem. (SSLRequire is my favourite ’secret’ directive, because no one thinks to find a directive for arbitrary access control in an SSL module.)
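To illustrate the ‘secret’ directive, SSLRequire evaluates an arbitrary boolean expression over request variables, and nothing in the sketch below is actually SSL-specific (the location, addresses and hours are illustrative):

```apache
# General-purpose access control via mod_ssl's expression language:
# allow /private only from the internal network, during office hours.
<Location /private>
    SSLRequire %{REMOTE_ADDR} =~ m/^192\.168\./ and \
               %{TIME_HOUR} >= 9 and %{TIME_HOUR} < 18
</Location>
```

Requests that fail the expression are refused before they ever reach an application.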

Since many administrators are still happily using Apache 1.3, or have yet to migrate, the updates made to the first edition of Pro Apache (then called Professional Apache and published by Wrox) to cover Apache 2.0 do not separate coverage of the 1.3 and 2.x releases except where they genuinely diverge. The two versions are vastly more similar than they are different — at least from the point of view of an administrator — and in order to be able to migrate a configuration or understand the impact of attempting to do so, it was important to keep descriptions of the differences between the two servers tightly focused. To do this, coverage of the same feature under 1.3 and 2.x is presented on the same page wherever possible.

It seems unlikely considering the quality of the content, but was there anything you would have liked to include in the book but couldn’t squeeze in?

With a tool as flexible as Apache, there are always more problems to solve and ways to solve them than there is space to cover, but for the most part I am very happy with the coverage the book provides. Judging by the emails I have received, many people seem to agree. If there’s anything that would have been nice to cover, it would probably be some of the more useful and inventive of the many third-party modules. A few of the more important, like mod_perl, are covered by the last chapter, but there are so many creative uses to which Apache has been put that there will always be something for which there wasn’t the space or time.

What do you do to relax?

Strangely enough, even though I spend most of my working time at a computer, I’ve found that playing the odd computer game helps me wind down after a long day. I think it helps shut down the parts of my brain that are still trying to work by making them do something creative, but deliberately non-constructive. I recommend this strategy to others too, by the way; board games, or anything similar, work too.

To truly relax, I’ve found that the only effective technique is to go somewhere where I don’t have access to email, and determinedly avoid networks of any kind. I suspect this will cease to work as soon as mesh networks truly take hold, but for now it’s still the best option. It also helps that I have a wonderful, supportive wife.

What are you working on next?

Right now I’m gainfully employed and wielding a great deal of Perl at some interesting problems to do with software construction in the C and C++ arena. There’s been some suggestion that a book might be popular in this area, so I’m toying with that idea. I also maintain an involvement in commercial space activities, specifically space tourism, which has recently got a lot more popular in the public imagination (and about time too, some of us would say). That keeps me busy in several ways, the most obvious of which is the ongoing maintenance of the Space Future website at www.spacefuture.com.

Author Bio

Peter Wainwright is a developer and software engineer specializing in Perl, Apache, and other open-source projects. He got his first taste of programming on a BBC Micro and gained most of his early programming experience writing applications in C on Solaris. He then discovered Linux, shortly followed by Perl and Apache, and has been happily programming there ever since.

When he is not engaged in development or writing books, Wainwright spends much of his free time maintaining the Space Future website at www.spacefuture.com. He is an active proponent of commercial passenger space travel and cofounded Space Future Consulting, an international space tourism consultancy firm.

From Bash to Z Shell by Oliver Kiddle, Jerry Peek and Peter Stephenson

Note: This review was originally published in Free Software Magazine

If you use a free software operating system or environment, chances are one of your key interfaces will be through some kind of shell. Most people assume the bulk of the power of shells comes from the commands available within them, but some shells are actually powerful in their own right, with many of the more recent releases being more like command-line programming environments than simple command-line interfaces. “From Bash to Z Shell”, published by Apress, provides a guide to using various aspects of the shell, from basic command-line interaction through to the more complex processes of programming, touching on file pattern matching and command line completion along the way.

The contents

Shells are complicated – how do you start describing how to work with a shell without first explaining how the shell itself works, and don’t you end up showing readers how to use it in the process? The book neatly handles this problem in the first chapter with what must be the best description of a shell and how the interaction works that I’ve ever read.

This first chapter leads nicely into the first of three main sections. The initial section looks at using a shell, how to interact with the programs which are executed by the shell and how to use shell features such as redirection, pipes and command line editing. Other chapters look at job and process control, the shell interface to directories and files, as well as prompts and shell history.

The real meat of the book for me lies in the two main chapters in the middle that make up the second section. The first of these chapters is on pattern matching. Everybody knows about the basics of the asterisk and question mark, but both bash and zsh provide more complex pattern matching techniques that enable you to find a very specific set of files, which can simplify your life immensely. The second chapter is on file completion; press TAB and get a list of files that matches what you’ve started to type. With a little customization you can extend this functionality to also include variables, other machines on your network and a myriad of other potentials. With a little more work in zsh, you can adjust the format and layout of the completion lists and customize the lists according to the environment and circumstances.
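As a taste of what lies beyond the asterisk and question mark, here is a small sketch using bash’s extended globbing (zsh gets similar, and richer, forms via setopt extendedglob; the filenames are illustrative):

```shell
# Create a few sample files.
mkdir -p globdemo
touch globdemo/draft.txt globdemo/notes.txt globdemo/report.txt

# Match every .txt file EXCEPT those beginning with "report".
# The extglob option must be enabled for the !(...) form to parse.
bash -O extglob -c 'echo globdemo/!(report*).txt'
# prints: globdemo/draft.txt globdemo/notes.txt
```

One pattern like this regularly replaces a pipeline of ls, grep and xargs.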

The third and final section covers the progression of shell use from basic interaction to programming and extending the shell through scripts. Individual chapters cover the topics of variables, scripts and functions. The penultimate chapter puts this to good use by showing you how to write editor commands – extensions to zsh that enhance the functionality of the command line editor. Full examples and descriptions are given here on a range of topics, including my favourite: spelling correction.

The final chapter covers another extension for the command-line – completion functions. Both bash and zsh provide an extension system for completion. Although the process is understandably complex, the results can be impressive.

Who’s this book for?

If you use a shell – and let’s face it, who doesn’t – then the information provided in the book is invaluable. Everybody from system administrators through developers to plain old end users is going to find something in this book that will be useful to them.

Of all the target groups, I think the administrators will get the most benefit. Most administration involves heavy use of the shell for running, configuring and organizing your machine, and the tricks and techniques in this book will go a long way toward simplifying many of the tasks and processes that take up the time. Any book that can show you how to shorten a long command line from requiring 30-40 key presses down to fewer than 10 is bound to be popular.


The best aspect of the book is that it provides full examples, descriptions and reasoning for the different techniques and tricks portrayed. This elevates the content from a simple guide into an essential part of the user’s desktop reference library. The book is definitely not just an alternative way of using the online man pages.

The only problem – although it’s a good one – is that reading the book and following the tips and advice given becomes addictive. After you’ve customized your environment, extended your completion routines and enhanced your command-line once, you’ll forever find yourself tweaking and optimizing the environment even further.

Finally, it’s nice to see a handy reference guide in one of the appendices to further reading – much of it online, but all of it useful.


One of the odd things about the book is that the title doesn’t really reflect the contents. If you are expecting the book to be a guide to using a range of shells ‘From Bash to Z Shell’, as the name suggests, you’ll be disappointed. Sure, a lot of the material is generic and will apply to many of the shells in use today, but the bulk of the book focuses on just the two shells described in the title, which makes the title a little misleading.

Although I’m no fan of CDs in books, I would have liked to see a CD or web link to some downloadable samples from the book.

In short
Title: From Bash to Z Shell
Author: Oliver Kiddle, Jerry Peek and Peter Stephenson
Publisher: Apress
ISBN: 1590593766
Year: 2005
Pages: 472
CD included: No
Mark: 9

Eric S Raymond, Deb Cameron, Bill Rosenblatt, Marc Loy, Jim Elliott, Learning GNU Emacs 3ed

GNU Emacs has been the editor of choice for many users for many years. Despite new operating systems, environments and applications, emacs still has a place in the toolbox for both new and old users. I talked to the authors of Learning GNU Emacs, Third Edition: Eric S Raymond, Deb Cameron, Bill Rosenblatt, Marc Loy, and Jim Elliott about the emacs religion, nervous keyboard twitches and whether emacs has a future in an increasingly IDE driven world.

Well, I guess the answer to the age-old geek question of ‘emacs’ or ‘vi’ is pretty much covered with this book?

Jim Elliott (JJE): We pretty much start with the assumption that people picking up the book want to know about Emacs. I had fun following the flame wars for a while a decade ago, but we’ve moved on. Some of my best friends and brightest colleagues swear by vi.

Bill Rosenblatt (BR): I try not to get involved in theological arguments.

Deb Cameron (DC): Like all religious questions, you can only answer that for yourself.

Eric S. Raymond (ESR): Oh, I dunno. I think we sidestepped that argument rather neatly.

Marc Loy (ML): I think the other authors have chimed in here, but this book “preaches to the choir.” We don’t aim to answer that religious debate. We just want to help existing converts! Of course I think emacs! but I’m a bit biased.

Could you tell me how you (all) got into using emacs?

ESR: I go back to Gosling Emacs circa 1982 — it was distributed with the variant of 4.1BSD (yes, that was 4.*1*) we were using on our VAX. I was ready for it, having been a LISP-head from way back.

ML: During my first programming course at college, I went to the computer lab and sat down in front of a Sun X terminal. There were two cheat-sheets for editors: emacs and vi. They were out of the vi batch at the time. So I jumped head first into emacs. By the time they had the vi batch replenished, I was hooked and never looked back.

DC: At a startup in Cambridge where I worked, vi was the sanctioned editor. But Emacs evangelists were on the prowl, offering to teach the one true editor in private sessions. Support people threw up their hands in disgust as yet another one turned to Emacs, though this was too early for GNU Emacs. It was CCA Emacs. The only problem in my opinion was the lack of a book, like O’Reilly’s Learning vi. That gap was the impetus for writing this book.

JJE: I was introduced to the mysteries when I was a co-op intern at GE’s Corporate R&D Center in upstate New York, near my undergraduate alma mater, Rensselaer Polytechnic Institute. My mentor and colleagues handed me a cheat sheet and introductory materials, and I took to it like a fish to water, after getting over the initial learning curve. We were developing graphical circuit design software on SUN workstations, creating our own object-oriented extensions to C, since there was not yet a viable C++ implementation, never mind Java.

BR: I was working as a sysadmin in the mid-1980s at a software company that did a lot of government contract work. I was on a project that required relatively little of my time, so I had a lot of time on my hands. I had some exposure to emacs from a previous job, and I decided, rather than just doing crossword puzzles all day, to spend my time learning GNU Emacs as a means of learning LISP. I ended up contributing some code to GNU emacs, such as the first floating point arithmetic library.

Emacs uses a distinctive keyboard control mechanism (C-x C-s for save, for example). Do you think this is one of the reasons why many find emacs confusing?

ML: Certainly! But for those that can get past this (large) initial hurdle, I think the keyboard controls increase general productivity. The amount of text manipulation I can do all while “touch typing” in emacs has always impressed me.

DC: I think new users might find Emacs confusing either by reputation or because they don’t have this book or haven’t tried the tutorial. C-x C-s is like any finger habit, easy to acquire and with Emacs, easy to change if you so desire, even if you’re not a LISP hacker. And cua mode lets you use more common bindings easily if your fingers aren’t already speaking Emacs natively.

JJE: Undoubtedly. That’s a big part of the learning curve. But it’s much less of a problem than it used to be, now that keyboards have so many extra keys (like separate, reliable arrow keys, page movement keys, and the like). And, even more importantly, there is now by default a visible menu bar and icons to fall back on until you learn the more-efficient keyboard commands. Old hands will remember how much of a nightmare the heavy use of control characters (especially C-s and C-q) used to be, when using modems to dial in through text terminal servers. These almost always interacted poorly with the terminal server’s flow control, and there were usually a couple of other problem keystrokes too. Now that we’re all using TCP/IP and graphical environments, people have it easy!

BR: It tends to divide the programmers from the nonprogrammers. Programmers tend to think that control keys are more, not less, intuitive than using regular letters and numbers like vi. But then maybe I’m just showing signs of religious bigotry.

ESR: Probably the biggest single one, at least judging by the way my wife Cathy reacts to it.
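As DC notes above, those finger habits are easy to change without being a LISP hacker. A couple of illustrative lines for a ~/.emacs file are all it takes (the F5 binding here is just an example):

```elisp
;; ~/.emacs fragment: friendlier bindings without any LISP hacking
(cua-mode t)                                ; C-x/C-c/C-v/C-z cut and paste as in most apps
(global-set-key (kbd "<f5>") 'save-buffer)  ; retrain the C-x C-s habit if you like
```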

Emacs is something of a legend - how much longer can we expect to see emacs as a leading editor and environment; especially when compared to IDEs like Eclipse?

ML: That’s an excellent question. I doubt it will ever disappear, but I do see it losing ground to focused IDEs. For example, I use Eclipse for my Java programming, but I have it set to use emacs keyboard shortcuts.

DC: Emacs offers infinite flexibility and extensibility. Nothing else offers that. As long as there are hackers, there will be Emacs.

ESR: There will always be an Emacs, because there will always be an ecological niche for an editor that can be specialized via a powerful embedded programming language.

JJE: To elaborate on ESR’s response, editors like Eclipse and JEdit give you powerful and flexible customization through Java, and tend to ship with better basic support for advanced language features and refactoring operations, and it’s easy to look a lot better than Emacs at first glance. But there isn’t anything that compares to its breadth, and how amazingly quickly and flexibly you can extend it if you want to. That’s something that comes from using LISP. You really can’t beat it for deep, dynamic control and power. (And I hope readers unfamiliar with LISP will take the opportunity Emacs gives to explore and learn it; the exercise of becoming a competent LISP hacker is extremely valuable in developing deep programming skills.) I use Eclipse for editing Java, but I use Emacs for most everything else.

BR: I think there will always be a role for emacs, because of its extensibility and the fact that visual programming environments are largely cosmetic rather than substantive improvements over character-oriented ones. The day when visual programming languages (as opposed to those written with ascii characters) become popular is the day when emacs will possibly become obsolete. There’s little better evidence of emacs’s longevity than the fact that you are interviewing us for a book that was originally written about 15 years ago (in fact, I am somewhat amazed that you are doing this). There are very few tech books that have been around that long. It’s because of the longevity of the software.

I find it pretty hard - and I’ve been using emacs for 15 years - to find something that emacs can’t do; is there anything that you think should be supported by emacs but currently isn’t?

JJE: The Unicode support is still very rough-edged, given the wrong approaches that were originally taken. It’s hard to work with Asian alphabets, and XML documents with mixed alphabets, without getting a little nuts. But that’s something that rarely affects me.

ML: Jim Elliott mentioned the Unicode support. Being a Java programmer, I sorely miss that feature. In every other regard, I continue to be surprised by what emacs can do or be taught to do. I suppose the quantity of .emacs files and chunks of LISP code out there is a testament to the stability of this editor.

DC: There are things I’d like to see, but what you find is they’re in the works. An easier approach to character encoding is one, and that’s coming in Emacs 22.

ESR: Not since tramp.el got integrated and made remote editing easy. That was the last item on my wishlist.

Do you think it odd that certain parts of the functionality are only available through shell commands – spelling, for example? Should these be embedded to help emacs become even more of a one-stop shop?

ML: Well, I’ve never used emacs for writing prose, so those shell-escaped features never got in my way. Features like spelling certainly would be welcome, but I don’t think that has a big influence on the folks who are picking up emacs–certainly not on the folks who continue to use it.

ESR: No opinion. I laugh at spellcheckers, and I’m not sure what other things fall in this category.

DC: Spellchecking is embedded now with ispell and flyspell.

JJE: I think we show that ispell does a really good job of deeply integrating the spell-checking process into the Emacs experience. There’s no reason not to take advantage of external processes for things like this. That’s always been the Emacs (and Unix) philosophy; don’t reinvent things that you can leverage instead.

BR: I think that’s really just a question of demand. If people want spell checking as a built-in command, it’s pretty easy to extend emacs to make that happen through the ability to pipe text through a process.
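The ispell and flyspell integration the authors describe takes only a line or two of configuration, for example (a sketch; it assumes the external aspell checker is installed):

```elisp
;; ~/.emacs fragment: on-the-fly spell checking via an external process
(setq ispell-program-name "aspell")        ; leverage, don't reinvent
(add-hook 'text-mode-hook 'flyspell-mode)  ; underline misspellings as you type
```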

Emacs has itself become either the source or inspiration of a few other GNU projects (GNU info, for example). Do you see this as a dilution or an endorsement of the technology built into emacs?

ML: I see it as an endorsement, definitely.

ESR: An endorsement, fairly obviously.

DC: An endorsement, of course. Emacs is the grandaddy of ‘em all.

JJE: Endorsement, definitely! You can’t be sure something is useful until it’s been reused at least three times.

BR: Certainly it’s an endorsement. GNU emacs contains a lot of code that is quite useful elsewhere. One example of this is the process checkpointing routine (unexec). I wrote an article, about a zillion years ago in a short-lived journal called SuperUser, about interesting uses for unexec.

Emacs is something of a behemoth compared to solutions like vi and nano. Do you think this makes new users - of Linux particularly - loath to use it, when it’s often not included as part of the basic installation tool set (for example, on Gentoo and others)?

ML: I’m sure it has an effect on new users. But vi isn’t a piece of cake, either! The new folks that I have seen picking up emacs are doing it to follow others they see using and enjoying it. They go looking for it. If it’s not installed, that simply adds one step to the process–a step we cover in the book for several platforms, by the way.

DC: Once upon a time Emacs was the only behemoth, but now that’s pretty common and the build process is easy for Linux if it’s not included or if the version included isn’t the latest. There are easy installs for other platforms too, so you can use Emacs no matter what platform you might be (forced into) using at the moment. I run it on three platforms.

JJE: There used to be some truth to this criticism, remember the old jokes about what Emacs stood for, like “Eight Megs And Constantly Swapping”? But the rest of the computing world has long ago swept by. Emacs is now tiny and tight compared to much software people encounter. Have you looked at Word lately?

ESR: Don’t ask me to think like a new user; I’m afraid I’m far too aged in evil for that.

BR: Perhaps, yes.

Does everybody here have the same nervous C-x C-s twitch while working in other non-emacs editors that I do?

ML: Daily! That’s why I had to switch the shortcuts in Eclipse.

ESR: Heh. No. Actually, I have both emacs and vi reflexes in my fingers, and I almost never cross up in either direction.

DC: Well, Emacs is so good at saving your work in a pinch that I get nervous only if I’m using something else.

JJE: I only tend to get tripped up when I encounter environments people have set up where one editor is trying to pretend to be another. Usually the context is enough for me to reach for the right keys. One thing I very much enjoy about Mac OS X is the way that the standard Apple frameworks used to build modern applications (the ones that came from NeXTStep) all support basic Emacs key bindings.

You say at the start that you weren’t able to include everything you wanted – emacs includes its own programming language, for example, which you only touch on – but is there anything that didn’t make it into the book that you really, really wanted to include?

ML: I think we managed to cover all of my big ticket items. I’m really happy with the coverage provided for folks learning Emacs. I still use it myself for reminders on the .emacs configuration and font control.

DC: Probably what I would have most liked to include and couldn’t in this rev was Emacspeak, the voice interface to Emacs.

JJE: Deb was the primary driving force behind what got into the third edition.

BR: More on LISP and extensibility, certainly. We had to stick to the fundamentals and only take it so far.

The logistics of five authors for one book must have been interesting?

ML: Actually, with Deb Cameron managing things, it was quite simple. She did a fantastic job–and did a majority of the new work in this edition herself. Jim Elliott and I both worked with her on the second edition of the Java Swing book and had no trouble jumping in to help her finish this book.

ESR: No, it was like one of those album sessions you read about where through the magic of multi-track recording the band members never actually have to be in the same studio at the same time. I only wrote two chapters, fairly cleanly separated from the rest of the book, and never had to interact with the other four authors much.

JJE: It worked very well; Deb’s great at coordinating this sort of thing, and she, Marc and I had worked together in the past on the Java Swing effort.

BR: Well, we were brought on at different times to do different pieces of the book, so it wasn’t a big deal. I wrote roughly the last half of the first edition; the only other author at the time was Deb Cameron. The other authors came along later.

Are any of you working on any new titles we should keep our eyes peeled for?

ML: I’m happily on a writing hiatus, but that never seems to last long.

DC: I’m editing the latest edition of O’Reilly’s Java Enterprise in a Nutshell, a revolutionary revision of that book that includes the best of open source tools in addition to the standard stuff. Check out this article.

JJE: I know that Hibernate: A Developer’s Notebook needs to be revised to cover Hibernate 3. I am hoping to find time to do that this summer or fall, but it’s been a hard year so far because of some health issues in my family. I miss writing! But other things are sometimes more important.

BR: I currently write a newsletter on Digital Rights Management called DRM Watch (www.drmwatch.com). It’s published by Jupitermedia, and it’s a free weekly email subscription. It provides balanced coverage of the subject; I’m somewhere to the left of Big Media and to the right of the EFF.

ESR: I’m going to do a fourth edition of “The New Hacker’s Dictionary” sometime soon.

Author Bios

Marc Loy

Marc Loy is a trainer and media specialist in Madison, WI. When he’s not working with digital video and DVDs, he’s programming in Java. He can still be found teaching the odd Perl and Java course out in Corporate America, but even on the road he’ll have his PowerBook and a video project with him.

James Elliott

James Elliott is a senior software engineer at Berbee, with fifteen years professional experience as a systems developer. He started designing with objects well before work environments made it convenient, and has a passion for building high-quality Java tools and frameworks to simplify the tasks of other developers.

Bill Rosenblatt

Bill Rosenblatt is president of GiantSteps Media Technology Strategies, a New York-based management consulting firm whose clients include content providers and media technology companies ranging from startups to Fortune 100 firms.

Bill’s other titles for O’Reilly are Learning the Korn Shell and (with Cameron Newham) Learning Bash. He is also the author of Digital Rights
Management: Business and Technology (John Wiley & Sons) and editor of the Jupitermedia newsletter DRM Watch (www.drmwatch.com).

Debra Cameron

Debra Cameron is president of Cameron Consulting. In addition to her love for Emacs, Deb researches and writes about emerging technologies and their applications. She is the author of Optical Networking: A Wiley Tech Brief, published by John Wiley & Sons, which covers the practical applications and politics of optical networking.

Deb also edits O’Reilly titles, including Java Enterprise in a Nutshell, Java in a Nutshell, JavaScript in a Nutshell, Essential SNMP, Cisco IOS in a Nutshell, TCP/IP Network Administration, Java Security, Java Swing, Learning Java, and Java Performance Tuning.

Eric S Raymond

Eric is an Open Source evangelist and author of the highly influential paper “The Cathedral and the Bazaar.” He can be contacted through his website.

Linux in a Windows World by Roderick Smith

Note: This review was originally published in Free Software Magazine
Linux in a Windows World aims to solve the problems experienced by many system administrators when it comes to using Linux servers (and to a lesser extent clients) within an existing Windows environment. Overall, the book is meaty, and a quick flick through shows an amazing amount of information has been crammed between the covers. There are, though, some immediately obvious omissions, given the book’s title and description, but I’m hoping these won’t detract from the rest of the content.

The contents

The book starts off with a look at where Linux fits into a Windows network, covering its use both as a server and desktop platform. Roderick makes some salient points and arguments here, primarily for, rather than against, Linux but he’s not afraid to point out the limitations either. This first section leads on to a more in-depth discussion of deploying a Linux system into your network, promoting Linux in a series of target areas – email serving, databases and so on – as well as some strategies for migrating existing Windows desktops to Linux.

The third chapter, which opens the second section, starts to look in detail at the various systems and hurdles faced when using Linux within an existing, heavily Windows-focused environment. This entire section is primarily devoted to Samba and to sharing, and making use of, shared files and printers.

Section 3 concentrates on centralized authentication, including using LDAP and Kerberos in place of the standard Windows and Linux solutions.

Remote login, including information on SSH, Telnet and VNC, makes up the content of the fourth section. Most useful among the chapters is the one on remote X access, which provides vital information on X server options for Windows, and on configuring XDMCP for session management.

The final section covers the installation and configuration of Linux-based servers for well-known technologies such as email, backups and network management (DNS, DHCP, etc.).

Who’s this book for?

Overall, the tone of the book is geared almost entirely towards administrators deploying Linux as a server solution and migrating Windows clients to use that Linux server. The “integration” focus of the book concentrates on replacing Windows servers with Linux equivalents, rather than integrating Linux servers and clients into an existing Windows installation.

All these gaps make the book a “Converting your Windows World to Run on Linux Servers” title, rather than what the book’s title (and cover description) suggests. If you are looking for a book that shows you how to integrate your Linux machines into your Windows network, this book won’t help as much as you might have hoped.

On the other hand, if you are a system administrator and you are looking for a Windows to Linux server migration title then this book will prove invaluable. There are gaps, and the book requires you to have a reasonable amount of Linux knowledge before you start, but the information provided is excellent and will certainly solve the problems faced by many people moving from the Windows to a Linux platform.


There’s good coverage here of a wide range of topics. The information on installing and configuring Linux equivalents of popular Windows technologies is very nice to see, although I would have preferred some more comparative information on how the Windows and Linux counterparts work and how you operate each.

Some surprising chapters and topics also shine through. It’s great to see the often forgotten issue of backups getting a chapter of its own, and the extensive information on authentication solutions is invaluable.


I found the organization slightly confusing. For example, Chapter 3 is about using Samba, but only to configure Linux as a server for sharing files. Chapter 4 then covers sharing your Linux printers to Windows clients. Chapter 6 then covers the use of Linux as a client to Windows for both printer and file shares. Similarly, there is a chapter devoted to Linux Thin Client configurations, but the use of rdesktop, which interfaces to the Windows Terminal Services system, has been tacked on to the end of a chapter on using VNC.

There are also numerous examples of missed opportunities and occasionally misleading information. Windows Server 2003, for example, has a built-in Telnet server and incorporates an extensive command-line environment and suite of administration tools, but the book fails to acknowledge this. There’s also very little information on integrating application-level software, or on the client-specific integration between a Linux desktop and a Windows server environment. A good example here is the configuration of Linux mail clients to work with an existing Exchange Server, which is quite happy to work with standard IMAP clients. Instead, the book suggests you replace Exchange with a Linux-based alternative, and even includes instructions for configuring that replacement.

Finally, there are quite a few obvious errors and typos – many of which are in the diagrams that accompany the text.

In short
Title: Linux in a Windows World
Author: Roderick W Smith
Publisher: O’Reilly
ISBN: 0596007582
Year: 2005
Pages: 478
CD included: No
Mark: 8

Joseph D Sloan, High Performance Linux Clusters

Getting the best performance today relies on deploying high-performance clusters, rather than single-unit supercomputers. Building clusters can be expensive, but using Linux can be a cheaper alternative and makes it easy to develop and deploy software across the cluster. I interview Joseph D Sloan, author of High Performance Linux Clusters, about what makes a cluster, how Linux clusters compete with Grid and proprietary solutions, and how he got into clustering technology in the first place.

High Performance Linux Clusters

Clustering with Linux is a current hot topic - can you tell me a bit about how you got into the technology?

In graduate school in the 1980s I did a lot of computer-intensive modeling. I can recall one simulation that required 8 days of CPU time on what was then a state-of-the-art ($50K) workstation. So I’ve had a longtime interest in computer performance. In the early 1990s I shifted over to networking as my primary interest. Along the way I set up a networking laboratory. One day a student came in and asked about putting together a cluster. At that point I already had everything I needed. So I began building clusters.

The book covers a lot of material - I felt like the book was a complete guide, from design through to implementation of a cluster - is there anything you weren’t able to cover?

Lots! It’s my experience that you can write a book for beginners, for intermediate users, or advanced users. At times you may be able to span the needs of two of these groups. But it is a mistake to try to write for all three. This book was written to help folks build their first cluster. So I focused on the approach that I thought would be most useful for that audience.

First, there is a lot of clustering software that is available but that isn’t discussed in my book. I tried to pick the most basic and useful tools for someone starting out.

Second, when building your first cluster, there are things you don’t need to worry about right away. For example, while I provide a brief description of some benchmarking software along with URLs, the book does not provide a comprehensive description of how to run and interpret benchmarks. While benchmarks are great when comparing clusters, if you are building your first cluster, to what are you going to compare it? In general, most beginners are better off testing their cluster using the software they are actually going to use on the cluster. If the cluster is adequate, then there is little reason to run a benchmark. If not, benchmarks can help. But before you can interpret benchmarks, you’ll first need to know the characteristics of the software you are using: is it I/O intensive, CPU intensive, etc.? So I recommend looking at your software first.

What do you think the major contributing factor to the increase of clusters has been; better software or more accessible hardware?

Both. The ubiquitous PC made it possible. I really think a lot of first-time cluster builders start off looking at a pile of old PCs wondering what they can do with them. But, I think the availability of good software allowed clusters to really take off. Packages like OSCAR make the task much easier. An awful lot of folks have put in Herculean efforts creating the software we use with very little thought to personal gain. Anyone involved with clusters owes them a huge debt.

Grids are a hot topic at the moment, how do grids - particularly the larger toolkits like Globus and the Sun Grid Engine - fit into the world of clusters?

I’d describe them as the next evolutionary stage. They are certainly more complex and require a greater commitment, but they are evolving rapidly. And for really big, extended problems, they can be a godsend.

How do you feel Linux clusters compare to some of the commercially-sourced, but otherwise free cluster technology like Xgrid from Apple?

First, the general answer: While I often order the same dishes when I go to a restaurant, I still like a lot of choices on the menu. So I’m happy to see lots of alternatives. Ultimately, you’ll need to make a choice and stick to it. You can’t eat everything on the menu. But the more you learn about cooking, the better all your meals will be. And the more we learn about cluster technology, the better our clusters will be.

Second, the even more evasive answer: Designing and building a cluster requires a lot of time and effort. It can have a very steep learning curve. If you are already familiar with Linux and have lots of Linux boxes, I wouldn’t recommend Xgrid. If you are a die-hard Mac fan, have lots of Mac users and systems, Xgrid may be the best choice. It all depends on where you are coming from.

The programming side of a grid has always seemed to be the most complex, although I like the straightforward approach you demonstrated in the book. Do you think this is an area that could be made easier still?

Thanks for the kind words. Cluster programming is now much easier than it was a decade ago. I’m a big fan of MPI. And while software often lags behind hardware, I expect we’ll continue to see steady improvement. Of course, I’m also a big fan of the transparent approach taken by openMosix and think there is a lot of unrealized potential here. For example, if the transparent exchange of processes could be matched by transparent process creation through compiler redesign, then a lot more explicit parallel programming might be avoided.

What do you think of the recent innovations that put a 96-node cluster into a deskside case?

The six-digit price tag is keeping me out of that market. But if you can afford it and need it …

Go on, you can tell me, do you have your own mini cluster at home?

Nope, just an old laptop. I used to be a 24/7 kind of computer scientist, but now I try to leave computing behind when I go home. Like the cobbler’s kids who go without shoes, my family has to put up with old technology and a husband/father who is very slow to respond to their computing crises.

When not building clusters, what do you like to do to relax?

Relax? Well my wife says …

I spend time with my family. I enjoy reading, walking, cooking, playing classical guitar, foreign films, and particularly Asian films. I tried learning Chinese last year but have pretty much given up on that. Oh! And I do have a day job.

This is your second book - any plans for any more?

It seems to take me a couple of years to pull a book together, and I need a year or so to recover between books. You put so many things on hold when writing. And after a couple of years of not going for a walk, my dog has gotten pretty antsy. So right now I’m between projects.

Author Bio

Joseph D. Sloan has been working with computers since the mid-1970s. He began using Unix as a graduate student in 1981, first as an applications programmer and later as a system programmer and system administrator. Since 1988 he has taught computer science, first at Lander University and more recently at Wofford College where he can be found using the software described in this book.

You can find out more on the author’s website. More information on the book, including sample chapters, is available at O’Reilly.

Tom Jackiewicz, Deploying OpenLDAP

OpenLDAP is the directory server of choice if you want a completely free and open source solution to the directory server problem. Tom Jackiewicz is the author of Deploying OpenLDAP, a title that aims to dissolve many of the myths and cover the mechanics of using OpenLDAP in your organization. I talked to him about his book, his job (managing OpenLDAP servers), and what he does when he isn’t working on an LDAP problem.

Deploying OpenLDAP

Could you summarize the main benefits of LDAP as a directory solution?

There are many solutions to every problem. Some solutions are obviously better than others and they are widely used for that reason. LDAP was just one solution for a directory implementation. Some people insist that Sony’s BetaMax was a better solution than VHS–unfortunately for them, it just didn’t catch on. The main benefit of using LDAP as a directory solution is the same reason people use VHS now. There might be something better out there but people haven’t heard of it, therefore it gets no support and defeats the idea of having a centralized directory solution in place. Bigger and better things out there might exist but if they stand alone and don’t play well with others, they just don’t fit into the overall goals of your environment.

If you deploy any of the LDAP implementations that exist today, you instantly have applications that can tie into your directory with ease. For this reason, what used to be a large-scale integration project becomes something that can actually be accomplished. I’m way into standards. I guess LDAP was simple enough for everyone to implement and just caught on. If LDAP existed in the same form it does today but another directory solution was more accepted, maybe I’d be making arguments against using LDAP.

Please read the rest of the interview at LinuxPlanet.

Patrick Koetter, Ralf Hildebrandt, The Book of Postfix

Postfix is fast becoming a popular alternative to sendmail. Although it can be complex to configure, it’s easier to use Postfix with additional filtering applications, such as spam and virus filters, than with some other mail transfer agents. I spoke to Patrick Koetter and Ralf Hildebrandt about The Book of Postfix, the complexities of configuring Postfix, spam, and email security.

The Book of Postfix

How does Postfix compare to sendmail and qmail?

Ralf Hildebrandt (RH): As opposed to sendmail, Postfix was built with security in mind.

As opposed to qmail, Postfix was built with real-life systems in mind, systems that have to adapt to the hardships of today’s Internet. qmail is effectively unmaintained.

Patrick Koetter (PK): That’s a tough question, because I am not one of those postmasters who spent half their life working with Eric Allman’s Sendmail, nor did I spend much time enlarging my knowledge of qmail, so I can’t give you an in-depth answer that really tackles specific features or functionality.

Let me give it a different spin and see if this answers it:

When I set out to run my own first mail server, I looked at Sendmail, qmail and Postfix.

Sendmail to me was too complicated to configure: my knowledge of the M4 macro language was very limited, while my fear of losing e-mail, or even of configuring my server as an open relay, was large, so I dropped it. The ongoing rally of CERT advisories about this or that Sendmail exploit at the time didn’t make it a hard choice.

Then I took a look at qmail, but wasn’t really sure I wanted it, because it is more or less a series of patches if you want to use it with today’s feature range. But I gave it a try anyway, and ended up asking some questions on the mailing list because the documentation would not answer what I was looking for.

To cut it short: I was under the impression you had to enter the “Church of qmail” before anyone would take the time to answer a question to a qmail novice. It might have changed since then, but back then I left and I never looked back because all I wanted was to run a MTA.

Finally I took a look at Postfix and was very surprised by the amount of documentation that was available. I also immediately fell in love with the configuration syntax, which seemed so simple and clear to me. For a while I thought this must be a very feature-limited MTA, but the more I read the more I understood that it did almost the same things, but was simply easier to configure.

I finally decided to stick with Postfix after I had joined the Postfix mailing list and found out that people really cared about my questions, pointed me to documentation to read again, or gave me advice on how to do this or that more efficiently.

Of course, as the Postfix community grew larger, one or the other character turned up who would rather lecture someone seeking help, but the overall impression still remains the same.

Postfix is well maintained, its security record is unbeaten up to now, and the community is how I wish every community supporting a piece of software would be. The modular software architecture Wietse Venema has chosen makes it easy to expand Postfix’s capabilities. It’s a system that can grow very well. I haven’t seen another piece of software that does the complex job of being an MTA that well.

Postfix seems a little complex to install - there are quite a few configuration files, some of which seem to contain arcane magic to get things working. Is this a downside to the application?

PK: That’s the provoking question, isn’t it? ;)

To me Postfix is as simple or complex as the process of mail transport itself. That’s why we added so many theory chapters to the book, explaining the e-mail handling process before we set out to explain how Postfix does it in the follow-up chapter. If you understand the process, it’s pretty straightforward to configure Postfix to deal with it.

But basically all you need are three files: main.cf, master.cf and the aliases file. Wait! You could even remove the main.cf file and Postfix would work with reasonable defaults on that specific server.

The main.cf file carries all parameters that are applied globally. If you need options that are specific to a special daemon and should override global options from main.cf, you add them in master.cf in the context of that special daemon. That’s the basic idea of configuring Postfix.

Then there are a lot of tables in the /etc/postfix directory, which you usually don’t need unless you set out to configure a specific feature that isn’t part of the basic functionality.

Sure, the number of tables might frighten a novice, but they are there for the sole purpose of supporting novices and even advanced users, because they hold the documentation about what each table is for and how you would add entries to it if you wanted to use it.

The rest is complexity added by additional software, for example Cyrus SASL which is a royal pain for beginners.

Of course your mileage will vary when you set out to configure a full-blown MTA that incorporates anti-spam measures, anti-virus checking, SMTP authentication and Transport Layer Security, where Postfix looks up recipient names and other information from an LDAP server that also drives an IMAP server.

But when you begin it boils down to the two configuration files and an aliases file.
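To illustrate just how small that starting point is, here is a minimal main.cf sketch. This is not from the book; all hostnames and networks are placeholder examples, and every parameter shown has a sensible default if you leave it out:

```ini
# /etc/postfix/main.cf -- minimal sketch; all values are placeholders
myhostname    = mail.example.com
mydomain      = example.com
myorigin      = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks    = 127.0.0.0/8
alias_maps    = hash:/etc/aliases
```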

As for the “arcane magic”, I don’t know exactly what you’re referring to, so I’ll speculate based on my own experiences.

I struggled with smtpd_*_restrictions for quite a while until I realized: “It’s the mail transport process that makes it so hard to understand.” Once you’ve understood how an SMTP dialog should be processed, it suddenly seems very simple. That is at least what happened to me. I recall hours sitting in front of these restrictions, Ralf ripping the hair out of his head and looking at me as if I was from another planet.

The quote we used in the restrictions chapter alludes to that day and it also contains the answer I came up with: “To know what to restrict you need to know what what is.” I looked the “what” parts up in the RFCs, understood what smtpd_*_restrictions were all about and saved Ralf from going mad ;)

But that’s specific to smtpd_*_restrictions. For all other parameters and options it pays to read RFCs as well, but you can get very far by reading the excellent documentation Wietse has written _and_ by looking at the mere names he used for the parameters. Most of the time they speak for themselves and tell you what they will do. I think Wietse has done a great job at thinking of catchy self-explanatory parameter names.
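For readers puzzling over the same parameters, here is a hedged example of what a recipient restriction list typically looks like. The restriction names are standard Postfix, but the exact list is an illustration of the idea (ordering matters: permit rules for trusted clients come before the catch-all reject), not a policy recommendation:

```ini
# main.cf -- illustrative only; Postfix does not allow comments inside
# a parameter value, so the annotations stay up here:
#   permit_mynetworks          lets trusted local networks relay
#   permit_sasl_authenticated  lets authenticated users relay
#   reject_unauth_destination  blocks relaying for everyone else
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_non_fqdn_recipient,
    reject_unknown_recipient_domain
```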

RH: Postfix works with the default main.cf and master.cf. If you have advanced requirements, the configuration can get elaborate. But configuration files like the ones I created, and also offer at http://www.stahl.bau.tu-bs.de/~hildeb/postfix/, have evolved over several years of use (and of abuse of the Internet by spammers). I never thought “That’s the way to do it”; it was rather trial and error.

Postfix seems to work exceptionally well as a mail transport agent - i.e. one that operates as an intermediate relay or relayhost (I’ve just set up a Postfix relay that filters spam and viruses, but ultimately delivers to a sendmail host, for example). Is this because of the flexible external interface Postfix uses?

RH: It also works extremely well as a mailbox host :) Over the years, Wietse added features for content filtering and the ability to specify maps that tell the system which recipient addresses should be accepted and sent on further inwards.

That makes it easy to say “Instead of throwing away our old Product-X server, we simply wedge Postfix in between”

But there’s no special preference for the “intermediate relay” role. It’s a universal MTA. We use it everywhere: for the server handling the mailboxes, and for our list exploder too.

Do you have a preferred deployment platform for postfix?

PK: Basically I go for any platform that suits the needs. As for Linux, I prefer distributions that don’t patch Postfix, but that’s only because I support many people on SMTP AUTH issues on the Postfix mailing list, and some maintainers have taken to doing this or that differently, which makes configuring SMTP AUTH even harder.

Personally I’d go for RedHat Linux, because I know it best and produce good results faster than on other platforms. But then I wouldn’t hesitate a second to go for something else if it suited the scenario better. That’s another side of Postfix I like very much: it runs on many, many systems.

RH: Debian GNU/Linux with Kernel 2.6.x. Patrick begs to differ on the Debian thing. Anyway, it works on any Unixoid OS. I ran it on Solaris and HP-UX back in the old days.

You cover the performance aspects of Postfix. Is it particularly taxing on hardware?

PK: That’s a question that turns up regularly on the Postfix mailing list. Read the archives… ;)

But seriously, you can run Postfix for a single domain on almost any old hardware that’s lying around. If your OS works with the hardware, Postfix will probably get along with it as well.

The more domains you add and the more mail you put through, the likelier it is, of course, that you will hit the limits. But those limits usually aren’t imposed by Postfix, but by the I/O performance of your hardware.

Think of it this way: mail transport is about writing, moving and copying little files in the filesystem of your computer. The MTA receives a mail from a client and writes it to a mail queue, where it waits for further processing. A scheduler determines the next job for the file and the message is moved to another queue. There it might wait another while until it gets picked up again to be delivered to another, maybe remote, destination. If the remote server is unreachable at the moment, it will be written back to the filesystem again, to another queue, and so on and so on, until it can finally be removed after successful delivery.

The calculation to decide what to do with the mail doesn’t take a lot of time, but writing, moving and copying the file takes a lot longer. That’s due to the limitations of hardware. Hard discs nowadays really can store a lot of e-mail, but access speed hasn’t grown at the same rate. Still, you need to stick with them, because storing the message on a temporary device would lose the mail if the system was turned off suddenly.

So the basic rule is to get fast discs, arrays and controllers when you need to handle _a lot_ of email. Regular hardware does the job quite well for private users.

Another slowdown you should be prepared for comes when you integrate anti-spam and anti-virus measures. They not only read and write the files, they also examine the content, which often requires unpacking attached archives. This will temporarily eat some of your CPU. But that’s something current hardware can deal with as well.

For hard facts you will need to find somebody who is willing to come up with a real-world, well-documented test scenario. So far, one person or another has posted “measurement data”, but none of them would really tell about their setup and how they tested. Also, I don’t know of any sophisticated comparison of Sendmail, qmail and Postfix.

Most of the “comparisons” I’ve heard weren’t able to get rid of the odor of “because you wanted it to be better”.

Such tests are not what Postfix is about and, as far as I can say without asking him, not what Wietse Venema is about either. I vividly recall him posting “Stop speculating, start measuring!” to someone who came up with a performance problem. I like that attitude a lot, because comparisons should be about facts, not belief.

I enjoyed the in-depth coverage on using certificate based security for authenticating communication between clients and servers. Do you see this as a vital step in the deployment process?

PK: Vital or not depends on your requirements and your in-house policy. Personally I like certificate-based relaying a lot, and I think it should be used more widely, because you could really track spam down a lot better and would gain a more secure mail transport at the same time. But certificate-based relaying simply lacks the critical mass of servers and clients supporting it.

As long as you don’t have that critical mass of servers and clients using it, there will always be a relay that does without it and that can be tricked into relaying spam one way or another, and you lose track of the sender.

It also takes more work to configure, and especially to maintain, certificate-based relaying, because you need to maintain the list of certificates. You need to remove the ones that have expired, add others, hand out new ones, this and that…

I think it’s a “good thing to do [TM]” if you use it in your company, have many mobile users, but most of all (!) have all clients and servers under your control. Then you can automate some of the work that needs to be done, and all of that together can pay for the security and simplicity you get on your network.

But I doubt any private user would be willing to pay the additional fee for maintenance not to mention the certificate infrastructure to maintain the certificates themselves.
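The certificate-based relaying Patrick describes maps roughly onto a handful of Postfix TLS parameters. A hedged sketch follows; the paths and the restriction list are examples, not values from the book:

```ini
# main.cf -- sketch of fingerprint-based relay permission
smtpd_tls_ask_ccert = yes
relay_clientcerts   = hash:/etc/postfix/relay_clientcerts
smtpd_recipient_restrictions =
    permit_tls_clientcerts,
    permit_mynetworks,
    reject_unauth_destination
```

The relay_clientcerts file lists certificate fingerprints, one per line, each followed by an arbitrary label; keeping that list current is precisely the maintenance burden Patrick mentions.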

Was it Yahoo who had some certificate-based anti-spam measure in mind? So many attempts to fix the effects of spam… I think what we really need is a redesign of SMTP to cope with the current challenges. But that’s another topic, and I’m certainly not the one to ask how it should be done. ;)

Is it better to use files or MySQL for the control tables in Postfix?

RH: “He said Jehova!”

Performance-wise, mysql just sucks. The latency for queries is way higher than when asking a file-based map. But then, with mysql maps, any changes to the map become effective immediately, without the daemons that use the map having to exit and restart. If your maps change often AND you get a lot of mail: mysql. In all other cases: file-based maps.

And: Keep it simple! If you don’t NEED mysql, why use it?
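Concretely, the choice Ralf describes is just a different lookup prefix on the same parameter. A sketch, using virtual aliases as the example table (the mysql query file and its contents are hypothetical):

```ini
# /etc/postfix/main.cf -- same logical table, two possible backends
virtual_alias_maps = hash:/etc/postfix/virtual
#virtual_alias_maps = mysql:/etc/postfix/mysql-virtual.cf

# /etc/postfix/mysql-virtual.cf -- hypothetical connection details
# for the mysql: variant above:
#   user     = postfix
#   hosts    = localhost
#   dbname   = mail
#   query    = SELECT destination FROM virtual WHERE address = '%s'
```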

PK: I don’t think there’s a better or worse, because either way you lose or gain something, and what you lose and gain aren’t the same things:

From a performance point of view, you lose a lot of time when you use SQL or LDAP databases because of their higher lookup latency, so you might want to stick with files.

But then, if you host many domains, you win a lot by maintaining the data in a database. You can delegate many administrative tasks to the end users, who access the database through some web frontend. So there’s the pro for databases.

If you need both performance and maintainability, you can build a chain from databases to files. The editing is done in the database, and a job on your computer checks the database on a regular basis and builds (new) files from it when the data has changed. This way you get the best of both worlds, for the price of a little delay after changes have been made in the database.
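The chain Patrick describes (edit in a database, regenerate flat files) can be sketched in a few lines. This is a hypothetical illustration, with SQLite standing in for the real database; a production job would also run `postmap` on the result to build the indexed map, and would typically be driven by cron:

```python
import os
import sqlite3
import tempfile

def rebuild_map(db_path, map_path):
    """Export alias rows from a database into a Postfix-style flat map.

    In production you would follow this with `postmap` to build the
    indexed .db file; here we only write the text source.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT address, destination FROM aliases ORDER BY address")
    with open(map_path, "w") as fh:
        for address, destination in rows:
            fh.write(f"{address}\t{destination}\n")
    conn.close()

# Demo with a throwaway database standing in for the "real" backend.
workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "mail.db")
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE aliases (address TEXT, destination TEXT)")
conn.execute("INSERT INTO aliases VALUES ('info@example.com', 'bob')")
conn.commit()
conn.close()

map_file = os.path.join(workdir, "virtual")
rebuild_map(db, map_file)
print(open(map_file).read())  # prints: info@example.com<TAB>bob
```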


Which do you prefer for mailbox access, POP or IMAP?

PK: An old couple sits in the kitchen at home.

She: “Let’s go to the movies.”
He: “But we have been to the movies just recently…”
She: “Yes, but they show movies in colour AND with sound now!”

Definitely IMAP ;)

RH: Depends on your needs. Let the user decide: go for courier-imap (which also does pop), so the user can choose.

Is there a simple solution to the spam problem?

RH: Mind control? Orbital lasers? No, but Postfix’s restrictions and the possibility of delegating policy decisions to external programs can help.

PK: No, unfortunately not. There are too many reasons why Spam works and a working solution would have to be technical, political and business oriented at the same time.

First of all, it works because the SMTP protocol as designed has little to no means of proving that a message was really sent by the sender given in the e-mail. Anybody can claim to be anybody. As long as this design problem persists, it will cost a fortune to track spammers down.

Even if you know where the spam came from, the spammer might have withdrawn to a country that doesn’t mind spammers and will protect them from being pursued under foreign law.

The world simply lacks anti-spam laws that all countries agree on. You are typically forced to end your chase for a spammer the moment you cross another country’s borders, because you are not entitled to pursue the suspect there.

Still, even if you were entitled to do so, it costs a fortune to track a spammer down, and even then it might take ages to get any money for the damage they have done. Is your company willing to pay that much just to nail one spammer when another two emerge the moment that one goes behind bars?

And then spam works because it is so cheap. You buy a hundred thousand addresses for 250 bucks or even less, and IIRC Yahoo found out that 1/3 of their mail users read spam and VISIT the pages it promotes.

If one wants to make it go away, one must make it expensive for those who send or endorse spam. If you ruin the business concept, no one will send spam. That’s business… ;)

To sum up my position: the problem is global and we don’t have the right tools to attack the cause. Currently all we can do is diminish the effect, by using as many anti-spam features as we can think of.

Do either of you have a favourite comic book hero?

PK: The “Tasmanian Devil” is my all-time favourite. I even have a little plastic figure sitting in front of me under my monitor, which has become a kind of talisman. It reminds me to smile at myself on days when I’d rather go out and kill somebody for not being the way I want them to be ;)

RH: Calvin (of Calvin and Hobbes)
Too much Coffee Man!

Author Bios
Ralf Hildebrandt and Patrick Koetter are active and well-known figures in the Postfix community. Hildebrandt is a systems engineer for T-NetPro, a German telecommunications company, and Koetter runs his own company consulting and developing corporate communication for customers in Europe and Africa. Both have spoken about Postfix at industry conferences and contribute regularly to a number of open source mailing lists.