Tag Archives: IBM DeveloperWorks

Applying memcached to increase site performance

A new article on using memcached, the in-memory caching tool, to improve website and application performance is now available on IBM developerWorks:

The open source memcached tool is a cache for storing frequently used information to save you from loading (and processing) information from slower sources, such as disks or a database. It can be deployed in a dedicated situation or as a method of using up spare memory in an existing environment. Despite the simplicity of memcached, it is sometimes used incorrectly, or it is used as a solution in the wrong type of environment. Learn when it is best to take advantage of using memcached.
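
To make the caching pattern concrete, here is a minimal sketch using the libmemcached C client; it assumes a memcached instance on localhost:11211, and the key, value, and expiry time are purely illustrative:

    /* A minimal libmemcached sketch: cache a value and read it back.
     * Assumes a memcached instance listening on localhost:11211.
     * Compile with something like: cc cache.c -lmemcached */
    #include <libmemcached/memcached.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        memcached_return_t rc;
        memcached_st *memc = memcached_create(NULL);
        memcached_server_st *servers =
            memcached_server_list_append(NULL, "localhost", 11211, &rc);
        memcached_server_push(memc, servers);

        const char *key = "user:42:profile";        /* illustrative key */
        const char *value = "{\"name\": \"MC\"}";   /* e.g. a rendered fragment */

        /* Store with a 300-second expiry; on a later miss you would fall
         * back to the slower source (database, disk) and repopulate. */
        memcached_set(memc, key, strlen(key), value, strlen(value),
                      (time_t)300, (uint32_t)0);

        size_t value_len;
        uint32_t flags;
        char *cached = memcached_get(memc, key, strlen(key),
                                     &value_len, &flags, &rc);
        if (rc == MEMCACHED_SUCCESS) {
            printf("cache hit: %.*s\n", (int)value_len, cached);
            free(cached);
        } else {
            printf("cache miss: load from the database, then memcached_set()\n");
        }

        memcached_server_list_free(servers);
        memcached_free(memc);
        return 0;
    }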

Read Applying memcached to increase site performance

Adding DTrace probes to your applications

A new article on adding DTrace probes to your application has been published on IBM developerWorks:

DTrace provides a rich environment of probes that can be used to monitor the execution of your system, from the kernel up to your application. You can perform a significant amount of examination without changing your application, but to get detailed statistics, you need to add probes to your application. In this article, we will examine how to design the probes, where best to place them in your application, and how to effectively build and use the probes that you have added.
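
As a taste of what adding probes involves, here is a sketch of a statically defined (USDT) probe pair in C; the provider name "myapp" and the probe names are hypothetical, and the build steps in the comments follow the usual dtrace -h / dtrace -G workflow:

    /* Sketch of adding static (USDT) probes to an application.
     * The provider name ("myapp") and probe names are hypothetical.
     *
     * 1. Describe the probes in a D provider file, e.g. myapp_probes.d:
     *
     *        provider myapp {
     *            probe request__start(char *url);
     *            probe request__done(char *url, int status);
     *        };
     *
     * 2. Generate the header:  dtrace -h -s myapp_probes.d -o myapp_probes.h
     * 3. On Solaris, also run  dtrace -G -s myapp_probes.d myapp.o
     *    and link the generated object into the final binary. */
    #include <stdio.h>
    #include "myapp_probes.h"   /* generated by dtrace -h */

    static int handle_request(const char *url)
    {
        /* Fire the probe only when something is actually tracing it;
         * the *_ENABLED() test keeps the disabled-probe cost negligible. */
        if (MYAPP_REQUEST_START_ENABLED())
            MYAPP_REQUEST_START((char *)url);

        int status = 200;            /* ... the real work happens here ... */
        printf("served %s\n", url);

        if (MYAPP_REQUEST_DONE_ENABLED())
            MYAPP_REQUEST_DONE((char *)url, status);
        return status;
    }

    int main(void)
    {
        return handle_request("/index.html") == 200 ? 0 : 1;
    }

With the application running, a one-liner such as dtrace -n 'myapp*:::request-start { trace(copyinstr(arg0)); }' would then fire on every request.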

Read Adding DTrace probes to your applications

Deploying Gearman across multiple environments

A new article on using Gearman, the work distribution tool, has been published on IBM developerWorks:

The open source Gearman service allows you to easily distribute work to other machines in your network, either because you want to spread the work over a large body of machines or because you want to share the functionality of different languages and environments with each other. In this article, you will look at some typical uses of Gearman and how it can solve a variety of issues and problems in modern applications. You will also learn how Gearman can be combined with other tools, like memcached, to help speed up your application and processing requirements.

I’ve tried to pay particular attention to using it where you might normally use RPC or web services, or when you want to execute large numbers of jobs and spread them over many machines or across different parameters.
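
As a rough illustration of the client side, here is a sketch using the libgearman C API; it assumes a gearmand job server on localhost:4730 and a worker that has registered a function called "resize_image", with the function name and payload invented for the example:

    /* A minimal Gearman client sketch using libgearman.
     * Assumes a gearmand job server on localhost:4730 and a worker that
     * has registered a function called "resize_image"; the function name
     * and payload are invented for the example.
     * Compile with something like: cc client.c -lgearman */
    #include <libgearman/gearman.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        gearman_client_st *client = gearman_client_create(NULL);
        gearman_client_add_server(client, "localhost", 4730);

        const char *workload = "/images/photo.jpg";   /* illustrative payload */
        size_t result_size;
        gearman_return_t ret;

        /* Submit the job and block until a worker returns a result;
         * gearman_client_do_background() would queue it and return at once. */
        void *result = gearman_client_do(client, "resize_image", NULL,
                                         workload, strlen(workload),
                                         &result_size, &ret);
        if (ret == GEARMAN_SUCCESS) {
            printf("worker replied: %.*s\n", (int)result_size, (char *)result);
            free(result);
        } else {
            fprintf(stderr, "job failed: %s\n", gearman_client_error(client));
        }

        gearman_client_free(client);
        return 0;
    }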

Read Deploying Gearman across multiple environments

Deep-protocol analysis of UNIX networks

A new article on deeper analysis of network packets is now available on IBM developerWorks:

Whether you are monitoring your network to identify performance issues, debugging an application, or have found an application on your network that you do not recognize, occasionally you need to look deep into the protocols being used on your UNIX® network to understand what they are doing. Some protocols are easy to identify and understand, even when used on non-standard ports. Others need more investigation to understand what they are doing and what information they are exchanging. In this article, we will take a look at techniques for performing detailed analysis of the protocols in use on your UNIX network.

The piece specifically looks at ways of extracting more detailed information from the raw data you see on your network.
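
For readers who want to get at the raw packets programmatically rather than through tcpdump or Wireshark, here is a skeleton in C using libpcap onto which deeper protocol decoding can be built; the interface name and filter expression are illustrative, and the decoding assumes Ethernet framing and IPv4:

    /* Skeleton for programmatic packet inspection with libpcap.
     * The interface name ("en0") and filter are illustrative; run as root.
     * Decoding assumes Ethernet framing and IPv4, and uses the BSD-style
     * struct ip definitions.
     * Compile with something like: cc sniff.c -lpcap */
    #include <pcap/pcap.h>
    #include <net/ethernet.h>
    #include <netinet/ip.h>
    #include <arpa/inet.h>
    #include <stdio.h>

    static void inspect(u_char *user, const struct pcap_pkthdr *hdr,
                        const u_char *bytes)
    {
        /* Step over the Ethernet header and read the IP header; deeper
         * protocol decoding would continue from the payload that follows. */
        const struct ip *iph =
            (const struct ip *)(bytes + sizeof(struct ether_header));

        printf("%s -> ", inet_ntoa(iph->ip_src));
        printf("%s  proto=%d  len=%u\n",
               inet_ntoa(iph->ip_dst), iph->ip_p, hdr->len);
        (void)user;
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *handle = pcap_open_live("en0", BUFSIZ, 1, 1000, errbuf);
        if (handle == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* Only capture traffic to or from port 80 (HTTP). */
        struct bpf_program prog;
        pcap_compile(handle, &prog, "tcp port 80", 1, PCAP_NETMASK_UNKNOWN);
        pcap_setfilter(handle, &prog);

        pcap_loop(handle, 20, inspect, NULL);   /* inspect 20 packets */

        pcap_freecode(&prog);
        pcap_close(handle);
        return 0;
    }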

Read Deep-protocol analysis of UNIX networks

Saving money with open source, Part 3: The OpenChange solution offers great promise

The third and final part of my series on saving money with open source covers the OpenChange mail server, which is designed to provide a complete collaboration environment equivalent to Microsoft’s Exchange server in both protocol and functionality.

From the article:

In today’s economic climate, everyone is looking for ways to reduce expenses. In the IT sector, one way to cut costs is by turning to open source alternatives instead of using expensive licensed products. This last part of our series explores OpenChange, which is designed to be used as an Exchange groupware server. E-mail is probably the backbone of your business; when the e-mail servers go down, everything can quickly grind to a halt. In this article, learn about the OpenChange e-mail server and whether it is ready for prime time.

Read: Saving money with open source, Part 3: The OpenChange solution offers great promise

Saving money with open source, Part 2: Tap into the power of OpenOffice

The second part of the series on saving money using open source technology looks at OpenOffice, a complete office suite comprising a word processor, spreadsheet, and presentation package, among other tools.

From the intro:

On the desktop, the operating system and environment are less important than the applications that support the main operating functions for your office. Your business drives your application requirements, but most businesses will also use an office suite, such as OpenOffice, to support their core operations.

The OpenOffice suite is open source, freely available, and completely compatible with a wide range of different office suites, including Microsoft Office. It’s a compatible product, both in terms of file readability and usage, and you can try out OpenOffice with no barriers.

Read: Saving money with open source, Part 2: Tap into the power of OpenOffice

Saving money with open source, Part 1: Use the Ubuntu operating system

I completed a series earlier this year on various tools from the open source world that can save you money by replacing commercial products and licenses.

The first article looks at the Ubuntu Linux distribution. From the intro:

Part 1 discusses Ubuntu, a community-developed, Linux-based operating system for laptops, desktops, and servers. Ubuntu contains many applications: a Web browser; presentation, document, and spreadsheet software; instant messaging; and much more. This article explores Ubuntu’s:

  • Benefits
  • Updates and stability
  • Desktop version
  • Compatibility and integration
  • Hardware support

Read: Saving money with open source, Part 1: Use the Ubuntu operating system

UNIX network performance analysis

In a follow-up to an article I wrote earlier this year on analyzing the structure and layout of your network using ping and other tools, I’ve written another article along similar lines, this time looking at how to monitor and report on the performance of your network, and how to identify and diagnose problems.

Knowing your UNIX network layout will go a long way toward understanding your network and how it operates. But what happens when the performance of your UNIX network, and the speed at which you can transfer files or connect to services, suddenly drops? How do you diagnose the issues and work out where in your network the problems lie? This article looks at some quick methods for finding and identifying performance issues and the steps to start resolving them.
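
One quick, scriptable check along these lines is simply to time how long a TCP connection to a service takes; the sketch below does that in C with clock_gettime(), with the host and port chosen purely for illustration:

    /* One quick performance check: time a TCP connect to a service.
     * The host and port are illustrative.
     * Compile with something like: cc conntime.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <time.h>

    int main(void)
    {
        const char *host = "www.example.com";   /* illustrative target */
        const char *port = "80";

        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        int err = getaddrinfo(host, port, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            perror("connect");
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - start.tv_sec) * 1000.0 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
        printf("TCP connect to %s:%s took %.2f ms\n", host, port, ms);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }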

Read: UNIX network performance analysis

UNIX network analysis

I have a new tutorial on analyzing networks, in terms of understanding your basic network configuration, the other machines and devices on the network, and the general topology.

From the intro:

When accessing a new UNIX system, or even understanding an existing one, a key part of the puzzle of how the system operates is the network configuration. There are many aspects of the network that you need to know and understand to correctly identify problems and prevent future problems. By using some basic tools and commands, you can determine a lot about the configuration of a single system and, through this basic understanding, gain a good idea of the configuration of the rest of the network. With some additional tools, you can expand that knowledge to cover more systems and services within your network.

In this tutorial you will use some basic tools within the UNIX environment that can disclose information about the configuration of your system. By understanding these tools and the information they output, you will be able to gain a greater understanding of your system network configuration and how it works. You will also examine tools and solutions that can look at the wider network and gain more detailed information about your network, its potential security issues, and key points of information that will help you identify and diagnose problems when they do occur.
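
As a small example of the kind of information a single system will disclose, the following C sketch walks the interface list with getifaddrs() and prints each configured address, much as ifconfig or netstat -i would:

    /* List each configured interface address, much as ifconfig or
     * netstat -i would; a first step in mapping a system's network setup.
     * Compile with: cc iflist.c */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <ifaddrs.h>

    int main(void)
    {
        struct ifaddrs *ifap, *ifa;
        if (getifaddrs(&ifap) != 0) {
            perror("getifaddrs");
            return 1;
        }

        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
            if (ifa->ifa_addr == NULL)
                continue;
            int family = ifa->ifa_addr->sa_family;
            if (family != AF_INET && family != AF_INET6)
                continue;   /* skip link-level entries */

            char addr[NI_MAXHOST];
            socklen_t len = (family == AF_INET)
                                ? sizeof(struct sockaddr_in)
                                : sizeof(struct sockaddr_in6);
            if (getnameinfo(ifa->ifa_addr, len, addr, sizeof(addr),
                            NULL, 0, NI_NUMERICHOST) == 0)
                printf("%-8s %-5s %s\n", ifa->ifa_name,
                       (family == AF_INET) ? "inet" : "inet6", addr);
        }

        freeifaddrs(ifap);
        return 0;
    }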

Read UNIX network analysis

Solutions for tracing UNIX applications

Tracing applications is something of a passion for me, especially with the introduction of DTrace in Solaris and Mac OS X.

To support that, I have a new tutorial on the different methods available for tracing UNIX applications. I tried to concentrate on tools and techniques that don’t require access to the source, such as truss and DTrace.

From the intro:

Most developers and systems administrators know what should happen in their operating system and with their applications, but sadly, things don’t always go as expected. There are times when an application has failed, or is not behaving as you expect, and you need to find out more information. By using your existing knowledge of how your application should work and some basic UNIX skills, you can trace the application to find out what is causing the problem. This tutorial will teach you the basic techniques of using tracing tools to find out what your application is doing behind the scenes.

First, the tutorial examines the distinction between debugging and tracing and how the two approaches differ. Then it examines some specific examples of where tracing can be used to solve problems in your application. DTrace provides elements of both system tracing and debugging, and also gives you the ability to time and benchmark applications. Finally, the tutorial shows how to trace the information being exchanged between networked computers to help find problems in network applications.
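
To give a feel for the mechanism that tools such as truss and strace build on, here is a toy system-call tracer; note that this sketch uses Linux ptrace() on x86-64 purely as an illustration, while truss on Solaris uses /proc and DTrace works differently again:

    /* Toy system-call tracer showing the mechanism that tools such as
     * strace rely on; truss on Solaris uses /proc instead, and DTrace is
     * different again. This sketch is Linux/x86-64 specific.
     * Usage: ./tracer /bin/ls */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced */
            execvp(argv[1], &argv[1]);
            perror("execvp");
            _exit(127);
        }

        int status;
        waitpid(child, &status, 0);                  /* child stops at exec */

        while (!WIFEXITED(status)) {
            /* Resume until the next system-call boundary; each call is
             * reported twice, once on entry and once on exit. */
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFSTOPPED(status)) {
                struct user_regs_struct regs;
                ptrace(PTRACE_GETREGS, child, NULL, &regs);
                printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
            }
        }
        return 0;
    }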

Read Solutions for tracing UNIX applications