Jungle Disk – great SOA demonstration, great potential
I recently came across Jungle Disk.
They use the Amazon S3 storage system to provide a secure way to store files over the Internet: Amazon provides the storage, and their software acts as the interface between your machine and Amazon. This is primarily practical for backups, and there are a number of benefits to the approach; for one, your backup is kept securely off site, and the data is encrypted too.
The cost is low too - 15 cents/Gigabyte - which compares favourably to similar services, like GoDaddy and Apple's iTools/.Mac service. Ironically for the latter, because Jungle Disk appears as a local disk, you can use Apple's Backup application to store your files on the remote Amazon system.
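As a rough cost check - assuming, as with Amazon's S3 pricing of the time, that the 15 cents/GB is a monthly storage rate - here is what a typical 20GB backup set works out to (the 20GB figure is just an example):

```shell
# Monthly cost of storing 20GB at 15 cents/GB/month.
gb=20
cents=$((gb * 15))
printf '$%d.%02d per month\n' $((cents / 100)) $((cents % 100))
```

That is $3.00 a month, which is hard to argue with for off-site, encrypted backups.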
Using static disks in Parallels for performance
Using a static disk, or even just multiple disks, within Parallels can make a big difference to performance. This is particularly true with Windows virtual machines within OS X; I’ve managed to cut the boot time from about 30 seconds to under 20 just by switching the VM to a static disk.
The default disk in Parallels is an expanding type. This saves disk space, because Parallels grows the disk file as you need the space, but it also means that Parallels has to manage the allocated space, adding to the disk file as it is used. Not only does that management imply a small overhead, there is a much larger chance of the file becoming fragmented.
A more annoying effect is that the constant use of an expanding disk for virtual memory under Windows means that the size of the disk may increase just because you opened a large application once.
You can get round this by creating a statically sized disk, and then setting the virtual memory within your virtual host to use that statically sized disk.
To do this:
- Shut down your virtual machine - you cannot do this with a machine in the paused state, because you are effectively adding new hardware to the machine.
- Click Edit to edit the configuration for the virtual machine.
- Click Add, and select a new hard disk.
- Uncheck the Expanding checkbox and set the size; 1-2GB is probably fine, but keep in mind that you will lose this amount of disk space permanently, even if your VM doesn’t use it all.
- Save your configuration.
- Start up your VM and configure the new drive.
For Windows:
- Log in as a user with Administrator privileges.
- Right click on My Computer and choose Manage.
- Under Storage, choose Disk Management.
- Create a new partition/volume.
- Once the new disk is ready to use, right click on My Computer again and choose Properties.
- Click the Advanced tab.
- Click Settings under Performance.
- Click the Advanced tab.
- Click Change under Virtual Memory.
- Reconfigure the virtual memory settings, creating new settings for the new drive (I recommend an initial size of 50MB and a maximum 2-10MB below the size of the disk). Windows will use the minimum and dynamically increase its usage up to the maximum.
- Remove the virtual memory configuration for the original system/expanding disk.
You should be all set.
It’s probably a good idea to run the Parallels Compressor now to reduce the size of your original disk, since it is no longer being used for virtual memory.
For Linux, Solaris and other Unix variants you might want to run, the process is of course slightly different. For some environments, there are other benefits, but I’ll cover that in a separate post.
ATA-over-Ethernet for Solaris
I noticed for the first time recently the ATA over Ethernet product from Coraid.
There’s a Solaris driver available - impressively, both as a SPARC binary for Solaris 7+ (direct download) and as source (direct download), under a BSD-like license. The release notes are required reading too.
ATA over Ethernet is an interesting concept, albeit an expensive one at the moment, but I like the idea of remote disks, rather than remote computers. The reason is simple: it makes much more sense for situations where you want a lot of storage, but still with direct access to the hardware from the machine using it. Traditionally you’d use NFS for that, but that requires a server of some kind, which seems a waste. Instead you can move the hard disks away (for noise and heat reasons) and keep the CPU and graphics interface local: think video production, or even large (but not necessarily fast) databases.
Now marry up ATA over Ethernet with ZFS and you could have a phenomenal ZFS pool, accessible directly from the desktop, and without the need to keep a unit like Thumper in a cabinet next to your desk.
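As a sketch of what that could look like - assuming the Coraid driver presents each AoE target as an ordinary Solaris disk device (the device and pool names below are purely illustrative):

```
# Build a mirrored ZFS pool from two remote AoE disks, then carve
# out a filesystem for video work. Device names are hypothetical.
zpool create mediapool mirror c3t0d0 c3t1d0
zfs create mediapool/video
zpool status mediapool
```

From the desktop's point of view, the pool behaves like local storage; only the noise and heat live elsewhere.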
Shame I probably won’t get the chance to try it out.
T1000 ALOM rocks
I love the Advanced Lights Out Management (ALOM) module in the T1000.
The T1000 is kept downstairs, and the noise can be uncomfortable, but the ability to power up and down the T1000 remotely over the network makes using it and testing it so much easier.
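For the curious, a typical remote session looks something like this (the `poweron`, `console` and `poweroff` commands are part of the standard ALOM command set; the annotations are mine):

```
sc> poweron       # power the host on over the network
sc> console -f    # attach to the host's serial console (force write access)
sc> poweroff -y   # gracefully shut the host down, skipping the confirmation
```

All of this happens over the ALOM network or serial management port, with the host itself powered off.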
ALOM should be standard on all computers!
Niagara II
The Niagara II architecture is on the way, and it promises to double the throughput of the original T1 (Niagara) CPU and provide a host of other benefits.
The Niagara CPU (T1) as provided in the T1000 (read my T1000 in more detail review) and T2000 (read T2000 faster than I need) supports 8 cores, with 4 threads per core, and a single, shared FPU. That single FPU becomes a problem in high-volume floating point work, because it can slow down the work of all the other cores and threads.
Each core uses the comparatively slow access to RAM to trigger a context switch: although a core is not executing its four threads simultaneously, when one thread stalls waiting for data, another thread runs until that data is available. This lets you get a lot of execution power out of a single core that would otherwise be sitting there idle.
With the Niagara II CPU there are four significant improvements, based on the same eight-core approach:
- Doubling of thread support to eight simultaneous threads per core, and therefore 64 simultaneous threads on the one CPU.
- Each core now has its own FPU, improving the rate of floating point calculations.
- Upping of the CPU rate to 1.4GHz.
- Support for dual-CPU systems.
That last item is very interesting, because it means that you’ll be able to support a single system with 128 simultaneous threads. If Sun could squeeze that into a 1U unit like the T1000, you could support an impressive 5,376 simultaneous threads within a standard full-height rack.
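The arithmetic behind those numbers is easy to check: a dual-CPU Niagara II system is 2 CPUs x 8 cores x 8 threads, and a standard full-height rack holds 42 1U units.

```shell
# Sanity-check the rack figures for a dual-CPU Niagara II in 1U.
threads_per_system=$((2 * 8 * 8))
rack_threads=$((42 * threads_per_system))
echo "$threads_per_system threads per system, $rack_threads per rack"
```

That gives 128 threads per system and 5,376 per rack, matching the figures above.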
Of course, to back that up, there are some additional changes. The replacement for the T1000 is expected to support 64GB RAM (twice the current) and the T2000 128GB (also twice the current), and 10Gb Ethernet will be standard on the motherboard.
The rest of the key features will remain the same, including the ability, through software, to control the individual cores and lower power consumption. I’ve mentioned it before, but I still think there could be potential for a portable version of the T1 - the Intel dual core CPUs show that multi-core technology of this type is something that can be applied in a laptop.
The Ultra 3 Mobile Workstation (read Ultra 3 Review) is not a small unit, although the size of the T1 CPU is such that it would take up a significant portion of the case…
Even a 4 core/4 thread version of the Niagara would be an interesting concept, and would keep the size and power requirements down.
Until then, I’ll just have to keep testing the T1000. I’ve spent 3 days now trying out the Cooltools, and I’ll probably be posting the preliminary results this week.
Parallels Update
Parallels for Mac has been updated, but it’s the stuff beyond the headline elements that I find most interesting.
The headline elements are:
- Support for the new Mac Pro (and up to 3.5GB RAM)
- Support for Mac OS X 10.5 (Leopard)
- Experimental support for Vista
That’s all great, and I’m just installing Vista beta 2 on the iMac as I type this on the MBP.
However, for me the key elements are:
- The Solaris guest OS no longer hangs after suspend/resume - this was a really annoying issue that bugged me no end. Although my iMac 17″ stays on full time, my MBP is set to sleep after an hour, and leaving it alone while updating Solaris 10 or running tests would mean that I’d have to force a reboot and sometimes start again.
- A fix for Solaris not working with more than one virtual disk - this was particularly annoying, as I’ve been playing with ZFS, and having multiple virtual hard disks to toy with is much better than playing with partitions.
So far, the new version seems great. Being able to play with ZFS (even with expanding disks) is fantastic. I’ve been too busy to let the machine sleep and trigger the freezing problem.
Extending documentation formats and facilities using the Docbook base
Back in July, we made an Eclipse documentation plug-in of the MySQL manuals available for users to download.
In truth, the Eclipse documentation format is actually just HTML; you have to combine the HTML with a plug-in manifest that details the documentation, version number, and so on, so that the documentation is loaded and identified as a valid plug-in element when Eclipse starts.
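As a sketch of the manifest side, the Eclipse help system registers documentation through the `org.eclipse.help.toc` extension point in plugin.xml (the file name and structure below are the standard ones; the plug-in identity itself - id, name, version - lives in the plug-in's metadata, and the `toc.xml` file name here is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- plugin.xml: registers a table of contents so that Eclipse
     lists the bundled HTML documentation in its Help system. -->
<plugin>
  <extension point="org.eclipse.help.toc">
    <!-- toc.xml maps topic titles to the HTML pages in the plug-in. -->
    <toc file="toc.xml" primary="true"/>
  </extension>
</plugin>
```

Once the plug-in is dropped into the Eclipse plugins directory, the documentation shows up alongside the built-in help on the next start.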