Apple has published an in-depth profile piece looking at Virginia Tech's award-winning supercomputer project.

Originally based on G5 Power Macs, the system now employs 1,100 2.3GHz Xserves. The cluster operates at 12.25 teraflops.

Scientists at the university are thrilled with the cluster because it puts serious computing power into the hands of more researchers at a price many institutions can afford, promising a scientific renaissance.

The Xserve-based supercomputer is part of Virginia Tech’s Institute for Critical Technology and Applied Science, which tackles major scientific and engineering challenges such as nanoelectronics.

Dr. Cal Ribbens, associate professor of computer science, explains how System X helps tackle such problems: “The results of the simulation demand that the computations be very tightly coupled, because the answer in one part of the simulation depends on the answer in another part. So not only do you need very powerful computers, you need them connected by a very fast network.”
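To illustrate what "tightly coupled" means in practice (this is a toy sketch, not System X's actual code), consider a 1-D heat-diffusion simulation whose domain is split across several simulated nodes. At every time step, each node needs the boundary values held by its neighbours before it can advance, which is exactly the dependency Ribbens describes. On a real cluster that exchange travels over the interconnect each step, so network speed gates the whole computation.

```python
# Toy "tightly coupled" simulation: 1-D heat diffusion with the
# domain split across four simulated nodes. Each step requires a
# halo exchange -- every node must fetch its neighbours' edge cells
# before it can compute. (Illustrative only; a real cluster would do
# this exchange over the network, e.g. with MPI.)

def step(chunks):
    """Advance each node's chunk one explicit diffusion step (alpha = 0.25)."""
    new_chunks = []
    n = len(chunks)
    for i, chunk in enumerate(chunks):
        # Halo exchange: edge values cross the (simulated) node boundary.
        left = chunks[i - 1][-1] if i > 0 else chunk[0]
        right = chunks[i + 1][0] if i < n - 1 else chunk[-1]
        padded = [left] + chunk + [right]
        new_chunks.append([
            padded[j] + 0.25 * (padded[j - 1] - 2 * padded[j] + padded[j + 1])
            for j in range(1, len(padded) - 1)
        ])
    return new_chunks

# Four "nodes", four cells each; all the heat starts on node 0.
chunks = [[100.0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
for _ in range(50):
    chunks = step(chunks)

# Heat has reached every node -- possible only because boundary
# values crossed node boundaries at every single step.
assert all(cell > 0 for chunk in chunks for cell in chunk)
```

Because no node can take step *n+1* until its neighbours have finished step *n* and shipped their edge values over, latency on that exchange stalls every processor in the machine, which is why the cluster pairs fast CPUs with a fast interconnect.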

Dr. Srinidhi Varadarajan, director of the university’s Terascale Computing Facility and the system’s lead designer, said: “Its floating-point performance matches or exceeds that of Intel’s Itanium 2 solution.”

Varadarajan also points to the Xserve's use of ECC (error-correcting code) memory, which guards against corrupt data and read/write errors by detecting and correcting bit-level faults as they occur.

In layman's terms, even tiny memory errors can skew a calculation, particularly in runs that take weeks to complete.

Varadarajan explains: "Think of it as potential loss of long-term memory in humans."

He observes how important this feature is to help solve major computational problems: "Without a way to correct for errors, scientists have to repeat a run five or six times. And, even then, all of their results may only point roughly in the same direction. Their machines are telling them nothing useful.”
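The idea behind ECC can be sketched with a Hamming(7,4) code, a classic single-error-correcting scheme. This is a simplified illustration of the principle, not the wider SECDED codes actual ECC DIMMs use: parity bits cover overlapping subsets of the data bits, so a single flipped bit can be located and silently repaired on read.

```python
# Toy Hamming(7,4) code: the principle behind ECC memory.
# Real ECC DIMMs use wider SECDED codes, but the mechanism is the same.

def encode(data):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = data
    # Each parity bit covers an overlapping subset of the data bits.
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code):
    """Return the 4 data bits, correcting at most one flipped bit."""
    c = list(code)
    # Recompute the parity checks; the failing checks, read as a
    # binary number, give the 1-based position of the flipped bit.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

# A single corrupted bit is repaired transparently on read.
word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1            # simulate a stray bit flip in memory
assert decode(stored) == word
```

Without this correction, a single flipped bit would propagate silently through every subsequent calculation, which is why Varadarajan compares its absence to memory loss: the machine cannot tell that what it "remembers" is wrong.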

Dr. Ribbens says the new Xserve cluster means better science: “People are already doing bigger things."