Parallel processing is one of the hot topics in computing these days, so I’m always interested when a major tools vendor introduces a new technology designed to make parallelization and concurrency simpler for the lay programmer. Most recently, Apple finalized the APIs for Grand Central this week, coinciding with the latest development build of its forthcoming Snow Leopard edition of Mac OS X.
Grand Central is a new set of technologies that bakes concurrency into the heart of Mac OS X. It’s designed to make it easier for programmers to divide their applications into separate, independent processing tasks, each of which can then be handed off to the OS for efficient distribution across multiple CPU cores. With Grand Central, the OS itself handles much of the low-level grunt work of scheduling and routing independent tasks, freeing programmers to concentrate on user-facing issues.
This kind of assistive technology will be essential if developers hope to take full advantage of the next generation of high-performance CPUs. Cores have replaced clock speed as the new metric for processor power. If you think it’s challenging to write code that runs efficiently on today’s four- and eight-core systems, just wait until the average desktop PC contains 16 or 32 cores, or more.
Still, I can’t help but wonder whether the industry as a whole might be running in the wrong direction. Despite years of research into grid processing and HPC (high-performance computing), efficient parallelization remains a tough nut to crack. The systems that do it well are mainly purpose-built environments that are poorly suited to the needs of your average PC user.
So why not just have those systems do what they do best and leave our PCs to handle user experience and interactivity? In other words, why are we trying to re-create Google-style parallelism on our desktops when we could just have Google handle the heavy lifting for us?