Friday, February 27, 2009

Multicore - challenges on several fronts!

So it was my privilege to host a GCN! webinar on multicore yesterday, Thursday, February 26th; see www.gridcomputingnow.org for more. I have been looking at this area for a while, wondering what steps have been taken to ensure that the new cores landing on your desktop will actually be used efficiently. It turns out that this is a questionable expectation.

The key points made in the webinar, which featured colleagues Francis Wray of Concertant LLP and David Henty of the University of Edinburgh, were that:
  • parallel programming techniques are required to take advantage of multicore and other parallel architectures;
  • these techniques come in essentially two variants: shared memory and message passing (see the two sketches after this list);
  • in the former, program components access a single pool of memory and handle all the associated administration and control themselves;
  • in the latter, data is distributed across the infrastructure and operated upon in parallel, with components exchanging explicit messages;
  • for the highest performance, message passing is generally deemed superior.
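
To make the distinction concrete, here is a minimal shared-memory sketch in C using OpenMP. This is my own illustration, not code from the webinar: all threads see the same memory, and the reduction clause handles the administration of combining their partial results.

    /* Shared memory with OpenMP. Compile with e.g. gcc -std=c99 -fopenmp sum_omp.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* Every thread sees the same 'sum'; reduction(+:sum) gives each thread
           a private copy and combines them at the end, avoiding a data race. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }

And the message-passing equivalent, again an illustrative sketch, this time with MPI: each process works on its own slice of the data, and the partial results travel as explicit messages rather than through shared memory.

    /* Message passing with MPI. Compile with mpicc, run with e.g. mpirun -np 4 */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 1000000;
        double local = 0.0, total = 0.0;

        /* Each process owns a strided slice of the work; nothing is shared. */
        for (int i = rank; i < n; i += size)
            local += 1.0 / (i + 1);

        /* Partial sums are sent as messages and combined on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f (processes: %d)\n", total, size);

        MPI_Finalize();
        return 0;
    }
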
Now, how many readers of this blog remember these facts from their training? How many have used them in anger?

Frannie frightened the life out of me when he said that the thing to do was to build an application and then see how well it runs in a multicore environment. Have we really nothing better to show for 30 years of development than traditional hand-crafting?

It certainly seems as though there are some commonly accepted language extensions which allow shared data structures to be defined. There are also some common-sense guidelines (suspend disbelief on this) for parallel programming models, and some tools, such as compilers and post-coding analysis tools, which can help.
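
OpenMP is the obvious example of such an extension. Here is a small sketch of my own (again, not from the webinar) showing a shared data structure and the hand-rolled synchronisation it demands:

    /* OpenMP data-sharing clauses. Compile with e.g. gcc -std=c99 -fopenmp */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        int counter = 0;                /* one copy, visible to every thread */

        #pragma omp parallel default(none) shared(counter)
        {
            int id = omp_get_thread_num();   /* one private copy per thread */

            /* Unsynchronised updates to shared data are the classic pitfall;
               this critical section is administration we supply ourselves. */
            #pragma omp critical
            counter += id;
        }

        printf("counter = %d\n", counter);
        return 0;
    }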

But the bottom line is that you are on your own if you want to develop applications targeted at multicore environments. And don't expect any help from the O/S if you simply want to get the best out of your dual-core laptop. It seems that the plethora of "services" running in the background on your PC today will preempt any "threads" you fire up in parallel, hobbling the performance of your application. And that's assuming you can figure out how to get the thing to work in the first place!
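
One partial workaround, at least on Linux, is to pin your computation to a specific core so the scheduler does not bounce it around. This is my own suggestion rather than anything from the webinar, and it cannot stop other processes being scheduled on the same core:

    /* Linux-only sketch: pin the calling process to one core. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1, &set);           /* run only on core 1 (assumes it exists) */

        /* A hint to the scheduler, not a guarantee of exclusive use. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        printf("cores online: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
    }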

So, have we been sold a pup? The idea that Moore's Law continues, in the face of rising heat and power requirements, through the deployment of multicore is to my mind extremely dubious. It might hold if the compiler and operating system helped create and allocate threads effectively, and if the O/S allocated work to the processors in a sensible way. But if that is not the case, are we simply going to leave the other processors on the die unused while the only one we can access melts itself with exhaustion?

Intel is right to moan that the software industry is not keeping up. But what steps has it taken to raise this issue and to ensure the right tools are available? Please let me know. By the way, Sandia National Labs have recently published a study which questions the multicore strategy beyond a few processors: the memory contention and communications challenges presented when large numbers of cores are present tend to defeat the strategy. See http://www.spectrum.ieee.org/nov08/6912 for more.

And I thought that merely using the cores as servers for virtualisation would do the trick!!
