Readers add their tips for better parallel processing
Joab Jackson’s recent blog item on whether parallel processing requires new languages pointed out that developers are divided on the best way to split programs across a multiple-core architecture. The comments his blog drew reflect some of those divisions.
One reader said some programming languages are already up to the task:
“If your processor runs lots of programs with a multitasking operating system, you will tend to make pretty good use of the processor cores as the OS schedules the various jobs to run. For instance, a legacy FORTRAN program (definitely not ‘parallelized’) ran nicely on four processors by simply partitioning the data into four blocks and running four instances of the program. … Occasionally, you will have that one, monolithic program that does not easily distribute across cores, but that's the only one that will need much attention. … So most can make good use of multicore processing even as we move to 6, 12, 16 or even 100 cores in an individual server as long as the number of processes you have is large (process-rich environment).”
Another reader said you don’t have to create new languages from scratch: “The quotes by [James] Reinders and [Brian] Goetz in this article brush over the fact that one can create new languages without discarding legacy code if that language supports interoperability. Java was a new language yet its support for calling existing routines through JNI helped people make the jump to it without discarding their existing code.”
Yet another commenter said trying to find new ways to program parallel processors “places the cart before the horse, so to speak. The idea that the industry must design a new programming model for multicore processors is pure folly. It should be the other way around. We must first come up with a correct parallel programming model and then design the parallel processors to support the model. … The reality is that everybody in the computer universe who has an understanding of the issues know[s] that the multithreaded approach to parallelism is complete nonsense. Threads are unreliable and hard to program. There is an infinitely better way to design and program parallel computers that is 100 percent threadless and deterministic.”
One writer suggested a different approach. “MasPar (massive parallelism) is an unavoidable milestone to be passed on the ‘AGI-Roadmap’ for my chosen field of artificial general intelligence. If old languages like Erlang and Haskell can do MasPar, fine, then we don't need to reinvent the wheel. But what we probably do need is a co-evolution simultaneously of MasPar hardware and MasPar languages to program the hardware.”
To read more comments from readers, go to the story.
Meanwhile, the issue of writing programs for multiple cores and multiple processors is explored further in “How to write apps for multiple cores: Divide and conquer,” which also appears in the June 15 print edition of GCN.