The prescient Amdahl
This week GCN has a feature story on how programmers can write their programs to get the most out of multicore processors.
The problem is only partly about dividing the work so that it uses all the processor cores you have available. That's complicated enough, but there is an additional trick: you must break up the problem in such a way that any gains in efficiency aren't eaten up by the overhead it takes to manage the execution across numerous cores.
This trade-off was perhaps first articulated by IBM computer architect Gene Amdahl. Amdahl observed that the performance gains expected from breaking a task into multiple simultaneously executed parts are limited by the portion of the work that cannot be split up, and are further offset by the overhead required to manage this new, more complex way of executing the problem. Engineers now refer to this balancing act as Amdahl's Law.
"Amdahl's Law expresses the law of diminishing returns: The incremental improvement in speedup gained by an improvement of just a portion of the computation diminishes as improvements are added," stated John Hennessy and David Patterson on their 2006 textbook 'Computer Architecture."
While most developers may only now be starting to think about concurrency, it is old news for those writing for supercomputer systems.
One of the sidebars to the main story is about how high-performance and clustered computer systems tackle this problem through the Message Passing Interface (MPI), a library of calls for Fortran, C and C++ applications.
For this article we sat in on a session about MPI given by Matthias Gobbert, a mathematics professor at the University of Maryland, Baltimore County, and an administrator for UMBC's Center for Interdisciplinary Research and Consulting.
As Gobbert noted, a nice aspect of MPI is that it doesn't greatly alter a programmer's environment. The library is primarily a set of bindings, available for C, C++ and Fortran, among other languages. The program code remains a single file, even if different processes are carved off for different processors to tackle.
The first step in making a program MPI-capable is simply to include a header in the program code. For C, the header would be "#include <mpi.h>".
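A minimal MPI program in C shows how little the surrounding code changes. The sketch below (our own example, not Gobbert's) only initializes the library, asks each process for its rank and the total number of processes, and shuts down:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start up the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes are running? */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }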
"It's your job to make they match up correctly," he said of the MPI_Send and MPI_Recv's.
Once the programmer is finished, the application is compiled with an MPI wrapper that invokes the compiler for the program's native language (many Linux distributions offer mpicc and mpiCC, MPI wrappers for C and C++ compilers, respectively). The wrapper supplies the MPI headers and libraries, and the native compiler builds the program as usual.
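On a typical Linux cluster with one of the common MPI implementations installed, the build-and-run cycle looks something like this (the file name and process count are just examples):

    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 4 ./hello_mpi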