
Programming for Parallelism: The Parallel Hierarchies Pattern


The advent of multi-core processors offers new performance opportunities. However, according to Ina Fried, this hardware trend is a big challenge for the software industry, since “most of today's software isn't built to handle that kind of advance”. Intel fellow Shekhar Borkar likewise argues that “large software makers […] have not moved fast enough” and that the time has come for software to follow Moore's law, i.e. to double “the amount of parallelism that it can support every two years”. (Moore's law is the observation that the number of transistors on a chip, and hence its potential performance, roughly doubles every two years.) Borkar believes that software development practices have to be retooled in order to leverage this potential performance improvement.

In this vein, a new architectural pattern for parallel programming has been introduced (pdf) by Prof. Jorge L. Ortega-Arjona from the Mathematics Department of the National Autonomous University of Mexico.  The pattern attempts to increase performance of execution when:
It is necessary to perform a computation repeatedly, composed of a series of ordered operations on a set of ordered data. Consider a program whose output may be the result of a single complex computation expressed as a series of conceptually ordered simple operations, executed not for value but for effect, at different levels. An operation at a high level requires the execution of one or more operations at lower levels. If this program is carried out serially, it can be viewed as a chain of subroutine calls, evaluated one after another.
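The serial case described above can be sketched as follows. This is a minimal illustration, not code from the paper; the layer names and operations are invented purely to show the chain-of-subroutine-calls structure:

```python
# Serial layered computation: each layer delegates a simpler
# subtask to the layer below it (illustrative operations only).
def layer_low(x):
    # Lowest layer: performs the simple operation itself.
    return x * 2

def layer_mid(x):
    # Middle layer: delegates to the lower layer, then refines the result.
    return layer_low(x) + 1

def layer_high(x):
    # Highest layer: the entry point of the ordered chain of operations.
    return layer_mid(x) ** 2

# Carried out serially, the program is a chain of subroutine calls,
# evaluated one after another for each input datum.
results = [layer_high(x) for x in range(4)]
```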
His Parallel Hierarchies pattern is “an extension of the Layers pattern”, which partitions the problem into layer components responsible to “provide operations or functions to more complex level layers, and to delegate more simple subtasks to layers in less complex levels.” Elements of functional parallelism are introduced by allowing “two or more components of a layer […] to simultaneously exist, normally performing the same operation”:
During the execution of operations in each layer, usually the higher layers have to wait for a result from lower layers. However, if each layer is represented by more than one component, they can be executed in parallel and service new requests. Therefore, at the same time, several ordered sets of operations can be carried out by the same system. Several computations can be overlapped in time.
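The overlap described above can be sketched with thread pools. This is a hedged illustration of the idea rather than the paper's reference implementation: each layer is assumed to be backed by a small pool of identical components, so several ordered computations can proceed at the same time instead of queuing behind a single component per layer:

```python
from concurrent.futures import ThreadPoolExecutor

N_COMPONENTS = 4  # components per layer (an illustrative choice)

# Each layer is represented by more than one component: a pool of
# identical workers that all perform the same operation.
lower_layer = ThreadPoolExecutor(max_workers=N_COMPONENTS)
upper_layer = ThreadPoolExecutor(max_workers=N_COMPONENTS)

def lower_op(x):
    # The simple operation provided by the lower layer.
    return x * 2

def upper_op(x):
    # An upper-layer component delegates to a lower-layer component and
    # waits for its result; meanwhile, the other upper-layer components
    # remain free to service new requests.
    return lower_layer.submit(lower_op, x).result() + 1

# Several ordered sets of operations are carried out by the same
# system at once: the computations overlap in time.
results = list(upper_layer.map(upper_op, range(8)))
```

The key difference from the serial case is that a higher layer no longer blocks the whole system while waiting for a lower layer: only the one delegating component waits, while its siblings keep accepting work.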
Among the known uses of the pattern suggested by Prof. Ortega-Arjona are “Tree structure operations like search trees, where a search process is created for each node.” Among the listed limitations are that “not every system computation can be efficiently structured as layers,” that the pattern does not work well when upper layers have too many dependencies on lower layers (e.g. typical web-based applications), and that designing layers at the right level of granularity can be quite complex.

If you agree with Jon Erickson, who believes that “concurrency […] should be in the forefront of all developer's and architect's field of vision”, then it’s good to keep an eye on emerging research in this space. 
