Complexity of the method of block paralleling structured programs (Q1103392)

From MaRDI portal





scientific article; zbMATH DE number 4053000

    Statements

    Complexity of the method of block paralleling structured programs (English)
    1987
The paper investigates certain well-known paralleling algorithms oriented toward preparing sequential programs for efficient parallel execution on multiprocessor computing systems (MCS) with multiple instruction and data flows, such as, for example, the PS-3000 system. After application of the paralleling algorithm, the original sequential program is partitioned into fragments that are independent with respect to control and data links and that are grouped into branches (to minimize ``trigger functions'' [\textit{V. E. Kotov}, Kibernetika 1974, No.1, 1-16; No.2, 1-18 (1974; Zbl 0284.68018, Zbl 0288.68008)]) [(*)\ \textit{V. V. Ignatushchenko}, \textit{L. V. Karavanova}, Avtom. Telemekh. 1977, No.6, 176-191 (1977; Zbl 0419.68047)]. Isolating program fragments suitable for parallel execution on asynchronous MCSs is a complicated and laborious procedure that is not always feasible. To facilitate the analysis of program parallelism, this procedure can be carried out in two stages: decomposition of the program into fragments, followed by determination of the dependencies between the isolated fragments in order to establish whether they can be executed in parallel (the paralleling procedure proper). The decomposition and paralleling stages are not clearly distinguished in, e.g., (*); at the decomposition stage this leads to the need to construct a rather complex preliminary general graph. A characteristic feature of the known decomposition methods oriented toward nonstructured programs (NSP) is ``bottom-up'' program analysis, which begins with individual operators (acting as elementary program components) that are combined into larger structures, which in turn are combined into still larger structures, and so on. Since the programs that are simplest to decompose are those whose components are linked by the fewest dependencies, structured programs (SP) are of particular interest in this connection.
Indeed, in SPs control is organized according to the ``one input-one output'' principle, with a minimum of control linkages between program elements. Moreover, SPs can be decomposed by a ``top-down'' method of analysis instead of the ``bottom-up'' method used for NSPs. One can thus assume that, from the point of view of decomposition, SPs are more suitable than NSPs. The following discussion concerns only fully structured programs, whose strict definition is given below.
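The second stage described above, testing isolated fragments for mutual dependencies to decide whether they may execute in parallel, can be illustrated with a minimal sketch. The sketch models each fragment by its read and write variable sets and applies Bernstein's conditions, a standard independence test used here as an assumption; it is not the paper's own criterion, and the fragment representation and helper names are hypothetical.

```python
def independent(frag_a, frag_b):
    """Bernstein's conditions: two fragments may run in parallel iff
    neither writes a variable the other reads or writes."""
    ra, wa = frag_a["read"], frag_a["write"]
    rb, wb = frag_b["read"], frag_b["write"]
    return not (wa & rb) and not (wb & ra) and not (wa & wb)

def parallel_groups(fragments):
    """Greedily group a sequence of fragments into 'branches': a fragment
    joins the current group only if it is independent of every fragment
    already in that group; groups execute sequentially, members of a
    group in parallel."""
    groups = []
    for frag in fragments:
        if groups and all(independent(frag, g) for g in groups[-1]):
            groups[-1].append(frag)
        else:
            groups.append([frag])
    return groups
```

For example, two fragments writing disjoint variables from disjoint inputs fall into one group, while a fragment reading a variable written earlier starts a new group, mirroring the control/data-link independence the review describes.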
    paralleling algorithms
    multiprocessor computing systems
    isolation of program fragments
    analysis of program parallelism
    fully structured programs
