On efficient parallel computations for some dynamic programming problems (Q1109691)
scientific article; zbMATH DE number 4070670

    Statements

    1988
A general method for the parallelization of some dynamic programming algorithms on VLSI was presented by \textit{L. Guibas, H. Kung} and \textit{C. Thompson} [``Direct VLSI implementation of combinatorial algorithms'', in: Proc. Caltech Conf. on VLSI, 509-525 (1979)]. We present a general method of parallelization for the same class of problems on more powerful parallel computers. The method is demonstrated on three typical dynamic programming problems: computing the optimal order of matrix multiplications, the optimal binary search tree, and the optimal triangulation of polygons. For these problems the dynamic programming approach gives algorithms with a similar structure: they can be viewed as straight-line programs of size \(O(n^3)\). The general method of parallelizing such programs described by \textit{L. G. Valiant, S. Skyum, S. Berkowitz} and \textit{C. Rackoff} [SIAM J. Comput. 12, 641-644 (1983; Zbl 0524.68028)] then leads directly to algorithms working in \(\log^2 n\) time with \(O(n^9)\) processors. However, we adopt an alternative approach and show that a special feature of dynamic programming problems can be exploited: they can be thought of as generalized parsing problems, in which one seeks the tree of the optimal decomposition of the problem into smaller subproblems. A parallel pebble game on trees [see the author, in: Combinatorial algorithms on words, NATO ASI Ser., Ser. F 12, 341-356 (1985; Zbl 0578.68041), and in: Proc. Conf. on Combinatorial Analysis and its Applications (1985), publ. in Zastosow. Math. (1987)] is used to decrease the number of processors and to simplify the structure of the algorithms. We show that the dynamic programming problems considered can be computed in \(\log^2 n\) time using \(n^6/\log n\) processors on a parallel random-access machine without write conflicts (CREW P-RAM). The main operation is essentially matrix multiplication, which is easily implementable on parallel computers with a fixed interconnection network of processors.
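To illustrate the shared \(O(n^3)\) structure the abstract refers to, the following is a minimal sequential sketch of one of the three problems, the matrix chain ordering problem; the function name and the dimension array `p` are illustrative and not taken from the paper. The triply nested loop over subchain lengths, start positions, and split points is exactly the kind of straight-line program of size \(O(n^3)\) mentioned above:

```python
# Sequential O(n^3) dynamic program for the matrix chain ordering problem
# (illustrative sketch). Matrix i has shape p[i-1] x p[i], for i = 1..n.
def matrix_chain_cost(p):
    n = len(p) - 1
    INF = float("inf")
    # cost[i][j] = minimal number of scalar multiplications to compute A_i .. A_j
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # subchain length
        for i in range(1, n - length + 2):    # subchain start
            j = i + length - 1
            cost[i][j] = INF
            for k in range(i, j):             # split point of the decomposition tree
                c = cost[i][k] + cost[k + 1][j] + p[i - 1] * p[k] * p[j]
                if c < cost[i][j]:
                    cost[i][j] = c
    return cost[1][n]
```

The optimal binary search tree and polygon triangulation problems admit recurrences of the same shape, which is why the same parallelization method applies to all three.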
Hence the problems considered can also be computed in \(\log^2 n\) time using \(n^6\) processors on a perfect shuffle computer (PSC) or a cube-connected computer (CCC). An extension of the algorithm from an earlier paper of the author [Lect. Notes Comput. Sci. 208, 318-325 (1985; Zbl 0605.68077)] for the recognition of context-free languages on PSC and CCC can be used. If a parallel random-access machine with concurrent writes (CRCW P-RAM) is used, then the minimum of \(m\) numbers can be determined in constant time, and consequently the parallel time for the computation of dynamic programming problems can be reduced from \(\log^2 n\) to \(\log n\). We also investigate the parallel computation of the trees realizing the optimal cost of dynamic programming problems.
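The "essentially matrix multiplication" operation underlying these bounds can be read as a matrix product over the \((\min, +)\) semiring: the \(n^3\) entry-wise updates are mutually independent, which is what makes the operation easy to distribute over processors with a fixed interconnection network. A minimal sequential sketch (the name `minplus_product` is illustrative, not from the paper):

```python
# Matrix product over the (min, +) semiring (illustrative sketch):
# C[i][j] = min_k (A[i][k] + B[k][j]).
# Each of the n^3 candidate sums is independent of the others, so on a
# CREW P-RAM one processor per (i, j, k) triple can compute them in parallel,
# followed by a parallel min-reduction over k.
def minplus_product(A, B):
    n = len(A)
    INF = float("inf")
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c = A[i][k] + B[k][j]
                if c < C[i][j]:
                    C[i][j] = c
    return C
```

On a CRCW P-RAM the final min-reduction over the \(n\) candidates per entry can be replaced by the constant-time minimum computation mentioned above, which is the source of the improvement from \(\log^2 n\) to \(\log n\) time.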
    parallelization of some dynamic programming algorithms
    optimal order of matrix multiplications
    optimal binary search tree
    optimal triangulation of polygons
    generalized parsing
