Dynamic programming

From the D&C Theorem, we can see that a recursive algorithm is likely to be polynomial if the sum of the sizes of the subproblems is bounded by kn.   (Using the variables of that Theorem, k = a/c.)   If, however, the obvious division of a problem of size n results in n problems of size n-1, then the recursive algorithm is likely to have exponential growth.

Dynamic programming can be thought of as being the reverse of recursion.   Recursion is a top-down mechanism -- we take a problem, split it up, and solve the smaller problems that are created.   Dynamic programming is a bottom-up mechanism -- we solve all possible small problems and then combine them to obtain solutions for bigger problems.

The reason that this may be better is that, using recursion, it is possible that we may solve the same small subproblem many times.   Using dynamic programming, we solve it once.
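
For example (an illustration not in the original notes, using Fibonacci numbers rather than the matrix problem below), a top-down recursion re-solves the same subproblems many times, while the bottom-up table solves each one once:

    # Top-down recursion: the calls for n-1 and n-2 overlap, so the same
    # subproblems are solved again and again -- exponentially many calls.
    def fib_recursive(n):
        if n <= 1:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    # Bottom-up dynamic programming: solve every smaller subproblem once,
    # then combine -- linearly many steps.
    def fib_dp(n):
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_recursive(10), fib_dp(10))   # both print 55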

Evaluation of the product of n matrices

We wish to determine the value of the product M1 × M2 × ... × Mn, where Mi has r[i-1] rows and r[i] columns.

The order in which the matrices are multiplied together can significantly affect the time required.

To multiply M×N, where matrix M is p×q and matrix N is q×r, takes pqr operations if we use the "normal" matrix multiplication algorithm.   Note that the matrices have a common dimension value of q.   This makes the matrices have the property of compatibility, without which it would not be possible for them to be multiplied.   Also note that, while matrix multiplication is associative, matrix multiplication is not commutative.   That is, N×M might not equal M×N and, in fact, N×M might not even be defined because of a lack of compatibility.
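
As a sketch (names mine, not from the original notes), here is the "normal" algorithm in Python; the innermost statement executes exactly pqr times:

    def multiply(M, N):
        p, q, r = len(M), len(M[0]), len(N[0])
        assert len(N) == q              # compatibility: common dimension q
        P = [[0] * r for _ in range(p)]
        for i in range(p):              # each of the p*r entries of the product
            for j in range(r):
                for k in range(q):      # ... is an inner product of length q
                    P[i][j] += M[i][k] * N[k][j]
        return P                        # total work: p*q*r multiplications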

EXAMPLE:
Calculate M = M1 × M2 × M3 × M4,
where the dimensions of the matrices are
M1: 10,20       M2: 20,50       M3: 50,1       M4: 1,100

Calculating M = M1 × ( M2 × ( M3 × M4 ) ) requires 125000 operations.

Calculating M = ( M1 × ( M2 × M3 ) ) × M4 requires 2200 operations.
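
Tracing the pqr cost of each multiplication:

    M1 × ( M2 × ( M3 × M4 ) ):   50·1·100 + 20·50·100 + 10·20·100 = 5000 + 100000 + 20000 = 125000
    ( M1 × ( M2 × M3 ) ) × M4:   20·50·1 + 10·20·1 + 10·1·100 = 1000 + 200 + 1000 = 2200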

We could figure out how many operations each possible order would take and then use the one having the minimum number of operations, but there are an exponential number of orderings.   Any of the n-1 multiplications could be performed first, then any of the remaining n-2 multiplications could be performed next, and so on, leading to a total of (n-1)! orderings.

We can find the best order in time O(n³) by using dynamic programming.

If m[i,j] is the minimum cost of evaluating the product Mi × ... × Mj then:
m[i,j] = 0, if i = j, and
m[i,j] = MIN[i ≤ k < j] { m[i,k] + m[k+1,j] + r[i-1]·r[k]·r[j] }, if i < j.

The algorithm:

    for i := 1 to n do
       m[i,i] := 0
    for length := 1 to n-1 do
       for i := 1 to n-length do
          j := i + length
          m[i,j] := MIN[i ≤ k < j] { m[i,k] + m[k+1,j] + r[i-1]·r[k]·r[j] }

In the above listing, length refers to the number of matrix multiplications in a subproblem. An alternative approach would be to use size (equal to length+1) as the number of matrices in the subproblem.

For the example given above, we would calculate:
m[1,1] = 0,   m[2,2] = 0,   m[3,3] = 0,   m[4,4] = 0
m[1,2] = 10000,   m[2,3] = 1000,   m[3,4] = 5000
m[1,3] = 1200,   m[2,4] = 3000
m[1,4] = 2200
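
A Python sketch (function name mine) that mirrors the pseudocode above; for the example dimensions it reproduces m[1,4] = 2200:

    def matrix_chain_cost(r):
        # r[0..n] holds the dimensions: matrix Mi is r[i-1] x r[i]
        n = len(r) - 1
        # m[i][j] = minimum cost of computing Mi x ... x Mj (1-based indices)
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(1, n):              # multiplications in the subchain
            for i in range(1, n - length + 1):
                j = i + length
                m[i][j] = min(m[i][k] + m[k + 1][j] + r[i - 1] * r[k] * r[j]
                              for k in range(i, j))
        return m[1][n]

    print(matrix_chain_cost([10, 20, 50, 1, 100]))   # prints 2200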

Longest common substring problem

Given two sequences of letters, such as A = HELLO and B = ALOHA,
find the longest contiguous sequence appearing in both.

One solution:   (assume strings have lengths m and n)
For each of the m starting points of A, check for the longest common string starting at each of the n starting points of B.

The checks could average Θ(m) time → a total of Θ(m²n) time.

Dynamic programming solution:
Let L[i,j] = maximum length of common strings that end at A[i] & B[j].   Then,

        A[i] = B[j] → L[i,j] = 1 + L[i-1,j-1]
        A[i] ≠ B[j] → L[i,j] = 0

LONGEST COMMON SUBSTRING(A,m,B,n)
    for i := 0 to m do L[i,0] := 0
    for j := 0 to n do L[0,j] := 0
    len := 0
    answer := <0,0>
    for i := 1 to m do
       for j := 1 to n do
          if A[i] ≠ B[j] then
             L[i,j] := 0
          else
             L[i,j] := 1 + L[i-1,j-1]
             if L[i,j] > len then
                len := L[i,j]
                answer := <i,j>

Example:

       A L O H A

    H  0 0 0 1 0
    E  0 0 0 0 0
    L  0 1 0 0 0
    L  0 1 0 0 0
    O  0 0 2 0 0
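
A Python sketch (names mine) of the same table computation; it also recovers the matching substring from the recorded end position, and for HELLO and ALOHA it finds "LO", matching the 2 in the table above:

    def longest_common_substring(A, B):
        m, n = len(A), len(B)
        # L[i][j] = length of the longest common string ending at A[i] and B[j]
        # (1-based positions as in the pseudocode; row 0 and column 0 stay 0)
        L = [[0] * (n + 1) for _ in range(m + 1)]
        best_len, end_i = 0, 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if A[i - 1] == B[j - 1]:
                    L[i][j] = 1 + L[i - 1][j - 1]
                    if L[i][j] > best_len:
                        best_len, end_i = L[i][j], i
        return A[end_i - best_len:end_i]   # the matching substring itself

    print(longest_common_substring("HELLO", "ALOHA"))   # prints LO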

Longest common subsequence

String C is a subsequence of string A if string C can be obtained by deleting 0 or more symbols from string A.

Example:

    houseboat

    ousbo     is a subsequence of     houseboat

String C is a common subsequence of strings A and B if C is a subsequence of A and also C is a subsequence of B.

Example:

    houseboat
    computer

    out     is a common subsequence of     houseboat     and     computer

String C is a longest common subsequence (LCS) of strings A and B if C is a common subsequence of A & B and there is no other common subsequence of A & B that has greater length.

Let L[i,j] be the length of an LCS of A[1...i] & B[1...j], i.e., the prefixes of strings A & B of lengths i and j.

L[i,j] = L[i-1,j-1] + 1, if A[i] = B[j]
L[i,j] = Max{ L[i-1,j], L[i,j-1] }, if A[i] ≠ B[j]

LONGEST COMMON SUBSEQUENCE(A,m,B,n)
    for i := 0 to m do L[i,0] := 0
    for j := 0 to n do L[0,j] := 0
    for i := 1 to m do
       for j := 1 to n do
          if A[i] = B[j] then
             L[i,j] := 1 + L[i-1,j-1]
          else
             L[i,j] := Max{ L[i-1,j], L[i,j-1] }
    length := L[m,n]
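
A Python sketch (function name mine) of the same recurrence; for houseboat and computer it returns 3, the length of the common subsequence out shown above:

    def lcs_length(A, B):
        m, n = len(A), len(B)
        # L[i][j] = length of an LCS of the prefixes A[1..i] and B[1..j]
        L = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if A[i - 1] == B[j - 1]:
                    L[i][j] = 1 + L[i - 1][j - 1]
                else:
                    L[i][j] = max(L[i - 1][j], L[i][j - 1])
        return L[m][n]

    print(lcs_length("houseboat", "computer"))   # prints 3 ("out" is one LCS)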


Dan Hirschberg
Last modified: Oct 14, 2005