
10. Dynamic Programming(2)

by sonpang 2021. 12. 17.

Dynamic Programming

 

01. Dynamic programming

  • is a technique for solving problems with overlapping subproblems.
  • Typically, these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type.
  • Dynamic programming suggests solving each smaller subproblem once and recording the results in a table from which a solution to the original problem can then be obtained.

Example: Fibonacci numbers

Defined by the simple recurrence F(n) = F(n−1) + F(n−2) for n > 1

Two initial conditions: F(0) = 0 and F(1) = 1

Computing the nth Fibonacci number recursively (top-down) : overlapping subproblems!

Computing the nth Fibonacci number using bottom-up iteration and recording results:
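The bottom-up computation can be sketched in Python (the post itself shows no code; this is a minimal illustration of filling the table left to right):

```python
def fib(n):
    """Bottom-up Fibonacci: record F(0)..F(n) in a table, each computed once."""
    if n == 0:
        return 0
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # each subproblem solved exactly once
    return f[n]
```

Unlike the naive top-down recursion, this runs in linear time because no subproblem is ever recomputed.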

Whether one uses the classical bottom-up version or its top-down variation, 

  • the crucial step in designing a dynamic programming algorithm remains the same: deriving a recurrence relating a solution to the problem to solutions to its smaller subproblems.

Since a majority of dynamic programming applications deal with optimization problems,

  • we also need to mention a general principle that underlies such applications.
  • principle of optimality (by Richard Bellman): an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.

 

02. Basic examples_Coin-row problem

Problem

  • There is a row of 𝑛 coins whose values are some positive integers 𝑐1, 𝑐2, …, 𝑐𝑛, not necessarily distinct. Pick up the maximum amount of money subject to the constraint that no two adjacent coins can be picked.
  • Let 𝐹(𝑛) be the maximum amount that can be picked up from the row of 𝑛 coins

 

The recurrence: 𝐹(𝑛) = max(𝑐𝑛 + 𝐹(𝑛−2), 𝐹(𝑛−1)) for 𝑛 > 1, with 𝐹(0) = 0 and 𝐹(1) = 𝑐1

We can compute 𝐹(𝑛) by filling the one-row table left to right

To find which coins yield the maximum total value, backtracing information should be recorded in an extra array
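A Python sketch of this scheme, including the extra backtrace array (the array name `take` is mine, not from the post):

```python
def coin_row(coins):
    """Coin-row problem: max amount with no two adjacent coins picked.
    Returns (max amount, 0-based indices of the picked coins)."""
    n = len(coins)
    F = [0] * (n + 1)
    take = [False] * (n + 1)  # backtrace: was coin i taken in the optimum for the first i coins?
    if n >= 1:
        F[1] = coins[0]
        take[1] = True
    for i in range(2, n + 1):
        with_i = coins[i - 1] + F[i - 2]   # pick coin i, skip coin i-1
        if with_i > F[i - 1]:
            F[i], take[i] = with_i, True
        else:
            F[i] = F[i - 1]                # skip coin i
    # walk the backtrace array to recover the coins themselves
    picked, i = [], n
    while i >= 1:
        if take[i]:
            picked.append(i - 1)
            i -= 2
        else:
            i -= 1
    return F[n], picked[::-1]
```

For the row 5, 1, 2, 10, 6, 2 the optimum is 17, picking the coins 5, 10, and 2.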

 

 

03. Basic examples_Change-making problem

Problem

  • Give change for amount 𝑛 using the minimum number of coins of denominations 𝑑1 < 𝑑2 < ⋯ < 𝑑𝑚.
  • Assuming availability of unlimited quantities of coins for each of the 𝑚 denominations 𝑑1 < 𝑑2 < ⋯ < 𝑑𝑚 where 𝑑1 = 1

 

Let 𝐹(𝑛) be the minimum number of coins whose values add up to 𝑛

 

The recurrence: 𝐹(𝑛) = min{𝐹(𝑛 − 𝑑𝑗) : 𝑑𝑗 ≤ 𝑛} + 1 for 𝑛 > 0, with 𝐹(0) = 0

Space efficiency : Θ(𝑛)

Time efficiency : 𝑂(𝑛𝑚)
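The recurrence translates almost directly into Python; this sketch assumes, as above, that 𝑑1 = 1 so every amount is reachable:

```python
def change_making(n, denoms):
    """Minimum number of coins of the given denominations summing to n.
    Assumes 1 is among the denominations, so a solution always exists."""
    F = [0] * (n + 1)
    for amount in range(1, n + 1):
        # try every denomination that fits; O(m) work per amount -> O(nm) total
        F[amount] = min(F[amount - d] for d in denoms if d <= amount) + 1
    return F[n]
```

For example, with denominations 1, 3, 4 the amount 6 needs only two coins (3 + 3), which a greedy choice (4 + 1 + 1) would miss.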

 

 

04. Basic examples_Coin-collecting problem

Problem

  • Several coins are placed in cells of an 𝑛 × 𝑚 board, no more than one coin per cell
  • A robot, located in the upper left cell of the board, needs to collect as many of the coins as possible and bring them to the bottom right cell
  • On each step, the robot can move either one cell to the right or one cell down from its current location
  • The goal is to find the maximum number of coins the robot can collect and a path it needs to follow to do this

Let 𝐹(𝑖, 𝑗) be the largest number of coins the robot can collect and bring to the cell (𝑖, 𝑗) in the 𝑖th row and jth column of the board

 

The recurrence: 𝐹(𝑖, 𝑗) = max(𝐹(𝑖−1, 𝑗), 𝐹(𝑖, 𝑗−1)) + 𝑐𝑖𝑗, where 𝑐𝑖𝑗 = 1 if there is a coin in cell (𝑖, 𝑗) and 0 otherwise, with 𝐹(0, 𝑗) = 0 and 𝐹(𝑖, 0) = 0

 

Cell (𝑖 − 1, 𝑗) → Cell (𝑖, 𝑗) : move down

Cell (𝑖, 𝑗 − 1) → Cell (𝑖, 𝑗) : move right

Space efficiency : Θ(𝑛𝑚)

Time efficiency : Θ(𝑛𝑚)
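A minimal Python sketch of the table-filling step (the board is given as a 0/1 matrix; padding with a zero row and column handles the boundary conditions):

```python
def coin_collecting(board):
    """Max coins a robot can collect moving only right/down from the
    upper-left to the lower-right cell. board[i][j] is 1 if cell (i, j)
    holds a coin, else 0."""
    n, m = len(board), len(board[0])
    # F has an extra row 0 and column 0 of zeros for the boundary cases
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # best of arriving from above (move down) or from the left (move right)
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + board[i - 1][j - 1]
    return F[n][m]
```

The optimal path itself can be recovered by retracing from 𝐹(𝑛, 𝑚): at each cell, step back to whichever neighbor (above or left) supplied the maximum.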

 

05. Knapsack problem

Problem

  • Given 𝑛 items of known weights 𝑤1, 𝑤2, …, 𝑤𝑛 and values 𝑣1, 𝑣2, …, 𝑣𝑛 and a knapsack of capacity 𝑊,
  • find the most valuable subset of the items that fit into the knapsack (previously solved by exhaustive search)

2021.11.16 - [학부공부/Algorithm_알고리즘] - 04. Brute-force Search(2)

 


Let 𝐹(𝑖, 𝑗) be the value of an optimal solution to this instance, i.e., the value of the most valuable subset of the first 𝑖 items, 1 ≤ 𝑖 ≤ 𝑛 that fit into the knapsack of capacity 𝑗, 1 ≤ 𝑗 ≤ W.

 

The recurrence: 𝐹(𝑖, 𝑗) = max(𝐹(𝑖−1, 𝑗), 𝑣𝑖 + 𝐹(𝑖−1, 𝑗 − 𝑤𝑖)) if 𝑗 ≥ 𝑤𝑖, and 𝐹(𝑖, 𝑗) = 𝐹(𝑖−1, 𝑗) if 𝑗 < 𝑤𝑖, with 𝐹(0, 𝑗) = 0 and 𝐹(𝑖, 0) = 0

Space efficiency : Θ(𝑛𝑊)

Time efficiency : Θ(𝑛𝑊)
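The bottom-up table-filling version can be sketched in Python as follows (a plain illustration of the recurrence, not an optimized implementation):

```python
def knapsack(weights, values, W):
    """0/1 knapsack, bottom-up.
    F[i][j] = value of the best subset of the first i items with capacity j."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            F[i][j] = F[i - 1][j]  # case 1: item i is left out
            if weights[i - 1] <= j:
                # case 2: item i is included
                F[i][j] = max(F[i][j], values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][W]
```

For the instance with weights (2, 1, 3, 2), values (12, 10, 20, 15) and 𝑊 = 5, the optimum is 37, taking items 1, 2, and 4.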

 

06. Memory Functions

Solutions to problems with overlapping subproblems

  • The direct top-down approach leads to an algorithm that solves common subproblems more than once and hence is very inefficient.
  • The classic dynamic programming approach works bottom up: it fills a table with solutions to all smaller subproblems, but each of them is solved only once. An unsatisfying aspect of this approach is that solutions to some of these smaller subproblems are often not necessary for getting a solution to the given problem. Since this drawback is not present in the top-down approach, it is natural to try to combine the strengths of the top-down and bottom-up approaches.
  • The goal is to get a method that solves only subproblems that are necessary and does so only once. : Such a method exists; it is based on using memory functions.

This method solves a given problem in the top-down manner,

  • but, in addition, maintains a table of the kind that would have been used by a bottom-up dynamic programming algorithm.
  • Initially, all the table’s entries are initialized with a special “null” symbol to indicate that they have not yet been calculated.
  • Thereafter, whenever a new value needs to be calculated, the method checks the corresponding entry in the table first: if this entry is not “null,” it is simply retrieved from the table; otherwise, it is computed by the recursive call, whose result is then recorded in the table

 

07. Knapsack problem AGAIN

Only 11 out of 20 nontrivial values (i.e., not those in row 0 or in column 0) have been computed

 

08. Floyd’s Algorithm: All pairs shortest paths Problem

Problem

  • Given a weighted connected graph (undirected or directed), find shortest paths between every pair of vertices

Idea

  • Floyd’s algorithm computes the distance matrix of a weighted graph with 𝑛 vertices through a series of 𝑛 × 𝑛 matrices:

  • the element d_ij(𝑘) in the 𝑖th row and the 𝑗th column of matrix 𝐷(𝑘) is equal to the length of the shortest path among all paths from the 𝑖th vertex to the 𝑗th vertex with each intermediate vertex, if any, numbered not higher than 𝑘
  • 𝐷(0) : the weight matrix of the graph
  • 𝐷(n) : the distance matrix
  • The algorithm computes all the elements of each matrix 𝐷(𝑘) from its immediate predecessor 𝐷(𝑘−1)

 

The recurrence: d_ij(𝑘) = min(d_ij(𝑘−1), d_ik(𝑘−1) + d_kj(𝑘−1)) for 𝑘 ≥ 1, with d_ij(0) = w_ij

The time efficiency is cubic: Θ(𝑛³)
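A minimal Python sketch of Floyd’s algorithm, using `float("inf")` where no edge exists and updating the matrix in place over 𝑘:

```python
INF = float("inf")  # stands in for "no edge"

def floyd(W):
    """All-pairs shortest paths. W is the n x n weight matrix (INF = no edge).
    Successively allows vertices 0..k-1 as intermediates."""
    n = len(W)
    d = [row[:] for row in W]  # D(0) = the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # is the path through vertex k shorter than the best so far?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d  # D(n) = the distance matrix
```

The three nested loops make the cubic running time explicit.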

 

09. Warshall’s algorithm

Problem

  • Find the transitive closure of a directed graph: a matrix containing the information about the existence of directed paths of arbitrary lengths between vertices of a given graph
  • Very similar to Floyd’s algorithm, with min/+ replaced by or/and: R(𝑘)[i, j] = R(𝑘−1)[i, j] or (R(𝑘−1)[i, k] and R(𝑘−1)[k, j])
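The correspondence with Floyd’s algorithm shows in code: the same triple loop, with the boolean recurrence in place of the numeric one:

```python
def warshall(A):
    """Transitive closure of a digraph given by its 0/1 adjacency matrix A.
    R[i][j] becomes 1 iff there is a directed path from i to j."""
    n = len(A)
    R = [row[:] for row in A]  # R(0) = the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # a path i -> j exists if one already did, or one runs through k
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Like Floyd’s algorithm, this uses Θ(𝑛²) space and Θ(𝑛³) time.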

 

 
