Revisiting Sparse Dynamic Programming for the 0/1 Knapsack Problem

Nirmal Prajapati, Los Alamos National Laboratory, Los Alamos, New Mexico, USA (prajapati@lanl.gov)
Sanjay Rajopadhye, Department of Computer Science, Colorado State University, Fort Collins, Colorado, USA (Sanjay.Rajopadhye@colostate.edu)
Tarequl Islam Sifat, Corespeq Inc, Fort Collins, Colorado, USA (Tarequl.Sifat@colostate.edu)
0/1 Knapsack Problem Statement

- Given a set of N items numbered 1 to N, each with a weight w_i and a profit p_i, along with a maximum capacity C, we must

  maximize    Σ_{i=1}^{N} p_i·x_i
  subject to  Σ_{i=1}^{N} w_i·x_i ≤ C,   with x_i ∈ {0, 1}
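To make the objective concrete, here is a minimal brute-force sketch (not from the talk; the instance values, names, and output format are illustrative) that enumerates all 2^N item subsets and keeps the most profitable one that fits:

#include <stdio.h>

/* Brute-force 0/1 knapsack for a tiny instance: enumerate all 2^N subsets
 * and keep the best total profit whose total weight fits in C.
 * Only meant to illustrate the objective; values are illustrative. */
int main(void) {
    const int N = 4, C = 10;
    const int w[] = {2, 3, 5, 7};
    const int p[] = {3, 4, 10, 13};
    int best = 0;
    for (int s = 0; s < (1 << N); s++) {      /* bit i of s: take item i or not */
        int wt = 0, pr = 0;
        for (int i = 0; i < N; i++)
            if (s & (1 << i)) { wt += w[i]; pr += p[i]; }
        if (wt <= C && pr > best) best = pr;
    }
    printf("optimal profit = %d\n", best);
    return 0;
}

This exhaustive search is exponential in N; the rest of the talk is about the dynamic programming alternatives.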
Sparse DP Algorithm for Solving the 0/1 Knapsack Problem

- A "sparse" KPDP algorithm (SKPDP) has been known for some time.
- The conventional KPDP algorithm generates a DP table that contains many repeated values.
- SKPDP does not compute the repeated values in the DP table.
- So far there has been no quantitative analysis of its benefits.
Contributions

- Quantitative analysis of sequential SKPDP
- Exploration of two parallelization techniques for SKPDP and their performance analysis
- Comparison of SKPDP with branch-and-bound
Problem Instance Generation for Quantitative Analysis

- Uncorrelated: p_i = random(R)
- Weakly correlated: p_i = β·w_i + random(R)
- Strongly correlated: p_i = β·w_i + γ  [hardest problem instances]
- Subset sum: p_i = w_i

where β and γ are constants, p_i is the profit and w_i the weight of item i, and random(R) denotes a value drawn at random from a fixed range R.
Punch Line

For KP instances whose capacity is significantly larger than the number of items (C >> N):

- If the problem instance is weakly correlated, the operation count of the SKPDP algorithm is invariant with respect to the capacity C.
- If the problem instance is strongly correlated, the operation count of SKPDP is exponentially smaller than that of KPDP.
[Plots: SKPDP vs. KPDP operation counts for weakly correlated and strongly correlated instances]
Dynamic Programming Solution

M(k, c) = M(k-1, c)                                   if w_k > c
M(k, c) = max( M(k-1, c), M(k-1, c - w_k) + p_k )     otherwise

/* M[0..N][0..C]; weights[] and profits[] are 1-indexed */
for (c = 0; c <= C; c++) M[0][c] = 0;                 /* base case: no items */
for (k = 1; k <= N; k++) {
    for (c = 0; c <= C; c++) {
        if (c < weights[k])
            M[k][c] = M[k-1][c];                      /* item k does not fit */
        else
            M[k][c] = MAX(M[k-1][c], M[k-1][c - weights[k]] + profits[k]);
    }
}
Example Problem Instance

N = 5, C = 11

Item No.   Profit   Weight
   1           1        1
   2           6        2
   3          18        5
   4          22        6
   5          28        7
Dynamic Programming Table

M(k, c) = M(k-1, c)                                   if w_k > c
M(k, c) = max( M(k-1, c), M(k-1, c - w_k) + p_k )     otherwise

Item k (weight, profit) \ capacity c:   0   1   2   3   4   5   6   7   8   9  10  11
0                                       0   0   0   0   0   0   0   0   0   0   0   0
1 (1, 1)                                0   1   1   1   1   1   1   1   1   1   1   1
2 (2, 6)                                0   1   6   7   7   7   7   7   7   7   7   7
3 (5, 18)                               0   1   6   7   7  18  19  24  25  25  25  25
4 (6, 22)                               0   1   6   7   7  18  22  24  28  29  29  40
5 (7, 28)                               0   1   6   7   7  18  22  28  29  34  35  40
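For completeness, a runnable sketch of this computation for the example instance (the array layout and MAX macro are my own choices) prints the rows above and reports the optimal profit M[5][11] = 40:

#include <stdio.h>
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Reproduces the DP table above for the 5-item example with C = 11. */
int main(void) {
    const int N = 5, C = 11;
    const int weights[] = {0, 1, 2, 5, 6, 7};    /* 1-indexed, index 0 unused */
    const int profits[] = {0, 1, 6, 18, 22, 28};
    int M[6][12];
    for (int c = 0; c <= C; c++) M[0][c] = 0;
    for (int k = 1; k <= N; k++)
        for (int c = 0; c <= C; c++)
            M[k][c] = (c < weights[k])
                        ? M[k-1][c]
                        : MAX(M[k-1][c], M[k-1][c - weights[k]] + profits[k]);
    for (int k = 0; k <= N; k++) {               /* print the table row by row */
        for (int c = 0; c <= C; c++) printf("%3d", M[k][c]);
        printf("\n");
    }
    printf("optimal profit = %d\n", M[N][C]);    /* 40 for this instance */
    return 0;
}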
Memory-Efficient KPDP

- Only the current row of the DP table is needed to compute the next row (a sketch follows this list).
- The whole table does not have to be stored.
- This yields the optimal profit value.
- The exact solution, including which items are taken, can still be recovered with a divide-and-conquer strategy.
- The divide-and-conquer strategy doubles the number of computations, to 2NC.
- It reduces the memory requirement by a factor of N/2.
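A minimal sketch of the reduced-memory, profit-only computation (the divide-and-conquer recovery of the chosen items is not shown). The slide keeps the previous row; an equivalent standard variant keeps a single array and sweeps capacities downward, so that M[c - weights[k]] still holds the previous item's value when it is read:

/* O(C) memory: one row, swept from high to low capacity.
 * weights[] and profits[] are 1-indexed, as in the slide's code. */
int knapsack_profit(int N, int C, const int weights[], const int profits[]) {
    int M[C + 1];                     /* VLA; assumes C is modest, else malloc */
    for (int c = 0; c <= C; c++) M[c] = 0;
    for (int k = 1; k <= N; k++)
        for (int c = C; c >= weights[k]; c--)
            if (M[c - weights[k]] + profits[k] > M[c])
                M[c] = M[c - weights[k]] + profits[k];
    return M[C];                      /* optimal profit only, no item list */
}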
"Sparsity" in the Current Context
Instead of storing every entry of a row, each row is represented by the list of <weight, profit> pairs at which the profit changes; all other entries of the dense table above are repeats of the value to their left.

Item 0:          <0,0>
Item 1 (1,1):    <0,0>, <1,1>
Item 2 (2,6):    <0,0>, <1,1>, <2,6>, <3,7>
Item 3 (5,18):   <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>
Item 4 (6,22):   <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,24>, <8,28>, <9,29>, <11,40>
Item 5 (7,28):   <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,28>, <8,29>, <9,34>, <10,35>, <11,40>
Building the Sparse Table: Add-Merge-Kill

When we include the 4th item (weight 6, profit 22) in our choice:

Row for items {1,2,3}:
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>

Add: add (6,22) to each pair of the row; shifted pairs whose weight exceeds C = 11 (namely <12,41>, <13,46>, <14,47>) are dropped:
<6,22>, <7,23>, <8,28>, <9,29>, <11,40>

Merge: merge the original and the shifted lists, keeping them sorted by weight.

Kill: remove every dominated pair, i.e. any pair whose profit does not exceed that of a pair with equal or smaller weight; here <6,19>, <7,23>, and <8,25> are killed.

Row for items {1,2,3,4}:
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,24>, <8,28>, <9,29>, <11,40>
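A sketch of one add-merge-kill row update (the Pair type, function name, and array handling are mine, not the authors' code): given the sparse row for the first k-1 items, sorted by weight with strictly increasing profits, it produces the sparse row after item k is considered.

#include <stdlib.h>

typedef struct { int w, p; } Pair;   /* a <weight, profit> entry of a sparse row */

/* One SKPDP row update: combine the previous sparse row `old` with the same
 * row shifted by item k (weight wk, profit pk), dropping shifted pairs that
 * exceed the capacity C and killing dominated pairs on the fly.
 * Returns a newly allocated row; its length is written to *out_n. */
Pair *add_merge_kill(const Pair *old, int n, int wk, int pk, int C, int *out_n) {
    Pair *res = malloc((size_t)(2 * n) * sizeof *res);
    int i = 0, j = 0, m = 0;
    /* i walks the old row, j walks the shifted row old[j] + (wk, pk) */
    while (i < n || j < n) {
        Pair cand;
        int shifted_w = (j < n) ? old[j].w + wk : C + 1;
        if (j >= n || shifted_w > C || (i < n && old[i].w <= shifted_w)) {
            if (i >= n) break;               /* only over-capacity shifted pairs left */
            cand = old[i++];                 /* next pair from the old row (Merge) */
        } else {
            cand.w = shifted_w;              /* next shifted pair (Add) */
            cand.p = old[j++].p + pk;
        }
        if (m > 0 && cand.p <= res[m - 1].p) continue;            /* dominated: Kill */
        if (m > 0 && cand.w == res[m - 1].w) res[m - 1] = cand;   /* same weight, keep higher profit */
        else res[m++] = cand;
    }
    *out_n = m;
    return res;
}

Calling it with the row for {1,2,3}, wk = 6, pk = 22, and C = 11 reproduces the row for {1,2,3,4} shown above.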
Generation of Problem Instances

- The fraction of objects that fit in the knapsack on average is 1/α, i.e. the average weight is w_avg = αC/N.
- The weights w_i are drawn from a normal distribution with mean w_avg.
- For weakly correlated problem instances, the correlation between weights and profits is controlled by a noise factor r:
  p_i = β·w_i + (random number between -r·w_avg and r·w_avg)
- For strongly correlated problem instances r is irrelevant:
  p_i = β·w_i + γ
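A sketch of the generator described above (α, β, γ, and r follow the slide; the standard deviation of the weights, the use of Box-Muller, and all names are my own assumptions):

#include <math.h>
#include <stdlib.h>

/* Normal deviate via Box-Muller; the standard deviation is an assumption,
 * the slide only fixes the mean of the weight distribution. */
static double normal(double mean, double stddev) {
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return mean + stddev * sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

/* Uniform value in [-x, x], used as the profit noise. */
static double uniform_pm(double x) {
    return x * (2.0 * rand() / (double)RAND_MAX - 1.0);
}

/* Fill w[] and p[] (each of size N) for a knapsack of capacity C.
 * alpha fixes the average weight (w_avg = alpha*C/N), r the noise level,
 * beta and gam the weight-profit correlation; strongly != 0 ignores r. */
void generate_instance(int N, double C, double alpha, double r,
                       double beta, double gam, int strongly,
                       double w[], double p[]) {
    double w_avg = alpha * C / N;
    for (int i = 0; i < N; i++) {
        w[i] = normal(w_avg, 0.1 * w_avg);           /* stddev 0.1*w_avg assumed */
        p[i] = strongly ? beta * w[i] + gam
                        : beta * w[i] + uniform_pm(r * w_avg);
    }
}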
Gain

Gain = 1 - (iteration count of SKPDP) / (iteration count of KPDP, which is 2NC)

The range of Gain is (-1, 1).

- A gain close to 1 means that the number of iterations in SKPDP is insignificant compared to KPDP.
- A gain close to -1 means that we have hit the worst-case scenario for SKPDP.
[Plot: gain of SKPDP over KPDP; α = 2, r = 0.1%]
SKPDP vs. KPDP

[Plots: operation counts of SKPDP and KPDP for weakly correlated and strongly correlated instances]
Impact of r on the Sparsity

[Plot: sparsity as a function of the noise factor r, with α = 2; p = β·w + random number in [-r·w_avg, r·w_avg]]
Impact of α on the Sparsity

[Plot: sparsity as a function of α, with r = 0.1%; w_avg = αC/N]
Impact of α and r on the Sparsity

[Plot: sparsity over 2^12 < C < 2^50, with N = 2^10]
Parallelization of SKPDP

- Fine-grained parallelization
- Coarse-grained parallelization
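The slides only name the two schemes here; purely as an illustrative sketch (not necessarily the authors' approach), the "add" step of a row update is embarrassingly parallel, since each shifted pair depends on exactly one pair of the previous row. Reusing the Pair type from the earlier sketch, with OpenMP:

#include <omp.h>

/* Illustrative fine-grained parallelism within one row update: build the
 * shifted list (the "add" step) in parallel; the merge and kill steps are
 * left sequential here. This is a sketch, not the scheme evaluated in the talk. */
void add_step_parallel(const Pair *old, int n, int wk, int pk, int C,
                       Pair *shifted, int *n_shifted) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        shifted[i].w = old[i].w + wk;    /* weight if item k is also taken */
        shifted[i].p = old[i].p + pk;    /* profit if item k is also taken */
    }
    /* the pairs stay sorted by weight; trim those exceeding the capacity */
    int m = n;
    while (m > 0 && shifted[m - 1].w > C) m--;
    *n_shifted = m;
}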