

  1. Revisiting Sparse Dynamic Programming for the 0/1 Knapsack Problem
Nirmal Prajapati (prajapati@lanl.gov), Sanjay Rajopadhye (Sanjay.Rajopadhye@colostate.edu), Tarequl Islam Sifat (Tarequl.Sifat@colostate.edu)
Los Alamos National Laboratory, Los Alamos, New Mexico, USA; Department of Computer Science, Colorado State University, Fort Collins, Colorado, USA; Corespeq Inc, Fort Collins, Colorado, USA

  2. 0/1 Knapsack Problem Statement
◼ Given a set of N items numbered from 1 up to N, each with a weight w_i and a profit p_i, along with a maximum capacity C, we must

maximize    Σ_{i=1}^{N} p_i · x_i
subject to  Σ_{i=1}^{N} w_i · x_i ≤ C,  with x_i ∈ {0, 1}
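To make the formulation concrete, here is a small brute-force C sketch (mine, not from the slides) that enumerates all 2^N choices of the x_i and keeps the best feasible one; it is only practical for tiny N, which is exactly why the dynamic-programming formulations on the following slides matter. It uses the 5-item instance that appears later in the slides.

    #include <stdio.h>

    /* Brute force over all 2^n subsets: maximize total profit subject to
       total weight <= cap. Only feasible for small n. */
    int knapsack_brute_force(int n, int cap, const int w[], const int p[]) {
        int best = 0;
        for (unsigned mask = 0; mask < (1u << n); mask++) {
            int weight = 0, profit = 0;
            for (int i = 0; i < n; i++)
                if (mask & (1u << i)) { weight += w[i]; profit += p[i]; }
            if (weight <= cap && profit > best) best = profit;
        }
        return best;
    }

    int main(void) {
        /* The example instance used later in the slides (N = 5, C = 11). */
        int w[] = {1, 2, 5, 6, 7}, p[] = {1, 6, 18, 22, 28};
        printf("%d\n", knapsack_brute_force(5, 11, w, p));  /* prints 40 */
        return 0;
    }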

  3. Sparse DP Algorithm for Solving the 0/1 Knapsack Problem
◼ A "sparse" KPDP algorithm (SKPDP) has been known for a while.
◼ The conventional KPDP algorithm generates a DP table that contains many repeated values.
◼ SKPDP does not compute the repeated values in the DP table.
◼ So far there has been no quantitative analysis of its benefits.

  4. Contributions
◼ Quantitative analysis of sequential SKPDP
◼ Exploration of two parallelization techniques for SKPDP and their performance analysis
◼ Comparison of SKPDP with branch-and-bound

  5. Problem Instance Generation for Quantitative Analysis
◼ Uncorrelated: p_i = random(range)
◼ Weakly correlated: p_i = β·w_i + random(range)
◼ Strongly correlated: p_i = β·w_i + γ  [hardest problem instances]
◼ Subset sum: p_i = w_i
where β, γ are constants, p_i is the profit and w_i is the weight of each item.

  6. Punch Line
◼ For KP instances whose capacity is significantly larger than the number of items (C >> N):
◼ If the problem instance is weakly correlated, the operation count of the SKPDP algorithm is invariant with respect to the capacity C.
◼ If the problem instance is strongly correlated, the operation count of the SKPDP algorithm is exponentially smaller than that of KPDP.

  7. Punch Line
[Plots: operation counts for weakly correlated instances and strongly correlated instances]

  8. Dynamic Programming Solution

M[k, c] = M[k−1, c]                                   if w_k > c
M[k, c] = max( M[k−1, c], M[k−1, c − w_k] + p_k )     otherwise

for ( k = 1 ; k <= N ; k++ ) {
    for ( c = 0 ; c <= C ; c++ ) {
        if ( c < weights[k] )
            M[k][c] = M[k-1][c];
        else
            M[k][c] = MAX( M[k-1][c], M[k-1][c-weights[k]] + profits[k] );
    }
}

  9. Example Problem Instance
N = 5, C = 11

Item No.   Profit   Weight
   1          1        1
   2          6        2
   3         18        5
   4         22        6
   5         28        7

  10. Dynamic Programming Table

M[k, c] = M[k−1, c]                                   if w_k > c
M[k, c] = max( M[k−1, c], M[k−1, c − w_k] + p_k )     otherwise

k  Item (weight, profit)   c = 0  1  2  3  4  5  6  7  8  9 10 11
0  (none)                      0  0  0  0  0  0  0  0  0  0  0  0
1  (1, 1)                      0  1  1  1  1  1  1  1  1  1  1  1
2  (2, 6)                      0  1  6  7  7  7  7  7  7  7  7  7
3  (5, 18)                     0  1  6  7  7 18 19 24 25 25 25 25
4  (6, 22)                     0  1  6  7  7 18 22 24 28 29 29 40
5  (7, 28)                     0  1  6  7  7 18 22 28 29 34 35 40
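For concreteness, here is a small, self-contained C program (mine, not from the slides) that applies the loop from slide 8 to this instance; it fills the same table and prints the optimal profit M[N][C] = 40.

    #include <stdio.h>

    #define N 5
    #define C 11

    int main(void) {
        /* Example instance from the slides: item k has weights[k], profits[k] (1-indexed). */
        int weights[N + 1] = {0, 1, 2, 5, 6, 7};
        int profits[N + 1] = {0, 1, 6, 18, 22, 28};
        int M[N + 1][C + 1];

        /* Base case: with 0 items, the best profit is 0 for every capacity. */
        for (int c = 0; c <= C; c++) M[0][c] = 0;

        /* Fill the table row by row using the recurrence on slide 8. */
        for (int k = 1; k <= N; k++) {
            for (int c = 0; c <= C; c++) {
                if (c < weights[k]) {
                    M[k][c] = M[k - 1][c];                      /* item k does not fit */
                } else {
                    int with    = M[k - 1][c - weights[k]] + profits[k];
                    int without = M[k - 1][c];
                    M[k][c] = with > without ? with : without;  /* take the better choice */
                }
            }
        }

        printf("Optimal profit: %d\n", M[N][C]);  /* prints 40 for this instance */
        return 0;
    }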

  11. Memory-Efficient KPDP
◼ Only the current row of the DP table is needed to compute the next row
◼ The whole table therefore does not have to be stored (a sketch of this single-row pass follows below)
◼ This way we can find the optimal profit value
◼ The exact solution, including which items are taken in the optimal solution, can still be recovered by using a divide-and-conquer strategy
◼ The divide-and-conquer strategy doubles the number of computations, to 2NC
◼ It reduces the memory requirement by a factor of N/2
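The slides do not show code for this. One standard way to realize the single-row idea (a minimal sketch with my own names, not necessarily the exact code behind the slides) keeps one array of size C+1 and sweeps capacities from high to low so that each item is counted at most once:

    #include <stdio.h>

    /* Memory-efficient forward pass: returns the optimal profit using O(C) memory.
       weights/profits are 1-indexed over n items; best[] plays the role of one DP row. */
    int knapsack_1row(int n, int cap, const int weights[], const int profits[], int best[]) {
        for (int c = 0; c <= cap; c++) best[c] = 0;          /* row for "no items" */
        for (int k = 1; k <= n; k++) {
            /* Sweep c downwards so best[c - weights[k]] still refers to the previous row. */
            for (int c = cap; c >= weights[k]; c--) {
                int with = best[c - weights[k]] + profits[k];
                if (with > best[c]) best[c] = with;
            }
        }
        return best[cap];
    }

    int main(void) {
        int weights[] = {0, 1, 2, 5, 6, 7};
        int profits[] = {0, 1, 6, 18, 22, 28};
        int best[12];
        printf("Optimal profit: %d\n", knapsack_1row(5, 11, weights, profits, best));  /* 40 */
        return 0;
    }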

  12. "Sparsity" in the current context

  13. "Sparsity" in the current context

k  Item (weight, profit)   c = 0  1  2  3  4  5  6  7  8  9 10 11
0  (none)                      0  0  0  0  0  0  0  0  0  0  0  0
1  (1, 1)                      0  1  1  1  1  1  1  1  1  1  1  1
2  (2, 6)                      0  1  6  7  7  7  7  7  7  7  7  7
3  (5, 18)                     0  1  6  7  7 18 19 24 25 25 25 25
4  (6, 22)                     0  1  6  7  7 18 22 24 28 29 29 40
5  (7, 28)                     0  1  6  7  7 18 22 28 29 34 35 40

The same information stored sparsely, as the <weight, profit> points where each row's value changes:

k  Item (weight, profit)   Sparse list
0  (none)                  <0,0>
1  (1, 1)                  <0,0>, <1,1>
2  (2, 6)                  <0,0>, <1,1>, <2,6>, <3,7>
3  (5, 18)                 <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>
4  (6, 22)                 <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,24>, <8,28>, <9,29>, <11,40>
5  (7, 28)                 <0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,28>, <8,29>, <9,34>, <10,35>, <11,40>
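A useful way to see the correspondence: the dense value M[k][c] equals the profit of the last pair in the k-th sparse list whose weight is at most c. A tiny C helper (name and signature mine) that performs this lookup:

    typedef struct { int w, p; } Pair;   /* one <weight, profit> entry of a sparse list */

    /* Return the dense-table value M[k][c] from the k-th sparse list:
       the profit of the last pair whose weight is <= c.
       The list is sorted by weight, so a linear (or binary) scan suffices. */
    int lookup(const Pair *list, int len, int c) {
        int value = 0;
        for (int i = 0; i < len && list[i].w <= c; i++)
            value = list[i].p;
        return value;
    }

For example, on the list for items {1,2,3}, lookup at c = 6 returns 19 and at c = 9 returns 25, matching row 3 of the dense table above.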

  14. Building the Sparse Table: Add-Merge-Kill
When we include the 4th item (Weight: 6, Profit: 22) in our choice, the list for items {1,2,3}
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>
must become the list for items {1,2,3,4}
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,24>, <8,28>, <9,29>, <11,40>

  15. Building the Sparse Table: Add-Merge-Kill
Start from the list for items {1,2,3}:
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>

  16. Building the Sparse Table: Add-Merge-Kill
Add (6,22) to each pair:
<6,22>, <7,23>, <8,28>, <9,29>, <11,40>, <12,41>, <13,46>, <14,47>

  17. Building the Sparse Table: Add-Merge-Kill
Drop the shifted pairs whose weight exceeds the capacity C = 11:
<6,22>, <7,23>, <8,28>, <9,29>, <11,40>

  18. Building the Sparse Table: Add-Merge-Kill
Merge the original list with the shifted list (both are sorted by weight):
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,19>, <7,24>, <8,25>
<6,22>, <7,23>, <8,28>, <9,29>, <11,40>

  19. Building the Sparse Table: Add-Merge-Kill
Kill the dominated pairs (a pair is dominated if another pair of no greater weight has at least as much profit), which yields the list for items {1,2,3,4}:
<0,0>, <1,1>, <2,6>, <3,7>, <5,18>, <6,22>, <7,24>, <8,28>, <9,29>, <11,40>

  20. Building the Sparse Table: Add-Merge-Kill
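The slides do not give code for this pass, so here is a minimal C sketch of one add-merge-kill step as I read it from the walkthrough above; the Pair type, the array-based lists, and the function name add_merge_kill are my own.

    #include <stdio.h>

    typedef struct { int w, p; } Pair;   /* one <weight, profit> entry of a sparse list */

    /* One add-merge-kill step: merge the old list with the old list shifted by
       (wk, pk), dropping shifted pairs over the capacity and killing dominated
       pairs on the fly. Both lists are sorted by weight; the result is written
       to out[] and its length is returned. */
    int add_merge_kill(const Pair *old, int n, int wk, int pk, int cap, Pair *out) {
        int i = 0, j = 0, m = 0;   /* i scans old[], j scans the shifted copy of old[] */
        while (i < n || (j < n && old[j].w + wk <= cap)) {
            Pair cand;
            int shifted_ok = (j < n && old[j].w + wk <= cap);
            if (!shifted_ok) {                       /* only the old list remains */
                cand = old[i++];
            } else if (i >= n) {                     /* only the shifted list remains */
                cand.w = old[j].w + wk; cand.p = old[j].p + pk; j++;
            } else {                                 /* take the smaller weight next */
                int sw = old[j].w + wk, sp = old[j].p + pk;
                if (sw < old[i].w || (sw == old[i].w && sp > old[i].p)) {
                    cand.w = sw; cand.p = sp; j++;
                } else {
                    cand = old[i++];
                }
            }
            /* Kill: keep cand only if it improves on the best profit kept so far. */
            if (m == 0 || cand.p > out[m - 1].p) {
                if (m > 0 && cand.w == out[m - 1].w) out[m - 1] = cand;
                else out[m++] = cand;
            }
        }
        return m;
    }

    int main(void) {
        Pair list3[] = {{0,0},{1,1},{2,6},{3,7},{5,18},{6,19},{7,24},{8,25}};
        Pair list4[16];
        int m = add_merge_kill(list3, 8, 6, 22, 11, list4);
        for (int i = 0; i < m; i++) printf("<%d,%d> ", list4[i].w, list4[i].p);
        printf("\n");   /* prints the {1,2,3,4} list shown on slide 14 */
        return 0;
    }

Running this on the {1,2,3} list above with (wk, pk) = (6, 22) and cap = 11 reproduces the {1,2,3,4} list from slide 14.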

  21. Generation of Problem Instances
◼ 1/μ is the fraction of objects that can fit in the knapsack on average, which fixes the average weight at W_avg = μ·C / N
◼ The set of weights w_i is generated from a normal distribution with mean W_avg (a generator sketch follows below)
◼ For weakly correlated problem instances, the correlation between the weights and profits is controlled by a noise factor τ: p_i = β·w_i + random integer in [−τ·W_avg, τ·W_avg]
◼ For strongly correlated problem instances τ is irrelevant: p_i = β·w_i + γ
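A minimal C sketch of such a generator, under my own assumptions where the slides are silent: Box-Muller for the normal draw, a standard deviation of 0.1·W_avg, rand() for the noise, and clamping weights to be at least 1.

    #include <stdlib.h>
    #include <math.h>

    /* One sample from a normal distribution via the Box-Muller transform. */
    static double normal(double mean, double stddev) {
        const double PI = 3.14159265358979323846;
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);   /* in (0,1), avoids log(0) */
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return mean + stddev * sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
    }

    /* Generate an instance: tau > 0 gives a weakly correlated instance,
       tau == 0 a strongly correlated one (p_i = beta*w_i + gamma). */
    void generate(int n, long cap, double mu, double beta, double gamma, double tau,
                  long *weights, long *profits) {
        double w_avg = mu * (double)cap / n;          /* W_avg = mu*C/N */
        double stddev = 0.1 * w_avg;                  /* assumption: not given on the slides */
        for (int i = 0; i < n; i++) {
            long w = (long)normal(w_avg, stddev);
            if (w < 1) w = 1;                         /* keep weights positive */
            weights[i] = w;
            if (tau > 0.0) {
                long span = (long)(tau * w_avg);      /* noise in [-tau*W_avg, +tau*W_avg] */
                long noise = (span > 0) ? (rand() % (2 * span + 1)) - span : 0;
                profits[i] = (long)(beta * w) + noise;
            } else {
                profits[i] = (long)(beta * w) + (long)gamma;
            }
        }
    }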

  22. Gain

Gain = 1 − (Iterations in SKPDP) / (Iterations in KPDP = 2NC)

The range of Gain is (−1, 1).
◼ A gain close to 1 means that the number of iterations in SKPDP is insignificant compared to KPDP
◼ A gain close to −1 means we have the worst-case scenario for SKPDP
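As a purely hypothetical illustration of the metric (not a measured result): if KPDP performs 2NC = 2·10^6 iterations on some instance while SKPDP needs only 10^4, the gain is 1 − 10^4 / (2·10^6) = 0.995; if SKPDP instead needed 3.8·10^6 iterations, the gain would be 1 − 1.9 = −0.9.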

  23. Gain
[Plot: gain for μ = 2, τ = 0.1%]

  24. SKPDP vs. KPDP
[Plots: weakly correlated instances (N = 256, μ = 8, τ = 0.1%) and strongly correlated instances (μ = 2, τ = 0.1%)]

  25. Impact of τ on the sparsity
[Plot: p_i = β·w_i + random integer in [−τ·W_avg, τ·W_avg]; μ = 2]

  26. Impact of μ on the sparsity
[Plot: W_avg = μ·C / N; τ = 0.1%]

  27. Impact of τ and μ on the sparsity
[Plot: N = 2^10, 2^12 < C < 2^50]

  28. Parallelization of SKPDP
◼ Fine-grained parallelization (one possible sketch follows below)
◼ Coarse-grained parallelization
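The presentation stops here before detailing either technique, so the following is only my own illustration of what fine-grained parallelism could mean for SKPDP: the "add" step of add-merge-kill is embarrassingly parallel across the pairs of the current sparse list, so it can be distributed with OpenMP while the merge-kill scan stays sequential. It reuses the hypothetical Pair type from the earlier sketch and is not necessarily the authors' scheme.

    #include <omp.h>

    typedef struct { int w, p; } Pair;

    /* Fine-grained illustration: build the shifted list (old[j] + (wk, pk)) in parallel.
       Each pair is independent, so a simple parallel for suffices; the subsequent
       merge-kill pass over the two sorted lists is left sequential here. */
    int add_in_parallel(const Pair *old, int n, int wk, int pk, int cap, Pair *shifted) {
        #pragma omp parallel for
        for (int j = 0; j < n; j++) {
            shifted[j].w = old[j].w + wk;
            shifted[j].p = old[j].p + pk;
        }
        /* The list is sorted by weight, so the over-capacity suffix can simply be truncated. */
        int m = n;
        while (m > 0 && shifted[m - 1].w > cap) m--;
        return m;
    }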
