ClassW04ApproxAlgs/SerdarBozdag


Bin Packing Problem

1 Outline


2 Original Bin Packing Problem

In the original bin packing problem, we are given a list of n items L = {a_1, a_2, …, a_n} and an unlimited supply of bins, each with capacity C.

The size of the i-th item a_i is s(a_i), where 0 < s(a_i) ≤ C.

The problem is to pack the items into the minimum number of bins, under the constraint that the total size of the items in each bin cannot exceed the bin's capacity.


3 On-line Algorithms for Original Bin Packing Problem

On-line bin packing algorithms pack each item as it arrives, without any knowledge of the items that follow. As a result, their approximation factors are higher than those of off-line algorithms, which know the sizes and the number of items in the list before packing begins.

In this section, we give a very brief overview of some naive on-line algorithms to warm up.

3.1 Definitions

The following definitions will be used in this section when discussing on-line algorithms for the original bin packing problem:

  1. A(I) denotes the number of bins that algorithm A uses to pack the list I, and OPT(I) denotes the number of bins used by an optimal packing of I.
  2. p denotes the maximum item size in the list, expressed as a fraction of the bin capacity C.
  3. The asymptotic performance ratio (APR) of an algorithm A is the worst-case ratio A(I)/OPT(I) as OPT(I) grows large.

3.2 Next-Fit (NF)

  1. Pack the first item into the first bin
  2. For each successive item
    1. If it fits in the bin that contains the last packed item, pack it into that bin.
    2. Otherwise, close that bin and pack the item into an empty bin.

The disadvantage of this algorithm is that it closes bins that could still be used later. NF packs a list of items and its reverse into the same number of bins. The asymptotic performance ratio (APR) of NF is as follows:
NF(I) ≤ 2·OPT(I) if 1/2 < p ≤ 1
NF(I) ≤ (1/(1-p))·OPT(I) if 0 < p ≤ 1/2
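As an illustration, here is a minimal Python sketch of Next-Fit. The function name and the convention of returning bins as lists of item sizes are our own choices, not part of the original notes.

  def next_fit(items, capacity=1.0):
      """Next-Fit: keep only the most recently opened bin active."""
      bins = []        # closed bins, each a list of item sizes
      current = []     # the single open bin
      level = 0.0
      for size in items:
          if current and level + size > capacity:
              bins.append(current)      # close the open bin for good
              current, level = [], 0.0
          current.append(size)
          level += size
      if current:
          bins.append(current)
      return bins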

3.3 Worst-Fit (WF)

  1. If there are no open bins in which the current item fits, pack the item into an empty bin.
  2. Otherwise, pack the item into the bin of smallest content in which the item fits.

Although WF never closes a bin, its APR is the same as that of NF. (A combined code sketch of WF, FF, and BF is given after Section 3.5 below.)

3.4 First-Fit (FF)

  1. Pack the current item into the lowest indexed nonempty bin in which it fits.
  2. If there is no bin in which the current item can fit, pack the item in an empty bin.

3.5 Best-Fit (BF)

  1. If there are some bins in which the current item can fit, pack the item into the bin of largest content in which the item fits.
    2. Otherwise, pack the item into an empty bin.
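The three rules above (Worst-Fit, First-Fit, Best-Fit) differ only in which open bin they choose, so they can be sketched with a single hedged helper; pack_open_bin and the wrapper functions below are our own names, not from the notes. Each returned entry is the level of one bin.

  def pack_open_bin(items, choose, capacity=1.0):
      """Generic on-line packer: 'choose' picks an open bin the item fits in."""
      levels = []                      # current level of each open bin
      for size in items:
          fits = [j for j, lvl in enumerate(levels) if lvl + size <= capacity]
          if fits:
              levels[choose(fits, levels)] += size
          else:
              levels.append(size)      # open a new bin
      return levels

  def first_fit(items):
      return pack_open_bin(items, lambda fits, levels: fits[0])

  def best_fit(items):
      return pack_open_bin(items, lambda fits, levels: max(fits, key=lambda j: levels[j]))

  def worst_fit(items):
      return pack_open_bin(items, lambda fits, levels: min(fits, key=lambda j: levels[j]))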

3.6 Conclusion on on-line algorithms

The drawback of these on-line algorithms is their performance when the item sizes in the list, or in part of the list, are in increasing order. If a list is in increasing order, the performance of all of these on-line algorithms suffers greatly.

Even "average-case" lists that have portions in increasing order can greatly affect an on-line algorithm's approximation guarantee ratio. For example, when the bin size is 1, then following list of items will be packed into 3 bins according to all four on-line algorithms...

.2, .7, .3, .8
It is easy to see that these items can be packed into two bins. We touched on the approximation guarantees of some of these algorithms above; in general, the analysis of on-line bin packing algorithms is quite involved and beyond the scope of this tutorial. See the papers listed in the references for more information.
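These claims can be checked directly with the hypothetical helpers sketched above:

  items = [0.2, 0.7, 0.3, 0.8]
  print(len(next_fit(items)))    # 3 bins
  print(len(first_fit(items)))   # 3 bins
  print(len(best_fit(items)))    # 3 bins
  print(len(worst_fit(items)))   # 3 bins
  # An optimal packing uses 2 bins: {0.2, 0.8} and {0.7, 0.3}.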


4 Extensible Bin Packing Problem

The extensible bin packing problem is a variation of the original bin packing problem. Here, the number of bins m is given as part of the input. Each bin has capacity 1, but a bin may be extended to hold more than its capacity. The cost of a bin is 1 if it is not extended, and the total size of the items in it if it is extended. The goal is to pack a set of items of given sizes into the specified number of bins so as to minimize the total cost. In this tutorial, a fully polynomial time asymptotic approximation scheme (FPTAAS) for the extensible bin packing problem is presented.
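In other words, the cost of a packing is Σ_{j=1..m} max(1, l(b_j)), where l(b_j) is the total size of the items in bin b_j. A tiny Python sketch of this objective (the helper name is ours):

  def packing_cost(bins):
      """Cost of an extensible-bin packing: each bin costs max(1, its level)."""
      return sum(max(1.0, sum(bin_items)) for bin_items in bins)

  # Example with two unit-capacity bins holding {0.4, 0.5} and {0.9, 0.6}:
  # cost = max(1, 0.9) + max(1, 1.5) = 1 + 1.5 = 2.5
  print(packing_cost([[0.4, 0.5], [0.9, 0.6]]))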

Extensible bin packing has a number of important real world applications such as:


5 A FPTAAS for Extensible Bin Packing Problem

5.1 Introduction

Input: n items with sizes x_1, x_2, …, x_n, and m bins, each of capacity 1, which may be extended.
Notation: l(b_j) denotes the level of bin b_j, i.e. the total size of the items packed into it; the cost of bin b_j is max(1, l(b_j)).
Goal: pack all items into the m bins so as to minimize the total cost Σ_{j=1..m} max(1, l(b_j)).

5.2 Algorithm

We present an algorithm A(I,ε) which takes a problem instance I and a parameter ε and produces a packing with cost
A(I,ε) ≤ (1 + ε)·OPT(I) + O((1/ε)·log(1/ε))
in time bounded by a polynomial in n and 1/ε.
Before showing how the algorithm achieves this guarantee, we make some simplifying assumptions. For each assumption, we prove that it can be made without loss of generality.

5.2.1 First Assumption

There are no small items. In other words,
x_i > ε/(1+ε) for i = 1, 2, …, n
Proof
Suppose we have an approximation algorithm A that achieves the stated guarantee on instances with no small items. We can use it to pack an arbitrary list of items as follows:
  1. First, use A to pack the items that are not small.
  2. Pack the small items greedily.
There are two cases when we pack the small items:
  1. Packing the small items does not increase the cost. Then the approximation guarantee clearly still holds.
  2. Packing the small items does increase the cost. For convenience, let ε' = ε/(1+ε) (the maximum size of a small item). In this case every bin ends up at a level of at least 1 - ε', since otherwise the greedy phase could have placed a small item into such a bin without increasing the cost. Because the total size of the items is a lower bound on the cost of any packing,
OPT(I) ≥ Σ_{j=1..m} l(b_j) ≥ m(1 - ε')
A(I) ≤ Σ_{j=1..m} max(1, l(b_j)) ≤ Σ_{j=1..m} (l(b_j) + ε') = m·ε' + Σ_{j=1..m} l(b_j)
(A(I) - OPT(I)) / OPT(I) ≤ m·ε' / (m(1 - ε')) = ε'/(1 - ε') = ε.
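A minimal sketch of this reduction, assuming a routine pack_non_small that packs instances without small items into m bins (both function names and the least-loaded greedy choice are our own):

  def pack_all_items(pack_non_small, items, m, eps):
      """Wrapper from the proof: pack non-small items with the given algorithm,
      then add the small items greedily (least-loaded bin first)."""
      cut = eps / (1 + eps)                    # maximum size of a "small" item
      small = [x for x in items if x <= cut]
      large = [x for x in items if x > cut]
      bins = pack_non_small(large, m)          # assumed to return m lists of sizes
      levels = [sum(b) for b in bins]
      for x in small:
          j = min(range(m), key=lambda k: levels[k])   # least-loaded bin
          bins[j].append(x)
          levels[j] += x
      return bins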

5.2.2 Second Assumption

There are no big items. In other words,
x_i < 1 for i = 1, 2, …, n
Proof
We claim that there is an optimum packing that places each big item in a bin by itself. As in the previous proof, we can then construct a (1+ε)-approximation algorithm for arbitrary instances from a (1+ε)-approximation algorithm for instances without big items. To justify the claim, suppose some optimum packing placed an item x in a bin together with a big item. We could move x to another bin that does not contain a big item without increasing the cost: the big item's bin remains extended, so its cost decreases by the size of x, while the receiving bin's cost increases by at most the size of x.

5.2.3 Third Assumption

The total size of all items is less than 2m:
Σ_{i=1..n} x_i < 2m
Proof
Since there are no big items, a greedy packing (always placing the next item into the least-loaded bin) never fills any bin to a level of 2 or more while some other bin is still below level 1. Hence, if the total size of the items is at least 2m, the greedy packing fills every bin to a level of at least 1, and its cost is exactly the sum of the item sizes, which is a lower bound on the cost of any packing; the greedy packing is therefore optimal. Thus, we can make this assumption without loss of generality.

5.2.4 Fourth Assumption

The level of any bin is less than 3:
l(b_j) < 3 for j = 1, 2, …, m
Proof
Suppose some bin has a level of 3 or more. By the previous assumption the total size of the items is less than 2m, so some bin has a level below 2. We can move an item (of size at most 1, since there are no big items) from the bin of level at least 3 to the bin of level below 2 without increasing the cost: the donor bin remains extended, so its cost decreases by the item's size, while the receiving bin's cost increases by at most the item's size. Repeating this, we may assume without loss of generality that the level of every bin is less than 3.
For convenience, we will show that
A(I,ε) ≤ (1 + ε)^2·OPT(I) + O((1/ε)·log(1/ε))
The original guarantee can then be obtained by running the algorithm with a suitably smaller value of ε.

At a high level, the algorithm works as follows:
First, we group the items by size, rounding each size up to one of a small set of values. Then we consider the different ways (configurations) in which items of these rounded sizes can be packed into a single bin, and an integer linear program (ILP) over these configurations describes a minimum-cost packing of all items.
The result is only an approximation because we change the sizes of the items and because we relax the ILP to an LP. We round the item sizes up to values s_1, s_2, …, s_N, where N is the smallest j satisfying the following inequality:
⌊(1+ε)^j/ε⌋·ε^2 ≥ 1
From this inequality, N is approximately (1/ε)·ln(1/ε).
The values s_1, s_2, …, s_N are defined by
s_j = ⌊(1+ε)^j/ε⌋·ε^2 for 1 ≤ j < N
s_N = 1
Thus,

  1. s_1 < s_2 < … < s_N
  2. s_N = 1. (Recall that we assume that the maximum size of an item is 1.)
  3. If s_{j-1} < x_i ≤ s_j, then x_i is rounded up to s_j. (Items of size at most s_1 are rounded up to s_1.)
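A small sketch of this rounding step, under the assumptions above (function names are our own):

  import math

  def rounded_sizes(eps):
      """Compute s_1 < s_2 < ... < s_N = 1 as defined above."""
      sizes, j = [], 1
      while True:
          s = math.floor((1 + eps) ** j / eps) * eps ** 2
          if s >= 1:
              sizes.append(1.0)        # s_N = 1
              return sizes
          sizes.append(s)
          j += 1

  def round_up(x, sizes):
      """Round an item size up to the smallest s_j that is at least x."""
      return min(s for s in sizes if s >= x)

  # Example: rounded_sizes(0.5) -> [0.75, 1.0], so with ε = 0.5 there are N = 2 sizes.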

Increase ratio for items of size at most s_1 (recall that x_i > ε/(1+ε)):
s_1/(ε/(1+ε)) = ⌊(1+ε)/ε⌋·ε^2·(1+ε)/ε ≤ ((1+ε)/ε)·ε^2·(1+ε)/ε = (1+ε)^2

Increase ratio for items of size larger than s_1:
s_{j+1}/s_j = ⌊(1+ε)^{j+1}/ε⌋ / ⌊(1+ε)^j/ε⌋ ≤ ((1+ε)^{j+1}/ε) / ((1+ε)^j/ε - 1) = (1+ε)^{j+1} / ((1+ε)^j - ε) ≤ (1+ε)^{j+1}/(1+ε)^{j-1} = (1+ε)^2

Thus, rounding increases the cost of the optimum solution by a factor of at most (1+ε)^2.


upload:sbozdag_configuration_table.gif

The figure above gives an intuitive explanation of the ILP introduced in the next section.
Each row of the table is a configuration: a way of packing some number C_ij of items of each rounded size s_j into a single bin so that the level of the bin is at most 3. Since there are only N distinct sizes and the level of a bin is bounded, the number of configurations M is bounded by a function of ε alone.
A configuration may be used z_i times in the solution. Our goal is to pack all items and use all bins; these are the constraints in the ILP given below.
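For concreteness, here is a hedged sketch of how the configurations could be enumerated; a configuration is a tuple (C_i1, …, C_iN) with Σ_j C_ij·s_j ≤ 3, and the function name is our own.

  def enumerate_configurations(sizes, level_cap=3.0):
      """All ways to fill one bin with items of the given rounded sizes,
      up to the level cap. Returns tuples of item counts, one per size."""
      configs = []

      def extend(j, counts, level):
          if j == len(sizes):
              configs.append(tuple(counts))
              return
          c = 0
          while level + c * sizes[j] <= level_cap:
              extend(j + 1, counts + [c], level + c * sizes[j])
              c += 1

      extend(0, [], 0.0)
      return configs

  # e.g. enumerate_configurations([0.75, 1.0]) lists every (C_1, C_2)
  # with 0.75·C_1 + 1.0·C_2 <= 3, including the empty configuration (0, 0).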


5.3 Integer Linear Program

z_i: the number of times the i-th configuration is used
Objective function:
min Σ_{i=1..M} z_i · max(1, Σ_{j=1..N} C_ij·s_j)
subject to
Σ_{i=1..M} z_i·C_ij ≥ n_j for j = 1, 2, …, N (all n_j items of rounded size s_j must be packed)
Σ_{i=1..M} z_i ≥ m (all m bins must be used)
z_i ∈ {0, 1, 2, …} (a configuration may be used any number of times)

When we relax the ILP to an LP by allowing each z_i to be any nonnegative real value, the LP has only N + 1 inequality constraints. Thus, a basic optimal solution of the LP has at most N + 1 nonzero variables z_i. Rounding each of these variables up to the next integer adds at most one bin per variable, and each bin costs at most 3 (since the level of a bin is at most 3), so converting the relaxed solution back into an integral packing increases the cost by at most 3(N+1) = O((1/ε)·log(1/ε)). This is the additive term in the approximation guarantee stated at the beginning.
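A hedged sketch of solving the relaxation with an off-the-shelf LP solver (scipy.optimize.linprog); the configurations are assumed to come from the earlier enumeration sketch, and the rounding of the fractional solution is shown in its simplest form.

  import numpy as np
  from scipy.optimize import linprog

  def solve_relaxation(configs, sizes, counts, m):
      """LP relaxation of the configuration ILP.
      configs: list of tuples C_i = (C_i1, ..., C_iN)
      sizes:   rounded sizes s_1, ..., s_N
      counts:  n_j, the number of items of each rounded size
      m:       number of bins
      """
      C = np.array(configs, dtype=float)               # M x N matrix
      cost = np.maximum(1.0, C @ np.array(sizes))      # cost of one use of each configuration
      # Constraints C^T z >= counts and sum(z) >= m, rewritten as A_ub @ z <= b_ub.
      A_ub = np.vstack([-C.T, -np.ones((1, len(configs)))])
      b_ub = np.concatenate([-np.array(counts, dtype=float), [-float(m)]])
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
      z = np.ceil(res.x)        # naive rounding up of the (few) nonzero variables
      return z, float(cost @ z)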

Thus, this algorithm gives a solution with the approximation guarantee
A(I,ε) ≤ (1 + ε)·OPT(I) + O((1/ε)·log(1/ε))


References

