Topics in Combinatorial Optimization
Orlando Lee – Unicamp
April 10, 2014
Acknowledgments

This set of slides was originally prepared for the course Tópicos de Otimização Combinatória in the first semester of 2014 at the Instituto de Computação of Unicamp. I prepared the slides in English simply because I felt like it, but the lectures will be in (Brazilian) Portuguese! Special thanks to Prof. Mário Leston Rey; without his help, these slides would certainly never have been ready on time. Any error found in these slides is entirely my responsibility (Orlando Lee, 2014).
Basic definitions

We need some definitions and notation related to linear algebra, convexity, polyhedra and linear programming. We assume some familiarity with these topics and present just the minimum needed in order to present the following topics:
- integral polyhedra
- total dual integral (TDI) systems
- totally unimodular (TU) matrices

Fix R^n as the ambient vector space. For vectors x, y let xy denote the inner product of x and y.
Linear independence

A vector v is a linear combination of x1, . . . , xk if there exist λ1, . . . , λk such that λ1x1 + · · · + λkxk = v.

We say that vectors x1, . . . , xk are linearly independent if there do not exist λ1, . . . , λk, not all equal to 0, such that λ1x1 + · · · + λkxk = 0.
Affine independence

A vector v is an affine combination of x1, . . . , xk if there exist λ1, . . . , λk such that λ1x1 + · · · + λkxk = v and λ1 + · · · + λk = 1.

We say that vectors x1, . . . , xk are affinely independent if there do not exist λ1, . . . , λk, not all equal to 0, such that λ1x1 + · · · + λkxk = 0 and λ1 + · · · + λk = 0.
Linear and affine independence

So v is an affine combination of x1, . . . , xk if and only if (v, 1) is a linear combination of (x1, 1), . . . , (xk, 1); and x1, . . . , xk are affinely independent if and only if (x1, 1), . . . , (xk, 1) are linearly independent.
Linear and affine hull

Let X be a subset of R^n. Let
lin(X) := {λ1x1 + · · · + λkxk : k ≥ 1, x1, . . . , xk ∈ X, λ1, . . . , λk ∈ R}
be the linear hull of X.

Let X be a subset of R^n. Let
aff(X) := {λ1x1 + · · · + λkxk : k ≥ 1, x1, . . . , xk ∈ X, λ1, . . . , λk ∈ R, λ1 + · · · + λk = 1}
be the affine hull of X.
Dimension and (affine) rank

The rank of a subset X ⊆ R^n, denoted rank(X), is the size of a maximal subset of linearly independent vectors of X. From basic linear algebra, we know that every maximal subset of linearly independent vectors of a set X ⊆ R^n has the same size, so rank is well-defined.

The affine rank of a subset X ⊆ R^n, denoted aff-rank(X), is the size of a maximal subset of affinely independent vectors of X.

Exercise. Prove that this concept is well-defined, that is, every maximal subset of affinely independent vectors of a set X ⊆ R^n has the same size.
Dimension and (affine) rank

Exercise.
(a) Prove that if 0 ∈ aff(X) then aff-rank(X) = rank(X) + 1.
(b) Prove that if 0 ∉ aff(X) then aff-rank(X) = rank(X).

The dimension of a set X ⊆ R^n is dim(X) := aff-rank(X) − 1. We say that X ⊆ R^n is full-dimensional if dim(X) = n.
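The lifting trick behind these definitions can be checked numerically. A minimal sketch (the three sample points are made up for illustration): rank(X) is the rank of the matrix whose rows are the vectors of X, and aff-rank(X) is the rank of the same matrix with a column of ones appended, since affine independence of the xi is linear independence of the lifted vectors (xi, 1).

```python
import numpy as np

# Three sample points on the line x + y = 1 (hypothetical data).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])

# rank(X): rank of the matrix whose rows are the vectors of X.
rank = np.linalg.matrix_rank(X)

# aff-rank(X): lift each x to (x, 1); affine independence of the x's
# is linear independence of the lifted vectors.
lifted = np.hstack([X, np.ones((len(X), 1))])
aff_rank = np.linalg.matrix_rank(lifted)

print(rank, aff_rank)  # 2 2
```

Here 0 ∉ aff(X) (the line x + y = 1 misses the origin), so aff-rank(X) = rank(X), consistent with part (b) of the exercise, and dim(X) = 2 − 1 = 1, as expected for points on a line.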
Dimension and (affine) rank

In linear independence, the vector 0 is privileged. To generate R^n with linear combinations, we need n linearly independent vectors.

In affine independence, there is no privileged vector. To generate R^n with affine combinations, we need n + 1 affinely independent vectors.
Convexity

A subset C of R^n is convex if λx + (1 − λ)y ∈ C for every x, y ∈ C and every λ ∈ R with 0 ≤ λ ≤ 1.

A vector v is a convex combination of x1, . . . , xk if there exist λ1, . . . , λk ≥ 0 such that λ1x1 + · · · + λkxk = v and λ1 + · · · + λk = 1.

Let X be a subset of R^n. Let
conv(X) := {λ1x1 + · · · + λkxk : k ≥ 1, x1, . . . , xk ∈ X, λ1, . . . , λk ≥ 0, λ1 + · · · + λk = 1}
be the convex hull of X.
Convexity

Theorem (Carathéodory, 1911). For any subset X ⊆ R^n and any vector x ∈ conv(X), there exist affinely independent vectors x1, . . . , xk ∈ X such that x ∈ conv({x1, . . . , xk}).

Roughly speaking, every element of conv(X) can be written as a convex combination of k affinely independent vectors of X, where k is at most the affine rank of X.
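Carathéodory's theorem connects nicely to linear programming; the following sketch uses made-up points and assumes scipy is available. Writing x ∈ conv(X) as the feasibility system Σ λi xi = x, Σ λi = 1, λ ≥ 0, a vertex (basic feasible solution) of this system has at most n + 1 nonzero coordinates in R^n, which is exactly Carathéodory's bound.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 points in the plane and a target p in their convex hull.
X = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 1]], dtype=float)
p = np.array([2.0, 2.0])

# Feasibility LP: find λ ≥ 0 with Σ λ_i x_i = p and Σ λ_i = 1.
A_eq = np.vstack([X.T, np.ones(len(X))])   # 2 coordinate rows + 1 sum row
b_eq = np.append(p, 1.0)
res = linprog(c=np.zeros(len(X)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(X), method="highs")

lam = res.x
print(np.isclose(lam.sum(), 1.0), np.allclose(lam @ X, p))  # True True
```

Since the equality system has only 3 rows, a basic solution uses at most 3 of the 5 points, matching the theorem in R^2.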
Cones

A subset C of R^n is a (convex) cone if C ≠ ∅ and λx + µy ∈ C for every x, y ∈ C and every λ, µ ∈ R+.

A vector v is a conic combination of x1, . . . , xk if there exist λ1, . . . , λk ≥ 0 such that λ1x1 + · · · + λkxk = v.

Let X be a subset of R^n. Let
cone(X) := {λ1x1 + · · · + λkxk : x1, . . . , xk ∈ X, λ1, . . . , λk ≥ 0}
be the cone generated by X.
Cones

Theorem (Carathéodory, 1911). For any subset X ⊆ R^n and any vector x ∈ cone(X), there exist affinely independent vectors x1, . . . , xk ∈ X such that x ∈ cone({x1, . . . , xk}).

Roughly speaking, every element of cone(X) can be written as a conic combination of k affinely independent vectors of X, where k is at most the affine rank of X.

In the statement, if we assume that 0 ∉ X, then we can also write linearly independent vectors instead of affinely independent vectors.
Hyperplanes and half-spaces

A subset H of R^n is a hyperplane if there exist c ∈ R^n \ {0} and δ ∈ R such that H = {x : cx = δ}. We say that the set S := {x : cx ≤ δ} is a half-space.

If δ = 0 we say that H (resp. S) is a linear hyperplane (resp. linear half-space).
Polyhedra

A cone C is polyhedral if there is a matrix A such that C = {x : Ax ≤ 0}. Equivalently, C is the intersection of finitely many linear half-spaces.

Several researchers showed that a convex cone is polyhedral if and only if it is finitely generated, where C being finitely generated means that there exists a finite set X such that C = cone(X).
Polyhedra

A subset P of R^n is a polyhedron if there exist an m × n matrix A and a vector b ∈ R^m such that P = {x : Ax ≤ b}. Equivalently, P is the intersection of finitely many half-spaces. We say that the system Ax ≤ b determines P.

An inequality cx ≤ δ is valid if cx ≤ δ holds for every x ∈ P.

A polyhedron or cone is rational if the linear system of inequalities that determines it contains only rational inequalities (dx ≤ δ with d and δ rational).
Polyhedra

A subset P of R^n is a polytope if there exists a finite set X such that P = conv(X). It can be shown that P is a polytope if and only if P is a bounded polyhedron.

For subsets A, B of R^n let A + B := {a + b : a ∈ A, b ∈ B}. Motzkin (1936) proved the following fundamental result.

Theorem. A set P is a polyhedron if and only if P = Q + C for some polytope Q and some polyhedral cone C.
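The equivalence between polytopes (convex hulls of finite point sets) and bounded polyhedra can be seen computationally: scipy's ConvexHull converts a V-representation conv(X) into an H-representation {x : Ax ≤ b}. A sketch with made-up points:

```python
import numpy as np
from scipy.spatial import ConvexHull

# V-representation: a finite point set (the interior point (1,1) is redundant).
pts = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 1]], dtype=float)
hull = ConvexHull(pts)

# hull.equations stores rows [a, b0] describing the hull by a·x + b0 ≤ 0;
# rewrite as the system A x ≤ b determining the polytope.
A = hull.equations[:, :-1]
b = -hull.equations[:, -1]

# Every original point satisfies all the inequalities (up to tolerance).
print(np.all(A @ pts.T <= b[:, None] + 1e-9))  # True
```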
Farkas' lemma

Farkas' lemma has several equivalent statements. It characterizes when a given system of linear (in)equalities Ax ≤ b has a feasible solution (or, equivalently, when the polyhedron determined by the system is nonempty).

Theorem. Ax ≤ b is feasible if and only if yb ≥ 0 for every y ≥ 0 such that yA = 0.

Corollary. Ax = b has a solution with x ≥ 0 if and only if yb ≥ 0 for every y such that yA ≥ 0.

Corollary. Ax ≤ b has a solution with x ≥ 0 if and only if yb ≥ 0 for every y ≥ 0 such that yA ≥ 0.
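A sketch of the lemma in action, on a deliberately infeasible toy system (the data is made up): the system x ≤ 0, x ≥ 1 has no solution, and a vector y ≥ 0 with yA = 0 and yb < 0 certifies this, since for any feasible x we would have 0 = (yA)x = y(Ax) ≤ yb.

```python
import numpy as np
from scipy.optimize import linprog

# Infeasible system:  x ≤ 0  and  -x ≤ -1  (i.e. x ≥ 1).
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])

# status == 2 means linprog reports the system infeasible.
res = linprog(c=[0.0], A_ub=A, b_ub=b,
              bounds=[(None, None)], method="highs")
print(res.status)  # 2

# Farkas certificate: y ≥ 0, yA = 0, yb = -1 < 0.
y = np.array([1.0, 1.0])
print(y @ A, y @ b)
```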
Linear programming

Linear programming concerns the problem of maximizing or minimizing a linear function over a polyhedron. One of the most important results in the area is the (strong) duality theorem, which relates two optimization problems – the primal and the dual – by a minimax equality.

Theorem. Let A be a matrix and b, c be vectors. Then
max{cx : Ax ≤ b} = min{yb : y ≥ 0, yA = c}.

There exist several equivalent forms of the duality theorem, for example:
max{cx : Ax ≤ b, x ≥ 0} = min{yb : y ≥ 0, yA ≥ c},
max{cx : Ax = b, x ≥ 0} = min{yb : yA ≥ c}.
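Strong duality can be checked numerically on made-up data with scipy's linprog, solving the primal max{cx : Ax ≤ b, x ≥ 0} and its dual min{yb : y ≥ 0, yA ≥ c} and comparing optimal values:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance of max{cx : Ax ≤ b, x ≥ 0}.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

# Primal: linprog minimizes, so negate c.
primal = linprog(-c, A_ub=A, b_ub=b,
                 bounds=[(0, None)] * 2, method="highs")

# Dual: min yb s.t. yA ≥ c, y ≥ 0, rewritten as -A^T y ≤ -c for linprog.
dual = linprog(b, A_ub=-A.T, b_ub=-c,
               bounds=[(0, None)] * 2, method="highs")

print(np.isclose(-primal.fun, dual.fun))  # True: both optima equal 2.8
```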