Constraint Approach to Multi-Objective Optimization

Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

University of Texas at El Paso
El Paso, TX 79968, USA
mceberio@utep.edu, olgak@utep.edu, vladik@utep.edu
1. Multi-Objective Optimization: Examples

• In meteorology and environmental research, it is important to measure fluxes of heat, H₂O, CO₂, etc.
• To perform these measurements, researchers build towers with sensors at different heights.
• These towers are called Eddy flux towers.
• When selecting a location for the Eddy flux tower, we have several criteria to satisfy:
  – The station should be located as far away from roads as possible.
  – Otherwise, gas flux from cars influences our measurements.
  – On the other hand, the station should be located as close to the road as possible.
  – Otherwise, it is difficult to carry the heavy parts when building such a station.
2. Multi-Objective Optimization (cont-d)

• In geophysics, different types of data provide complementary information about the Earth's structure:
  – information from the body waves (P-wave receiver functions) mostly covers deep areas, while
  – the information about the Earth's surface is mostly contained in surface waves.
• When we know the relative accuracies σ_i of the different data types i, we can apply the Least Squares approach:

    ∑_i (1 / σ_i²) · ∑_k (x_{ik} − x̃_{ik}(p))² → min_p.

• In practice, however, we do not have good information about the relative accuracy of the different data types.
• In this situation, all we can say is that we want to minimize the errors f_i(x) corresponding to all the observations i.
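As an illustration, here is a minimal Python sketch of this Least Squares criterion, assuming a linear forward model and synthetic data; the names design, x_obs, sigma, and weighted_least_squares are purely illustrative, not from the slides.

  import numpy as np
  from scipy.optimize import minimize

  # Hypothetical synthetic setup: two data types i, a linear forward model
  # x_ik(p) = (A_i p)_k, and assumed-known accuracies sigma_i.
  rng = np.random.default_rng(0)
  p_true = np.array([2.0, -1.0])
  design = [rng.normal(size=(20, 2)), rng.normal(size=(30, 2))]  # one design matrix per data type
  sigma = np.array([0.1, 0.5])                                   # assumed relative accuracies
  x_obs = [A @ p_true + s * rng.normal(size=A.shape[0])
           for A, s in zip(design, sigma)]

  def weighted_least_squares(p):
      # sum_i (1 / sigma_i^2) * sum_k (x_ik - x_ik(p))^2
      return sum((1.0 / s**2) * np.sum((x - A @ p) ** 2)
                 for A, x, s in zip(design, x_obs, sigma))

  p_hat = minimize(weighted_least_squares, x0=np.zeros(2)).x
  print(p_hat)   # close to p_true, because here the sigmas really are known

When the σ_i are unknown, the weights 1/σ_i² in this sketch are no longer available, which is exactly the multi-objective situation described above.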
3. Multi-Objective Optimization is Difficult

• If we want to minimize a single objective f(x) → min, this has a very precise meaning:
  – we want to find an alternative x₀ for which
  – f(x₀) ≤ f(x) for all other alternatives x.
• The difficulty is that multi-objective optimization is not precisely defined.
• For example, when selecting a flight, we want to minimize both travel time and cost.
• Convenient direct flights which save on travel time are more expensive.
• On the other hand, a cheaper trip may involve a long stay-over between flights.
• It is therefore necessary to come up with a way to find an appropriate compromise between several objectives.
4. Analysis of the Problem

• Without loss of generality, let us consider a multi-objective maximization problem.
• Ideally, we should select x₀ that satisfies the constraints f_i(x₀) ≥ f_i(x) for all i and for all x.
• So, ideally, if we select an alternative x at random, then with probability 1, we satisfy the above constraint.
• The problem is that we cannot satisfy all these constraints with probability 1.
• A natural idea is thus to find x₀ for which the probability of satisfying these constraints is as high as possible.
• Let us describe two approaches to formulating this idea (i.e., the corresponding probability) in precise terms.
5. Two Approaches

• 1st approach: maximize the probability that for a random x, we have f_i(x₀) ≥ f_i(x) for all i.
• 2nd approach: maximize the probability that for a random x and for a random i, we have f_i(x₀) ≥ f_i(x).
• We thus need to estimate:
  – the probability p_I(x₀) that for a randomly selected x, we have f_i(x₀) ≥ f_i(x) for all i, and
  – the probability p_II(x₀) that for a randomly selected x and a randomly selected i, we have f_i(x₀) ≥ f_i(x).
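If we can sample random alternatives x, both probabilities can be estimated directly by Monte Carlo simulation. A minimal Python sketch, assuming box-constrained alternatives and toy objectives; the helper estimate_probabilities and all data are hypothetical.

  import numpy as np

  def estimate_probabilities(x0, objectives, lower, upper, n_samples=10_000, seed=0):
      # Monte Carlo estimates of p_I(x0) and p_II(x0) when the alternatives x
      # are drawn uniformly from the box [lower, upper].
      rng = np.random.default_rng(seed)
      xs = rng.uniform(lower, upper, size=(n_samples, len(lower)))
      # wins[i, s] is True when f_i(x0) >= f_i(x_s)
      wins = np.array([[f(x0) >= f(x) for x in xs] for f in objectives])
      p_one = np.mean(np.all(wins, axis=0))   # 1st approach: all objectives i at once
      p_two = np.mean(wins)                   # 2nd approach: a randomly selected i as well
      return p_one, p_two

  # Purely illustrative usage with two toy objectives on the box [0, 1]^2:
  f1 = lambda x: -(x[0] - 0.3) ** 2
  f2 = lambda x: -(x[1] - 0.7) ** 2
  print(estimate_probabilities(np.array([0.3, 0.7]), [f1, f2],
                               lower=[0.0, 0.0], upper=[1.0, 1.0]))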
6. Let Us Estimate the First Probability

• We do not have any prior information about the dependence between different objective functions f_i(x).
• So, it is reasonable to assume that the events f_i(x₀) ≥ f_i(x) and f_j(x₀) ≥ f_j(x) are independent.
• Thus, p_I(x₀) = ∏_{i=1}^{n} p_i(x₀), where p_i(x₀) is the probability that f_i(x₀) ≥ f_i(x) for a random x.
• How can we estimate this probability p_i(x₀)?
  – before starting to solve this problem as a multi-objective optimization problem,
  – we probably tried to simply optimize each of the objective functions,
  – hoping that the corresponding solution would also optimize all other objective functions.
7. Estimating the First Probability (cont-d)

• So, we know the largest possible value M_i of each of the objective functions: M_i = max_x f_i(x).
• The maximum can often be found by equating the derivatives of the objective function to 0.
• This way, we find not only maxima but also minima of the objective function.
• Thus, it is reasonable to assume that we also know m_i = min_x f_i(x).
• The value f_i(x) corresponding to a randomly selected x lies inside the interval [m_i, M_i].
• We do not have any reason to believe that some values from this interval are more probable.
• It is thus reasonable to assume that all the values from this interval are equally probable.
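When closed-form maxima and minima are not available, m_i and M_i can be approximated numerically. A minimal Python sketch, assuming box constraints and multi-start local optimization; the helper objective_range and the toy objective are purely illustrative.

  import numpy as np
  from scipy.optimize import minimize

  def objective_range(f, lower, upper, n_starts=20, seed=0):
      # Estimate m = min f and M = max f over a box by multi-start local optimization.
      rng = np.random.default_rng(seed)
      bounds = list(zip(lower, upper))
      starts = rng.uniform(lower, upper, size=(n_starts, len(lower)))
      mins = [minimize(f, x0, bounds=bounds).fun for x0 in starts]
      maxs = [-minimize(lambda x: -f(x), x0, bounds=bounds).fun for x0 in starts]
      return min(mins), max(maxs)

  # Example: f(x) = -(x_1 - 0.3)^2 on [0, 1]^2 has m = -0.49 and M = 0.
  f1 = lambda x: -(x[0] - 0.3) ** 2
  print(objective_range(f1, lower=[0.0, 0.0], upper=[1.0, 1.0]))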
8. Estimating the First Probability (cont-d)

• We do not have any reason to believe that some values from this interval are more probable.
• It is thus reasonable to assume that all the values from this interval are equally probable.
• So, we have a uniform distribution on the interval [m_i, M_i].
• This argument can be formalized as selecting the distribution with the probability density ρ(x) for which

    S = − ∫ ρ(x) · ln(ρ(x)) dx → max.

• For the uniform distribution, we have

    p_i(x₀) = (f_i(x₀) − m_i) / (M_i − m_i),  so  p_I(x₀) = ∏_{i=1}^{n} (f_i(x₀) − m_i) / (M_i − m_i) → max_{x₀}.
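The resulting product criterion can then be maximized numerically. A minimal Python sketch, assuming a toy two-objective problem on the box [0, 1]² with known ranges [m_i, M_i]; all names and data are illustrative.

  import numpy as np
  from scipy.optimize import minimize

  # Hypothetical toy problem: two objectives to maximize on [0, 1]^2,
  # with the ranges [m_i, M_i] assumed known (here read off analytically).
  objectives = [lambda x: -(x[0] - 0.3) ** 2, lambda x: -(x[1] - 0.7) ** 2]
  m = np.array([-0.49, -0.49])   # minima of f_1, f_2 over the box
  M = np.array([0.0, 0.0])       # maxima of f_1, f_2 over the box

  def p_one(x0):
      # p_I(x0) = prod_i (f_i(x0) - m_i) / (M_i - m_i)
      return np.prod([(f(x0) - mi) / (Mi - mi) for f, mi, Mi in zip(objectives, m, M)])

  # Maximize p_I by minimizing its negative over the box.
  best = minimize(lambda x: -p_one(x), x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
  print(best.x, p_one(best.x))   # here the ideal point (0.3, 0.7) is attainable, so p_I = 1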
9. Let Us Estimate the Second Probability

• In the second approach, we select the objective function f_i at random.
• We have no reason to prefer one of the n objective functions.
• So, it makes sense to select each of these n functions with equal probability 1/n.
• For each i, we know the probability

    p_i(x₀) = Prob(f_i(x₀) ≥ f_i(x)) = (f_i(x₀) − m_i) / (M_i − m_i).

• The probability of selecting each objective function f_i(x) is equal to 1/n.
• Thus, we can use the complete probability formula to compute

    p_II(x₀) = ∑_{i=1}^{n} (1/n) · (f_i(x₀) − m_i) / (M_i − m_i).
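For comparison, here is the second criterion on the same hypothetical toy setup as in the previous sketch: an average of the normalized objectives rather than their product (again, purely illustrative).

  import numpy as np
  from scipy.optimize import minimize

  # Same hypothetical toy setup as in the previous sketch.
  objectives = [lambda x: -(x[0] - 0.3) ** 2, lambda x: -(x[1] - 0.7) ** 2]
  m, M = np.array([-0.49, -0.49]), np.array([0.0, 0.0])

  def p_two(x0):
      # p_II(x0) = (1/n) * sum_i (f_i(x0) - m_i) / (M_i - m_i)
      return np.mean([(f(x0) - mi) / (Mi - mi) for f, mi, Mi in zip(objectives, m, M)])

  best = minimize(lambda x: -p_two(x), x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
  print(best.x, p_two(best.x))   # both criteria agree here; with conflicting objectives,
                                 # the product p_I and the average p_II can pick different compromises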