
Deterministic Opt & Desg (ECI 153)

Notes by: Lyric Kiehn



This 44-page set of class notes was uploaded by Lyric Kiehn on Tuesday, September 8, 2015. The class notes belong to ECI 153 at the University of California - Davis, taught by Staff in Fall. Since its upload, it has received 35 views. For similar materials, see /class/187536/eci-153-university-of-california-davis in Engineering Civil & Environ at the University of California - Davis.


Date Created: 09/08/15
Deterministic Optimization and Design
Jay R. Lund, UC Davis, Winter 2002

Non-Linear Programming

General Approach: Finding and comparing peaks and valleys in the objective function within a feasible region defined by the constraint set.

Example Applications:
- Profit maximization with price elasticity
- Transportation problems with volume discounts on shipping costs
- Portfolio selection with risky investments (also a multiobjective programming problem)
- Farm management considering agricultural production functions
- Hydropower scheduling
- Truss design

Abbreviated Outline of Methods

Unconstrained Nonlinear Optimization:
- Analytical methods (calculus)
- Searching without derivatives: Fibonacci search method; grid search method; genetic algorithms
- Searching with first derivatives: one-dimensional line search procedure; gradient search (steepest descent) method, one of many hill-climbing methods; quasi-Newton methods; conjugate gradient method
- Searching with second derivatives: Newton's method; trust region methods

Constrained Nonlinear Optimization (nominally, these are globally optimal for minimization of convex functions over convex feasible regions):
- Analytical methods: Lagrange multiplier method
- Exterior penalty function methods
- Interior barrier function methods
- Gradient projection methods
- Generalized reduced gradient methods
- Successive linear programming methods
- Successive quadratic programming methods
- Genetic algorithms (constraints represented in fitness function)

Algorithms for Special Cases of NLP (nominally, these are globally optimal):
- Separable programming (linear/mixed-integer approximation)
- Quadratic programming (for minimization of convex functions)
- Dynamic programming

Non-Linear Search Procedures

Like the simplex method for linear programming, search procedures for nonlinear programming problems find a sequence of trial solutions that lead to an optimal solution. In each iteration, you begin with the current trial solution and proceed systematically to a new, improved trial solution.
Typically, you stop when either one of the following conditions is met:
1. Your trial solutions have converged to a given value, i.e., successive solutions show little or no improvement.
2. Your solution satisfies certain optimality criteria, such as the Karush-Kuhn-Tucker conditions.

Fibonacci Search

An efficient method of finding the minimum of a single-variable function f(x) which strictly increases on either side of an optimum value (i.e., is unimodal), considering a number of discrete values of the decision variable. In a sequence of n Fibonacci numbers F_n, the next number in the sequence is the sum of the previous two numbers, e.g., 1, 1, 2, 3, 5, 8, 13, 21, 34. Where there are F_n - 1 ordered discrete values of the decision variable X with values x_i, the Fibonacci search can find the minimum of the unimodal function f(X) in n-1 iterations.

For a minimization search over the interval a_1 to b_1:
Initialization: Select a final solution tolerance t > 0. Set index i = 1, L_1 = a_1 + (F_{n-2}/F_n)(b_1 - a_1), and U_1 = a_1 + (F_{n-1}/F_n)(b_1 - a_1).
Step 1: If f(L_i) < f(U_i), then the optimal value is below U_i. Set a_{i+1} = a_i and b_{i+1} = U_i, and set L_{i+1} = a_{i+1} + (F_{n-i-2}/F_{n-i})(b_{i+1} - a_{i+1}) and U_{i+1} = L_i.
Step 2: If f(L_i) > f(U_i), then the optimal value is above L_i. Set a_{i+1} = L_i and b_{i+1} = b_i, and set L_{i+1} = U_i and U_{i+1} = a_{i+1} + (F_{n-i-1}/F_{n-i})(b_{i+1} - a_{i+1}).
Step 3: Set i = i + 1.
Step 4: If i < n-1, go to Step 1.
Step 5: If f(L_i) > f(U_i), the final solution lies above L_i, so the final solution interval is (L_i, b_i); ELSE the final solution lies below U_i, so the final solution interval is (a_i, U_i).

This method is often used in determining the step size (or number of steps, given a specified step size) in the gradient search method described below. Wagner (1975, p. 455) has a nice example. Bazaraa et al. (1993) have excellent write-ups on this and related search methods.
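The interval-reduction scheme above is short in code. A minimal sketch (the iteration count n and the test function used below are assumptions for illustration):

```python
def fibonacci_search(f, a, b, n=25):
    """Minimize a unimodal f on [a, b] by Fibonacci interval reduction."""
    F = [1, 1]                       # Fibonacci numbers F_1, F_2, ...
    for _ in range(n - 2):
        F.append(F[-1] + F[-2])
    # Interior points at the F_{n-2}/F_n and F_{n-1}/F_n fractions.
    x1 = a + F[n - 3] / F[n - 1] * (b - a)
    x2 = a + F[n - 2] / F[n - 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 2):
        if f1 < f2:                  # minimum lies below x2: keep [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - 3 - k] / F[n - 1 - k] * (b - a)
            f1 = f(x1)
        else:                        # minimum lies above x1: keep [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - 2 - k] / F[n - 1 - k] * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Each iteration reuses one of the two previous function evaluations, which is the source of the method's efficiency relative to plain interval bisection on f.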
Grid Search Method

A grid search evaluates points evenly spaced over a suitable grid in the decision space. While this method is easy to program, it is too time-consuming for problems with more than a few decision variables. However, it can find an approximate solution, or a good starting point for a more intensive search algorithm.

One-Dimensional Line Search Procedure

Starting with a trial solution to a minimization problem, one moves against the direction of the gradient (first derivative): if the first derivative (slope) is positive, move to the left; if it is negative, move to the right. Below, the procedure is summarized using the midpoint rule.

Define:
x' = current trial solution
x_L = current lower bound on x
x_U = current upper bound on x
e = error tolerance for x

Initialization: Select e. Find initial values for x_L and x_U by inspection, or by finding values of x at which the derivative takes opposite signs. Select an initial trial solution x' = (x_L + x_U)/2.
Iteration:
1. Evaluate df(x')/dx at x' = (x_L + x_U)/2.
2. If df(x')/dx >= 0, set x_U = x'.
3. If df(x')/dx <= 0, set x_L = x'.
4. Select a new x' = (x_L + x_U)/2.
Stopping rule: If x_U - x_L <= 2e, stop. Otherwise, perform another iteration.

This is a simple procedure in one dimension. For multiple decision variables, it can be applied to each variable in turn; this is also known as the univariate gradient search.
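A minimal sketch of this midpoint-rule search, assuming we already have the derivative df and a bracket [lo, hi] containing the minimum:

```python
def line_search_1d(df, lo, hi, eps=1e-6):
    """Bisect on the sign of the derivative df of a unimodal function:
    a positive slope at the midpoint means the minimum lies to the left,
    a negative slope means it lies to the right."""
    while hi - lo > 2 * eps:
        x = (lo + hi) / 2            # midpoint-rule trial solution
        if df(x) >= 0:
            hi = x                   # tighten the upper bound
        else:
            lo = x                   # tighten the lower bound
    return (lo + hi) / 2
```

For example, minimizing f(x) = (x - 3)^2 means bisecting on df(x) = 2(x - 3).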
Gradient Search (Steepest Ascent/Descent) Method

This is another iterative method based on first-order derivatives, which works easily for multiple dimensions. As in the one-dimensional search procedure, each iteration moves the trial solution in the direction of the gradient (i.e., downhill for a minimization problem). Additionally, one needs to decide how far to move downhill: the step size. At the optimal solution, the gradient is zero.

Consider the gradient Vf = (df/dx_1, df/dx_2, ..., df/dx_n) and let t = step size, a single variable used in the one-dimensional search.

The gradient search procedure for a minimization problem is summarized as:
Initialization: Select e and an initial trial solution x'. Go to the stopping rule.
Iteration:
1. Express f(x' - t Vf(x')) as a function of t by setting x_j = x'_j - t (df/dx_j) evaluated at x', for j = 1, ..., n, and then substituting these expressions into f(x).
2. Use the one-dimensional line search procedure (or calculus, or Fibonacci search) to find the t = t* that minimizes f(x' - t Vf(x')) over t >= 0.
3. Reset x' = x' - t* Vf(x') and go to the stopping rule.
Stopping rule: Evaluate Vf(x'). Check if |df/dx_j| <= e for all j = 1, ..., n. If so, stop with the current x' as the approximate optimal solution. Otherwise, perform another iteration.

[Figure: successive gradient-search steps in the (x1, x2) plane; each step is perpendicular to the local contour line of f.]

As do most of the search methods discussed here, the gradient search only finds a local optimum. It also can have very slow convergence, especially for banana-shaped functions. (Why?)

Newton's Method

An iterative search method that uses both first and second derivatives. Like many other search methods, the direction of the iteration is found using the first derivative (slope). The method then cleverly uses the second derivative at a point to predict how far to step, to where the first derivative equals zero.

Derivation of Method: Begin your search at a point X0, with f(X0) = the value of the objective function there. The Taylor series of f(X) about X0 is
f(X) ~ f(X0) + (df(X0)/dX) dX + (1/2) (d2f(X0)/dX2) dX^2.
Let X1 = X0 + dX be your next estimate of the optimal solution. Since df/dX = 0 at any max or min, express df/dX as a Taylor series and set it to zero. The first-order Taylor series of df/dX = 0 at X1 is
df(X1)/dX ~ df(X0)/dX + (d2f(X0)/dX2) dX = 0.
Manipulating, this becomes
dX = -(df(X0)/dX) / (d2f(X0)/dX2), or
X1 = X0 + dX = X0 - (df(X0)/dX) / (d2f(X0)/dX2).
Repeat the use of this equation until |df/dX| <= e (for n variables, until |df/dX_j| <= e for all j = 1, ..., n).

What if there is more than one local optimum, or more than one decision variable? The selection of the initial (starting) solution is important.

[Figure: Newton's method in one dimension; from X0, the tangent of df/dX predicts the step to X1, where the linearized slope reaches zero.]
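The one-dimensional update X1 = X0 - f'(X0)/f''(X0) can be sketched directly (the test function and starting point are assumptions for illustration):

```python
def newton_1d(df, d2f, x0, eps=1e-10, max_iter=100):
    """Newton's method in one variable: step to where the linearized first
    derivative df(x) + d2f(x) * dx equals zero, i.e. dx = -df(x) / d2f(x)."""
    x = x0
    for _ in range(max_iter):
        if abs(df(x)) <= eps:        # stopping rule on the slope
            break
        x = x - df(x) / d2f(x)
    return x

# Example: f(x) = x^4 - 3x, so df = 4x^3 - 3 and d2f = 12x^2;
# the minimum is at x = (3/4)^(1/3).
xstar = newton_1d(lambda x: 4 * x ** 3 - 3, lambda x: 12 * x ** 2, x0=1.0)
```

Note that, as in the notes' caution, the starting point matters: starting at x0 = 0 here would fail immediately because the second derivative vanishes there.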
Newton's Method for n Decision Variables

This just applies the ideas of Newton's method to n decision dimensions. The first-order condition for a minimum or maximum of f is df/dX_i = 0 for i = 1, ..., n.

Let X0 = the initial trial solution, and consider the multidimensional Taylor series expansion of each first derivative:
df(X0 + dX)/dX_i ~ df(X0)/dX_i + sum over j of (d2f(X0)/dX_i dX_j) dX_j, for i = 1, ..., n.
With n decision variables X_i, there are n simultaneous equations like the one above. In matrix form, setting the updated gradient to zero:
Vf(X0) + H(X0) dX = 0,
where the large matrix of second derivatives H(X0) = [d2f/dX_i dX_j] is our old friend, the Hessian matrix. We can solve for dX using the inverse of the Hessian matrix:
dX = -H(X0)^(-1) Vf(X0).
Putting this into an iterative search scheme for n decision variables:
X1 = X0 - H(X0)^(-1) Vf(X0).
Continue iterations until |df(X0)/dX_i| < e for all i, where e is a convergence tolerance.

Sequential Linear Programming

An efficient technique for solving convex programming problems with nearly linear objective and constraint functions. As the name suggests, each of the approximating problems will be an LP problem that can be solved relatively efficiently. Also, successive problems will differ only by one constraint, so the dual simplex method can be used to solve the sequence of problems very efficiently.

General procedure: Start with an initial trial solution x^i, i = 0. Use a first-order Taylor series approximation to linearize the objective and constraint functions about the point x^i:
f(x) ~ f(x^i) + Vf(x^i)^T (x - x^i)
g(x) ~ g(x^i) + Vg(x^i)^T (x - x^i)
(a) Formulate the approximating linear programming problem as:
Minimize f(x^i) + Vf(x^i)^T (x - x^i)
Subject to: g_j(x^i) + Vg_j(x^i)^T (x - x^i) <= 0, for j = 1, ..., m.
(b) Solve the approximating LP to find a new trial solution x^(i+1).
(c) Evaluate the original constraints at x^(i+1), that is, find g_j(x^(i+1)), j = 1, ..., m.

Example sequential LP problem:
Max 3 X1 + 5 X2 + 10 cos(X3)
Subject to: X1 + X2 + X3 <= 20.
Search over X3 (enumeration, Fibonacci, Newton, or gradient search), with an LP over X1 and X2 for each value of X3 tried.
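For the example above, the inner LP can be solved by inspection: for fixed X3, it is Max 3 X1 + 5 X2 subject to X1 + X2 <= 20 - X3 with X1, X2 >= 0, so the whole budget goes to X2. A sketch of the suggested decomposition, using plain grid enumeration over X3 (the grid spacing is an assumption):

```python
import math

def inner_lp(x3):
    """Closed-form optimum of the LP in X1, X2 for fixed X3:
    Max 3 X1 + 5 X2  s.t.  X1 + X2 <= 20 - x3, X1, X2 >= 0.
    The larger coefficient (5) takes the whole budget, so X2 = 20 - x3."""
    return 5 * (20 - x3)

# Outer search over X3 in [0, 20] (here plain enumeration on a grid).
best_x3, best_z = max(
    ((x3, inner_lp(x3) + 10 * math.cos(x3))
     for x3 in [i * 0.01 for i in range(2001)]),
    key=lambda pair: pair[1])
# Z(x3) = 100 - 5 x3 + 10 cos(x3) is maximized at x3 = 0, where Z = 110.
```

The same outer loop could use the Fibonacci or gradient search of the earlier sections instead of enumeration.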
Quasi-Global Optimum Search Methods

These rely on finding and comparing several local optima, by starting local searches in different areas of the feasible region. Once a local optimum has been found, how can a new starting point be found that makes finding a different local optimum most likely?

Approaches:
1. Eyeball it: manually choose starting points from different parts of the feasible region.
2. Randomly select new starting points.
3. Select a new starting point which maximizes the distance from all previously examined points:
Step 1: Start a local optimum search from x^0_k, ending at a local optimum x^F_k. Remember the best solution found, x*, and its objective function value f*.
Step 2: If k previous local searches have been completed over n decision variables, solve:
Max Z
Subject to: Z <= sum over i = 1, ..., n of |X_i - X_i^j|, for each previously examined point j = 1, ..., k, and g(X) <= b.
Step 3: Use the solution to this linear program as a new starting point for a local search.
Step 4: Go to Step 2.
The above linear program finds a point a maximum distance from all points previously examined. If Z* <= e, STOP: enough of the area has been searched.

This approach is computationally intensive; often only local optima are found; and it is harder to program (fewer nice solution packages).

Example: Modeling to Generate Alternatives using HSJ. Brill, E.D., S.Y. Chang, and L.D. Hopkins (1982), "Modeling to generate alternatives: The HSJ approach," Management Science, Vol. 28, No. 3, pp. 221-225.

Dynamic Programming (DP)

Dynamic programming is a method of solving a particular class of math programming problems. The method is essentially an efficient form of enumeration, somewhat like the branch-and-bound method of integer programming. We illustrate the method by way of a problem.

Example: The stagecoach problem (starts p. 320 of the Hillier and Lieberman text). A salesman wants to get from location 1 to location 10 with a minimum of risk, as represented by life insurance premiums on each intermediate journey. [Figure: the salesman's map of locations 1 through 10.] Note that the total trip is divided into 4 stages: 1 to (2, 3, or 4); (2, 3, or 4) to (5, 6, or 7); (5, 6, or 7) to (8 or 9); and (8 or 9) to 10.

The life insurance cost for each possible trip segment, C_ij for going from i to j:
Stage 1: C(1,2) = 2, C(1,3) = 4, C(1,4) = 3
Stage 2: C(2,5) = 7, C(2,6) = 4, C(2,7) = 6; C(3,5) = 3, C(3,6) = 2, C(3,7) = 4; C(4,5) = 4, C(4,6) = 1, C(4,7) = 5
Stage 3: C(5,8) = 1, C(5,9) = 4; C(6,8) = 6, C(6,9) = 3; C(7,8) = 3, C(7,9) = 3
Stage 4: C(8,10) = 3, C(9,10) = 4

How to select a route with minimum total life insurance cost?

1. By enumeration: Compare the total cost of each possible combination of routes: 1-2-7-8-10, 1-2-7-9-10, 1-2-6-8-10, 1-2-6-9-10,
1-2-5-8-10, 1-2-5-9-10, 1-3-7-8-10, etc. There are 18 possible combinations of routes (3 x 3 x 2 x 1 = 18): lots of combinations.

2. Use a short-sighted method: Choose the cheapest route for the first stage (1 -> 2, cost 2), then the cheapest route from 2 for the second stage (2 -> 6, cost 4), then again for the third stage (6 -> 9, cost 3), and the fourth stage (9 -> 10, cost 4). The total cost is then 13: 1 -> 2 -> 6 -> 9 -> 10 costs 13. But 1 -> 4 -> 6 -> 9 -> 10 costs only 3 + 1 + 3 + 4 = 11. So this method will not guarantee the optimal solution.

3. Dynamic programming: Note that the routing decision is divided into 4 stages. At each stage, only one next step of several possible steps can be chosen.

For Stage 4 (the last decision), make a table:
n = 4:
  s = 8: f*_4(s) = 3, x*_4 = 10
  s = 9: f*_4(s) = 4, x*_4 = 10
Here x_4 is the decision variable of which location to go to next (x_4 is always 10, the end). s is the state in which the stage's decision is made, the entering state or condition: go from 8 or 9. f*_4(s) is the cost of going from the present possible state to the final state. So far, this isn't very interesting.

Let's now work backwards to the next-to-last stage, Stage 3: f_3(s, x_3) = C(s, x_3) + f*_4(x_3).
n = 3:
  s = 5: f_3(s, 8) = 1 + 3 = 4; f_3(s, 9) = 4 + 4 = 8; so f*_3(s) = 4, x*_3 = 8
  s = 6: f_3(s, 8) = 6 + 3 = 9; f_3(s, 9) = 3 + 4 = 7; so f*_3(s) = 7, x*_3 = 9
  s = 7: f_3(s, 8) = 3 + 3 = 6; f_3(s, 9) = 3 + 4 = 7; so f*_3(s) = 6, x*_3 = 8
Going from 5 to 10, it is best to pass through 8. Going from 6 to 10, it is best to pass through 9. Going from 7 to 10, it is best to pass through 8.

f is the recursive objective function ("cost-to-go" function, or accumulated objective function). It has two parts: (1) the direct cost of the decision in the current stage and state, and (2) the best implied cost for all the later stages, f*_{k+1}(s_{k+1}).

Backwards again, to Stage 2: f_2(s, x_2) = C(s, x_2) + f*_3(x_2).
n = 2:
  s = 2: 7 + 4 = 11 (x = 5), 4 + 7 = 11 (x = 6), 6 + 6 = 12 (x = 7); f*_2(s) = 11, x*_2 = 5 or 6
  s = 3: 3 + 4 = 7 (x = 5), 2 + 7 = 9 (x = 6), 4 + 6 = 10 (x = 7); f*_2(s) = 7, x*_2 = 5
  s = 4: 4 + 4 = 8 (x = 5), 1 + 7 = 8 (x = 6), 5 + 6 = 11 (x = 7); f*_2(s) = 8, x*_2 = 5 or 6
Same interpretation.

Finally, back again to Stage 1: f_1(s, x_1) = C(s, x_1) + f*_2(x_1).
n = 1:
  s = 1: 2 + 11 = 13 (x = 2), 4 + 7 = 11 (x = 3), 3 + 8 = 11 (x = 4); f*_1(s) = 11, x*_1 = 3 or 4
The best decision in Stage 1 is to go to either location 3 or 4. This is the overall optimum.
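The backward recursion above is compact in code; a sketch for this network, with the arc costs from the table:

```python
# Arc costs C(i, j) for the stagecoach network.
cost = {
    1: {2: 2, 3: 4, 4: 3},
    2: {5: 7, 6: 4, 7: 6}, 3: {5: 3, 6: 2, 7: 4}, 4: {5: 4, 6: 1, 7: 5},
    5: {8: 1, 9: 4}, 6: {8: 6, 9: 3}, 7: {8: 3, 9: 3},
    8: {10: 3}, 9: {10: 4},
}

f = {10: 0}       # cost-to-go f*(s); f*(10) = 0 at the destination
best = {}         # an optimal next location from each state
for s in sorted(cost, reverse=True):           # states 9, 8, ..., 1
    # min over (cost + cost-to-go, next location) pairs
    f[s], best[s] = min((c + f[x], x) for x, c in cost[s].items())
# f[1] == 11; following best[] from 1 recovers one optimal route.
```

Ties (here, x*_1 = 3 or 4) are broken arbitrarily by the tuple comparison, so the trace recovers one of the equally optimal routes.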
Looking at the Stage 2 results: if coming from location 3, going to location 5 is preferred; and if coming from location 4, go to either location 5 or 6. Then, for Stages 3 and 4: from 5, go to 8 and then 10; from 6, go to 9 and then 10. There are 3 equally optimal paths: 1-3-5-8-10, 1-4-5-8-10, and 1-4-6-9-10. All have total costs of 11.

Why did this work? Elimination of inferior subsets of decisions. (Go through the stages to see why.)

DP Theory

In solving this or any other DP problem, we must formulate problems into:
- Stages: A decision must be made at each stage.
- States: When making a decision at a stage, there are a number of states or conditions we can be in when making the decision. State variables must be discrete.
- Decision variables and options: At each stage, we have a limited number of choices. Decision variables must be discrete.
- Objective function.

The recursive (accumulated) objective function, for the usual "backwards recursion," is
f_n(S, X_n) = C_n(S, X_n) + f*_{n+1}(S_{n+1}), with S_{n+1} = h(S, X_n),
where h(S, X_n) is the state transition equation. DP works by accumulating the optimal objective function as it moves through stages, eliminating bad decisions via the "recursive relationship." At the last stage of the recursion, one optimal decision is specified and the optimal objective function value is found. The rest of the optimal decisions are found by tracing from states to decisions through the other stages.

Backward DP Example: World Health Council (example, p. 339 in the Hillier and Lieberman text)

The WHC has 5 medical teams it can allocate among 3 countries. It would like to allocate these teams to maximize the additional person-years of life (APYL) for all countries combined. APYL = increased life expectancy x population. (NOTE: another way is to increase population while keeping life expectancy the same.)

Table 1 gives the return, in 1000s of person-years, for each country, P_i(x_i):
x_i (no. of teams):  Country 1 / Country 2 / Country 3
  0:    0 /   0 /   0
  1:   45 /  20 /  50
  2:   70 /  45 /  70
  3:   90 /  75 /  80
  4:  105 / 110 / 100
  5:  120 / 150 / 130

Formulated as a math program:
Max Z = sum over i = 1..3 of P_i(X_i)
Subject to: sum over i of X_i <= 5, X_i integer for all i.
Could solve by integer programming.
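At this size, the integer program can even be solved by brute-force enumeration of all feasible allocations, a useful check, though (as discussed below) it scales poorly:

```python
from itertools import product

# P[i][x]: Table 1 returns (1000s of person-years) for x teams in country i+1.
P = [[0, 45, 70, 90, 105, 120],
     [0, 20, 45, 75, 110, 150],
     [0, 50, 70, 80, 100, 130]]

best_value, best_plan = max(
    (P[0][x1] + P[1][x2] + P[2][x3], (x1, x2, x3))
    for x1, x2, x3 in product(range(6), repeat=3)
    if x1 + x2 + x3 <= 5)
# best_value == 170 at allocation (1, 3, 1)
```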
For solution by DP:
- Stages: Let each country be a stage. The decision in Stage 3, say, will be how many teams go to Country 3.
- States: The incoming state of each stage will be the number of teams remaining to be allocated.
- Decision: How many teams should go to the country in a given stage, if s teams remain to be allocated?
- Objective function: Max Z = sum of P_i(X_i).
- Accumulated objective function: f_n(s, X_n) = P_n(X_n) + f*_{n+1}(s - X_n), the "recursive relationship."

Again, we'll solve it backwards, starting with Stage (Country) 3.
n = 3: f*_3(s) and x*_3:
  s = 0: 0, x = 0;  s = 1: 50, x = 1;  s = 2: 70, x = 2;  s = 3: 80, x = 3;  s = 4: 100, x = 4;  s = 5: 130, x = 5.
Essentially: allocate all that remains at the last stage to Country 3.

Stage 2, n = 2: f_2(s, X_2) = P_2(X_2) + f*_3(s - X_2); entries for X_2 = 0, 1, ..., s:
  s = 0: 0; f*_2 = 0, x*_2 = 0
  s = 1: 50, 20; f*_2 = 50, x*_2 = 0
  s = 2: 70, 70, 45; f*_2 = 70, x*_2 = 0 or 1
  s = 3: 80, 90, 95, 75; f*_2 = 95, x*_2 = 2
  s = 4: 100, 100, 115, 125, 110; f*_2 = 125, x*_2 = 3
  s = 5: 130, 120, 125, 145, 160, 150; f*_2 = 160, x*_2 = 4
For example, f_2(4, 2) = P_2(2) + f*_3(4 - 2) = 45 + 70 = 115. Same interpretation as before.

Stage 1: At Stage 1, no allocations have been made, so s = 5.
n = 1: f_1(5, X_1) = P_1(X_1) + f*_2(5 - X_1); for X_1 = 0, ..., 5: 160, 170, 165, 160, 155, 120; f*_1 = 170, x*_1 = 1.
For example, P_1(1) + f*_2(4) = 45 + 125 = 170.

The solution: The maximum number of additional person-years is 170,000. Trace the optimal decisions forward through the stages:
  Stage 1: s = 5, x*_1 = 1, leaving s = 4
  Stage 2: s = 4, x*_2 = 3, leaving s = 1
  Stage 3: s = 1, x*_3 = 1, leaving s = 0

Comparing Computational Effort Needed for Enumeration vs. DP

The typical comparison is by the number of cost calculations needed to solve the problem.
For DP: no. of cost calculations = sum over stages of (number of states in the stage) x (number of decisions examined). If s_max and d_max are the same for all stages, no. of cost calculations ~ n_max x s_max x d_max.
For enumeration: no. of cost calculations = product over stages of d_n,max; if d_max is the same for all stages, no. of cost calculations = d_max^(n_max).

For the WHC problem:
By DP: no. of cost calculations = 6x1 + 6x6 + 1x6 = 48 including infeasible solutions, or 6 + 21 + 6 = 33 excluding infeasible solutions.
By enumeration: no. of cost calculations = 6 x 6 x 6 = 6^3 = 216 including infeasible solutions.
DP requires about 22% of the calculation effort for this problem.
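The backward recursion f_n(s) = max over x of [P_n(x) + f*_{n+1}(s - x)] can be sketched as:

```python
P = [[0, 45, 70, 90, 105, 120],      # country 1 returns from Table 1
     [0, 20, 45, 75, 110, 150],      # country 2
     [0, 50, 70, 80, 100, 130]]      # country 3

teams, n = 5, len(P)
f = [0] * (teams + 1)                 # boundary condition f*_{n+1}(s) = 0
best = [[0] * (teams + 1) for _ in range(n)]
for i in range(n - 1, -1, -1):        # stages (countries) 3, 2, 1
    f_new = [0] * (teams + 1)
    for s in range(teams + 1):        # teams remaining to allocate
        f_new[s], best[i][s] = max(
            (P[i][x] + f[s - x], x) for x in range(s + 1))
    f = f_new

plan, s = [], teams                   # trace the decisions forward
for i in range(n):
    plan.append(best[i][s])
    s -= best[i][s]
# f[teams] == 170 (i.e., 170,000 person-years), plan == [1, 3, 1]
```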
Forward DP: WHC Example Revisited

Forward accumulated objective function: f_n(S_{n+1}, X_n) = P_n(X_n) + f*_{n-1}(S_n), with S_n = S_{n+1} + X_n. The concern is the outgoing rather than the incoming state.

Stage 1, n = 1 (incoming state S_1 = 5): f_1 = P_1(X_1), with outgoing state S_2 = 5 - X_1:
  S_2 = 0: f*_1 = 120, X_1 = 5
  S_2 = 1: f*_1 = 105, X_1 = 4
  S_2 = 2: f*_1 = 90, X_1 = 3
  S_2 = 3: f*_1 = 70, X_1 = 2
  S_2 = 4: f*_1 = 45, X_1 = 1
  S_2 = 5: f*_1 = 0, X_1 = 0

Stage 2, n = 2: f_2(S_3, X_2) = P_2(X_2) + f*_1(S_3 + X_2); entries for X_2 = 0, 1, ...:
  S_3 = 0: 120, 125, 135, 145, 155, 150; f*_2 = 155, X*_2 = 4
  S_3 = 1: 105, 110, 115, 120, 110; f*_2 = 120, X*_2 = 3
  S_3 = 2: 90, 90, 90, 75; f*_2 = 90, X*_2 = 0, 1, or 2
  S_3 = 3: 70, 65, 45; f*_2 = 70, X*_2 = 0
  S_3 = 4: 45, 20; f*_2 = 45, X*_2 = 0
  S_3 = 5: 0; f*_2 = 0, X*_2 = 0

Stage 3, n = 3: f_3(S_4, X_3) = P_3(X_3) + f*_2(S_4 + X_3). With outgoing state S_4 = 0, for X_3 = 0, ..., 5:
  0 + 155, 50 + 120, 70 + 90, 80 + 70, 100 + 45, 130 + 0 = 155, 170, 160, 150, 145, 130; f*_3 = 170, X*_3 = 1.
It works: the same optimum, 170.

More DP Applications: Reservoir Releases Over Time from a Single Reservoir

S_{t+1} = S_t + I_t - Q_t (mass balance, given releases Q_t).
The DP:
- Stages: time periods t.
- States: volume of water in the reservoir at time t, S_t.
- Decisions: how much water to release at time t, Q_t.
- Objective function: Max economic benefits = sum over t of b_t(S_t, Q_t).
Cumulative objective function (backward-moving):
f_t(S_t, Q_t) = b_t(S_t, Q_t) + f*_{t+1}(S_t + I_t - Q_t).
Start with t = t_n and move stages backwards to t = 1. If Q_t > S_t + I_t, then b(S_t, Q_t) = -infinity (infeasible).
Stage t_n: f_{t_n}(S, Q) = b_{t_n}(S, Q), for states S = 0, ..., S_p = reservoir capacity. We must discretize both release values (m values) and storage values (p values).
Stage t < t_n: f_t(S, Q) = b_t(S, Q) + f*_{t+1}(S + I_t - Q), again over the discretized states up to the reservoir capacity.
Stage t = 1: f_1(S_1, Q_1) = b_1(S_1, Q_1) + f*_2(S_1 + I_1 - Q_1), with S_1 = the initial storage value. There is only one reservoir state in the present, S_1.
What would a forwards formulation be? f_t(S_{t+1}, Q_t) = b_t(S_t, Q_t) + f*_{t-1}(S_t), where S_{t+1} = S_t + I_t - Q_t is the storage (state) leaving stage t.
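A sketch of the backward recursion for a single reservoir, with integer discretization of storages and releases and a made-up benefit function (both assumptions for illustration):

```python
def reservoir_dp(benefit, inflows, capacity):
    """Backward DP: f_t(s) = max over q of [benefit(t, s, q) + f_{t+1}(s')],
    with mass balance s' = s + I_t - q and releases limited to q <= s + I_t."""
    T = len(inflows)
    f = [0.0] * (capacity + 1)                  # boundary: f_{T+1}(s) = 0
    policy = [[0] * (capacity + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        f_new = [0.0] * (capacity + 1)
        for s in range(capacity + 1):           # discretized storage states
            best_val, best_q = float("-inf"), 0
            for q in range(s + inflows[t] + 1): # feasible discretized releases
                s_next = min(s + inflows[t] - q, capacity)  # spill over capacity
                val = benefit(t, s, q) + f[s_next]
                if val > best_val:
                    best_val, best_q = val, q
            f_new[s], policy[t][s] = best_val, best_q
        f = f_new
    return f, policy

# Toy benefit: value each unit of release equally.
f, policy = reservoir_dp(lambda t, s, q: q, inflows=[2, 2], capacity=3)
# f[3] == 7: starting full (S = 3) with inflows 2 + 2, all water can be released
```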
Computational effort to solve this reservoir problem: let t_n = number of stages, m = number of decision alternatives per stage, and p = number of discretizations of the state variable.
Number of cost calculations by DP ~ t_n x p x m. Number of cost calculations by enumeration ~ m^(t_n).
Let's compare the computational effort for t_n = 7 days and p = m:
  m = p = 10: DP ~ 7x10x10 = 700; enumeration = 10^7 = 10,000,000
  m = p = 100: DP ~ 7x10^4; enumeration = 10^14
  m = p = 1000: DP ~ 7x10^6; enumeration = 10^21
It's possible to solve a much larger reservoir problem by DP than would otherwise be possible.

Operating 2 Reservoirs in Series

- Stages: time periods t.
- State 1: storage in Reservoir 1 (p_1 discretized values). State 2: storage in Reservoir 2 (p_2 discretized values).
- Decision 1: release from Reservoir 1 (m_1 discretized values). Decision 2: release from Reservoir 2 (m_2 discretized values).
Cumulative objective function (backward-moving):
f_t(S_1t, S_2t, Q_1t, Q_2t) = b_t(S_1t, S_2t, Q_1t, Q_2t) + f*_{t+1}(S_1t + I_t - Q_1t, S_2t + Q_1t - Q_2t).
Note: conservation-of-mass equations for each reservoir determine the transition between states at each stage:
S_1,t+1 = S_1t + I_t - Q_1t and S_2,t+1 = S_2t + Q_1t - Q_2t.

Computational effort, to set up each stage t: number of cost calculations by DP = t_n x p_1 x p_2 x m_1 x m_2. For 2 reservoirs over 7 days, let p_1 = p_2 = m_1 = m_2 = m:
  m = 10: DP ~ 7x10^4; enumeration ~ 10^14
  m = 100: DP ~ 7x10^8; enumeration ~ 10^28
  m = 1000: DP ~ 7x10^12; enumeration ~ 10^42
Calculations by enumeration = (m^2)^(t_n), here m^14.
General comments: multiple state variables and multiple decision variables bring the "curse of dimensionality."

The Final Days: Multi-Objective Optimization

Viewing operations research in terms of alternative generation and comparison:
- to suggest solutions
- to get people talking
- the need to test and refine decisions suggested by optimization methods.

Review again the steps of OR application, and the uses of operations research in practice, pre-optimization and post-optimization. Go back and put the methods in the context of the operations research method. Pre-optimization (use notes from early in class):
1. Formulating the problem as an OR problem: What is the problem? Objective function definitions; decision variables; constraints; simplifications identified.
2. Solve the mathematical problem.
3. Interpret results (perhaps modify, re-solve, and reinterpret).
Overview and Comparison of Solution Methods

Make a huge table covering:
1. Formulation limitations
2. Computational limitations
3. Advantages
4. Disadvantages
5. How does this method work?
for each of: Trial & Error, Enumeration, Calculus, Lagrange Multipliers, LP, Integer LP, Non-Linear Programming, DP, GA.
Discussion: why select one method over another?

THE SIMPLEX ALGORITHM (in one page)

The following steps form the fundamentals of the Simplex Algorithm for solving linear programs.
1. Set up an "initial tableau."
2. Is the trial solution represented by the initial tableau optimal? If all coefficients in Row 0 are >= 0, then the solution is optimal and you may STOP. If any coefficients in Row 0 of the tableau are < 0, the solution represented by this tableau is not the optimal solution, and the set of basic variables must be changed.
3. Select a new basic variable. Select as the new basic variable the variable with the most negative coefficient in Row 0. The column for this variable is the "pivot column."
4. Select a new non-basic variable. Divide the RHS entry of each row by the coefficient in the column of the new basic variable. The new non-basic variable is the old basic variable associated with the row having the minimum ratio. This row is called the "pivot row."
5. Update the tableau. Form a new tableau that has a zero for each entry in the pivot column except for the coefficient in the pivot row; the entry in both the pivot row and pivot column must be equal to 1. Use linear algebra and Gaussian elimination techniques to do this. Replace the old basic variable with the new basic variable in the "Basic Variable" column. This new tableau represents a new corner-point solution.
6. Is this new solution optimal? If all coefficients in Row 0 of the new tableau are >= 0, then STOP: the solution is optimal. If any coefficients in Row 0 of the tableau are < 0, the solution represented by this tableau is not the optimal solution, and the set of basic variables must be changed: GO TO STEP 3.
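Steps 2 through 5 can be sketched as a single pivot routine on a tableau stored as nested lists (Row 0 first, RHS last in each row). The driver below re-solves a small maximization example, with Row 0 holding the negated objective coefficients — an implementation convention, not part of the notes:

```python
def pivot(tableau, basis):
    """One simplex iteration (Steps 3-5); returns False when Row 0 has no
    negative coefficient (Steps 2/6: the current tableau is optimal)."""
    row0 = tableau[0]
    col = min(range(len(row0) - 1), key=lambda j: row0[j])   # Step 3
    if row0[col] >= 0:
        return False
    # Step 4: minimum ratio test over rows with positive pivot-column entries.
    ratios = [(tableau[i][-1] / tableau[i][col], i)
              for i in range(1, len(tableau)) if tableau[i][col] > 1e-12]
    row = min(ratios)[1]           # an empty `ratios` would mean unboundedness
    piv = tableau[row][col]
    tableau[row] = [v / piv for v in tableau[row]]           # Step 5
    for i in range(len(tableau)):
        if i != row and tableau[i][col]:
            factor = tableau[i][col]
            tableau[i] = [v - factor * p
                          for v, p in zip(tableau[i], tableau[row])]
    basis[row - 1] = col
    return True

# Max Z = 2 X1 + X2, s.t. X1 + X2 <= 4, X2 <= 3; columns X1, X2, S1, S2, RHS.
tableau = [[-2.0, -1.0, 0.0, 0.0, 0.0],
           [1.0, 1.0, 1.0, 0.0, 4.0],
           [0.0, 1.0, 0.0, 1.0, 3.0]]
basis = [2, 3]                     # S1 and S2 start basic
while pivot(tableau, basis):
    pass
# Row 0 ends as [0, 1, 2, 0, 8]: Z = 8, with shadow price 2 under S1
```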
Reading the final tableau: The following tableau represents an optimal solution and final tableau for the problem
Max Z = 2 X1 + X2
Subject to: (1) X1 + X2 <= 4; (2) X2 <= 3.

Basic Variable | Eqn | Z | X1 | X2 | S1 | S2 | RHS
Z              |  0  | 1 |  0 |  1 |  2 |  0 |  8
X1             |  1  | 0 |  1 |  1 |  1 |  0 |  4
S2             |  2  | 0 |  0 |  1 |  0 |  1 |  3

The value of each basic variable is given by the RHS column; the value of each non-basic variable is zero: Z = 8, X1 = 4, S2 = 3, X2 = 0, S1 = 0.
The value of the Lagrange multiplier (shadow price) for each constraint is given by the coefficient in Row 0 under the associated slack variable (S1 or S2): lambda_1 = 2, lambda_2 = 0.
The reduced cost of each coefficient in the objective function is given by the coefficient in Row 0 under the associated decision variable (X1 or X2). Can you obtain these results?

Some Simplex Details

The basic simplex method described earlier applies only to a relatively narrow set of linear programming problems. In particular, the problem must be a MINIMIZATION problem, and all constraints must be of the form (sum over i of a_ij X_i) <= b_j, i.e., all constraints must be less-than-or-equal-to constraints. There are also some ambiguities in the application of the simplex method. This lecture will try to clear up these problems.

Maximization: The simplex method minimizes the value of an objective function. But note: Max z = -Min(-z), and Min z = -Max(-z). So we can always make a maximization problem into a minimization problem, or vice versa.

Ties for the New Basic Variable: If two or more variables have the same negative coefficient in Row 0 of the tableau, and that coefficient is the most negative coefficient in Row 0, then select any variable from those with the tie. The choice is arbitrary.

Ties for the New Non-Basic Variable: What if 2 or more variables tie on the minimum ratio test for becoming the new non-basic variable? Break the tie arbitrarily. There is a possibility of cycling problems (discussed in the book), but this is very rare.

Unbounded Objective Function (Max z -> +infinity, or Min z -> -infinity):
An unbounded solution occurs if the new basic variable's column yields an RHS/pivot ratio of infinity, i.e., if Min(RHS/pivot) is unbounded because no row has a positive pivot-column coefficient. For this case, the value of the new basic variable is not limited by any constraint and becomes infinite.

Multiple Optima: Where the optima fall on two or more corner points, the optima will also include the edges of the feasible region between these points. These cases will appear only when, after the STOPPING criterion has been reached, at least one non-basic variable has a coefficient of zero in Row 0: if this variable is increased from zero, it will not affect the objective function. The other optimal corner-point solutions are found by performing extra tableau iterations.

Equality Constraints: Equality constraints are of the form (sum over i of a_ij X_i) = b_j, with b_j >= 0. The problem with equality constraints is that the initial solution, X = 0, is no longer feasible. To fix this, we add a new type of slack variable, called an artificial variable. This type of slack variable acts as a regular slack variable in the simplex method. The new equality constraint becomes (sum over i of a_ij X_i) + R_j = b_j. We also add a huge penalty to the objective function if R_j > 0. The new objective function becomes Min z = (sum over i of c_i X_i) + M (sum over j of R_j), where M is a HUGE number compared to the other coefficients c_i. These changes force the simplex method to quickly move away from the initially infeasible initial solution to a real, feasible corner point. This approach is called the BIG M method.

>= Inequality Constraints: These are of the form (sum over i of a_ij X_i) >= b_j, or, with a (surplus) slack variable, (sum over i of a_ij X_i) - S_j = b_j. The same problem arises with >= inequality constraints as with equality constraints, i.e., the initial tableau does not represent a feasible solution: unless b_j <= 0, X cannot be equal to zero, and the initial trial solution is infeasible. We solve this problem again using the Big M method. First, an artificial variable is added to the constraint: (sum over i of a_ij X_i) - S_j + R_j = b_j. Second, a large penalty is given for having R_j > 0 by adding
a HUGE coefficient for R_j in the objective function: Min Z = (sum over i of c_i X_i) + M R_j, where M is very large. This quickly drives R_j -> 0 after the initial tableau and points the simplex method towards the feasible region. In sum, the BIG M method allows infeasible solutions, but only at an overwhelming penalty.

Negative RHS: Sometimes constraints have the form (sum over i of a_ij X_i) <= b_j, where b_j < 0. This can cause the initial tableau solution (all X = 0) to be infeasible. The problem persists if we multiply both sides by negative one and reverse the inequality, and it becomes similar to solving a >= inequality constraint. We solve this, again, using the Big M method: after multiplying through by -1,
-(sum over i of a_ij X_i) - S_j + R_j = ABS(b_j),
with Min z = (sum over i of c_i X_i) + M R_j, where M is very large.

No Feasible Solution: This case can only occur if there are equality constraints or >= inequality constraints in addition to regular <= constraints; therefore, artificial variables must appear in the formulation. [Figure: three sketches of constraint pairs H1 and H2: no feasible solution; two parallel equality constraints; no feasible solution.] This case is discovered if the solution to the problem has at least one artificial variable not equal to zero.

Allowing Negative Values for a Decision Variable: How could we let a decision variable X_i have any value, including negative? We must divide X_i into two new decision variables, X+_i and X-_i: X+_i will represent X_i if it has a positive value, and X-_i will represent negative values of X_i. We then replace X_i in all constraints and the objective function with X_i = X+_i - X-_i. We can similarly add a lower bound to the negative value of X_i by adding another constraint, X-_i <= L_i, where L_i is the absolute value of the lower (negative) bound. This ensures that X_i >= -L_i.

Implementing the Big-M Method

We've suggested this method a lot. What does it do to the simplex method? A simple example:
Min 5 X1 + 2 X2
Subject to: (1) X1 >= 1; (2) 2 X1 + X2 = 4; (3) 3 X1 + 4 X2 <= 30.
Change this to:
Min 5 X1 + 2 X2 + M R1 + M R2
Subject to: X1 - S1 + R1 = 1; 2 X1 + X2 + R2 = 4; 3 X1 + 4 X2 + S3 = 30.
Note that the 3rd constraint, a <= constraint, does not require an artificial variable.

The initial tableau:
Basic Variable | Eqn | Z | X1 | X2 | S1 | S3 | R1 | R2 | RHS
Z              |  0  | 1 |  5 |  2 |  0 |  0 |  M |  M |   0
R1             |  1  | 0 |  1 |  0 | -1 |  0 |  1 |  0 |   1
R2             |  2  | 0 |  2 |  1 |  0 |  0 |  0 |  1 |   4
S3             |  3  | 0 |  3 |  4 |  0 |  1 |  0 |  0 |  30

Note that the "artificial variables" R1 and R2 are not yet basic variables, however (we want to start at the origin): they need to have a coefficient of zero in Row 0. We have to fix this before we begin the simplex method, using linear algebra operations (Gaussian elimination). Multiplying Row 1 by M and subtracting it from Row 0 makes R1 basic, and multiplying Row 2 by M and subtracting this from Row 0 makes R2 basic. This becomes the true initial tableau:

Basic Variable | Eqn | Z |  X1  |  X2 | S1 | S3 | R1 | R2 | RHS
Z              |  0  | 1 | 5-3M | 2-M |  M |  0 |  0 |  0 | -5M
R1             |  1  | 0 |  1   |  0  | -1 |  0 |  1 |  0 |   1
R2             |  2  | 0 |  2   |  1  |  0 |  0 |  0 |  1 |   4
S3             |  3  | 0 |  3   |  4  |  0 |  1 |  0 |  0 |  30

This tableau represents the corner-point "solution" at the origin, X1 = X2 = 0. This pseudo-solution is penalized by its poor performance of -5M. Ah, that's better! Now we start the simplex method, using this as our initial tableau.

To complete the example, following the standard Simplex algorithm:
Step 2: This solution is not optimal (several coefficients in Row 0 are negative).
Step 3: The X1 column has the most negative value, making this the "pivot column" and X1 the new basic variable.
Step 4: Eqn/Row 1 has the minimum ratio of RHS to coefficient in the X1 column (1/1), making this the "pivot row" and R1 the new non-basic variable.
Step 5: Updating the tableau to make X1 basic, the new tableau becomes:

Basic Variable | Eqn | Z | X1 |  X2 |  S1  | S3 |  R1  | R2 |  RHS
Z              |  0  | 1 |  0 | 2-M | 5-2M |  0 | 3M-5 |  0 | -5-2M
X1             |  1  | 0 |  1 |  0  |  -1  |  0 |   1  |  0 |   1
R2             |  2  | 0 |  0 |  1  |   2  |  0 |  -2  |  1 |   2
S3             |  3  | 0 |  0 |  4  |   3  |  1 |  -3  |  0 |  27

Step 6: This tableau is not optimal. Continue the algorithm with Step 3.
column Step 4 Row 2 is the new nonbasic variable 22l Row 1 is excluded because of the negative coef Step 5 The new tableau is Basic Equation Coefficients f Variables Variable No 2 l x1 x2 s1 33 l R1 R2 RHS z 0 1 0 12 0 0 M 25M 10 X1 1 0 1 12 0 0 0 12 2 S1 2 0 0 12 1 0 1 12 1 S3 3 0 0 52 0 1 0 32 24 Step 6 This tableau is not optimal Continue the algorithm with Step 3 Stgp 3 X2 is the new basic variablepivot column Stgp 4 Row 2 is the new nonbasic variable l052 Step 5 The new tableau is Basic Equation Coefficients f Variables Variable No 2 l x1 x2 s1 33 l R1 R2 RHS z 0 1 0 0 1 0 1M 2M 9 X1 1 0 1 0 1 0 1 0 1 X2 2 0 0 1 2 0 2 1 2 S3 3 0 0 0 5 1 5 4 19 Stgp 6 This tableau is optimal with the solution X1 1 X2 2 Z 9 Deterministic Optimization and Design Jay R Lund UC Davis Winter 2002 Sensitivity Analysis in Linear Programming Sensitivity Analysis How would the solution change if parameter or coef cient values are varied Often for real problems parameter values are not known with certainty Thus sensitivity analysis can tell us the relative importance of different sources of uncertainty Sensitivity analysis is the systematic variation of parameter and input data values in a model to assess the affect of uncertainties or variation in these variables on model results and designs based on model output Sensitivity analysis helps answer such questions as 1 What is the effect if parameter 0 bj or a1 is off by some error 8 2 If input data values are accurate within some error E what are possible effects on model results 3 Is further research into the value of certain parameters and input data warranted Since many models have large numbers of parameters and input data conducting a complete sensitivity analysis on all parameters and data and all possible combinations of covaried errors is impossible This implies that sensitivity analysis must be somewhat judicious concentrating on those parameters and inputs that seem most important and prone to error Brute Force Sensitivity 
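The worked Big-M example above is small enough to check without simplex at all: an LP optimum lies at a corner point of the feasible region, so we can enumerate intersections of pairs of constraint boundaries, discard the infeasible points, and keep the best. A Python sketch (my own check, not part of the original notes) confirms the simplex result X1 = 1, X2 = 2, Z = 9; re-running it with perturbed coefficients is exactly the kind of brute-force experiment discussed next.

```python
from itertools import combinations

# Check the Big-M example: Min 5x1 + 2x2
# s.t. x1 >= 1, 2x1 + x2 = 4, 3x1 + 4x2 <= 30, x >= 0.
# Each boundary is a line a1*x1 + a2*x2 = b; corner points are intersections.
boundaries = [
    (1.0, 0.0, 1.0),   # x1 = 1
    (2.0, 1.0, 4.0),   # 2x1 + x2 = 4  (the equality constraint)
    (3.0, 4.0, 30.0),  # 3x1 + 4x2 = 30
    (1.0, 0.0, 0.0),   # x1 = 0
    (0.0, 1.0, 0.0),   # x2 = 0
]

def intersect(l1, l2):
    """Solve the 2x2 system for two boundary lines (None if parallel)."""
    (a, b, e), (c, d, f) = l1, l2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(x1, x2, tol=1e-9):
    return (x1 >= 1 - tol and abs(2 * x1 + x2 - 4) <= tol
            and 3 * x1 + 4 * x2 <= 30 + tol and x1 >= -tol and x2 >= -tol)

corners = [p for l1, l2 in combinations(boundaries, 2)
           for p in [intersect(l1, l2)] if p and feasible(*p)]
best = min(corners, key=lambda p: 5 * p[0] + 2 * p[1])
print(best, 5 * best[0] + 2 * best[1])   # the simplex result: (1, 2), Z = 9
```

Only two corner points survive the feasibility test here, (1, 2) and (2, 0), and the cheaper one matches the final tableau.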
Brute Force Sensitivity Analysis
We could always re-solve the linear program with new parameter values. Linear programming solutions are usually quite fast, so this is often a very reasonable option. But here are some simpler tricks. (These are also discussed in Chapter 6 of Hillier and Lieberman.)

Lagrange Multipliers / Shadow Prices / Dual Prices
The shadow price λj associated with each constraint represents the change in the value of the objective function if the RHS constant of the constraint is changed by one unit.

Note: Beware the sign of these Lagrange multipliers! A slight change in the formulation of the equations, or in the solution method, will change the sign of the Lagrange multiplier, although the absolute value should remain the same. It is common for two computer solution packages to give different signs for the shadow prices. In practice, I recommend interpreting the sign intuitively and then applying the absolute value for the magnitude. It is easy to intuit the sign: changing the constraint value to "loosen" the constraint should improve the value of the objective function.

These appear as the coefficients in Row 0 under the slack variables in the final simplex tableau.

Slack Variables
The value of the slack variable represents the amount of the "resource" remaining unused in the constraint; i.e., the constraint can be tightened by up to the value of the slack variable before the optimal solution is changed. These values are given directly by the simplex solution.

Reduced Cost
The reduced cost is the amount by which the coefficient in the objective function would have to change to make a nonbasic decision variable become basic (nonzero). These values are given by the coefficients in Row 0 under the decision variables in the final simplex tableau.

Ranges in Which the Basis is Unchanged
Remember, the basis is the set of variables with nonzero values. Often linear program output will include a "range in which the basis is unchanged". This output indicates how much particular coefficients in the objective function, RHS constants, and occasionally coefficients in the LHS of the constraints, can change before one or more of the basic variables must become nonbasic (and vice versa).

Within the range of objective function coefficient values for which the basis is unchanged, changes in these coefficient values will change the value of the objective function, but leave the optimal solution values for the decision variables completely unchanged.

Within the range of the RHS constants for which the basis is unchanged, changes in these RHS coefficient values for binding constraints will change both the solution values and the objective function value. For nonbinding constraints, the range on the tightening side should be the same as the slack variable values.

Example of LINDO Output

  MIN 5 X1 + 7 X2
  SUBJECT TO
    2) X1 + X2 >= 10000
    3) 0.2 X1 - 0.1 X2 >= 0
    4) 0.1 X1 - 0.2 X2 <= 0
  END

  LP OPTIMUM FOUND AT STEP 2
  OBJECTIVE FUNCTION VALUE
    1)     63333.3320

  VARIABLE        VALUE          REDUCED COST
  X1           3333.333252         0.000000
  X2           6666.666504         0.000000

  ROW    SLACK OR SURPLUS     DUAL PRICES
   2)        0.000000           6.333333
   3)        0.000000           6.666667
   4)     1000.000000           0.000000

  NO. ITERATIONS = 2

  RANGES IN WHICH THE BASIS IS UNCHANGED

                   OBJ COEFFICIENT RANGES
  VARIABLE   CURRENT COEF   ALLOWABLE INCREASE   ALLOWABLE DECREASE
  X1           5.000000          2.000000            19.000000
  X2           7.000000          INFINITY             2.000000

                   RIGHTHAND SIDE RANGES
  ROW    CURRENT RHS     ALLOWABLE INCREASE   ALLOWABLE DECREASE
   2    10000.000000         INFINITY            9999.999023
   3        0.000000        1000.000000          1000.000000
   4        0.000000         INFINITY            1000.000000

How Much Can c, b, and a Change Before the Solution Moves to Another Cornerpoint?
Consider the initial tableau for the problem, in matrix form:

  Eqn    Z |  Xi  | Sj | RHS
  0      1 | −c   |  0 |  0
  1..m   0 |  A   |  I |  b

(A holds the coefficients in the constraints; I is an identity matrix.)

The final tableau will have the form:

  Eqn    Z |   Xi     |  Sj  |  RHS
  0      1 | y*A − c  |  y*  |  y*b
  1..m   0 |  B⁻¹A    |  B⁻¹ |  B⁻¹b

NOTE: B is the basis matrix (B⁻¹ its inverse), and y* is the vector of Lagrange multiplier values at the optimal solution (the * indicates values at the optimal solution).
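The dual prices in the LINDO output above are easy to check numerically (a hypothetical helper sketch, not part of the notes): rows 2 and 3 are binding at the optimum (zero slack or surplus), so the optimal point solves a 2×2 linear system, and re-solving that system with the RHS of row 2 increased by one unit changes Z by the shadow price.

```python
# Finite-difference check of a dual price from the LINDO example above.
# At the optimum, rows 2 and 3 are binding, so the solution solves:
#   x1 + x2 = b1  and  0.2*x1 - 0.1*x2 = 0.
def solve_binding(b1):
    det = 1.0 * (-0.1) - 1.0 * 0.2          # Cramer's rule on the 2x2 system
    x1 = (b1 * (-0.1) - 1.0 * 0.0) / det
    x2 = (1.0 * 0.0 - b1 * 0.2) / det
    return x1, x2

def z(x):
    return 5 * x[0] + 7 * x[1]              # the objective: Min 5 X1 + 7 X2

z0 = z(solve_binding(10000))
z1 = z(solve_binding(10001))
print(round(z0, 3))        # 63333.333, the LINDO objective value
print(round(z1 - z0, 3))   # 6.333, the magnitude of the dual price on row 2
```

Note the change in Z per unit of b1 comes out to 19/3 ≈ 6.333, matching the reported dual price in magnitude (recall the sign caveat above).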
B⁻¹ and y* together summarize all the operations on the initial tableau needed to create the final, optimal tableau.

How much can we change c, b, or A before the basis changes?

Case 1: The optimal solution remains at the same cornerpoint (basic variables remain basic).
For changes in c that keep the same basis, the solution values and corner point remain the same, although the Lagrange multiplier values will change.
Let b̄ = b + Δb, Ā = A + ΔA, and c̄ = c + Δc, where Δb, ΔA, and Δc represent changes in b, A, and c respectively. This implies the new final tableau has the form:

  Eqn    Z |    Xn     |  Sj  |  RHS
  0      1 | y*Ā − c̄  |  y*  |  y*b̄
  1..m   0 |  B⁻¹Ā    |  B⁻¹ |  B⁻¹b̄

Case 2: The optimal solution has changed corner points (basic variables have changed).
The new simplex tableau found in Case 1 will no longer satisfy the optimality test if the optimal corner-point solution has changed. This happens if either another cornerpoint is optimal or there is now no feasible solution. Using these insights, we can find the changes in parameter values that keep the set of basic variables the same.

Case 3: The solution is infeasible (some basic variables will have negative values).
I.e., a decision or slack variable will have a negative value. This can be quickly found, because at least one element of B⁻¹b̄ will be negative.
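Case 3 is easy to demonstrate numerically with the Big-M example solved earlier (my own check, not from the notes). Its final basis is {X1, X2, S3}, so B is built from those columns of the constraint equations; computing B⁻¹b recovers the final-tableau RHS, and a perturbed b̄ immediately reveals infeasibility of the old basis.

```python
# Feasibility check B^-1 b for the Big-M example's final basis {X1, X2, S3}.
# Columns come from the constraints:
#   X1 - S1 + R1 = 1,  2 X1 + X2 + R2 = 4,  3 X1 + 4 X2 + S3 = 30.
B = [[1.0, 0.0, 0.0],
     [2.0, 1.0, 0.0],
     [3.0, 4.0, 1.0]]

def basic_values(b):
    """Return B^-1 b by forward substitution (B is lower triangular here)."""
    x = []
    for i, row in enumerate(B):
        x.append((b[i] - sum(row[j] * x[j] for j in range(i))) / row[i])
    return x

print(basic_values([1, 4, 30]))   # [1.0, 2.0, 19.0] -- the final-tableau RHS
print(basic_values([3, 4, 30]))   # [3.0, -2.0, 29.0] -- negative entry: Case 3
```

Raising the first RHS to 3 (i.e., requiring X1 ≥ 3 while 2X1 + X2 = 4) forces X2 negative, so the old basis is infeasible and the cornerpoint must change.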
Duality
Duality is part of the mathematical theory behind linear programming that makes linear programming a powerful problem-solving tool. Some useful results of duality:
1. Helps interpretation of LP results.
2. Very useful for sensitivity analysis.
3. Can be used to improve computational efficiency by reducing the number of constraints (simplex solves faster with more decision variables and fewer constraints).
4. Useful for speeding multiobjective analysis.

What is duality? For every original ("primal") LP problem there is an associated dual problem. Compare the LP statements:

  Primal Problem:         Dual Problem:
  Max Z = c x             Min W = y b
  ST  A x ≤ b             ST  y A ≥ c
      x ≥ 0                   y ≥ 0

At x*: Z* = c x*. At y*: W* = y b*... that is, W* = y* b. At optimal solutions, c x* = y* b = Z* = W*.

Compare the initial tableaux:

  Primal (Max):                Dual (Min):
      Xi   Sj | RHS                Yi    SDi | RHS
  0: −c    0  |  0            0:   b     0  |  0
  1..m: A  I  |  b            1..n: Aᵀ  −I  |  c

Constraints ↔ decision variables; objective function coefficients ↔ RHS constants.

Compare the final tableaux: in the primal's final tableau, the Row-0 coefficients under the slack variables sj give the optimal dual solution y*; in the dual's final tableau, the Row-0 coefficients under its surplus variables give the optimal primal solution x*.
Note: Solving either the dual or the primal by simplex solves both problems.

The relationship between primal and dual linear programming problems is shown by writing them out in full (following Quirino Paris):

  Primal:                                   Dual:
  Max Z = c1x1 + c2x2 + c3x3                Min R = b1y1 + b2y2
  Subject to:                               Subject to:
  a11x1 + a12x2 + a13x3 ≤ b1   (y1 ≥ 0)     a11y1 + a21y2 ≥ c1   (x1 ≥ 0)
  a21x1 + a22x2 + a23x3 ≤ b2   (y2 ≥ 0)     a12y1 + a22y2 ≥ c2   (x2 ≥ 0)
  x1 ≥ 0,  x2 ≥ 0,  x3 ≥ 0                  a13y1 + a23y2 ≥ c3   (x3 ≥ 0)

Here y1 and y2 are dual variables for the primal constraints, and x1, x2, x3 are dual variables for the dual constraints. The notion that Lagrange multipliers are in essence dual variables can be illustrated by defining the Lagrangean function for both linear programming problems. The use of the Lagrangean function in this context is of great help in understanding why and how the primal and the dual LP problems are so intimately related to each other. We now state "two" Lagrangean functions corresponding to the primal and dual LP problems, and conclude that the two functions are identical, and therefore only one Lagrangean function exists for a pair of primal and dual linear programming problems. We use y1, y2 as Lagrange multipliers for the primal problem and x1, x2, x3 as Lagrange multipliers for the dual.

Lagrangean for Primal:
  max L = c1x1 + c2x2 + c3x3 + y1(b1 − a11x1 − a12x2 − a13x3) + y2(b2 − a21x1 − a22x2 − a23x3)

Lagrangean for Dual:
  min L' = b1y1 + b2y2 + x1(c1 − a11y1 − a21y2) + x2(c2 − a12y1 − a22y2) + x3(c3 − a13y1 − a23y2)

The two Lagrangean functions just stated are actually the same. In this way, the primal and the dual LP problems are tied together by a unique Lagrangean function.

Duality Example
For the primal problem, find the dual problem:

  Primal:                      Dual:
  Max Z = 3X1 + 5X2            Min W = 10Y1 + 12Y2 + 4Y3
  Subject to:                  Subject to:
  X1 + X2 ≤ 10                 Y1 + 2Y2 + Y3 ≥ 3
  2X1 + X2 ≤ 12                Y1 + Y2 ≥ 5
  X1 ≤ 4                       Y1, Y2, Y3 ≥ 0

An economic interpretation: if the primal problem is to maximize profit subject to resource constraints, we can interpret Yj as the contribution to profit per unit of resource j. The objective of the dual problem then is to minimize the total implicit value of the resources consumed: Yj = λj, the marginal contribution to profit of resource j.

Some Primal-Dual Relationships for LP
1. Weak Duality Property: If x0 and y0 are complementary feasible solutions for the primal and dual problems respectively, then c x0 ≤ y0 b.
2. Strong Duality Property: If x* and y* are complementary optimal solutions for the primal and dual problems respectively, then c x* = y* b.
Note: Computationally, we would rather have variables than constraints.

B⁻¹ summarizes all the operations performed on the simplex tableau to get the final solution.

The final tableau:

  Basic Var   Eqn    Z |    X1..Xn   |  S1..Sm  |  RHS
  Z            0     1 |  y*A − c    |    y*    |  y*b
  (basics)   1..m    0 |   B⁻¹A      |   B⁻¹    |  B⁻¹b

Initial Tableau: the initial tableau consists of the stacked vector and matrix
  t = Row 0 of the initial tableau = [−c | 0 | 0]
  T = Rows 1 to m of the initial tableau = [A | I | b]

Final Tableau: the final tableau consists of the stacked vector and matrix
  t* = [y*A − c | y* | y*b], a consequence of the operations done to Row 0
  T* = [B⁻¹A | B⁻¹ | B⁻¹b]

"Fundamental Insight": t* = t + y*T = [−c + y*A | y* | y*b], and T* = B⁻¹T = [B⁻¹A | B⁻¹ | B⁻¹b].
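The duality example above is small enough to verify numerically. The sketch below (my own illustration, not in the notes) evaluates both problems over a coarse grid; that grid happens to contain both optima for this instance, so it exhibits strong duality (Z* = W* = 50) and spot-checks weak duality.

```python
from itertools import product

# Numeric check of the duality example:
#   Primal: Max 3X1 + 5X2   s.t. X1+X2 <= 10, 2X1+X2 <= 12, X1 <= 4, X >= 0
#   Dual:   Min 10Y1+12Y2+4Y3 s.t. Y1+2Y2+Y3 >= 3, Y1+Y2 >= 5, Y >= 0
def frange(stop, step=0.5):
    return [i * step for i in range(int(stop / step) + 1)]

primal = [(x1, x2) for x1, x2 in product(frange(4), frange(10))
          if x1 + x2 <= 10 and 2 * x1 + x2 <= 12]
dual = [(y1, y2, y3) for y1, y2, y3 in product(frange(6), frange(6), frange(6))
        if y1 + 2 * y2 + y3 >= 3 and y1 + y2 >= 5]

z_star = max(3 * x1 + 5 * x2 for x1, x2 in primal)
w_star = min(10 * y1 + 12 * y2 + 4 * y3 for y1, y2, y3 in dual)
print(z_star, w_star)   # 50.0 50.0 -- strong duality: c x* = y* b

# Weak duality: every feasible primal value <= every feasible dual value.
assert all(3 * x1 + 5 * x2 <= 10 * y1 + 12 * y2 + 4 * y3 + 1e-9
           for (x1, x2) in primal[:50] for (y1, y2, y3) in dual[:50])
```

A grid search is of course only a demonstration, not a solver; it works here because the primal optimum (0, 10) and the dual optimum (5, 0, 0) both fall on grid points.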
Transportation Problems
A Simple Transportation Problem
A common problem: shipping goods between multiple sources and destinations at minimum cost. Let:
  n = number of locations
  Xij = units of good flow shipped from origin i to destination j
  cij = cost of transporting a unit of good from i to j
  dj = demand for good at each location j
  si = supply of good at each source location i

Here's the problem: how much flow should go where? What's the LP?

  Min Z = Σi Σj cij Xij
  ST:
    Σi Xij ≥ dj,  j = 1..n  (satisfy demands at each location)
    Σj Xij ≤ si,  i = 1..n  (limited sources at each location)
    Xij ≥ 0  ∀ i and j

There are n² decision variables and 2n constraints for this problem. The project homework is a transportation problem, plus a little bit more.
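A tiny made-up instance (two sources, two destinations; the data are hypothetical, not from the notes) can be solved by brute-force enumeration of integer flows, which is enough to see the structure of the model before a time dimension is added:

```python
from itertools import product

# Brute-force sketch of a tiny transportation instance (made-up data):
# 2 sources with supplies s, 2 destinations with demands d, unit costs c.
s = [3, 2]
d = [2, 3]
c = [[1, 2], [3, 1]]

best_cost, best_flow = None, None
for x in product(range(4), repeat=4):        # x = (x11, x12, x21, x22)
    X = [[x[0], x[1]], [x[2], x[3]]]
    if any(sum(X[i]) > s[i] for i in range(2)):            # supply limits
        continue
    if any(X[0][j] + X[1][j] < d[j] for j in range(2)):    # meet demands
        continue
    cost = sum(c[i][j] * X[i][j] for i in range(2) for j in range(2))
    if best_cost is None or cost < best_cost:
        best_cost, best_flow = cost, X
print(best_flow, best_cost)   # [[2, 1], [0, 2]] with cost 6
```

The optimum routes as much as possible over the cheap (cost-1) links and uses the cost-2 link only for the one unit that cannot fit; a real instance would use an LP or network solver instead of enumeration.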
Multi-Period Transportation Problem with Warehouse Storage
Add time subscripts to each variable and coefficient to represent each of T time-periods in the plan. Let:
  T = number of scheduling periods
  Xijt, djt, sit, cijt = as above, for each period t
  cwkt = unit cost of warehousing at location k and time t
  Wkt = amount stored in the warehouse at location k and time t
  bkt = storage capacity of the warehouse at location k and time t

This problem is a little more complex: since warehouses store goods in time, a time-dimension subscript is essential to this problem.

The LP:

  Min Z = Σt [ Σi Σj cijt Xijt + Σk cwkt Wkt ]
  Subject to:
    Σi Xijt + Wj,t−1 = Σk Xjkt + Djt + Wjt  ∀ j,t  (conservation of mass at all locations and times)
    Djt ≥ djt  ∀ j,t  (satisfy demands at all locations and times)
    Σj Xijt ≤ sit  ∀ i,t  (limited sources at all locations and times)
    Wkt ≤ bkt  ∀ k,t  (limited storage capacity)
    Xijt ≤ rijt  ∀ i,j,t  (limited route capacity on each route at each time)
    Xijt, Wkt, Djt ≥ 0  (nonnegativity)

Number of decision variables: roughly Tn². Number of constraints: (4n + n²)T. Number of parameters to estimate (c, cw, b, r, d, and s): about 2n² + 4n per period.

Transportation with Warehouses and Travel Times / Delivery Lags
Let:
  Xijt = departures from i to j at time t
  Aijt = arrivals at j from i at time t
  Lijt = lag, or travel time, from i to j, departing at time t
Note: Xijt = Aij,t+Lijt, since departures at time t arrive at time t + Lijt.

The LP:

  Min Z = Σt [ Σi Σj cijt Xijt + Σk cwkt Wkt ]
  Subject to:
    Σi Aijt + Wj,t−1 = Σk Xjkt + Djt + Wjt  ∀ j,t  (conservation of mass at all locations and times)
    Xijt = Aij,t+Lijt  ∀ i,j,t  (arrivals lag departures by Lijt)
    Djt ≥ djt  ∀ j,t  (satisfy demands at all locations and times)
    Σj Xijt ≤ sit  ∀ i,t  (limited sources at all locations and times)
    Wkt ≤ bkt  ∀ k,t  (limited storage capacity)
    Xijt ≤ rijt  ∀ i,j,t  (limited route capacity on each route at each time)
    Xijt, Aijt, Wkt ≥ 0  (nonnegativity)

This transportation problem is now rather flexible and applies to lots of problems. Variations of this also can apply to other logistics problems, such as water and wastewater movements: just consider reservoirs to be warehouses, and you can model California's water system this way.

Scheduling Problems
Simple Scheduling Problems (CPM and related problems)
There are n activities to schedule to complete a project. Let:
  Si = the starting time of activity i
  di = duration of the activity
  pij = 1 if activity j is a prerequisite of activity i; zero otherwise
Objective: minimize the time to completion, T.

The LP:

  Min T
  ST:
    T ≥ Si + di,  i = 1..n
    Si ≥ pij (Sj + dj),  i = 1..n and j = 1..n

Scheduling Where Activity Duration Can Be Shortened at a Cost
Fundamentally, this is a multiobjective optimization problem (Cohon 1978). The two objectives are (1) minimize cost and (2) minimize time to completion, T. A weighting factor α can be assigned to combine and weight these two objectives in the overall optimization. Let:
  ci = maximum duration of activity i
  Xi = $ spent to speed completion of activity i
  bi = minimum duration of activity i
  di = ci − ai Xi  (duration shrinks with spending, down to the minimum)

  Min Σi Xi
  ST:
    T ≥ Si + di  ∀ i
    Si ≥ pij (Sj + dj)  ∀ i,j
    di ≥ bi  ∀ i
    di = ci − ai Xi  ∀ i

Minimize a Combination of Cost and Time/Duration
Weighting Method: here we'll weight each of the two objectives. Making several LP runs with different weights will produce an efficient tradeoff curve for the two objectives. Change the objective function to:
  Min α Σi Xi + (1 − α) T

Constraint Method: another multiobjective optimization approach is the "constraint method". This would involve solving the cost-minimizing LP several times, with each LP run having a different constrained value Ct for the completion time: T ≤ Ct. The constant Ct will vary from Tmin (resulting from the completion-time-minimizing formulation) to Tmax (the cost-minimizing solution). This "constraint method" is actually a bit more rigorous.
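The CPM recursion that the scheduling LP encodes is easy to prototype. The sketch below uses a small made-up project (the activities, durations, and prerequisites are hypothetical) and computes each earliest start Si and the completion time T directly:

```python
# A sketch of the CPM logic behind the scheduling LP (made-up project):
# S_j >= S_i + d_i for every prerequisite i of j; minimize T = max(S_i + d_i).
durations = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
prereqs = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

start = {}
for act in ["A", "B", "C", "D", "E"]:        # any topological order works
    start[act] = max((start[p] + durations[p] for p in prereqs[act]), default=0)

T = max(start[a] + durations[a] for a in durations)
print(start, T)   # D starts at 7 (after the longer path through C); T = 10
```

At the LP optimum each Si sits exactly at this earliest-start value along the critical path, which is why the forward pass reproduces the LP solution for this objective.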
Assignment Problems
Allocating resources on a one-to-one basis. Example: allocating personnel to tasks on a project so as to (1) maximize overall performance, or (2) make employees happier.

Example: Allocate m graduate students to m available desks. Students have given preferences for particular desks (1 = most preferred, m = least preferred). Let:
  cij = student i's preference for desk j
  Xij = 1 if student i sits in desk j; Xij = 0 if not.

The linear program version of this problem:

  Min Z = Σi Σj cij Xij
  ST:
    Σi Xij = 1,  for j = 1..m
    Σj Xij = 1,  for i = 1..m

Number of decision variables: m². Number of constraints: 2m.

Other LP Problems (should include some more of these examples): regional planning, reservoir operations, water quality, regional environmental planning, air quality planning, product mix, goal programming.

LP Solution of Non-Linear Objectives With Linear Constraints
Sounds contradictory! But it can be done for problems with objective functions of the forms:
  Min f(x), with f convex
  Max f(x), with f concave

Piece-wise Linearization
In each case, a linear approximation can be made of the function for specific ranges of X.

[Figure: a convex f(X) and its piecewise-linear approximation fa(X) over segments X1, X2, X3 with breakpoints 0, d1, d2, d3.]

The resulting linearized nonlinear program is:

  Min  [f(d1) − f(0)]/d1 · X1 + [f(d2) − f(d1)]/(d2 − d1) · X2 + [f(d3) − f(d2)]/(d3 − d2) · X3
  Subject to:
    X = X1 + X2 + X3
    X1 ≤ d1
    X2 ≤ d2 − d1
    X3 ≤ d3 − d2
  followed by any other constraints on X.

Note: The coefficients of X1, X2, and X3 are just the slopes of parts of the objective function.

Why it works: the LP will always make X1 = d1 before X2 becomes > 0 (for a convex function the slopes increase, so the cheapest segment fills first); similarly for X2 and X3.

To maximize this same (convex) function, it won't work: the LP will want to max X3 and keep X1 and X2 = 0.
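A numeric sketch of the linearization just described (my own illustration), using the convex function f(X) = 5X² with assumed breakpoints d = [0, 1, 2, 3]: the segment slopes increase, so a minimizing LP fills X1, then X2, then X3, and a greedy fill reproduces the piecewise approximation.

```python
# Piecewise-linearization sketch for the convex example Min f(X) = 5 X^2,
# with assumed breakpoints d = [0, 1, 2, 3].
def f(x):
    return 5 * x ** 2

d = [0, 1, 2, 3]
slopes = [(f(d[j]) - f(d[j - 1])) / (d[j] - d[j - 1]) for j in range(1, len(d))]
print(slopes)   # [5.0, 15.0, 25.0] -- increasing, as convexity requires

def approx(x):
    """Fill the segment variables X1, X2, X3 in order, as a minimizing LP would."""
    total = 0.0
    widths = [d[j] - d[j - 1] for j in range(1, len(d))]
    for s, w in zip(slopes, widths):
        take = min(x, w)
        total += s * take
        x -= take
    return total

print(approx(1.6))   # 5*1 + 15*0.6 = 14.0, vs. the true f(1.6) = 12.8
```

The approximation is exact at the breakpoints (e.g., approx(2) = f(2) = 20) and overstates a convex function between them; more breakpoints shrink that gap.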
Maximizing Concave Objective Functions
A similar linearization approach will work for maximizing a concave function subject to linear constraints.

[Figure: a concave f(X) approximated by linear segments with breakpoints d1, d2, d3.]

Later we will see how any type of single-decision-variable objective function can be linearized with the addition of integer-linear programming.

(Need a numerical example.) Example of Piecewise Linearization:
1. Minimize a convex function: Min Z = 5X², ST constraints on X.
2. Maximize a concave function: Max Z = 3X^0.7, ST constraints on X.

Integer Programming and Mixed Integer-Linear Programming
For many problems, the decision variables can only take on integer values. For example, for a location decision, typically a facility can be located at only one site; a factory can sell only an integer number of airplanes; etc. Decision variables with integer values only are called "integer variables". A common case is where a decision variable can have values of either 0 or 1. These are called "binary variables": "yes" or "no" values for the variable.

Example: Capital Improvement Plan
A city sewer utility must decide which capital improvement projects to select to minimize long-term operation and maintenance costs.

  Decision            Decision Variable   Net Present Value   Capital Requirements
  1. Sewer Rehab 1          X1               $70 million           $20 m
  2. WWTP Rehab             X2               $50 million           $15 m
  3. Sewer Rebuild          X3               $40 million           $12 m
  4. Sewer Rehab 2          X4               $30 million           $10 m
  Capital Available:                                               $25 million

We cannot build everywhere because of the capital availability constraint. Xi = yes or no (0 = no, 1 = yes).

  Max Z = 70X1 + 50X2 + 40X3 + 30X4   (maximize total net present value)
  ST:
  1) 20X1 + 15X2 + 12X3 + 10X4 ≤ 25   (capital constraint)
  2) X1, X2, X3, X4 are binary (0, 1)

Now, to solve:
1. By enumeration. Try all combinations of Xi = 0, 1; choose the combination with the best feasible value of the objective function. For this example there are 2⁴ = 16 combinations; for n binary decision variables there are 2ⁿ combinations. Gets to be lots of combinations fast!
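For this four-variable example, complete enumeration is still easy. A sketch (not in the notes):

```python
from itertools import product

# Enumerating all 2^4 combinations of the capital-improvement example:
# Max 70X1 + 50X2 + 40X3 + 30X4  s.t.  20X1 + 15X2 + 12X3 + 10X4 <= 25.
npv = [70, 50, 40, 30]
capital = [20, 15, 12, 10]
budget = 25

best = max((x for x in product([0, 1], repeat=4)
            if sum(c * xi for c, xi in zip(capital, x)) <= budget),
           key=lambda x: sum(v * xi for v, xi in zip(npv, x)))
print(best, sum(v * xi for v, xi in zip(npv, best)))   # (0, 1, 0, 1) 80
```

The best feasible combination is X2 = X4 = 1 (WWTP Rehab plus Sewer Rehab 2), using exactly the $25 million budget for an NPV of $80 million; this is the "just barely feasible" 0101 decision discussed below.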
2. Branch and Bound Methods. The concept: an efficient enumeration procedure.
a) For the example, we can see that any solution where X1 = 1 and X2 = 1 is infeasible, violating the budget constraint. For these values of X1 and X2, the solution is infeasible regardless of the values of X3 and X4. So, by finding an infeasible subset of decision values, we eliminate many combinations of the larger set of values. Here we eliminated 1100, 1101, 1110, and 1111.
b) Similarly, we can eliminate combinations of decision values which lead to inferior solutions. For the example, let the decision be 0101. This is feasible (just barely). Since the objective is one of maximization, and a decrease in any decision variable only decreases the objective function, we know that 0101 is better than 0001, 0000, and 0100.

The book and appendix detail methods for putting these ideas to efficient practice. Typically, a branch-and-bound approach will significantly limit the number of combinations to be evaluated. (This can still be a lot.)

Mixed Integer Linear Programming
Often we must solve LP problems that have one or more binary or integer decision variables. Hillier and Lieberman go over several branch-and-bound approaches (pp. 515-541). Some major concepts for understanding such algorithms are:
- LP relaxation
- Branching
- Bounding
- Fathoming
Appendix VIII provides a simple algorithm for solving integer-linear programs.

Summary of the BIP Branch and Bound Algorithm
(This section needs some work.)
Initialization: Set Z* = −∞ for a maximization problem. Apply the bounding step, fathoming step, and optimality test described below to the entire problem. If not fathomed, classify this problem as the one remaining subproblem.

Steps for each iteration:
1. Branching: Among the remaining (unfathomed) subproblems, select the one that was created most recently. Break ties according to which one has the larger bound. Branch from the node for this problem to create two new subproblems by fixing the next variable (the branching variable) to 0 or 1.
2. Bounding: For each new subproblem, obtain its bound by applying the simplex method to its LP relaxation.
3. Fathoming: For each new subproblem, apply the three fathoming tests described below, and discard those subproblems that are fathomed by any of the tests. A subproblem is fathomed (dismissed from further consideration) if:
   (i) Its bound is less than or equal to Z* (i.e., we already know of a better solution);
   (ii) Its LP relaxation has no feasible solutions; or
   (iii) The optimal solution to its LP relaxation satisfies the integer constraints. (If this solution is better than the incumbent, it becomes the new incumbent.)
4. Optimality test: If there are no remaining subproblems, STOP; the incumbent is optimal. Otherwise, perform another iteration.

More Non-Linear Objective Functions with Linear Constraints, Using Integer-Linear Programming
Loucks et al. (1981) have a nice presentation of linearizing any single-variable objective function, although often integer-linear programming is required. Consider the two functions:

[Figures: a concave function and a convex function, each with breakpoints d0, d1, d2, d3.]

Method for Linear Programming (Max of concave, or Min of convex, objective functions)
The first method of linearization is appropriate for linear programming and was described above. It is restated below:

  Maximize f(X) ≈ s1X1 + s2X2 + s3X3 = Σi siXi   (for the concave function only), or
  Minimize f(X) ≈ s1X1 + s2X2 + s3X3 = Σi siXi   (for the convex function only)
  Subject to:
    X = d0 + X1 + X2 + X3
    Xj ≤ dj − dj−1  for each linear segment j
  and any original linear constraints in the problem.

Method for Mixed Integer-Linear Programming (either convex or concave functions)

  Max or Min f(X) ≈ f(d0)Z0 + f(d1)Z1 + f(d2)Z2 + f(d3)Z3 + s1X1 + s2X2 + s3X3 = Σj f(dj)Zj + Σi siXi
  Subject to:
    X = d0Z0 + d1Z1 + d2Z2 + d3Z3 + X1 + X2 + X3
    Z0 + Z1 + Z2 + Z3 = 1
    Xj ≤ (dj − dj−1) Zj−1  for each linear segment j
    Zj are integer (0-1) variables for all j
  and any original linear constraints in the problem.
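To make the branch-and-bound summary above concrete, here is a compact sketch applied to the capital-improvement example (my own code, not from the notes). The LP-relaxation bound of a 0-1 budget problem is the greedy fractional fill, which is valid here because the items are already ordered by NPV-to-capital ratio; subproblems are fathomed when their bound cannot beat the incumbent.

```python
# Branch-and-bound sketch for the capital-improvement BIP:
# Max 70X1 + 50X2 + 40X3 + 30X4  s.t.  20X1 + 15X2 + 12X3 + 10X4 <= 25.
values  = [70, 50, 40, 30]
weights = [20, 15, 12, 10]
capacity = 25

def lp_bound(i, value, room):
    """LP-relaxation bound: fill remaining budget with fractions of items i..
    (valid as an LP bound because items are sorted by value/weight ratio)."""
    b = value
    for v, w in zip(values[i:], weights[i:]):
        if w <= room:
            room -= w
            b += v
        else:
            b += v * room / w
            break
    return b

best_value, best_plan = 0, []

def branch(i, value, room, plan):
    global best_value, best_plan
    if lp_bound(i, value, room) <= best_value:  # fathoming test: bound <= Z*
        return
    if i == len(values):                        # all variables fixed: integer
        best_value, best_plan = value, plan     # new incumbent
        return
    if weights[i] <= room:                      # branch x_i = 1 (if feasible)
        branch(i + 1, value + values[i], room - weights[i], plan + [1])
    branch(i + 1, value, room, plan + [0])      # branch x_i = 0

branch(0, 0, capacity, [])
print(best_plan, best_value)   # [0, 1, 0, 1] with value 80
```

The search first finds the incumbent 1000 (value 70), then improves it to 0101 (value 80) and prunes the rest, reproducing the enumeration result with fewer subproblems examined.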

