# INTRODUCTION TO OPERATIONS RESEARCH AND OPTIMIZATION CAAM 378

Rice University


These 58-page class notes were uploaded by Walker Witting on Monday, October 19, 2015. They belong to CAAM 378 at Rice University, taught by Staff in Fall. Since upload, they have received 22 views. For similar materials see /class/225000/caam-378-rice-university in Applied Mathematics at Rice University.


# Lecture Notes for CAAM 378: A Quick Introduction to Linear Programming (DRAFT)

Yin Zhang, September 24, 2007

## Contents

1. What is Linear Programming?
    - 1.1 A Toy Problem
    - 1.2 From Concrete to Abstract
    - 1.3 A Standard Form
    - 1.4 Feasibility and Solution Sets
    - 1.5 Three Possibilities
2. Vertices of the Feasibility Set
    - 2.1 Matrix and Vector Partitions
    - 2.2 Convex Set, Polyhedron and Extreme Point (2.2.1 Definitions; 2.2.2 Vertices of the Feasibility Set; 2.2.3 Basic Feasible Partitions and Vertices)
    - 2.3 Exercises
3. Simplex Method: First Look
    - 3.1 Terminology
    - 3.2 Reduced Linear Program (3.2.1 Partitioning and Elimination; 3.2.2 Reduced Linear Program; 3.2.3 What Happens at a Basic Feasible Solution)
    - 3.3 One Iteration of the Simplex Method
    - 3.4 Do We Reach a New Vertex?
    - 3.5 Exercises
4. Simplex Method: More Details
    - 4.1 How to Start Simplex: Two Phases (4.1.1 Phase I: An Auxiliary Problem; 4.1.2 Phase II: The Original Problem)
    - 4.2 Main Implementational Issues
    - 4.3 Degeneracy, Cycling and Stalling
    - 4.4 Exercises
5. Duality and Optimality Conditions
    - 5.1 Dual Linear Program
    - 5.2 Optimality Conditions
    - 5.3 How to Find a Dual
    - 5.4 Exercises
6. Primal-Dual Interior-Point Methods
    - 6.1 Introduction
    - 6.2 A Primal-Dual Method
    - 6.3 Solving the Linear System
    - 6.4 Convergence Considerations

Appendix A. Introduction to Newton's Method (A.1 An Example; A.2 Exercises)

# Chapter 1: What is Linear Programming?

An optimization problem usually has three essential ingredients: a variable vector $x$ consisting of a set of unknowns to be determined, an objective function of $x$ to be optimized, and a set of constraints to be satisfied by $x$. A linear program is an optimization problem in which all the functions involved are linear in $x$; in particular, all the constraints are linear inequalities and equalities. Linear programming is the subject of studying and solving linear programs.

Linear programming was born during the Second World War out of the necessity of solving military logistics problems. It remains one of the most widely used mathematical techniques in today's modern societies.
## 1.1 A Toy Problem

A local furniture shop makes chairs and tables. The projected profits for the two products are, respectively, \$20 per chair and \$30 per table. The projected demand is 400 chairs and 100 tables. Each chair requires 2 cubic feet of wood, while each table requires 4 cubic feet. The shop has a total of 1000 cubic feet of wood in stock. How many chairs and tables should the shop make in order to maximize its profit?

Let $x_1$ be the number of chairs and $x_2$ the number of tables to be made. These are the two variables, or unknowns, of this problem. The shop wants to maximize its total profit, $20x_1 + 30x_2$, subject to the constraints that (a) the total amount of wood used to make the two products cannot exceed the 1000 cubic feet available, and (b) the numbers of chairs and tables made should not exceed the demands. In addition, we should not forget that the numbers of chairs and tables made need to be nonnegative. Putting all these together, we have the optimization problem

$$
\begin{aligned}
\max\;\; & 20x_1 + 30x_2 \\
\text{s.t.}\;\; & 2x_1 + 4x_2 \le 1000 \\
& 0 \le x_1 \le 400 \\
& 0 \le x_2 \le 100
\end{aligned}
\tag{1.1}
$$

where $20x_1 + 30x_2$ is the objective function and "s.t." is shorthand for "subject to," which is followed by the constraints of the problem. This optimization problem is clearly a linear program: all the functions involved, both in the objective and in the constraints, are linear functions of $x_1$ and $x_2$.

## 1.2 From Concrete to Abstract

Let us look at an abstract production model. A company produces $n$ products using $m$ kinds of materials. For the next month, the unit profits for the $n$ products are projected to be $c_1, c_2, \dots, c_n$. The amounts of materials available to the company in the next month are $b_1, b_2, \dots, b_m$. The amount of material $i$ consumed by a unit of product $j$ is given by $a_{ij} \ge 0$ for $i = 1, \dots, m$ and $j = 1, \dots, n$ (some $a_{ij}$ could be zero if product $j$ does not use material $i$). The question facing the company is: given the limited availability of materials, what quantity should the company produce in the next month for each product in order to achieve the maximum total profit?
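As a quick numerical check of the toy problem (1.1), here is a minimal sketch using SciPy's `linprog` (an assumption: SciPy is available; `linprog` minimizes, so the objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem (1.1): maximize 20*x1 + 30*x2
# s.t. 2*x1 + 4*x2 <= 1000, 0 <= x1 <= 400, 0 <= x2 <= 100.
# linprog minimizes, so we minimize the negated objective.
c = [-20.0, -30.0]
A_ub = [[2.0, 4.0]]
b_ub = [1000.0]
bounds = [(0, 400), (0, 100)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)
```

The optimum makes 400 chairs and 50 tables for a \$9500 profit: the per-unit profit of a chair per cubic foot of wood (\$10) beats that of a table (\$7.50), so chairs are made up to their demand first.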
The decision variables are obviously the amounts produced for the $n$ products in the next month. Let us call them $x_1, x_2, \dots, x_n$. The optimization model is to maximize the total profit subject to the material availability constraints for all $m$ materials and the nonnegativity constraints on the $n$ variables. In mathematical terms, the model is the following linear program:

$$
\begin{aligned}
\max\;\; & c_1x_1 + c_2x_2 + \cdots + c_nx_n \\
\text{s.t.}\;\; & a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \le b_1 \\
& a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \le b_2 \\
& \qquad\qquad\vdots \\
& a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \le b_m \\
& x_1, x_2, \dots, x_n \ge 0
\end{aligned}
\tag{1.2}
$$

The nonnegativity constraints can be vital but are often forgotten by beginners. Why is nonnegativity important here? First, in the above context it does not make sense to expect the company to produce a negative amount of a product. Moreover, if one product, say product $k$, is not profitable (corresponding to $c_k < 0$), then without the nonnegativity constraints the model would produce a solution with $x_k < 0$ and generate a "profit" $c_kx_k > 0$. In fact, since a negative amount of product $k$ would not consume any materials but instead "generate" materials, one could drive the profit to infinity by forcing $x_k$ to go to negative infinity. Hence the model would be wrong had one forgotten nonnegativity.

The linear program (1.2) is tedious to write. One can shorten the expressions using summation notation; for example, the total profit can be represented by the left-hand side, instead of the right-hand side, of the identity

$$\sum_{i=1}^{n} c_ix_i = c_1x_1 + c_2x_2 + \cdots + c_nx_n.$$

However, a much more concise way is to use matrices and vectors. If we let $c = (c_1, c_2, \dots, c_n)^T$ and $x = (x_1, x_2, \dots, x_n)^T$, then the total profit becomes $c^Tx$. In matrix-vector notation, the linear program (1.2) becomes

$$\max\; c^Tx \quad \text{s.t.}\;\; Ax \le b,\; x \ge 0,
\tag{1.3}$$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Inequalities involving vectors are always understood as componentwise comparisons; for example, $x \ge 0$ for the $n$-vector $x$ means that each component $x_i \ge 0$ for $i = 1, 2, \dots, n$.

## 1.3 A Standard Form

A linear program can have an objective of either minimization or maximization, while its constraints can have any combination of linear inequalities and equalities. It is impractical to study linear programs and to design algorithms for them without introducing a so-called standard form: a unifying framework encompassing most, if not all, individual forms of linear programs. Different standard forms exist in the literature that may offer different advantages under different circumstances. Here we will use the following standard form:

$$\min\; c^Tx \quad \text{s.t.}\;\; Ax = b,\; x \ge 0,
\tag{1.4}$$

where the matrix $A$ is $m \times n$, $b \in \mathbb{R}^m$ and $c, x \in \mathbb{R}^n$. The triple $(A, b, c)$ represents the problem data that needs to be specified, and $x$ is the variable to be determined. In fact, once the size of $A$ is given, the sizes of all the other quantities follow from the rule of matrix multiplication. In plain English, a standard linear program is a minimization problem with a set of equality constraints but no inequality constraints except nonnegativity on all variables.

We will always assume that $A$ has full rank, or in other words, that the rows of $A$ are linearly independent, which ensures that the equations in $Ax = b$ are consistent for any right-hand side $b \in \mathbb{R}^m$. We make this assumption to simplify the matter without loss of generality, because redundant or inconsistent linear equations can always be detected and removed through standard linear-algebra techniques. Moreover, we normally require that the matrix $A$ have more columns than rows, that is, $m < n$. This requirement, together with the full rank of $A$, ensures that there are infinitely many solutions to the equation $Ax = b$, leaving degrees of freedom for nonnegative and optimal solutions.

A linear program is said to be equivalent to another linear program if an optimal solution of the former, if it exists, can be obtained from an optimal solution of the latter through some simple algebraic operations. This equivalence allows one to solve one and obtain a solution to the other. We claim that every linear program is equivalent to a standard-form
linear program through a transformation, which usually requires adding extra variables and constraints. Obviously, maximizing a function is equivalent to minimizing the negative of the function. A common trick to transform an inequality $a^Tx \le \beta$ into an equivalent equality is to add a so-called slack variable $\eta$, so that

$$a^Tx \le \beta \iff a^Tx + \eta = \beta,\;\; \eta \ge 0.$$

Let us consider the toy problem (1.1). With the addition of slack variables $x_3, x_4, x_5$ (we can name them any way we want), we transform the linear program on the left to the one on the right:

$$
\begin{aligned}
\max\;\; & 20x_1 + 30x_2 & \qquad \max\;\; & 20x_1 + 30x_2 \\
\text{s.t.}\;\; & 2x_1 + 4x_2 \le 1000 & \text{s.t.}\;\; & 2x_1 + 4x_2 + x_3 = 1000 \\
& x_1 \le 400 & & x_1 + x_4 = 400 \\
& x_2 \le 100 & & x_2 + x_5 = 100 \\
& x_1, x_2 \ge 0 & & x_1, x_2, x_3, x_4, x_5 \ge 0
\end{aligned}
$$

After switching to minimization, we turn the linear program on the right into an equivalent linear program of the standard form

$$
\begin{aligned}
\min\;\; & -20x_1 - 30x_2 + 0x_3 + 0x_4 + 0x_5 \\
\text{s.t.}\;\; & 2x_1 + 4x_2 + x_3 = 1000 \\
& x_1 + x_4 = 400 \\
& x_2 + x_5 = 100 \\
& x_1, x_2, x_3, x_4, x_5 \ge 0
\end{aligned}
\tag{1.5}
$$

where $c^T = (-20, -30, 0, 0, 0)$, $b$ is unchanged, and

$$A = \begin{bmatrix} 2 & 4 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \end{bmatrix}.$$

This new coefficient matrix is obtained by appending the 3-by-3 identity matrix to the right of the original coefficient matrix. Similarly, the general linear program (1.3) can be transformed into

$$\min\; -c^Tx \quad \text{s.t.}\;\; Ax + s = b,\; x, s \ge 0,
\tag{1.6}$$

where $s \in \mathbb{R}^m$ is the slack-variable vector. This linear program is in the standard form because it is a minimization problem with only equality constraints and nonnegativity on all variables. The variable in (1.6) consists of $x \in \mathbb{R}^n$ and $s \in \mathbb{R}^m$, and the new data triple is obtained by the construction

$$\bar A = [\,A \;\; I\,], \qquad \bar b = b, \qquad \bar c = \begin{bmatrix} -c \\ 0 \end{bmatrix},$$

where $I$ is the $m$-by-$m$ identity matrix and $0 \in \mathbb{R}^m$.

## 1.4 Feasibility and Solution Sets

Let us call the set

$$\mathcal{F} = \{x : Ax = b,\; x \ge 0\}
\tag{1.7}$$

the feasibility set of the linear program (1.4). Points in the feasibility set are called feasible points, from which we seek an optimal solution $x^*$ that minimizes the objective function $c^Tx$. With the help of the feasibility-set notation, we can write our standard linear program in the concise form

$$\min\{c^Tx : x \in \mathcal{F}\}.
\tag{1.8}$$

A polyhedron in $\mathbb{R}^n$ is a subset of $\mathbb{R}^n$ defined by points satisfying a finite collection of linear inequalities and/or equalities.
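The append-the-identity construction in (1.5)-(1.6) is easy to carry out numerically. A minimal sketch with NumPy (illustrative only; the data are those of the toy problem):

```python
import numpy as np

# Convert max c^T x s.t. A x <= b, x >= 0 into the standard form
# min (-c)^T z s.t. [A I] z = b, z >= 0, as in (1.5)-(1.6).
A = np.array([[2.0, 4.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1000.0, 400.0, 100.0])
c = np.array([20.0, 30.0])

m, n = A.shape
A_std = np.hstack([A, np.eye(m)])          # append the m-by-m identity
c_std = np.concatenate([-c, np.zeros(m)])  # negate profits, zero-cost slacks

# Any x with slacks s = b - A x satisfies A_std @ [x, s] = b.
x = np.array([400.0, 50.0])
s = b - A @ x
z = np.concatenate([x, s])
print(A_std.shape, np.allclose(A_std @ z, b))
```

Note that the slack values read off directly which constraints are tight: here the wood and chair-demand constraints are active ($s_1 = s_2 = 0$) while 50 units of table demand go unused.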
For example, a hyperplane $\{x \in \mathbb{R}^n : a^Tx = \beta\}$ is a polyhedron, where $a \in \mathbb{R}^n$ and $\beta \in \mathbb{R}$, and a half-space $\{x \in \mathbb{R}^n : a^Tx \le \beta\}$ is a polyhedron as well. In geometric terms, a polyhedron is nothing but the intersection of a finite collection of hyperplanes and half-spaces. In particular, the empty set, and a singleton set that contains only a single point, are polyhedra.

Clearly the feasibility set $\mathcal{F}$ in (1.7) for linear program (1.4) is a polyhedron in $\mathbb{R}^n$. It may be empty if some constraints are contradictory (say $x_1 \le -1$ and $x_1 \ge 1$), or it may be an unbounded set, say

$$\mathcal{F} = \{x \in \mathbb{R}^2 : x_2 = 2x_1,\; x \ge 0\},
\tag{1.9}$$

which is the half-diagonal line emitting from the origin towards infinity. Most of the time $\mathcal{F}$ will be a bounded, nonempty set. A bounded polyhedron is called a polytope.

The optimal solution set, or just solution set for short, of a linear program consists of all feasible points that optimize its objective function. In particular, we denote the solution set of the standard linear program (1.4), or (1.8), as $\mathcal{S}$. Since $\mathcal{S} \subset \mathcal{F}$, $\mathcal{S}$ is empty whenever $\mathcal{F}$ is empty. However, $\mathcal{S}$ can be empty even if $\mathcal{F}$ is not. The purpose of a linear programming algorithm is to determine whether $\mathcal{S}$ is empty or not, and in the latter case to find a member $x^*$ of $\mathcal{S}$. In case such an $x^*$ exists, we can write

$$\mathcal{S} = \{x \in \mathcal{F} : c^Tx = c^Tx^*\} \quad\text{or}\quad \mathcal{S} = \{x \in \mathbb{R}^n : c^Tx = c^Tx^*\} \cap \mathcal{F}.
\tag{1.10}$$

That is, $\mathcal{S}$ is the intersection of a hyperplane with the polyhedron $\mathcal{F}$. Hence $\mathcal{S}$ itself is a polyhedron. If the intersection is a singleton $\mathcal{S} = \{x^*\}$, then $x^*$ is the unique optimal solution; otherwise there must exist infinitely many optimal solutions to the linear program.

## 1.5 Three Possibilities

A linear program is infeasible if its feasibility set is empty; otherwise it is feasible. A linear program is unbounded if it is feasible but its objective function can be made arbitrarily good. For example, if a linear program is a minimization problem and unbounded, then its objective value can be made arbitrarily small while maintaining feasibility. In other words, we can drive the objective value to
negative infinity within the feasibility set. The situation is similar for an unbounded maximization problem, where we can drive the objective value to positive infinity. Clearly, a linear program is unbounded only if its feasibility set is an unbounded set. However, an unbounded feasibility set does not necessarily imply that the linear program itself is unbounded.

To make this clear, let us formally define the term "unbounded" for a set and for a linear program. We say a set $\mathcal{F}$ is unbounded if there exists a sequence $\{x^k\} \subset \mathcal{F}$ such that $\|x^k\| \to \infty$ as $k \to \infty$. On the other hand, we say a linear program $\min\{c^Tx : x \in \mathcal{F}\}$ is unbounded if there exists a sequence $\{x^k\} \subset \mathcal{F}$ such that $c^Tx^k \to -\infty$ as $k \to \infty$. Hence for a linear program the term "unbounded" means objective-unbounded.

When the feasibility set $\mathcal{F}$ is unbounded, whether or not the corresponding linear program is unbounded depends entirely on the objective function. For example, consider $\mathcal{F}$ given by (1.9). The linear program $\max\{2x_1 + x_2 : x \in \mathcal{F}\}$ is unbounded. However, when the objective is changed to minimization instead, the resulting linear program has an optimal solution at the origin.

If a linear program is feasible but not objective-unbounded, then it must achieve a finite optimal value within its feasibility set; in other words, it has an optimal solution $x^* \in \mathcal{S} \subset \mathcal{F}$. To sum up, for any given linear program there are three possibilities:

1. The linear program is infeasible, i.e., $\mathcal{F} = \emptyset$. In this case $\mathcal{S} = \emptyset$.
2. The linear program is feasible but objective-unbounded. In this case $\mathcal{F}$ is an unbounded set but $\mathcal{S} = \emptyset$.
3. The linear program is feasible and has an optimal solution $x^* \in \mathcal{S} \subset \mathcal{F}$. In this case the feasibility set $\mathcal{F}$ can be either unbounded or bounded.

These three possibilities imply that if $\mathcal{F}$ is both nonempty and bounded, then the corresponding linear program must have an optimal solution $x^* \in \mathcal{S} \subset \mathcal{F}$, regardless of what objective it has.
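The three possibilities can be observed numerically. A sketch using SciPy's `linprog` (an assumption: SciPy's default HiGHS backend, whose `status` codes are 0 = optimal, 2 = infeasible, 3 = unbounded):

```python
from scipy.optimize import linprog

# Infeasible: x1 <= -1 and x1 >= 1 contradict each other.
r1 = linprog([1.0], A_ub=[[1.0], [-1.0]], b_ub=[-1.0, -1.0],
             bounds=[(None, None)])
# Unbounded: maximize 2*x1 + x2 (minimize the negative) on the half-line
# F = {x : x2 = 2*x1, x >= 0} from (1.9).
r2 = linprog([-2.0, -1.0], A_eq=[[2.0, -1.0]], b_eq=[0.0],
             bounds=[(0, None), (0, None)])
# Optimal: the same set with minimization instead; solved at the origin.
r3 = linprog([2.0, 1.0], A_eq=[[2.0, -1.0]], b_eq=[0.0],
             bounds=[(0, None), (0, None)])
print(r1.status, r2.status, r3.status)
```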
# Chapter 2: Vertices of the Feasibility Set

We have seen that both the feasibility set and the solution set of a linear program are polyhedra in $\mathbb{R}^n$. Like polygons in two- or three-dimensional spaces, polyhedra in $\mathbb{R}^n$ generally have "corners," or vertices.

## 2.1 Matrix and Vector Partitions

It will be convenient for us to use matrix partitions in the development of theory and algorithms for linear programming. This section introduces the necessary notation for partitioned matrices and vectors, and can be skipped by advanced readers. Consider a $3 \times 7$ matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & a_{36} & a_{37} \end{bmatrix} = [\,A_1 \;\; A_2 \;\; \cdots \;\; A_7\,],$$

where $A_j$ is the $j$-th column of $A$, i.e., $A_j = (a_{1j}, a_{2j}, a_{3j})^T$ is a $3 \times 1$ vector for $j = 1, 2, \dots, 7$. For any vector $x \in \mathbb{R}^7$,

$$Ax = \sum_{j=1}^{7} A_j x_j.$$

Let us partition the index set $\{1, 2, 3, 4, 5, 6, 7\}$ into two subsets $B$ and $N$ with an arbitrary order within each subset, say

$$B = (5, 2, 4), \qquad N = (1, 6, 7, 3).$$

Then $A_B$ and $A_N$ are, respectively, the $3 \times 3$ and $3 \times 4$ submatrices of $A$ whose columns are those of $A$ corresponding to the indices and orders within $B$ and $N$. Namely,

$$A_B = [\,A_5 \;\; A_2 \;\; A_4\,], \qquad A_N = [\,A_1 \;\; A_6 \;\; A_7 \;\; A_3\,].$$

Similarly, $x_B \in \mathbb{R}^3$ and $x_N \in \mathbb{R}^4$ are, respectively, subvectors of $x$ whose components are those of $x$ corresponding to the indices and orders within $B$ and $N$. Namely,

$$x_B = (x_5, x_2, x_4)^T, \qquad x_N = (x_1, x_6, x_7, x_3)^T.$$

Corresponding to the index set partition $(B, N)$,

$$Ax = \sum_{j=1}^{7} A_j x_j = \sum_{j \in B} A_j x_j + \sum_{j \in N} A_j x_j = A_Bx_B + A_Nx_N.$$

Moreover, the linear system of equations $Ax = b$ can be written as $A_Bx_B + A_Nx_N = b$ for any right-hand side vector $b \in \mathbb{R}^3$.

Obviously, these notations can be extended to the general case where $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$. For any index partition $(B, N)$ with

$$B \cup N = \{1, 2, \dots, n\}, \qquad B \cap N = \emptyset,
\tag{2.1}$$

there holds

$$Ax = b \iff A_Bx_B + A_Nx_N = b,$$

where the symbol "$\iff$" denotes equivalence. Moreover, if $B$ contains $m$ indices, i.e., $|B| = m$, and the square matrix $A_B$ is nonsingular, there holds

$$Ax = b \iff x_B = A_B^{-1}b - A_B^{-1}A_Nx_N.$$

That is, we can solve the equation $Ax = b$ to obtain $x_B$ as a function of $x_N$. Furthermore, in this case

$$x \ge 0 \iff A_B^{-1}b - A_B^{-1}A_Nx_N \ge 0,\;\; x_N \ge 0.
\tag{2.4}$$
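The partition identity $Ax = A_Bx_B + A_Nx_N$ is easy to verify numerically. A small sketch with NumPy, using the partition $B = (5, 2, 4)$, $N = (1, 6, 7, 3)$ of this section (the matrix entries are illustrative, and the indices are converted to 0-based form for NumPy):

```python
import numpy as np

# Verify Ax = A_B x_B + A_N x_N for B = (5,2,4), N = (1,6,7,3),
# written 0-based for NumPy's column indexing.
A = np.arange(1.0, 22.0).reshape(3, 7)   # an illustrative 3x7 matrix
x = np.arange(1.0, 8.0)                  # x = (1, 2, ..., 7)

B = [4, 1, 3]
N = [0, 5, 6, 2]
A_B, A_N = A[:, B], A[:, N]
x_B, x_N = x[B], x[N]
print(np.allclose(A @ x, A_B @ x_B + A_N @ x_N))
```

NumPy's fancy indexing `A[:, B]` selects columns in exactly the stated order, which mirrors the convention that the ordering inside $B$ and $N$ is fixed but arbitrary.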
## 2.2 Convex Set, Polyhedron and Extreme Point

### 2.2.1 Definitions

**Definition 1 (Convex Set).** A set $C \subset \mathbb{R}^n$ is a convex set if

$$x, y \in C \implies \lambda x + (1 - \lambda)y \in C, \quad \forall\, \lambda \in [0, 1];$$

that is, the line segment connecting any two members lies entirely in the set. By this definition, an empty set is a convex set.

**Definition 2 (Extreme Point).** A point $x$ is an extreme point of a convex set $C \subset \mathbb{R}^n$ if it does not lie strictly between any other two members of the set; that is, for any $y, z \in C$ such that $y \ne x \ne z$,

$$x \ne \lambda y + (1 - \lambda)z, \quad \forall\, \lambda \in (0, 1).$$

Clearly, any extreme point must be on the boundary of the set and cannot be an interior point.

**Definition 3 (Polyhedron and its Vertices).** A polyhedron in $\mathbb{R}^n$ is a set of points in $\mathbb{R}^n$ defined by a finite collection of linear equalities and/or inequalities, i.e., the intersection of a finite number of hyperplanes and half-spaces. A bounded polyhedron is also called a polytope. An extreme point of a polyhedron is also called a vertex of the polyhedron.

*(Figure 2.1: convex sets.)*

**Proposition 1.** The following statements are true:

1. A hyperplane $\{x \in \mathbb{R}^n : a^Tx = \beta\}$ is convex.
2. A half-space $\{x \in \mathbb{R}^n : a^Tx \le \beta\}$ is convex.
3. The intersection of a collection of convex sets is convex.
4. A polyhedron is convex.

The following facts will be useful for optimizing a linear function on a closed convex set $C$, that is,

$$\min\{c^Tx : x \in C\}.
\tag{2.5}$$

This is a problem more general than a linear program; it is called a convex program.

**Proposition 2.** If a linear function $c^Tx$ has a unique minimum (maximum) on a closed convex set $C$, then the minimum (maximum) must be attained at an extreme point of the set.

The proof is by contradiction. Suppose that the unique minimum is attained at a non-extreme point $x \in C$; then there must exist two other points $y, z \in C$ such that $c^Tx < c^Ty$, $c^Tx < c^Tz$, and $x = \lambda y + (1 - \lambda)z$ where $\lambda \in (0, 1)$. Hence

$$c^Tx = \lambda c^Ty + (1 - \lambda)c^Tz > \lambda c^Tx + (1 - \lambda)c^Tx = c^Tx,$$

which is a contradiction.

### 2.2.2 Vertices of the Feasibility Set

Let us examine closely a particular polyhedron, the feasibility set of our standard linear program (1.4):

$$\mathcal{F} = \{x : Ax = b,\; x \ge 0\},
\tag{2.6}$$

where the matrix $A \in \mathbb{R}^{m \times n}$ and the vector
$b \in \mathbb{R}^m$ are given. The vector inequality $x \ge 0$ is understood componentwise, i.e., $x_i \ge 0$ for $i = 1, 2, \dots, n$. We will call $\mathcal{F}$ the standard polyhedron because we will treat it as the representative of polyhedra. We will assume that (a) the matrix $A$ has more columns than rows, so that $m < n$, and (b) the rank of $A$ is $m$. These assumptions guarantee that in general the equation $Ax = b$ will have infinitely many solutions. While the first assumption occurs naturally in the context of linear programming, the second one can always be achieved by a routine linear-algebra procedure whenever the original matrix $A$ is not of full rank.

Extreme points of a polyhedron are the "corners" of the set. They are easy to tell only in two- or three-dimensional spaces, where visualization is possible. In higher-dimensional spaces, we need some algebraic characterization of extreme points. For any nonnegative vector $x \in \mathbb{R}^n$, we can define a natural index partition

$$P = P(x) = \{i : x_i > 0\}, \qquad O = O(x) = \{i : x_i = 0\},
\tag{2.7}$$

where we drop the dependence of $P$ and $O$ on $x$ whenever it is clear from the context. Furthermore, we assume that the indices within $P$ and $O$ are in fixed orders, even though how they are ordered is inconsequential.

**Lemma 1.** A point $x \in \mathcal{F}$ is an extreme point of $\mathcal{F}$ if and only if the matrix $A_{P(x)}$ is of full column rank.

*Proof.* Without loss of generality, we assume that $A = [A_P \;\; A_O]$ and $x^T = [x_P^T \;\; x_O^T]$; otherwise, a reordering will suffice. Clearly,

$$Ax = A_Px_P = b, \qquad x_P > 0 \in \mathbb{R}^{|P|}, \qquad x_O = 0 \in \mathbb{R}^{|O|},$$

with $|P| + |O| = n$.

We first prove the necessary condition by contradiction. Suppose $x$ is an extreme point of $\mathcal{F}$ but $A_P$ is not of full rank. Then there exists a nonzero vector $v \in \mathbb{R}^{|P|}$ such that $A_Pv = 0$. Moreover, there exists a sufficiently small but positive scalar $\tau \in \mathbb{R}$ such that $x_P \pm \tau v \ge 0$, since $x_P > 0$. Let

$$y = (x_P + \tau v;\; 0), \qquad z = (x_P - \tau v;\; 0);$$

then $Ay = Az = b$ and $y, z \ge 0$, that is, $y, z \in \mathcal{F}$. Since

$$x = \tfrac{1}{2}y + \tfrac{1}{2}z$$

for $x \ne y \in \mathcal{F}$ and $x \ne z \in \mathcal{F}$, we have arrived at a contradiction to the hypothesis that $x$ is an extreme point of $\mathcal{F}$.

We now prove the sufficient condition by contradiction. Suppose that $A_P$ is of full column rank but $x$ is not an extreme point of $\mathcal{F}$. Then there exist distinct points $y, z \in \mathcal{F}$ with $y \ne x \ne z$ such that for some $\lambda \in (0, 1)$,

$$x_P = \lambda y_P + (1 - \lambda)z_P, \qquad 0 = \lambda y_O + (1 - \lambda)z_O.$$

The nonnegativity of $y$ and $z$ implies that $y_O = z_O = 0$; but $y \ne z$, hence $y_P - z_P \ne 0$. Therefore, since $A_Py_P = A_Pz_P = b$,

$$A_P(y_P - z_P) = 0,$$

contradicting the hypothesis that $A_P$ is of full column rank. $\square$

**Theorem 1.** Let $\mathcal{F}$ be defined in (2.6), where $A \in \mathbb{R}^{m \times n}$ has rank equal to $m$. A point $x \in \mathcal{F}$ is an extreme point (vertex) of $\mathcal{F}$ if and only if there exists an index partition $(B, N)$ of $\{1, \dots, n\}$ (see (2.1)) such that

$$|B| = \operatorname{rank}(A_B) = m, \qquad A_B^{-1}b \ge 0,
\tag{2.8}$$

and

$$x_B = A_B^{-1}b, \qquad x_N = 0.
\tag{2.9}$$

*Proof.* Let condition (2.8) hold and $x$ be defined as in (2.9). Since $x \ge 0$ and

$$Ax = A_Bx_B + A_Nx_N = A_Bx_B = b,$$

clearly $x \in \mathcal{F}$ and $P(x) \subset B$. Because the columns of $A_B$ are linearly independent, so are those of $A_{P(x)}$. By Lemma 1, $x$ is a vertex of $\mathcal{F}$.

Now suppose that $x$ is a vertex of $\mathcal{F}$. By Lemma 1, $A_P = A_{P(x)}$ has full column rank. If $|P| = m$, then $B = P$ satisfies the conditions (2.8) and (2.9). On the other hand, if $|P| < m$, we can always expand the index set $P$ to a larger index set $B$ so that $P \subset B$ and $|B| = m = \operatorname{rank}(A_B)$. For such an index set $B$, which is generally not unique, both (2.8) and (2.9) hold. This proves the theorem. $\square$

### 2.2.3 Basic Feasible Partitions and Vertices

**Definition 4 (Basic Feasible Partition).** An index partition $(B, N)$ is a basic feasible partition for the polyhedron $\mathcal{F}$ whenever (2.8) holds. Moreover, a basic feasible partition is called nondegenerate if $A_B^{-1}b > 0$; otherwise it is degenerate.

In view of Theorem 1, under the condition $\operatorname{rank}(A) = m$, we have the following correspondence between the vertices of $\mathcal{F}$ and the basic feasible partitions for $\mathcal{F}$:

1. Each basic feasible partition corresponds to a vertex of $\mathcal{F}$, which is defined as in (2.9).
2. Each nondegenerate vertex of $\mathcal{F}$ corresponds to a basic feasible partition. In this case, $P(x) = B$.
3. In general, each degenerate vertex of $\mathcal{F}$ corresponds to multiple basic feasible partitions. In this case, $|P(x)| < m$ and $P(x)$ must be expanded to $B$. Many such expansions exist.
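Theorem 1 can be checked numerically on the standardized toy problem (1.5). A sketch with NumPy (the basis choice below is an illustrative one):

```python
import numpy as np

# Theorem 1: if |B| = m, A_B is nonsingular and A_B^{-1} b >= 0, then
# x_B = A_B^{-1} b, x_N = 0 is a vertex of F. Data from (1.5); the basis
# below picks the columns of x1, x2, x4 (0-based indices 0, 1, 3).
A = np.array([[2.0, 4.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([1000.0, 400.0, 100.0])

B = [0, 1, 3]
A_B = A[:, B]
x_B = np.linalg.solve(A_B, b)
is_vertex = (np.linalg.matrix_rank(A_B) == A.shape[0]) and bool(np.all(x_B >= 0))
print(x_B, is_vertex)
```

This basis recovers $x_1 = 300$, $x_2 = 100$, $x_4 = 100$: the vertex of the toy problem where all the wood and all the table demand are used up.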
## 2.3 Exercises

1. Prove that the optimal solution set of a convex program defined in (2.5) is a convex set.
2. Consider the standardization of the LP $\min\{-x_1 - x_2 : x_1 + x_2 \le 1,\; x_1, x_2 \ge 0\}$. Check Theorem 1 at the feasible points $x^T = (0, 1, 0)$ and $x^T = (1, 0, 0)$.
3. Consider the standardization of the LP $\min\{-x_1 - x_2 : x_1 + x_2 \le 1,\; x_1, x_2 \ge 0\}$. Check Theorem 1 at the feasible point $x^T = (0, 0, 1)$.

# Chapter 3: Simplex Method: First Look

## 3.1 Terminology

For historical reasons, the following terminology has been widely used in the linear programming literature:

- **Feasible solution:** a set of values for the vector $x$ that satisfies all the constraints, including the nonnegativity.
- **Basic feasible solution:** a feasible solution that has $m$ basic variables taking nonnegative values and $n - m$ nonbasic variables taking zero values. In addition, the submatrix of $A$ corresponding to the basic variables is an $m$-by-$m$ nonsingular matrix. The simplex method works exclusively with basic feasible solutions.
- **Degenerate basic feasible solution:** a basic feasible solution that has one or more zero basic variables. On the other hand, all basic variables are positive in a nondegenerate basic feasible solution.

## 3.2 Reduced Linear Program

### 3.2.1 Partitioning and Elimination

Given an index partition

$$B \cup N = \{1, 2, \dots, n\}, \qquad B \cap N = \emptyset,
\tag{3.1}$$

such that the submatrix $A_B$ of $A$, formed by the columns $A_j$ with indices in $B$, is $m \times m$ and nonsingular (therefore $|B| = m$ and $|N| = n - m$). Similarly, we define $A_N$ as the submatrix of $A$ formed by the columns $A_j$ with indices in $N$. Accordingly, we partition $x$ and $c$ into $(x_B, x_N)$ and $(c_B, c_N)$, respectively. That is,

$$\{1, \dots, n\} \to [\,B \mid N\,], \qquad A \to [\,A_B \mid A_N\,], \qquad c \to [\,c_B \mid c_N\,], \qquad x \to [\,x_B \mid x_N\,].$$

The actual ordering of indices inside $B$ or $N$ is immaterial, as long as it is consistent for all the partitioned quantities. Under such a partition, where $A_B$ is nonsingular, we have the equivalence

$$Ax = b \iff A_Bx_B + A_Nx_N = b \iff x_B = A_B^{-1}b - A_B^{-1}A_Nx_N.$$

Upon substituting the expression for $x_B$ into $c^Tx$, we have

$$c^Tx = c_B^Tx_B + c_N^Tx_N = c_B^TA_B^{-1}b - c_B^TA_B^{-1}A_Nx_N + c_N^Tx_N = b^TA_B^{-T}c_B + (c_N - A_N^TA_B^{-T}c_B)^Tx_N,$$

or equivalently,

$$c^Tx = b^Ty + s_N^Tx_N,
\tag{3.2}$$

where we have used the shorthand notation

$$y = A_B^{-T}c_B \qquad\text{and}\qquad s_N = c_N - A_N^Ty.
\tag{3.3}$$

Note that $y$ and $s_N$ depend solely on the given data $(A, b, c)$ and the given partitioning, but are independent of $x$. They will change as the partitioning changes.

### 3.2.2 Reduced Linear Program

We have used the equation $Ax = b$ to eliminate the variables in $x_B$ from the original LP problem. The reduced LP problem has no more equality constraints and involves only the variables in $x_N$:

$$\min_{x_N}\; b^Ty + s_N^Tx_N \quad \text{s.t.}\;\; A_B^{-1}b - A_B^{-1}A_Nx_N \ge 0,\;\; x_N \ge 0.
\tag{3.4}$$

The first inequality is nothing but the nonnegativity for $x_B$. This reduced LP is completely equivalent to the original LP. However, we can deduce useful information from this reduced LP that is not apparent in the original LP.

### 3.2.3 What Happens at a Basic Feasible Solution

If the partitioning is associated with a basic feasible solution, then $B$ corresponds to a "basis" and $N$ to a "nonbasis," such that the "basic variables" $x_B = A_B^{-1}b \ge 0$ and the "nonbasic variables" $x_N = 0$. In this case, we observe the following:

- Any feasible change that can be made to $x_N$ is to increase one or more of its elements from zero to some positive value.
- If $s_N \ge 0$, then it is not possible to decrease the objective function value by increasing one or more elements of $x_N$. Hence we can conclude that the current basic feasible solution is optimal.
- On the other hand, if any element of $s_N$ is negative, we can try to increase the corresponding variable in $x_N$, say $x_j$ in terms of the original indexing, as much as possible, so long as we maintain the inequality constraints in the reduced LP.
- When $x_j$ is increased from zero to $t$ while all other elements of $x_N$ remain at zero, then $A_Nx_N$ becomes $tA_j$, where $A_j$ is the $j$-th column of $A$.
- In the above case, the first inequality in the reduced LP changes to $x_B - tA_B^{-1}A_j \ge 0$.

The problem is unbounded if the above inequality poses no restriction on how large $t$ can be, meaning that one can arbitrarily improve the objective.
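The quantities $y$ and $s_N$ of (3.3) are cheap to compute. A sketch with NumPy, using the standardized toy problem (1.5) and its slack basis (an illustrative choice; 0-based column indices):

```python
import numpy as np

# Compute y = A_B^{-T} c_B and s_N = c_N - A_N^T y as in (3.3), for the
# standardized toy problem (1.5) at the slack basis B = {3,4,5}
# (0-based: columns 2, 3, 4).
A = np.array([[2.0, 4.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0]])
c = np.array([-20.0, -30.0, 0.0, 0.0, 0.0])

B, N = [2, 3, 4], [0, 1]
y = np.linalg.solve(A[:, B].T, c[B])   # solve A_B^T y = c_B
s_N = c[N] - A[:, N].T @ y
print(s_N)    # negative entries mean the basis is not yet optimal
```

At the slack basis $A_B = I$ and $c_B = 0$, so $y = 0$ and $s_N = c_N = (-20, -30)$: both reduced costs are negative, confirming that making no chairs and no tables is far from optimal.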
## 3.3 One Iteration of the Simplex Method

Given data $A \in \mathbb{R}^{m \times n}$, $c \in \mathbb{R}^n$, and a basic feasible solution $x \in \mathbb{R}^n$ along with its corresponding index partition $(B, N)$, with $x_N = 0$ and $x_B = A_B^{-1}b \ge 0$. The indices in $B$ and $N$ are, respectively, in arbitrary but fixed orders.

**Step 1 (Dual estimates).** Solve $A_B^Ty = c_B$ for $y$ and compute $s_N = c_N - A_N^Ty$.

**Step 2 (Check optimality).** If $s_N \ge 0$, stop: optimum reached.

**Step 3 (Select entering variable).** Choose an index $q$ such that $(s_N)_q < 0$, and let $j$ be the corresponding $q$-th index in $N$. Here $q$ is a local index for $s_N \in \mathbb{R}^{n-m}$ and $j$ is a global index for $x \in \mathbb{R}^n$. The variable $x_j$, currently nonbasic, will be entering the basis.

**Step 4 (Compute step).** Solve $A_Bd = A_j$ for $d \in \mathbb{R}^m$, where $A_j$ is the $j$-th column of $A$. This vector $d$ is used to update $x_B = A_B^{-1}b$ to account for the change that is to take place due to the increase in $x_j$.

**Step 5 (Check unboundedness).** If $d \le 0$, stop: the problem is unbounded.

**Step 6 (Select leaving variable).** Compute the largest step length $t$ so that $x_B - td \ge 0$; namely,

$$t = \min\left\{\frac{(x_B)_k}{d_k} : d_k > 0,\; 1 \le k \le m\right\},
\tag{3.5}$$

where the minimum occurs at $(x_B)_p = x_i$ for some index $i \in B$. Here $p$ is a local index for $x_B \in \mathbb{R}^m$ and $i$ is a global index for $x \in \mathbb{R}^n$. The variable $x_i$, currently basic, will be leaving the basis.

**Step 7 (Update variables).** Let $x_j = t$ and $x_B = x_B - td$. Note that after the update, the $p$-th basic variable $(x_B)_p = x_i$ is zero; it leaves the basis and becomes nonbasic.
column A has been replaced by A If we denote the k th column of the identity matrix by ck then Ag1A36k 6k 7g 7 In addition we note that the new p th column of A is Aj where j is a global index that satis es AglAj d Therefore 71 AB A 61 ep1dep1 cm This matrix is nonsingular if and only if the p th element of d is nonzero For example when m 7 and p 4 d1 d2 d3 d4 d5 d6 d7 D H b 03 H O O O O O C H O O O O C H O O O O O H O O O O H O O O O O H O O O O O H O O O O O O and ldetM lARl ld4l Since the p th element of d satis es dp gt 0 see equation 35 we conclude that indeed A is nonsingular and the new feasible solution is indeed a basic feasible solution ie a vertex 26 CHAPTER 3 SIMPLEX METHOD FIRST LOOK 35 Exercises 1 Let A and B are two square matrices of the same size and A is nonsin gular Prove that B is nonsingular if and only if A lB is nonsingular Chapter 4 Simplex Method More Details 41 How to Start Simplex Two Phases Our simplex algorithm is constructed for solving the following standard form of linear programs T min 0 x st Ax b x 2 0 where A is m gtlt 71 To start the algorithm7 we need an initial basic feasible solution or a vexter for the feasibility set In general7 such an initial basic feasible solution is not readily available Some work is required To be absolutely clear7 what we need is essentially a partition B7N of the index set 127 7n7 with m indices in B7 such that the basis matrix AB is nonsmgular and Aglb 2 0 Then we let the m basic variables to be xB Aglb and the rest of non basic variables to be zeros This gives us a basic feasible solution needed Let us see how we can start the simplex algorithm to solve the following LP min ch st Ax S b 41 x 2 07 where again A is m gtlt n We already know that the main constraint Ax S b is equivalent to Ax Is b for s 2 0 By adding thd slack variable 57 we convert the LP into the standard form with m equations and nm variables The coef cient matrix bacomes A A I which is m gtlt n 27 28 CHAPTER 4 SIMPLEX METHOD MORE 
Whenever $b \ge 0$, the LP is clearly feasible, since $x = 0$, $s = b$ satisfies the constraints. It is also easy to construct an initial basic feasible solution: we can simply set $x = 0$ as the $n$ nonbasic variables and $s = b \ge 0$ as the $m$ basic variables. This way we satisfy the equation $Ax + Is = b$ and in fact obtain a basic feasible solution. This is true because the corresponding basis matrix is the identity matrix and $A_B^{-1}b = I \cdot b = b \ge 0$. If we order our $n + m$ variables in the order $(x, s)$, then we have $B = \{n+1, n+2, \dots, n+m\}$ and $N = \{1, 2, \dots, n\}$. From this partition we can start the simplex method.

### 4.1.1 Phase I: An Auxiliary Problem

When $b$ is not nonnegative, however, things are more complicated. First of all, we cannot easily tell whether the LP is feasible or not. Even if it is feasible, an initial basic feasible solution is not obvious. To handle this case, we need a so-called Phase-I procedure. The purpose of Phase I is twofold. First, we would like to determine whether the original LP is feasible or not. Second, we would like to construct an initial basic feasible solution whenever the original LP is feasible. Towards this end we consider a so-called auxiliary linear program.

Since $x = 0$ and $s = b$ do not give a feasible solution when $b$ has one or more negative components, we consider adding another new scalar variable $t$, called the auxiliary variable, chosen to make $s_i = b_i + t \ge 0$ for each component $s_i$ of $s$. However, since this auxiliary variable is inconsistent with the original problem, we wish to eventually make it vanish, if possible. The auxiliary problem is

$$\min\; t \quad \text{s.t.}\;\; Ax + s - te = b,\;\; x, s, t \ge 0,
\tag{4.2}$$

where $t$ is the auxiliary scalar variable and $e = (1, 1, \dots, 1)^T \in \mathbb{R}^m$ is the vector of all ones. This linear program is always feasible and cannot be unbounded (prove it yourself).

We will make the auxiliary variable $t$ one of the basic variables and construct a basic feasible solution in the following manner. Recall that of the $n + m + 1$ variables in $(x, s, t)$, we need $m$ basic variables and $n + 1$ nonbasic ones that are zeros.
We still set x = 0, accounting for n non-basic variables, so we need one more non-basic variable. We set

    t = −min{ b_i : b_i < 0, 1 ≤ i ≤ m },

namely, t takes the absolute value of the most negative element of b, which we assume to be b_k for some index k. Since t is positive, it must be a basic variable. Now the equation Ax + s − t e = b dictates that

    s = b + t e = b − b_k e ≥ 0,

where clearly s_k = 0 (there might be other zero elements in s as well). We make s_k the last non-basic variable. For example, if the most negative element of b is b_3 = −2, then t = −b_3 = 2 and s = b + 2e, whose third component is s_3 = 0.

With these choices, we have (i) feasibility: Ax + s − te = b with (x, s, t) ≥ 0; (ii) m basic variables: t and the m − 1 components of s excluding s_k; and (iii) n + 1 non-basic variables: x = 0 and s_k = 0. The basis matrix consists of the m − 1 columns of the identity matrix excluding the k-th column, plus the column −e; it is nonsingular. So we have found a basic feasible solution for the auxiliary problem.

Starting from this basic feasible solution, we can use the simplex method to solve the auxiliary problem. In the subsequent simplex iterations, we should always give t priority to leave the basis whenever it qualifies as a candidate. Once t leaves the basis, the optimal objective value has been reached. At optimality, if t = 0, then the original LP is feasible; otherwise (t > 0) it must be infeasible, and we stop here.

4.1.2 Phase II: The Original Problem

In the case where the original problem is feasible, we delete the column corresponding to the auxiliary variable t (it is non-basic and zero anyway) from the auxiliary problem and shrink the non-basis by one. This produces a basic feasible solution for the original problem (since t is gone), from which we can start the simplex method to solve the original problem.

4.2 Main Implementational Issues

We observe that in the revised simplex algorithm, the main computational task appears to be solving two linear systems of equations: one in Step 1 and another in Step 4. The two systems involve the same matrix A_B, though in Step 4 it appears as the transpose A_B^T.

4.3 Degeneracy, Cycling and Stalling

So far everything sounds rosy, but there is one place where things can go wrong. That is: what if x_B has a zero component that happens to correspond to a positive component of d? In this case, t = 0 in (3.5), and no real change whatsoever, except for the partitioning, has been made to the variables; the objective value also remains the same. When this happens to a simplex iteration, the iteration is called a degenerate simplex iteration. This situation is also called stalling.

What are the consequences of degenerate iterations? First of all, degenerate iterations make no progress and are not productive. At the least, they slow down convergence, if convergence eventually occurs. Indeed, in the worst case, they can prevent convergence from occurring. This phenomenon is called cycling, where the simplex method keeps changing basis but goes in a circle: after a while it comes back to a basis that has been visited before, while staying at the same vertex. There exist deliberate pivoting procedures for choosing entering and leaving variables that avoid cycling. Fortunately, as dangerous as cycling is in theory, it almost never happens in practice and is therefore not a real concern. On the other hand, degeneracy does cause real problems for simplex methods, because it occurs so frequently in practice, even more often than nondegeneracy (well, people prefer redundancy to inadequacy), and it can significantly degrade the performance of simplex methods.

4.4 Exercises

1. Prove that the auxiliary LP (4.2) is feasible and cannot be unbounded.

2. Prove that an m × m matrix consisting of m − 1 columns of the identity matrix plus the column −e, where e ∈ ℝ^m is the vector of all ones, is nonsingular.

3. Let us arrange the variables in the order (x, s, t), with n variables in x, m variables in s, and one variable in t. Write down the index sets B and N for the initial basic feasible solution constructed for the auxiliary problem.

4. Prove that if the optimal value of the auxiliary LP is positive, then the original LP is infeasible; otherwise, it is feasible.

5. For the general standard-form LP, min{c^T x : Ax = b, x ≥ 0}, construct an auxiliary LP and an initial basic feasible solution for it. (Hint: put s in the objective.)

Chapter 5. Duality and Optimality Conditions

5.1 Dual Linear Program

Consider our standard form linear program (1.4) again, i.e.,

    min c^T x  s.t.  Ax = b, x ≥ 0,

where A is m × n with m < n and full rank (the sizes of all other quantities follow accordingly). The reduced LP corresponding to (1.4) is

    min b^T y + s̄_N^T x_N  s.t.  A_B^{-1} b − A_B^{-1} A_N x_N ≥ 0, x_N ≥ 0.

From this reduced LP we can make the following observation.

Proposition 3. If there is a partition (B, N) of the index set {1, 2, ..., n} such that (i) A_B^{-1} b ≥ 0 and (ii) s̄_N := c_N − A_N^T y ≥ 0 for y = A_B^{-T} c_B, then x̄ solves the LP, where x̄_B = A_B^{-1} b and x̄_N = 0. Moreover, the optimal objective value satisfies c^T x̄ = b^T y.

Therefore, conditions (i)-(ii) are sufficient for optimality. In the nondegenerate case, where (i) becomes A_B^{-1} b > 0, condition (ii) is clearly also necessary for optimality.

Condition (ii) can be rewritten as c_B − A_B^T y = 0 and c_N − A_N^T y ≥ 0, which implies that y satisfies c − A^T y ≥ 0, or

    A^T y ≤ c.   (5.1)

It turns out that for any y satisfying (5.1) and any feasible x satisfying Ax = b and x ≥ 0, the following inequality holds:

    b^T y = (Ax)^T y = x^T A^T y ≤ x^T c = c^T x,

where we have used the facts Ax = b, x ≥ 0 and A^T y ≤ c. Consequently,

    max{ b^T y : A^T y ≤ c }  ≤  min{ c^T x : Ax = b, x ≥ 0 },   (5.2)

as long as both extreme values exist. The right-hand side is nothing but our linear program (1.4). Appearing on the left-hand side is another linear program, which can also be written as

    max b^T y  s.t.  A^T y + s = c, s ≥ 0,   (5.3)

where a slack variable s has been added. Obviously, this pair of linear programs share the same set of data (A, b, c) and are closely related. We call (1.4) the primal LP and (5.3) the dual LP. Whenever this primal-dual pair of LPs are both feasible, they satisfy the property in (5.2), which is called weak duality. We now formally state weak duality as follows.

Proposition 4 (Weak Duality). If x is primal feasible and y is dual feasible, then b^T y ≤ c^T x. As a result, (5.2) holds.

The objective values of the primal and the dual serve as bounds for each other. Once they coincide, they must have reached their optimal values simultaneously. In fact, optimality can occur only if the primal and dual objective values coincide. This property is called strong duality.

Proposition 5 (Strong Duality). A primal feasible x is optimal for the primal if and only if there is a dual feasible y such that c^T x = b^T y. In this case, y is optimal for the dual.

5.2 Optimality Conditions

Strong duality states that the optimality of the primal is inseparable from that of the dual; the pair is one entity as far as optimality is concerned. We can now state the optimality conditions for the pair as follows.

Proposition 6 (Optimality Conditions). The points x̄ and ȳ are optimal for the primal and the dual, respectively, if and only if together they satisfy
(1) primal feasibility: A x̄ = b and x̄ ≥ 0;
(2) dual feasibility: A^T ȳ ≤ c; and
(3) closure of the duality gap: c^T x̄ − b^T ȳ = 0.

5.3 How to Find a Dual

We have seen that, as far as optimality conditions are concerned, a linear program is inseparable from its dual, and vice versa. Hence, being able to find a dual for any given LP is crucial. There exist mathematical techniques for finding duals that are applicable not only to linear programming but also to more general problem classes. It is also possible to always convert an LP to one of the standard forms, get the corresponding dual, and then simplify. Nonetheless, for linear programming it is much easier to just follow a verified recipe.

For convenience, we classify constraints into two classes: (1) sign restrictions and (2) main constraints. A sign restriction on a variable is either x_i ≥ 0 or x_i ≤ 0. A variable without any sign restriction is called a free variable. Any constraint that is not a sign restriction is treated as a main constraint, which can be an equality or an inequality. The coefficients of all main constraints form an m-by-n coefficient matrix A ∈ ℝ^{m×n}; i.e., there are n variables and m main constraints. We consider the following linear program of a general form:

    min/max  c^T x
    s.t.  a_i^T x (≥ or = or ≤) b_i,  i = 1, 2, ..., m,   (5.4)
          x_j ≥ 0, or x_j free, or x_j ≤ 0,  j = 1, 2, ..., n,

where a_i^T is the i-th row of A. In this LP, the objective can be either minimization or maximization, and each constraint and each variable can take, respectively, one of the three given forms.

The recipe for finding a dual for this LP is given in the following chart. If an LP is a minimization problem, we go from the left column to the right column to find its dual; otherwise, we go from right to left.

    minimization         <->  maximization
    obj/rhs vectors      <->  rhs/obj vectors
    coefficient matrix   <->  coefficient matrix transposed
    variable ≥ 0         <->  constraint ≤
    variable free        <->  constraint =
    variable ≤ 0         <->  constraint ≥
    constraint ≥         <->  variable ≥ 0
    constraint =         <->  variable free
    constraint ≤         <->  variable ≤ 0

Figure 5.1: Chart for finding a dual.

The chart says that to find a dual for a given primal, we flip the optimization goal, switch the objective and right-hand-side vectors, and take a transpose of the coefficient matrix of the main constraints. This way, the numbers of main constraints and variables in the dual are always equal, respectively, to the numbers of variables and main constraints in the primal, and vice versa. In addition, the types of the constraints and variables follow the rules stipulated in the chart. The chart also reveals one-to-one relationships between the variables in the primal and the constraints in the dual, and vice versa.

Example 1. Let us use the chart to find the dual of the general LP (5.4). We partition the index set {1, 2, ..., m} into three subsets:
1. G, corresponding to constraints of the type a_i^T x ≥ b_i;
2. E, corresponding to constraints of the type a_i^T x = b_i;
3. L, corresponding to constraints of the type a_i^T x ≤ b_i.

Similarly, we partition the index set {1, 2, ..., n} into three subsets:
1. P, corresponding to variables of the type x_j ≥ 0;
2. F, corresponding to free variables;
3. M, corresponding to variables of the type x_j ≤ 0.

Now we partition the rows of A according to the partition (G, E, L), and the columns of A according to the partition (P, F, M); namely,

    A = [A_G; A_E; A_L] = [A_P  A_F  A_M].   (5.5)

With these partitions, we can write (5.4) in the following form:

    min c^T x
    s.t.  A_G x ≥ b_G,  A_E x = b_E,  A_L x ≤ b_L,   (5.6)
          x_P ≥ 0,  x_F free,  x_M ≤ 0.

The dual readily follows from the chart in Figure 5.1:

    max b^T y
    s.t.  A_P^T y ≤ c_P,  A_F^T y = c_F,  A_M^T y ≥ c_M,   (5.7)
          y_G ≥ 0,  y_E free,  y_L ≤ 0.

The complementarity conditions for this primal-dual pair of LPs are

    (A_G x − b_G)^T y_G = 0,   (A_L x − b_L)^T y_L = 0,
    x_P^T (c_P − A_P^T y) = 0,   x_M^T (c_M − A_M^T y) = 0.   (5.8)

It is possible that some of these subsets are empty, in which case the corresponding constraints/variables do not exist. For example, if M = ∅, then the variable x_M and its dual constraint A_M^T y ≥ c_M do not exist.

5.4 Exercises

1. Use weak duality to show that if the primal is unbounded, then the dual must be infeasible, and vice versa.

2. Prove that for feasible x and y, the duality gap satisfies c^T x − b^T y = s^T x, where s = c − A^T y ≥ 0 is the dual slack.

3. At each iteration of the primal simplex method, out of the three optimality conditions, which is satisfied and which is not? Explain.

Chapter 6. Primal-Dual Interior-Point Methods

6.1 Introduction

Consider the following primal linear program:

    min c^T x  s.t.  Ax = b, x ≥ 0,   (6.1)

where c, x ∈ ℝ^n, b ∈ ℝ^m and A ∈ ℝ^{m×n}. Its dual linear program is

    max b^T y  s.t.  A^T y + z = c, z ≥ 0,   (6.2)

where z ∈ ℝ^n is the vector of dual slack variables. A primal-dual method is one that treats the primal and the dual equally and tries to solve both at the same time. For feasible x and (y, z), the duality gap is

    c^T x − b^T y = x^T c − (Ax)^T y      [since Ax = b]
                  = x^T (c − A^T y)
                  = x^T z ≥ 0             [since z = c − A^T y and x, z ≥ 0].
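The chain of equalities above is easy to verify numerically. Below is a small Python check on a made-up primal-dual pair (the LP data A, b, c and the feasible points are invented for illustration): the gap c^T x − b^T y indeed equals x^T z and is nonnegative.

```python
# Numeric check (made-up data): for primal-feasible x and dual-feasible (y, z),
# the duality gap satisfies c^T x - b^T y = x^T z >= 0.

A = [[1.0, 1.0]]               # m = 1, n = 2
b = [2.0]
c = [1.0, 2.0]

x = [1.0, 1.0]                 # primal feasible: x1 + x2 = 2, x >= 0
y = [0.5]                      # dual slack z = c - A^T y must come out >= 0
z = [c[j] - sum(A[i][j] * y[i] for i in range(len(b))) for j in range(len(c))]

gap = (sum(cj * xj for cj, xj in zip(c, x))
       - sum(bi * yi for bi, yi in zip(b, y)))
xtz = sum(xj * zj for xj, zj in zip(x, z))

assert all(zj >= 0 for zj in z)        # (y, z) is dual feasible
assert abs(gap - xtz) < 1e-12 and gap >= 0
print(gap)                             # 2.0
```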
The duality gap is closed at optimality: c^T x − b^T y = x^T z = 0. The complementarity conditions are

    x_i z_i = 0,  i = 1, 2, ..., n.   (6.3)

6.2 A Primal-Dual Method

Consider F : ℝ^{2n+m} → ℝ^{2n+m} defined by

    F(x, y, z) = [ A^T y + z − c ;  Ax − b ;  (x_1 z_1, ..., x_n z_n)^T ].   (6.4)

This is a square system, linear except for the quadratic complementarity block. The optimality conditions for the linear program are

    F(x, y, z) = 0,  (x, z) ≥ 0.   (6.5)

This is also called the KKT (Karush-Kuhn-Tucker) system for the LP. The Jacobian matrix of F(x, y, z) is

    F'(x, y, z) = [ 0   A^T  I ;
                    A   0    0 ;
                    Z   0    X ],   (6.6)

where X = diag(x) and Z = diag(z). F'(x, y, z) is nonsingular for (x, z) > 0 if A has full rank.

Even though (6.5) is only mildly nonlinear, one should not expect the pure Newton's method to solve the system, because of the presence of the nonnegativity constraints on x and z. A main idea of interior-point methods is that one starts from an initial point with x > 0 and z > 0 and forces all subsequent iterates for x and z to remain positive, i.e., to stay in the interior of the nonnegative orthant. One can always enforce this interiority requirement by choosing an appropriate step parameter in a damped Newton setting.

It is important to note the necessity of keeping the nonnegative variables x and z strictly positive at every iteration, instead of just nonnegative. Consider a scalar complementarity condition xz = 0, (x, z) ∈ ℝ². The Newton equation (i.e., the linearized equation) for xz = 0 at a given point (x, z) is z dx + x dz = −xz. If one variable, say z, is zero and the other is nonzero, then the Newton equation becomes x dz = 0, leading to a zero update dz = 0. Consequently, z will remain zero all the time once it becomes zero. This is fatal, because an algorithm will never be able to recover once it mistakenly sets a variable to zero. Therefore, it is critical to keep the nonnegative variables strictly positive at every iteration, not just nonnegative.

Even keeping the nonnegative variables strictly positive, one would still expect difficulty in recovering from a situation where a variable is adversely set to too small a value. To decrease the chances of such mistakes at early stages, it would be desirable for all the complementarity pairs to converge to zero at the same pace, namely, for x_i z_i to go to zero at roughly the same rate for every index i. Towards this goal, and for other theoretical considerations as well, one can perturb each complementarity condition x_i z_i = 0 into x_i z_i − μ = 0 and let μ go to zero in a controlled fashion. This idea, incorporated into the damped Newton method, leads to the following primal-dual interior-point algorithm. Recall that it is called primal-dual because it treats the primal and the dual variables equally.

As in any iterative method, we need an initial point to start; in particular, we need positive x and z. A simple and not-so-bad initial point consists of x = z = n·e, where e is the vector of all ones, and y = 0.

Algorithm 1 (A Primal-Dual Interior-Point Algorithm).
Choose an initial point x > 0, y, z > 0.
WHILE stopping criteria not satisfied DO
  Step 1. Choose σ ∈ (0, 1) and set μ = σ x^T z / n.
  Step 2. Solve the linear system for (dx, dy, dz):
          F'(x, y, z) (dx; dy; dz) = −F(x, y, z) + (0; 0; μe).
  Step 3. Compute the primal and dual step-length parameters:
          α_p = 1 / max{ −min(X^{-1} dx), 1 },
          α_d = 1 / max{ −min(Z^{-1} dz), 1 }.
  Step 4. Choose τ ∈ (0, 1) and update
          x ← x + τ α_p dx,  y ← y + τ α_d dy,  z ← z + τ α_d dz.
END

We now make the following observations on the algorithm.

There are two basic parameters in the algorithm: σ in Step 1 and τ in Step 4. Their sensible choices are of vital importance to both the theoretical analysis and the practical performance of the algorithm. For starters, however, σ = 0.2 and τ = 0.99, say, will work fine most of the time.

In Step 3, each minimum is taken as the smallest element of the corresponding n-dimensional vector. The scalars α_p and α_d are the largest step lengths in (0, 1] such that x + α_p dx ≥ 0 and z + α_d dz ≥ 0, respectively. The formulas are nothing but compact expressions of the ratio tests performed in simplex methods.

In Step 4, the parameter τ is chosen to step back a little from the boundary of the positive orthant, where the next iterate might land had a full step α_p or α_d been taken. By choosing τ ∈ (0, 1), positivity of the variables x and z always holds.

If the algorithm converges (and it does converge most of the time, even for rather naive choices of the parameters), then some components of x and z will go to zero in order to satisfy the complementary slackness conditions Xz = 0. Namely, the iterates will eventually approach the boundary of the positive orthant.

The step computed in Step 2 is the Newton step for the so-called perturbed KKT equation

    F(x, y, z) = (0; 0; μe),

namely, the complementarity equation Xz = 0 is perturbed into Xz = μe. This perturbation tends to keep all components of Xz equal and, hopefully, forces them to converge to zero at more or less the same pace.
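To make the algorithm concrete, here is a self-contained Python sketch of Algorithm 1 on a tiny made-up LP: min x1 + 2·x2 subject to x1 + x2 = 2, x ≥ 0, whose optimum is x* = (2, 0) with dual y* = 1, z* = (0, 1). Because m = 1, the linear system of Step 2 reduces to a scalar solve via the normal equation of Section 6.3, so plain Python suffices; the parameter choices σ = 0.2 and τ = 0.99 follow the suggestions above. This is only an illustrative sketch, not the author's implementation.

```python
# Primal-dual interior-point sketch (Algorithm 1) on a made-up LP with m = 1:
#   min x1 + 2*x2   s.t.  x1 + x2 = 2,  x >= 0.
A = [1.0, 1.0]                    # single constraint row
b = 2.0
c = [1.0, 2.0]
n = 2

x = [float(n)] * n                # initial point: x = z = n*e, y = 0
z = [float(n)] * n
y = 0.0
sigma, tau = 0.2, 0.99

for _ in range(200):
    rd = [c[j] - A[j] * y - z[j] for j in range(n)]     # dual residual
    rp = b - sum(A[j] * x[j] for j in range(n))         # primal residual
    comp = sum(x[j] * z[j] for j in range(n))           # complementarity x^T z
    if abs(rp) + sum(map(abs, rd)) + comp < 1e-9:
        break
    mu = sigma * comp / n
    rc = [mu - x[j] * z[j] for j in range(n)]           # perturbed complementarity
    # Normal equation (A Z^-1 X A^T) dy = A Z^-1 (X rd - rc) + rp  (scalar here)
    M = sum(A[j] ** 2 * x[j] / z[j] for j in range(n))
    dy = (sum(A[j] / z[j] * (x[j] * rd[j] - rc[j]) for j in range(n)) + rp) / M
    dz = [rd[j] - A[j] * dy for j in range(n)]
    dx = [(rc[j] - x[j] * dz[j]) / z[j] for j in range(n)]
    # Ratio tests: largest alpha in (0,1] keeping x + alpha*dx, z + alpha*dz >= 0
    ap = 1.0 / max(1.0, max(-dx[j] / x[j] for j in range(n)))
    ad = 1.0 / max(1.0, max(-dz[j] / z[j] for j in range(n)))
    x = [x[j] + tau * ap * dx[j] for j in range(n)]     # step back by tau
    y = y + tau * ad * dy
    z = [z[j] + tau * ad * dz[j] for j in range(n)]

print(x, y)    # close to [2, 0] and 1
```

Note how τ < 1 together with the ratio tests keeps every iterate strictly inside the positive orthant, exactly the interiority requirement discussed above.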
by sparse Cholesky factorization 44 CHAPTER 6 PRIMAL DUAL IN TERI OR POIN T METHODS 64 Convergence Considerations To guarantee in theory that the primal dual interior point algorithm Algo rithm 1 always converges to a solution one needs to use rather sophisti cated schemes to choose the step parameter 739 On the other hand even with simple minded choices such as 739 09 the algorithm does converge most of the time for problems that are not particularly dif cult Here we provide some indications on why the algorithm usually works even for choices such as 739 09 At each iteration given the current iterate yz the update direction dz ch d2 satis es the equation Adz 7Az 7 b AT 12 7ATy z 7 c and the next iterate yz is calculated as K x dz 11 y a dy 7 613 2 2 d2 for some 04 6 01 Here for simplicity we are using the same step parameter for both the primal and the dual variables By direct substitution we have AK 7 b 17 aAx 7 b ATM 2 7 c 17 aATy z 7 c 614 Since 1 7 04 6 01 we have the following relationships HAN bH liaMlA bH lt llA ciblt llATy2icH liaMlATleicH lt Ufa2741 That is we can improve the primal and dual feasibility at each iteration However such improvements do not imply that the iterates necessarily con verge to a feasible primal and dual point the improvements could shrink to zero as 04 7 0 Moreover it is not clear that we can also improve the complementary slackness at each iteration To simplify the matter let us assume that the initial point 0110213 is strictly feasible ie it satis es Azo b ATyO 20 c 020 gt 0 64 CONVERGENCE CONSIDERATIONS 45 It follows from the recursive relationships 614 that the feasibility of 0 yo 20 implies the feasibility of 7113211 for k 17 27 In this case7 0 Fk7yk72k 0 szk and iWWRwaat j 4MVoa i1 Hence7 it suf ces to consider driving the duality gap sz to zero Now consider the duality gap at x y where f y fr is ob tained from the update forrnula 613 Hence7 Tz x adzTz adz sz 04sz szz a2dzTdz sz 04sz szz 0 szozeTpe7Xz XdzZdrpe7Xz sz aa 71sz 
u axTzn 17 a17 UTz We leave the fact dtTdz 0 as an exercise Since by our choice 04 E 01 and 039 E 017 we have the recursive relationship WWW 7 17 M17 7sz lt 2 that is7 the algorithm always decreases the duality after each iteration More precisely7 ma gt7 H7MH07UMM4VofU k71 7 04717 Ugt xOT20 If the step parameters 0 are bounded below by a nonzero shortest step g ie7 0 2 Q then for allj 17a717a 17g17alt1 46 CHAPTER 6 PRIMAL DUAL IN TERI OR POIN T METHODS Consequently7 0 S S 17 g17 0kz0T20 a 0 as k a 007 which implies the convergence of the duality gap to zero7 and hence the convergence of the algorithm lndleedl7 we observe that the step pararneter Oz rarely shrinks to zero That is one of the reasons the algorithm usually works even for naive choices of step pararneter Appendix A Introduction to Newton s Method One of the fundamental computational problems in science and engineering is to solve nonlinear systems of equations of multiple variables A general nonlinear square system of equations can be expressed as follows Fm 0 M where z 6 3E and F 3 a 3B namely there are 71 variables and n equations We will assume that not all of the n equations are linear otherwise Al becomes a linear system which is much easier to solve In a more detailed notation we have 1 FIW 1111727 7 z E 7 E E 2z1x2 J l m l anltzgtl l a j where we use the subscripts to denote components of vectors For example a simple nonlinear system with n 2 is 7 Fz Gabe 0 A2 2 2 z1z275 At a given point 0 6 3 note that we use a superscript to denote a particular vector considered as the current approximate solution to the 47 48 APPENDIX A INTRODUCTION TO NEWTON S METHOD equation 0 we approximate by its rst order Taylor expansion Lcz E Fx0 Ff x 7 f where F z is an n gtlt 71 matrix called the Jacobian matrix of Fx which is the rst derivative of The entries of the Jacobian matrix F x consist of all the rst order partial derivatives of the component functions Fiz7s arranged in an ordered fashion ie 7 In other words 
the 24th row of F x is the gradient of For example when n 2 F x is a 2 gtlt 2 matrix 3F1m 3F1m i 31 31 F 95 7 31021 31021 an 312 In particular the Jacobian of the function in A2 is 262ml 76 F 7 2 2 J A3 We note that since 0 is a xed point Fx0 is a xed n vector and F 0 a xed 71 gtlt 71 matrix Hence Lcz is a linear function of z and called the linearization of at the point x0 Solving the square linear system of equations Lcz 0 we obtain a point f as a new approximate solution to the equation 0 It is easy to verify that the solution to Lcz 0 is N we 7 F ltzcgti1Fltzcgt provided that F 0 is nonsingular We leave the veri cation to the reader In the case of n 1 both and its derivative F x are scalar func tions The linearization of at f y Lcx E Fx0 F 0 x 7 0 A4 is the tangent line of the curve y at the point z 0 The new approximate solution F 0 95 c 9 A5 Figure A1 Newton7s method one dimensional case is the intersection of the tangent line y Lcx with the s axis y 0 see Figure A In the general case of 71 variables and n equations Newton7s method computes the iteration sequence C 3 by the recursive formula W zk e ksenviron starting from a given initial point zo lf 0 is suf ciently close to a solution f satisfying Fx 0 then under proper conditions we can expect that the sequence converges to the solution f namely lim xk f kaoo The main idea behind Newton7s method can be described as follows Instead of trying to directly solve the hard problem the nonlinear system one solves a sequence of easy problems the linear systems to obtain a sequence of approximate solutions that will hopefully get closer and closer to the true solution Introducing a step parameter 04k 6 01 at each iteration we obtain the so called damped Newton7s method mk M 7 akFk 1Fk It is known that properly chosen step parameters can help enhance the con vergence of Newton7s method which is corresponding to the case where 04k 1 for all k 50 APPENDIX A INTRODUCTION TO NEWTON S METHOD A 1 An Example Let us consider 
solving the simple nonlinear system de ned in A27 namely7 Fm 52 5 l 0 2 2 z1x275 It has two equations and two variables lncidently7 it also has two solutions z 1 2 and z 71 72 The Jacobian matrix of this function is given in A3 The following Matlab functions implement Newton7s method for solving the nonlinear system A2 Z main program function x newtonx0 Z input x0 the starting point Z output x hopefully a solution Z make x0 a column vector if not already if sizex01 lt sizex02 x0 x0 end x x0 iter 1 Z iteration counter tol 1 8 Z error tolerance maxiter 0 Z maximum iteration while iter lt maxiter Z begin loop F funcx Z evaluate Fx resnrm normF Z residual norm Z print out information at each iteration fprintf iter Z2i residualnorm Z82e iterresnrm fprintf x Z74f Z74fn x Z stopping criterion if resnrm lt tol disp Converged break end J jacobianx Z evaluate Jacobian x x JF Z update iterate iter iter 1 Z increment counter A N EXERCISES end Z end loop Z function evaluation function F funcx F exp2x1 expx2 x1quot2x2quot2 5 Z Jacobian evaluation function J jacobianx J 2exp2x1 expx2 2x1 2x2 These Matlab functions are saved into a le called newton1m and can be invoked with an initial point x0 15 15 by typing newton115 15 in Matlab The output from Matlab is as follows newton115 15 iter 1 residualnorm 156e001 x 15000 15000 iter residualnorm 296e000 x 11673 19994 iter 3 residualnorm 393e001 x 10228 19937 iter 4 residualnorm 751e003 x 10005 19999 iter 5 residualnorm 309e006 x 10000 20000 iter 6 residualnorm 524e013 x 10000 20000 Converged For this initial point7 which is fairly close to the solution f 1 2 Newton7s method takes six iterations to achieve the required accuracy A2 Exercises 1 Justify the following claims a y Cs in A4 is the tangent line of the curve y at the point z x0 b f in A5 is the intersection of the tangent line with the x axis 2 Run newton1m with initial guesses x012agtk117 a246810 52 9 APPENDIX A INTRODUCTION TO NEWTON S METHOD and answer the following questions 
a Does Newton7s method always converge b When Newton7s method converges7 does it always converge to the solution closest to the initial point c When Newton7s method converges7 describe how the residual norm decreases at each iteration towards the end Write an M le newton2 m7 by modifying the functions in newtonl m7 to solve the following nonlinear system 1271 37471 F 37572 0 1M 25 with the starting points z 1 2 3 2 1 and z 10 1 10 1 10 Observe that for the second starting point7 the obtained solution is NOT nonnegative Chapter 2 Vertices of the Feasibility Set We have seen that both the feasibility set and the solution set of a linear program are polyhedra in 9 Like polygons in two or three dimensional spaces polyhedra in 9quot generally have corners or vertices 21 Matrix and Vector Partitions It will be convenient for us to use matrix partitions in the development of theory and algorithms for linear programming This section introduces nec essary notations for partitioned matrices and vectors which can be skipped by advanced readers Consider a 3 X 7 matrix 111 112 113 114 115 116 117 A 121 a22 123 124 a25 126 127 lt A1 A2 A7 7 131 132 133 134 135 136 137 where A is the j th column of A ie A alj 12 137T is a 3 X 1 vector for j 12 7 For any vector x 6 7 11111 11212 117 7 14 121 122 127 E A j 131 as z 137 7391 Let us partition the index set 1 2 3 4 5 6 7 into two subsets B and N with an arbitrary order within each subset say B 524 N 1 6 73 13 14 CHAPTER 2 VERT I CES OF THE FEASIBILITY SET Then AB and AN are respectively the 3 X 3 and 3 X 4 sub matrices of A whose columns are those of A corresponding to the indices and orders within B and N respectively Namely AB A5 A2 A4 AN A1145 A7 A3 Similarly x3 6 933 and xN E 934 are respectively sub vectors of 1 whose components are those of 1 corresponding to the indices and orders within B and N respectively Namely x3 x5 x2 x4T xN x1 x5 x7 x3T Corresponding to the index set partition B N 7 A1 E ijj ZAJxj E ijj ABxB ANxN 7391 
jeB jeN Moreover the linear system of equations Ax b can be written as 14313 Jr ANiN b for any right hand side vector b E 933 Obviously these notations can be extended to the general case where A 6 WWquot 1 E 9quot and b E 93 For any index partition B N with BUN12n B N 21 there holds ACEb ltgt ABCEBANCENZJ where the symbol ltgt denotes equivalence Moreover if B contains m indices ie lBl m and the square matrix AB is nonsingular there holds ACEb ltgt BAg1biANIN That is we can solve the equation Ax b to obtain mg as a function of 1N Furthermore in this case x 2 0 gt Aglap 7 ANN 2 01N 2 0 24 22 CONVEX SET POLYHEDRON AND EXTREME POINT 15 22 Convex Set Polyhedron and Extreme Point 221 De nitions De nition 1 Convex Set A setC C 9 is a convex set if xy EC xliy EC VAE 01 that is the line segment connecting any two members lies entirely in the set By this de nition an empty set is a convex set De nition 2 Extreme Point A point x is an extreme point of a convex set C C 9 if it does not lie in between any two members of the set that is for any 312 6 C x 7 y l 7 2 VA 6 01 De nition 3 Polyhedron and its vertices A polyhedron in 9quot is a set of points in 9quot that satisfy a collection of linear equalities andor in equalities A bounded polyhedron is also called a polytope An extreme point of a polyhedron is also called a vertex of the polyhedron Figure 21 Convex sets Proposition 1 The following statements are true 1 A hyperplane x E 9 aTx is convex 2 A half space x E 9quot aTx S is convex 3 The intersection of a collection of convex sets is convex 4 A polyhedron is convex 16 CHAPTER 2 VERT I CES OF THE FEASIBILITY SET 222 Vertices of a Standard Polyhedron Let us examine closely a particular polyhedron Tx 9 i AxbxZO 25 where the matrix A 6 Wm and the vector b E 9 are given The vector inequality 1 2 0 is understood as component wise7 ie7 xi 2 0 for 239 17 27 771 We will call 7 the standard polyhedron because we will treat it as the representative of polyhedrons We will assume 
that a the matrix A has more columns than rows so that m lt n and b the ranks of A is m These assumptions guarantee that in general the equation Ax b will have in nite many solutions While the rst assumption will occur naturally from the context of linear programming7 the second one can always be achieved by a routine linear algebra procedure whenever the original matrix A is not full rank Extreme points of a polyhedron are the corners of the set They are easy to tell only in two or three dimensional spaces where visualization is possible In high dimensional spaces7 we need some algebraic characterization for extreme points For any nonnegative vector x E 9quot we can de ne a natural index parti tion P E Px 239 xi gt O7 O E 0x 239 xi O7 26 where we drop the dependence of P and O on 1 whenever it is clear from the context Furthermore7 we assume that the indices within P and O are in xed orders7 even though how they are ordered is inconsequential Lemma 1 A point x E 9quot is an extreme point of if and only if the matrix Apm 239s offull column Tank Proof Without loss of generality7 we assume that A AP A0 and xT mg mg otherwise7 a re ordering will suf ce Clearly7 Ax Apxp b7 xpgtOEWPl andxo069 lol with lPllOl n We rst prove the necessary condition by contradiction Suppose x is an extreme point of 7 but Ap is not of full rank Then there exists a nonzero vector v 6 Wm such that Apv O Moreover7 there exists a suf ciently small but positive scalar T E 9 such that xp i 7 1 2 0 Let yltxpgmew 2ltP0 ew 22 CONVEX SET POLYHEDRON AND EXTREME POINT 17 then Ay A2 b and 112 2 O that is 112 6 7 Since 1 1 x Ey i2 for x 7 y E 7 and x 7 2 E 7 we have arrived at a contradiction to the hypothesis that x is an extreme point of 7 We now prove the suf cient condition by contradiction Suppose that AP is of full column rank but x is not an extreme point of 7 Then there exist distinct points 112 6 7 with y 7 x 7 2 such that for some 6 01 CUP 7A2p 0 yo Jr 7 20 I The nonnegativity of y and 2 
implies that yo 20 O and hence yp 7 2p 7 0 Therefore Apyp Apr b or Apyp 7 2p O contradicting to the hypothesis that AP is of full column rank 1 Theorem 1 Let 7 be de ned in where A 6 Wm has a rank equal to m A point x E 7 is an extreme point vertex 0fr if and only if there exists an index partition B N of 1 2 n see such that lBl rankAB m Aglb 2 O 27 and x3 Aglb 1N 0 28 Proof Let condition 27 hold and x be de ned as in 28 Since x 2 0 and 14 ABIB Jr ANiN ABB b clearly x E 7 and Px C B Because the columns of A3 are linearly independent so are those of APW By Lemma 1 x is a vertex of 7 Now suppose that x is a vertex of 7 By Lemma 1 AP APW has a full column rank lf lPl m then B P satis es the conditions 27 and 28 On the other hand if lPl lt m we can always expand the index set P to a larger index set B so that P C B and Bi m rankAB For such an index set B which is generally not unique both 27 and 28 hold This proves the theorem D 18 CHAPTER 2 VERT I CES OF THE FEASIBILITY SET 223 Basic Feasible Partitions and Vertices De nition 4 Basic Feasible Partition An index partition B7 N is a basic feasible partition for the polyhedron 7 whenever holds Moreover a basic feasible partition is called nondegenerate if Aglb gt 0 otherwise it is degenerate In View of Theorem 17 under the condition rankA m we have the following correspondence between the vertices of 7 and the basic feasible partitions for 7 1 Each basic feasible partition corresponds to a vertex of 7 which is de ned as in 28 D Each nondegenerate vertex of 7 corresponds to a basic feasible parti tion In this case7 Px B 03 In general7 each degenerate vertex of 7 corresponds to multiple ba sic feasible partitions In this case7 lt m and Px must be expanded to B
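The correspondence just described can be checked by brute force on a small example: enumerate all size-m index sets B, keep those with A_B nonsingular and A_B^{-1} b ≥ 0 (the basic feasible partitions of Definition 4), and read off the vertices via (2.8). Below is a Python sketch with made-up data (m = 2, n = 3); the tiny 2 × 2 solver is hand-coded so the example stays self-contained.

```python
from itertools import combinations

# Made-up standard polyhedron F = {x >= 0 : Ax = b} with m = 2, n = 3.
A = [[1.0, 1.0, 1.0],
     [0.0, 1.0, 2.0]]          # rank 2
b = [2.0, 1.0]
m, n = 2, 3

def solve2(B):
    """Solve A_B u = b for a 2-element index tuple B; None if A_B singular."""
    a11, a12 = A[0][B[0]], A[0][B[1]]
    a21, a22 = A[1][B[0]], A[1][B[1]]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return [( a22 * b[0] - a12 * b[1]) / det,
            (-a21 * b[0] + a11 * b[1]) / det]

vertices = []
for B in combinations(range(n), m):
    xB = solve2(B)                                   # x_B = A_B^{-1} b
    if xB is not None and all(v >= 0 for v in xB):   # basic feasible partition
        x = [0.0] * n                                # x_N = 0
        for pos, j in enumerate(B):
            x[j] = xB[pos]
        vertices.append(x)

print(vertices)   # two vertices; B = {2, 3} gives a negative component and is rejected
```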
