# Optimization MAT 168

These 13 pages of class notes were uploaded by Otilia Murray I on Tuesday, September 8, 2015. The notes belong to MAT 168 at the University of California, Davis, taught by James Bremer in Fall. Since their upload, they have received 15 views. For similar materials see /class/187343/mat-168-university-of-california-davis in Mathematics (M) at the University of California, Davis.


# LINEAR PROGRAMMING REDUX

Jim Bremer, May 12, 2008

The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains all of this material, but it is unfortunately spread across several chapters and, in my opinion, confusing in parts. These notes are a bit more demanding than the book; if you can read and thoroughly understand them, then you are doing very well in the course.

The difficulty that most students will encounter in these notes is that they assume a thorough knowledge of linear algebra. I have said several times in class, and will repeat now: there is, in my experience, almost no subject with a better reward/effort ratio than linear algebra (statistics being perhaps more rewarding still per unit of effort). A thorough, complete knowledge of elementary linear algebra will serve anyone in a technical field well indeed.

Note that this discussion is not comprehensive; in particular, I have omitted any discussion of numerical implementation and stability, as well as any discussion of the geometry underlying linear programs.

## 1. Underdetermined Systems

We begin by reviewing underdetermined systems of linear equations. Let $A$ be an $m \times n$ matrix with $n > m$. Then the system of equations

$$Ax = b \tag{1}$$

is underdetermined: it has more variables than equations. Suppose we were to row reduce the augmented matrix $[A \mid b]$ in order to solve the system. Because the matrix $A$ has more columns than rows, we will necessarily have free variables (if this isn't clear to you, work out an example now). So we cannot expect a unique solution for this system. Of course, it is also possible that (1) has no solutions at all (if this isn't clear, stop and find an example).

By making an additional assumption on the matrix $A$, we can ensure that the system (1) has a solution for every $b \in \mathbb{R}^m$. In particular, we will generally assume that the matrix $A$ has rank $m$. Recall that the rank of a matrix is the dimension of its column and row spaces; these two dimensions
are equal. So the assumption that $A$ has rank $m$ means that there are no redundant equations in the system (1), or, equivalently, that the column space of $A$ must be all of $\mathbb{R}^m$; in other words, for every $b$ there is some solution of the system. It is also equivalent to saying that some set of $m$ columns of the matrix $A$ forms a basis for $\mathbb{R}^m$.

We will be particularly interested in a distinguished class of solutions of (1). We say that $x \in \mathbb{R}^n$ is a *basic solution* for the system (1) if there is a set of indices $\{i_1, \dots, i_m\} \subset \{1, 2, \dots, n\}$ such that

1. $x_j = 0$ for $j \notin \{i_1, \dots, i_m\}$, and
2. the corresponding columns $A_{i_1}, \dots, A_{i_m}$ of the constraint matrix $A$ form a basis for $\mathbb{R}^m$.

Note that the columns of the matrix corresponding to a basic solution form a submatrix $B$ of $A$ which is invertible; indeed, a set of $m$ vectors $v_1, \dots, v_m$ in $\mathbb{R}^m$ forms a basis if and only if the matrix whose columns are $v_1, \dots, v_m$ is invertible (check this).

Basic solutions for (1) are relatively easy to find: simply find a submatrix $B$ consisting of columns $A_{i_1}, \dots, A_{i_m}$ of $A$ which form a basis (perhaps via Gram–Schmidt orthogonalization), solve the $m \times m$ system $Bz = b$ for $z$, and form the basic solution $x$ with entries

$$x_j = \begin{cases} z_k & \text{if } j = i_k \\ 0 & \text{otherwise.} \end{cases}$$

Since the matrix $B$ is invertible, there is one and only one basic solution $x$ associated with the columns $i_1, \dots, i_m$. We will call that solution the basic solution associated with the columns $i_1, \dots, i_m$. Alternately, we will say that $x$ is the basic solution corresponding to the invertible submatrix $B$.

These facts are sufficiently important that we will repeat them in lemma form; we have proved the following.

LEMMA 1.1. *Suppose that $A$ is an $m \times n$ matrix ($n \ge m$) of rank $m$, and further suppose that the columns $i_1, \dots, i_m$ of $A$ form a basis for $\mathbb{R}^m$. Then there is a unique vector $x$, which we will call the basic vector for the columns $i_1, \dots, i_m$, such that (1) $Ax = b$ and (2) $x_j = 0$ for all $j \notin \{i_1, \dots, i_m\}$.*

## 2. Linear Programs

A linear program is any optimization problem of the form

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{2}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank
$m$, $c$ is a given vector of length $n$, $b$ is a given vector of length $m$, and $x$ is a vector of unknowns of length $n$.

Remark 2.1. Note that the assumption that $A$ has rank $m$ ensures that there are no redundant constraints in (2). It differs from the usual definition of a linear program only in that it excludes certain infeasible problems from consideration; e.g., the problem

$$\max\ x_1 + x_2 \quad \text{subject to } x_1 - x_2 = -1,\ -x_1 + x_2 = -1,\ x_1, x_2 \ge 0$$

is excluded (its constraint matrix has rank 1, not 2).

We will call a vector $x$ which satisfies the constraints $Ax = b$ and $x \ge 0$ a *feasible vector* for the program (2). Moreover, a feasible vector $x$ for which $c^t x$ obtains a maximum among the set of all feasible vectors is called an *optimal feasible vector* for the program (2), or, more simply, a *solution* to (2). The linear function $c^t x$ is called the *objective function* for the program.

The first thing we should note about the problem (2) is that the constraint equations $Ax = b$ are underdetermined. Because of our assumption about the rank of $A$, we expect there to be an infinite number of solutions for this system of equations. That only stands to reason, since the problem is asking us to find, from among the infinite number of $x$'s satisfying the constraints, a single $x$ such that $c^t x$ is maximized.

Our second immediate observation is that, despite the fact that the system $Ax = b$ has an infinite number of solutions, it is not necessarily true that there exists even a single $x$ satisfying all of the constraints. For example, this is the case for the linear program

$$\max\ x_1 + x_2 + x_3 \quad \text{subject to } x_1 + x_3 = -1,\ x_2 - x_3 = -1,\ x_1, x_2, x_3 \ge 0. \tag{3}$$

We call a linear program for which there are no vectors $x$ satisfying both $Ax = b$ and $x \ge 0$ *infeasible*. Clearly, infeasible linear programs do not admit a solution.

There is another way in which a linear program can fail to have a solution: there might not be a maximum value of the objective function $c^t x$. In such cases we say that the program (2) is *unbounded*.

We close this section with a final observation about (2): if the set of $x$ which satisfy the constraints is bounded as a set in $\mathbb{R}^n$, then the
problem (2) cannot be unbounded. As with all sufficiently elementary facts, there are many different incantations we can invoke to see that this is so; e.g., a continuous function on a compact set in $\mathbb{R}^n$ obtains its maximum. This implies, for example, that a linear program of the special form

$$\max\ c^t x \quad \text{subject to } Ax = b,\ l \le x \le u$$

cannot be unbounded; it can, however, still be infeasible.

## 3. The Fundamental Theorem

The Fundamental Theorem of Linear Programming (for which there is a separate handout on the website, including a proof) is an extremely powerful statement about solutions of problems of the form (2).

THEOREM 3.1 (FTLP). *Suppose that $A$ is an $m \times n$ matrix ($n > m$) of rank $m$. If the linear program*

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0 \tag{4}$$

*admits a solution, then there is a solution $x \in \mathbb{R}^n$ and a set $i_1, \dots, i_m$ of $m$ indices such that the following properties hold: (1) $x_j = 0$ for $j \notin \{i_1, \dots, i_m\}$; (2) the corresponding columns $A_{i_1}, \dots, A_{i_m}$ of the constraint matrix $A$ form a basis for $\mathbb{R}^m$.*

In other words, if there is a solution to (4), then there is a solution to (4) which is a basic solution for the constraint equation $Ax = b$. Solutions of this form are sufficiently important that we immediately make a definition: a vector $x$ which satisfies the constraints $Ax = b$ and $x \ge 0$, and for which, in addition, the conditions (1) and (2) above hold, will be called a *basic feasible vector* for the program (4). If, in addition, $x$ is a solution to (4), we will call $x$ a *basic solution* for the linear program (note the distinction between a solution to the constraint equations and a solution to the linear program as a whole).

The Fundamental Theorem immediately reduces the problem of finding a solution to a linear program from a search through a potentially infinite set to a search through a finite one. In particular, it suggests that instead of looking for $x$ which satisfy the constraints, we should look for subsets of columns of the constraint matrix $A$ which form bases.

We can now describe our first algorithm for solving the linear program (4). To simplify things, we will assume that the problem
under consideration is not unbounded. Note that it is not difficult to modify this algorithm to detect unboundedness, but we won't do that here, because we will shortly be introducing a superior algorithm.

We begin the algorithm by letting $\eta = -\infty$ and $x_0 = 0$, and then executing the following sequence of steps for each set of $m$ indices $i_1, \dots, i_m$ in the set $\{1, \dots, n\}$.

Step 1: Form the matrix $B = (A_{i_1}\ A_{i_2}\ \cdots\ A_{i_m})$ of the columns of $A$ corresponding to the indices $i_1, \dots, i_m$.

Step 2: If $B$ is invertible (that is, if the columns of $B$ are a basis for $\mathbb{R}^m$), then find the unique solution $z \in \mathbb{R}^m$ to the system of equations $Bz = b$. Otherwise, we are finished processing this set of indices.

Step 3: If $z$ satisfies the constraints $z \ge 0$, then we have found a basic feasible vector $x$, which is defined by

$$x_j = \begin{cases} z_k & \text{if } j = i_k \\ 0 & \text{otherwise.} \end{cases}$$

Otherwise, we are finished processing this set of indices.

Step 4: If $c^t x > \eta$, then we let $x_0 = x$ and $\eta = c^t x$.

This procedure will terminate in a finite number of steps, since there are $\binom{n}{m}$ ways of choosing $m$ indices from a set of $n$ possible indices. Upon termination, $\eta$ will either be $-\infty$, in which case the problem is infeasible, or $\eta$ will be the maximum value of the objective function and $x_0$ will be a basic solution.

## 4. Tableaus

Consider the linear program

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{5}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank $m$, and suppose that $\bar x$ is a basic solution to the constraint equation $Ax = b$. Note: we are not assuming that it is a solution to the entire linear program, just the constraint equation; indeed, we are not even assuming that it is feasible. In the future we will call such vectors *basic vectors* for the linear program (5), to avoid any confusion over the word "solution." A basic vector which is feasible will be called, of course, a *basic feasible vector*, and a basic vector which is feasible and optimal will be called a *basic solution* for the LP (5).

We can, by rearranging the columns of $A$ (and the rows of $x$), ensure that the basis associated with $\bar x$ consists of the first $m$ columns of $A$. We can then write the LP
(5) as

$$\max\ c_B^t x_B + c_N^t x_N \quad \text{subject to } \begin{pmatrix} B & N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = b,\ \ x_B, x_N \ge 0, \tag{6}$$

where we have partitioned the variables $x_1, \dots, x_n$ into two sets: the basic variables $x_B$, corresponding to the first $m$ columns of $A$, and the nonbasic variables $x_N$. This leads to a corresponding partitioning of the constraint matrix $A$ into the $m \times m$ invertible matrix $B$ and the $m \times (n - m)$ matrix $N$.

Remember that we started with some basic vector $\bar x$ for the constraint equations $Ax = b$. We can also partition its entries: we will let $\bar x_B$ denote the values of the basic entries of $\bar x$ and $\bar x_N$ denote the nonbasic entries. Because $\bar x$ is a basic solution, its nonbasic entries are zero; i.e., $\bar x_N = 0$.

The constraint equation in (6) is of the form $B x_B + N x_N = b$. Since $B$ is invertible, we can multiply both sides of this equation by $B^{-1}$, which yields

$$x_B + B^{-1} N x_N = B^{-1} b. \tag{7}$$

We will make two observations about this equation. First, plugging the vector $\bar x$ into (7), we get $\bar x_B + B^{-1} N \bar x_N = B^{-1} b$, or, since $\bar x_N = 0$,

$$\bar x_B = B^{-1} b.$$

Moreover, (7) allows us to rewrite the objective function in the form

$$c^t x = c_B^t x_B + c_N^t x_N = c_B^t (B^{-1} b - B^{-1} N x_N) + c_N^t x_N = \bar\zeta + \bar c^t x_N,$$

where $\bar\zeta = c_B^t B^{-1} b$ and $\bar c$ is a column vector of length $n - m$ (explicitly, $\bar c^t = c_N^t - c_B^t B^{-1} N$). Thus we can rewrite the LP (5) as

$$\max\ \bar\zeta + \bar c^t x_N \quad \text{subject to } \begin{pmatrix} I & B^{-1} N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \bar x_B,\ \ x_B, x_N \ge 0. \tag{8}$$

We will call the form (8) of the linear program the *tableau* associated with the basic vector $\bar x$. We say that (8) is a *feasible tableau* if $\bar x \ge 0$, and it is an *optimal tableau* if $\bar x$ is an optimal feasible vector. This form has several useful properties:

1. It explicitly exhibits the value of $\bar x$ in the constants that appear on the right-hand side of the constraint equation.
2. It is very easy to determine if $\bar x$ is a feasible vector for the linear program: if $\bar x_B \ge 0$, then $\bar x$ is feasible; otherwise it is not.
3. Assuming $\bar x$ is feasible, it is also easy to determine if $\bar x$ is a solution to the LP. Although this form of the LP is associated with a particular vector $\bar x$, it is in fact equivalent to the original LP; thus we can test the feasibility of any vector $x$ and evaluate the objective function $c^t x$ using (8). In particular, we examine the vector $\bar c$. If none of the entries
of $\bar c$ are positive, then there can be no other basic feasible vector $x$ for which $c^t x$ takes on a value larger than $\bar\zeta$ (note that the feasibility assumption is important here, because it means the entries of $x_N$ we are testing are nonnegative).

Remark 4.1. We are used to writing down the tableau form of a linear program in a table, in the usual manner, with the nonbasic variables labeling one side and the basic variables the other.

## 5. The Simplex Method: Overview

In this section we introduce the simplex method, an improved algorithm for solving linear programs

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{9}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank $m$.

Suppose that $\bar x$ and $\hat x$ are basic vectors for the LP (9), and further suppose that $\bar x$ is associated with the columns $i_1, \dots, i_m$ of $A$ and $\hat x$ with the columns $j_1, \dots, j_m$. Then we say that $\bar x$ and $\hat x$ are *adjacent* if the sets $\{i_1, \dots, i_m\}$ and $\{j_1, \dots, j_m\}$ have $m - 1$ elements in common. In other words, two basic vectors $\bar x$ and $\hat x$ are adjacent if their associated bases differ by one element.

The idea behind the simplex method is quite simple: it is an iterative method which, starting with an initial basic feasible vector, moves from one basic feasible vector to another adjacent one in an effort to increase the value of the objective function. For each iteration $j$ we form the tableau

$$\max\ \bar\zeta_j + \bar c_j^t x_N \quad \text{subject to } \begin{pmatrix} I & B^{-1} N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \bar x_j,\ \ x_B, x_N \ge 0, \tag{10}$$

for the associated basic feasible vector $\bar x_j$ (there is a slight abuse of notation here: we are identifying the basic part of $\bar x_j$ with $\bar x_j$). If $\bar x_j$ is a solution for the linear program, it is evident from the coefficients of (10) (see the observations about tableaus above). Otherwise, we can choose a nonbasic variable in $x_N$ whose coefficient is positive (called the *entering variable*), which we will move into the basis. Of course, we must swap this variable with a properly chosen basic variable from $x_B$ (called the *leaving variable*) in order to maintain a basis. In most cases this results in an increase in the objective function.

Remark 5.1. It is very important to understand what having an initial basic feasible vector entails. It means not only do we have a vector
$\bar x$ with no more than $m$ nonzero entries which is feasible ($\bar x \ge 0$ and $A \bar x = b$), but also that the associated columns of the constraint matrix $A$ form a basis. Not any old basis of columns of $A$ will do: we need a basis consisting of columns of $A$ such that the associated basic solution is nonnegative.

## 6. A Simplex Method Step

In this section we describe in detail what a single step of the simplex method entails. First, we assume that the underlying linear program is of the form

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{11}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank $m$. Next, we suppose that at the $j$th iteration we have the tableau

$$\max\ \bar\zeta + \bar c^t x_N \quad \text{subject to } \begin{pmatrix} I & B^{-1} N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \bar x_B,\ \ x_B, x_N \ge 0, \tag{12}$$

which is associated with the basic feasible vector $\bar x$. Throughout this section, whenever we have a vector $x \in \mathbb{R}^n$, we will denote by $x_B$ the part of $x$ corresponding to the basic variables for the tableau (12), and by $x_N$ the part of $x$ corresponding to the nonbasic variables for this tableau.

Let us be explicit about the dimensions of the various blocks of the constraint matrix in (12): the matrix $B$ is an $m \times m$ matrix, the identity block in (12) is also $m \times m$, and $B^{-1} N$ is $m \times (n - m)$. This, of course, means that $x_B$ represents the $m$ basic variables and $x_N$ represents the $n - m$ nonbasic variables.

Our goal is to find an adjacent basic feasible vector $\hat x$ for which the objective function is larger than it is at $\bar x$. Recall that a basic vector adjacent to $\bar x$ is a basic vector which shares $m - 1$ of its basic columns from the matrix $A$ in common with $\bar x$. So, in other words, we are going to proceed by switching one basic variable with a nonbasic variable.

We first choose the nonbasic variable from $x_N$ which will become basic in the next iteration. We do this by examining the coefficients of $\bar c$ and looking for one which is positive: increasing the corresponding nonbasic variable will increase the objective function (note that if no such nonbasic variable can be found, then $\bar x$ is already optimal). We call the chosen nonbasic variable the *entering variable*, since it will be
entering the basis for the next iteration. Of course, when we increase the entering variable, we will need to adjust all of the basic variables in order to ensure that the constraint equation $Ax = b$ is still satisfied.

We are proposing to change the nonbasic part of $\bar x$ by letting $x_N = t e_k$, where $t$ is a suitably chosen positive real number and $e_k$ is the standard basis vector corresponding to the nonbasic variable we have chosen (i.e., it has a one in the correct position and zeros everywhere else). Since the equation

$$x_B + B^{-1} N x_N = \bar x_B \tag{13}$$

must hold for any solution of the constraint equations, we have $x_B = \bar x_B - t B^{-1} N e_k$. We define the vector $\Delta x$ by $\Delta x = B^{-1} N e_k$; this is the direction of the change in $x_B$. Notice that $N e_k$ is just the column of the original constraint matrix $A$ which corresponds to the entering variable.

We would like to increase $t$ as much as possible; recall that the objective function will increase by $\bar c_k t$. The value of $t$ is usually constrained, however, by the requirement that $x_B \ge 0$. In particular, when the value of coordinate $i$ of $\Delta x$ is greater than 0, the corresponding equation in (13) gives us a bound on $t$. So we let

$$t = \min\left\{ \frac{(\bar x_B)_i}{(\Delta x)_i} \right\},$$

where the minimum ranges over all indices $i$ for which $(\Delta x)_i$ is positive. If there are no such indices, then the problem is unbounded; that is, we can increase $t$ to infinity without violating the constraints, and as we do so the objective function will increase without bound.

That is, we increase $t$ until one or more of the variables in $x_B$ becomes zero. We now choose from among this set of variables a *leaving variable*: a variable to move out of the basic set $x_B$ and into the nonbasic set $x_N$ at the next iteration. We have now computed the value of the basic feasible vector at the next iteration: $\hat x_B = \bar x_B - t \Delta x$ and $\hat x_N = t e_k$. It only remains to do bookkeeping to update the list of variables that are basic and nonbasic for the next iteration.

Remark 6.1. Note that it is never necessary, while performing the simplex method algorithm, to actually compute the form of the constraint matrix which appears in (12). Indeed, it is
more convenient to leave the constraint matrix in its original form (11) and compute the direction $\Delta x$ by solving a system of equations.

Remark 6.2. Assuming that we do not run into unboundedness, the new set of basic variables does indeed correspond to a set of columns of $A$ which forms a basis. To see this, note that the vector $\Delta x$ records the coefficients of the entering column with respect to the current basis. Because the entry of $\Delta x$ corresponding to the leaving variable is nonzero (by definition), it means that the entering column is not in the span of the basis columns excluding the leaving column. This implies that the resulting set of columns is a basis.

## 7. Degeneracy

## 8. Initialization I

There is an obvious unresolved difficulty with the simplex method: we need to find a basic feasible vector in order to start the simplex method (if we start with an infeasible vector, we might never find a solution to the LP). As was remarked upon above, this is a nontrivial task which entails more than finding an invertible submatrix of the constraint matrix.

We have been spoiled by the textbook's practice of writing linear programs in the form

$$\max\ c^t x \quad \text{subject to } Ax \le b,\ x \ge 0. \tag{14}$$

This form is particularly nice for initialization, because once $m$ slack variables $w$ have been introduced, (14) takes on the form

$$\max\ c^t x \quad \text{subject to } \begin{pmatrix} A & I \end{pmatrix} \begin{pmatrix} x \\ w \end{pmatrix} = b,\ \ x, w \ge 0.$$

This makes it trivial to find an initial basis for the constraint matrix: we simply pick the identity submatrix $I$. Of course, the resulting basic vector is feasible only if $b \ge 0$ (so there is still work to be done even in this case), but nonetheless it is easier than the initialization of a general LP.

Instead of considering this simple case, we will once again consider the general LP

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{15}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank $m$. Without loss of generality we may assume that $b \ge 0$: if $b_i < 0$ for some $i$, then we can multiply the corresponding constraint by $-1$.

In order to initialize (15), we introduce the auxiliary linear program (16): minimize $y_1 + y_2 + \cdots + y_m$ subject to $Ax + y = b$,
$x, y \ge 0$.

It should be clear that the original problem (15) is feasible if and only if (16) has a solution such that $y_1 = y_2 = \cdots = y_m = 0$. Moreover, it is easy to find an initial feasible vector for (16): we simply let $y = b$ and $x = 0$. Then the corresponding submatrix of the constraint matrix is the identity (and so clearly invertible), and, because $b$ can be assumed to satisfy $b \ge 0$, this vector is feasible.

Thus we initialize the simplex method by attempting to solve the auxiliary LP (16). If for the resulting solution $(x, y)$ we have $y_1 = y_2 = \cdots = y_m = 0$, then we have found a basic feasible vector for the original LP (15).

There is one complication, however. Finding a basic solution of (16) for which this condition holds means that we have found a basic feasible vector for (15), but it does not mean that we can necessarily identify the basis. In particular, if all of the basic variables for the solution of (16) are $x$'s, then it is obvious what columns of $A$ to choose: if the basic variables for the auxiliary solution are $x_{i_1}, x_{i_2}, \dots, x_{i_m}$, then the columns $A_{i_1}, \dots, A_{i_m}$ are a basis for $\mathbb{R}^m$, and the associated solution of $Ax = b$ is feasible. However, if one or more of the $y_j$ are included in the basic variables for the solution of (16), then more work must be done to find a basis of columns of $A$. We could finish the initialization process by completing our basis with a set of additional columns from $A$, but in most cases it is easier (and more elegant) to simply use a foolproof initialization scheme that we will discuss below.

Remark 8.1. Note that we can only have a basic solution for (16) with one or more $y_j$ as basic variables if the solution has fewer than $m$ nonzero entries.

## 9. Duality

The dual of the linear program

$$\max\ c^t x \quad \text{subject to } Ax \le b,\ x \ge 0 \tag{17}$$

is the linear program

$$\min\ b^t y \quad \text{subject to } A^t y \ge c,\ y \ge 0. \tag{18}$$

It is easy to see that the dual of the dual program (18) is again the original problem (17), which we refer to as the *primal* problem. This definition is motivated by the search for an upper bound for the objective function of (17) using the constraint equations. Your book does a good job
of explaining this in the beginning of Chapter 5. Note that there is one constraint in the dual problem for every variable in the primal problem, and one variable in the dual for each constraint in the primal.

Since (17) is not the usual form for linear programs, our first task is to find the dual for a program in the usual form

$$\max\ c^t x \quad \text{subject to } Ax = b,\ x \ge 0. \tag{19}$$

We proceed by rewriting the constraints in (19) as inequalities:

$$\max\ c^t x \quad \text{subject to } Ax \le b,\ -Ax \le -b,\ x \ge 0. \tag{20}$$

It is now clear that the dual of (19) is the linear program

$$\min\ b^t y_1 - b^t y_2 \quad \text{subject to } \begin{pmatrix} A^t & -A^t \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \ge c,\ \ y_1, y_2 \ge 0. \tag{21}$$

Letting $z = y_1 - y_2$, we see that (21) is equivalent to

$$\min\ b^t z \quad \text{subject to } A^t z \ge c,\ z \text{ free}. \tag{22}$$

In other words, we can combine the two sets of dual variables $y_1$ and $y_2$ we got from splitting the equality in the primal into a single dual variable $z$ which is not constrained to be nonnegative. In general we will find this to be the case: an equality constraint in the primal corresponds to a free variable in the dual, and a free variable in the primal corresponds to an equality constraint in the dual.

We are now in a position to prove the Weak Duality Theorem, which confirms the intuition which led to the definition of the dual.

THEOREM 9.1 (Weak Duality). *If $x$ is a feasible vector for the primal linear program (17) and $y$ is a feasible vector for its dual problem (18), then $c^t x \le b^t y$.*

Proof. Using the dual constraint $A^t y \ge c$ (together with $x \ge 0$) and the primal constraint $Ax \le b$ (together with $y \ge 0$), we have

$$c^t x \le (A^t y)^t x = y^t A x \le y^t b = b^t y. \qquad \text{QED}$$

Finally, we close this section with a discussion of affine objective functions, which will prove useful in the next section. Consider the optimization problem

$$\max\ \zeta + c^t x \quad \text{subject to } Ax \le b,\ x \ge 0, \tag{23}$$

where $A$ is an $m \times n$ matrix ($n > m$) of rank $m$. Notice that the objective function of (23) is not a linear function but rather an *affine* function. It should be clear that this is no obstacle to the solution of (23). Indeed, we can write (23) as a linear program in the form

$$\max\ c^t x + z \quad \text{subject to } Ax \le b,\ z = \zeta,\ x \ge 0,\ z \text{ free}. \tag{24}$$

By a slight abuse of notation
we will refer to problems of the form (23), with affine objective functions, as linear programs. The dual of the linear program (24) can then be written as

$$\min\ b^t y + \zeta w \quad \text{subject to } A^t y \ge c,\ w = 1,\ y \ge 0,\ w \text{ free}. \tag{25}$$

This linear program is evidently equivalent to the optimization problem

$$\min\ \zeta + b^t y \quad \text{subject to } A^t y \ge c,\ y \ge 0. \tag{26}$$

So we see that the dual of the program (23), with an affine objective function, is the linear program (26), which also has an affine objective function.

## 10. The Dual Tableau

The primary observation of this section, and the most important single fact in the theory of duality, is that the dual of a tableau for the primal problem is itself a tableau for the dual problem. This fact, and the form of the two corresponding tableaus, imply all sorts of nice results, e.g., strong duality. To see that this is so, suppose that

$$\max\ \bar\zeta + \bar c^t x_N \quad \text{subject to } \begin{pmatrix} I & B^{-1} N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \bar x_B,\ \ x_B, x_N \ge 0 \tag{27}$$

is a tableau associated with a basic feasible vector $\bar x$ for a primal problem. Using what we learned in the last section (check this!), we find that the dual of (27) is

$$\min\ \bar\zeta + \bar x_B^t y \quad \text{subject to } \begin{pmatrix} I \\ (B^{-1} N)^t \end{pmatrix} y \ge \begin{pmatrix} 0 \\ \bar c \end{pmatrix},\ \ y \text{ free}. \tag{28}$$

Of course, we can rewrite this as

$$\min\ \bar\zeta + \bar x_B^t y \quad \text{subject to } (B^{-1} N)^t y \ge \bar c,\ \ y \ge 0. \tag{29}$$

If we introduce slack variables $y_N$ into (29), and rename the variables already present $y_B$, then we arrive at the linear program

$$\min\ \bar\zeta + \bar x_B^t y_B \quad \text{subject to } \begin{pmatrix} I & -(B^{-1} N)^t \end{pmatrix} \begin{pmatrix} y_N \\ y_B \end{pmatrix} = -\bar c,\ \ y_B, y_N \ge 0. \tag{30}$$

We call this linear program the *dual tableau* corresponding to the primal tableau (27). We can immediately make the following observations:

1. There is a correspondence between the dual variables $y_B$ and the primal basic variables $x_B$, and between the dual variables $y_N$ and the primal variables $x_N$.
2. As written, the basic variables for the tableau (30) are the $y_N$ and not the $y_B$. So we have chosen to emphasize the relationship mentioned above rather than emphasize which variables are basic and which are nonbasic for the dual.
3. There is clearly a basic vector $\hat y$ associated with the dual tableau: $\hat y_B = 0$ and $\hat y_N = -\bar c$.
4. The basic vector $\hat y$ is not necessarily feasible for
the dual: we do not expect $-\bar c \ge 0$. We will call the basic vector $\hat y$ associated with the dual tableau (30) the *dual basic vector* of $\bar x$. This is a very important idea: we associate with each basic vector of the primal a basic vector of the dual.

We close this section by proving the Strong Duality Theorem. Thanks to our discussion of tableaus and dual tableaus, the proof is trivial.

THEOREM 10.1 (Strong Duality). *Suppose that $\bar x$ is a basic solution (i.e., a basic optimal feasible vector) for the primal problem*

$$\max\ c^t x \quad \text{subject to } Ax \le b,\ x \ge 0. \tag{31}$$

*Then the corresponding basic vector $\hat y$ for the dual*

$$\min\ b^t y \quad \text{subject to } A^t y \ge c,\ y \ge 0 \tag{32}$$

*of (31) is a basic solution (i.e., a basic optimal feasible vector) for the dual. Moreover,*

$$c^t \bar x = b^t \hat y. \tag{33}$$

Proof. Write the tableau

$$\max\ \bar\zeta + \bar c^t x_N \quad \text{subject to } \begin{pmatrix} I & B^{-1} N \end{pmatrix} \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \bar x_B,\ \ x_B, x_N \ge 0 \tag{34}$$

for the basic vector $\bar x$. That $\bar x$ is feasible means that $\bar x_B \ge 0$, and that it is optimal means that $\bar c \le 0$. Now the tableau

$$\min\ \bar\zeta + \bar x_B^t y_B \quad \text{subject to } \begin{pmatrix} I & -(B^{-1} N)^t \end{pmatrix} \begin{pmatrix} y_N \\ y_B \end{pmatrix} = -\bar c,\ \ y_B, y_N \ge 0 \tag{35}$$

is the corresponding dual tableau, the one associated with the basic vector $\hat y$. That $\bar c \le 0$ means that $\hat y$ is feasible, and $\bar x_B \ge 0$ implies that it is optimal (since the dual is a minimization problem). So $\hat y$ is a basic optimal vector for the dual. Equation (33) now follows because we have $\hat y_B = 0$ and $\bar x_N = 0$ for the pair of vectors $\bar x$ and $\hat y$. QED

## 11. Initialization II

We now discuss the use of the dual problem to effect initialization.

## 12. Problems

1. Write down a feasible, bounded linear program and solve it.
2. What is the dual of the linear program $\max c^t x$ subject to $Ax = b$, $l \le x \le u$? Is it possible for the dual of this LP to be infeasible? What does that say about the primal, assuming it is feasible?
3. Why does the simplex method have to start at a basic feasible vector? Can we begin with a nonbasic feasible vector? How about an infeasible basic vector?
4. How could you go about making the first initialization procedure foolproof?
5. What is the dual of the linear program $\max c^t x$ subject to $Ax = b$, $x$ free?
6. Show that the proof in Section 9 of the fact
that the dual of a program with an affine objective function has an affine objective function still holds if the equality constraint $z = \zeta$ is changed to $z \le \zeta$.

# THE FUNDAMENTAL THEOREM OF LINEAR PROGRAMMING

Here I give a proof of the "Fundamental Theorem of Linear Programming," as well as a discussion of its significance. Note that a statement of the theorem and a discussion of its geometric meaning can be found in Chapter 3 of the Vanderbei text.

Theorem. *Consider the linear program*

$$\text{maximize } c^t x \quad \text{subject to } Ax = b,\ x \ge 0, \tag{1}$$

*where $c$ is a nonzero vector of length $n$ and $A$ is an $m \times n$ matrix ($m \le n$) with rank $m$. If the problem (1) admits a solution $x$ (that is, there exists an optimal feasible vector $x$), then there is an optimal feasible vector $\tilde x \in \mathbb{R}^n$ such that $\tilde x$ has at most $m$ nonzero entries.*

Before we launch into a proof, let's discuss the significance of the theorem. Suppose that the L.P. (1) has a solution. Then we know that it has a solution with at most $m$ nonzero entries. The reason this is so useful is that if we pick $m$ entries $i_1, \dots, i_m$ to be the only entries allowed to be nonzero, then the constraints $Ax = b$ determine the rest of the entries. That is, there is at most one solution $x$ to the $m \times n$ system of equations

$$Ax = b \tag{2}$$

such that the entries $x_{i_1}, \dots, x_{i_m}$ are the only nonzero entries (prove this). Note that it's quite possible that there are NO such solutions. Moreover, it's entirely possible that there is such a solution $x$, but $x$ does not satisfy the other constraints, i.e., such that

$$x \ge 0 \tag{3}$$

does not hold. The important point is that there is AT MOST ONE such solution.

This means that we already have an algorithm for solving the LP problem (1). Namely, we consider each of the $\binom{n}{m}$ ways of choosing $m$ of the $n$ indices to be nonzero. Each such choice gives us at most one possible feasible vector, which we obtain by trying to solve the constraint equation $Ax = b$. We simply compute $c^t x$ for each of those vectors; the largest such value is the maximum of the objective function.

Proof of the Theorem. We will start by showing that if $x$ is a feasible vector with
$k > m$ nonzero entries, then we can produce a feasible vector $\tilde x$ with at most $k - 1$ nonzero entries. To do this, we first note that, given a set of $k > m$ indices $I = \{i_1, \dots, i_k\}$, we can find a nontrivial solution $y$ to the equation

$$Ay = 0 \tag{5}$$

such that $y_i = 0$ for $i \notin I$. This follows from the linear dependence of the columns $i_1, \dots, i_k$ of the matrix $A$. Now set

$$x_\varepsilon = x - \varepsilon y. \tag{6}$$

Note that, by our choice of $y$, $A x_\varepsilon = A x = b$, so any such vector satisfies the first constraint equation. Our goal is to find a feasible vector with one fewer nonzero entry than $x$. To do that, we will look at each coordinate of $x_\varepsilon$ separately. For each $i \in I$ with $y_i \ne 0$, let $\varepsilon_i$ satisfy

$$x_i - \varepsilon_i y_i = 0. \tag{7}$$

We can, without loss of generality, assume that at least one $\varepsilon_i$ is positive (why?). Let $\varepsilon$ be the smallest positive $\varepsilon_i$. Then it is easy to see that the vector $x_\varepsilon$ satisfies the constraints $x_\varepsilon \ge 0$ AND it has at most $k - 1$ nonzero entries.

Now it is clear that if there is a feasible vector $x$, then there exists one with at most $m$ nonzero entries: we just repeatedly apply the argument from the preceding paragraph until we get down to at most $m$ nonzero entries.

We need to make one additional observation to prove the theorem. Let $x$ be an optimal feasible vector, that is, a solution to the problem (1). Consider the vector

$$x_\varepsilon = x - \varepsilon y, \tag{8}$$

where $y$ is a solution to $Ay = 0$ with $y_i \ne 0$ only if $x_i \ne 0$. By a simple continuity argument, we see that for any $\varepsilon$ of small enough magnitude, either positive or negative, the vector $x_\varepsilon$ satisfies the constraints

$$A x_\varepsilon = b, \quad x_\varepsilon \ge 0. \tag{9}$$

Since we assumed that $x$ is an optimal feasible vector, that is, that $x$ maximizes $c^t x$, it follows that

$$c^t y = 0. \tag{10}$$

Otherwise, we could make $c^t (x - \varepsilon y)$ larger, while still satisfying the constraints, by choosing an $\varepsilon$ (either positive or negative) of small enough magnitude.

This means that if we start with an optimal feasible vector $x$ with $k > m$ nonzero entries and apply the argument of the first paragraph, the result will be a feasible vector $\tilde x$ with at most $k - 1$ nonzero entries such that

$$c^t \tilde x = c^t x. \tag{11}$$

In other words, $\tilde x$ will again be an optimal feasible
vector. The theorem now follows by repeated application of the procedure of the first paragraph to an optimal feasible vector.
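
The sketches below are not part of the original notes; they are small computational illustrations, with made-up data, of some of the constructions above. First, the recipe of Section 1 for basic solutions of an underdetermined system: pick $m$ columns, check that they form a basis, solve $Bz = b$, and scatter $z$ into a length-$n$ vector. Exact rational arithmetic via `fractions.Fraction` and a plain Gauss-Jordan elimination stand in for whatever linear algebra one would use in practice.

```python
from fractions import Fraction

def solve_square(B, rhs):
    """Solve the m x m system B z = rhs by Gauss-Jordan elimination over
    the rationals; return None if B is singular."""
    m = len(B)
    M = [[Fraction(B[i][j]) for j in range(m)] + [Fraction(rhs[i])]
         for i in range(m)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None                      # columns are linearly dependent
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

def basic_solution(A, b, cols):
    """Basic solution of A x = b whose nonzero entries lie in `cols`,
    or None if the chosen columns do not form a basis."""
    m, n = len(A), len(A[0])
    B = [[A[i][j] for j in cols] for i in range(m)]
    z = solve_square(B, b)
    if z is None:
        return None
    x = [Fraction(0)] * n
    for k, j in enumerate(cols):
        x[j] = z[k]                          # scatter z into position
    return x

# a made-up 2 x 4 system; columns 0 and 1 happen to form a basis
A = [[1, 0, 2, 1],
     [0, 1, 1, 3]]
b = [4, 5]
x = basic_solution(A, b, (0, 1))
print([int(v) for v in x])                   # [4, 5, 0, 0]
```

Different column choices give different basic solutions; a choice of linearly dependent columns is reported by returning `None`.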
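
The exhaustive algorithm of Section 3 (and of the discussion following the Fundamental Theorem) can be sketched the same way: enumerate all $\binom{n}{m}$ column subsets, keep the basic feasible vectors, and take the best objective value. The data below are made up, and the problem is assumed bounded, as in the text.

```python
from fractions import Fraction
from itertools import combinations

def solve_square(B, rhs):
    """Gauss-Jordan solve of a square rational system; None if singular."""
    m = len(B)
    M = [[Fraction(B[i][j]) for j in range(m)] + [Fraction(rhs[i])]
         for i in range(m)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

def brute_force_lp(A, b, c):
    """Try every size-m column subset; among those that yield basic
    feasible vectors, return (best value, best vector).  Returns
    (None, None) when the problem is infeasible; assumes boundedness."""
    m, n = len(A), len(A[0])
    eta, x0 = None, None
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        z = solve_square(B, b)
        if z is None or any(v < 0 for v in z):
            continue                  # not a basis, or not feasible
        x = [Fraction(0)] * n
        for k, j in enumerate(cols):
            x[j] = z[k]
        val = sum(ci * xi for ci, xi in zip(c, x))
        if eta is None or val > eta:
            eta, x0 = val, x
    return eta, x0

# made-up data: max x1 + x2 s.t. x1 + x3 = 1, x2 + x4 = 1, x >= 0
A = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
b = [1, 1]
c = [1, 1, 0, 0]
eta, x0 = brute_force_lp(A, b, c)
print(int(eta), [int(v) for v in x0])        # 2 [1, 1, 0, 0]
```

The cost is the expected combinatorial explosion, which is exactly why the notes move on to the simplex method.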
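
The tableau quantities of Section 4 for a given basis reduce to a handful of solves: $\bar x_B = B^{-1} b$ and, column by column, $B^{-1} A_j$, from which $\bar\zeta = c_B^t B^{-1} b$ and the reduced costs $\bar c_j = c_j - c_B^t B^{-1} A_j$ follow. A sketch with made-up data:

```python
from fractions import Fraction

def solve_square(B, rhs):
    """Gauss-Jordan solve of a square rational system; None if singular."""
    m = len(B)
    M = [[Fraction(B[i][j]) for j in range(m)] + [Fraction(rhs[i])]
         for i in range(m)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

def tableau_data(A, b, c, basis):
    """For a basis (list of m column indices), return zeta = c_B^t B^{-1} b,
    the reduced costs over the nonbasic columns, and xbar_B = B^{-1} b."""
    m, n = len(A), len(A[0])
    nonbasis = [j for j in range(n) if j not in basis]
    B = [[A[i][j] for j in basis] for i in range(m)]
    xbar_B = solve_square(B, b)                              # B^{-1} b
    zeta = sum(c[basis[i]] * xbar_B[i] for i in range(m))
    c_bar = []
    for j in nonbasis:
        col = solve_square(B, [A[i][j] for i in range(m)])   # B^{-1} A_j
        c_bar.append(c[j] - sum(c[basis[i]] * col[i] for i in range(m)))
    return zeta, c_bar, xbar_B

# made-up data; with basis {0, 1}, B is the identity
A = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
b = [3, 4]
c = [0, 0, 1, 1]
zeta, c_bar, xbar_B = tableau_data(A, b, c, [0, 1])
print(int(zeta), [int(v) for v in c_bar], [int(v) for v in xbar_B])  # 0 [1, 1] [3, 4]
```

Here $\bar x_B \ge 0$, so the tableau is feasible, and both reduced costs are positive, so either nonbasic column could enter the basis.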
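
The ratio test of Section 6, which determines the step length $t$ and the leaving variable, is a one-pass minimum over the rows with $(\Delta x)_i > 0$. A sketch with hypothetical values of $\bar x_B$ and $\Delta x = B^{-1} N e_k$:

```python
from fractions import Fraction

def ratio_test(xbar_B, dx):
    """Step length and leaving row for x_B = xbar_B - t * dx:
    t = min over {i : dx_i > 0} of xbar_B[i] / dx[i].
    Returns (None, None) when no entry of dx is positive, i.e. the
    problem is unbounded in this direction."""
    best_t, best_i = None, None
    for i, (xi, di) in enumerate(zip(xbar_B, dx)):
        if di > 0:
            t = Fraction(xi, di)
            if best_t is None or t < best_t:
                best_t, best_i = t, i
    return best_t, best_i

# hypothetical current basic values and direction dx = B^{-1} N e_k
xbar_B = [3, 4, 2]
dx     = [1, 2, 0]        # the third basic variable is unaffected
t, leave = ratio_test(xbar_B, dx)
print(t, leave)           # 2 1: the bound 4/2 from row 1 is tightest
```

At $t = 2$ the basic variable in row 1 hits zero and leaves the basis, exactly as described in the text.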
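
The auxiliary problem of Section 8 is purely mechanical to construct: flip any row with $b_i < 0$, append an identity block for the artificial variables $y$, and take the $y$-columns as the initial basis. A sketch with made-up data:

```python
def auxiliary_program(A, b):
    """Phase I data for: min y_1 + ... + y_m s.t. Ax + y = b, x, y >= 0.
    Rows with b_i < 0 are multiplied by -1 so that b >= 0; the initial
    basis is the identity block formed by the y-columns."""
    m, n = len(A), len(A[0])
    A2 = [row[:] for row in A]
    b2 = list(b)
    for i in range(m):
        if b2[i] < 0:
            A2[i] = [-v for v in A2[i]]
            b2[i] = -b2[i]
    aux = [A2[i] + [1 if i == k else 0 for k in range(m)] for i in range(m)]
    c_aux = [0] * n + [-1] * m        # stated as a max problem: max -sum(y)
    basis = list(range(n, n + m))     # y-columns; feasible since b2 >= 0
    return aux, b2, c_aux, basis

A = [[1, -1, 2],
     [-1, 2, 1]]
b = [3, -4]
aux, b2, c_aux, basis = auxiliary_program(A, b)
print(b2, basis)                      # [3, 4] [3, 4]
```

Running the simplex method on this data and checking whether the optimal value is zero is exactly the initialization test described in the text.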
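
Weak duality (Theorem 9.1) is easy to check numerically for any primal/dual feasible pair; the vectors below are made up, but their feasibility is verified in the code itself before the objective values are compared.

```python
from fractions import Fraction

# Primal: max c^t x s.t. Ax <= b, x >= 0.   Dual: min b^t y s.t. A^t y >= c, y >= 0.
A = [[2, 1],
     [1, 3]]
b = [8, 9]
c = [3, 2]

x = [Fraction(3), Fraction(2)]    # candidate primal vector
y = [Fraction(1), Fraction(1)]    # candidate dual vector

# verify feasibility of both before comparing objective values
assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(2))
assert all(sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

primal_val = sum(ci * xi for ci, xi in zip(c, x))    # c^t x
dual_val   = sum(bi * yi for bi, yi in zip(b, y))    # b^t y
print(primal_val, dual_val)       # 13 17, and indeed 13 <= 17
```

Any feasible dual vector thus certifies an upper bound on the primal objective, which is the intuition the notes use to motivate the definition of the dual.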
