# Nonlinear Programming (IOE 611)

University of Michigan


This 88-page set of class notes was uploaded by Loy King on Thursday, October 29, 2015. The notes belong to IOE 611 at the University of Michigan, taught by Staff in Fall. Since upload, they have received 18 views. For similar materials see /class/231593/ioe-611-university-of-michigan in Industrial Engineering at the University of Michigan.


Date Created: 10/29/15

## Lecture 14.1: Algorithms for NLP with Nonlinear Constraints (Katta G. Murty, IOE 611 lecture slides)

### Penalty Function Methods

Consider

$\min \theta(x)$ subject to $h_i(x) = 0,\ i = 1,\dots,m$; $g_j(x) \ge 0,\ j = 1,\dots,t$.

Penalty function for the equality constraints: $p(h(x)) = \sum_{i=1}^m |h_i(x)|^s$, typically with $s = 1$ or $2$. If $s = 2$, this function is continuously differentiable.

Penalty function for the inequality constraints: $q(g(x)) = \sum_{j=1}^t (\max\{0, -g_j(x)\})^r$. It is continuously differentiable if $r = 2$, and twice continuously differentiable if $r = 3$.

The overall penalty function for our NLP is $\alpha(x) = p(h(x)) + q(g(x))$.

The **exterior penalty method** solves the NLP by finding the unconstrained minimum of the auxiliary function $f_\mu(x) = \theta(x) + \mu\,\alpha(x)$, where $\mu > 0$ is the penalty parameter.

Examples: $\min x$ subject to $x - 2 \ge 0$; and $\min x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$.

**The penalty problem:** $\min f_\mu(x)$ subject to $x \in X$. Here $X$ may be $R^n$, or the set of feasible solutions of constraints not included in the penalty function $\alpha(x)$; usually these are simple constraints such as bounds on the variables.

**Theorem.** Suppose $X \ne \emptyset$ and the minimizer $x(\mu)$ of the penalty problem exists for every $\mu > 0$. Then the minimum objective value in the original NLP is $\ge \sup_{\mu > 0} f_\mu(x(\mu))$; $\theta(x(\mu))$ and $f_\mu(x(\mu))$ are nondecreasing, and $\alpha(x(\mu))$ is nonincreasing, as functions of $\mu > 0$.

**Theorem.** Under the hypothesis of the above theorem, and if in addition $\{x(\mu) : \mu > 0\}$ is contained in a compact subset of $X$: (i) the minimum objective value in the original NLP equals $\sup_{\mu>0} f_\mu(x(\mu)) = \lim_{\mu\to\infty} f_\mu(x(\mu))$; (ii) $\mu\,\alpha(x(\mu)) \to 0$ as $\mu \to \infty$; and the limit of any convergent subsequence of $\{x(\mu)\}$ is optimal to the original NLP.

**SUMT.** Take an increasing sequence $\mu_1 < \mu_2 < \cdots$ diverging to $\infty$. Find $x(\mu_{r+1})$, using $x(\mu_r)$ as the initial point, by some unconstrained minimization algorithm. All the $x(\mu_r)$ may be infeasible to the original NLP, but as $\mu \to \infty$, $x(\mu)$ tends to an optimum of the original NLP (assuming one exists); hence the name *exterior* penalty method.

**Lagrange multiplier estimates.** Suppose $X = R^n$. Since $x(\mu)$ is an unconstrained minimum of $f_\mu(x)$, we have $\nabla f_\mu(x(\mu)) = 0$ for all $\mu$. From the coefficients in this equation, when $\mu$ is large, we can estimate the optimal Lagrange multiplier vector for the original NLP.

**Computational difficulties.** When $\mu$ is very large, computational difficulties are caused by ill-conditioning.
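A minimal sketch of the exterior penalty (SUMT) idea on the slides' first example, $\min x$ subject to $x - 2 \ge 0$. The helper `argmin_1d` is a crude ternary search and not part of the slides; in practice each subproblem would be solved by a proper unconstrained minimization routine warm-started from the previous iterate.

```python
def penalty_obj(x, mu):
    # auxiliary function f_mu for: min x  s.t.  x - 2 >= 0
    # violation of the constraint is max(0, 2 - x), squared (s = 2)
    return x + mu * max(0.0, 2.0 - x) ** 2

def argmin_1d(f, lo, hi, iters=200):
    # crude ternary search; adequate here since f_mu is convex in x
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def sumt(mus=(1.0, 10.0, 100.0, 1000.0)):
    # exterior penalty: minimize f_mu for an increasing sequence of mu
    x = 0.0
    for mu in mus:
        x = argmin_1d(lambda t: penalty_obj(t, mu), -10.0, 10.0)
    return x
```

Here the unconstrained minimizers are $x(\mu) = 2 - 1/(2\mu)$, all infeasible, approaching the optimum $x = 2$ from outside the feasible region, which is exactly the "exterior" behavior described above.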
Also, when there are nonlinear equality constraints, movement along a direction $d$ from a feasible point $\bar{x}$ leads to infeasible points; so when $\mu$ is large, even if $\theta(x)$ decreases, $f_\mu(\bar{x} + \lambda d)$ may be larger than $f_\mu(\bar{x})$ except when $\lambda$ is very small. So step lengths tend to be small, resulting in slow convergence and premature termination.

Also, the Hessian of the auxiliary function tends to be highly ill-conditioned when $\mu$ is large. For the 2nd example above, the Hessian of the auxiliary function is

$\begin{pmatrix} 2(1+\mu) & 2\mu \\ 2\mu & 2(1+\mu) \end{pmatrix}.$

It has eigenvalues $2$ and $2(1+2\mu)$, so its condition number tends to $\infty$. That is why sequential implementations are used.

### Exact Penalty Functions

Exact penalty functions are penalty functions for which a minimum of the auxiliary function is also optimal for the original problem for finite positive values of the penalty parameter. The absolute value (L1) penalty is an exact penalty function for

$\min \theta(x)$ subject to $g_i(x) = 0,\ i = 1,\dots,m$; $g_i(x) \ge 0,\ i = m+1,\dots,m+p$.

The L1 auxiliary function is

$\theta(x) + \mu\left[\sum_{i=1}^m |g_i(x)| + \sum_{i=m+1}^{m+p} \max\{0, -g_i(x)\}\right].$

**Theorem.** Suppose $\theta(x)$ is convex, $g_i(x)$ is affine for $i = 1,\dots,m$ and concave for $i = m+1,\dots,m+p$. Let $(\bar{x}, \bar\pi)$ be a KKT pair for the original NLP. Then for $\mu \ge \max_i |\bar\pi_i|$, $\bar{x}$ also minimizes the auxiliary function with the L1 penalty function.

### Augmented Lagrangian (ALAG) Penalty Function, or Multiplier Penalty Function

For simplicity, consider equality constraints only first: $\min \theta(x)$ subject to $h_i(x) = 0$, $i = 1,\dots,m$. Let $\pi = (\pi_1, \dots, \pi_m)$ be the Lagrange multiplier vector. The ALAG leads to the auxiliary function

$F_\mu(x, \pi) = \theta(x) + \sum_i \pi_i h_i(x) + \mu \sum_i h_i(x)^2.$

The ALAG is another exact penalty function: if $(\bar{x}, \bar\pi)$ satisfies the first-order optimality conditions for the original NLP, then $\nabla_x F_\mu(\bar{x}, \bar\pi) = 0$ for all $\mu > 0$.

**Theorem.** If $(\bar{x}, \bar\pi)$ satisfies the second-order sufficient optimality conditions, there exists $\bar\mu > 0$ such that for all $\mu \ge \bar\mu$, $F_\mu(x, \bar\pi)$ has a strict local minimum at $\bar{x}$.

### Method of Multipliers

1. *Initialization.* Select the initial pair $(x^0, \pi^0)$ and penalty parameter values $\mu = (\mu_1, \dots, \mu_m)$ for the different constraints. $w(x^0) = \max\{|h_i(x^0)| : i = 1,\dots,m\}$ is the initial infeasibility measure.

2. *Penalty function minimization.* Let $(x^r, \pi^r, \mu^r)$ be the current vectors. Find a minimum $x^{r+1}$ of $F_{\mu^r}(x, \pi^r) = \theta(x) + \sum_i \pi_i^r h_i(x) + \sum_i \mu_i^r h_i(x)^2$. If $w(x^{r+1}) \le \varepsilon_1$, stop: the current pair satisfies the first-order optimality conditions to within the tolerance. If $\varepsilon_1 < w(x^{r+1}) \le \frac{1}{4} w(x^r)$, define $\pi_i^{r+1} = \pi_i^r + 2\mu_i^r h_i(x^{r+1})$ and $\mu^{r+1} = \mu^r$, and with $(x^{r+1}, \pi^{r+1}, \mu^{r+1})$ repeat this step. If $w(x^{r+1}) > \frac{1}{4} w(x^r)$: for each $i$ for which $|h_i(x^{r+1})| > \frac{1}{4} w(x^r)$, define $\mu_i^{r+1} = 10\,\mu_i^r$, and set $\mu_i^{r+1} = \mu_i^r$ for all other $i$; define $\pi^{r+1}$ by the same multiplier update; with $(x^{r+1}, \pi^{r+1}, \mu^{r+1})$ repeat this step.
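A minimal sketch of the method of multipliers on the slides' example $\min x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$. Because the augmented Lagrangian is quadratic here, its unconstrained minimizer is available in closed form (by symmetry $x_1 = x_2 = t$); a real implementation would call an unconstrained minimization routine instead. Note that $\mu$ stays fixed: the multiplier updates alone drive convergence, which is the advantage over the plain exterior penalty method.

```python
def method_of_multipliers(mu=1.0, iters=40):
    # min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0
    # ALAG: F_mu(x, pi) = x1^2 + x2^2 + pi*h(x) + mu*h(x)^2
    pi = 0.0
    for _ in range(iters):
        # by symmetry the unconstrained minimizer has x1 = x2 = t;
        # stationarity 2t + pi + 2*mu*(2t - 1) = 0 gives:
        t = (2.0 * mu - pi) / (2.0 + 4.0 * mu)
        h = 2.0 * t - 1.0            # constraint residual h(x)
        pi = pi + 2.0 * mu * h       # multiplier update from the text
    return t, pi
```

The iterates converge to the optimum $x_1 = x_2 = 1/2$ with multiplier $\pi = -1$ (geometrically, at rate $1/3$ for $\mu = 1$).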
**Handling inequalities.** Write the constraint $g_j(x) \ge 0$ as $g_j(x) - s_j^2 = 0$ and apply the above method.

### Barrier Function Methods

For inequality constraints only. If equality constraints exist, they can be included in the objective function using the penalty function for equality constraints. So consider $\min \theta(x)$ subject to $g_j(x) \ge 0$, $j = 1,\dots,t$.

A barrier function $B(x)$ for these inequalities is continuous on $\{x : g(x) > 0\}$, and $B(x) \to \infty$ as the point tends to the boundary from the interior.

Frisch's log barrier function: $B(x) = -\sum_{j=1}^t \log g_j(x)$. Other barrier functions used are $\sum_j 1/g_j(x)$ and $-\sum_j \log\min\{1, g_j(x)\}$.

The auxiliary function is $f_\mu(x) = \theta(x) + \mu B(x)$. Consider $\min f_\mu(x)$ subject to $g(x) > 0$, and let $x(\mu)$ be the minimizing point.

**Theorem.** $\inf\{\theta(x) : g(x) \ge 0\} \le \inf\{f_\mu(x) : g(x) > 0\}$ for each $\mu > 0$. Also, for $\mu_1 > \mu_2 > 0$, $\theta(x(\mu_1)) \ge \theta(x(\mu_2))$ and $B(x(\mu_1)) \le B(x(\mu_2))$.

**Theorem.** Suppose the original problem has an optimum $\bar{x}$. Then $\min\{\theta(x) : g(x) \ge 0\} = \lim_{\mu \to 0^+} f_\mu(x(\mu))$; the limit of any convergent subsequence of $\{x(\mu)\}$ is an optimal solution of the original NLP; and $\mu B(x(\mu)) \to 0$ as $\mu \to 0^+$.

**The barrier algorithm.** *Initialization:* start with $x^0$ satisfying $g(x^0) > 0$, $\mu_0 > 0$, and $\beta \in (0, 1)$. *General step:* given $(x^r, \mu_r)$, use an unconstrained minimization algorithm to solve $\min \theta(x) + \mu_r B(x)$ subject to $g(x) > 0$. The constraints $g(x) > 0$ can be ignored, as the objective tends to $\infty$ as $x$ tends to a point satisfying $g_j(x) = 0$ for any $j$. Let $x^{r+1}$ be the optimum solution. If $\mu_r B(x^{r+1}) < \varepsilon$, terminate with $x^{r+1}$. Otherwise let $\mu_{r+1} = \beta \mu_r$ and, with $(x^{r+1}, \mu_{r+1})$, go to the next step.

### Recursive QP, or Merit Function Sequential QP (MSQP), Algorithms

Consider $\min \theta(x)$ subject to $g_i(x) = 0$, $i = 1,\dots,m$; $g_i(x) \ge 0$, $i = m+1,\dots,m+p$. All functions are assumed twice continuously differentiable. Lagrangian: $L(x, \pi) = \theta(x) - \sum_i \pi_i g_i(x)$.

A **merit function** $S(x)$, an absolute value penalty function (the L1 penalty function) balancing the competing goals of decreasing $\theta(x)$ and reducing constraint violation, is used:

$S(x) = \theta(x) + \sum_{i=1}^m \mu_i |g_i(x)| + \sum_{i=m+1}^{m+p} \mu_i |\min\{0, g_i(x)\}|,$

where the $\mu_i$ are positive penalty parameters satisfying certain lower bound restrictions discussed later.
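A minimal sketch of the barrier algorithm described above, on $\min x$ subject to $g(x) = x - 2 > 0$ with Frisch's log barrier. For this one-dimensional problem the interior minimizer of $f_\mu(x) = x - \mu \log(x-2)$ has the closed form $x(\mu) = 2 + \mu$, which the sketch uses in place of a numeric unconstrained solver.

```python
import math

def barrier_method(mu0=1.0, beta=0.1, steps=5):
    # min x  s.t.  g(x) = x - 2 > 0, with Frisch barrier B(x) = -log(x - 2);
    # f_mu'(x) = 1 - mu/(x - 2) = 0 gives the interior minimizer x(mu) = 2 + mu
    mu = mu0
    for _ in range(steps):
        x = 2.0 + mu                      # x(mu): always strictly interior
        mu_B = -mu * math.log(x - 2.0)    # mu * B(x(mu)), the stopping quantity
        mu *= beta                        # mu_{r+1} = beta * mu_r
    return x, mu_B
```

The iterates approach the optimum $x = 2$ from inside the feasible region, and $\mu B(x(\mu))$ tends to $0$, matching the theorem above.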
A QP employing a second-order approximation to the Lagrangian, minimized over a linear approximation to the constraints, is solved in each step. The output of the QP provides a descent direction for the merit function, and a line search is carried out in this direction.

Let $\bar{x}$ be the current point, $d = x - \bar{x}$, and let $\bar\pi$ satisfy $\bar\pi_i \ge 0$ for all $i \in \{m+1, \dots, m+p\}$. The second-order Taylor approximation to the Lagrangian is obtained using a PD symmetric approximation $B$ to its Hessian, updated by the BFGS quasi-Newton update formula; it is $L(\bar{x}, \bar\pi) + \nabla_x L(\bar{x}, \bar\pi)\, d + \frac{1}{2} d^T B d$. Using the constraints, it can be seen that minimizing this subject to the linearized constraints leads to the QP

$\min\ \nabla\theta(\bar{x})\, d + \tfrac{1}{2} d^T B d$
subject to $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d = 0$, $i = 1,\dots,m$; $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d \ge 0$, $i = m+1,\dots,m+p$.

Let $(\bar{d}, \bar\pi)$ be the optimal pair for this QP. If $\bar{d} = 0$, $(\bar{x}, \bar\pi)$ is a KKT pair for the original NLP; terminate. If $\bar{d} \ne 0$, it is a descent direction for the merit function $S(x)$, provided the weights satisfy $\mu_i > |\bar\pi_i|$. These weights are usually chosen from $\mu_i = \max\{|\bar\pi_i|,\ \frac{1}{2}(\mu_i' + |\bar\pi_i|)\}$, where the $\mu_i'$ are the weights used in the previous step.

Do a line search for $\min\{S(\bar{x} + \lambda \bar{d}) : \lambda \ge 0\}$. If $\bar\lambda$ is the optimal step length, the next pair is $(\bar{x} + \bar\lambda \bar{d}, \bar\pi)$. If it satisfies the KKT conditions for the original NLP reasonably closely, terminate with it. Otherwise go to the next step with it.

**Theorem.** $\bar{d}$ is a descent direction at $\bar{x}$ for the merit function $S$.

**A difficulty.** Even if the original NLP is feasible, the QP may be infeasible. For this, replace the QP by

$\min\ \nabla\theta(\bar{x})\, d + \tfrac{1}{2} d^T B d + \rho\left(\sum_i u_i + \sum_i v_i\right)$
subject to $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d + u_i - v_i = 0$, $i = 1,\dots,m$; $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d + u_i \ge 0$, $i = m+1,\dots,m+p$; $u_i, v_i \ge 0$,

where $\rho$ is a positive penalty parameter. This QP model is always feasible, since $d = 0$ (with suitable $u$, $v$) is feasible for it. If $\bar{d} \ne 0$ is optimal for it, it will also be a descent direction for $S$.

**Theorem.** Assume the initial point $x^0$ is sufficiently close to a KKT point $\bar{x}$ for the NLP, and that the pair $(\bar{x}, \bar\pi)$ satisfies: $\{\nabla g_i(\bar{x}) : i \text{ such that } g_i(\bar{x}) = 0\}$ is linearly independent; $\bar\pi_i > 0$ for every $i \in \{m+1, \dots, m+p\}$ with $g_i(\bar{x}) = 0$ (strict complementarity); and $y^T \nabla_x^2 L(\bar{x}, \bar\pi)\, y > 0$ for all $y \ne 0$ in $\{y : \nabla g_i(\bar{x})\, y = 0$ for all $i$ such that $g_i(\bar{x}) = 0\}$. Then the sequence of pairs generated by the algorithm converges to $(\bar{x}, \bar\pi)$ superlinearly.
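A minimal sketch of one SQP step for the special case of a single linearized equality constraint, where the QP subproblem reduces to a linear KKT system; a general implementation would call a QP solver that also handles the inequality constraints and active-set logic, which this sketch omits.

```python
import numpy as np

def eq_sqp_step(grad_f, B, g_val, grad_g):
    # QP subproblem with one linearized equality constraint:
    #   min grad_f.d + (1/2) d^T B d   s.t.  g_val + grad_g.d = 0,
    # solved through its KKT system [[B, -grad_g^T], [grad_g, 0]] [d; pi] = [-grad_f; -g_val]
    grad_f = np.asarray(grad_f, float)
    grad_g = np.asarray(grad_g, float)
    n = grad_f.size
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = B
    K[:n, n] = -grad_g
    K[n, :n] = grad_g
    rhs = np.append(-grad_f, -g_val)
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]   # search direction d, multiplier estimate pi
```

For $\min x_1^2 + x_2^2$ subject to $g(x) = x_1 + x_2 - 1 = 0$ at $\bar{x} = (0,0)$ with the exact Hessian $B = 2I$, one step gives $d = (0.5, 0.5)$ and $\pi = 1$, landing on the optimum in a single iteration because the problem itself is a QP.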
### Successive (Recursive, Sequential) LP Approaches: Penalty SLP (PSLP)

Consider $\min \theta(x)$ subject to $g_i(x) \ge 0$, $i = 1,\dots,m$; $g_i(x) = 0$, $i = m+1,\dots,m+p$; $x \in X = \{x : Ax \ge b\}$.

The L1 exact penalty function for this problem is

$F_\mu(x) = \theta(x) + \mu\left[\sum_{i=1}^m \max\{0, -g_i(x)\} + \sum_{i=m+1}^{m+p} |g_i(x)|\right].$

The penalty problem is $\min F_\mu(x)$ subject to $x \in X$. This has a nonlinear objective function, but linear constraints.

Given any $\bar{x} \in X$, define $(y, z) \ge 0$ associated with it by $z_i = \max\{0, -g_i(\bar{x})\}$ for $i = 1,\dots,m$, and $y_i = \max\{0, g_i(\bar{x})\}$, $z_i = \max\{0, -g_i(\bar{x})\}$ for $i = m+1,\dots,m+p$; so $|g_i(\bar{x})| = y_i + z_i$ for the equality constraints.

PSLP attempts to solve the penalty problem using a first-order approximation and a trust region strategy. The first-order approximation of $F_\mu(x)$ around the current point $\bar{x}$, denoted by $FL_\mu(d)$ where $d = x - \bar{x}$, is

$FL_\mu(d) = \theta(\bar{x}) + \nabla\theta(\bar{x})\, d + \mu\left[\sum_{i=1}^m \max\{0, -g_i(\bar{x}) - \nabla g_i(\bar{x})\, d\} + \sum_{i=m+1}^{m+p} |g_i(\bar{x}) + \nabla g_i(\bar{x})\, d|\right].$

PSLP attempts to find $d$ to minimize $FL_\mu(d)$ subject to $A(\bar{x} + d) \ge b$ and $-\alpha \le d_j \le \alpha$ for $j = 1,\dots,n$, for some selected positive trust region tolerance $\alpha$. This leads to the following LP:

$\min\ \nabla\theta(\bar{x})\, d + \mu\left(\sum_{i=1}^m z_i + \sum_{i=m+1}^{m+p} (y_i + z_i)\right)$
subject to $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d + z_i \ge 0$, $i = 1,\dots,m$; $g_i(\bar{x}) + \nabla g_i(\bar{x})\, d = y_i - z_i$, $i = m+1,\dots,m+p$; $A(\bar{x} + d) \ge b$; $-\alpha \le d_j \le \alpha$, $j = 1,\dots,n$; $y, z \ge 0$.

If $\bar{x} \in X$, then $d = 0$ is a feasible solution to this LP. If $\bar{d}$ is an optimal solution of this LP, define:

- actual change in the exact penalty function $= F_\mu(\bar{x}) - F_\mu(\bar{x} + \bar{d})$;
- predicted change, by the linearized version, $= FL_\mu(0) - FL_\mu(\bar{d})$.

**Theorem.** $\bar{d} = 0$ is an optimal solution for the above LP iff $\bar{x}$ is a KKT solution for the penalty problem. Also, since $d = 0$ is feasible to the LP, the predicted change by the linearized version is $\ge 0$, and it is $0$ iff $d = 0$ is optimal to the LP.

**The model PSLP algorithm.** Start with an $\bar{x} \in X$ as the current point. Select the trust region tolerance $\alpha$, penalty parameter $\mu$, scalars $0 < \rho_0 < \rho_1 < \rho_2 < 1$, and tolerance adjustment factor $\beta \in (0, 1)$. Typically $\rho_0 = 10^{-6}$, $\rho_1 = 0.25$, $\rho_2 = 0.75$, $\beta = 0.5$.

Solve the LP corresponding to the point $\bar{x}$. If $\bar{d} = 0$ is optimal to the LP, $\bar{x}$ satisfies the necessary optimality conditions for the penalty problem; in this case, if $\bar{x}$ is close enough to being feasible to the original NLP, it is a KKT point for it; terminate. If $\bar{x}$ is infeasible to the original NLP, increase the penalty parameter $\mu$ and repeat.

If the LP optimum $\bar{d} \ne 0$, compute the actual and predicted changes (by the theorem, the predicted change is $> 0$), and compute $R = \text{actual change}/\text{predicted change}$. If $R < \rho_0$, the penalty function either worsened or its improvement is insufficient: keep $\bar{x}$ as the current solution, shrink $\alpha$ to $\beta\alpha$, and go to the next step (after several such reductions, if needed, a new point will be chosen). If $R > \rho_0$, accept $\bar{x} + \bar{d}$ as the new current solution; then, if $R < \rho_1$, shrink $\alpha$ to $\beta\alpha$, as the penalty function has not improved sufficiently; if $\rho_1 \le R \le \rho_2$, retain $\alpha$ at its present value; if $R > \rho_2$, amplify the trust region by setting $\alpha$ to $\alpha/\beta$. Go to the next step.
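The PSLP acceptance and trust-region-update rules above can be encoded directly; this helper (not from the slides) returns whether the step is accepted and the new value of $\alpha$.

```python
def update_trust_region(R, alpha, beta=0.5, rho0=1e-6, rho1=0.25, rho2=0.75):
    # R = (actual change) / (predicted change) for the candidate PSLP step;
    # returns (accept_step, new_alpha) following the rules in the text
    if R < rho0:
        return False, beta * alpha    # reject the step, shrink the trust region
    if R < rho1:
        return True, beta * alpha     # accept, but shrink (improvement too small)
    if R <= rho2:
        return True, alpha            # accept, keep alpha as is
    return True, alpha / beta         # accept, amplify the trust region
```

With the default constants, for example, $R = 0.5$ keeps $\alpha$ unchanged, while $R = 0.9$ doubles it.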
### The Generalized Reduced Gradient (GRG) Method

Write the constraints as equations by introducing squared slack variables for the inequality constraints, if any. Consider the problem in the form

$\min \theta(x)$ subject to $h(x) = 0$, $\ell \le x \le u$,

where $h(x) = (h_1(x), \dots, h_m(x))^T$. Start with a feasible solution. If none is available, let $x^0$ be a good point, and modify the problem to

$\min \theta(x) + \alpha x_{n+1}$ subject to $h(x) - x_{n+1} h(x^0) = 0$, $\ell \le x \le u$, $0 \le x_{n+1} \le 1$,

where $x_{n+1}$ is an artificial variable with a large positive cost coefficient $\alpha$ in the objective function. Clearly $(x^0, 1)^T$ is a feasible solution of the modified system, and the modified system is in the same form as the original; so we continue to discuss the original problem.

Let $\bar{x}$ be the current feasible solution. Assume $\nabla h(\bar{x})$, of order $m \times n$, has rank $m$. Partition the variables into $(x_B, x_D)$, where $x_B$ is a vector of $m$ basic variables satisfying: $\nabla_{x_B} h$, of order $m \times m$, is nonsingular; and $x_D$ is the vector of the remaining $n - m$ nonbasic variables. The reduced gradient at $\bar{x}$ in the space of the nonbasic variables $x_D$ is

$g_D = \frac{\partial \theta}{\partial x_D} - \frac{\partial \theta}{\partial x_B}\left(\frac{\partial h}{\partial x_B}\right)^{-1}\frac{\partial h}{\partial x_D},$

all evaluated at $\bar{x}$. In the space of the nonbasic variables, define the search direction $d_D = (d_j)$ by: $d_j = -(g_D)_j$ if either $(g_D)_j < 0$ and $\bar{x}_j < u_j$, or $(g_D)_j > 0$ and $\bar{x}_j > \ell_j$; and $d_j = 0$ if these conditions are not met.

If $d_D = 0$, $\bar{x}$ is a KKT point; terminate. If $d_D \ne 0$, then $g_D d_D < 0$, so $d_D$ is a descent direction at $\bar{x}$ in the space of the nonbasic variables; it is the negative reduced gradient direction, modified to respect the bounds. Take a positive step length, $\lambda$ say, from $\bar{x}_D$ in the space of the nonbasic variables to the point $\bar{x}_D + \lambda d_D$. The corresponding values of the basic variables $x_B$ are to be determined, uniquely, from the square system of nonlinear equations $h(x_B, \bar{x}_D + \lambda d_D) = 0$; Newton's method is used to find $x_B$.
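The reduced gradient formula above can be sketched as follows. The example problem used in the test ($\theta = x_1^2 + x_2^2 + x_3^2$ with the single constraint $h = x_1 + x_2 + x_3 - 1$, basic variable $x_1$) is a hypothetical illustration, not one from the slides.

```python
import numpy as np

def reduced_gradient(grad_theta, jac_h, basic, nonbasic):
    # g_D = dtheta/dx_D - (dtheta/dx_B) (dh/dx_B)^{-1} (dh/dx_D), all at xbar;
    # `basic` and `nonbasic` are index lists partitioning the variables
    gB = grad_theta[basic]
    gD = grad_theta[nonbasic]
    HB = jac_h[:, basic]       # m x m, assumed nonsingular
    HD = jac_h[:, nonbasic]    # m x (n - m)
    return gD - gB @ np.linalg.solve(HB, HD)
```

At $\bar{x} = (0.2, 0.3, 0.5)$ the gradient is $(0.4, 0.6, 1.0)$ and the Jacobian is $(1, 1, 1)$, giving the reduced gradient $(0.2, 0.6)$ in the space of $(x_2, x_3)$.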
Denote the vector $x_B$ by $\xi$, to avoid confusion; beginning with $\xi^0 = \bar{x}_B$, Newton's method is applied to this square system.

## Lecture 9.1: Line Search Algorithms (Katta G. Murty, IOE 611 lecture slides)

Algorithms for one-dimensional optimization problems of the form $\min f(\lambda)$ over $\lambda \ge 0$, or over $a \le \lambda \le b$ for some $a < b$.

**Bracket:** an interval in the feasible region which contains the minimum. A 2-point bracket is $\lambda_1 \le \lambda \le \lambda_2$, where $f'(\lambda_1) < 0$ and $f'(\lambda_2) > 0$. A 3-point bracket is $\lambda_1 < \lambda_2 < \lambda_3$ such that $f(\lambda_2) \le \min\{f(\lambda_1), f(\lambda_3)\}$.

**How to select an initial 3-point bracket.** First consider $\min f(\lambda)$ over $\lambda \ge 0$, where at $0$ the function decreases as $\lambda$ increases from $0$ (otherwise $0$ itself is a local minimum). Select a step length $\Delta$ (possible because of the above facts). Define $\lambda_0 = 0$ and $\lambda_r = \lambda_{r-1} + 2^{r-1}\Delta$ for $r = 1, 2, \dots$, as long as $f(\lambda_r)$ keeps on decreasing, until a value $k$ for $r$ is found for the first time such that $f(\lambda_{k+1}) > f(\lambda_k)$; so we have $f(\lambda_k) < f(\lambda_{k-1})$ also. Among the 4 points $\lambda_{k-1},\ \lambda_k,\ (\lambda_k + \lambda_{k+1})/2,\ \lambda_{k+1}$, drop either $\lambda_{k-1}$ or $\lambda_{k+1}$, whichever is farther from the point in the pair $\{\lambda_k,\ (\lambda_k + \lambda_{k+1})/2\}$ that yields the smallest value for $f(\lambda)$; the remaining 3 points form an equidistant 3-point bracket.

If the problem is $\min f(\lambda)$ over $a \le \lambda \le b$, it is reasonable to assume that $f(\lambda)$ decreases as $\lambda$ increases through $a$ (otherwise $a$ is a local minimum). Apply the above procedure, choosing $\Delta$ sufficiently small, and keep defining $\lambda_r$ as above until either a $k$ defined as above is found, or the upper bound $b$ for $\lambda$ is reached.

### Methods Using 3-Point Brackets

**Golden section search.** Works well when the function is unimodal in the initial bracket, i.e., it has a unique local minimum there; quite a reasonable assumption in many applications. Let $\lambda^*$ be the unknown minimum point. Unimodality implies: if $\lambda_1 < \lambda_2 < \lambda^*$ then $f(\lambda_1) > f(\lambda_2)$; and if $\lambda^* < \lambda_1 < \lambda_2$ then $f(\lambda_2) > f(\lambda_1)$.

*General step:* let $[a, b]$ be the interval of the current bracket ($f(a)$, $f(b)$ would already have been computed earlier). $\tau \approx 0.618$ is the golden ratio. Let $\lambda_1 = a + (1 - 0.618)(b - a)$ and $\lambda_2 = a + 0.618(b - a)$. If $f(\lambda_1)$, $f(\lambda_2)$ are not available, compute them. If $f(\lambda_1) < f(\lambda_2)$, the new bracket is $[a, \lambda_2]$; if $f(\lambda_1) > f(\lambda_2)$, the new bracket is $[\lambda_1, b]$; if $f(\lambda_1) = f(\lambda_2)$, the new bracket interval is $[\lambda_1, \lambda_2]$. Repeat with the new bracket; continue until the bracket length becomes small.
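A minimal sketch of the golden section general step above. For clarity it recomputes both interior points every iteration instead of reusing one of them, so it spends roughly twice as many function evaluations as a tuned implementation.

```python
def golden_section(f, a, b, tol=1e-6):
    # assumes f is unimodal on the initial bracket [a, b]
    tau = 0.618  # golden ratio constant, as used in the slides
    while b - a > tol:
        x1 = a + (1 - tau) * (b - a)
        x2 = a + tau * (b - a)
        f1, f2 = f(x1), f(x2)
        if f1 < f2:
            b = x2            # minimum lies in [a, x2]
        elif f1 > f2:
            a = x1            # minimum lies in [x1, b]
        else:
            a, b = x1, x2     # minimum lies in [x1, x2]
    return (a + b) / 2
```

Each iteration shrinks the bracket by the factor $0.618$, so the number of iterations is logarithmic in the desired tolerance.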
**Quadratic fit line search method.** *General step:* let $\lambda_1 < \lambda_2 < \lambda_3$ be the current 3-point bracket. Fit a quadratic approximation $a\lambda^2 + b\lambda + c$ to $f$ using the function values at $\lambda_1, \lambda_2, \lambda_3$. Because of the bracket property, it is convex; its unique minimum is at

$\bar\lambda = \frac{1}{2}\cdot\frac{(\lambda_2^2 - \lambda_3^2) f(\lambda_1) + (\lambda_3^2 - \lambda_1^2) f(\lambda_2) + (\lambda_1^2 - \lambda_2^2) f(\lambda_3)}{(\lambda_2 - \lambda_3) f(\lambda_1) + (\lambda_3 - \lambda_1) f(\lambda_2) + (\lambda_1 - \lambda_2) f(\lambda_3)}.$

If $\bar\lambda > \lambda_2$ and $f(\bar\lambda) > f(\lambda_2)$, the new bracket is $(\lambda_1, \lambda_2, \bar\lambda)$; if $\bar\lambda > \lambda_2$ and $f(\bar\lambda) < f(\lambda_2)$, the new bracket is $(\lambda_2, \bar\lambda, \lambda_3)$; if $\bar\lambda > \lambda_2$ and $f(\bar\lambda) = f(\lambda_2)$, the new bracket is either of the above. If $\bar\lambda < \lambda_2$, the cases are similar to the above. If $\bar\lambda = \lambda_2$, the quadratic fit failed to produce a new point; in this case, if $\lambda_3 - \lambda_1 \le \epsilon$ (a positive tolerance), stop with $\lambda_2$ as the best point; otherwise, if $\lambda_2 - \lambda_1 < \lambda_3 - \lambda_2$, define $\bar\lambda = (\lambda_2 + \lambda_3)/2$; and if $\lambda_2 - \lambda_1 > \lambda_3 - \lambda_2$, define $\bar\lambda$ by bisecting the longer interval $(\lambda_1, \lambda_2)$ instead. Compute $f(\bar\lambda)$ and go to the step above again.

Terminate the method when any of $\max\{f(\lambda_1), f(\lambda_3)\} - f(\lambda_2)$, $\lambda_3 - \lambda_1$, or $|f'(\lambda_2)|$ becomes smaller than a tolerance.

### Methods Using 2-Point Brackets

**The method of bisection.** Let $[a, b]$ be the current bracket. Compute $f'((a+b)/2)$. If $f'((a+b)/2) = 0$, then $(a+b)/2$ is a stationary point; terminate. If $f'((a+b)/2) > 0$, continue with $[a, (a+b)/2]$; if $f'((a+b)/2) < 0$, continue with $[(a+b)/2, b]$. Not preferred, because the method does not use function values.

**Cubic interpolation method.** Let $(\lambda_1, \lambda_2)$ be the current bracket. Fit a cubic approximation to $f$ using the values of $f(\lambda_1)$, $f(\lambda_2)$, $f'(\lambda_1)$, $f'(\lambda_2)$. Because of the bracket conditions, the minimum of this cubic function is inside the bracket; it is

$\bar\lambda = \lambda_2 - (\lambda_2 - \lambda_1)\,\frac{f'(\lambda_2) + \nu - \eta}{f'(\lambda_2) - f'(\lambda_1) + 2\nu},$

where $\eta = 3\,\frac{f(\lambda_1) - f(\lambda_2)}{\lambda_2 - \lambda_1} + f'(\lambda_1) + f'(\lambda_2)$ and $\nu = \left(\eta^2 - f'(\lambda_1) f'(\lambda_2)\right)^{1/2}$. If $|f'(\bar\lambda)|$ is small, accept $\bar\lambda$ as an approximation to the minimum. Otherwise, if $f'(\bar\lambda) > 0$, continue with $(\lambda_1, \bar\lambda)$; if $f'(\bar\lambda) < 0$, continue with $(\bar\lambda, \lambda_2)$.

**Line search method based on piecewise linear approximation.** Let $[a, b]$ be the initial 2-point bracket for $f$. A piecewise linear (or polyhedral) approximation for $f$ in the bracket $[a, b]$ is the pointwise supremum function of the linearizations of $f$ at $a$ and $b$:

$P(\lambda) = \max\{f(a) + f'(a)(\lambda - a),\ f(b) + f'(b)(\lambda - b)\}.$

From the bracket conditions, the minimum of $P(\lambda)$ occurs in $[a, b]$ at the point where the two linearizations are equal; it is $a + d_P$, where

$d_P = \frac{f(a) - f(b) + f'(b)(b - a)}{f'(b) - f'(a)}.$

The quadratic Taylor series approximation of $f$ at $a$ is $Q(\lambda) = f(a) + f'(a)(\lambda - a) + \frac{1}{2} f''(a)(\lambda - a)^2$. The minimum of $Q(\lambda)$ occurs at $a + d_Q$, where $d_Q = -f'(a)/f''(a)$ if $f''(a) > 0$, and $d_Q = +\infty$ if $f''(a) \le 0$. One simple strategy is to take the new point to be $\bar\lambda = a + \min\{d_P, d_Q\}$ if $d_Q > 0$, or $\bar\lambda = a + d_P$ otherwise.
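The quadratic-fit minimizer formula above, transcribed directly; on an exactly quadratic $f$ it recovers the true minimum in one evaluation, which the test checks with $f(\lambda) = (\lambda - 2)^2 + 1$ at the points $0, 1, 4$.

```python
def quad_fit_min(l1, l2, l3, f1, f2, f3):
    # minimizer of the quadratic interpolating (l1, f1), (l2, f2), (l3, f3);
    # valid when the three points form a 3-point bracket (fit is convex)
    num = (l2**2 - l3**2) * f1 + (l3**2 - l1**2) * f2 + (l1**2 - l2**2) * f3
    den = (l2 - l3) * f1 + (l3 - l1) * f2 + (l1 - l2) * f3
    return 0.5 * num / den
```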
Take the next bracket to be the 2-point bracket among $(a, \bar\lambda)$, $(\bar\lambda, b)$, and continue. C. Lemaréchal and R. Mifflin make several modifications to guarantee convergence to a stationary point even when $f$ is nonconvex. Also see Murty [LCLNP].

**Newton's method for line search.** A second-order method for $\min f(\lambda)$ over the entire real line; it is the application of the Newton-Raphson method to solve the equation $f'(\lambda) = 0$. Starting with an initial point $\lambda_0$, it generates iterates by

$\lambda_{r+1} = \lambda_r - \frac{f'(\lambda_r)}{f''(\lambda_r)},$

assuming all $f''(\lambda_r) > 0$. Not suitable if the second derivative is $\le 0$ at a point encountered. Suitable if a near-optimal solution is known; then the function is locally convex in the neighborhood of the optimum.

**Secant method.** A modified Newton's method. The method is initiated with a 2-point bracket, and replaces $f''(\lambda_r)$ by the finite difference approximation $\frac{f'(\lambda_r) - f'(\lambda_{r-1})}{\lambda_r - \lambda_{r-1}}$ in the Newton formula.

### Inexact Line Search Procedures

Exact line searches are expensive in subroutines for solving higher-dimensional minimization problems. Inexact line searches with a sufficient degree of descent do guarantee convergence of the overall algorithm. Consider $\min f(\lambda) = \theta(\bar{x} + \lambda y)$ over $\lambda \ge 0$; $\bar{x}$ is the current point, and $y$ is a descent direction at $\bar{x}$.

**Conditions for global convergence.**

1. *Sufficient rate of descent condition.* If $\bar\lambda$ is the step length chosen, the new point is $\bar{x} + \bar\lambda y$. The condition is that the average rate of descent from $\bar{x}$ to $\bar{x} + \bar\lambda y$ be $\ge$ a specified fraction of the initial rate of decrease of $\theta(x)$ at $\bar{x}$ in the direction $y$. Select $\alpha \in (0, 1)$ and require $\bar\lambda$ to satisfy $f(\bar\lambda) \le f(0) + \bar\lambda\,\alpha f'(0)$.

2. *Step length not too small.* Condition 1 will always be satisfied, for any $\alpha < 1$, by sufficiently small steps. This requires that step lengths not be too small. There are several equivalent ways to ensure this. One (condition 2.1) is to require that $\bar\lambda$ satisfy $f'(\bar\lambda) \ge \beta f'(0)$ for some selected $\beta \in (\alpha, 1)$: the step must be long enough that the rate of change of $f$ at $\bar\lambda$ is $\ge$ a specified fraction of its magnitude at $0$.

**Theorem.** If $\theta(x)$ is bounded below on $R^n$, $\nabla\theta(\bar{x})\, y < 0$, and $0 < \alpha < \beta < 1$, there exist $0 < \lambda_1 < \lambda_2$ such that for any $\bar\lambda \in [\lambda_1, \lambda_2]$, $\bar{x} + \bar\lambda y$ satisfies both conditions 1 and 2.1.
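A minimal sketch of an inexact line search enforcing condition 1 above by backtracking. Note it only checks the sufficient-descent condition; starting at step length 1 and halving keeps steps from being unnecessarily small, but the curvature condition 2.1 itself is not tested here.

```python
def armijo_backtracking(theta, grad_theta, x, y, alpha=0.2, shrink=0.5):
    # find lam satisfying condition 1:  f(lam) <= f(0) + lam*alpha*f'(0),
    # where f(lam) = theta(x + lam*y) and y is a descent direction at x
    f0 = theta(x)
    slope = sum(g * d for g, d in zip(grad_theta(x), y))  # f'(0), negative
    lam = 1.0
    while theta([xi + lam * di for xi, di in zip(x, y)]) > f0 + lam * alpha * slope:
        lam *= shrink
    return lam
```

For $\theta(x) = x_1^2 + x_2^2$ at $x = (1, 0)$ with $y = (-2, 0)$ (the negative gradient), the full step overshoots and one halving yields $\bar\lambda = 0.5$, which lands exactly on the minimizer.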
**Theorem.** Suppose $\theta(x)$ is continuously differentiable, and there exists $\epsilon \ge 0$ such that $\|\nabla\theta(x) - \nabla\theta(z)\| \le \epsilon\|x - z\|$ for all $x, z \in R^n$. Starting with $x^0$, suppose the sequence $\{x^r\}$ is generated by choosing $y^r$ satisfying $\nabla\theta(x^r)\, y^r < 0$ and the iteration $x^{r+1} = x^r + \lambda_r y^r$, where $\lambda_r$ satisfies conditions 1 and 2.1. Then one of the following holds: (a) $\nabla\theta(x^k) = 0$ for some $k$; or (b) $\lim_{r\to\infty} \theta(x^r) = -\infty$; or (c) $\nabla\theta(x^r)\, y^r / \|y^r\| \to 0$.

From (c), we know that unless the angle between $\nabla\theta(x^r)$ and $y^r$ converges to $90°$ as $r \to \infty$, either $\|\nabla\theta(x^r)\| \to 0$, or $\theta(x^r) \to -\infty$, or both. If we can guarantee that the angle between $\nabla\theta(x^r)$ and $y^r$ is bounded away from $90°$ (this can be guaranteed by proper choice of $y^r$; for example, in quasi-Newton methods $y^r = -H_r \nabla\theta(x^r)^T$ where $H_r$ is PD, and this property will hold if the condition numbers of the $H_r$ are uniformly bounded above), then either $\nabla\theta(x^r) \to 0$, or $\theta(x^r) \to -\infty$, or both.

Other equivalent conditions to ensure that step lengths are not too small are, for example, condition 2.2: $f(\bar\lambda) \ge f(0) + \bar\lambda\,\gamma f'(0)$ for some selected $\gamma \in (\alpha, 1)$.

## NLP: Introductory Lecture (Katta G. Murty, IOE 611 lecture slides)

NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything other than linear: it does not include IP or discrete optimization, which need special enumerative techniques. NLP is also called continuous optimization or smooth optimization. Models are of the following form:

Minimize $\theta(x)$ subject to $h_t(x) = 0$, $t = 1,\dots,m$; $g_p(x) \ge 0$, $p = 1,\dots,t$.

All the functions $\theta(x)$, $h_t(x)$, $g_p(x)$ are assumed to be smooth. Going beyond second derivatives is impractical, particularly when there are many variables; so, for us, *smooth* means: continuously differentiable if using gradients only; twice continuously differentiable if using Hessians.

Inequality constraints include lower and upper bounds on decision variables: $\ell_j \le x_j \le u_j$.

$\nabla f(\bar{x})$ denotes $\left(\frac{\partial f(\bar{x})}{\partial x_j}\right)$, written as a row vector; it is called the gradient of $f(x)$ at $\bar{x}$. $H(f(\bar{x}))$ denotes the $n \times n$ Hessian matrix $\left(\frac{\partial^2 f(\bar{x})}{\partial x_i \partial x_j}\right)$ of $f(x)$ at $\bar{x}$. If $g(x) = (g_1(x), \dots, g_m(x))^T$ is a vector of functions, then $\nabla g(\bar{x}) = \left(\frac{\partial g_i(\bar{x})}{\partial x_j} : i = 1,\dots,m;\ j = 1,\dots,n\right)$, the $m \times n$ Jacobian matrix of $g(x)$ at $\bar{x}$. Each row vector in $\nabla g(\bar{x})$ is the gradient vector of one function in $g(x)$.

### Quadratic Functions

The simplest nonlinear functions. A quadratic form in $x = (x_1, \dots, x_n)^T$ is of the form

$Q(x) = \sum_{i=1}^n q_{ii}\, x_i^2 + \sum_{i < j} q_{ij}\, x_i x_j.$
Define the square symmetric matrix $D = (d_{ij})$ of order $n$, where $d_{ii} = q_{ii}$ for $i = 1,\dots,n$ and $d_{ij} = d_{ji} = q_{ij}/2$ for $j > i$. Then $Q(x) = x^T D x$.

A quadratic function is of the form $f(x) = x^T D x + cx + c_0$.

A square matrix $M$ of order $n$, whether symmetric or not, is: **positive semidefinite (PSD)** if $x^T M x \ge 0$ for all $x \in R^n$; **positive definite (PD)** if $x^T M x > 0$ for all $x \ne 0$.

### Convex and Concave Functions

A function $f$ defined over $R^n$, or some convex subset of $R^n$, is a **convex function** iff for all $x^1, x^2$ in that set, and all $0 \le \alpha \le 1$,

$f(\alpha x^1 + (1 - \alpha) x^2) \le \alpha f(x^1) + (1 - \alpha) f(x^2).$

The inequality is called Jensen's inequality, after the Danish mathematician who defined it in 1905. Geometric interpretation: the function lies beneath every chord. The function $f$ is **concave** if the above inequality holds the other way, i.e., iff $-f$ is convex.

**Some properties of convex functions:**

1. Nonnegative combinations of convex functions are convex.
2. If $f$ is convex, defined on a convex set $\Gamma$, then for all $x^1, \dots, x^r \in \Gamma$ and $\alpha_1, \dots, \alpha_r \ge 0$ satisfying $\sum_i \alpha_i = 1$: $f\left(\sum_i \alpha_i x^i\right) \le \sum_i \alpha_i f(x^i)$.
3. $f$ is convex iff its epigraph is a convex set.
4. If $f$ is convex, then for all $\alpha$ the set $\{x : f(x) \le \alpha\}$ is a convex set. The converse is not true.
5. The pointwise supremum function of convex functions is convex.
6. A differentiable function defined on the real line is convex iff its first derivative is a monotonic increasing function, i.e., iff its second derivative is a nonnegative function.
7. A quadratic function defined over $R^n$ is convex over $R^n$ iff its Hessian is PSD. A quadratic function whose Hessian is not PSD may still be convex over a subspace of $R^n$.
8. A twice continuously differentiable $f$ defined over $R^n$ is convex iff its Hessian is PSD for all $x$.
9. **Gradient support inequality.** A differentiable function $f$ defined over $R^n$ is convex iff $f(x) \ge f(\bar{x}) + \nabla f(\bar{x})(x - \bar{x})$ for all $\bar{x}$ and all $x$.

**Lower bound property of linearization.** The function $f(\bar{x}) + \nabla f(\bar{x})(x - \bar{x})$ is called the linearization of $f$ at $\bar{x}$. For convex functions, the linearization at any point is a lower bound for the function at every point. So approximating a convex function by a linearization leads to underestimating it at every point.
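The symmetric-matrix representation $Q(x) = x^T D x$ constructed above can be sketched as follows; the example coefficients in the test ($2x_1^2 + 3x_1x_2 + x_2^2$) are illustrative, not from the slides.

```python
def quad_form_matrix(Q):
    # Q[i][j] for i <= j holds q_ij: the coefficient of x_i^2 when i == j,
    # and of x_i*x_j when i < j; returns the symmetric D with x^T D x = Q(x)
    n = len(Q)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = float(Q[i][i])
        for j in range(i + 1, n):
            D[i][j] = D[j][i] = Q[i][j] / 2.0
    return D

def eval_quad_form(D, x):
    # computes x^T D x
    n = len(x)
    return sum(x[i] * D[i][j] * x[j] for i in range(n) for j in range(n))
```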
**How to check if a given function is convex.** One-variable functions like $x^2$, $e^{x_1}$, and $-\log x_1$ over $x_1 > 0$ are convex. For many variables, checking convexity is hard, as it involves checking PSD-ness of the Hessian at every point.

**Local convexity.** If the Hessian of $f$ at $\bar{x}$ is PD, then in a small neighborhood of $\bar{x}$, $f$ is convex. In this case we say that $f$ is locally convex at $\bar{x}$.

### Types of NLPs

- **Unconstrained minimization:** no constraints. Real-world problems have constraints, but unconstrained minimization is very important because constrained problems can be transformed into unconstrained ones by penalty function methods.
- **LP:** all functions affine.
- **QP:** objective function quadratic, all constraints affine.
- **Linearly constrained NLP:** objective function nonlinear, all constraints affine.
- **Equality constrained NLP:** all constraints are equations, and no bounds on the variables.
- **Convex programming problem:** of the form minimize $\theta(x)$ subject to $h_i(x) = 0$, $i = 1,\dots,m$; $g_p(x) \ge 0$, $p = 1,\dots,t$, where $\theta(x)$ is convex, all $h_i(x)$ are affine, and all $g_p(x)$ are concave. The nicest among NLPs: useful necessary and sufficient optimality conditions for a global minimum are known only for convex programming problems.
- **Nonconvex programming problems:** violate some of the conditions for convex programming.

### Types of Solutions for an NLP

A feasible solution $\bar{x}$ is a:
- **local minimum** if, for some $\epsilon > 0$, $\theta(x) \ge \theta(\bar{x})$ for all feasible $x$ satisfying $\|x - \bar{x}\| < \epsilon$;
- **strong local minimum** if $\theta(x) > \theta(\bar{x})$ for all feasible $x \ne \bar{x}$ satisfying $\|x - \bar{x}\| < \epsilon$;
- **weak local minimum** if it is a local minimum but not strong;
- **global minimum** if $\theta(x) \ge \theta(\bar{x})$ for all feasible $x$;
- **stationary point** if it satisfies a necessary condition for a local minimum.

Local (strong, weak) maxima and global maxima are defined similarly.

### Differences in Constructing LP Models and NLP Models

*Variety of functional forms.* In LP, all functions are affine. In NLP there is an unlimited variety of functional forms.

*Data.* An LP involving $m$ constraints in $n$ variables has $(m+1)(n+1) - 1$ coefficients as data elements. Large scale LP refers to models with $m$ in the 1000s and $n$ in the 10,000s; such models are solved to global optimality in reasonable time. To construct an NLP model, one needs to determine functional forms for the objective and constraint functions; this usually involves curve fitting and parameter estimation, usually by the least squares method using special unconstrained minimization algorithms. So even constructing an NLP model needs NLP algorithms; and even a 200-variable NLP model is considered large scale.

*Expectation on the solution.* In LP we do not even talk about local minima, because we get a global minimum. In NLP, if the model is nonconvex, no efficient algorithm can guarantee finding a global minimum; so one compromises on the type of solution expected.

**Convex programs are the nicest NLPs.**

**Theorem.** For a convex program, every local minimum is a global minimum. So for a convex program, any method finding a local minimum will find a global minimum. Also, every stationary point is a global minimum in convex programs.

### Properties of PSD and PD Matrices; Algorithms to Check Them

- *Preserved under symmetrization.* $M$ is PD (PSD) iff $D = M + M^T$ is PD (PSD).
- *Signs of diagonal elements.* For $M = (m_{ij})$: if PD, all $m_{ii} > 0$; if PSD, all $m_{ii} \ge 0$.
- *Skew-symmetry on a 0 diagonal element.* If $M = (m_{ij})$ is PSD and $m_{ii} = 0$, then $m_{ij} + m_{ji} = 0$ for all $j$. So in a symmetric PSD matrix, if a diagonal element is 0, its row and column must be 0.
- *Preservation in the principal submatrix after a Gaussian pivot step.* Let $D = (d_{ij})$ be symmetric of order $n \times n$, and suppose $d_{11} \ne 0$. Perform a Gaussian pivot step with $d_{11}$ as the pivot element. After the pivot step, eliminate row 1 and column 1, resulting in a matrix $D_1$ of order $(n-1) \times (n-1)$. Then $D$ is PD iff $d_{11} > 0$ and $D_1$ is PD; and $D$ is PSD iff $d_{11} > 0$ and $D_1$ is PSD.

**Superdiagonalization algorithm for checking whether $M$ is PD.** *Symmetrize:* let $D = M + M^T$. *Perform Gaussian pivot steps:* let $D_0 = D$. For $i = 1,\dots,n-1$: let $D_{i-1}$ be the matrix from the previous operation. If any diagonal entry in $D_{i-1}$ is $\le 0$, terminate; conclude $M$ is not PD. Otherwise, carry out a Gaussian pivot step on $D_{i-1}$ with its $i$th diagonal element as the pivot element, resulting in the matrix $D_i$. If all Gaussian pivot steps are carried out and all diagonal elements of the final matrix $D_{n-1}$ are $> 0$, $M$ is PD; terminate.

**Superdiagonalization algorithm for checking whether $M$ is PSD.** *Symmetrize:* let $D = M + M^T$. *Perform Gaussian pivot steps:* let $E_0 = D$. For $i = 1,\dots,n-1$: let $E_{i-1}$ be the current matrix after the previous operation.

- If any diagonal entry in $E_{i-1}$ is $< 0$, terminate; conclude $M$ is not PSD.
- If the top diagonal entry in $E_{i-1}$ is 0: if the top row or first column of $E_{i-1}$ is nonzero, terminate, and conclude $M$ is not PSD; otherwise strike off the zero top row and first column of $E_{i-1}$, and let the remaining matrix be the new current matrix $E_i$.
- If the top diagonal entry in $E_{i-1}$ is $> 0$: perform a Gaussian pivot step on $E_{i-1}$ with the top diagonal element as the pivot element; after this pivot step, erase the top row and first column of the resulting matrix, and let the remaining matrix be the new current matrix $E_i$.

If there is no termination in the above steps, $M$ is PSD; terminate.
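A minimal sketch of the superdiagonalization test for positive definiteness described above; the test matrices are illustrative (eigenvalues of the second are $3$ and $-1$).

```python
def is_pd(M):
    # superdiagonalization: symmetrize, then Gaussian pivot steps, requiring
    # every pivot (diagonal) element encountered to be strictly positive
    n = len(M)
    D = [[M[i][j] + M[j][i] for j in range(n)] for i in range(n)]  # D = M + M^T
    for k in range(n):
        if D[k][k] <= 0:
            return False
        for i in range(k + 1, n):       # Gaussian pivot step on row/column k
            r = D[i][k] / D[k][k]
            for j in range(n):
                D[i][j] -= r * D[k][j]
    return True
```

The PSD variant would additionally allow a zero pivot whose entire row and column are zero, as in the second algorithm above.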
unconstrained rnin algos So even con structing NLP rnodel7 needs NLP algos So7 even 200 variable NLP rnodel considered large scale Expectation OH solution LP we dont even talk about local rnin7 because we get global rnin ln NLP7 if model nonconvex7 no e icient algos can guarantee nding global min So7 one compromises on type of solution ex pected Convex Programs Are Nicest NLPs l Theorem For convex program every local min is global min For convex program7 any method nding local min Will nd global min Also7 every stationary point is global min in convex programs Properties of PSD PD matrices Algos to check Preserved under symmetrization M PD7 PSD7 DgMMUm Signs of diagonal elements M mij If PD7 all mm gt all mm 2 Skewsymmetry on a 0 diagonal element If M mij PSD and mm 07 then for all j7 mij m 0 So in symmetric PSD matrix if diagonal element is 07 its row and col must be 0 Preservation in Principal submatrix after Gaussian PiVOt step Let D dij be symmetric n X 717 and suppose d11 y 0 Perform Gaussian pivot step with 111 as pivot element After pivot step eliminate row 1 and column 17 resulting in matrix D1 of order n l x n l D is PD ifl 111 gt O and D1 is PD D is PSD ifl 111 gt O and D1 is PSD Superdiagonalization Algorithm for checking if M PD Symmetrize Let D M MT Perform Gaussian Pivot Steps Let D0 D Doforz391ton l Let D24 be matrix from previous operation lf any diagonal entries in D24 3 O7 terminate Conclude M not PD Otherwise7 carry out Gaussian pivot step on Dial with its 2th diagonal element as pivot element7 resulting in matrix DZ lf all Gaussian pivot steps carried out and all diagonal elements of nal matrix Dnnl are gt 07 M PD7 terminate Examples 3122 1020 1 2 02 0240 04 432445 0 2 36 0053 3 14 Superdiagonalization Algo for checking M PSD Symmetrize Let D M MT Perform Gaussian Pivot Steps Let E0 D Doforz391ton 1 Let E24 be current matrix after previous operation lf any diagonal entries in E24 are lt 07 terminate Con clude M not PSD lf top diagonal entry in 
EA is O lf top row or 1st col of E 1 are nonzero7 termi nate Conclude M not PSD Otherwise strike off the zero top row and 1st col of EA and let remaining matrix be new current matrix E2 lf top diagonal entry in EA is gt 0 Perform Gaussian pivot step on E24 with the top diagonal element as pivot element 15 After this pivot step7 erase top row and lst ool of resulting matrix7 and let remaining matrix be the new current matrix E2 lf no termination in above steps7 M PSD7 terminate Examples 0 2 3 45 1020 2 3 3 00 0240 3 3 3 00 2445 4 0 0 84 0053 5 0 0 42 51 Separating Hyperplanes Katta G llurty7 lOE 611 Lecture slides The intersection of family of convex sets is always convex The union of two convex sets may not be convex The T of A Farkas Lemma are separating hyperplane theorems separating a polyhedral cone from a point outside it Given nonempty subsets K1 K2 C R 7 a separating hy perplane for them is H 3 076 a satisfying lZanEKl 0x x aVEK2 L It is strict separating hyperplane if inequalities hold strictly as inequalities May not exist for certain pairs A necessary condition for exis tence is Interior of K1 interior of K2 Q However7 even if K1 K2 7 a separating hyperplane may not exist Ability to construct separating hyperplanes ef ciently 45 has great signi cance in LP and in combinatorial op timization Sum Of tWO COHVGX Sets Given two convex subsets7 K1 K 2 of R 7 their sum7 K1 K2 73 y a E K1 y E K2 The sum of two convex sets is always convex THEOREM K C R nonempty7 closed7 convex b E R717 b Z K Then there exists a hyperplane separating b from K THEOREM K C R convex7 nonempty b Z K Then K can be separated from b by a hyperplane COROLLARY SUPPORTING HYPERPLANE THEOREM b a boundary point of a convex set K C R There exists a hyperplane through I containing K on one of its sides Such a hyperplane is called a supporting hyperplane for K at its bound ary point I THEOREM K1K2 convex 7 nonernpty7 and K1 K2 Q Then there exists a hyperplane separating K1 and K2 How to nd the 
separating hyperplane lb i1tom 1d K2A Zb im1tomp 2 b Z PosA 3 b Klt A1Atgt 4 b Z K 3 g O7 239 1to m7 each isa differentiable convex function 5 b Z K 3 9296 3 O7 239 1 to m7 each is differentiable but not all of them convex7 even though K is known to be convex EXAMPLE M is a P niatrix Z Z 0 satis es MZ q 2 0 but not all of the following Separate 2 from set of feasible solutions of following 81 Algorithm for QP thro7 LCP Katta G llurty7 lOE 611 Lecture slides Linear Complementarity Problem LCP lnput Data Mum q E R Desired output A w E R 2 E R satisfying w Mz q wz20 wjzj O Vj Complementarity 2n nonnegative variables forming n Complementary pairs wj Z3 is the jth pair The complementarity cond requires at least one variable in each pair be 0 No ob j func to optimize 2 1 5 Example M q 1 2 6 Other Concepts Complementary pairs of column vectors Complementary vector of variables Complementary set of col umn vectors Complementary Matrices Complementary cones 71 Complementary basic vectors Complementary bases degenerate7 nondegenerate Complementary Cones Complement of a variable Complement of a Column Vector Complementary Feasible Basic Vector Total Enumeration Method for LCP 02 Geometric Method for LCP When n 2 Complementary Cones are the orthants when M 1 2 1 21 11 M 13 1 2 20 The LCP Corresponding to QP Consider min 03 xTDa s to A3 b a Z 0 Where Amm amp D is symmetric WLOC The KKT conds for this QP form the LCP u D AT 96 CT 1 A O y b T u 96 u 96 2Q gtQ 0 39U 11 39U y Convex QP corresponds to an LCP associated with a PSD matrix Conversely an LCP with a PSD matrix is a convex QP Example Minimum Distance Problem Find nearest point to P0 2 1T in lt 1 3T 5 4T 5 2T 4 OT gt The Complementary Pivot Algorithm lf q 2 07 w q 2 O is a complementary solution to LCP So7 assume q 2 O lntroduce arti cial variable 2390 with column 6 1 1T wzzoq I M eq wzz020 w q e Z 0 2390 A is feasible when is large7 and forms an unbounded edge of the set of feasible solutions Find smallest 7 0 say7 for 
Find the smallest $\lambda$, say $\lambda_0$, for which this solution is feasible: $\lambda_0 = -\min\{q_i : i = 1,\dots,n\}$. $(w, z, z_0) = (q + \lambda_0 e,\ 0,\ \lambda_0)$ is an extreme point solution associated with the basic vector $B_0 = (w_1, \dots, w_{t-1}, z_0, w_{t+1}, \dots, w_n)$, where $t$ is such that $q_t = \min\{q_i : i = 1,\dots,n\}$.

The basic vector $B_0$ is called an **almost complementary feasible basic vector (ACFBV)**. An ACFBV is feasible to the above system and satisfies:

1. it has one basic variable from all but one complementary pair; that pair is called the missing complementary pair in this ACFBV;
2. $z_0$ is a basic variable in it.

If $w_t$ from the missing pair is entered into $B_0$, it can be verified that we get the extreme half-line $\{(w, z, z_0) = (q + \lambda e,\ 0,\ \lambda) : \lambda \ge \lambda_0\}$, called the **initial AC ray**.

**The algorithm.** Bring $z_t$ into $B_0$. Determine the dropping basic variable by the usual lexico-min ratio test, to maintain feasibility. Continue according to the following rule.

**Complementary pivot rule.** If at some stage $z_0$ becomes the dropping variable, we are left with a complementary feasible basic vector; the corresponding BFS is a complementary feasible solution of the original LCP; terminate. If $z_0$ does not become the dropping variable in the current ACFBV, bring in the complement of the dropping variable of the previous step. If the updated column of the entering variable has no positive entry, we get another extreme half-line, called a **secondary AC ray**; in this case the algorithm has failed to solve this LCP; terminate. If the updated column of the entering variable has positive entries, determine the dropping variable by the lexico-min ratio test and continue.

Two ways to terminate: (1) with a secondary AC ray, in which case the algorithm has failed to solve the problem; (2) with a complementary BFS.

## Lecture 11.1: Unconstrained Minimization in $R^n$ (Katta G. Murty, IOE 611 lecture slides)

Consider $\min \theta(x)$ over $x \in R^n$. We consider **descent methods** first. Each iteration in these methods consists of 2 steps:

1. Find a search direction: a descent direction $y$ at the current point $\bar{x}$.
2. Use a line search method to find the step length $\bar\lambda$; $\bar{x} + \bar\lambda y$ is the new point. Terminate with that as the best point if practical termination conditions are met; otherwise go to the next iteration with the new point as the current point.
as the current pt.

Steepest Descent Method (Cauchy, 1847): A gradient method. The search direction is the steepest descent direction with I as the metric matrix; it is −(∇θ(x̄))T at x̄. The method is globally convergent even with inexact line searches. Far from the optimum the method works well, but in the nbhd of a stationary pt the method is very slow, taking small, nearly orthogonal steps (zigzagging). If the Hessian at the opt is PD and its condition no. (its largest eigenvalue divided by its smallest) is α, the rate of convergence of the method becomes increasingly slower as α → ∞, depending on the initial sol x^0. Convergence is linear, with rate bounded above by ((α − 1)/(α + 1))².

Newton's Method for Unconstrained Minimization: When x̄ is the current point, the method approximates θ(x) around x̄ by the quadratic function

    Q(x) = θ(x̄) + ∇θ(x̄)(x − x̄) + (1/2)(x − x̄)T ∇²θ(x̄)(x − x̄).

The 1st order nec. conds for a minimum of the quad. appro. are that y = x − x̄ satisfy ∇²θ(x̄) y = −(∇θ(x̄))T. This gives the iteration x^{r+1} = x^r + y^r, where ∇²θ(x^r) y^r = −(∇θ(x^r))T. If the Hessian is PD, y^r is unique and the method is well defined. The direction y^r is called the Newton direction for θ(x) at x^r. The traditional Newton step length is always 1. (Variable metric property.)

Theorem: Suppose H(x) = ∇²θ(x) is PD and each entry hij(x) is Lipschitz cont. with constant γ. Let x* satisfy ∇θ(x*) = 0. If x^0 is close to x*, then {x^r} converges to x* at a 2nd order rate.

Levenberg-Marquardt (Modified Newton) Method: Problems with Newton's method:

1. H(x^r) may not be PD; then the Newton direction may not be a descent direction. H(x^r) may be singular; then the Newton direction is not defined.
2. A step length of 1 may not give descent in θ(x). Fixed by choosing the step length by a line search routine.
3. The method is not globally convergent.

To avoid 1 and 3, use as search direction −(εI + H(x^r))^{−1} ∇θ(x^r)T, where ε > 0 is chosen so that εI + H(x^r) is PD. If ε is too small, εI + H may be near singular. If ε is too large, εI + H becomes diagonally dominant & the method behaves like steepest descent, with a linear convergence rate. To choose ε: start with some ε > 0 & ascertain PD of εI + H by attempting to construct its Cholesky factorization. If unsuccessful, multiply ε by 4 and repeat until such a factorization is available.
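The ε-adjustment loop just described (attempt a Cholesky factorization; on failure multiply ε by 4 and retry) can be sketched in a few lines. A minimal numpy sketch, not from the slides; the function name and the starting value of ε are my own choices:

```python
import numpy as np

def lm_newton_direction(grad, hess, eps=1e-4):
    """Levenberg-Marquardt modified Newton direction: grow eps until
    eps*I + H is PD (verified via Cholesky), then solve
    (eps*I + H) d = -grad using the Cholesky factors."""
    g = np.asarray(grad, dtype=float)
    H = np.asarray(hess, dtype=float)
    n = len(g)
    while True:
        try:
            L = np.linalg.cholesky(eps * np.eye(n) + H)
            break                      # factorization exists: eps*I + H is PD
        except np.linalg.LinAlgError:
            eps *= 4.0                 # as in the slides: multiply eps by 4
    # Solve L L^T d = -g by two triangular solves
    y = np.linalg.solve(L, -g)
    d = np.linalg.solve(L.T, y)
    return d, eps

# Indefinite Hessian: the plain Newton direction need not be descent,
# but the modified direction always is.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
d, eps = lm_newton_direction(g, H)
print(g @ d < 0)   # True: descent direction
```

The Cholesky attempt doubles as the PD test, so no eigenvalue computation is needed.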
Model Trust Region Methods: Another group of methods ensuring global convergence while ensuring fast local convergence. The Newton & L-M modified Newton methods used a quad model of the function to determine the search direction, and then a line search technique to determine the step length in that direction. The line search technique does not use the Hessian or the full-dimensional quad model of the function. A 2nd difficulty with these methods is that the region of trust within which the quad approx. at the current pt is sufficiently reliable may not include the next pt x^{r+1}.

T R methods circumvent these problems by 1st choosing a trial step length Δr within which the quad approx. is considered good, and then using the quad model to select the best step of at most length Δr, by solving

    min Q(s) = θ(x^r) + ∇θ(x^r)s + (1/2) sT H s,   s. to ||s|| ≤ Δr

where H = ∇²θ(x^r) is the Hessian and s = x − x^r. Δr is an estimate of how far we can trust the quad model, so it is called the trust radius.

Case 1: H PD and ||H^{−1}∇θ(x^r)T|| ≤ Δr. Then s^r = −H^{−1}∇θ(x^r)T is the unique sol.

Case 2: Otherwise, the sol s^r satisfies ||s^r|| = Δr and (H + μr I)s^r = −∇θ(x^r)T, where μr ≥ 0 is s. th. H + μr I is at least PSD. In this case, if H is PD, the sol is given by s^r = −(H + μr I)^{−1}∇θ(x^r)T, where μr > 0 is s. th. ||s^r|| = Δr. If H is indefinite, let λ1 be its smallest eigenvalue and let v1 ∈ Rⁿ denote a corresponding eigenvector. Then either H + μr I is PD and s^r = −(H + μr I)^{−1}∇θ(x^r)T for the unique μr > max(0, −λ1) for which ||s^r|| = Δr; or μr = −λ1 and s^r = −(H + μr I)^+ ∇θ(x^r)T + ω v1, where ω ∈ R¹ is chosen so that ||s^r|| = Δr and (H + μr I)^+ is the Moore-Penrose pseudoinverse.

In practice, μr is calculated approximately by an iterative process, with each iteration requiring the Cholesky factorization of a matrix of the form H + μI, as described above. An approx. sol is given by the following dogleg step (CP = Cauchy pt, NP = Newton pt): x^{r+1} is the intersection of the line segment joining CP and NP with the sphere {x : ||x − x^r|| = Δr} if the intersection exists, or take x^{r+1} as NP itself.

Procedure for Selecting Δr: After x^{r+1} is obtained with the trial value of Δr, compute the ratio of actual to predicted reduction, Rr = (θ(x^r) − θ(x^{r+1}))/(Q(0) − Q(s^r)). If 0 < Rr < 0.25, reduce the trust radius (e.g. Δ_{r+1} = Δr/4). If Rr > 0.75 and ||s^r|| = Δr, make Δ_{r+1} = 2Δr. If Rr ≤ 0, i.e., θ(x) did not improve in this iteration, reject x^{r+1}, keep x^r as the new pt, and repeat this step with Δ_{r+1}.
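The dogleg step described above can be sketched as follows. A minimal numpy sketch for the case H PD, not from the slides; names are mine:

```python
import numpy as np

def dogleg_step(g, H, delta):
    """Dogleg approximation to min g.s + 0.5 s'Hs s.t. ||s|| <= delta,
    for H PD. Returns the Newton point if it lies inside the region,
    else a point with ||s|| = delta on the steepest-descent direction
    or on the CP-NP segment."""
    g = np.asarray(g, dtype=float)
    s_np = -np.linalg.solve(H, g)               # Newton point (NP)
    if np.linalg.norm(s_np) <= delta:
        return s_np
    s_cp = -(g @ g) / (g @ H @ g) * g           # Cauchy point (CP)
    if np.linalg.norm(s_cp) >= delta:
        return -delta * g / np.linalg.norm(g)   # truncated steepest descent
    # Intersect CP + t (NP - CP), t in [0, 1], with the sphere ||s|| = delta
    d = s_np - s_cp
    a, b, c = d @ d, 2.0 * (s_cp @ d), s_cp @ s_cp - delta**2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return s_cp + t * d

# Newton point inside the region is returned unchanged:
print(dogleg_step(np.array([4.0, 0.0]), np.diag([1.0, 2.0]), 5.0))  # [-4.  0.]
# Otherwise the step hits the trust-region boundary:
s = dogleg_step(np.array([1.0, 1.0]), np.diag([1.0, 10.0]), 0.5)
print(np.linalg.norm(s))   # ~0.5
```

The quadratic for t has one root in [0, 1] because CP is inside and NP is outside the sphere.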
Otherwise keep Δ_{r+1} = Δr and continue with x^{r+1} as the new pt.

Quasi-Newton Methods: Methods based on building up the Hessian through the computed values of ∇θ(x) & θ(x). Also called Secant Methods. Let Br = the approximation to the Hessian at the rth step. Methods usually begin with B0 = I, and many of these methods maintain Br symmetric and PD. If x^r is the current pt & Br the current approx., then the search direction at x^r is s^r = −Br^{−1}∇θ(x^r)T, and a line search is performed to get the next point. (Variable metric property.)

Quasi-Newton Condition: From the Taylor series expansion we have ∇θ(x^{r+1})T − (∇θ(x^r))T ≈ ∇²θ(x^r)(x^{r+1} − x^r). The condition requires that B_{r+1} satisfy (∇θ(x^{r+1}) − ∇θ(x^r))T = B_{r+1}(x^{r+1} − x^r).

Updating formulae generally also have hereditary symmetry and hereditary PD properties. The most successful of these methods is the BFGS method, which uses the updating formula

    B_{r+1} = Br + (y^r (y^r)T)/((y^r)T s^r) − (Br s^r (s^r)T Br)/((s^r)T Br s^r)

where s^r = x^{r+1} − x^r and y^r = (∇θ(x^{r+1}) − ∇θ(x^r))T. In implementing this method, instead of updating Br, the Cholesky factor of Br is directly updated. The method is also called the PD Secant method. The method is usually reset after n iterations.

Conjugate Direction (Gradient) Methods: Introduced by Hestenes & Stiefel (1952), originally for solving Ax = b, a square nonsingular system, through min (Ax − b)T(Ax − b). These methods use only 1st order derivatives, and do not need storing or updating a square matrix.

First consider min f(x) = cx + (1/2) xT A x, where A is PD symmetric. These methods were developed originally to solve this problem using at most n line searches.

Conjugacy wrt A (PD symmetric): A set of nonzero vectors {p¹, ..., pⁿ} is said to be conjugate wrt A iff (p^i)T A p^j = 0 ∀ i ≠ j. Let P be the matrix s. th. its set of col vectors is conjugate wrt A. Then the linear transformation x = Pz diagonalizes f into F(z) = cPz + (1/2) zT (PT A P) z; PT A P is diagonal because of the conjugacy condition. Hence F is separable in the z variables, i.e., F(z) = Σ_{j=1}^n Fj(zj), so minimizing F can be carried out through n line searches by the alternating variable method. So, in the x space, f can be minimized through n line searches, once in each of the directions p¹, ..., pⁿ, in any order.
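The BFGS updating formula above can be sketched directly, and the quasi-Newton (secant) condition B_{r+1} s^r = y^r checked numerically. A minimal numpy sketch, not from the slides; names are mine:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the Hessian approximation B, given the step
    s = x^{r+1} - x^r and gradient difference
    y = (grad theta(x^{r+1}) - grad theta(x^r))^T."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

# On a quadratic with Hessian A we have y = A s exactly, and the
# updated matrix satisfies the secant condition by construction.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
s = np.array([1.0, 1.0])
y = A @ s
B1 = bfgs_update(np.eye(2), s, y)
print(np.allclose(B1 @ s, y))   # True: quasi-Newton condition holds
```

Note the update preserves symmetry, and it preserves PD when (y^r)T s^r > 0, which exact line searches on a convex function guarantee.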
The C D methods generate the conjugate directions one after the other, so that each is a descent direction at the current pt at the time that direction is generated.

General C G Method:

Step 1: Initiate with any pt x^0. The search direction in this step is the steepest descent direction −(∇f(x^0))T. Do a line search.

General Step: Let x^r be the current pt. The search direction is s^r = −(∇f(x^r))T + βr s^{r−1}. Do a line search to get the next pt x^{r+1}. If r + 1 = n, this pt is optimal; terminate. Otherwise go to the next step.

Different C G methods use different formulae for βr:

    βr = ||∇f(x^r)||² / ||∇f(x^{r−1})||²                  (Fletcher & Reeves method)
    βr = ∇f(x^r)(∇f(x^r) − ∇f(x^{r−1}))T / ||∇f(x^{r−1})||²   (Polak, Ribiere, Polyak method)
    βr = ||∇f(x^r)||² / (−∇f(x^{r−1}) s^{r−1})            (Conjugate Descent method)

Each direction is a descent direction at the current pt if all line searches are carried out exactly. For a quad function f, all the above formulae give the same value to βr if all line searches are carried out exactly.

To min a general nonlinear function θ(x), apply the same method. It won't guarantee a min in exactly n steps. The method is usually reset after every n steps, or whenever the generated direction is not descent at the current pt. When n is large, the 2nd method seems better.

Practical Termination Conds: Terminate in Step r when some or all of these quantities are small: |θ(x^r) − θ(x^{r−1})|, ||∇θ(x^r)||, ||x^r − x^{r−1}||.

The Simplex (Direct Search) Method for NLP (Nelder & Mead, 1965): A pattern search technique useful for problems of low dimension. Not good when the dimension is high. The aim is to obtain a small simplex containing a minimum. In the end either the best vertex is taken as the best pt, or the best pt is obtained by some interpolation in the final simplex. There are four types of moves. Each iteration begins with a simplex <x¹, ..., x^{n+1}> with its vertices sorted so that θ(x¹) ≤ ... ≤ θ(x^{n+1}). The method tries to replace the worst vertex x^{n+1} by a better pt. The best facet of the current simplex is <x¹, ..., xⁿ>; it excludes the worst vertex; its centroid is x̄ = (1/n) Σ_{i=1}^n x^i. The reflexion x_r of x^{n+1} through x̄ in this facet is x_r = 2x̄ − x^{n+1}. If θ(x_r) < θ(x¹), the reflexion is very successful.
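The sorting, centroid, and reflexion computations just described can be sketched as follows. A minimal numpy sketch on a toy objective, not from the slides; names and data are mine:

```python
import numpy as np

def reflect_worst(simplex, theta):
    """One Nelder-Mead reflexion move: sort the vertices by objective
    value, then reflect the worst vertex through the centroid of the
    best facet (the facet excluding the worst vertex)."""
    verts = sorted(simplex, key=theta)      # theta(x1) <= ... <= theta(x_{n+1})
    xbar = np.mean(verts[:-1], axis=0)      # centroid of the best facet
    xr = 2.0 * xbar - verts[-1]             # reflexion of the worst vertex
    return verts, xbar, xr

theta = lambda v: float(v @ v)              # toy objective ||x||^2
simplex = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 2.0])]
verts, xbar, xr = reflect_worst(simplex, theta)
print(xbar, xr)                       # centroid (0.5, 0), reflexion (-1, -2)
print(theta(xr) < theta(verts[-1]))   # True: reflected point beats the worst
```

A full implementation would then test θ(x_r) against θ(x¹) and θ(xⁿ) to decide between expansion, contraction, and shrinking.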
Now try expanding the simplex in the same direction: let x_e = 2x_r − x̄. The expansion is successful if θ(x_e) < θ(x¹). In this case, the new simplex is <x_e, x¹, ..., xⁿ>; go to the next iteration.

4.1 Theorems of Alternatives for Systems of Linear Constraints
Katta G. Murty, IOE 611 Lecture slides

A system of constraints is Feasible if it has a feasible solution, i.e., one satisfying all the constraints in it; Infeasible if it has no feasible solution.

History: The beginning is J. Farkas's famous paper (1901). These results are fundamental for deriving necessary optimality conditions for LP and NLP. Farkas was a Prof. of theoretical physics at the U. of Kolozsvar, Hungary, motivated by nonlinear min problems subject to inequality constraints in the study of mechanical equilibrium, first posed by J. Fourier in 1798.

General form: Every theorem of alternatives (T of A) deals with a system of linear constraints, I say; it constructs another system II in different variables, but sharing the data with I. The statement usually says: either I has a feasible solution, or II has a feasible solution, but not both.

The FIE and the FII: The Fundamental Inconsistent Equation (FIE) is 0 = 1. The Fundamental Inconsistent Inequality (FII) is 0 ≥ 1. Broadly, a T of A for systems of equations says that an infeasible system of equations is equivalent to the FIE. And a T of A for systems involving inequalities states that if the system is infeasible, it is equivalent to the FII.

T of A for systems of linear eqs: Theorem: A system of linear eqs, Ax = b, is infeasible iff the FIE can be obtained as a linear combination of the eqs in it.

Example:

    3x1 + 2x2 + 2x3 = 2
     x1 + 2x2 + 3x3 = 1

Example:

        1 2 2 1        8
    A = 1 0 4 7,   b = 16
        0 2 6 8        6

T of A for linear eqs: Given A (m×n), b (m×1), either I has a solution x, or II has a solution π = (π1, ..., πm), but not both:

    I: Ax = b.    II: πA = 0, πb ≠ 0.

T of A for systems including linear inequalities: Example:

     x1 + x2 + 2x3 ≥ 3
    2x1 + x2 + 3x3 ≤ 1

Valid Linear Combination of Linear Inequalities: Can multiply inequalities only by nonnegative scalars. Can add inequalities only if they are all in the same direction, i.e., either all are ≤, or all are ≥. A T of A for systems involving inequalities can be proved using Tucker's Lemma.
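The T of A for linear equations stated above can be checked numerically: for an infeasible system, exhibit a multiplier vector π with πA = 0 and πb ≠ 0, i.e. a linear combination of the eqs giving the FIE. A minimal numpy sketch; the system and the vector π below are illustrative, not the lecture's example:

```python
import numpy as np

# A hypothetical infeasible system:
#    x1 +  x2 = 1
#   2x1 + 2x2 = 3     (the 2nd eq is twice the 1st, but 3 != 2)
A = np.array([[1.0, 1.0], [2.0, 2.0]])
b = np.array([1.0, 3.0])

# Certificate of infeasibility: pi A = 0 while pi b != 0, so the
# combination pi of the equations reads 0 = pi b (the FIE, scaled).
pi = np.array([-2.0, 1.0])
print(pi @ A)   # [0. 0.]
print(pi @ b)   # 1.0 -> the fundamental inconsistent equation 0 = 1

# Cross-check: least squares cannot drive the residual of Ax = b to zero.
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x - b) > 1e-6)   # True
```

Finding such a π in general is itself a linear problem, which is why these alternatives pair so naturally with LP duality.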
Tucker's Lemma: Given A (m×n), consider the following homogeneous systems:

    Ax ≥ 0                (1)
    πA = 0, π ≥ 0         (2)

where π = (π1, ..., πm) and x = (x1, ..., xn)T. There exist solutions x̄ for (1) and π̄ for (2) satisfying Ax̄ + π̄T > 0.

Tucker Diagram:

         x1  ...  xn
        a11  ...  a1n   ≥ 0
         .         .
        am1  ...  amn   ≥ 0

The lemma says that there exist solutions x̄, π̄ to the two systems which satisfy, for each i = 1 to m: either π̄i > 0, or Ai.x̄ > 0, or both.

Farkas' Lemma: Given A (m×n), b (m×1), either I has a solution x, or II has a solution π = (π1, ..., πm), but not both:

    I: Ax = b, x ≥ 0.    II: πA ≤ 0, πb > 0.

Optimality Conditions for LP from Farkas' Lemma: Theorem: Consider the general LP: min cx, s. to Ax ≥ b, where A is m×n. If x̄ is a feasible solution, it is optimal iff there exists a π̄ = (π̄1, ..., π̄m) satisfying dual feasibility (π̄A = c, π̄ ≥ 0) and the C S conds (π̄(Ax̄ − b) = 0).

Example: Consider the feasible solution x̄ = (6, 0, 1, 0, 2)T to the LP:

    x1  x2  x3  x4  x5
     1   1   1   2   1   3 5
     2   2   1   3   5   8 1 6
     3   3   5   5   1   7 7 3 1 3 5
    Minimize

Motzkin's T of A: Either I: A1x > 0, A2x ≥ 0, A3x = 0 has a solution x; or II: π1A1 + π2A2 + π3A3 = 0, π1 ≥ 0, π1 ≠ 0, π2 ≥ 0 has a solution; but not both.

Gordan's T of A: Either I: Ax > 0 has a solution x; or II: πA = 0, π ≥ 0, π ≠ 0 has a solution π; but not both.

Gale's T of A: Either I: Ax ≤ b has a solution x; or II: πA = 0, πb = −1, π ≥ 0 has a solution π; but not both.

1.1 NLP Formulation Examples & Techniques
Katta G. Murty, IOE 611 Lecture slides

Rectangular Rail Car Design: Top & bottom sheet material costs 325 per m²; siding sheet, 175 per m². The volume should be ≥ 1000 m³. The height should be ≤ 3 m. The total area of the sides should be ≤ 75 m². Find the min cost design.

Simple Portfolio Model: Data: b = budget available to invest; n = number of investment opportunities available; μj = expected fraction return in the jth opportunity, j = 1 to n; D = n × n variance-covariance matrix of the fraction returns. μ, D are usually estimated from past data; the actual returns are random variables. Two important objective functions: one to max the expected return; a 2nd to min risk, measured by the variance of the total fraction return. Banks and mutual fund operators use models like this, mostly based on QP models, to plan investments.

Curve Fitting (parameter estimation in nonlinear modeling): In the above, the objective and constraint functions came with the problem. That is not common in NLP modeling. In most applications, the functional forms of the objective and constraint functions have to be determined empirically. This work
is called Curve Fitting. There are two phases in it.

Phase I: Select Model Function. Either based on theoretical considerations, or by guessing a suitable one looking at plots of the data.

Phase II: Parameter Estimation. Usually the model function has parameters whose values are to be determined to give the best fit to the data.

Input for curve fitting:

INDEPENDENT VARIABLES: x = (x1, ..., xn)T.
DEPENDENT VARIABLE: y, the characteristic we are trying to determine as a func. of x.
DATA FOR PARAMETER ESTIMATION: observations on the values of y at m points in x-space, i.e., (y^r; x^r = (x1^r, ..., xn^r)), r = 1 to m.
MODEL FUNCTION SELECTED: f(a, x), where a = (a1, ..., as)T is the vector of parameters to be estimated.

PARAMETER ESTIMATION:

Case 1: m = s. In this case, try to determine the parameter vector a from the equations y^1 = f(a, x^1), ..., y^m = f(a, x^m).

Case 2: m > s. The most commonly used method is the Least squares method. It determines a to min the least squares measure of deviation, L2(a) = Σ_{r=1}^m (y^r − f(a, x^r))². The least squares solution â minimizes L2(a). The minimum value L2(â) is called the Residue; the residue measures the error in the fit. If the residue is small, f(â, x) is accepted as the function representing y as a function of x. If the residue is large, two remedies:

• The model function selected is inappropriate. Select a better one and try again.
• The least squares problem may be nonconvex, and the algorithm used may not have obtained a good solution. Run the algorithm with a different initial point, or use another algo.

If f(a, x) is linear in a, the measures of deviation L1(a), L∞(a) are also used, and parameter estimation is carried out using LP.

Optimizing Nitric Acid Production Costs: Important process variables:

P1 = air compressor discharge pressure, in psi
Q1 = flow rate thro' the compressor, in cfm at 70°F & 14.7 psi
P2 = CC outlet pressure, in psi
Q2 = portion of Q1 going thro' the PRT, in cfm
T2 = inlet temp at the PRT, in °F

Curve fitting jobs for constructing the model:

1. Q1, P1 are related through an equation. With a linear model function, a good fit was found: P1 = 0.002032 Q1 + 18.6916.
2. P2, Q2, T2 are related through an eq. The best linear fit was P2 = 0.100027 T2 + 0.0234267 Q2 + 68.381.
3. P1 − P2, the pressure drop across the process train, is a function of Q1, T2. The best linear fit was P1 − P2 = 0.001104 Q1 + 5.4469.
4. Chp is a function of P1, Q1; PRThp is a function of Q2, T2, P2. The best fits are Chp = 0.07706 Q1 (P1^0.3548 − 1.7835) and PRThp = 0.0003452 Q2 T2 (1 − 1.7835 P2^−0.2217). Also, SThp = Chp − PRThp has an upper bound of 2000.

5. Costs, estimated per CF of air processed through the compressor, are: ST (steam) cost + CC fuel cost = 1.29066 × 10⁻⁴ × SThp/Q1 + 4.31402 × 10⁻⁹ T2 + 5.86707 × 10⁻⁶; Labor cost = 0.1025 × 10⁻⁴ + 0.1970 × 10⁻⁹ Q1; Other costs = 0.4333 × 10⁻⁴ + 0.8055 × 10⁻⁹ Q1.

So the overall model is:

    Minimize 1.29066 × 10⁻⁴ (Chp − PRThp)/Q1 + 4.31402 × 10⁻⁹ T2 + 1.0025 × 10⁻⁹ Q1

    subject to
    100 ≤ P1 ≤ 140
    85 ≤ P2 ≤ 100
    24,000 ≤ Q1 ≤ 28,000
    900 ≤ T2 ≤ 1250
    Q1 − Q2 ≥ 0, Q2 ≥ 0
    Chp = 0.07706 Q1 (P1^0.3548 − 1.7835)
    PRThp = 0.0003452 Q2 T2 (1 − 1.7835 P2^−0.2217)
    0 ≤ PRThp ≤ 2000
    P1 = 0.002032 Q1 + 18.6916
    P2 = 0.100027 T2 + 0.0234267 Q2 + 68.381
    P1 − P2 = 0.001104 Q1 + 5.4469

When the model was solved, the optimum solution reduced the production cost per ton of acid to 146, from 153 under the operating conditions at the time of the study.

Boiler Shop Optimization: A company has 5 boilers of different load ranges working in parallel for generating steam.

    Boiler i   Load range: lower ℓi   upper ki
    1          10 units               60
    2          10                     60
    3          15                     120
    4          12                     112
    5          15                     135

Each boiler's energy efficiency depends on its load. Let fi(xi) be the efficiency of the ith boiler when its load is xi units of steam. We fit a cubic polynomial fi(xi) = a0i + a1i xi + a2i xi² + a3i xi³ for fi. The best values of the parameters are:
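The cubic efficiency fit just described is itself a least squares estimation problem of the Case m > s type discussed earlier (here s = 4 parameters per boiler). A minimal numpy sketch on synthetic data; the load-efficiency values and coefficients below are hypothetical, not the course data:

```python
import numpy as np

# Synthetic load/efficiency observations for one boiler (hypothetical
# values), with loads inside the boiler's operating range:
load = np.linspace(10.0, 60.0, 12)
rng = np.random.default_rng(0)
true_a = np.array([0.30, 0.025, -4.0e-4, 2.0e-6])   # a0, a1, a2, a3
eff = (true_a[0] + true_a[1] * load + true_a[2] * load**2
       + true_a[3] * load**3 + rng.normal(0.0, 1e-3, load.size))

# Least squares fit of f_i(x) = a0 + a1 x + a2 x^2 + a3 x^3  (m = 12 > s = 4)
a_hat = np.polynomial.polynomial.polyfit(load, eff, deg=3)

# The residue L2(a_hat) measures the error of the fit
fitted = np.polynomial.polynomial.polyval(load, a_hat)
L2 = float(np.sum((eff - fitted) ** 2))
print(a_hat)   # close to true_a
print(L2)      # small residue -> accept the fitted cubic
```

With one fitted cubic per boiler, the load-allocation problem (choose xi in [ℓi, ki] summing to the required steam) becomes an NLP with polynomial objective terms.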
