
# Advanced Calculus MATH 567

WVU



This 28-page set of class notes was uploaded by Rae Kutch on Saturday, September 12, 2015. The notes belong to MATH 567 at West Virginia University, taught by Staff in Fall. Since upload they have received 42 views. For similar materials see /class/202659/math-567-west-virginia-university in Mathematics (M) at West Virginia University.



### Eigenvalues and Eigenvectors

If Av = λv with v nonzero, then λ is called an eigenvalue of A and v is called an eigenvector of A belonging to λ.

Agenda: understand the action of A by seeing how it acts on eigenvectors.

(A − λI)v = 0 is the system to be satisfied by λ and v. For a given λ, the only solution is v = 0, except if det(A − λI) = 0; in that case a nonzero v satisfying (A − λI)v = 0 is guaranteed to exist, and that v is an eigenvector.

det(A − λI) is a polynomial of degree n with leading term (−λ)^n. This polynomial is called the characteristic polynomial of the matrix A, and det(A − λI) = 0 is called the characteristic equation of the matrix A. There are n roots of the characteristic polynomial; these may or may not be distinct. For each eigenvalue λ, the corresponding eigenvectors satisfy (A − λI)v = 0: the nonzero solutions v are the eigenvectors, and these are the nonzero vectors in the null space of A − λI. For each eigenvalue we calculate a basis for the null space of A − λI, and these basis vectors represent the corresponding eigenvectors, with it being understood that any nonzero linear combination of these vectors produces another eigenvector corresponding to that λ.

We will see shortly that if λ is a nonrepeated root of the characteristic polynomial, then the null space of A − λI has dimension one, so there is only one corresponding eigenvector; that is, a basis of the null space has only one vector, which can be multiplied by any nonzero scalar.

Observation: if λ_i is an eigenvalue of A with v_i a corresponding eigenvector, then for any λ (it doesn't have to be an eigenvalue) we have (A − λI)v_i = (λ_i − λ)v_i.

Fun facts about eigenvalues and eigenvectors:

1. Eigenvectors corresponding to different eigenvalues are linearly independent.
2. If λ is a nonrepeated root of the characteristic polynomial, then there is exactly one corresponding eigenvector up to a scalar multiple; in other words, the dimension of the null space of A − λI is one.
3. If λ is a repeated root of the characteristic polynomial with multiplicity m0, then there is at least one corresponding eigenvector and up to m0 independent corresponding eigenvectors; in other words, the dimension of the null space of A − λI is between 1 and m0.
4. As a consequence of item 1 above, if the characteristic polynomial has n distinct roots (where n is the degree of the polynomial), then there are n corresponding independent eigenvectors, which in turn constitute a basis for R^n (or C^n, if applicable).
5. When bad things happen to good matrices: deficient matrices. A matrix is said to be deficient if it fails to have n independent eigenvectors. This can only happen (but not necessarily) if an eigenvalue has multiplicity greater than 1. However, it is always true that if an eigenvalue λ has multiplicity m, then null((A − λI)^m) has dimension m. The vectors in this null space are called generalized eigenvectors.
6. Complex eigenvalues: if A is real, then complex eigenvalues/eigenvectors come in complex conjugate pairs. If λ is an eigenvalue with eigenvector v, then the conjugate of λ is an eigenvalue with eigenvector the conjugate of v.
7. If A is a triangular matrix, the eigenvalues are the diagonal entries.

Diagonalization. If A has a full set of eigenvectors (n linearly independent eigenvectors) and we put them as columns in a matrix V, then AV = VΛ, where Λ is a diagonal matrix with the eigenvalues corresponding to the columns of V down the diagonal. Then V^{-1}AV = Λ; this is called diagonalizing A. Also, we have A = VΛV^{-1}.

Calculating powers of A: A^m = (VΛV^{-1})^m = VΛ^m V^{-1}.

In general, if P is any invertible matrix, then P^{-1}AP is called a similarity transformation of A. Diagonalizing A consists of finding a P for which the similarity transformation gives a diagonal matrix. Of course, we know that such a P would need to be a matrix whose columns are a full set of eigenvectors: A is diagonalizable if and only if there is a full set of eigenvectors. Given any linearly independent set of n vectors, there is a matrix A that has these as eigenvectors, namely A = VΛV^{-1} for any diagonal Λ we wish to specify; the diagonal entries of Λ are the eigenvalues.

A similarity transformation can be considered as specifying the action of A in a transformed coordinate system.
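As a concrete check of the definitions above, here is a minimal pure-Python sketch (the matrix A is an illustrative example, not one from the notes): for a 2×2 matrix the characteristic equation is the quadratic λ² − tr(A)λ + det(A) = 0, and an eigenvector is a nonzero vector in the null space of A − λI.

```python
import math

# Sanity check of Av = lambda*v for an illustrative 2x2 symmetric matrix.
# For a 2x2 matrix the characteristic polynomial is
#   lambda^2 - trace(A)*lambda + det(A) = 0,
# so the eigenvalues come from the quadratic formula.
A = [[2.0, 1.0],
     [1.0, 2.0]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
eigenvalues = [(tr + disc) / 2.0, (tr - disc) / 2.0]   # 3.0 and 1.0

def eigenvector(A, lam):
    """A nonzero solution of (A - lam*I)v = 0 for a 2x2 matrix."""
    a, b = A[0][0] - lam, A[0][1]
    # The first row reads a*v1 + b*v2 = 0; pick v = (b, -a), or (1, 0) if both vanish.
    return (b, -a) if (a, b) != (0.0, 0.0) else (1.0, 0.0)

for lam in eigenvalues:
    v = eigenvector(A, lam)
    Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
```

For λ = 3 this produces v = (1, 1) and for λ = 1 it produces v = (1, −1); any nonzero scalar multiple would serve equally well, as noted above.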
Given y = Ax as a transformation in R^n: if we transform the coordinates by x = Pu and y = Pw, so that our new coordinate directions are the columns of P, then u and w are related by w = (P^{-1}AP)u, so that P^{-1}AP is the transformed action of A in the new coordinate system; i.e., A is transformed to the new matrix P^{-1}AP in the new coordinate system.

If A doesn't have a full set of eigenvectors, then it cannot be diagonalized. (Why? If A can be diagonalized, then A = PΛP^{-1}, so AP = PΛ, and the columns of P are seen to be a full set of eigenvectors.) But you can always find a similarity transformation P^{-1}AP such that P^{-1}AP has a special upper triangular form called Jordan form. We will not delve any further into this, however. Remember, though, that we did note that a square matrix A always has a full set of generalized eigenvectors, even when A itself is deficient.

Additional remark on similarity transformations. If P^{-1}AP = B, then A and B have the same characteristic polynomial, the same eigenvalues, and the same eigenvector structure, in the sense that if v is a (generalized) eigenvector of A, then P^{-1}v is a (generalized) eigenvector of B. An algebraic way of seeing this is to note first that P^{-1}(A − λI)P = B − λI and P^{-1}(A − λI)^k P = (B − λI)^k, and that

det(B − λI) = det(P^{-1}(A − λI)P) = det(P^{-1}) det(A − λI) det(P) = det(A − λI),

since 1 = det(P^{-1}P) = det(P^{-1}) det(P). It follows that (A − λI)v = 0 implies P^{-1}(A − λI)P (P^{-1}v) = (B − λI)(P^{-1}v) = 0, and (A − λI)^k v = 0 implies P^{-1}(A − λI)^k P (P^{-1}v) = (B − λI)^k (P^{-1}v) = 0.

Symmetric matrices (A^T = A). Properties:

1. All eigenvalues are real.
2. There is always a full set of eigenvectors.
3. Eigenvectors from different eigenvalues are automatically orthogonal to each other.

So in particular, if all the eigenvalues are distinct, there is an orthonormal basis of R^n consisting of eigenvectors of A. If the eigenvalues have multiplicity greater than 1, you can always arrange for the corresponding eigenvectors to be orthogonal to each other (Gram-Schmidt process). So, finally, you can always arrange for the orthogonal eigenvectors of A to have magnitude 1, and thus construct a full set of orthonormal eigenvectors of A. If V is the matrix whose columns are those eigenvectors, then V^T V = I, so that V^T = V^{-1}.
Application to quadratic forms, e.g. a form F(x, y, z) with squared terms and cross terms. By writing a quadratic form as x^T A x, where A is symmetric, the transformation x = Vu, where the columns of V are an orthonormal set of eigenvectors of A, gives new coordinates u in which the quadratic form is x^T A x = u^T V^T A V u = u^T V^{-1} A V u = u^T Λ u, where Λ is the diagonal matrix of eigenvalues. This is called diagonalizing the quadratic form: in terms of the new coordinates, the quadratic form consists purely of a combination of squares of the coordinates, λ1 u1² + λ2 u2² + λ3 u3², with no cross terms. (The original notes work a three-variable example in MATLAB with [V, D] = eig(A); the numerical output was garbled in transcription. The session checks that V'*V = I, i.e. that the columns of V are orthonormal, and displays the new coordinate vectors, the columns of V, as red, green, and blue unit vectors corresponding to i, j, k in the standard coordinate system.)

Determinants, det A. Defining properties:

1. det A is a linear function of the elements in any single row.
2. If two rows are interchanged, the value of det is multiplied by −1.
3. det I = 1.

Other properties that are consequences of the defining properties:

1. If two rows are identical, det A = 0. (If you interchange them you get the negative of det A, but the new matrix is the same, so −det A = det A, and det A = 0 follows.)
2. By expanding across each row in turn, we get the "super expansion formula"

det A = Σ_σ sgn(σ) a_{1σ(1)} a_{2σ(2)} a_{3σ(3)} ⋯ a_{nσ(n)}.

Here σ is a permutation of (1, 2, …, n), and sgn(σ) = (−1)^k, where k is the number of interchanges required to turn σ into (1, 2, …, n), i.e. to put it in the correct order. Note that each term in the sum is a product of one element from each row of A, coming from different columns, and the sum is over all such possible products.
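The super expansion formula can be implemented directly from its definition (inefficiently, since there are n! permutations); a short illustrative sketch, with a 3×3 matrix chosen for the example:

```python
from itertools import permutations

def det_permutation(A):
    """Determinant via the full permutation expansion:
    det(A) = sum over permutations s of sgn(s) * a[0][s[0]] * a[1][s[1]] * ...
    (one factor from each row, all from different columns)."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        # sgn(s) = (-1)^(number of inversions of s)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        prod = 1
        for i in range(n):
            prod *= A[i][s[i]]
        total += (-1) ** inversions * prod
    return total

# A 3x3 check against the value from cofactor expansion across the first row.
A = [[2, 0, 1],
     [1, 3, 2],
     [0, 1, 4]]
assert det_permutation(A) == 2 * (3*4 - 2*1) - 0 + 1 * (1*1 - 3*0)   # = 21
```

Counting inversions gives the same sign as counting interchanges, since each adjacent interchange changes the inversion count by exactly one.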
3. Adding a multiple of one row to another row does not change the value of the determinant.
4. If two rows are multiples of one another, det A = 0.

What the determinant determines: det A ≠ 0 if and only if A is row equivalent to I. This is because none of our row operations changes whether the value of the determinant is zero or not. Thus the determinant of the reduced row echelon form of A is nonzero if and only if det A is nonzero, and the determinant of the reduced row echelon form of A is nonzero if and only if rref(A) = I, for otherwise there is an entire row of zeros.

5. det A^T = det A. You can show this by examining the expansions on each side. Consequence: anything that's true about rows is true about columns.

6. Laplace expansion formula for det A; cofactors. Since det A is a linear function of each row, we have, for a fixed value of i, det A = Σ_j C_ij a_ij, where C_ij depends only on elements not in row i. These coefficients C_ij are called the cofactors, and the matrix C = (C_ij) is called the cofactor matrix. C_ij = (−1)^{i+j} det M_ij, where M_ij can be described as the matrix that results from deleting row i and column j of A. The sign (−1)^{i+j} is associated with the (i, j) position: it is equal to +1 in the (1,1) position and otherwise alternates in sign as we go across rows or down columns. det M_ij is called the (i, j) minor determinant. Laplace expansion formula: det A = Σ_j (−1)^{i+j} det(M_ij) a_ij. We can also expand down columns; the cofactor formula is the same, det A = Σ_i (−1)^{i+j} det(M_ij) a_ij, the only difference being that the sum is over i, down the columns.

7. The determinant of a triangular matrix (upper triangular or lower triangular) is the product of the diagonal entries.

To evaluate determinants by hand, we can use row or column operations to make all but one element in some column or row zero; then we can expand by that column or row, and there will be only one nonzero term in the expansion, which results in a determinant with one less row and column. We work our way down this way until we arrive at a 2×2 or even a 1×1 determinant. Alternately, you can just put your matrix into row echelon form, which is upper triangular, and then obtain the determinant as the product of the diagonal elements; make sure, however, that you keep track of any row interchanges, or whether you multiply rows by scalars, as these change the determinant's value by the appropriate multiplicative constant.
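Both evaluation strategies just described, cofactor expansion and row reduction with sign tracking, can be sketched and cross-checked in a few lines (illustrative code, not from the notes):

```python
from fractions import Fraction

def det_laplace(A):
    """Laplace (cofactor) expansion across the first row:
    det(A) = sum_j (-1)^j * a[0][j] * det(M_0j)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # delete row 0, column j
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

def det_elimination(A):
    """Row-reduce to upper triangular form, tracking row swaps:
    det = (-1)^(number of swaps) * product of the diagonal entries."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for c in range(n):
        p = next((r for r in range(c, n) if U[r][c] != 0), None)
        if p is None:
            return 0                     # no pivot in this column: singular
        if p != c:
            U[c], U[p] = U[p], U[c]
            sign = -sign                 # each row interchange flips the sign
        for r in range(c + 1, n):
            m = U[r][c] / U[c][c]
            U[r] = [a - m * b for a, b in zip(U[r], U[c])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= U[i][i]
    return prod

A = [[0, 2, 1], [1, 1, 3], [2, 0, 4]]
assert det_laplace(A) == det_elimination(A) == 2
```

Exact rational arithmetic (`Fraction`) is used in the elimination so that the two answers agree exactly rather than up to rounding error.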
The amazing cofactor matrix. With C the cofactor matrix, the property of interest is:

(row i of A) · (row i′ of C) = det A if i = i′, and 0 if i ≠ i′.

The idea is that the Laplace expansion formula det A = Σ_j C_ij a_ij can be interpreted on the right as the dot product of a row of C with a row of A. Formula for the inverse: A^{-1} = (det A)^{-1} C^T.

Equivalent properties for a square matrix (any one of the following implies the others):

- det A ≠ 0
- A^{-1} exists (A is invertible; A is nonsingular)
- Ax = 0 has only x = 0 as a solution (boldface denotes a column vector)
- rank A = n
- the columns of A are linearly independent
- the rows of A are linearly independent
- Ax = b has a solution for each b
- any solution of Ax = b is unique

Row space of a matrix: the subspace of R^n that is spanned by the rows. Basis for the row space: the nonzero rows of the reduced row echelon form of A provide a basis of the row space. Dimension of row space = rank = dimension of the column space. Matrix representation: A = CR, where R consists of the rref basis rows of the row space of A and C is the matrix consisting of the columns of A that form a basis for its column space.

Null space: the vectors x in R^n that satisfy Ax = 0. Any two row-equivalent matrices have the same row space and the same null space (row operations do not change the solution set of a linear system). Dimension of the null space = n − r (columns minus leading entries); we will see this next.

Constructing a basis of the null space. First we define: basic variables are those associated with the pivot columns of A (columns that contain a leading entry); nonbasic variables are those associated with columns of A that do not contain pivots/leading entries. In our example (a MATLAB session on a numerical matrix whose entries were garbled in transcription), the rref of A has leading entries in columns 1, 2, 3, and 5, so the basic variables are x1, x2, x3, x5 and the nonbasic variables are x4, x6.
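The null-space construction (one basis vector per nonbasic column: set that variable to 1, the other nonbasic variables to 0, and back-substitute from the rref rows) can be sketched in exact rational arithmetic. The matrix below is illustrative, since the notes' example matrix did not survive transcription:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form over the rationals; returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if p is None:
            continue                       # no pivot in this column (free variable)
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def null_basis(M):
    """Basis of the null space: set each nonbasic variable to 1 in turn."""
    R, pivots = rref(M)
    cols = len(M[0])
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]                # back-substitute from the rref rows
        basis.append(v)
    return basis

# Illustrative 3x4 matrix of rank 2, so the null space has dimension 4 - 2 = 2.
A = [[1, 2, 0, 1],
     [2, 4, 1, 1],
     [1, 2, 1, 0]]
B = null_basis(A)
for v in B:                                # each basis vector satisfies Av = 0
    assert all(sum(A[i][j] * v[j] for j in range(4)) == 0 for i in range(3))
```

Here columns 1 and 3 are the pivot columns, so x2 and x4 are the nonbasic variables, and the two basis vectors carry a 1 in positions 2 and 4 respectively, exactly as in the recipe above.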
In turn, set each nonbasic variable equal to one with the others equal to zero, and generate the corresponding solution of Ax = 0. This will produce n − r independent solutions, where r = rank A. In our example, the null-space basis vectors were obtained in MATLAB via null(A, 'r') (but you can easily do it yourself); in each basis vector one nonbasic variable equals 1 and the other equals 0, which identifies them.

Other observations. If r = n (full column rank): the null space is just the zero vector (its basis can be considered the empty set); the columns of A are linearly independent; Ax = 0 has the unique solution x = 0; the row space is R^n (the rows span all of R^n). If r = m (full row rank): the rows of A are linearly independent; the columns span R^m; Ax = b has a solution for any b.

### Linear systems of first-order differential equations

Systems of the form x' = A(t)x + b(t). Example: the system

x1' = t x1 + x2 + cos t
x2' = 2 x1 + e^t x2

can be written in the form x' = A(t)x + b(t) with

A(t) = [t 1; 2 e^t], b(t) = (cos t, 0)^T, x = (x1, x2)^T.

Note: any scalar DE of order n can be rewritten as a system of n first-order DEs.

Basic theory. Existence/uniqueness: an initial value problem includes an initial condition of the form x(t0) = x0. Initial value problems for linear systems of first-order DEs have a unique solution on the largest open interval I containing t0 for which the entries in A(t) and b(t) are continuous.

Homogeneous system x' = A(t)x. For such equations we have superposition of solutions: any linear combination of solutions is a solution. This is easy to see if we rewrite the equation as L[x] = x' − A(t)x = 0 and note that L is linear, namely L[x + y] = L[x] + L[y] and L[cx] = cL[x]. To obtain the general solution of x' = A(t)x on an interval I where existence/uniqueness holds, we need only have a family of solutions that can satisfy any set of initial conditions at some t = t0 contained in I. If we are working in R^n (i.e., we have n first-order DEs), then we are looking for n solutions of x' = A(t)x, given by x1(t), …, xn(t), so that x(t) = c1 x1(t) + ⋯ + cn xn(t) is the general solution.
Such a set of solutions is called a fundamental set of solutions. This will be the case if the vectors x1(t0), …, xn(t0) are linearly independent at some t = t0, since in that case x0 = x(t0) = c1 x1(t0) + ⋯ + cn xn(t0) will have a solution for any chosen x0. Note that independence of x1(t), …, xn(t) at t = t0 implies independence throughout the interval I.

Nonhomogeneous system x' = A(t)x + b(t): the general solution is x = x_h + x_p, where x_h is the general solution of x' = A(t)x and x_p is any particular solution of x' = A(t)x + b(t). We can actually write a simple formula for x_p, which we do a bit further down.

Fundamental matrix. If we take a fundamental set of solutions of x' = A(t)x and put them in a matrix Φ(t) = [x1(t) ⋯ xn(t)], then such a matrix is called a fundamental matrix of the DE x' = A(t)x. Then linear combinations of the solutions, x(t) = c1 x1(t) + ⋯ + cn xn(t), can be written x(t) = Φ(t)c. An initial condition x(t0) = x0 then results in x0 = Φ(t0)c, so c = Φ^{-1}(t0) x0, and x(t) = Φ(t) Φ^{-1}(t0) x0 gives a formal way of writing the solution. Note that the matrix P(t) = Φ(t) Φ^{-1}(t0) is also a fundamental matrix (each column is a linear combination of solutions, and so is a solution, and its columns can clearly satisfy any initial condition at t = t0), and P(t) also has the nice property that x(t) = P(t) x0 is the solution of x' = A(t)x, x(t0) = x0. This is the same as the property that P(t0) = I. A fundamental matrix Φ(t) such that Φ(t0) = I is said to be "normalized at t = t0". We have just observed that if Φ(t) is any fundamental matrix, then Φ(t) Φ^{-1}(t0) is a fundamental matrix normalized at t = t0.

Variation of parameters formula. Once we have the general solution of the homogeneous system, we can write a formula for a particular solution of x' = A(t)x + b(t): x_p = Φ(t) ∫ Φ^{-1}(t) b(t) dt is a particular solution, or, more specifically, x_p = Φ(t) ∫_{t0}^{t} Φ^{-1}(s) b(s) ds is the particular solution satisfying x_p(t0) = 0. Then the solution of the initial value problem x' = A(t)x + b(t), x(t0) = x0, is given by

x = Φ(t) Φ^{-1}(t0) x0 + Φ(t) ∫_{t0}^{t} Φ^{-1}(s) b(s) ds.

Notice how much "simpler" this is than our old variation of parameters formula: the general idea is clearly exhibited, while the algebraic details are hidden in the inverse notation.
### Constant-coefficient homogeneous linear systems

We cannot hope to solve general equations x' = A(t)x in terms of formulas, although we have demonstrated some nice properties of the solutions of such linear systems. After all, even a simple equation such as Airy's equation y″ = ty, or in matrix form

x' = [0 1; t 0] x, where x1 = y, x2 = y',

has no solution that can be written in terms of elementary functions. We concentrate here on the case where the matrix A is constant. Such types of equations are important and arise naturally when nonlinear autonomous DEs are linearized (autonomous means that t does not explicitly appear in the equation).

Solutions of x' = Ax. From here on we assume that A is constant. x' = Ax has a solution of the form x(t) = e^{λt} v, where v is a constant vector, if and only if λ is an eigenvalue of A and v is a corresponding eigenvector. This can be seen by plugging in: x' = λ e^{λt} v = A e^{λt} v if and only if Av = λv.

The case of pure exponential solutions: n independent eigenvectors. If the matrix A has n independent eigenvectors, as would be the case, for instance, if the eigenvalues were distinct roots of the characteristic equation, then we obtain n solutions x_j(t) = e^{λ_j t} v_j, j = 1, …, n, that are a fundamental set of solutions, since at t0 = 0 the vectors x_j(0) = v_j are linearly independent by assumption. In that case "we are done", in the sense that we have the general solution

x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2 + ⋯ + cn e^{λn t} vn.

A fundamental matrix Φ can be composed from these solutions and can be written in the form Φ(t) = V e^{Λt}, where V = [v1 ⋯ vn] is the matrix of eigenvectors and e^{Λt} represents the diagonal matrix with the terms e^{λ_j t} down the diagonal. A fundamental matrix of solutions normalized at t = 0 can then be written (since Φ(0) = V) as P(t) = Φ(t) Φ^{-1}(0) = V e^{Λt} V^{-1}.

The matrix exponential. In the case of one equation, the DE x' = ax has solution x = e^{at} x0. Can something like this work in the matrix case? If A is an n×n constant matrix, we want to define e^{At} in such a way that x(t) = e^{At} x0 is the solution of x' = Ax, x(0) = x0. We can define e^{At} = exp(At) (alternate notation) via a power series.
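Before developing the series, a quick numerical sanity check of the pure exponential solutions described above, under the assumption of the illustrative matrix A = [[2, 1], [1, 2]] with eigenpairs (3, (1, 1)) and (1, (1, −1)) (not a matrix from the notes): the eigenvector expansion of the initial condition should match a direct Runge-Kutta integration of x' = Ax.

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]
# Eigenpairs: lambda = 3 with v = (1, 1); lambda = 1 with v = (1, -1).
# x(t) = c1*e^{3t}(1,1) + c2*e^{t}(1,-1); c1 = c2 = 1 matches x(0) = (2, 0).
def exact(t):
    return (math.exp(3*t) + math.exp(t), math.exp(3*t) - math.exp(t))

def f(x):
    """Right-hand side of x' = Ax."""
    return (A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1])

def rk4(x, h, steps):
    """Classical 4th-order Runge-Kutta for the autonomous system x' = f(x)."""
    for _ in range(steps):
        k1 = f(x)
        k2 = f((x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
        k3 = f((x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
        k4 = f((x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return x

num = rk4((2.0, 0.0), 0.001, 1000)      # integrate to t = 1
ex = exact(1.0)
assert abs(num[0] - ex[0]) < 1e-6 and abs(num[1] - ex[1]) < 1e-6
```

The step size and tolerance are illustrative choices; the point is only that the eigenvector-based formula and the numerical solution agree.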
In the scalar case, e^{at} = 1 + at + (at)²/2! + (at)³/3! + ⋯, and the property (e^{at})' = a e^{at} follows by differentiating the power series. In the matrix case we define

exp(At) = I + At + (At)²/2! + (At)³/3! + ⋯,

the series being convergent for any fixed t, and we have (d/dt) exp(At) = A exp(At) for the same reason. This means that Φ(t) = exp(At) satisfies Φ' = AΦ, so that each column of Φ(t) = exp(At) is a solution of x' = Ax; moreover, Φ(0) = exp(A·0) = I. Thus exp(At) is a fundamental matrix of solutions normalized at t = 0.

We note that e^{A} e^{B} ≠ e^{A+B} unless AB = BA. It does, however, hold in the form e^{At} e^{-At} = e^{A(t−t)} = I, since At and −At commute.

If we define the diagonal matrix Λ = diag(λ1, …, λn), then it is not hard to verify from the power series that exp(Λt) = diag(e^{λ1 t}, …, e^{λn t}). This is consistent with our earlier notation. If Av = λv, then e^{At} v = e^{λIt} e^{(A−λI)t} v = e^{λt} v, since (A − λI)v = 0 kills every term of the series past the first.

Applying this relationship, if A has n independent eigenvectors, then, column by column, it is easy to verify that e^{At} V = V exp(Λt), and so we can write e^{At} = V exp(Λt) V^{-1}. Having arrived here, this is not surprising but inevitable: we previously observed that the right-hand side is a fundamental matrix of solutions normalized at t = 0, and as this is also true for e^{At}, and there can be only one matrix with this property, the two must be equal.

Deficient matrices. A matrix is "deficient" if it doesn't have n independent eigenvectors. This can happen only if (but not necessarily if) eigenvalues are repeated as roots of the characteristic equation. However, the following result is true for all matrices: if the eigenvalue λ has multiplicity k as a root of the characteristic equation p(λ) = det(A − λI) = 0, then the linear system (A − λI)^k v = 0 has k independent solutions; i.e., the null space of (A − λI)^k has dimension k. The solutions of (A − λI)^k v = 0 are called generalized eigenvectors. If we put together all of the eigenvectors/generalized eigenvectors corresponding to all the eigenvalues, they form a linearly independent set of n vectors.

Fundamental solutions corresponding to generalized eigenvectors. If v is a generalized eigenvector corresponding to eigenvalue λ, then (A − λI)^k v = 0 by definition.
For each such v we generate a solution x(t) = e^{At} v of x' = Ax as follows:

x(t) = e^{At} v = e^{λIt} e^{(A−λI)t} v
 = e^{λt} [I + t(A − λI) + (t²/2!)(A − λI)² + ⋯ + (t^{k−1}/(k−1)!)(A − λI)^{k−1}] v
 = e^{λt} [v + t(A − λI)v + ⋯ + (t^{k−1}/(k−1)!)(A − λI)^{k−1} v],

with the series terminating because all subsequent terms are 0, since v is a generalized eigenvector. Note that x(0) = v. Now, if we consider Φ(t) as composed of solutions of this form, then at t = 0 we have Φ(0) = V, the matrix composed of the eigenvectors/generalized eigenvectors. The columns of this matrix are linearly independent, as observed above, so Φ(t) is a fundamental matrix, and the solutions constructed from the eigenvectors/generalized eigenvectors form a fundamental set of solutions.

(The notes then work through several numerical examples whose matrix entries did not survive transcription: 2×2 and 3×3 matrices with their eigenvalues and eigenvectors, with characteristic polynomials λ³ − 5λ² + 8λ − 4 = (λ − 1)(λ − 2)², λ³ − 8λ² + 19λ − 12 = (λ − 1)(λ − 3)(λ − 4), λ³ − 4λ² − 4λ + 16 = (λ + 2)(λ − 2)(λ − 4), and λ³ − 7λ² + 20λ − 24 = (λ − 3)(λ² − 4λ + 8), the last having complex roots 2 ± 2i; and a 5×5 example with characteristic polynomial λ⁵ − 7λ⁴ + 19λ³ − 25λ² + 16λ − 4 = (λ − 2)²(λ − 1)³, where the repeated eigenvalues force the use of generalized eigenvectors: for λ = 2 the null space of A − 2I yields only one eigenvector while null((A − 2I)²) yields two generalized eigenvectors, and for λ = 1 there are only two eigenvectors, not three, with the third vector supplied by null((A − I)³). The final display notes that in the resulting factorization the middle matrix provides the eigenvalues and their multiplicities, while the outer similarity transformation "camouflages" the matrix.)
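The terminating series above is the same phenomenon as the exponential of a nilpotent matrix. A naive truncated-power-series exp(At), for illustration only (not a production algorithm), checked on a nilpotent example where the series terminates exactly:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, t, terms=30):
    """Truncated power series exp(At) = I + At + (At)^2/2! + ...
    Adequate for small matrices and moderate |t|."""
    n = len(A)
    At = [[t * A[i][j] for j in range(n)] for i in range(n)]
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]   # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[x / k for x in row] for row in term]     # term is now (At)^k / k!
        result = [[r + s for r, s in zip(r1, r2)] for r1, r2 in zip(result, term)]
    return result

# Nilpotent check: for A = [[0,1],[0,0]] we have (At)^2 = 0, so
# exp(At) = I + At = [[1, t], [0, 1]] exactly.
E = expm([[0.0, 1.0], [0.0, 0.0]], 2.0)
assert abs(E[0][0] - 1.0) < 1e-12 and abs(E[0][1] - 2.0) < 1e-12
assert abs(E[1][0]) < 1e-12 and abs(E[1][1] - 1.0) < 1e-12
```

Here A − 0·I = A is nilpotent, so every vector is a generalized eigenvector for λ = 0 and the series cuts off, exactly as in the solution formula above.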
### Linear systems of first-order DEs (recap)

x' = A(t)x + g(t); initial conditions x(t0) = x0.

Existence/uniqueness theorem: the initial value problem x' = A(t)x + g(t), x(t0) = x0, has a unique solution in the largest open interval I in t that contains t0 for which A(t) and g(t) are continuous.

General solution of the homogeneous problem x' = A(t)x. If we think of this as L[x] = x' − Ax = 0, we see that L is linear in the usual sense of a linear operation. Hence, for the homogeneous case, superposition of solutions applies.

General solution on an interval I where A is continuous: if x' = Ax is an n×n system (n first-order DEs), then the general solution can be expressed as a superposition of n solutions φ1, φ2, …, φn:

x = c1 φ1 + c2 φ2 + ⋯ + cn φn,

provided that we can satisfy any set of initial conditions at some point with a solution of this form. This will be true if, for some t0 in the interval I, the system c1 φ1(t0) + c2 φ2(t0) + ⋯ + cn φn(t0) = x0 has a solution for any x0. In turn, this is true if and only if det Φ(t0) ≠ 0, where Φ(t) is the matrix whose columns are the solutions φ1, φ2, …, φn. If x = c1 φ1 + c2 φ2 + ⋯ + cn φn is the general solution, then we call Φ(t) a fundamental matrix of solutions for the system x' = Ax. Any solution of x' = Ax can then be written in the form x = c1 φ1 + c2 φ2 + ⋯ + cn φn = Φ(t)c, where c is a vector of coefficients. Then the solution of the initial value problem x' = Ax, x(t0) = x0, can be written x = Φ(t) Φ^{-1}(t0) x0. The matrix Φ(t) Φ^{-1}(t0) is a matrix in which each column is a combination of the columns of Φ(t), and so Φ(t) Φ^{-1}(t0) is another fundamental matrix Ψ(t) of the system x' = Ax, with the property that Ψ(t0) = I, and the solution of x' = Ax, x(t0) = x0, is simply x = Ψ(t) x0. We say that Ψ is a fundamental matrix normalized at t = t0 if this holds.

Linear independence of solutions. We say solutions φ1, φ2, …, φn are linearly independent on I if no solution is a linear combination of the other solutions on the interval I.
If φ1, φ2, …, φn is a linearly independent set of n solutions, then Φ(t) is a fundamental matrix of solutions, and det Φ(t) is nonzero for every t in the interval I. The reason this is true: any set of solutions φ1, φ2, …, φn that fails to be a fundamental set must, for some t0, have the property that Φ(t0) is singular, and in turn there is a nonzero vector c of coefficients such that Φ(t0)c = 0. Then the solution x = Φ(t)c satisfies zero initial conditions at t0 and hence must be identically zero. Now Φ(t)c ≡ 0 means that one of the columns is, for all t in I, a linear combination of the other columns; i.e., the columns of Φ(t) are linearly dependent. We then argue that if the columns are independent, Φ(t) must be nonsingular at each t in I, and hence a fundamental matrix.

Nonhomogeneous linear systems x' = Ax + g: x = x_h + x_p is the general solution, where the meaning is "homogeneous general solution plus particular solution", as in the scalar case. There is a simple formula for x_p, which I'll refer to as the variation of parameters formula: x_p = Φ(t) ∫_{t0}^{t} Φ^{-1}(s) g(s) ds is the particular solution that satisfies x_p(t0) = 0. We can also write it in terms of an indefinite integral.

Constant-coefficient case x' = Ax + g(t), A constant. Homogeneous case x' = Ax: look for exponential solutions x = e^{λt} v; we find that this is a solution if and only if λ is an eigenvalue of A and v is a corresponding eigenvector. If A has a full set of real eigenvectors, the fundamental matrix Φ can be written column by column from these exponential solutions. Topics covered from here: the fundamental matrix normalized at t = 0; the fundamental matrix normalized at t0; general solution examples in the real and complex cases; the matrix exponential and its properties; the general solution when the matrix is deficient; and the transformational approach, which decouples the system (in the case of a full set of eigenvectors, use a similarity transformation; otherwise, convert to Jordan form). As a check that exp(At) is normalized at t = 0: exp(At) exp(−At) = exp(0·t) = I; or simply note that Ψ(t) Ψ^{-1}(t) x0 = x0 for any x0.

## Math 567 Notes

Differential equations: one independent variable, one or more dependent variables.
In a differential equation we prescribe the rate of change of each dependent variable as a function of given values of all the dependent and independent variables. In so-called higher-order differential equations, higher-order derivatives are specified, but these can be put into a form where only first-order rates of change of the variables are specified, through the introduction of additional dependent variables.

First-order differential equations. General first-order DE: y' = f(x, y). This is only a prescription for the rate of change of y. To obtain a specific solution we must specify a starting value of y, called the initial condition. This initial condition takes the form y(x0) = y0, where x0 and y0 are given constants. From the initial condition the solution evolves in both directions in x, as determined by the differential equation. An initial value problem is a differential equation together with initial conditions appropriate to define a unique solution. For first-order differential equations, an initial value problem takes the form y' = f(x, y), y(x0) = y0.

It is important to determine mathematical conditions on f that guarantee a unique solution of an initial value problem. A useful such condition is supplied by Picard's theorem: let (x0, y0) be contained in the interior of a rectangle R in the xy-plane in which the functions f(x, y) and ∂f/∂y are both continuous. Then the initial value problem y' = f(x, y), y(x0) = y0, has a unique solution that can be uniquely continued as long as the points (x, y) on the solution curve remain in the rectangle R.

Solution curves to first-order DEs cannot cross at any point where Picard's theorem can be applied, for were they to cross, two different solutions would emerge from the same initial condition. One can therefore imagine the plane being "filled" with nonintersecting solution curves; these are referred to as the integral curves of the DE. We'll assemble some examples once we can solve a few DEs. The first type are the separable first-order DEs.
Separable first-order DEs. These take the form y' = A(x)B(y). We proceed by calculating

dy/dx = A(x)B(y), ∫ dy/B(y) = ∫ A(x) dx + C,

and upon integration we obtain a family of solutions, via the arbitrary constant C, in implicit form. If convenient, we solve for y. If the form of the solution is sufficient so that for any choice of initial condition there is a value of C such that our solution satisfies the initial condition, then we are assured that we have all the solutions passing through points where Picard's theorem can be applied. Such a solution is called the general solution; however, with separable DEs we often "miss" a few solutions.

Example: y' = y². This is separable: ∫ y^{-2} dy = ∫ dx, so −1/y = x − C, i.e. y = 1/(C − x). If we impose the initial condition, say y(0) = 1, and plug in, we get 1 = y(0) = 1/C, so C = 1 and y = 1/(1 − x). The solution exists on the interval (−∞, 1) and "blows up" at x = 1. Try to graph the integral curves of this DE.

Example: y' = y² e^{−x}. This is somewhat interesting because some solutions "blow up" but some do not. Proceeding using the technique for separable DEs: ∫ dy/y² = ∫ e^{−x} dx, so −1/y = −e^{−x} − C, i.e. y = 1/(e^{−x} + C). Now say y(0) = 1. We find (you do it) that C = 0, and y = 1/e^{−x} = e^{x}. Then, proceeding similarly, you can see that for 0 < y(0) < 1 we have ∞ > C > 0, y exists for all x, and y → 1/C as x → ∞. On the other hand, if y(0) > 1, we then have −1 < C < 0, and for some x > 0 the denominator becomes 0 and the solution blows up. Again, try to sketch the integral curves of this DE.

Note: although in these examples the solution method gives us a one-parameter family of solutions, we do not obtain all the solutions; we are missing the solution y ≡ 0.

Linear first-order DEs. This is an important class of DEs. The form is y' + p(x)y = g(x). These DEs can be solved exactly in terms of integrals. However, first the existence/uniqueness theorem for initial value problems: the initial value problem y' + p(x)y = g(x), y(x0) = y0, has a unique solution in the largest open interval I that contains x0 for which p(x) and g(x) are continuous. Note that for linear DEs, solutions cannot blow up as long as p and g remain continuous.
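Returning to the separable example y' = y² with y(0) = 1 and exact solution y = 1/(1 − x): the closed form can be checked against a generic Runge-Kutta integration (the solver and step size are illustrative choices, not part of the notes):

```python
# RK4 on y' = y^2 with y(0) = 1; exact solution y = 1/(1 - x) on (-inf, 1).
def rk4_scalar(f, y, x, h, steps):
    """Classical 4th-order Runge-Kutta for the scalar equation y' = f(x, y)."""
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2)
        k4 = f(x + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

f = lambda x, y: y * y
y_half = rk4_scalar(f, 1.0, 0.0, 0.0005, 1000)    # integrate to x = 0.5
assert abs(y_half - 1.0 / (1.0 - 0.5)) < 1e-7      # exact value is 2
```

Trying to integrate past x = 1 would fail, numerically mirroring the blow-up of the exact solution at the end of its interval of existence.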
The solution method for linear first-order DEs relies on the idea of an integrating factor: a function that we multiply the equation by so that the left-hand side becomes a perfect derivative. The integrating factor is given by e^{∫p} for a linear first-order DE in standard form. Note that (e^{∫p})' = p(x) e^{∫p}. Now, multiplying y' + p(x)y = g(x) through by e^{∫p}:

e^{∫p} y' + e^{∫p} p(x) y = e^{∫p} g(x), i.e. (e^{∫p} y)' = e^{∫p} g(x),

and, integrating both sides, e^{∫p} y = ∫ e^{∫p} g(x) dx + C. We then can divide by e^{∫p}, which is never zero, to solve for y.

Two examples.

1. y' + y = x. Here p(x) = 1, so e^{∫p} = e^{x} is the integrating factor:

e^{x} y' + e^{x} y = e^{x} x, (e^{x} y)' = x e^{x}, e^{x} y = ∫ x e^{x} dx = x e^{x} − e^{x} + C,

so y = x − 1 + C e^{−x} is the general solution. It is in fact the general solution, since every initial condition can be satisfied by choosing an appropriate value of C; the solution method for linear first-order DEs in fact always provides all the solutions.

2. x y' − 3y = 1. This is a little "tricky" because, to apply the solution method, we must first put the equation in standard form: y' − (3/x) y = 1/x. Note that p(x) = −3/x and g(x) = 1/x, so x = 0 is a point where p, g are not continuous. Now we can apply the solution method. Let's consider first values of x > 0:

e^{∫p} = e^{∫ −3/x dx} = e^{−3 ln x} = x^{-3}.

So x^{-3} y' − 3 x^{-4} y = x^{-4}, (x^{-3} y)' = x^{-4}, x^{-3} y = −x^{-3}/3 + C, and y = −1/3 + C x³. It is easy to see that this also produces a family of solutions for x < 0 as well.

Second-order DEs. The initial value problem here takes the form y″ = f(x, y, y'), y(x0) = y0, y'(x0) = v0. Through the introduction of an auxiliary dependent variable v = y', such an equation can be rewritten as two first-order DEs:

y' = v, v' = f(x, y, v), y(x0) = y0, v(x0) = v0,

which makes it easier to see how we "start" the DE off with initial values of x, y, v; the DE then specifies the rate of change at any values of the variables, so that the solution can "evolve" from the initial condition. Now back to the so-called scalar form y″ = f(x, y, y'). Second-order DEs are probably the most studied, since they have interesting behavior that can be relatively easily studied and understood, and because of their connection with mechanics: F = ma, Newton's law of motion in one dimension, is a second-order differential equation.
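Looking back at example 1 of the integrating-factor method (y' + y = x): the claimed general solution y = x − 1 + C e^{−x} can be verified to satisfy the DE identically, for any C, in a few lines (a minimal sketch; the sample values of C and x are arbitrary):

```python
import math

# Check the integrating-factor solution of y' + y = x at a few sample points:
# y = x - 1 + C*e^{-x}  implies  y' = 1 - C*e^{-x},  so  y' + y = x identically.
for C in (-2.0, 0.0, 3.5):
    for x in (-1.0, 0.0, 2.0):
        y = x - 1.0 + C * math.exp(-x)
        dy = 1.0 - C * math.exp(-x)
        assert abs(dy + y - x) < 1e-12
```

The C-dependent terms cancel exactly, which is just the statement that C e^{−x} solves the homogeneous equation y' + y = 0.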
Here we will study only second order linear DEs.

Second order linear DEs: y'' + p(x)y' + q(x)y = f(x). What makes this linear, i.e. why do we call this a linear DE? One observation we could make is that, as a function of (y, y', y''), the left-hand side is linear, much as f(x, y, z) = ax + by + cz is considered a linear function of (x, y, z), except that here the coefficients of y, y', y'' can be functions of x and not just constants. However, we want to present an equivalent, more general condition, which is the "real" definition of linearity. We consider the left-hand side as an operation L on y: for a fixed choice of p, q we define
  L[y] = y'' + p(x)y' + q(x)y.
Now consider the properties of the operator L as it acts on functions in general. If u, u1, u2 denote functions of x and c, c1, c2 denote constants, we have
  L[u1 + u2] = L[u1] + L[u2] (additivity property),
  L[cu1] = cL[u1] (scaling property),
or, combining the two,
  L[c1 u1 + c2 u2] = c1 L[u1] + c2 L[u2] (distributive property of L).
These are the general properties of an operation that define "linearity": additivity and scaling. You should satisfy yourself that L[u] = u'' + p(x)u' + q(x)u is a linear operation. Thus L[y] = f(x) becomes what we call a linear differential equation, much as Ax = b is referred to as a system of linear algebraic equations, because A(x1 + x2) = Ax1 + Ax2 and A(cx) = cAx: the linearity properties hold for matrix multiplication by A. Having observed the above properties of L makes the subsequent development easier to understand.

Before proceeding, however, we state an existence-uniqueness theorem for initial value problems: The initial value problem y'' + p(x)y' + q(x)y = f(x), y(x0) = y0, y'(x0) = v0 has a unique solution in the largest open interval I containing x0 on which p, q, f are continuous. In the development below, whenever we refer to an interval I, it is understood to be a maximal open interval on which p, q, f are continuous.

Second order linear homogeneous DEs. We call the second order linear DE L[y] = f(x) homogeneous if f(x) ≡ 0, i.e. if the DE takes the form L[y] = 0. Now we develop properties of the general solution of L[y] = 0. First we have the superposition principle for solutions of linear homogeneous DEs.
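The linearity properties of L stated above can be verified symbolically for arbitrary coefficient functions p, q and arbitrary functions u1, u2; a sketch assuming SymPy:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
p = sp.Function('p')
q = sp.Function('q')
u1 = sp.Function('u1')
u2 = sp.Function('u2')

# The operator L[u] = u'' + p(x) u' + q(x) u for arbitrary coefficients p, q.
def L(u):
    return u.diff(x, 2) + p(x) * u.diff(x) + q(x) * u

# Linearity: L[c1 u1 + c2 u2] = c1 L[u1] + c2 L[u2].
lhs = L(c1 * u1(x) + c2 * u2(x))
rhs = c1 * L(u1(x)) + c2 * L(u2(x))
assert sp.expand(lhs - rhs) == 0
```

The check works because differentiation itself is additive and scales by constants, which is exactly why L inherits linearity.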
Superposition principle: If y1 and y2 are solutions of L[y] = 0, then y = c1y1 + c2y2 is also a solution, for any constants c1, c2. The proof is easy: L[c1y1 + c2y2] = c1L[y1] + c2L[y2] = 0 + 0 = 0. Of course the superposition principle extends to linear combinations of three solutions, but as we'll see, for second order DEs we only need two.

Now, on an interval I where initial value problems have unique solutions, if we have a family of solutions capable of satisfying any initial condition at any one value of x, then that family of solutions represents all the solutions on I; that is, we have the general solution on I. This is because, given any solution y of L[y] = f, if we match its values y(x0) and y'(x0) at a point x = x0 with a solution from our family, then that solution must be identically equal to y.

Given a homogeneous second order linear DE L[y] = 0, it makes sense then to look for a general solution of the form y = c1y1 + c2y2, where y1, y2 are specific solutions. This is because our two coefficients c1, c2 should, we hope, be sufficient so that this y can be made to satisfy any initial conditions y(x0) = y0, y'(x0) = v0 by choosing those coefficients appropriately. The question then arises: given solutions y1, y2 of L[y] = 0 on an interval I, when is y = c1y1 + c2y2 the general solution on I? Answer: y = c1y1 + c2y2 is the general solution on I if there is an x0 in I such that the equations
  y0 = c1 y1(x0) + c2 y2(x0)
  v0 = c1 y1'(x0) + c2 y2'(x0)
have a solution c1, c2 for any given choice of y0, v0. Now these are just two linear equations for the unknowns c1, c2, and we know that this linear system has a unique solution for any y0, v0 if and only if the determinant of the matrix of coefficients is nonzero. That is, y = c1y1 + c2y2 is the general solution of L[y] = 0 on I if there is an x0 in I for which
  det [ y1(x0)  y2(x0) ; y1'(x0)  y2'(x0) ] ≠ 0.
This determinant expression is called the Wronskian of y1, y2 and is denoted
  W(x0) = det [ y1(x0)  y2(x0) ; y1'(x0)  y2'(x0) ] = y1(x0)y2'(x0) - y2(x0)y1'(x0).

What else can we say?

1. Given solutions y1, y2 of L[y] = 0 on an interval I, if W(x0) ≠ 0 at some x0 in I, then W(x) ≠ 0 at every point x in I. This is true because we then have the general solution, and therefore can satisfy initial conditions at any point in I.
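The Wronskian criterion is easy to try on concrete solution pairs; a sketch assuming SymPy, using the standard solutions cos x, sin x of y'' + y = 0 as the test case:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(y1, y2):
    # W = y1*y2' - y2*y1', the 2x2 determinant of values and derivatives.
    return sp.simplify(y1 * y2.diff(x) - y2 * y1.diff(x))

# For y'' + y = 0, the solutions cos x and sin x give W ≡ 1 ≠ 0,
# so they form a fundamental set.
assert wronskian(sp.cos(x), sp.sin(x)) == 1

# Two solutions that are constant multiples of each other give W ≡ 0.
assert wronskian(sp.cos(x), 5 * sp.cos(x)) == 0
```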
2. Given solutions y1, y2 of L[y] = 0 on an interval I, if W(x0) = 0 at some x0 in I, then one of the solutions is a constant multiple of the other; equivalently, there are constants c1, c2, not both zero, for which c1y1 + c2y2 ≡ 0. This is true because W(x0) = 0 means, from the basic theory of linear equations, that there are values of c1, c2, not both zero, for which
  0 = c1 y1(x0) + c2 y2(x0)
  0 = c1 y1'(x0) + c2 y2'(x0).
But in that case, using those constants, the solution c1y1 + c2y2 satisfies zero initial conditions at x = x0 and hence must be identically zero, i.e. c1y1 + c2y2 ≡ 0. At least one of those constants is nonzero, and if, say, it is c1, we can then write y1 = -(c2/c1)y2, so that y1 is a constant multiple of y2.

Given 2 above, we can then say the following: if y1 and y2 are solutions of L[y] = 0 such that neither is a constant multiple of the other, then W(x) ≠ 0 on the interval I and y = c1y1 + c2y2 is the general solution of L[y] = 0. This condition is usually easy to verify just by looking at the two solutions under consideration for the general solution. A set of solutions y1, y2 of L[y] = 0 such that y = c1y1 + c2y2 is the general solution is called a fundamental set of solutions. It is always possible to find (at least theoretically or numerically) a fundamental set of solutions: just pick an x0 in I, define y1 as the solution of the initial value problem L[y] = 0, y(x0) = 1, y'(x0) = 0, and define y2 as the solution of the initial value problem L[y] = 0, y(x0) = 0, y'(x0) = 1. Then these two solutions are a fundamental set, since W(x0) = 1; or, even more directly, because the solution of L[y] = 0, y(x0) = y0, y'(x0) = v0 is given by y = y0 y1(x) + v0 y2(x), as the initial conditions are easily seen to be satisfied by this y.

Nonhomogeneous second order linear DEs. Now consider L[y] = f(x). We use yp to denote a specific, or particular, solution of this DE. Using this particular solution together with a fundamental set of solutions y1, y2 of the corresponding homogeneous DE L[y] = 0 (the same L, of course), we can then represent all the
solutions of L[y] = f(x) in the form y = c1y1 + c2y2 + yp. It is easy to check that for this y we have L[y] = c1L[y1] + c2L[y2] + L[yp] = 0 + 0 + f(x) = f(x), so that all such y are solutions. Moreover, it is easy to see that any set of initial conditions can be satisfied by choosing appropriate values of c1, c2. This makes y = c1y1 + c2y2 + yp the general solution. For an alternative approach, let y be any solution of L[y] = f(x) and consider the function y - yp. We have L[y - yp] = L[y] - L[yp] = f(x) - f(x) = 0, and since y - yp satisfies the homogeneous DE, we have y - yp = c1y1 + c2y2, and so y = c1y1 + c2y2 + yp. Thus we need just one solution of L[y] = f(x), together with the two solutions that make up a fundamental set of solutions of the homogeneous equation, in order to generate all solutions of L[y] = f(x).

The variation of parameters formula for yp. It turns out that we can write a formula for yp in terms of integrals involving f(x), the nonhomogeneous term, and the solutions y1, y2 of the homogeneous equation L[y] = 0. This formula is often derived using a "trick," as in the book; we will take a different approach here. In any case, the formula is
  yp = -y1 ∫ (y2 f / W) dx + y2 ∫ (y1 f / W) dx,
where W(x) is the Wronskian of y1, y2.

To begin our discussion, we observe another consequence of linearity, a superposition principle for nonhomogeneous DEs: if u1 satisfies L[u1] = f1(x) and u2 satisfies L[u2] = f2(x), then a solution of L[y] = c1 f1(x) + c2 f2(x) is y = c1 u1 + c2 u2. This observation is just the definition of the linearity of L, slightly modified.

Now pick some value of x, say x = a, and consider the solution u(x) of L[y] = f(x) with y(a) = 0, y'(a) = 0. We break f(x) up into a sum of functions f(x) = Σ_i f_i(x), where f_i(x) = f(x) on the interval (x_{i-1}, x_i] and is zero otherwise (don't worry about the overlap, and don't worry about the discontinuity in f_i). Now, what can we say about u_i(t), the solution at x = t of L[y] = f_i(x), y(a) = 0, y'(a) = 0? We will assume t > a. If (x_{i-1}, x_i] is outside the interval (a, t], then u_i(t) = 0, since in that case between a and t the equation L[y] = f_i(x) just boils down to L[y] = 0, and the zero initial conditions ensure that u_i ≡ 0 between a and t, since f_i never gets "turned on." In essence, we are saying that, given our
initial conditions, the value of u at a given x = t depends only on the behavior of f(x) between a and t. If we then fix things so that a = x_0 < x_1 < … < x_n = t, we have u(t) = Σ_i u_i(t). So suppose now that a < x_{i-1} < x_i < t. What is u_i(x)? Well, we can say u_i(x) = 0 for a < x < x_{i-1}, and for x_i < x < t we have L[u_i] = 0; so we can find u_i(x) for x_i < x ≤ t if we know the values of u_i(x_i) and u_i'(x_i), because the behavior of u_i is governed by the homogeneous equation on that interval, and we assume that we have a fundamental set y1, y2 of solutions of the homogeneous equation. We can rewrite our DE for u_i in the form y' = v, v' = -p(x)v - q(x)y + f_i(x). At x_{i-1} we have y = 0 and v = 0, and integrating the equations above over the (assumed short) interval from x_{i-1} to x_i, we obtain approximately
  y(x_i) = ∫ v(s) ds ≈ 0,
  y'(x_i) = v(x_i) = ∫ [-p(s)v(s) - q(s)y(s) + f_i(s)] ds ≈ f_i(x_i)Δx
(both integrals taken from x_{i-1} to x_i; the error is on the order of (Δx)²). Given now the values of y, y' at x = x_i, we can find u_i(x), x_i ≤ x ≤ t, that satisfies these initial conditions.

To help in this calculation, we solve a related problem: for an arbitrarily specified value of s, find the solution Y(x) of L[y] = 0, y(s) = 0, y'(s) = 1. Writing Y = c1y1 + c2y2 and applying the initial conditions, we obtain
  Y(s) = 0 = c1 y1(s) + c2 y2(s),
  Y'(s) = 1 = c1 y1'(s) + c2 y2'(s).
Using Cramer's rule we solve for c1, c2:
  c1 = det[ 0  y2(s) ; 1  y2'(s) ] / W(s) = -y2(s)/W(s),
  c2 = det[ y1(s)  0 ; y1'(s)  1 ] / W(s) = y1(s)/W(s),
and so
  Y(x) = [y1(s)y2(x) - y2(s)y1(x)] / W(s).
The dependence of Y(x) on s is apparent, so we will use the notation Y(x, s).

Now, to find u_i(x), we adapt Y(x, s) by putting s = x_i and multiplying Y(x, x_i) by f(x_i)Δx, to produce the correct value u_i'(x_i) = f(x_i)Δx. We obtain then, for x > x_i, u_i(x) ≈ Y(x, x_i)f(x_i)Δx, and at x = t we have u_i(t) ≈ Y(t, x_i)f(x_i)Δx. Now our particular solution is u(t) = Σ_i u_i(t) ≈ Σ_i Y(t, x_i)f(x_i)Δx. As Δx gets smaller and smaller, the approximation becomes more and more accurate, while the sum converges to an integral that gives the exact value of u(t), namely
  u(t) = ∫_a^t Y(t, s)f(s) ds = ∫_a^t [y1(s)y2(t) - y2(s)y1(t)] f(s)/W(s) ds
       = y2(t) ∫_a^t y1(s)f(s)/W(s) ds - y1(t) ∫_a^t y2(s)f(s)/W(s) ds.
If we now change the definite integrals into indefinite integrals, we are essentially adding on an arbitrary constant; the effect of this on u(t) is to add on a term of the form C1y1(t) + C2y2(t), i.e. a homogeneous solution.
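The Riemann-sum construction above can be mirrored numerically. As a sketch (assuming only the Python standard library), take y'' + y = f(x) with rest initial conditions at a = 0: the fundamental set is cos, sin with W ≡ 1, so Y(t, s) = cos(s)sin(t) - sin(s)cos(t) = sin(t - s), and for f ≡ 1 the exact rest solution is u(t) = 1 - cos t:

```python
import math

# Impulse response for y'' + y = 0: Y(t, s) = sin(t - s),
# since y1 = cos, y2 = sin and W ≡ 1.
def Y(t, s):
    return math.sin(t - s)

# Riemann-sum version of u(t) ≈ Σ_i Y(t, x_i) f(x_i) Δx on [0, t].
def particular(t, f, n=20000):
    dx = t / n
    return sum(Y(t, (i + 0.5) * dx) * f((i + 0.5) * dx) * dx
               for i in range(n))

# For f ≡ 1 the exact rest solution of y'' + y = 1 is u(t) = 1 - cos t.
t = 2.0
assert abs(particular(t, lambda s: 1.0) - (1 - math.cos(t))) < 1e-6
```

Refining the partition (larger n) drives the sum toward the integral ∫ Y(t, s)f(s) ds, exactly as in the limit argument above.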
Such a term does not affect the fact that u(t) represents a particular solution. Hence we obtain the usual variation of parameters formula
  yp(x) = -y1 ∫ (y2 f / W) dx + y2 ∫ (y1 f / W) dx.
But more instructive is the formula u(t) = ∫_a^t Y(t, s)f(s) ds. The function Y(t, s) is sometimes referred to as the impulse response. It represents the value of the solution at t when the solution begins at rest at x = a and remains at rest until x = s, when it experiences a "kick" produced by a "delta function" right-hand side, L[y] = δ(x - s), which instantaneously gives it a rate of change of 1. Then Y(t, s) is the value at x = t of the solution produced by the right-hand side δ(x - s). The total response at x = t is then a sum (in the limit, an integral) of such impulse responses, scaled by the sizes f(x)Δx of the impulses that occur between x = a and x = t. Using delta functions instead of limits of sums: because we have f(x) = ∫_a^t δ(x - s)f(s) ds for a ≤ x ≤ t, the solution at x = t is the same weighted superposition of impulse responses, namely y(t) = ∫_a^t Y(t, s)f(s) ds.

Constant coefficient linear DEs. In the 2nd order case we will write these as L[y] = ay'' + by' + cy = f(x), and the homogeneous case is ay'' + by' + cy = 0. Solutions are obtained in the form y = e^{λx}. When we plug in, we obtain L[e^{λx}] = (aλ² + bλ + c)e^{λx}, and we have L[y] = 0 for y = e^{λx} if and only if aλ² + bλ + c = 0. This is called the characteristic equation of the DE, and the two roots of this quadratic determine, as follows, a fundamental set of solutions of L[y] = 0.

1. Distinct real roots. When there are two distinct real roots λ1 ≠ λ2 (e.g. λ² + 2λ - 3 = 0, with roots λ1 = 1, λ2 = -3), the two solutions are y1 = e^{λ1 x} and y2 = e^{λ2 x}, and these form a fundamental set, as neither of these functions is a constant multiple of the other.

2. Repeated real root. When the roots are real and repeated (e.g. λ² + 2λ + 1 = 0, with the single root λ1 = -1), there is only one distinct root λ = λ1 and only one solution of the form y = e^{λ1 x}. Another solution can be obtained by various arguments, but here is one. If we think of e^{λx} as a function of x and of λ, then we have L[e^{λx}] = a(λ - λ1)²e^{λx}, because the root λ = λ1 is repeated. If we now take the partial derivative of both sides with
respect to λ, and interchange the derivatives with respect to x and λ (so that the derivatives with respect to x are carried out first), then we have
  ∂/∂λ L[e^{λx}] = L[xe^{λx}],
while
  ∂/∂λ [a(λ - λ1)²e^{λx}] = 2a(λ - λ1)e^{λx} + ax(λ - λ1)²e^{λx},
and so, setting the two equal, we have L[xe^{λx}] = 2a(λ - λ1)e^{λx} + ax(λ - λ1)²e^{λx}. Setting λ = λ1, we see that L[xe^{λ1 x}] = 0 when λ = λ1 is a repeated root of the characteristic equation. So, in the case that λ = λ1 is a repeated root, our two solutions are y1 = e^{λ1 x} and y2 = xe^{λ1 x}.

3. Complex roots. Here the roots are λ1 = μ + iν, λ2 = μ - iν, where μ, ν are real numbers (e.g. λ² + 6λ + 10 = 0). We could write as our two solutions y1 = e^{λ1 x} and y2 = e^{λ2 x}, but first we need to know what we mean by e^{(μ+iν)x}, and second, it may be more convenient to work with real-valued solutions. To answer the first question, we can decide that e^{(μ+iν)x} = e^{μx}e^{iνx}, as this is necessarily a property of this function. Next we can observe, for several reasons, that e^{ix} = cos x + i sin x, known as the Euler formula, should be true. For one reason, note that (e^{ix})' = ie^{ix} should be a property of the function, and that
  (cos x + i sin x)' = -sin x + i cos x = i(cos x + i sin x),
so these functions both satisfy the same differential equation; and at x = 0 we have e^{i·0} = 1 = cos 0 + i sin 0. Thus cos x + i sin x behaves just as we would want e^{ix} to behave, so we define e^{ix} = cos x + i sin x. In that case e^{(μ+iν)x} = e^{μx}(cos νx + i sin νx) is the "correct" definition. If we plug this into the equation L[y] = 0, we see that the real and imaginary parts must separately vanish, so we may take as real-valued solutions y1 = e^{μx} cos νx, y2 = e^{μx} sin νx. In the case of the example y'' + 6y' + 10y = 0, we have λ² + 6λ + 10 = 0, λ = (-6 ± √(36 - 40))/2 = -3 ± i, and our two solutions are y1 = e^{-3x} cos x, y2 = e^{-3x} sin x.

Higher-order constant coefficient DEs. Here the DE is a_n y^(n) + … + a_1 y' + a_0 y = 0, and the characteristic equation arising from seeking solutions y = e^{λx} is a_n λ^n + … + a_1 λ + a_0 = 0. One may have real or complex roots, some of which may be repeated, i.e. have multiplicity greater than one. When roots are repeated, additional solutions are obtained by multiplying the basic solution (either y = e^{λx} with λ real, or the pair y = e^{μx} cos νx, y = e^{μx} sin νx) by powers of x until one obtains a number of solutions equal to the multiplicity of the root.
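All three root cases above can be verified directly by plugging the claimed solutions into the DEs; a sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Case 1 (distinct real roots): y'' + 2y' - 3y = 0 has roots λ = 1, -3.
L1 = lambda u: u.diff(x, 2) + 2*u.diff(x) - 3*u
assert sp.simplify(L1(sp.exp(x))) == 0
assert sp.simplify(L1(sp.exp(-3*x))) == 0

# Case 2 (repeated root): y'' + 2y' + y = 0 has (λ + 1)² = 0, λ1 = -1;
# both e^{-x} and x e^{-x} are solutions.
L2 = lambda u: u.diff(x, 2) + 2*u.diff(x) + u
assert sp.simplify(L2(sp.exp(-x))) == 0
assert sp.simplify(L2(x * sp.exp(-x))) == 0

# Case 3 (complex roots): y'' + 6y' + 10y = 0 has roots -3 ± i;
# real solutions e^{-3x} cos x and e^{-3x} sin x.
L3 = lambda u: u.diff(x, 2) + 6*u.diff(x) + 10*u
assert sp.simplify(L3(sp.exp(-3*x) * sp.cos(x))) == 0
assert sp.simplify(L3(sp.exp(-3*x) * sp.sin(x))) == 0
```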
As an example, suppose L[e^{λx}] = p(λ)e^{λx} with
  p(λ) = (λ - 1)²(λ² + 6λ + 10)³(λ + 3) = 0
as the characteristic equation, where the 9th degree polynomial p is in factored form. Then our fundamental set of solutions is the 9 solutions:
  e^{x}, xe^{x}, corresponding to the root λ = 1 of multiplicity 2;
  e^{-3x} cos x, xe^{-3x} cos x, x²e^{-3x} cos x and e^{-3x} sin x, xe^{-3x} sin x, x²e^{-3x} sin x, corresponding to the pair of roots λ = -3 ± i of multiplicity 3;
  e^{-3x}, corresponding to the root λ = -3 of multiplicity 1.

Nonhomogeneous equations: method of undetermined coefficients. This method provides the form of a particular solution for constant coefficient DEs when the nonhomogeneous function is of the special form f(x) = p(x)e^{ax} cos bx or f(x) = p(x)e^{ax} sin bx, where p(x) represents some given polynomial. In this case a particular solution can be found in the form
  yp(x) = x^s [P(x)e^{ax} cos bx + Q(x)e^{ax} sin bx],
where P(x) and Q(x) represent general polynomials of the same degree as p(x) whose coefficients are as yet undetermined, and s is the smallest nonnegative integer such that no term of yp(x), when multiplied out, is a solution of the homogeneous equation L[y] = 0. Often s = 0. The coefficients in P and Q are determined by plugging into the DE and matching coefficients.

Things to note: an f(x) of the form f(x) = p(x), or f(x) = e^{ax}, or f(x) = sin bx, or combinations of two of these, can all be considered special cases of the general form f(x) = p(x)e^{ax} cos bx + q(x)e^{ax} sin bx. If f(x) is a sum of terms of the above form, then yp can be obtained as a sum of terms of the corresponding forms.

Examples:

1. y'' + 2y' - 3y = e^{2x}. Note that y'' + 2y' - 3y = 0 has solutions e^{x}, e^{-3x}. Our particular solution has the form yp = Ae^{2x}, where s = 0 since e^{2x} is not a solution of the homogeneous equation. Plugging in, we find L[Ae^{2x}] = 5Ae^{2x} = e^{2x} if A = 1/5, so we have yp = (1/5)e^{2x}.

2. y'' + 2y' - 3y = e^{x}. Examining s = 0, we would have yp = Ae^{x}, which is a solution of the homogeneous equation, so we try s = 1: yp = Axe^{x}, which is not a solution of the homogeneous equation and so is the appropriate form. Plugging in, we have L[yp] = (2Ae^{x} + Axe^{x}) + 2(Ae^{x} + Axe^{x}) - 3Axe^{x} = 4Ae^{x} = e^{x} if A = 1/4, so that yp = (1/4)xe^{x}.

3. y'' + 2y' - 3y = cos x + sin 2x. Here we break up the problem, solving y'' + 2y' - 3y = cos x for a particular
solution yp1, and then solving y'' + 2y' - 3y = sin 2x for a particular solution yp2:
  (i) y'' + 2y' - 3y = cos x: yp1 = A cos x + B sin x; plug in and find A and B.
  (ii) y'' + 2y' - 3y = sin 2x: yp2 = A cos 2x + B sin 2x; plug in and find A and B (not the same A, B as in part (i)).
Then yp = yp1 + yp2.

4. y'' + 4y = sin 2x. Here our homogeneous solutions are y1 = cos 2x, y2 = sin 2x. For yp we try first s = 0 and yp = A cos 2x + B sin 2x, but these are solutions of L[y] = 0, so we try s = 1 and obtain yp = x(A cos 2x + B sin 2x) as the correct form for yp.

5. y'' + 2y' = x² + e^{-x} + x cos 2x + sin 2x. The homogeneous equation has characteristic equation λ² + 2λ = 0 and so solutions y1 = e^{-2x}, y2 = 1. Now we write the form of yp:
  yp = x¹(Ax² + Bx + C) + De^{-x} + (Ex + F) cos 2x + (Gx + H) sin 2x.
Can you explain why this is? More important than the precise solution itself is the fact that we can figure out the form of the solution with complete reliability.
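The coefficient-matching step can be carried out symbolically; a sketch of example 2 above, assuming SymPy:

```python
import sympy as sp

x, A = sp.symbols('x A')

def L(u):
    # L[y] = y'' + 2y' - 3y, with homogeneous solutions e^x and e^{-3x}.
    return u.diff(x, 2) + 2 * u.diff(x) - 3 * u

# Example 2: f(x) = e^x solves L[y] = 0, so take s = 1 and y_p = A x e^x.
yp = A * x * sp.exp(x)
residual = sp.simplify(L(yp) / sp.exp(x))   # reduces to 4A
Aval = sp.solve(sp.Eq(residual, 1), A)[0]
assert Aval == sp.Rational(1, 4)

# Confirm that y_p = (1/4) x e^x satisfies L[y_p] = e^x.
assert sp.simplify(L(yp.subs(A, Aval)) - sp.exp(x)) == 0
```

The same pattern (write the trial form, divide out the exponential, match coefficients) handles each of the examples above.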
