
# INTERMED POL METHOD POLS 7014

UGA



These 15 pages of class notes were uploaded by Estelle Prosacco on Saturday, September 12, 2015. The notes are for POLS 7014 at the University of Georgia, taught by Staff in Fall. Since the upload, they have received 45 views. For similar materials, see /class/202251/pols-7014-university-of-georgia in Political Science at the University of Georgia.


Date Created: 09/12/15

# Introduction to Matrices

## 1. The Cast of Characters

- A **matrix** is a rectangular array (i.e., a table) of numbers. For example,

$$
\mathbf{X}_{(4\times 3)}=\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\\0&0&10\end{bmatrix}
$$

  This matrix, with 4 rows and 3 columns, is of *order* 4 by 3. For clarity, I will often show the order in parentheses below a matrix. Note that a matrix is represented by a boldface uppercase letter.
- A more general example:

$$
\mathbf{A}_{(m\times n)}=\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{bmatrix}
$$

  This matrix is of order $m\times n$; $a_{ij}$ is the *entry* (or *element*) in the $i$th row and $j$th column of the matrix. An individual number (e.g., 2 or $a_{12}$), such as a matrix element, is called a **scalar**.
- Two matrices are **equal** if they are of the same order and all corresponding entries are equal.
- A **column vector** is a one-column matrix,

$$
\mathbf{a}_{(n\times 1)}=\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}
$$

  Likewise, a **row vector** is a one-row matrix, $\mathbf{b}'=[b_1,b_2,\ldots,b_n]$. I adopt the convention that row vectors are always written with a prime. Note that boldface lowercase letters are used to represent vectors.
- More generally, the **transpose** of a matrix $\mathbf{X}$, written $\mathbf{X}'$ or $\mathbf{X}^T$, interchanges rows and columns; for example,

$$
\mathbf{X}'_{(3\times 4)}=\begin{bmatrix}1&4&7&0\\2&5&8&0\\3&6&9&10\end{bmatrix}
$$

- A **square matrix** of order $n$ has $n$ rows and $n$ columns; for example,

$$
\mathbf{B}_{(3\times 3)}=\begin{bmatrix}5&1&3\\2&2&6\\7&3&4\end{bmatrix}
$$

  The **main diagonal** of a square matrix $\mathbf{B}_{(n\times n)}$ consists of the entries $b_{ii}$, $i=1,2,\ldots,n$. In the example, the main diagonal consists of the entries 5, 2, and 4.
- The **trace** of a square matrix is the sum of its diagonal elements:

$$
\mathrm{trace}(\mathbf{B})=\sum_{i=1}^n b_{ii}=5+2+4=11
$$

- A square matrix is **symmetric** if it is equal to its transpose; that is, $\mathbf{A}$ is symmetric if $a_{ij}=a_{ji}$ for all $i$ and $j$. Thus $\mathbf{B}$ above is not symmetric, while

$$
\mathbf{C}=\begin{bmatrix}5&1&3\\1&2&6\\3&6&4\end{bmatrix}
$$

  is symmetric.
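These definitions are easy to check numerically. A brief sketch using NumPy (an illustration added here, not part of the original notes), with the example matrices above:

```python
import numpy as np

# The 4 x 3 matrix X and the square matrices B and C from the examples above.
X = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [0, 0, 10]])
B = np.array([[5, 1, 3],
              [2, 2, 6],
              [7, 3, 4]])
C = np.array([[5, 1, 3],
              [1, 2, 6],
              [3, 6, 4]])

print(X.shape)                 # (4, 3): order 4 by 3
print(X.T.shape)               # (3, 4): the transpose interchanges rows and columns
print(np.trace(B))             # 5 + 2 + 4 = 11
print(np.array_equal(B, B.T))  # False: B is not symmetric
print(np.array_equal(C, C.T))  # True:  C equals its transpose
```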
- A square matrix is **lower-triangular** if all entries above its main diagonal are 0, for example

$$
\mathbf{L}=\begin{bmatrix}1&0&0\\5&2&0\\7&3&4\end{bmatrix}
$$

  Similarly, an **upper-triangular** matrix has zeroes below the main diagonal, for example

$$
\mathbf{U}=\begin{bmatrix}5&5&2\\0&2&4\\0&0&3\end{bmatrix}
$$

- A **diagonal matrix** is a square matrix with all off-diagonal entries equal to 0, for example

$$
\mathbf{D}=\mathrm{diag}(6,2,7)=\begin{bmatrix}6&0&0\\0&2&0\\0&0&7\end{bmatrix}
$$

- A **scalar matrix** is a diagonal matrix with equal diagonal entries, for example

$$
\begin{bmatrix}3&0&0\\0&3&0\\0&0&3\end{bmatrix}
$$

- An **identity matrix** is a scalar matrix with ones on the diagonal, for example

$$
\mathbf{I}_3=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}
$$

- A **unit vector** has all of its entries equal to 1, for example $\mathbf{1}_{(4\times 1)}=[1,1,1,1]'$.
- A **zero matrix** has all of its elements equal to 0, for example $\mathbf{0}_{(1\times 3)}=[0,0,0]$ or $\mathbf{0}_{(2\times 3)}$.

## 2. Basic Matrix Arithmetic

### 2.1 Addition, Subtraction, Negation, and the Product of a Matrix and a Scalar

- Addition and subtraction are defined for matrices of the same order and are elementwise operations; for example, for

$$
\mathbf{A}_{(2\times 3)}=\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix},\qquad
\mathbf{B}_{(2\times 3)}=\begin{bmatrix}-5&1&2\\3&0&-4\end{bmatrix}
$$

$$
\mathbf{A}+\mathbf{B}=\begin{bmatrix}-4&3&5\\7&5&2\end{bmatrix},\qquad
\mathbf{A}-\mathbf{B}=\begin{bmatrix}6&1&1\\1&5&10\end{bmatrix}
$$

- Matrix negation and the product of a matrix and a scalar are also elementwise operations; examples:

$$
-\mathbf{B}=\begin{bmatrix}5&-1&-2\\-3&0&4\end{bmatrix},\qquad
3\times\mathbf{B}=\mathbf{B}\times 3=\begin{bmatrix}-15&3&6\\9&0&-12\end{bmatrix}
$$

- Consequently, matrix addition, subtraction, and negation, and the product of a matrix by a scalar, obey familiar rules of scalar arithmetic:
  - $\mathbf{A}+\mathbf{B}=\mathbf{B}+\mathbf{A}$ (commutativity)
  - $\mathbf{A}+(\mathbf{B}+\mathbf{C})=(\mathbf{A}+\mathbf{B})+\mathbf{C}$ (associativity)
  - $\mathbf{A}-\mathbf{B}=\mathbf{A}+(-\mathbf{B})=-(\mathbf{B}-\mathbf{A})$
  - $\mathbf{A}-\mathbf{A}=\mathbf{0}$ ($-\mathbf{A}$ is the additive inverse of $\mathbf{A}$)
  - $\mathbf{A}+\mathbf{0}=\mathbf{A}$ ($\mathbf{0}$ is the additive identity)
  - $c\mathbf{A}=\mathbf{A}c$ (commutativity)
  - $c(\mathbf{A}+\mathbf{B})=c\mathbf{A}+c\mathbf{B}$ and $\mathbf{A}(b+c)=\mathbf{A}b+\mathbf{A}c$ (distributive laws)
  - $0\mathbf{A}=\mathbf{0}$ (zero)
  - $1\mathbf{A}=\mathbf{A}$ (unit)
  - $(-1)\mathbf{A}=-\mathbf{A}$ (negation)

  where $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{0}$ are matrices of the same order, and $b$, $c$, $0$, $1$, and $-1$ are scalars.
- Another useful rule is that matrix transposition distributes over addition: $(\mathbf{A}+\mathbf{B})'=\mathbf{A}'+\mathbf{B}'$.

### 2.2 Inner Product and Matrix Product

- The **inner product** (or dot product) of two vectors with equal numbers of elements is the sum of the products of corresponding elements; that is,

$$
\mathbf{a}'_{(1\times n)}\cdot\mathbf{b}_{(n\times 1)}=\sum_{i=1}^n a_i b_i
$$

  Note: the inner product is defined similarly between two row vectors or two column vectors.
- For example,

$$
[2,0,1,3]\cdot\begin{bmatrix}-1\\6\\0\\9\end{bmatrix}=2(-1)+0(6)+1(0)+3(9)=25
$$
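The elementwise operations and the inner product above can be verified with NumPy (again an added illustration, using the matrices and vectors from the examples):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[-5, 1, 2],
              [ 3, 0, -4]])

print(A + B)                                 # [[-4 3 5], [7 5 2]]
print(A - B)                                 # [[6 1 1], [1 5 10]]
print(3 * B)                                 # [[-15 3 6], [9 0 -12]]
print(np.array_equal((A + B).T, A.T + B.T))  # True: transposition distributes

# Inner product of the example vectors: 2(-1) + 0(6) + 1(0) + 3(9) = 25.
a = np.array([2, 0, 1, 3])
b = np.array([-1, 6, 0, 9])
print(a @ b)                                 # 25
```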
- The **matrix product** $\mathbf{AB}$ is defined only if the number of columns of $\mathbf{A}$ equals the number of rows of $\mathbf{B}$; in this case, $\mathbf{A}$ and $\mathbf{B}$ are said to be **conformable** for multiplication. Some examples:

$$
\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}_{(2\times 3)}\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}_{(3\times 3)}\quad\text{conformable}
$$

$$
\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}_{(3\times 3)}\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}_{(2\times 3)}\quad\text{not conformable}
$$

  A $1\times 4$ row vector times a $4\times 1$ column vector is conformable, and any two $2\times 2$ matrices are conformable in either order.
- In general, if $\mathbf{A}$ is $m\times n$, then for the product $\mathbf{AB}$ to be defined, $\mathbf{B}$ must be $n\times p$, where $m$ and $p$ are unconstrained. Note that $\mathbf{BA}$ may not be defined even if $\mathbf{AB}$ is (i.e., unless $m=p$).
- The matrix product is defined in terms of the inner product. Let $\mathbf{a}_i'$ represent the $i$th row of the $m\times n$ matrix $\mathbf{A}$, and let $\mathbf{b}_j$ represent the $j$th column of the $n\times p$ matrix $\mathbf{B}$. Then $\mathbf{C}=\mathbf{AB}$ is an $m\times p$ matrix with entries

$$
c_{ij}=\mathbf{a}_i'\cdot\mathbf{b}_j=\sum_{k=1}^n a_{ik}b_{kj}
$$

  That is, the matrix product is formed by successively multiplying each row of the left-hand factor into each column of the right-hand factor.
- An example:

$$
\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}
=\begin{bmatrix}1(1)+2(0)+3(0)&1(0)+2(1)+3(0)&1(0)+2(0)+3(1)\\4(1)+5(0)+6(0)&4(0)+5(1)+6(0)&4(0)+5(0)+6(1)\end{bmatrix}
=\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}
$$

- In summary:

| $\mathbf{A}$ | $\mathbf{B}$ | $\mathbf{AB}$ |
|---|---|---|
| $m\times n$ matrix | $n\times p$ matrix | $m\times p$ matrix |
| $n\times n$ square matrix | $n\times n$ square matrix | $n\times n$ square matrix |
| $1\times n$ row vector | $n\times p$ matrix | $1\times p$ row vector |
| $m\times n$ matrix | $n\times 1$ column vector | $m\times 1$ column vector |
| $1\times n$ row vector | $n\times 1$ column vector | $1\times 1$ "scalar" |
| $m\times 1$ column vector | $1\times n$ row vector | $m\times n$ matrix |

- Some properties of matrix multiplication:
  - $\mathbf{A}(\mathbf{BC})=(\mathbf{AB})\mathbf{C}$ (associativity)
  - $\mathbf{A}(\mathbf{B}+\mathbf{C})=\mathbf{AB}+\mathbf{AC}$ and $(\mathbf{B}+\mathbf{C})\mathbf{A}=\mathbf{BA}+\mathbf{CA}$ (distributive laws)
  - $\mathbf{I}_m\,\mathbf{A}_{(m\times n)}=\mathbf{A}\,\mathbf{I}_n=\mathbf{A}$ (unit)
  - $\mathbf{0A}=\mathbf{0}$ and $\mathbf{A0}=\mathbf{0}$ (zero)
  - $(\mathbf{AB})'=\mathbf{B}'\mathbf{A}'$ (transpose of a product), and more generally $(\mathbf{AB}\cdots\mathbf{F})'=\mathbf{F}'\cdots\mathbf{B}'\mathbf{A}'$
- But matrix multiplication is not, in general, commutative.
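Conformability and non-commutativity are easy to see with NumPy; the particular row and column vectors below are illustrative (their entries are not from the original notes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # 2 x 3
I3 = np.eye(3, dtype=int)            # 3 x 3 identity

print(A @ I3)                        # conformable: (2x3)(3x3) -> 2x3, and A I3 = A
try:
    I3 @ A                           # (3x3)(2x3) is not conformable
except ValueError as err:
    print("not conformable:", err)

row = np.array([[1, 0, 1, 2]])           # 1 x 4
col = np.array([[2], [1], [0], [3]])     # 4 x 1
print(row @ col)                     # 1 x 1 "scalar": [[8]]
print((col @ row).shape)             # (4, 4): BA has a different order than AB
```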
  For $\mathbf{A}_{(m\times n)}$ and $\mathbf{B}_{(n\times p)}$, the product $\mathbf{BA}$ is not defined unless $m=p$. For $\mathbf{A}_{(m\times n)}$ and $\mathbf{B}_{(n\times m)}$, the product $\mathbf{AB}$ is $m\times m$ and $\mathbf{BA}$ is $n\times n$, so they cannot be equal unless $m=n$. Even when $\mathbf{A}$ and $\mathbf{B}$ are both $n\times n$, the products $\mathbf{AB}$ and $\mathbf{BA}$ need not be equal. When $\mathbf{AB}$ and $\mathbf{BA}$ are equal, the matrices $\mathbf{A}$ and $\mathbf{B}$ are said to **commute**.

### 2.3 The Sense Behind Matrix Multiplication

- The definition of matrix multiplication makes it simple to formulate systems of scalar equations as a single matrix equation, often providing a useful level of abstraction.
- For example, consider the following system of two linear equations in two unknowns, $x_1$ and $x_2$:

$$
2x_1+5x_2=4
$$
$$
x_1+3x_2=5
$$

  Writing these equations as a matrix equation:

$$
\begin{bmatrix}2&5\\1&3\end{bmatrix}_{(2\times 2)}\begin{bmatrix}x_1\\x_2\end{bmatrix}_{(2\times 1)}=\begin{bmatrix}4\\5\end{bmatrix}_{(2\times 1)}
$$

- More generally, for $m$ linear equations in $n$ unknowns: $\mathbf{A}_{(m\times n)}\,\mathbf{x}_{(n\times 1)}=\mathbf{b}_{(m\times 1)}$.

## 3. Matrix Inversion

- In scalar algebra, division is essential to the solution of simple equations. For example, $6x=12$, or equivalently $\frac{1}{6}\times 6x=\frac{1}{6}\times 12$, so $x=6^{-1}\times 12=2$, where $6^{-1}=\frac{1}{6}$ is the scalar inverse of 6.
- In matrix algebra there is no direct analog of division, but most square matrices have a **matrix inverse**. The matrix inverse of the $n\times n$ matrix $\mathbf{A}$ is an $n\times n$ matrix $\mathbf{A}^{-1}$ such that $\mathbf{AA}^{-1}=\mathbf{A}^{-1}\mathbf{A}=\mathbf{I}_n$.
- Square matrices that have an inverse are termed **nonsingular**; square matrices with no inverse are **singular** (i.e., strange, unusual).
- If an inverse of a square matrix exists, it is unique. Moreover, if for square matrices $\mathbf{A}$ and $\mathbf{B}$ we have $\mathbf{AB}=\mathbf{I}$, then necessarily $\mathbf{BA}=\mathbf{I}$, and $\mathbf{B}=\mathbf{A}^{-1}$ is the inverse of $\mathbf{A}$.
- In scalar algebra, only the number 0 has no inverse (i.e., $0^{-1}$ is undefined). In matrix algebra, $\mathbf{0}_{(n\times n)}$ is singular, but there are also nonzero singular matrices.
- For example, take the matrix

$$
\mathbf{A}=\begin{bmatrix}1&0\\0&0\end{bmatrix}
$$

  and suppose, for the sake of argument, that its inverse is

$$
\mathbf{B}=\begin{bmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{bmatrix}
$$

  But

$$
\mathbf{AB}=\begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{bmatrix}=\begin{bmatrix}b_{11}&b_{12}\\0&0\end{bmatrix}\ne\mathbf{I}_2
$$

  contradicting the claim that $\mathbf{B}$ is the inverse of $\mathbf{A}$, and so $\mathbf{A}$ has no inverse.
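The distinction between nonsingular and singular matrices shows up directly in numerical software; for instance, NumPy refuses to invert the singular matrix from the example (an added illustration):

```python
import numpy as np

# A nonsingular matrix: A A^{-1} = A^{-1} A = I.
A = np.array([[2.0, 5.0],
              [1.0, 3.0]])
Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))   # True
print(np.allclose(Ainv @ A, np.eye(2)))   # True

# The singular matrix from the example has no inverse.
S = np.array([[1.0, 0.0],
              [0.0, 0.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("singular matrix:", err)
```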
- A useful fact to remember: if the matrix $\mathbf{A}$ is singular, then at least one column of $\mathbf{A}$ can be written as a multiple of another column, or as a weighted sum (**linear combination**) of the other columns; equivalently, at least one row can be written as a multiple of another row, or as a weighted sum of the other rows.
  - In the example above, the second row is 0 times the first row, and the second column is 0 times the first column.
  - Indeed, any square matrix with a zero row or column is therefore singular.
  - We'll explore this idea in greater depth presently.
- A convenient property of matrix inverses: if $\mathbf{A}$ and $\mathbf{B}$ are nonsingular matrices of the same order, then their product $\mathbf{AB}$ is also nonsingular, with inverse $(\mathbf{AB})^{-1}=\mathbf{B}^{-1}\mathbf{A}^{-1}$.
  - The proof of this property is simple: $(\mathbf{AB})(\mathbf{B}^{-1}\mathbf{A}^{-1})=\mathbf{A}(\mathbf{BB}^{-1})\mathbf{A}^{-1}=\mathbf{AI}\mathbf{A}^{-1}=\mathbf{AA}^{-1}=\mathbf{I}$.
  - This result extends directly to the product of any number of nonsingular matrices of the same order: $(\mathbf{AB}\cdots\mathbf{F})^{-1}=\mathbf{F}^{-1}\cdots\mathbf{B}^{-1}\mathbf{A}^{-1}$.
- Example: the inverse of the nonsingular matrix

$$
\begin{bmatrix}2&5\\1&3\end{bmatrix}\quad\text{is the matrix}\quad\begin{bmatrix}3&-5\\-1&2\end{bmatrix}
$$

  Check:

$$
\begin{bmatrix}2&5\\1&3\end{bmatrix}\begin{bmatrix}3&-5\\-1&2\end{bmatrix}=\begin{bmatrix}6-5&-10+10\\3-3&-5+6\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}
$$

- Matrix inverses are useful for solving systems of linear simultaneous equations where there is an equal number of equations and unknowns. The general form of such a problem, with $n$ equations and $n$ unknowns, is $\mathbf{A}_{(n\times n)}\,\mathbf{x}_{(n\times 1)}=\mathbf{b}_{(n\times 1)}$. Here, as explained previously, $\mathbf{A}$ is a matrix of known coefficients, $\mathbf{x}$ is the vector of unknowns, and $\mathbf{b}$ is a vector containing the known right-hand sides of the equations. The solution is obtained by multiplying both sides of the equation on the left by the inverse of the coefficient matrix:

$$
\mathbf{A}^{-1}\mathbf{A}\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}
\quad\Rightarrow\quad
\mathbf{I}\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}
\quad\Rightarrow\quad
\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}
$$

- For example, consider the following system of 2 equations in 2 unknowns:

$$
2x_1+5x_2=4
$$
$$
x_1+3x_2=5
$$

  In matrix form,

$$
\begin{bmatrix}2&5\\1&3\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}4\\5\end{bmatrix}
\quad\text{and so}\quad
\mathbf{x}=\begin{bmatrix}3&-5\\-1&2\end{bmatrix}\begin{bmatrix}4\\5\end{bmatrix}=\begin{bmatrix}-13\\6\end{bmatrix}
$$

  That is, the solution is $x_1=-13$, $x_2=6$. Check: $2(-13)+5(6)=4$ and $-13+3(6)=5$.
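In practice one computes $\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$ (or, better, solves the system directly) with a numerical routine; a sketch reproducing the example above with NumPy:

```python
import numpy as np

A = np.array([[2.0, 5.0],
              [1.0, 3.0]])
b = np.array([4.0, 5.0])

x = np.linalg.solve(A, b)       # solves A x = b without forming A^{-1} explicitly
print(x)                        # [-13.   6.]
print(np.linalg.inv(A) @ b)     # same solution via x = A^{-1} b
print(np.allclose(A @ x, b))    # True: the solution checks out
```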
### 3.1 Matrix Inversion by Gaussian Elimination

- There are many methods for finding inverses of nonsingular matrices. **Gaussian elimination** (after Carl Friedrich Gauss, the great German mathematician) is one such method, and has other uses as well. In practice, finding matrix inverses is tedious and is a job best left to a computer.
- Gaussian elimination proceeds as follows. Suppose that we want to invert the following matrix, or to determine that it is singular:

$$
\mathbf{A}=\begin{bmatrix}2&-2&0\\1&-1&1\\4&4&-4\end{bmatrix}
$$

  Adjoin to this matrix an order-3 identity matrix:

$$
\left[\begin{array}{rrr|rrr}2&-2&0&1&0&0\\1&-1&1&0&1&0\\4&4&-4&0&0&1\end{array}\right]
$$

  Attempt to reduce the original matrix to the identity matrix by a series of **elementary row operations (EROs)** of three types, bringing the adjoined identity matrix along for the ride:
  - $E_{\mathrm{I}}$: Multiply each entry in a row of the matrix by a nonzero scalar constant.
  - $E_{\mathrm{II}}$: Add a scalar multiple of one row to another, replacing the other row.
  - $E_{\mathrm{III}}$: Interchange two rows of the matrix.
- Starting with the first row and then proceeding to each row in turn:
  1. Ensure that there is a nonzero entry in the diagonal position, called the **pivot** (e.g., the (1, 1) position when we are working on the first row), exchanging the current row for a lower row if necessary. If there is a 0 in the pivot position and there is no lower row with a nonzero entry in the current column, then the original matrix is singular. (Numerical accuracy can be increased by always exchanging for a lower row if, by doing so, we can increase the magnitude of the pivot.)
  2. Divide the current row by the pivot to produce a 1 in the pivot position.
  3. Add multiples of the current row to the other rows to produce 0's in the current column.
- Applying this algorithm to the example. Divide row 1 by 2:

$$
\left[\begin{array}{rrr|rrr}1&-1&0&\tfrac12&0&0\\1&-1&1&0&1&0\\4&4&-4&0&0&1\end{array}\right]
$$

  Subtract row 1 from row 2:

$$
\left[\begin{array}{rrr|rrr}1&-1&0&\tfrac12&0&0\\0&0&1&-\tfrac12&1&0\\4&4&-4&0&0&1\end{array}\right]
$$

  Subtract $4\times$ row 1 from row 3:

$$
\left[\begin{array}{rrr|rrr}1&-1&0&\tfrac12&0&0\\0&0&1&-\tfrac12&1&0\\0&8&-4&-2&0&1\end{array}\right]
$$

  Interchange rows 2 and 3:

$$
\left[\begin{array}{rrr|rrr}1&-1&0&\tfrac12&0&0\\0&8&-4&-2&0&1\\0&0&1&-\tfrac12&1&0\end{array}\right]
$$

  Divide row 2 by 8:

$$
\left[\begin{array}{rrr|rrr}1&-1&0&\tfrac12&0&0\\0&1&-\tfrac12&-\tfrac14&0&\tfrac18\\0&0&1&-\tfrac12&1&0\end{array}\right]
$$

  Add row 2 to row 1:

$$
\left[\begin{array}{rrr|rrr}1&0&-\tfrac12&\tfrac14&0&\tfrac18\\0&1&-\tfrac12&-\tfrac14&0&\tfrac18\\0&0&1&-\tfrac12&1&0\end{array}\right]
$$
  Add $\tfrac12\times$ row 3 to each of rows 1 and 2:

$$
\left[\begin{array}{rrr|rrr}1&0&0&0&\tfrac12&\tfrac18\\0&1&0&-\tfrac12&\tfrac12&\tfrac18\\0&0&1&-\tfrac12&1&0\end{array}\right]
$$

  Thus the inverse of $\mathbf{A}$ is

$$
\mathbf{A}^{-1}=\begin{bmatrix}0&\tfrac12&\tfrac18\\-\tfrac12&\tfrac12&\tfrac18\\-\tfrac12&1&0\end{bmatrix}
$$

  which can be verified by multiplication.
- Why Gaussian elimination works: each elementary row operation can be represented as multiplication on the left by an **ERO matrix**. For example, to interchange rows 2 and 3 in the preceding example, we multiply by

$$
\mathbf{E}_{\mathrm{III}}=\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}
$$

  (Satisfy yourself that this works as advertised.) The entire sequence of EROs can be represented as

$$
\mathbf{E}_p\cdots\mathbf{E}_2\mathbf{E}_1\,[\mathbf{A}_{(n\times n)},\ \mathbf{I}_n]=[\mathbf{I}_n,\ \mathbf{B}]
$$

  where $\mathbf{E}=\mathbf{E}_p\cdots\mathbf{E}_2\mathbf{E}_1$. Therefore $\mathbf{EA}=\mathbf{I}_n$, implying that $\mathbf{E}$ is $\mathbf{A}^{-1}$; and $\mathbf{EI}_n=\mathbf{B}$, implying that $\mathbf{B}=\mathbf{E}=\mathbf{A}^{-1}$.

## 4. Determinants

- Each square matrix $\mathbf{A}$ is associated with a scalar called its **determinant**, written $|\mathbf{A}|$ or $\det\mathbf{A}$.
- The determinant is uniquely defined by the following axioms (rules):
  - (a) Multiplying a row of $\mathbf{A}$ by a scalar constant multiplies the determinant by the same constant.
  - (b) Adding a multiple of one row of $\mathbf{A}$ to another does not change the determinant.
  - (c) Interchanging two rows of $\mathbf{A}$ changes the sign of the determinant.
  - (d) $\det\mathbf{I}=1$.
- These rules suggest that Gaussian elimination can be used to find the determinant of a square matrix: the determinant is the product of the pivots, reversing the sign of this product if an odd number of row interchanges was performed. For the example, $\det\mathbf{A}=-(2\times 8\times 1)=-16$.
- As well, since we encounter a 0 pivot when we try to invert a singular matrix, the determinant of a singular matrix is 0, and a nonsingular matrix has a nonzero determinant.
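The Gaussian-elimination procedure above, together with the determinant-from-pivots rule, can be sketched in a few lines of pure Python. The function below is an added illustration (not from the notes), using exact rational arithmetic so the fractions come out exactly as in the worked example:

```python
from fractions import Fraction

def invert(A):
    """Gauss-Jordan inversion with row interchanges, as described above.
    Returns (A_inverse, determinant); raises ValueError if A is singular."""
    n = len(A)
    # Adjoin an order-n identity matrix, working in exact rational arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    det = Fraction(1)
    for i in range(n):
        # Exchange for the lower row with the largest-magnitude pivot (E_III).
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[p][i] == 0:
            raise ValueError("matrix is singular")
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det                       # an interchange flips the sign
        piv = M[i][i]
        det *= piv                           # determinant = product of the pivots
        M[i] = [x / piv for x in M[i]]       # E_I: make the pivot 1
        for r in range(n):                   # E_II: clear the rest of the column
            if r != i and M[r][i] != 0:
                f = M[r][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [row[n:] for row in M], det

A = [[2, -2, 0],
     [1, -1, 1],
     [4, 4, -4]]
Ainv, det = invert(A)
print(Ainv)   # rows (0, 1/2, 1/8), (-1/2, 1/2, 1/8), (-1/2, 1, 0): the A^{-1} above
print(det)    # -16
```

Because the routine always exchanges for the largest available pivot, its intermediate steps differ from the worked example, but the inverse and the determinant agree.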
## 5. Matrix Rank and the Solution of Linear Simultaneous Equations

- As explained, the matrix inverse suffices for the solution of linear simultaneous equations when there are equal numbers of equations and unknowns, and when the coefficient matrix is nonsingular. This case covers most, but not all, statistical applications.
- In the more general case, we have $m$ linear equations in $n$ unknowns: $\mathbf{A}_{(m\times n)}\,\mathbf{x}_{(n\times 1)}=\mathbf{b}_{(m\times 1)}$.
- We can solve such a system of equations by Gaussian elimination, placing the coefficient matrix $\mathbf{A}$ in **reduced row-echelon form (RREF)** by a sequence of EROs and bringing the right-hand-side vector $\mathbf{b}$ along for the ride. Once the coefficient matrix is in RREF, the solutions of the equation system will be apparent.
- A matrix is in RREF when:
  1. All of its zero rows (if any) follow its nonzero rows (if any).
  2. The leading entry in each nonzero row (i.e., the first nonzero entry, proceeding from left to right) is 1.
  3. The leading entry in each nonzero row after the first is to the right of the leading entry in the previous row.
  4. All other entries in a column containing a leading entry are 0.
- The proof that this procedure works proceeds from the observation that none of the elementary row operations changes the solutions of the set of equations, and is similar to the proof that Gaussian elimination suffices to find the inverse of a nonsingular square matrix.
- Consider the following system of 3 equations in 4 unknowns:

$$
\begin{bmatrix}2&0&1&-2\\4&0&1&0\\6&0&1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}-1\\2\\5\end{bmatrix}
$$

  Adjoin the RHS vector to the coefficient matrix:

$$
\left[\begin{array}{rrrr|r}2&0&1&-2&-1\\4&0&1&0&2\\6&0&1&2&5\end{array}\right]
$$

  Reduce the coefficient matrix to row-echelon form. Divide row 1 by 2:

$$
\left[\begin{array}{rrrr|r}1&0&\tfrac12&-1&-\tfrac12\\4&0&1&0&2\\6&0&1&2&5\end{array}\right]
$$

  Subtract $4\times$ row 1 from row 2, and subtract $6\times$ row 1 from row 3:

$$
\left[\begin{array}{rrrr|r}1&0&\tfrac12&-1&-\tfrac12\\0&0&-1&4&4\\0&0&-2&8&8\end{array}\right]
$$

  Multiply row 2 by $-1$:

$$
\left[\begin{array}{rrrr|r}1&0&\tfrac12&-1&-\tfrac12\\0&0&1&-4&-4\\0&0&-2&8&8\end{array}\right]
$$

  Subtract $\tfrac12\times$ row 2 from row 1, and add $2\times$ row 2 to row 3:

$$
\left[\begin{array}{rrrr|r}1&0&0&1&\tfrac32\\0&0&1&-4&-4\\0&0&0&0&0\end{array}\right]
$$

  Writing the result as a scalar system of equations, we get

$$
x_1+x_4=\tfrac32
$$
$$
x_3-4x_4=-4
$$
$$
0=0
$$

  - The third equation is uninformative, but it does indicate that the original system of equations is consistent.
  - The first two equations imply that the unknowns $x_2$ and $x_4$ can be given arbitrary values, say $x_2^*$ and $x_4^*$, and the values of $x_1$ and $x_3$, corresponding to the leading entries, will then follow: $x_1=\tfrac32-x_4^*$ and $x_3=-4+4x_4^*$. Thus any vector

$$
\mathbf{x}=\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}\tfrac32-x_4^*\\x_2^*\\-4+4x_4^*\\x_4^*\end{bmatrix}
$$

  is a solution of the system of equations.
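The reduction to RREF can likewise be sketched in pure Python (an added illustration, not from the notes); applied to the augmented matrix above, it reproduces the RREF just derived:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (a list of rows) to reduced row-echelon form by EROs."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    lead = 0                                 # column of the next leading entry
    for r in range(nrows):
        while lead < ncols:
            # Find a nonzero pivot in this column, at or below row r.
            p = next((i for i in range(r, nrows) if M[i][lead] != 0), None)
            if p is None:
                lead += 1                    # all zeros here: move one column right
                continue
            M[r], M[p] = M[p], M[r]          # E_III: interchange rows
            piv = M[r][lead]
            M[r] = [x / piv for x in M[r]]   # E_I: leading entry becomes 1
            for i in range(nrows):           # E_II: clear the rest of the column
                if i != r and M[i][lead] != 0:
                    f = M[i][lead]
                    M[i] = [x - f * y for x, y in zip(M[i], M[r])]
            lead += 1
            break
    return M

# The augmented matrix [A | b] of the 3-equation, 4-unknown system above.
aug = [[2, 0, 1, -2, -1],
       [4, 0, 1,  0,  2],
       [6, 0, 1,  2,  5]]
for row in rref(aug):
    print(row)
# The nonzero rows encode x1 + x4 = 3/2 and x3 - 4*x4 = -4; the zero row is uninformative.
```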
- A system for which there is more than one solution (in this case, an infinity of solutions) is called **underdetermined**.
- Now consider the system of equations

$$
\begin{bmatrix}2&0&1&-2\\4&0&1&0\\6&0&1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}-1\\2\\3\end{bmatrix}
$$

  Attaching $\mathbf{b}$ to $\mathbf{A}$ and reducing the coefficient matrix to RREF yields

$$
\left[\begin{array}{rrrr|r}1&0&0&1&\tfrac32\\0&0&1&-4&-4\\0&0&0&0&-2\end{array}\right]
$$

  The last equation, $0x_1+0x_2+0x_3+0x_4=-2$, is contradictory, implying that the original system of equations has no solution. Such an **inconsistent** system of equations is termed **overdetermined**.
- The third possibility is that the system of equations is consistent and has a unique solution. This occurs, for example, when there are equal numbers of equations and unknowns and when the square coefficient matrix is nonsingular.
- The **rank** of a matrix is its maximum number of linearly independent rows or columns. A set of rows (or columns) is **linearly independent** when no row (or column) in the set can be expressed as a linear combination (weighted sum) of the others. It turns out that the maximum number of linearly independent rows in a matrix is the same as the maximum number of linearly independent columns. Because Gaussian elimination reduces linearly dependent rows to 0, the rank of a matrix is the number of nonzero rows in its RREF. Thus, in the examples, the matrix $\mathbf{A}$ has rank 2.
- To restate our previous results:
  - When the rank of the coefficient matrix is equal to the number of unknowns and the system of equations is consistent, there is a unique solution.
  - When the rank of the coefficient matrix is less than the number of unknowns, the system of equations is underdetermined if consistent, or overdetermined if inconsistent.
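NumPy can report the rank directly, and comparing the rank of $\mathbf{A}$ with the rank of the augmented matrix $[\mathbf{A}\,|\,\mathbf{b}]$ distinguishes the consistent system from the inconsistent one (an added illustration of the results just stated; the notes use this criterion only implicitly):

```python
import numpy as np

A = np.array([[2, 0, 1, -2],
              [4, 0, 1,  0],
              [6, 0, 1,  2]])
print(np.linalg.matrix_rank(A))   # 2: less than the 4 unknowns

b_consistent   = np.array([[-1], [2], [5]])   # the underdetermined example
b_inconsistent = np.array([[-1], [2], [3]])   # the overdetermined example

# Adjoining b does not raise the rank for the consistent system...
print(np.linalg.matrix_rank(np.hstack([A, b_consistent])))    # 2
# ...but does for the inconsistent one, signalling that no solution exists.
print(np.linalg.matrix_rank(np.hstack([A, b_inconsistent])))  # 3
```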
### Homogeneous Systems of Linear Equations

- When the RHS vector in an equation system is a zero vector, the system of equations is said to be **homogeneous**: $\mathbf{A}_{(m\times n)}\,\mathbf{x}_{(n\times 1)}=\mathbf{0}_{(m\times 1)}$.
- Homogeneous equations cannot be overdetermined, because the trivial solution $\mathbf{x}=\mathbf{0}$ always satisfies the system.
- When the rank of the coefficient matrix $\mathbf{A}$ is less than the number of unknowns $n$, a homogeneous system of equations is underdetermined and consequently has nontrivial solutions as well.
- Consider, for example, the homogeneous system

$$
\begin{bmatrix}2&0&1&-2\\4&0&1&0\\6&0&1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}
$$

  Reducing the coefficient matrix to RREF, we have

$$
\begin{bmatrix}1&0&0&1\\0&0&1&-4\\0&0&0&0\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}
$$

  Thus the solutions, trivial and nontrivial, may be written in the form

$$
\mathbf{x}=\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}-x_4^*\\x_2^*\\4x_4^*\\x_4^*\end{bmatrix}
$$

  That is, $x_2$ and $x_4$ can be given arbitrary values, and the values of the other two unknowns follow from the value assigned to $x_4$.

## 6. Eigenvalues and Eigenvectors

- Suppose that $\mathbf{A}$ is an order-$n$ square matrix. Then the homogeneous system of equations $(\mathbf{A}-\lambda\mathbf{I}_n)\mathbf{x}=\mathbf{0}$ will have nontrivial solutions only for certain values of the scalar $\lambda$. There will be nontrivial solutions only when the matrix $\mathbf{A}-\lambda\mathbf{I}$ is singular, that is, when $\det(\mathbf{A}-\lambda\mathbf{I})=0$. This determinantal equation is called the **characteristic equation** of the matrix $\mathbf{A}$.
- Values of $\lambda$ for which the characteristic equation holds are called **eigenvalues** (characteristic roots, or latent roots) of $\mathbf{A}$. (The German word *eigen* means "own", i.e., the matrix's own values.)
- Suppose that $\lambda_1$ is a particular eigenvalue of $\mathbf{A}$. Then a vector $\mathbf{x}=\mathbf{x}_1$ satisfying the homogeneous system of equations $(\mathbf{A}-\lambda_1\mathbf{I})\mathbf{x}=\mathbf{0}$ is called an **eigenvector** of $\mathbf{A}$ associated with the eigenvalue $\lambda_1$. Eigenvectors associated with a particular eigenvalue are never unique, because if $\mathbf{x}_1$ is an eigenvector of $\mathbf{A}$, then so is $c\mathbf{x}_1$, where $c$ is any nonzero scalar constant.
- Because of its simplicity, let's examine the $2\times 2$ case. The characteristic equation is

$$
\det\begin{bmatrix}a_{11}-\lambda&a_{12}\\a_{21}&a_{22}-\lambda\end{bmatrix}=0
$$

  I'll make use of the simple result that the determinant of a $2\times 2$ matrix is the product of the diagonal entries minus the product of the off-diagonal entries, so the characteristic equation is

$$
(a_{11}-\lambda)(a_{22}-\lambda)-a_{12}a_{21}=0
$$
$$
\lambda^2-(a_{11}+a_{22})\lambda+(a_{11}a_{22}-a_{12}a_{21})=0
$$

- This is a quadratic equation, and therefore it can be solved by the quadratic formula, producing the two roots:
$$
\lambda_1=\tfrac12\left[a_{11}+a_{22}+\sqrt{(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})}\right]
$$
$$
\lambda_2=\tfrac12\left[a_{11}+a_{22}-\sqrt{(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})}\right]
$$

- The roots are real if $(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})$ is nonnegative. Notice that

$$
\lambda_1+\lambda_2=a_{11}+a_{22}=\mathrm{trace}(\mathbf{A})
$$
$$
\lambda_1\lambda_2=a_{11}a_{22}-a_{12}a_{21}=\det(\mathbf{A})
$$

  As well, if $\mathbf{A}$ is singular, and thus $\det(\mathbf{A})=0$, then at least one of the eigenvalues must be 0. (For the symmetric matrices considered below, both are 0 only if $\mathbf{A}$ is a 0 matrix.)
- Things become simpler when the matrix $\mathbf{A}$ is symmetric, as in most statistical applications of eigenvalues. Then $a_{12}=a_{21}$ and

$$
\lambda_{1,2}=\tfrac12\left[a_{11}+a_{22}\pm\sqrt{(a_{11}-a_{22})^2+4a_{12}^2}\right]
$$

  In this case the eigenvalues are necessarily real numbers, since the quantity within the square root is a sum of squares.
- Example:

$$
\mathbf{A}=\begin{bmatrix}1&0.5\\0.5&1\end{bmatrix}
$$

$$
\lambda_1=\tfrac12\left[1+1+\sqrt{(1-1)^2+4(0.5)^2}\right]=1.5
$$
$$
\lambda_2=\tfrac12\left[1+1-\sqrt{(1-1)^2+4(0.5)^2}\right]=0.5
$$

  To find the eigenvectors associated with $\lambda_1=1.5$, solve the homogeneous system $(\mathbf{A}-1.5\,\mathbf{I})\mathbf{x}=\mathbf{0}$:

$$
\begin{bmatrix}1-1.5&0.5\\0.5&1-1.5\end{bmatrix}\begin{bmatrix}x_{11}\\x_{21}\end{bmatrix}
=\begin{bmatrix}-0.5&0.5\\0.5&-0.5\end{bmatrix}\begin{bmatrix}x_{11}\\x_{21}\end{bmatrix}
=\begin{bmatrix}0\\0\end{bmatrix}
$$

  which implies $x_{11}=x_{21}$, where $x_{21}$ can be given an arbitrary nonzero value; this produces

$$
\mathbf{x}_1=\begin{bmatrix}x_{21}\\x_{21}\end{bmatrix}
$$

  Likewise, for $\lambda_2=0.5$:

$$
\begin{bmatrix}0.5&0.5\\0.5&0.5\end{bmatrix}\begin{bmatrix}x_{12}\\x_{22}\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}
$$

  which implies $x_{12}=-x_{22}$, where $x_{22}$ is arbitrary; this produces

$$
\mathbf{x}_2=\begin{bmatrix}-x_{22}\\x_{22}\end{bmatrix}
$$

- The two eigenvectors of $\mathbf{A}$ have an inner product of 0:

$$
\mathbf{x}_1'\cdot\mathbf{x}_2=[x_{21},x_{21}]\cdot\begin{bmatrix}-x_{22}\\x_{22}\end{bmatrix}=-x_{21}x_{22}+x_{21}x_{22}=0
$$

  Vectors whose inner product is 0 are termed **orthogonal**.
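The $2\times 2$ example can be checked with NumPy's symmetric eigensolver (an added illustration); `eigh` returns the eigenvalues in ascending order, with orthonormal eigenvectors as the columns of the second return value:

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])

vals, vecs = np.linalg.eigh(A)     # eigh is specialized to symmetric matrices
print(vals)                        # [0.5 1.5]
print(np.isclose(vals.sum(), np.trace(A)))         # True: sum = trace
print(np.isclose(vals.prod(), np.linalg.det(A)))   # True: product = determinant
print(np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0))    # True: eigenvectors orthogonal
```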
