CAL - LINEA 54 - Study Guide - Midterm

Midterm 2 Study Guide - Monday, October 16, 2017, 9:52 PM

A. Vector Spaces

a. A vector space is closed under addition and scalar multiplication (and therefore under linear combinations), and contains the zero vector.

B. Basis for vector spaces

a. A set of linearly independent vectors that spans the vector space.

b. The standard basis for R^n is made up of the vectors e1, e2, …, en.

i. Where ei has a 1 in the i-th entry and 0 everywhere else.

ii. We represent a vector in a basis, in column form, as a collection of coefficients, one for each basis vector.

c. What you'll be asked:

i. Find [x]B, the coordinate vector in the standard basis for P2 that represents a given polynomial.


ii. Find the matrix for the transformation that takes a third-degree polynomial to its derivative.

The first term of P3 (the constant) has derivative 0.
The second term maps to the first term of P2, multiplied by 1 (its exponent).
The third term maps to the second, multiplied by 2.
The fourth term maps to the third, multiplied by 3.
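The steps above can be sketched in plain Python. This is a minimal illustration, not from the course: it assumes a polynomial a0 + a1*x + a2*x^2 + a3*x^3 is stored as the coordinate list [a0, a1, a2, a3], and the helper name `apply` is made up for the example.

```python
# Matrix of the derivative map D: P3 -> P2 in the standard bases
# {1, x, x^2, x^3} -> {1, x, x^2}, following the pattern above:
# column j sends x^j to j * x^(j-1).
D = [
    [0, 1, 0, 0],  # constant term of the derivative: 1 * a1
    [0, 0, 2, 0],  # x term:                          2 * a2
    [0, 0, 0, 3],  # x^2 term:                        3 * a3
]

def apply(matrix, vec):
    """Plain matrix-vector product."""
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in matrix]

# p(x) = 5 + 4x + 3x^2 + 2x^3  ->  p'(x) = 4 + 6x + 6x^2
print(apply(D, [5, 4, 3, 2]))  # [4, 6, 6]
```

Note the matrix is 3x4: it takes a 4-dimensional input (P3) to a 3-dimensional output (P2).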

iii. Is 1 + x - x^3 in the span of {1 + x^3, 1 - 2x, x^2 - x^3}?

iv. Are the following matrices linearly independent?

C. Definitions for bases

a. The dimension of a vector space is the number of basis vectors it has.

b. The image of a transformation is its range (for example, R^3 or P2).


c. The null space:

i. Of a transformation is called the kernel.

ii. Is spanned by a set of (nonzero) vectors that, under the transformation, yield the zero vector.

iii. Is also a subspace, so it has a dimension.

d. For the derivative transformation in B.c.ii above, what is the null space, the image, and the dimension of both the null space and the input space?
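As a concrete check of the question above, a minimal sketch in plain Python (the matrix `D` is the derivative map from B.c.ii, using assumed coefficient-list coordinates; the helper name `apply` is illustrative):

```python
# Derivative map D: P3 -> P2 in standard coordinates [a0, a1, a2, a3].
D = [
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
]

def apply(matrix, vec):
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in matrix]

# Any constant polynomial, e.g. p(x) = 7, is sent to the zero vector, so
# the kernel (null space) is span{[1, 0, 0, 0]}: dimension 1.
print(apply(D, [7, 0, 0, 0]))  # [0, 0, 0]

# The image is all of P2 (dimension 3), and rank-nullity checks out:
# dim(P3) = dim(kernel) + dim(image)  ->  4 = 1 + 3
```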

D. Transformations terminology

a. An isomorphism is an invertible linear transformation between two vector spaces; the spaces must have the same dimension. Rigorously:

i. Def. A linear transformation T: V -> W is called an isomorphism if there is a linear transformation T^-1: W -> V such that:

T(T^-1(w)) = w for any w in W.

T^-1(T(v)) = v for any v in V.

b. Note that if a transformation doesn't have an inverse, then it is not an isomorphism.

E. Matrix representation of linear transformations

a. To convert a transformation from one basis to another, where the transformation T is given in a standard basis:

b. dim(V) = dim(null T) + dim(range T), the rank-nullity theorem, for a linear transformation T defined on V.
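The change-of-coordinates idea can be sketched in plain Python. This is an assumed example, not from the course: for a basis B = {b1, b2} of R^2, if P has the basis vectors as columns then x = P[x]B, so [x]B solves c1*b1 + c2*b2 = x (the helper name `solve2` is made up):

```python
# Assumed basis B = {b1, b2} for R^2.
b1, b2 = [1, 0], [1, 1]

def solve2(p1, p2, x):
    """Solve c1*p1 + c2*p2 = x (a 2x2 system) via Cramer's rule."""
    det = p1[0] * p2[1] - p2[0] * p1[1]
    c1 = (x[0] * p2[1] - p2[0] * x[1]) / det
    c2 = (p1[0] * x[1] - x[0] * p1[1]) / det
    return [c1, c2]

# x = [3, 2] in standard coordinates: 3 = c1 + c2, 2 = c2 -> c1 = 1, c2 = 2
print(solve2(b1, b2, [3, 2]))  # [1.0, 2.0]
```

So [x]B = [1, 2]: the same vector, written as coefficients of the B basis vectors instead of the standard ones.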

c. You will be asked:

i. Given a basis, what is its corresponding transformation?

ii. Solve the below:

F. Determinant

a. The determinant is an intrinsic property of an nxn matrix that represents how the matrix (as a transformation) scales the parallelepiped formed by its column vectors. However, it is the signed volume of that parallelepiped, which means it can be negative.

b. Calculating the determinant, for 3x3 matrices and larger:

i. First, choose a column or row to expand along (you want the one with the greatest number of zeroes).

ii. For each entry along that column or row, multiply the entry by the determinant of the submatrix left after deleting that entry's row and column.

iii. Continue until you've completed your column/row, adding or subtracting each value according to the alternating +/- sign pattern (note that zeroes are useful here because they contribute zero to the total sum).
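The steps above can be sketched as a short recursive function in plain Python (an illustrative sketch; fine for small matrices, though cofactor expansion is O(n!) in general):

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        if m[0][j] == 0:        # zeroes contribute nothing (step iii)
            continue
        # delete row 0 and column j (step ii)
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        sign = (-1) ** j        # alternating +/- pattern
        total += sign * m[0][j] * det(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det(A))  # 25
```

Here the zero in the first row means only two of the three cofactors need to be computed: 2*(3*4 - 0*1) + 1*(1*1 - 3*0) = 24 + 1 = 25.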

c. If the determinant is nonzero, then the matrix is invertible.

d. det(AB) = det(A)det(B) (the analogous statement for addition is not true).

e. The determinant of an upper or lower triangular matrix is the product of the entries on the diagonal.

f. For a matrix, the effect of each row operation on the determinant:

i. Add a scaled row to another (replacement): no effect.

ii. Swap two rows (interchange): det -> -det.

iii. Scale a row: det -> (scaling factor) * det.

g. det(kA) = k^n * det(A), where A is nxn.
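A quick check of det(kA) = k^n * det(A), sketched in plain Python for an assumed 2x2 example (n = 2, k = 3):

```python
def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2], [3, 4]]                       # det(A) = -2
kA = [[3 * x for x in row] for row in A]   # scale EVERY entry by k = 3

print(det2(A), det2(kA))  # -2 -18, since 3**2 * (-2) = -18
```

Intuition: scaling the whole matrix scales each of the n rows, and each row-scaling multiplies the determinant by k (rule f.iii applied n times).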

h. The cofactor matrix of A is the matrix whose (i, j) entry is the cofactor Cij = (-1)^(i+j) * det(Aij), where Aij is A with row i and column j deleted.

i. The transpose of the cofactor matrix of A is the adjugate of A.

j. A^-1 = (1/det(A)) * adj(A), a variation of Cramer's rule.

k. Cramer's rule: for Ax = b with A invertible, xi = det(Ai(b)) / det(A), where Ai(b) is A with column i replaced by b.
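The adjugate formula in j can be sketched in plain Python for an assumed 2x2 example (using the standard library's Fraction type to keep the arithmetic exact):

```python
from fractions import Fraction

A = [[4, 7], [2, 6]]
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det(A) = 10

# For a 2x2 matrix the adjugate (transpose of the cofactor matrix)
# works out to: swap the diagonal entries, negate the off-diagonal ones.
adj = [[A[1][1], -A[0][1]],
       [-A[1][0], A[0][0]]]

# A^-1 = (1/det(A)) * adj(A)
inv = [[Fraction(entry, d) for entry in row] for row in adj]

# Verify: A * A^-1 should be the identity matrix.
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```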

G. Eigenvalues and Eigenvectors

a. Any nxn matrix can be treated as a transformation, which changes space; some of these transformations stretch certain vectors in their domain.

b. For some of these matrices/transformations, there are input vectors whose output is simply a multiple of the input vector. Such a vector is an eigenvector.

i. If one eigenvector exists, then by scaling, so does every nonzero multiple of that eigenvector. We usually write down the one with all whole numbers.

c. If the eigenvectors of a matrix span the vector space the matrix acts on, then the matrix is diagonalizable.

d. Each eigenvector is scaled by a certain amount under the matrix transformation. This amount is called an eigenvalue.
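The definitions above can be checked concretely in plain Python. An assumed example: for A = [[2, 1], [1, 2]], the vector [1, 1] is an eigenvector with eigenvalue 3 (the helper name `apply` is illustrative):

```python
A = [[2, 1], [1, 2]]

def apply(m, v):
    """Plain matrix-vector product."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

print(apply(A, [1, 1]))  # [3, 3] = 3 * [1, 1]  -> eigenvalue 3
print(apply(A, [2, 2]))  # [6, 6] = 3 * [2, 2]  -> any multiple works (G.b.i)
```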

H. Calculating Eigenvectors/Values

a. The multiplicities of a root of the characteristic polynomial:

i. 1 <= geometric multiplicity (the dimension of the eigenspace for that eigenvalue) <= algebraic multiplicity (the number of times the eigenvalue appears as a root, counting repeats such as (x - 2)^2).

ii. *note, the characteristic equation:

det(A - λI) = 0

b. The eigenvalues of a triangular matrix are the values on the diagonal.

c. If there are r distinct eigenvalues, there are at least r distinct eigenvectors, and the set of these r corresponding eigenvectors is linearly independent.

Recall determinants:

d. det(A) = (-1)^(number of row swaps) * (product of the pivots in U) / (product of any row-scaling factors used)

1) Where U is an echelon form of A
2) Where A is invertible

e. A and B are similar if there exists an invertible P with A = PBP^-1.

i. If A and B are similar, they have the same characteristic polynomial and the same eigenvalues with the same multiplicities, but not the other way around.

I. Guaranteed diagonal matrix, assuming A is diagonalizable:

a. A = PDP^-1.

i. D is made up of the eigenvalues arranged along the diagonal.
ii. P is made up of each corresponding eigenvector, in the same order.
iii. (the above refer to the eigenvalues/eigenvectors of A)
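The factorization A = PDP^-1 can be verified in plain Python for an assumed example: A = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with eigenvectors [1, 1] and [1, -1], so D and P are built as described above (P^-1 is computed by hand here):

```python
P = [[1, 1], [1, -1]]             # eigenvectors as columns
D = [[3, 0], [0, 1]]              # matching eigenvalues on the diagonal
Pinv = [[0.5, 0.5], [0.5, -0.5]]  # inverse of P, worked out by hand

def matmul(X, Y):
    """Plain square-matrix product."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = matmul(matmul(P, D), Pinv)
print(A)  # [[2.0, 1.0], [1.0, 2.0]] -- P D P^-1 rebuilds A
```

Note the ordering matters: column j of P must carry the eigenvector for the eigenvalue in position (j, j) of D.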

J. Properties and methods

K. Dot product