ISBN 9780073383170
Solutions for Chapter 3: Elementary Functions
3.3.1) Show that (a) exp(2 ± 3πi) = −e²; (b) exp((2 + πi)/4) = √(e/2)(1 + i); (c) exp(z + πi) = −exp z.
3.3.20) Show that (a) Log(−ei) = 1 − (π/2)i; (b) Log(1 − i) = (1/2) ln 2 − (π/4)i.
3.3.62) Verify that the derivatives of sinh z and cosh z are as stated in equations (2), Sec. 39.
3.3.2) State why the function f(z) = 2z² − 3 − z e^z + e^(−z) is entire.
3.3.22) Show that Log(i³) ≠ 3 Log i.
3.3.81) Solve the equation cos z = √2 for z.
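A quick numerical sanity check of this exercise (not the derivation the text asks for): since cos(iy) = cosh y and cosh(ln(1 + √2)) = √2, one root is z = i ln(1 + √2), with the others differing by 2nπ and a sign.

```python
import cmath
import math

# cos(iy) = cosh(y), and cosh(ln(1 + sqrt(2))) = sqrt(2),
# so z = i*ln(1 + sqrt(2)) should be one root of cos z = sqrt(2).
z = 1j * math.log(1 + math.sqrt(2))
err = abs(cmath.cos(z) - math.sqrt(2))  # should be ~0
```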
3.3.23) Show that log(i²) ≠ 2 log i when the branch log z = ln r + iθ (3π/4 < θ < 11π/4) is used.
3.3.82) Derive expression (5), Sec. 40, for the derivative of sin⁻¹ z.
3.3.66) Derive expression (11) in Sec. 39 for |sinh z|².
3.3.83) Derive expression (4), Sec. 40, for tan⁻¹ z.
3.3.6) Show that |exp(z²)| ≤ exp(|z|²).
3.3.51) Establish differentiation formulas (3) and (4) in Sec. 38.
3.3.84) Derive expression (7), Sec. 40, for the derivative of tan⁻¹ z.
3.3.7) Prove that |exp(−2z)| < 1 if and only if Re z > 0.
3.3.85) Derive expression (9), Sec. 40, for cosh⁻¹ z.
3.3.27) Find all roots of the equation log z = iπ/2. Ans. z = i.
3.3.69) Give details showing that the zeros of sinh z and cosh z are as in the theorem in Sec. 39.
3.3.45) Assuming that f′(z) exists, state the formula for the derivative of e^(f(z)).
3.3.71) Show that tanh z = −i tan(iz). Suggestion: use identities (4) in Sec. 39.
3.3.11) Describe the behavior of e^z = e^x e^(iy) as (a) x tends to −∞; (b) y tends to ∞.
3.3.72) Derive differentiation formulas (17), Sec. 39.
3.3.31) Show that Re[log(z − 1)] = (1/2) ln[(x − 1)² + y²]  (z ≠ 1).
3.3.74) Use the results in Exercise 12 to show that the conjugate of tanh z equals tanh z̄ at points where cosh z ≠ 0.
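Several of the identities in the exercises above can be spot-checked numerically with Python's cmath module, whose log is the principal branch Log (with −π < arg z ≤ π). This is only a sketch confirming the stated values, not a substitute for the derivations:

```python
import cmath
import math

# (a) exp(2 + 3*pi*i) = -e^2
assert abs(cmath.exp(2 + 3j * math.pi) + math.e ** 2) < 1e-12

# (b) Log(1 - i) = (1/2) ln 2 - (pi/4) i   (cmath.log is the principal branch)
assert abs(cmath.log(1 - 1j) - (0.5 * math.log(2) - (math.pi / 4) * 1j)) < 1e-12

# (c) tanh z = -i tan(iz), at an arbitrary test point with cosh z != 0
z = 0.7 + 0.3j
assert abs(cmath.tanh(z) - (-1j) * cmath.tan(1j * z)) < 1e-12
```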
These solutions are for Chapter 3: Elementary Functions of Complex Variables and Applications, 9th edition (ISBN 9780073383170). The chapter contains 85 problems, each with a full step-by-step solution.
-
Cyclic shift
S. Permutation with S21 = 1, S32 = 1, ..., finally S1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are the columns of the Fourier matrix F.
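A minimal numerical check of the eigenvalue claim, modeling S as the index shift (Sx)_i = x_(i−1 mod n): each Fourier column is an eigenvector, and each eigenvalue w^(−k) is an nth root of unity.

```python
import cmath

n = 4
w = cmath.exp(2j * cmath.pi / n)  # primitive nth root of unity

def shift(x):
    """Apply the cyclic shift S: (Sx)_i = x_{i-1 (mod n)}."""
    return [x[(i - 1) % n] for i in range(n)]

for k in range(n):
    f = [w ** (j * k) for j in range(n)]  # k-th column of the Fourier matrix
    lam = w ** (-k)                       # the matching nth root of unity
    Sf = shift(f)
    assert all(abs(Sf[j] - lam * f[j]) < 1e-12 for j in range(n))
```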
-
Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.
-
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^(-1)].
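The method can be sketched in a few lines of pure Python (partial pivoting is added here for numerical stability; the 2×2 example matrix is illustrative):

```python
def gauss_jordan_inverse(A):
    """Invert A by row operations on the augmented matrix [A I] -> [I A^-1]."""
    n = len(A)
    # Build the augmented matrix [A I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]       # scale the pivot row to 1
        for r in range(n):                     # clear the column elsewhere
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]              # right half is now A^-1

# det = 2*3 - 1*5 = 1, so the exact inverse is [[3, -1], [-5, 2]].
Ainv = gauss_jordan_inverse([[2.0, 1.0], [5.0, 3.0]])
```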
-
Hermitian matrix A^H = (conj A)^T = A.
Complex analog a_ji = conj(a_ij) of a symmetric matrix.
-
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
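One way to see the ill-conditioning with no library support is to compute exact determinants with fractions: they collapse toward zero as n grows, which forces a tiny λ_min. A sketch, where `hilb` mimics MATLAB's hilb(n):

```python
from fractions import Fraction

def hilb(n):
    """Exact Hilbert matrix: H_ij = 1/(i + j - 1)."""
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def det(M):
    """Determinant as the product of elimination pivots (exact Fractions)."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for c in range(n):
        d *= M[c][c]   # Hilbert pivots are positive: no row swaps needed
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

d3 = det(hilb(3))   # 1/2160
d6 = det(hilb(6))   # already astronomically small
```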
-
Hypercube matrix P_L^2.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
-
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j .
-
Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
-
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j−1) b. Numerical methods approximate A^(−1) b by x_j with residual b − A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
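Building that basis really does take only one matrix-vector product per step, as a small sketch shows (the 2×2 matrix and starting vector are made-up examples):

```python
def matvec(A, x):
    """One matrix-vector product -- the only operation Krylov methods need."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def krylov_basis(A, b, j):
    """Return the vectors b, Ab, ..., A^(j-1) b spanning K_j(A, b)."""
    V = [b]
    for _ in range(j - 1):
        V.append(matvec(A, V[-1]))  # each step: multiply the last vector by A
    return V

A = [[2.0, 1.0], [1.0, 3.0]]
b = [1.0, 0.0]
V = krylov_basis(A, b, 3)  # [b, Ab, A^2 b]
```

In practice the raw vectors A^k b become nearly parallel, so real Krylov solvers orthogonalize them (Arnoldi/Lanczos); this sketch only shows what the subspace is.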
-
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s, with M s = s > 0.
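The convergence of the columns of M^k can be watched directly by repeatedly applying M to a starting vector (power iteration). For the illustrative 2×2 matrix below, the steady state works out to s = (2/3, 1/3):

```python
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

M = [[0.9, 0.2], [0.1, 0.8]]   # column sums are 1, all entries positive
x = [1.0, 0.0]                 # any starting probability vector
for _ in range(200):           # power iteration: x -> M x
    x = matvec(M, x)
# x has converged to the steady state s = (2/3, 1/3), which satisfies M s = s.
```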
-
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
-
Norm
‖A‖. The "ℓ² norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖, ‖AB‖ ≤ ‖A‖ ‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖_F² = Σ Σ a_ij². The ℓ¹ and ℓ∞ norms are the largest column and row sums of |a_ij|.
-
Normal equation A^T A x̂ = A^T b.
Gives the least-squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − A x̂) = 0.
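A worked example, fitting the line y = C + D t through the made-up points (0, 1), (1, 2), (2, 4): form A^T A and A^T b, solve the 2×2 system, and check that the residual is orthogonal to both columns of A.

```python
# Columns of A are [1, 1, 1] (constant) and [0, 1, 2] (the times t).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 4.0]

# Form A^T A (2x2) and A^T b (2-vector).
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 normal equations by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
C = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det   # 5/6
D = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det   # 3/2

# Residual b - A x_hat: orthogonal to both columns of A.
r = [b[k] - (C * A[k][0] + D * A[k][1]) for k in range(3)]
```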
-
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
-
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
-
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A (A^T A)^(−1) A^T.
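In the simplest case S is a line spanned by one column a, and the formula collapses to the rank-one projector P = a a^T / (a^T a). A sketch with an illustrative a and b, checking both e ⟂ S and idempotence:

```python
a = [1.0, 2.0, 2.0]          # basis for the line S = span{a}; a^T a = 9
aa = sum(x * x for x in a)
# Rank-one projector P = a a^T / (a^T a)  (the formula with A = [a]).
P = [[a[i] * a[j] / aa for j in range(3)] for i in range(3)]

b = [3.0, 0.0, 0.0]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]  # p = Pb
e = [b[i] - p[i] for i in range(3)]                            # e = b - Pb
# e is perpendicular to S (e . a = 0), and projecting twice changes nothing.
```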
-
Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
-
Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
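A small check with an illustrative symmetric 2×2 matrix whose eigenvalues are 1 and 3: the quotient hits those values exactly at the eigenvectors and lands strictly between them elsewhere.

```python
def q(A, x):
    """Rayleigh quotient x^T A x / x^T x."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return sum(xi * yi for xi, yi in zip(x, Ax)) / sum(xi * xi for xi in x)

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues 1 and 3
qmin = q(A, [1.0, -1.0])       # eigenvector for lambda_min -> 1.0
qmax = q(A, [1.0, 1.0])        # eigenvector for lambda_max -> 3.0
qmid = q(A, [1.0, 0.0])        # any other x stays between the extremes
```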
-
Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
-
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.