 22.22.1.53: What is the difference between constrained and unconstrained optimi...
 22.22.1.54: State the idea and the basic formulas of the method of steepest des...
 22.22.1.55: Write down an algorithm for the method of steepest descent.
 22.22.1.56: Design a "method of steepest ascent" for determining maxima.
 22.22.1.57: What is linear programming? Its basic idea? An objective function?
 22.22.1.58: Why can we not use methods of calculus for extrema in linear progra...
 22.22.1.59: What are slack variables? Artificial variables? Why did we use them?
 22.22.1.60: Apply the method of steepest descent to f(x) = x1^2 + 1.5x2^2, starti...
 22.22.1.61: What does the method of steepest descent amount to in the case of a...
 22.22.1.62: In Prob. 8 start from x0 = [1.5 1]^T. Show that the next even-numb...
 22.22.1.63: What happens in Example 1 of Sec. 22.1 if you replace the function ...
 22.22.1.64: Apply the method of steepest descent to f(x) = 9x1^2 + x2^2 + 18x1 ...
 22.22.1.65: In Prob. 12, could you start from [0 0]^T and do 5 steps?
 22.22.1.66: Show that the gradients in Prob. 13 are orthogonal. Give a reason.
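Problems 53-66 above concern the method of steepest descent. A minimal sketch for the quadratic of Prob. 60, f(x) = x1^2 + 1.5x2^2, with exact line search along the negative gradient (valid because f is quadratic with Hessian diag(2, 3)); the starting point [6, 3] is an assumption, since the problem statement is truncated:

```python
# Steepest descent for f(x) = x1^2 + 1.5*x2^2 with exact line search.
# Start [6, 3] is assumed; the printed problem statement is truncated.

def grad(x):
    # gradient of f: (2*x1, 3*x2)
    return [2.0 * x[0], 3.0 * x[1]]

def step(x):
    g = grad(x)
    # Hessian of f is diag(2, 3); exact step size t = (g.g)/(g.H.g)
    gHg = 2.0 * g[0]**2 + 3.0 * g[1]**2
    if gHg == 0.0:
        return x                      # already at the minimizer
    t = (g[0]**2 + g[1]**2) / gHg
    return [x[0] - t * g[0], x[1] - t * g[1]]

x = [6.0, 3.0]                        # assumed starting point
for _ in range(20):
    x = step(x)
# iterates converge to the minimizer (0, 0)
```

Successive gradients are orthogonal (the theme of Probs. 61 and 66), so the iterates zigzag toward the origin.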
 22.22.1.67: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.68: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.69: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.70: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.71: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.72: Graph or sketch the region in the first quadrant of the x1x2-plane ...
 22.22.1.73: Maximize f = 10x1 + 20x2 subject to x1 ≤ 5, x1 + x2 ≤ 6, x2 ≤ 4.
 22.22.1.74: Maximize f = x1 + x2 subject to x1 + 2x2 ≤ 10, 2x1 + x2 ≤ 10, x2 ≤ 4.
 22.22.1.75: Minimize f = 2x1 − 10x2 subject to x1 − x2 ≤ 4, 2x1 + x2 ≤ 14, x1 +...
 22.22.1.76: A factory produces two kinds of gaskets G1, G2 with net profit of $...
 22.22.1.77: Maximize the daily output in producing x1 chairs by a process P1 an...
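Problems 67-77 concern linear programming. For small two-variable LPs like Prob. 73 the optimum lies at a vertex of the feasible region, so it can be found by enumerating intersections of constraint boundaries. A brute-force sketch (not the simplex method discussed in the text, which scales far better):

```python
from itertools import combinations

# Prob. 73: maximize f = 10*x1 + 20*x2 subject to
# x1 <= 5, x1 + x2 <= 6, x2 <= 4, x1 >= 0, x2 >= 0.
cons = [  # each row (a1, a2, b) encodes a1*x1 + a2*x2 <= b
    (1, 0, 5), (1, 1, 6), (0, 1, 4), (-1, 0, 0), (0, -1, 0),
]

def feasible(x):
    return all(a1*x[0] + a2*x[1] <= b + 1e-9 for a1, a2, b in cons)

best, best_x = None, None
for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
    det = a1*c2 - a2*c1
    if abs(det) < 1e-12:
        continue                      # parallel constraint lines
    # Cramer's rule for the intersection of the two boundary lines
    x = ((b*c2 - a2*d) / det, (a1*d - b*c1) / det)
    if feasible(x):
        f = 10*x[0] + 20*x[1]
        if best is None or f > best:
            best, best_x = f, x
# optimum: f = 100 at (x1, x2) = (2, 4)
```

Sketching the region as in Probs. 67-72 and evaluating f at each corner gives the same answer by hand.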
Solutions for Chapter 22: Unconstrained Optimization. Linear Programming
Full solutions for Advanced Engineering Mathematics, 9th Edition
ISBN: 9780471488859
Chapter 22: Unconstrained Optimization. Linear Programming includes 25 full step-by-step solutions.

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.

Complex conjugate
The conjugate is z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|^2.
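A quick check of the identity z z̄ = |z|^2 with Python's built-in complex type (the value 3 + 4i is an arbitrary example):

```python
# (a + ib)(a - ib) = a^2 + b^2, which is purely real and equals |z|^2
z = 3 + 4j
w = z * z.conjugate()
assert abs(w.imag) < 1e-12               # product is real
assert abs(w.real - abs(z)**2) < 1e-9    # equals |z|^2 = 25 here
```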

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
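For a 2×2 matrix, det(A − λI) = 0 reduces to λ^2 − (trace)λ + (det) = 0, so the eigenvalues come from the quadratic formula. A sketch on an arbitrary symmetric example:

```python
import math

# Eigenvalues of a 2x2 matrix from its characteristic polynomial
# lambda^2 - trace*lambda + det = 0; the matrix is an arbitrary example.
A = [[2.0, 1.0],
     [1.0, 2.0]]
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = math.sqrt(tr*tr - 4*det)             # real for symmetric A
lams = [(tr + disc) / 2, (tr - disc) / 2]   # eigenvalues 3 and 1 here

# check Ax = lambda*x for the eigenvector x = (1, 1) of lambda = 3
x = [1.0, 1.0]
Ax = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
assert Ax == [3.0, 3.0]
```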

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
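A minimal elimination sketch producing A = LU for a small matrix with no row exchanges (the matrix is an arbitrary example); the multiplier from the entry below is stored in L:

```python
# One elimination step on a 2x2 matrix, recording the multiplier in L.
A = [[2.0, 1.0],
     [4.0, 5.0]]
l21 = A[1][0] / A[0][0]               # multiplier = 4/2 = 2
U = [[2.0, 1.0],
     [0.0, A[1][1] - l21 * A[0][1]]]  # subtract 2 * (row 1) from row 2
L = [[1.0, 0.0],
     [l21, 1.0]]

# verify A = L U entry by entry
for i in range(2):
    for j in range(2):
        lu_ij = sum(L[i][k] * U[k][j] for k in range(2))
        assert abs(lu_ij - A[i][j]) < 1e-12
```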

Exponential e^(At) = I + At + (At)^2/2! + ···
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
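The series can be checked numerically. For A = [[0, 1], [−1, 0]] the exact exponential is the rotation [[cos t, sin t], [−sin t, cos t]], so the partial sums can be compared against math.cos and math.sin (the matrix and t = 1 are arbitrary choices for the sketch):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=25):
    # e^{At} = I + At + (At)^2/2! + ... truncated after `terms` terms
    E = [[1.0, 0.0], [0.0, 1.0]]     # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
    At = [[a * t for a in row] for row in A]
    for k in range(1, terms):
        P = mat_mul(P, At)
        P = [[p / k for p in row] for row in P]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

A = [[0.0, 1.0], [-1.0, 0.0]]
t = 1.0
E = expm(A, t)
# e^{At} is the rotation [[cos t, sin t], [-sin t, cos t]]
assert abs(E[0][0] - math.cos(t)) < 1e-9
assert abs(E[0][1] - math.sin(t)) < 1e-9
```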

Hermitian matrix A^H = Ā^T = A.
Complex analog (a_ji = ā_ij) of a symmetric matrix.

Left nullspace N (AT).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
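A sketch comparing two of the equivalent definitions on an arbitrary 2×2 example: the i, j entry formula, and "columns times rows" (AB as a sum of rank-one pieces):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
n = 2

# definition 1: entry formula, (AB)_ij = sum_k a_ik * b_kj
AB1 = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]

# definition 2: AB = sum over k of (column k of A)(row k of B)
AB2 = [[0] * n for _ in range(n)]
for k in range(n):
    for i in range(n):
        for j in range(n):
            AB2[i][j] += A[i][k] * B[k][j]   # rank-one contribution

assert AB1 == AB2 == [[19, 22], [43, 50]]
```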

Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).

Network.
A directed graph that has constants c1, ..., cm associated with the edges.

Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax̂) = 0.
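A sketch fitting a least squares line b = c + d·t through three points via the normal equations; the data points are an arbitrary example, and the 2×2 system is solved by Cramer's rule:

```python
# Least squares line b = c + d*t through (0,1), (1,2), (2,4) via A^T A x = A^T b.
ts = [0.0, 1.0, 2.0]
bs = [1.0, 2.0, 4.0]
A = [[1.0, t] for t in ts]           # columns of A: ones, t

# form A^T A (2x2) and A^T b (2-vector)
AtA = [[sum(A[i][r]*A[i][c] for i in range(3)) for c in range(2)]
       for r in range(2)]
Atb = [sum(A[i][r]*bs[i] for i in range(3)) for r in range(2)]

# solve the 2x2 system by Cramer's rule
det = AtA[0][0]*AtA[1][1] - AtA[0][1]*AtA[1][0]
c = (Atb[0]*AtA[1][1] - AtA[0][1]*Atb[1]) / det
d = (AtA[0][0]*Atb[1] - Atb[0]*AtA[1][0]) / det
# best-fit line for this data: b = 5/6 + 1.5*t
```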

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^(−1). Preserves length and angles: ‖Qx‖ = ‖x‖ and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
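A rotation matrix is the standard example; a sketch checking that it preserves length and has orthonormal columns (the angle and vector are arbitrary):

```python
import math

# A 2x2 rotation Q is orthogonal: Q^T = Q^{-1} and ||Qx|| = ||x||.
theta = 0.7                          # arbitrary angle
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
x = [3.0, 4.0]
Qx = [Q[0][0]*x[0] + Q[0][1]*x[1],
      Q[1][0]*x[0] + Q[1][1]*x[1]]
assert abs(math.hypot(*Qx) - math.hypot(*x)) < 1e-12  # length preserved

# columns are orthonormal: their dot product is 0, each has length 1
dot = Q[0][0]*Q[0][1] + Q[1][0]*Q[1][1]
assert abs(dot) < 1e-12
```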

Outer product uv^T
= column times row = rank one matrix.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with the diagonal of D positive.
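A sketch checking positive definiteness of an arbitrary symmetric 2×2 example two ways: positive pivots after elimination, and x^T Ax > 0 on a few sample vectors:

```python
A = [[2.0, 1.0],
     [1.0, 2.0]]

# pivots from elimination: first pivot a11, second a22 - a12^2/a11
pivot1 = A[0][0]                            # 2
pivot2 = A[1][1] - A[0][1]**2 / A[0][0]     # 2 - 1/2 = 1.5
assert pivot1 > 0 and pivot2 > 0

# energy test x^T A x > 0 on a few nonzero sample vectors
for x in [(1.0, 0.0), (1.0, -1.0), (-2.0, 3.0)]:
    energy = A[0][0]*x[0]**2 + 2*A[0][1]*x[0]*x[1] + A[1][1]*x[1]**2
    assert energy > 0
```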

Similar matrices A and B.
Every B = M^(−1)AM has the same eigenvalues as A.
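For a 2×2 example it suffices to check that B = M^(−1)AM has the same trace and determinant as A, since those two numbers determine the characteristic polynomial. The matrices below are arbitrary choices (M is an invertible shear):

```python
A = [[4.0, 1.0], [2.0, 3.0]]
M = [[1.0, 1.0], [0.0, 1.0]]
Minv = [[1.0, -1.0], [0.0, 1.0]]     # inverse of the shear M

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = mul(mul(Minv, A), M)             # similar matrix M^{-1} A M

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0]*X[1][1] - X[0][1]*X[1][0]
assert abs(trace(B) - trace(A)) < 1e-12   # both 7
assert abs(det(B) - det(A)) < 1e-12       # both 10
```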

Special solutions to As = 0.
One free variable is s_i = 1, other free variables = 0.

Transpose matrix A^T.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^(−1) are B^T A^T and (A^T)^(−1).
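A sketch verifying the reversal rule (AB)^T = B^T A^T on an arbitrary 2×2 example:

```python
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(X):
    # transpose: (X^T)_ij = X_ji
    return [[X[j][i] for j in range(2)] for i in range(2)]

# (AB)^T = B^T A^T  -- note the reversed order on the right
assert T(mul(A, B)) == mul(T(B), T(A))
```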

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.