UTA MAE 3360 - Class Notes - Week 14 (uploaded 06/02/2017)
Application of Math Principles to Engineering Problems
8th Edition
By Professor David A. Hullender
January 1, 2017

Preface

The objectives of this notebook are to introduce solution techniques for typical equations encountered in the analysis and simulation of engineering systems. When obtaining solutions to equations, it is very beneficial to focus on the common-sense details associated with the performance of the actual engineering system for which the equations are written. Consequently, throughout this text, equations for actual engineering systems are utilized so that it is possible to apply common sense to predict and approximate the solution before actually solving the equations. Knowing the approximate solution provides confirmation of the actual solution in the end.

From the very beginning of the notebook, computer algorithms in MATLAB will be utilized to obtain and plot solutions for typical equations encountered when solving engineering problems. Experience with MATLAB prior to reading this notebook is not necessary. Specific details and examples of using MATLAB for solving engineering equations and for simulating dynamic systems are included so that a reader not experienced with MATLAB can use the software effectively without having to be creative and start from scratch.

At the end of several of the sections, a list of previous quiz, homework, and exam problems is provided that utilizes some of the concepts introduced in that particular section. The solutions to almost all of the problems are provided; however, it is very important to test your understanding of the concepts by trying to work each problem without looking at the solution. Then and only then will you know your weaknesses; it is better to determine your weaknesses prior to an exam instead of during an exam!
Reviewing the solution to a problem before trying to work it without looking at the solution is a total waste of time; in my opinion, this is the number one mistake students make when preparing for exams. The second greatest mistake is preparing for an exam with other students; you are not learning what you don't know and, unfortunately, the exam is when you will find out what you don't know.

Table of Contents

Chapter 1 Basic Math Principles
1.1 Complex Numbers
1.2 Notation for Time Derivatives and Use of the 'D' Operator
  1.2.1 Notation for Time Derivatives
  1.2.2 Use of the 'D' Operator to Convert Differential Eqn's to Algebraic Eqn's
1.3 Eigenvalues of a System
  1.3.1 General Properties of Eigenvalues
  1.3.2 Using MATLAB to Compute Eigenvalues
1.4 Solving Simultaneous Algebraic Equations
  1.4.1 Manually Solving Simultaneous Equations
  1.4.2 Using Symbolic Math in MATLAB to Solve Simultaneous Equations

Chapter 2 Solving Differential Equations
2.1 Analytical Solutions of Differential Equations Using the Laplace Transform
  2.1.1 Linearizing Nonlinear Differential Equations
  2.1.2 The Definition of the Laplace Transform
  2.1.3 Properties of the Laplace Transform
  2.1.4 Final Value Theorem
  2.1.5 Initial Value Theorem
  2.1.6 Converting Differential Eqn's to Algebraic Eqn's Using the Laplace Transform
  2.1.7 Common Inputs and Their Laplace Transforms
    2.1.7.1 Step Functions and Constants
    2.1.7.2 Pulse and Impulse Functions
    2.1.7.3 Exponential Functions
    2.1.7.4 Periodic Functions
  2.1.8 Inverse Laplace Transform
    2.1.8.1 Inverse Laplace Transform Using Partial Fractions
      Obtaining Partial Fractions Using MATLAB
    2.1.8.2 Inverse Laplace Transform Using the Residue Theorem
      Shortcut for Complex Poles
    2.1.8.3 Using MATLAB to Plot the Inverse Laplace Transform without Getting an Equation for the Inverse Laplace Transform
  2.1.9 Modes of a System
2.2 Numerical Solutions to Differential Equations Using MATLAB
  2.2.1 General Considerations
  2.2.2 Using the 'impulse' Command in MATLAB
  2.2.3 Using the 'step' Command in MATLAB
    Obtaining DC Gain Using the 'dcgain' Command
  2.2.4 Using the Command 'lsim'
    Pulse Series Input
  2.2.5 Using the Command 'initial'
2.3 Using MATLAB to Get the System Response Using Numerical Integration
  2.3.1 Example Problems Using 'ode45' in MATLAB
    Deterministic Inputs
    Random Inputs
  2.3.2 Example Simulation Using Options and Event Functions with 'ode45'
2.4 Using MATLAB to Get the System Steady State Response for Periodic Inputs
  2.4.1 Frequency Response Analysis
  2.4.2 Examples of Frequency Response Function Plots
2.5 Using MATLAB to Get Lower Order Approximations for Transfer Functions
  2.5.1 Examples of Modal Approximation
  2.5.2 Transfer Function Approximations Using Inverse Freq. Domain Analysis
2.6 Converting Linear Continuous Differential Eqn's to Discrete Time Eqn's
  2.6.1 Converting 1st Order Differential Eqn's Using the Inverse Laplace Transform
  2.6.2 Conversion Using the Continuous-to-Discrete MATLAB Command, 'c2d'

Chapter 3 Expressing Differential Equations in State Variable Format
3.1 Representing an nth Order System by n First Order Differential Eqn's
3.2 Simulation Diagram Approach When There are Input Derivatives
3.3 General Matrix Format for State Variable Equations
3.4 State Variable Equations in Matrix Format if there are Input Derivatives
  3.4.1 Only 1st Order Input Derivatives
  3.4.2 Format Options for Input Derivatives Higher than 1st Order
    3.4.2.1 Simulation Diagram Approach
    3.4.2.2 Phase Variable Approach
    3.4.2.3 MATLAB Approach
    3.4.2.4 Example Demonstrating the Different Methods
    3.4.2.5 Example of Phase Variable Approach when the Numerator and Denominator Orders are Equal
3.5 Maintaining Access to Natural Variables When Creating State Variable Equations from Block Diagrams
3.6 SIMULINK

Chapter 4 Example Problems
Chapter 5 Solutions to Example Problems in Chapter 4
Chapter 6 Previous Quiz Problems and Solutions
Chapter 7 Previous Homework and Solutions
Chapter 8 Previous Exams and Solutions
Chapter 9 MATLAB M-files

Chapter 1 Basic Math Principles

1.1 Complex Numbers

A complex number can be expressed in exponential format, i.e.

y = a + jb = M e^(jφ)

where

M = sqrt(a² + b²) and φ = angle(a + jb)

Note, the angle lies in one of four quadrants. Simple use of the inverse tangent on a calculator may give the incorrect answer. In MATLAB, the command 'angle' applied to a complex number gives the correct angle for any of the four quadrants. Consider the examples below.

Example:

y = −4 − j3 = 5 e^(j3.785)
M = sqrt((−4)² + (−3)²) = 5
φ = angle(−4 − j3) = π + tan⁻¹(3/4) = 3.785 rad

[Figure: the point −4 − j3 plotted in the complex plane, with magnitude M and angle φ = 3.785 rad measured from the positive real axis.]

Example: The solution y(t) to a differential equation is found to be the following:

y(t) = 3 + 25M e^(−4t) sin(3t + ∅)

where

M = |(−2 + j3)/(−4 + j3)| and ∅ = angle[(−2 + j3)/(−4 + j3)]

What are M and ∅?
M = |−2 + j3| / |−4 + j3| = sqrt((−2)² + 3²) / sqrt((−4)² + 3²) = sqrt(13)/sqrt(25) = 0.7211

∅ = angle(−2 + j3) − angle(−4 + j3) = 2.159 − 2.498 = −0.339 rad

1.2 Notation for Time Derivatives and Use of the 'D' Operator

1.2.1 Notation for Time Derivatives

Throughout this book, time derivatives of functions will be denoted with 'dots' over the function. For example, the first three time derivatives of a function y(t) are

ẏ ≡ dy/dt,  ÿ ≡ d²y/dt²,  y⃛ ≡ d³y/dt³

1.2.2 Use of the 'D' Operator to Convert Linear Differential Equations to Algebraic Equations

The 'D' operator is used to express a linear differential equation with an input and with no initial conditions as an algebraic equation. In 'D' operator format, the first time derivative of a function f(t) is denoted by Df(D), where f(D) denotes the function f(t) in 'D' operator format. The 2nd derivative of f(t) in D operator format is denoted by D²f(D), etc. Consider the following 3rd order linear ordinary differential equation for the output y(t) of a system with input u(t):

2y⃛ + 16ÿ + 74ẏ + 100y = 6ü + 28u̇ + 150u

Initial conditions y(0⁻), ẏ(0⁻), ÿ(0⁻), u(0⁻), and u̇(0⁻) are in general needed, as well as the input function u(t), to obtain the solution for y(t). If all of these initial conditions are zero, then the 'D' operator can be used to express the differential equation as an algebraic equation. Note, if any of the initial conditions are nonzero, then the Laplace operator 's' will be required instead of 'D'. In 'D' operator format, the differential equation above is written as follows:

(2D³ + 16D² + 74D + 100) y(D) = (6D² + 28D + 150) u(D)

Solving for y(D) gives

y(D) = [(6D² + 28D + 150) / (2D³ + 16D² + 74D + 100)] u(D)

The function of 'D' in the brackets is called the transfer function relating the input u to the output y.
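The magnitude and angle computations from the Section 1.1 examples can be checked numerically. MATLAB's `abs` and `angle` commands have direct counterparts in Python's standard `cmath` module; the sketch below uses Python purely for illustration (it is not part of the notebook's MATLAB workflow) and reproduces the numbers above.

```python
import cmath

# Example 1: y = -4 - j3 expressed as M*exp(j*phi)
y = complex(-4, -3)
M = abs(y)             # sqrt((-4)^2 + (-3)^2) = 5
phi = cmath.phase(y)   # principal angle, about -2.498 rad
# 3.785 rad in the text is the equivalent positive angle:
print(M, phi, phi + 2 * cmath.pi)

# Example 2: M and phi for (-2 + j3)/(-4 + j3)
num, den = complex(-2, 3), complex(-4, 3)
M2 = abs(num) / abs(den)                    # sqrt(13)/sqrt(25) = 0.7211
phi2 = cmath.phase(num) - cmath.phase(den)  # 2.159 - 2.498 = -0.339 rad
print(round(M2, 4), round(phi2, 4))
```

Note that `cmath.phase`, like MATLAB's `angle`, returns the principal value in (−π, π]; the book's 3.785 rad and the computed −2.498 rad differ by exactly 2π and describe the same angle.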
Note that the numerator and denominator polynomial coefficients in the transfer function are simply the coefficients in the differential equation. The denominator of the transfer function is called the characteristic polynomial; the poles of the transfer function (roots of the characteristic polynomial) are called the eigenvalues of the system. Knowing the transfer function and eigenvalues of a system is very beneficial for analyzing time response dynamics for different system inputs.

In general, if u(t) is a constant, then the solution of the differential equation for y(t) will also become a constant as time goes to infinity; when y(t) becomes a constant, ẏ(t) will become zero, as will ÿ(t), y⃛(t), u̇(t), and ü(t). Thus, in the end (as time goes to infinity), the original differential equation above simplifies to

0 + 0 + 0 + 100y = 0 + 0 + 150u

This equation can be solved for the limiting value (also called the final value) of y(t), i.e.

y = (150/100) u

where u is the constant value of the input. For example, if u = 4, then the final value of y is 6.

1.3 Eigenvalues of a System

1.3.1 General Properties of Eigenvalues

The eigenvalues of a system define the time response characteristics of the system resulting from an external input or from nonzero initial conditions. The eigenvalues are obtained by factoring the characteristic polynomial, which is the denominator of the transfer function of the system. Eigenvalues will either be real or complex. The real parts of all of the eigenvalues of the system must be negative or the system will be unstable; thus, a single eigenvalue, real or complex, with a positive real part will cause the time response of the system to diverge to infinity.
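The final-value shortcut above (set every derivative to zero and solve) amounts to dividing the constant term of the transfer function numerator by the constant term of the denominator. A minimal sketch in Python (the helper name `dc_gain` is illustrative; in MATLAB the `dcgain` command covered in Section 2.2.3 performs this computation):

```python
def dc_gain(num, den):
    """Final value of y per unit of constant input u: the ratio of the
    constant (lowest-order) coefficients of the transfer function."""
    return num[-1] / den[-1]   # coefficient lists in descending powers

# Transfer function from Section 1.2.2: (6D^2+28D+150)/(2D^3+16D^2+74D+100)
gain = dc_gain([6, 28, 150], [2, 16, 74, 100])
print(gain)      # 1.5
print(4 * gain)  # final value of y for the constant input u = 4 -> 6.0
```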
A complex pair of eigenvalues, −r ± jω, corresponds to the following polynomial formats for a factor of the characteristic polynomial:

(s + r + jω)(s + r − jω) = (s + r)² + ω² = s² + 2rs + r² + ω² = s² + 2ζωn s + ωn²

where −r is the real part of the eigenvalue, ω is the imaginary part, also called the damped natural frequency, ζ is the damping ratio, and ωn is the undamped natural frequency. Note that r = ζωn. Also note that eigenvalues have time constants, τ; τ = |1/r|. Time constants determine how long it will take for the time response of the system to essentially end; it takes approximately 5 times the largest eigenvalue time constant for the response to be within 1% of its final value. So, when performing simulations for problems in which the solution becomes zero or a constant, a rough estimate of the final time should be five times the largest time constant, i.e.

approximate final time ≈ 5 τ_max

Example: A system has the following eigenvalues: −5, −2, −4 ± j3

The time constants of this system are 1/5 = 0.2 sec, 1/2 = 0.5 sec, and 1/4 = 0.25 sec. The largest time constant is 0.5 sec; thus, it will take about 5*0.5 = 2.5 seconds for this system to return to equilibrium following an input disturbance or nonzero initial condition. Usually, 'returning to equilibrium' means that the solution to the equations becomes constant.

For the complex pair of eigenvalues −4 ± j3, we can write

(s + 4 + j3)(s + 4 − j3) = (s + 4)² + 3² = s² + 8s + 25

Thus, ω = 3 rad/sec, ωn = sqrt(25) = 5 rad/sec, and ζ = 8/(2*5) = 0.8.

Eigenvalue problems in this notebook: 4.13, 4.16, 4.22, 4.25, 4.26, 4.27, 4.28, 4.29, 4.30, 4.36, 4.37, 4.40, 4.43, 5.6, 5.13, 5.16, 5.22, 5.25, 5.26, 5.27, 5.28, 5.29, 5.30, 5.33, 5.36, 5.37, 5.43, Q7, Q14, Q18, H3, H5, H9, H12, H13, H20, H21, H24, H25, H28, H34, H35, H43, E1C, E2A, E2C, E3B, E3D, E4B, E5A, E5C, E6A, E6C

1.3.2 Using MATLAB to Compute Eigenvalues

All MATLAB commands are defined using lowercase letters.
The best way to learn to use any command is to type 'help command-name'. Using 'help' works for all of the MATLAB commands. If you don't know the name of a command, type 'help' to get a list of the different categories of commands. For example, typing 'help' lists 'elfun' for elementary functions. To see a list of these elementary functions, type 'help elfun'.

As mentioned in Section 1.2.2, the transfer function defines how the output of a system responds to the input to that system. Also, the denominator of a transfer function is the characteristic polynomial, and the roots of this polynomial are called the eigenvalues of the system. If the numerator and denominator polynomials of a transfer function are known, then the transfer function can be entered into MATLAB. Consider the transfer function found in Section 1.2.2:

y(D) = [(6D² + 28D + 150) / (2D³ + 16D² + 74D + 100)] u(D)

The procedure for entering the coefficients of polynomials and creating transfer functions is shown below. Note, putting a semicolon at the end of a statement stops MATLAB from echoing the result; this is really important if the result consists of hundreds of numbers. In the MATLAB commands below, the polynomials are given the names 'num' and 'den'; the transfer function is given the name 'G':

>> num=[6 28 150];     % entering the numerator polynomial into MATLAB
>> den=[2 16 74 100];  % entering the denominator polynomial into MATLAB
>> G=tf(num,den)       % entering the transfer function into MATLAB

G = (6s² + 28s + 150) / (2s³ + 16s² + 74s + 100)

The 'damp' command gives the eigenvalues, damping ratios, and undamped natural frequencies.

>> damp(G)

Eigenvalue   Damping Ratio    Time Constant, sec   Undamped Natural Frequency, rad/sec
-2           not applicable   0.5                  not applicable
-3 ± j4      0.6              0.333                5

The time constants are computed using one over the absolute value of the real parts of the eigenvalues. For this example, the time constants are 1/2 and 1/3.
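The numbers in the 'damp' output above can be cross-checked by hand: each listed eigenvalue should make the characteristic polynomial zero, and the complex pair −3 ± j4 should give ωn = 5 rad/sec, ζ = 0.6, and a time constant of 1/3 sec. A quick numerical check, sketched in Python with only the standard library (the helper `polyval` is defined here for illustration, not a library call):

```python
import cmath

den = [2, 16, 74, 100]   # characteristic polynomial, descending powers of s

def polyval(coeffs, s):
    """Evaluate a polynomial (descending-order coefficients) at s by Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * s + c
    return result

# Each candidate eigenvalue from the 'damp' output should be a root
for eig in (-2, complex(-3, 4), complex(-3, -4)):
    assert abs(polyval(den, eig)) < 1e-9

s = complex(-3, 4)
wn = abs(s)            # undamped natural frequency: 5 rad/sec
zeta = -s.real / wn    # damping ratio: 3/5 = 0.6
tau = 1 / abs(s.real)  # time constant: 1/3 sec
print(wn, zeta, round(tau, 3))   # 5.0 0.6 0.333
```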
Time constants are positive numbers. Note, MATLAB will incorrectly give values for the damping ratio and frequency for real eigenvalues; real eigenvalues do not have a damping ratio or frequency. I have replaced those numbers with 'not applicable' in the MATLAB response to the 'damp' command shown above. The eigenvalues can also be obtained from the denominator polynomial using the command 'roots', i.e.

>> roots(den)

1.4 Solving Simultaneous Algebraic Equations

The concept of solving simultaneous algebraic equations is included because it will likely be the greatest source of solution challenges unless a simple, appropriate approach is used and thorough attention to detail is given at each step. When solving engineering problems, we often start with a combination of differential equations and algebraic equations that represents a simultaneous set of equations defining the dynamics of the system to be simulated. When appropriate, the differential equations can first be converted to algebraic equations using either the 'D' operator or the Laplace transform so that all of the equations are then algebraic.

1.4.1 Manually Solving Simultaneous Equations

There are numerous ways to solve simultaneous algebraic equations. For engineering problems, there is normally only a single variable representing the output of interest. Consequently, it is suggested that a simple step-by-step approach be used for simplifying and reducing the simultaneous equations down to a single equation in which the variable representing the output of interest is the only remaining unknown. Consider the following three simultaneous algebraic equations with unknowns x, y, and z:

(1) 2x + 3y − 10z = 4
(2) −5x + 2y + 8z = 1
(3) 3x − 4y + 2z = 0

These equations are considered simultaneous because you can't solve for x, y, or z without using all three equations.
Unless the solutions for all three unknowns are needed, an approach giving solutions for all of the unknowns, such as the matrix inverse approach, may be computationally burdensome and unnecessary.

Suppose x is the output of interest, meaning we only want a solution for x. The suggested approach is to solve one of the equations for one of the other unknowns and substitute that solution into the remaining equations; this process reduces the number of equations by one and the number of unknowns by one. The process is repeated until only a single equation remains. For instance, if we solve (1) for z, we get

z = 0.2x + 0.3y − 0.4

Substituting this result for z into (2) and (3) gives

(2) −5x + 2y + 8(0.2x + 0.3y − 0.4) = 1
(3) 3x − 4y + 2(0.2x + 0.3y − 0.4) = 0

or

(2) −3.4x + 4.4y = 4.2
(3) 3.4x − 3.4y = 0.8

We now have two equations in the two unknowns x and y. Since we want the solution for x, we can solve (2) for y and substitute the result into (3). This gives

x = 89/17 = 5.2353

It is not uncommon for one or more of the equations to contain a symbolic parameter in place of a numerical value. This is always the case when solving engineering problems with the 'D' operator or with the Laplace transform. In such cases, the solution will contain the symbolic variable. For instance, suppose our original three equations contain the 'D' operator, i.e.

(1) 2Dx + 3y − 10z = 4
(2) −5x + 2D²y + 8Dz = 1
(3) 3x − 4y + 2z = 0

If we use the same solution process while treating the symbol D as if it were a numerical value, we get the following:

x = (8D² + 64D + 17) / (4D³ + 62D² + 36D − 85)

Although the step-by-step solution process is the same, close attention to detail and checking each step is required to avoid costly errors. Being able to solve simultaneous algebraic equations is often critical for analyzing and simulating the dynamics of an engineering system.
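The substitution process above is exactly Gaussian elimination carried out by hand. Where exact arithmetic matters, the same steps can be scripted; the sketch below (Python's standard `fractions` module standing in for MATLAB's symbolic ratios) reproduces x = 89/17 ≈ 5.2353 for the three numerical equations.

```python
from fractions import Fraction

# Augmented matrix [A | b] for equations (1)-(3), with exact rational entries
M = [[Fraction(2),  Fraction(3),  Fraction(-10), Fraction(4)],
     [Fraction(-5), Fraction(2),  Fraction(8),   Fraction(1)],
     [Fraction(3),  Fraction(-4), Fraction(2),   Fraction(0)]]
n = 3

# Forward elimination: use row i to remove unknown i from the rows below it
for i in range(n):
    for r in range(i + 1, n):
        factor = M[r][i] / M[i][i]
        M[r] = [a - factor * b for a, b in zip(M[r], M[i])]

# Back substitution, starting from the last (single-unknown) equation
x = [Fraction(0)] * n
for i in reversed(range(n)):
    x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

print(x[0])         # 89/17
print(float(x[0]))  # about 5.2353
```

The exact ratio 89/17 matches the answer the MATLAB symbolic 'solve' command returns in Section 1.4.2.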
It is important to note that before starting the solution process, one should always confirm that the number of unknowns matches the number of equations. If there are more unknowns than equations, the solution for the output of interest will be obtained in terms of the extra unknowns; in general, a shortage in the number of equations means the modeling process for the system is incomplete.

Example: Consider the mass/spring/damper system shown below; the displacement of the mass is denoted by y(t) and the input force that moves the mass is denoted by f(t).

[Figure: mass m with force input f(t) and displacement y(t), attached to a wall through spring k and damper c.]

An equation for each component can be written using modeling techniques. Note, the use of modeling techniques to obtain equations for engineering components and systems will be discussed in more depth in a later section of this notebook. The algebraic equation for the force F1 on the spring in terms of the spring constant k = 100 N/m is

F1 = 100y

The differential equation for the force F2 on the damper in terms of the viscous damping coefficient c = 2 Ns/m is as follows:

F2 = 2ẏ

Using the 'D' operator, this differential equation can be converted to an algebraic equation, i.e.

F2(D) = 2D y(D)

And finally, the differential equation for the acceleration of the mass m = 4 kg is

ÿ = (f − F1 − F2)/4

Converting this differential equation to an algebraic equation gives

D²y(D) = [f(D) − F1(D) − F2(D)]/4

We started with one algebraic equation and two differential equations and used the 'D' operator to convert the model to three algebraic equations with three unknowns: F1(D), F2(D), and y(D). Don't forget, f(D) is the input, which will be a known function of time. Using the step-by-step reduction process to obtain a single equation, the equation for the output of interest y(D) is obtained, i.e.
y(D) = [1 / (4D² + 2D + 100)] f(D)

or

y(D) = [1 / (mD² + cD + k)] f(D)

This single algebraic equation defines how the input force f(t) affects the displacement y(t) of the mass. Just how to use this equation for analysis purposes is not important at this point. What is important, however, is that we were able to convert the differential equations to algebraic equations and then use simple algebraic reduction of the equations to get a single equation for the output of interest.

For more complex systems consisting of many complex simultaneous equations, we can use Symbolic Math in MATLAB to obtain the single-equation solution.

Problems requiring this concept: 4.3, 4.20, 5.3, 5.21, Q3, Q11, H9, H23, H24, H25, H34, H43, E1a, E1c, E4a

1.4.2 Using Symbolic Math in MATLAB to Solve Simultaneous Equations

Symbolic math allows a computer to obtain a solution in terms of symbols that may not have been assigned numerical values. Here we demonstrate how to use the symbolic command 'solve' to get the solution for the three algebraic equations in Section 1.4.1. The symbolic statement below provides the solution for all three unknowns x, y, and z.

>> H=solve('2*x+3*y-10*z=4','-5*x+2*y+8*z=1','3*x-4*y+2*z=0','x,y,z')

Note that each equation has been entered between single quotation marks in terms of the symbolic variables x, y, and z, which are the three unknowns. Also, the three unknowns are listed between single quotation marks at the end of the statement. To see the solution for x, type 'H.x', i.e.

>> H.x

ans = 89/17

Since 'solve' is a symbolic math command, answers will be displayed as a ratio of two whole numbers. To see the answer in floating point arithmetic, you must first specify the number of digits to be displayed and then use the command 'vpa', i.e.

>> digits(5)
>> vpa(H.x)

ans = 5.2353

Example: It is not uncommon for equations to contain parameters that are symbolic. The first step is to declare which parameters are to be symbolic.
For the mass/spring/damper system in Section 1.4.1, note that the answer was obtained in terms of m, c, k, and D. Realizing this in advance alerts us to begin the solution process by declaring these to be symbolic. This is done with the following MATLAB command, where 'd' has been used in place of 'D' to avoid a conflict, since 'D' is used in the symbolic algorithms. The 'syms' command declares that the listed symbols may not have numerical values.

>> syms d m c k

The next step is to use the MATLAB 'solve' command to get the solution. For the mass/spring/damper example problem, this is done as follows:

>> H=solve('F1=k*y','F2=c*d*y','m*d^2*y=f-F1-F2','y,F1,F2');

Note, as before, that each of the equations has been entered between single quotes. Following the equations, the three unknowns are also listed between single quotes. Executing these two commands gives the solution H, which contains the solution for all three unknowns. The solution for y alone is stored in the computer memory as H.y; using the 'pretty' command prints the solution in a more recognizable format, i.e.

>> pretty(H.y)

which gives

y = [1 / (m d² + c d + k)] f

Since m, c, and k are symbolic, it is possible to assign numerical values to each. This is done with the 'subs' command. Note, if the numerical values are not simple integers, the 'digits' and 'vpa' commands may be needed to make the results presentable.
For instance, if m = 4, c = 2, and k = 100:

>> Y=subs(H.y,[m c k],[4 2 100])

Y = f/(4*d^2+2*d+100)

>> pretty(Y)

Y = [1 / (4d² + 2d + 100)] f

Additional commands that are useful when working with symbolic variables are (1) 'collect' for grouping like powers of a symbol such as 'd'; (2) 'numden' for getting the numerator and denominator polynomials of a symbolic transfer function; (3) 'poly2sym' for converting a vector of polynomial coefficients to a symbolic polynomial; (4) 'sym2poly' for converting a symbolic polynomial to a vector of coefficients; and (5) 'int8' and 'int16' for converting and rounding off numbers with a decimal point to integers. For other useful symbolic commands, type 'help symbolic'.

Pile Driving Example

Consider the following three-lumped-mass approximation model for a pile being driven into the ground with input force F(t); the output of interest is the force f(t) exerted on the ground resulting from the input force F(t).

[Figure: three lumped masses m stacked vertically, connected by parallel spring/damper pairs k and b; displacements z, v, and w; input force F(t) at the top and ground force f at the bottom. m = 10 kg, b = 5 Ns/m, k = 1000 N/m.]

It can be shown that the equations representing this model, for displacements z, v, and w defined to be zero at equilibrium and assuming zero displacement at the bottom end of the pile, are as shown below. The ground force f is the force that forces the bottom end of the pile into the ground. The output of interest to us is the ground force defined in the last equation.

m z̈ + b ż + k z = b v̇ + k v + F
m v̈ + 2b v̇ + 2k v = b ż + k z + b ẇ + k w
m ẅ + 2b ẇ + 2k w = b v̇ + k v
f = 3mg + b ẇ + k w

The '3mg' term represents a second input to the problem and will be neglected, since it is relatively small compared to the impact force F(t) and since we only want the transfer function relating f to the input F.

Use the 'D' operator to convert the differential equations to algebraic equations and then use symbolic math to get the transfer function relating the ground force f(t) to the input force F(t).
Note, 'd' is used instead of 'D' to avoid conflict with symbolic notation for differentiation.

% Symbolic math demo: Pile Driving Model
clear all        % clears memory
format short e   % numbers to be shown in scientific format
syms m b k d F   % parameters to be assigned numerical values or used in other symbolic commands
G=solve('(m*d^2+b*d+k)*z=(b*d+k)*v+F','(m*d^2+2*b*d+2*k)*v=(b*d+k)*z+(b*d+k)*w',...
    '(m*d^2+2*b*d+2*k)*w=(b*d+k)*v','f=(b*d+k)*w','z,v,w,f');
Gf=collect(G.f,d);               % group the powers of d
Gf=collect(Gf,F);                % group all terms multiplied by F
Gf=subs(Gf,[m b k],[10 5 1000]); % substitute values for m, b, and k
digits(4)                        % specifies the number of digits shown in answers
Gf=vpa(Gf);                      % converts the numbers in Gf from integers to numbers with a decimal
pretty(Gf)                       % shows Gf in standard transfer function format
[num,den]=numden(Gf/F);          % separates the num. and denom. and removes F from the numerator
Num=sym2poly(num);               % converts the symbolic num. to the standard coefficient vector
Den=sym2poly(den);               % converts the symbolic denom. to the standard coefficient vector
H=tf(Num,Den)                    % generates the standard transfer function

Executing this m-file gives the following non-symbolic transfer function:

H =

              6.25 s^3 + 3750 s^2 + 750000 s + 5e07
  --------------------------------------------------------------------------
  50 s^6 + 125 s^5 + 25088 s^4 + 30011 s^3 + 3.004e06 s^2 + 750000 s + 5e07

Note that the answer is in terms of 's' instead of 'D'. This is because transfer functions in MATLAB are expressed in terms of the Laplace operator 's' instead of the 'D' operator. In this case, it does not matter whether 's' or 'D' is used, since there are no initial conditions. Use of the Laplace transform is addressed in the next section of this notebook.

Chapter 2.
Solving Differential Equations

2.1 Analytical Solutions of Differential Equations Using the Laplace Transform

Obtaining an 'analytical' solution to a differential equation means getting an equation for the output of interest as a function of time. Contrast this with obtaining a 'numerical' solution, in which only a plot of the output of interest as a function of time is obtained. In reality, it is almost always the case that a plot of the solution is desired in the end; consequently, even if an equation for the solution is obtained, a computer will be used to plot the equation. So, if only a plot is desired in the end, it might seem burdensome to first get an equation and then turn around and plot that equation when the plot could have been obtained numerically in a single step.

When using a computer to obtain solutions to differential equations, it is very important, when possible, to have an estimate of what the plot of the solution should look like in advance. It is a huge risk to simply believe that the computer solution is correct. Consequently, knowing how to obtain an analytical solution, even if the solution is not carried out to the very end, provides insight into what the solution should look like. For this reason, in-depth coverage of obtaining analytical solutions is included in this document.

In general, for 2nd or higher order differential equations, obtaining an analytical solution requires that the differential equation be linear. If it is not linear, either numerical integration is required, resulting in a numerical solution, or the differential equation must be linearized if an analytical solution is desired. The following sections of this document pertain to the use of the Laplace transform to get analytical solutions to linear differential equations.
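As a tiny illustration of the analytical-versus-numerical distinction, consider ẏ + 2y = 0 with y(0) = 1 (a hypothetical first-order example chosen here for illustration, not one of the notebook's problems). The analytical solution is the equation y(t) = e^(−2t); a numerical solution produces only sampled values, which can then be checked against the known equation, exactly the kind of sanity check this section recommends. Sketched in Python with simple Euler integration:

```python
import math

# Numerically integrate y' = -2*y, y(0) = 1, with fixed Euler steps
dt, y, t = 0.001, 1.0, 0.0
samples = []
while t < 2.0:              # about 5 time constants (tau = 1/2 sec)
    y += dt * (-2.0 * y)    # Euler update: y_new = y + dt * y'
    t += dt
    samples.append((t, y))

# Compare every sample against the analytical solution y(t) = exp(-2t)
worst = max(abs(y - math.exp(-2 * t)) for t, y in samples)
print(worst)   # small: the numerical curve tracks the known equation
```

If the numerical curve did not track the analytical estimate, that would flag an error in the simulation rather than a property of the system.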
2.1.1 Linearizing Nonlinear Differential Equations

This section of the notebook pertains to the use of the Laplace transform to obtain analytical solutions for ordinary linear differential equations. Just as was the case for the 'D' operator, the Laplace transform enables us to convert a differential equation to an algebraic equation. However, the use of the Laplace transform does not require that the initial conditions be zero. To Laplace transform an ordinary differential equation, the differential equation must be linear. Quite often, differential equations for engineering systems contain nonlinear functions, complicating or making it impossible to get analytical solutions. If an approximate solution is acceptable, it is often possible to replace a nonlinear function within a differential equation with a straight-line approximation, enabling us to use the Laplace transform. The difficulty is not in getting the straight-line approximation but rather in determining the best straight line to use for the approximation. For example, consider the following nonlinear differential equation for x(t) with initial condition x(0⁻) = 1. The minus sign on the zero, 0⁻, refers to time just before t = 0. Initial conditions are always found at t = 0⁻, not at t = 0 or at t = 0⁺. This is very important to remember.

ẋ + x³ = 8,  x(0⁻) = 1

This equation is nonlinear because of the x³ term. The input to this equation is a constant equal to 8. It is straightforward to demonstrate that the solution for x will start at the initial condition of 1 and end at a constant final value: as time goes to infinity, ẋ becomes zero, which allows us to solve x³ = 8 for x(t) as t → ∞, i.e. x(∞) = 2. Thus, a justifiable linearization approach is to obtain a straight-line approximation for x³ using the two points x = 1 and x = 2, and then replace x³ in the differential equation with the equation for the straight line.
The result is

x³ ≈ 7x − 6

Substituting for x³ in the differential equation gives a linear approximation of the original differential equation, i.e.

ẋ + (7x − 6) = 8,  x(0⁻) = 1

or

ẋ + 7x = 14

It is advised that the linearized differential equation always be checked to confirm that the linearization points are satisfied. Setting ẋ = 0 gives 7x = 14, or x = 2, which is correct. The initial value theorem (see the section on Laplace transforms) or the actual solution x(t) is needed to check that x(t) at t = 0 equals 1.

So, for differential equations with solutions starting at an initial condition and ending at a different value, the best straight line to use when linearizing the nonlinear function is a line between the starting and ending values. On the other hand, it is possible for solutions of differential equations to always remain close (in the neighborhood of) certain values. For example, consider the mass/spring/damper system in Section 1.4.1. Suppose the input force is generated by hitting the mass with a hammer. Intuitively, the impact will cause the mass to accelerate to the left, but eventually the motion will damp out with the mass back in its original position before the impact. Thus, the starting and ending values of the position of the mass are the same. In such cases, it may be beneficial to use the point and slope method for getting the straight-line approximation of the nonlinear function.

For example, suppose the initial condition of the previous nonlinear differential equation fluctuates but is always very close to (in the neighborhood of) the final value of 2, e.g.

1.98 ≤ x(0⁻) ≤ 2.02

For such cases, a straight-line approximation using the point x = 2 and the slope of the function at x = 2 is preferred, i.e.

x³ at x = 2 is 2³ = 8, and [dx³/dx] at x = 2 is 3(2)² = 12

The result will be

x³ ≈ 12x − 16

It is important to confirm that this approximation for x³ gives x³ = 8 at x = 2.
Substituting this approximation into the original differential equation gives

ẋ + 12x = 24

Suggestions for getting straight line approximations for nonlinear equations

For obtaining straight-line approximations, consider the following thought process. Suppose we want a straight-line approximation for a nonlinear function of x, f(x), i.e.

f(x) ≈ cx + d

Regardless of the approach, it is beneficial to first sketch the function to be approximated with your best guess for the location of the line.

(1) The first approach uses two points on the function f(x): the line cx + d passes through (x₁, f(x₁)) and (x₂, f(x₂)). (Sketch: the curve f(x) with the straight line drawn through the two points.) In this case, solve two equations for the two unknowns c and d, i.e.

f(x₁) = cx₁ + d
f(x₂) = cx₂ + d

Example: Obtain a linear approximation for y³ using the two-point method for the points y = 0 and y = 0.1. Be sure to start by sketching y³ and noting the two points for your straight line. The two equations are

0 = 0c + d  and  0.1³ = 0.1c + d

Solving gives c = 0.01 and d = 0. Thus, for 0 ≤ y ≤ 0.1,

y³ ≈ 0.01y + 0

(2) The second approach uses one point on the function f(x) and the slope at that point. (Sketch: the curve f(x) with the straight line cx + d tangent to it at x₁.) In this case, use the truncated Taylor series approach to getting c and d, i.e.

f(x) ≈ cx + d = f(x₁) + f′(x₁)[x − x₁]

where c = f′(x₁) is the derivative (slope) of f(x) with respect to x at x = x₁, and d = f(x₁) − x₁f′(x₁).

Example: Obtain a straight-line approximation for y³ for values of y in the neighborhood of y = 0.1. Writing the first two terms of the Taylor series for y³ gives

y³ ≈ 0.1³ + 3(0.1)²(y − 0.1) = 0.001 + 0.03(y − 0.1) = 0.03y − 0.002

Example: Water flows into and out of the water tank shown below.
The differential equation for the height H of the water in the tank is

Ḣ + 4√H = 12

Obtain a linear approximation for this differential equation by obtaining straight-line approximations for √H for the two different sets of initial conditions given below. Note, the equilibrium height Hₑ of the water in the tank is found by setting Ḣ = 0, i.e. Hₑ = (12/4)² = 9. This is the height where the flow rate out of the tank equals the flow rate into the tank.

(a) Assume the initial height of the water in the tank is 16. The height will start at 16 and end at 9. So, we want to approximate √H between the points 16 and 9 using the two-point method.

4 = 16c + d
3 = 9c + d

Solving for c and d gives √H ≈ 0.14286H + 1.71429. It is wise to check the result. For H = 9, 0.14286*9 + 1.71429 = 3.00003. For H = 16, 0.14286*16 + 1.71429 = 4.00005.

(b) Assume the initial height of the water in the tank is between 8.99 and 9.01. Since the equilibrium height is 9, the height of the water will always be in the neighborhood of 9. Thus, the point and slope method at H = 9 will be used to approximate √H.

√H ≈ √9 + (1/(2√9))(H − 9) = 0.16666H + 1.5

Checking the result: for H = 9, √H ≈ 0.16666*9 + 1.5 = 2.99994.

Using the c and d from (a) or (b), in either case, the linear approximation for the differential equation is

Ḣ + 4(cH + d) = 12  or  Ḣ + 4cH = 12 − 4d

Problems utilizing this concept: 4.2, 4.25, 4.26, 5.2, 5.25, 5.26, Q2, Q8, H8, E1c, E3b, E3d

2.1.2 The Definition of the Laplace Transform

Consider some general function of time f(t). The integral definition of the Laplace transform of f(t) is as follows:

L[f(t)] = F(s) = ∫₀^∞ f(t)e^(−st) dt

where t denotes time and s is the Laplace transform variable. Note that the integration starts at time t = 0; thus, the function f(t) prior to t = 0 does not change the resulting Laplace transform.
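The integral definition can be checked numerically for any specific f(t). The sketch below (Python, used only as a scratch calculator alongside the notebook's MATLAB) truncates the integral at a large time T and compares the result against the known transform of f(t) = e^(−2t), which is 1/(s + 2):

```python
import math

def laplace_numeric(f, s, T=50.0, n=200000):
    # trapezoidal approximation of the truncated integral of f(t)*exp(-s*t), 0..T
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

s = 1.0
approx = laplace_numeric(lambda t: math.exp(-2.0 * t), s)
exact = 1.0 / (s + 2.0)
print(approx, exact)   # both approximately 0.3333
```

Truncating at T = 50 is harmless here because the integrand has decayed to essentially zero long before then; the same check works for any input whose transform is listed in section 2.1.7.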
2.1.3 Properties of the Laplace Transform

Laplace transform of derivatives considering initial conditions:

L[df/dt] = sF(s) − f(0⁻)
L[d²f/dt²] = s²F(s) − sf(0⁻) − ḟ(0⁻)
L[d³f/dt³] = s³F(s) − s²f(0⁻) − sḟ(0⁻) − f̈(0⁻)

Note that the Laplace transform of derivatives gives the same result as using the 'D' operator if all initial conditions are zero; the only difference is that 's' is used instead of 'D'. Note also that the Laplace transforms of the derivatives contain initial condition terms at time t = 0⁻.

(Sketch: a function f(t) that jumps at t = 0, with value f(0⁻) just before the jump and f(0⁺) just after.)

It is important to note that a function of time may have different values at times 0⁺ and 0⁻. However, for most engineering problems, there will be no difference in the variables at t = 0⁻ and 0⁺. Differences are caused by impulse inputs. For example, think of a bat hitting a baseball. At time t = 0⁻ the speed of the ball may be 100 mph towards the batter; at time t = 0⁺ the speed of the ball may be 200 mph away from the batter. The bat hitting the ball is effectively an impulse force input to the ball, instantaneously changing the velocity of the ball. The subject of where to find the initial conditions is controversial; however, initial conditions for real physical engineering systems are always found at time t = 0⁻ regardless of what you may read in another book or what someone tells you.

2.1.4 Final Value Theorem

If you know the Laplace transform of a variable, it can be used to find the final value of that variable using the final value theorem. That is, the final value theorem is used to compute f(∞), which is the limit of f(t) as t goes to infinity, using the Laplace transform of f(t), F(s). The equation for the final value theorem is

f(∞) = lim(t→∞) f(t) = [sF(s)] at s = 0

It is important to note that the final value theorem can only be used if none of the poles of F(s) have positive real parts; otherwise, the final value theorem will give the wrong answer.
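The theorem is easy to sanity-check numerically. The sketch below uses a hypothetical transform invented for illustration, F(s) = 5/(s(s + 2)), whose inverse is f(t) = 2.5(1 − e^(−2t)); evaluating sF(s) at a very small s should match f(t) at a large t:

```python
import math

# Hypothetical example: F(s) = 5/(s(s+2)); its inverse is f(t) = 2.5(1 - e^(-2t)).
def sF(s):
    return s * 5.0 / (s * (s + 2.0))   # simplifies to 5/(s+2)

final_from_theorem = sF(1e-9)          # sF(s) as s -> 0
final_from_solution = 2.5 * (1.0 - math.exp(-2.0 * 20.0))  # f(t) at t = 20

print(final_from_theorem, final_from_solution)  # both approximately 2.5
```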
Consider the following examples:

F(s) = (s + 6)/(s(s + 10))
[sF(s)] at s = 0 = [(s + 6)/(s + 10)] at s = 0 = 0.6, a valid value of f(t) at t = ∞ since no poles have positive real parts.

F(s) = (s + 6)/(s(s − 10))
[sF(s)] at s = 0 = [(s + 6)/(s − 10)] at s = 0 = −0.6, an invalid value of f(t) at t = ∞ because of the pole with a positive real part.

F(s) = (s + 116)/(s(s + 4 + j10)(s + 4 − j10))
[sF(s)] at s = 0 = [(s + 116)/((s + 4 + j10)(s + 4 − j10))] at s = 0 = 116/(4² + 10²) = 1, a valid value of f(t) at t = ∞.

F(s) = (s + 116)/(s(s − 4 + j10)(s − 4 − j10))
[sF(s)] at s = 0 = [(s + 116)/((s − 4 + j10)(s − 4 − j10))] at s = 0 = 116/((−4)² + 10²) = 1, an invalid value of f(t) at t = ∞ because the complex poles have positive real parts.

2.1.5 Initial Value Theorem

The initial value theorem uses the Laplace transform F(s) to compute f(0⁺), which is the limit of f(t) as t approaches zero from the right. It is important to note that f(0⁺) may not equal f(0⁻) if the input to the system contains an impulse input.

f(0⁺) = lim(t→0⁺) f(t) = lim(s→∞) sF(s)

Example: F(s) = (2s + 4)/(s(s + 3))

f(∞) = lim(t→∞) f(t) = [sF(s)] at s = 0 = [(2s + 4)/(s + 3)] at s = 0 = 4/3

f(0⁺) = lim(t→0⁺) f(t) = lim(s→∞) sF(s) = lim(s→∞) (2s + 4)/(s + 3) = 2

The final value and initial value theorems are useful for checking the accuracy of Laplace transforms of differential equations and for checking the accuracy of inverse Laplace transform solutions.

Problems associated with the initial value: 4.4, 4.8, 4.21, 4.23, 5.4, 5.8, 5.20, 5.21, 5.23, 5.26, Q4, Q10, H4, H9, H18, H39, E1a, E1b, E2b, E2c, E3d, E5a, E5b, E5c, E6a, E6b, E6c

2.1.6 Converting Differential Equations to Algebraic Equations Using the Laplace Transform

Consider the following 2nd order linear differential equation for y with initial conditions y(0⁻) and ẏ(0⁻); the input to the differential equation is f(t).
my+ cy+ ky = f Laplace transforming both sides of the equation, considering that m, c and k are constants that  have not yet been assigned numerical values, gives:  m s Y s − sy − y +c sY s − y + k Y s = F s 2 − − − [ ( ) (0 ) (0 )] [ ( ) (0 )] ( ) ( ) Solving for Y(s) gives: ⎢⎣⎡+ + ⎥⎦⎤ ( ) 1 ⎢⎣⎡+ + + ⎥⎦⎤ − − y ms c my (0 )( ) (0 ) + + Y s2 2 = ms cs k F s ( ) ms cs k This algebraic equation is of the form Y(s) = H(s)F(s) + I(s) where H(s)is called the transfer function relating the Laplace transform of the input F(s)to the  Laplace transform of output Y(s), i.e. H s21 ( )⎢⎣⎡+ + =ms cs k ⎥⎦⎤ 22 Note that the transfer function does not contain any of the initial condition terms. Or, more  importantly, if only the transfer function is needed, it doesn’t matter what the initial conditions  are and the transfer function can be written simply using the coefficients of the differential  equation skipping the formal process of Laplace transforming the differential equation. I(s)exists in the equation above because of the nonzero initial condition terms; if the initial  conditions are zero, then I(s)is zero. F(s)remains a general expression until the input is  specified. The Laplace transforms for various common inputs are presented in the next section. 2.1.7 Common Inputs and Their Laplace transforms 2.1.7.1 Step Functions and Constants One of the most common or frequently experienced inputs is a step function.  f (t) A 0 time, t Using the integral definition of the Laplace transform, the Laplace transform for a step function  of magnitude A is found as follows: − A [ ] [ ]sA ∞ − −∞ ∫ − A F s Ae dt st st− = ( ) 00 1 = = 0 s e = s It is important to note that a constant of A for the input has the same Laplace transform. This is  because the integral for F(s) starts at t = 0; it doesn’t matter if the input was 0 or A prior to t=0.  
When performing simulations, however, there may be a difference in the simulation results for a step input and a constant input if the model contains input derivatives. For example, the Laplace transform of u̇ is sU(s) − u(0⁻); thus, if a differential equation contains the input derivative u̇, then the Laplace transform of the differential equation will contain u(0⁻), which may or may not be zero depending on the input being either a step (u(0⁻) = 0) or a constant (u(0⁻) ≠ 0).

(Sketch: a step input f(t) of magnitude A, with F(s) = A/s.)

Problems with step and constant inputs: 4.9, 4.10, 4.13, 4.16, 4.22, 4.24, 4.25, 4.26, 4.30, 4.36, 4.37, 4.43, 5.9, 5.10, 5.13, 5.16, 5.21, 5.24, 5.25, 5.26, 5.27, 5.30, 5.36, 5.37, 5.40, 5.41, 5.43, Q11, Q12, Q15, Q18, Q21, H5, H6, H7, H9, H15, H16, H20, H21, H26, H27, H32, H35, H36, H40, E2a, E2c, E3a, E3b, E3c, E3d, E5b, E5c, E6b, E6c

2.1.7.2 Pulse and Impulse Functions

An impulse function with area A, denoted Aδ(t), occurs at t = 0 and is zero elsewhere. A simple derivation for the Laplace transform of an impulse function is to consider a pulse of height A/b and width b (and thus area A). The Laplace transform for this pulse is obtained using the integral definition, i.e.

F(s) = ∫₀^b (A/b)e^(−st) dt + ∫_b^∞ 0·e^(−st) dt = (A/b)[−(1/s)e^(−st)]₀^b = −A[e^(−bs) − 1]/(bs)

The limit of f(t) as b → 0 is Aδ(t), an infinitely high function with a zero width; the area is still equal to A regardless of the size of b. The limit of F(s) as b → 0 is found using L'Hopital's rule (differentiating numerator and denominator with respect to b) to be A, i.e.

lim(b→0) (−A[e^(−bs) − 1]/(bs)) = [Ase^(−bs)/s] at b = 0 = A

Thus, the Laplace transform of Aδ(t) is A; if A = 1, f(t) is called a unit impulse, δ(t).

Problems with impulse inputs: 4.4, 4.6, 4.8, 4.24, 4.26, 4.41, 5.6, 5.8, 5.24, 5.25, 5.26, Q7, Q10, H4, H5, H9, H10, H11, H12, H15, H16, H19, H20, H21, H27, H36, H38, E3a, E3b, E3d, E4c, E5c, E6c

2.1.7.3 Exponential Functions

The Laplace transform for an exponential input, denoted by ae^(−bt), is
F(s) = a/(s + b)

2.1.7.4 Periodic Functions

Periodic functions repeat themselves in a finite period of time. Equations can be obtained for periodic functions using the Fourier series, which is an infinite sum of sine and cosine terms. See sections 2.2.4 and 2.4.1.

If f(t) = A sin(ωt), then F(s) = Aω/(s² + ω²). If f(t) = A cos(ωt), then F(s) = As/(s² + ω²).

2.1.8 Inverse Laplace Transform

2.1.8.1 Inverse Laplace Transform Using Partial Fractions

The use of partial fractions to get an inverse Laplace transform is preferred if a computer is available to factor the denominator and then compute the partial fractions. Note, if the denominator is not already factored, then a computer will definitely be required to factor any polynomial of 3rd or higher order. For cases where the denominator is already factored, a much more efficient method for getting the inverse Laplace transform is by means of the residue theorem, which will be explained in section 2.1.8.2.

Any Laplace transform can be expressed as the sum of its partial fractions. If the numerator order is not greater than the denominator order and there are no repeated poles, each partial fraction will be one of three possible forms. In general, the partial fractions for some general Laplace transform H(s) consist of a constant K, plus 2nd order partial fractions for complex poles, and 1st order partial fractions for real poles, i.e.

H(s) = K + (a₁s + b₁)/((s + r₁)² + ω₁²) + (a₂s + b₂)/((s + r₂)² + ω₂²) + ... + C₁/(s + p₁) + C₂/(s + p₂) + ...

The constant K only occurs if the order of the numerator polynomial equals the order of the denominator polynomial.
For example, expressing the following F(s) in partial fractions gives the three possible partial fraction formats:

F(s) = (2s⁴ + 37s³ + 214s² + 550s + 375)/(s⁴ + 11s³ + 55s² + 125s) = 2 + 3/s + 4/(s + 5) + (8s + 7)/(s² + 6s + 25)

Based on the discussion pertaining to inputs in section 2.1.7, the first partial fraction, 2, corresponds to an impulse with area (or magnitude) of 2. The second partial fraction corresponds to a constant (or step) with magnitude of 3. The third partial fraction corresponds to an exponential term, 4e^(−5t). The fourth term can be shown to correspond to a sine wave with an exponentially decaying amplitude.

The general form for a 2nd order partial fraction with complex poles is denoted by

(as + b)/((s + d)² + ω²)

and its inverse can be shown to be

ce^(−dt) sin(ωt + φ)

where c = √(((b − ad)/ω)² + a²) and φ = angle[(b − ad) + jaω].

(Sketch: a sine wave with an exponentially decaying amplitude, ce^(−dt) sin(ωt + φ).)

So, the inverse Laplace of (8s + 7)/(s² + 6s + 25) is 9.0588e^(−3t) sin(4t + 2.0591). Thus, using superposition, the total inverse is the sum of the individual partial fraction inverses, i.e.

f(t) = 2δ(t) + 3 + 4e^(−5t) + 9.0588e^(−3t) sin(4t + 2.0591)

So, if a Laplace transform is expressed in partial fraction format, then these four formulas can be used to get the inverse Laplace transform directly without a computer or tables.

Consider a second example.

L⁻¹[F(s)] = L⁻¹[5 + 2/s + 3/(s + 4)] = 5δ(t) + 2 + 3e^(−4t)

Consider the mass/spring/damper system shown below. This system was also introduced in section 1.4.2; however, in 1.4.2, b was equal to 5 and the output of interest was the ground force f(t).
Changing the output of interest to z(t), it can be shown that the Laplace transform Z(s) for the displacement z(t) for a unit impulse input is given by

Z(s) = (100s⁴ + 4000s³ + 70000s² + 600000s + 3×10⁶)/(1000s⁶ + 5×10⁴s⁵ + 1.1×10⁶s⁴ + 1.3×10⁷s³ + 9×10⁷s² + 3×10⁸s + 10⁹)

(Sketch: three masses M = 10 kg connected by springs k = 1000 N/m and dampers b = 100 N·s/m, with input force F(t) and displacements z, v, and w.)

This Laplace transform in partial fraction format can be shown to be

Z(s) = 0.05431/((s + 0.991)² + 4.34²) + 0.03493/((s + 7.77)² + 9.75²) + 0.01076/((s + 16.2)² + 7.81²)

Using the formula for complex partial fractions above, the inverse Laplace of Z(s) can be shown to be

z(t) = 0.0125e^(−0.991t) sin(4.34t) + 0.0036e^(−7.77t) sin(9.75t) + 0.0014e^(−16.2t) sin(7.81t)

Some of these examples demonstrate the need to be able to use a computer to compute the partial fractions if partial fractions are desired. The next section introduces a homemade MATLAB m-file 'pfract' for computing partial fractions; 'pfract' is listed in Chapter 13.

Problems involving partial fractions: H12, H36, E1a

Obtaining Partial Fractions Using MATLAB

In general, any Laplace transform can be expressed in partial fraction format. Although it is possible to obtain the partial fractions manually, a computer will be required to factor the denominator polynomial if the order is greater than 2. Since a computer is to be used to factor the denominator, it might as well be used at the same time to obtain the partial fractions. An m-file named 'pfract' for computing partial fractions has been written and is included with this document. This program factors the denominator and then computes the partial fraction numerator coefficients. The program is limited to first order (non-repeated) real or complex poles.
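Whatever tool produces a partial fraction expansion, the result can always be double-checked by evaluating both forms at a few test values of s; they must agree everywhere except at the poles. The sketch below (Python, used only as a calculator) checks the expansion from section 2.1.8.1 that way, and also recomputes the decaying-sine constants c and φ for the complex-pole term:

```python
import cmath, math

# F(s) from section 2.1.8.1 and its claimed partial fraction expansion
def F(s):
    return (2*s**4 + 37*s**3 + 214*s**2 + 550*s + 375) / (s**4 + 11*s**3 + 55*s**2 + 125*s)

def F_pf(s):
    return 2 + 3/s + 4/(s + 5) + (8*s + 7)/(s**2 + 6*s + 25)

for s in (1.0, 2.5, -0.7, 1 + 2j):
    assert abs(F(s) - F_pf(s)) < 1e-9   # the two forms agree at test points

# Constants for the inverse of (8s+7)/((s+3)^2 + 4^2): a=8, b=7, d=3, w=4
a, b, d, w = 8.0, 7.0, 3.0, 4.0
c = math.sqrt(((b - a*d)/w)**2 + a**2)
phi = cmath.phase(complex(b - a*d, a*w))
print(round(c, 3), round(phi, 3))   # 9.059 2.059
```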
For example, consider the following Laplace transform H(s):

H(s) = (10s³ + 6s² + 28s + 150)/(2s³ + 16s² + 74s + 100)

The MATLAB commands for getting the partial fractions and the results are

>> num=[10 6 28 150];
>> den=[2 16 74 100];
>> pfract(num,den) % 'pfract' is not a built-in MATLAB m-file but must be copied from Chapter 13 and saved.

H(s) = (10s³ + 6s² + 28s + 150)/(2s³ + 16s² + 74s + 100) = 5 + 1.118/(s + 2) + (−38.12s − 101.5)/(s² + 6s + 25)

Or

>> [R]=pfract(num,den)

R(1) = 5
R(2) = 1.118/(s + 2)
R(3) = (−38.12s − 101.5)/(s² + 6s + 25)

Note that one of each of the three possible partial fraction formats occurs in this example. The poles are −2 and −3 ± j4. The partial fraction '5' occurs because both the numerator and denominator are 3rd order.

2.1.8.2 Inverse Laplace Transform Using the Residue Theorem

The residue theorem is the simplest way to get the inverse Laplace transform if you do not already have the function in partial fraction format and if the denominator is already factored; the numerator order must be at least one less than the denominator order. For cases where the numerator order equals the order of the denominator, long division must first be used to get the appropriate format for the residue theorem. The formula for the inverse Laplace transform of a general function F(s) is as follows:

f(t) = L⁻¹{F(s)} = Σ [residues of F(s)e^(st) at the poles of F(s)]

To get the residue for a first order real pole '−p' where p ≥ 0:

residue for the pole '−p' is [(s + p)F(s)e^(st)] at s = −p

For example:

L⁻¹[2/((s + 3)(s + 5))] = [(s + 3)·2e^(st)/((s + 3)(s + 5))] at s = −3 + [(s + 5)·2e^(st)/((s + 3)(s + 5))] at s = −5 = e^(−3t) − e^(−5t)

Consider the following Laplace transform with a 1st order pole, a 2nd order pole and a 3rd order pole:

F(s) = 5/((s + 2)(s + 4)²(s + 10)³)

The pole −2 is first order; the pole −4 is second order; and the pole −10 is third order.
To get the residue for an nth order pole −p in the denominator of F(s), the formula is

residue for an nth order pole at −p = (1/(n − 1)!)[d^(n−1)/ds^(n−1) ((s + p)ⁿF(s)e^(st))] at s = −p

An example demonstrating the procedure for a 2nd order pole is as follows:

L⁻¹[2/((s + 3)(s + 5)²)] = [(s + 3)·2e^(st)/((s + 3)(s + 5)²)] at s = −3 + (1/(2 − 1)!)[d/ds((s + 5)²·2e^(st)/((s + 3)(s + 5)²))] at s = −5
= 0.5e^(−3t) − te^(−5t) − 0.5e^(−5t)

Note, it is advised to write the denominator in factored format before beginning the residue process to avoid errors. For example

(8s + 16)/(4s² + 28s + 40) = (8s + 16)/(4(s + 2)(s + 5)) = (2s + 4)/((s + 2)(s + 5))

Shortcut for Complex Poles, −r ± jω

Complex Poles With Nonzero Real Parts, r ≠ 0

For Laplace transforms F(s) with complex poles, the two residues for a complex pair will combine and simplify to an exponentially decaying sine wave. Realizing this end result enables one to use a shortcut and obtain the resulting combined solution in a single step instead of the two steps that would be required for each of the two complex conjugates. Consider the following Laplace transform with a pair of complex poles −r ± jω:
The sum of the residues for the complex  pair of poles can be shown to simplify to the following expression which we will use for the  short cut: ert − [residue for − r + jω] + [residue for − r − jω]=sin(ω φ) ω+ M t where  = + + =− +=( )( ) N s 2 2  M s r F s [( ) ] ( ) ω ω and s r jD s s r j =− + ω ⎪⎨⎧⎟⎟⎠⎞ ⎪⎬⎫ ⎜⎜⎝⎛ angle s r F s angle( )( ) 2 2 φ ω ω = + + = {[( ) ] ( ) } N s s r j D s =− + ⎪⎩ ω s r j =− + ⎪⎭ For an example, consider the following Laplace transform s + 5 N s ( ) Y s+ + + − ( )D s s j s j = s s j s j ( 2 3)( 2 3) + + + − = ( )( 2 3)( 2 3) Thus, for this problem, N(s) = s+5 and D(s) = s. The inverse Laplace of Y(s) consists of the sum of three residues, i.e. y(t) =[residue_ for _ s = 0]+[residue_ for _ s = −2− j3]+[residue_ for _ s = −2+ j3] But, we will use the short cut for the sum of the residues for the complex pair, i.e. et − 2 y(t) =[residue _ for _ s = 0]+ sin(3 ) e y t 5 − 2 3 t M t +φ ( ) = + +φ M t 13 3 �� = |�� + �� sin(3 ) ��|��=−��+����= ��. �������� ∅ = ������−�� ([�� + �� ��]��=−��+����) = −��. �������� ������.30 Complex Poles with Zero Real Parts, r = 0 If any of the poles are complex with a zero real part, then it is important to note that the  exponential term ��−���� = ��−���� = ��; thus, the multiplier on the sine function does not go to zero  as �� → ∞. This will always happen if the input to a transfer function is a sine or cosine wave  since the denominator of the Laplace transform of sine and cosine functions have a zero real part.  Consider the previous example but with the real part of the complex poles equal to zero. =s ss Y s s + 5 + 5 ( )2 2 s s j s j ( 0 3)( 0 3) + + + − = ( 3 ) + et − 0 y(t) =[residue _ for _ s = 0]+ sin(3 ) e y t 5 − 0 3 t M t +φ ( ) = + + = + − M t t sin(3 ) 0.556 0.6479sin(3 1.03) 2 3 3 �� = |�� + �� φ ��|��=−��+����= ��. �������� ∅ = ������−�� ([�� + �� ��]��=−��+����) = ������−������−����= −��. ���� ������. 
For cases of a sine function input to a transfer function, it is not uncommon to be interested in the  inverse Laplace transform of the output after the transient portion of the solution has gone to zero  or only for large values of time (t→ ∞). If the poles of the transfer function all have negative  real parts, then the portion of the inverse Laplace transform associated with the poles of the  transfer function will all go to zero as t → ∞. Thus, the only thing left will be the portion of the  inverse associated with the input poles which will have zero real part which means this portion  does not go to zero but remains a sine or cosine function forever. For example, suppose we have ��(��) = ��(��)��(��) = [��������(�� + ��) (�� + ��)(�� + ����)[(�� + ��)�� + ����]]��(��) where the input ��(��) = ��������(������) ������ ��(��) =������ ����+������. The transfer function G(s) is the  expression in the brackets; note that all of the poles of G(s) have negative nonzero real parts.  Thus, the portions of the inverse Laplace associated with these poles will go to zero as t → ∞.  So, as t → ∞, the only portion left will be the portion of the inverse associated with the poles of  the input R(s). Thus,  ��(��)�� → ∞ = �� + �� + �� +������|��(��)|��=��+������������ ������(������ + ��) = ��|��(������)| ������(������ + ��) where �� = ���������� ���� ��(������). Thus, if a transfer function has a sine input with frequency 20  rad/sec and amplitude 30, that is r(t)=30sin(20t), then as t → ∞, the output will be a sine  function with the same frequency 20 rad/sec and with an amplitude ����|��(������)| and with a phase  shift of �� = ���������� ���� ��(������). That is, 31 |��(������)| = |��������(������ + ��) (������ + ��)(������ + ����)[(������ + ��)�� + ����]| = |−��. �������� + ����. ��������| = ��. ������ �� = ����������(−��. �������� + ����. ��������) = ��. ���� ������ Thus, if r(t)=30sin(20t), then as t → ∞, ��(��) = ���� ∗ ��. ������ ∗ ������(������ + ��. ����) = ��. 
����������(������ + ��. ����) The main points to be learned are that for a system (transfer function) with a sinusoidal input, the  output will converge to a sinusoidal function at the same frequency as the input with the  amplitude of the output equal to the input amplitude multiplied by the magnitude of the transfer  function using the input frequency; there will be a phase difference in the input and output  calculated using the input frequency. This also applies to cosine inputs. Example The following system has r(t) = 4sin(0.3t) for the input. What will be the output y(t) at steady  state conditions (after all transients have died out)? How much greater or smaller is the output  amplitude compared to the input amplitude? ��(��) = [�������� + ��. ������ + �� �������� + �������� + ���� + ��]��(��) Ans: y(t)=2*4*sin(0.3t-1.35)=8sin(0.3t-1.35) Note the output amplitude is 2 times greater than  the input amplitude. Problems containing inverse Laplace transforms: 4.6, 4.23, 4.27, 4.30, 4.35, 4.36, 4.37, 4.43,  5.5, 5.6, 5.9, 5.22, 5.23, 5.25, 5.26, 5.27, 5.30, 5.35, 5.36, 5.40, 5.43, Q5, Q6, Q8, Q12, H2, H3,  H4, H6, H10, H12, H13, H19, H26, H27, H37, H38, H39, E2a, E2b, E2c, E3a, E3b, E3c, E3d,  E4B, E4c,E5c,E6C 2.1.8.3 Using MATLAB to Plot the Inverse Laplace Transform Without Getting an  Equation for the Inverse Laplace Transform    The previous sections of this document have focused on using the Laplace transform to get an  equation for the solution to a linear ordinary differential equation using the inverse Laplace  transform. This section introduces how to use the Laplace transform and MATLAB to get a plot  of the solution without having to first get an equation for the solution. For cases where only  a plot of the solution is desired, this approach to getting a plot of the solution represents the  ultimate in simplicity since there is minimum chance for error and minimum effort required to  get a plot. 
Also, this method can be used for cases with nonzero initial conditions and/or unusual inputs; the Laplace transform of the input must be known.

Suppose you have a Laplace transform Y(s). If an equation for the inverse Laplace, y(t), is desired, then the only options are to use the partial fraction or residue methods mentioned in previous sections to get the equation. If, however, an equation is not needed but only a plot of y(t), then it is very convenient to use the 'impulse' command in MATLAB to get the plot directly from the Laplace transform. If you pretend that Y(s) is the transfer function for a system with a unit impulse input, then Y(s)*1 = Y(s) is the Laplace transform of the output. Thus, entering Y(s) into MATLAB as a transfer function (pretending that it is a transfer function) and then using the command 'impulse(Y)' gives a plot of y(t). For example, to get a plot of the inverse of

Y(s) = 2/(s³ + 13s² + 55s + 75)

>> Y=tf(2,[1 13 55 75]) %This MATLAB command defines the transfer function above.
>> impulse(Y) %This MATLAB command generates a plot of y(t)

A more versatile use of the impulse command allowing labeling and line width and color control is as follows:

>> Y=tf(2,[1 13 55 75]);
>> [y,t]=impulse(Y,3); % Instead of generating a plot, this generates values for y and t.
>> plot(t,y,'r','LineWidth',3)
>> ylabel('y')
>> xlabel('time, sec.')
>> title('Using the impulse command to plot the inverse Laplace of Y(s)')

(Resulting plot: y(t) rises from zero to a peak of about 0.03 near t = 0.5 sec and then decays toward zero by t = 3 sec.)

For a second example, consider the problem in section 2.1.6, i.e.
Y(s) = [1/(ms² + cs + k)]F(s) + [y(0⁻)(ms + c) + mẏ(0⁻)]/(ms² + cs + k)

If the input is a unit step, then F(s) = 1/s; Y(s) becomes

Y(s) = [1/(ms² + cs + k)](1/s) + [y(0⁻)(ms + c) + mẏ(0⁻)]/(ms² + cs + k) = ([y(0⁻)(ms + c) + mẏ(0⁻)]s + 1)/(s(ms² + cs + k))

If numerical values are assigned to the initial conditions and the parameters m, c, and k, and then the final expression for Y(s) is entered as a transfer function (pretend), then the command 'impulse(Y)' gives a plot of the inverse of Y(s), which is y(t). Thus, for m = 10, c = 0.05, k = 50, y(0⁻) = 6, and ẏ(0⁻) = 2:

Y(s) = ([6(10s + 0.05) + 10(2)]s + 1)/(s(10s² + 0.05s + 50)) = (60s² + 20.3s + 1)/(10s³ + 0.05s² + 50s + 0)

>> Y=tf([60 20.3 1],[10 0.05 50 0]); % Pretending that Y(s) is a transfer function.
>> impulse(Y)

This gives us a plot of the solution to the original differential equation even though the input was not an impulse but rather a step input, and for a case of nonzero initial conditions.

Problems utilizing this concept: H4, H5, H10, H11, H12, H15, H16, H19, H20, H27, H36, H38

2.1.9 Modes of a System

The initial condition or impulse response of a dynamic system is made up of the sum of the responses of the individual modes of the system. It is not uncommon for some of the modes to dominate the response of the system; consequently, relatively high order system models are sometimes approximated by lower order models consisting of only the dominant modes.

The way to determine the individual modes is to express the Laplace transform in partial fractions. Each partial fraction represents a mode. If all initial conditions are zero and the input is an impulse, the modes are simply the partial fractions of the transfer function of the system. Modes are either first order or second order. The dominant mode usually has the largest eigenvalue time constant. A complex mode with a small damping ratio may dominate the other modes depending on the time constants.

The procedure for computing the modes is to use the MATLAB m-file 'pfract' on the numerator and denominator polynomials of the Laplace transform or of the system transfer function. Consider the mass/spring/damper system introduced in section 2.1.8.1.
A complex mode with a small damping ratio may dominate the other modes depending on the time constants.  The procedure for computing the modes is to use the MATLAB m-file ‘pfract’ on the numerator  and denominator polynomials of the Laplace transform or of the system transfer function.  Consider the mass/spring/damper system introduced in section 2.1.8.1. 34 Suppose that the output of interest is the displacement z(t) and the input F(t) is a unit impulse. It  can be shown that the transfer function H(s) relating Z(s) to F(s) is the following: ��(��) = ��(��)��(��) = [100��4 + 4��3��3 + 7��4��2 + 6��5�� + 3��6 1��3��6 + 5��4��5 + 1.1��6��4 + 1.3��7��3 + 9��7��4��2 + 3��8�� + 1��9] 1 The following MATLAB commands generate the individual modes of this system: >> num=[100 4000 70000 600000 3e6]; >> den=[1000 5e4 1.1e6 1.3e7 9e7 3e8 1e9]; >> pfract(num,den) 0.05431 ( )2 2 2 0.03493 0.01076 Z s ⎢⎣⎡+ + ⎥⎦⎤ =32.47 324.7 + + s s s s s s + + 1.981 19.81 + + 15.55 155.5 To determine if any of these modes dominate the response, the ‘damp’ command is used to  generate the eigenvalues, i.e. >> damp(H) Eigenvalues Damping Ratio Time Constant, s Undamped Natural Frequency, rad/s −0.99 ± ��4.34 0.223 1.01 4.45 -7.77±��9.75 0.623 0.128 12.5 -16.2±7.82 0.901 0.062 18.0


Comparing the time constants reveals that the response of the first mode will last approximately 5 seconds, which is much greater than the others. Also, according to the damping ratios, the first mode is significantly underdamped compared to the other modes. Thus, according to the eigenvalues, the first mode appears to dominate the total system response.

Comparing the impulse responses of the individual modes with the total system impulse response demonstrates the dominance of the first mode. The results of the following MATLAB commands are shown in the figure below:

>> H1=tf([0.05431],[1 1.981 19.81]);
>> H2=tf([0.03493],[1 15.55 155.5]);
>> H3=tf([0.01076],[1 32.47 324.7]);
>> H=tf(num,den);
>> impulse(H1,'r-.',H2,'r:',H3,'k',H,'k.',4)
>> title('Comparing individual mode responses')
>> legend('first mode','second mode','third mode','all 3 modes','Location','NorthEast')
>> ylabel('displacement z per unit of force impulse')

(Resulting plot: displacement z per unit of force impulse versus time, 0 to 4 seconds, for the first, second, and third modes and for all 3 modes together.) Note that the response of the dominant mode is almost identical to the total system response.

Problems utilizing this concept: H5, H25, H36, E1c

2.2 Numerical Solutions to Differential Equations Using MATLAB

2.2.1 General Considerations

There are numerous ways to obtain a numerical solution to a differential equation. The first step is to determine if the equation is linear or nonlinear. If nonlinear, then a numerical integration algorithm such as 'ode45' is required; numerical integration algorithms in general require a state variable format (see Chapter 3).

If the differential equation is linear, there are many options depending on the format of the equation. For example, for a transfer function format or a state variable matrix format with a step or impulse input, use of the MATLAB commands 'step' or 'impulse' is very straightforward.
Transfer functions and state variable formats with other inputs such as sine waves, pulses, or other functions of time, including random functions, are best handled using 'lsim'. For linear state variable matrix equations with nonzero initial conditions, 'lsim' can be used even if there is also an input; 'initial' can be used if there are nonzero initial conditions and no input.

2.2.2 Using the 'impulse' command in MATLAB

The command 'impulse' gives a plot of the solution to a differential equation expressed in transfer function format or state variable format if the input is a unit impulse. This command also gives the plot of the inverse Laplace of any Laplace transform that has been entered into MATLAB as a 'pretend' transfer function; recall that the Laplace transform for a unit impulse is 1. Examples of using the command 'impulse' were provided in section 2.1.8.3.

2.2.3 Using the 'step' command in MATLAB

The command 'step' gives a plot of the solution to a differential equation expressed in transfer function format or state variable format if the input is a unit step.
For example, if G is a transfer function for a system, a plot of the response to a step input with magnitude 23 is obtained using the following command:

>> step(23*G)

Repeating the example in section 2.1.9 for the mass/spring/damper system, but this time with a unit step force input, gives the following results:

>> step(H1,'r-.',H2,'r:',H3,'k',H,'k.',4)
>> title('Comparing individual mode step responses')
>> ylabel('displacement z per unit of force input, m/N')
>> legend('first mode','second mode','third mode','all 3 modes','Location','NorthEast')

[Figure: step responses of the individual modes and the total system over 0 to 4 seconds; displacement z per unit of force input, m/N.]

DC Gain Command 'dcgain'

Quite often for step or constant inputs, the output response of a system becomes a constant at steady state. The ratio of the output constant to the input constant is called the 'dcgain'. Thus, if the dc gain of a transfer function is 37, then the output constant will be 37 times the input constant.

The dc gain for a transfer function can be computed by setting the Laplace variable s to zero, provided all poles have negative real parts. Notice in the figure above for the step responses of the individual modes, each mode has a different dc gain; however, the sum of the mode dc gains must and does equal the dc gain of the total transfer function. As a side note, if the input to a transfer function is a constant, then the transfer function is multiplied by the constant over s to get the Laplace of the output.
When using the Final Value theorem, multiplying by s cancels the s from the input; so, setting s equal to zero gives the same result as setting s equal to zero in the transfer function itself.

Problems utilizing this concept: 4.43, H5, H20, H31, H28, H36, E2c

2.2.4 Using the Command 'lsim'

For most any input function of time, F(t), the MATLAB command 'lsim' can be used to generate the output y(t) of a system defined in either transfer function format or state variable format. For example, suppose

F(t) = 2 sin(7t) + 0.5 sin(10t)

The following MATLAB code generates 10 seconds of time values every 0.005 seconds and then generates a value for F at each of the time values. The 'lsim' command generates a numerical solution for the output variable y corresponding to the system H with input F(t). When using 'lsim', you must know how long to run the simulation and a reasonable time increment; these values can usually be obtained from the system eigenvalues and the input frequencies. The time increment should be on the order of 1/(10M), where M is either the largest eigenvalue magnitude or the largest input frequency in rad/sec, whichever is greater. For H(s) in the previous example, the largest time constant is 1.01 s and the largest eigenvalue magnitude or input frequency is 18 rad/s. So, the time increment is 1/(10*18) ≈ 0.005 s. The final time for the simulation should normally be sufficiently greater than five time constants in order to see just the portion of the solution associated with the input. Since the time constant is 1 s, the final time will be selected to be 10 s. Note in the plot below the larger amplitudes associated with the five time constant transient start-up period.
>> t=0:.005:10;
>> F=2*sin(7*t)+0.5*sin(10*t);
>> [y]=lsim(H,F,t);
>> plot(t,y,'r','Linewidth',2)
>> title('Example of a general input function')
>> xlabel('Time (sec)')
>> ylabel('y(t)')

[Figure: y(t) for the two-sine input over 0 to 10 seconds; the larger transient amplitudes die out after about five time constants.]

Problems utilizing this concept: H12, H20, H43

Pulse Series Input

For a second example, consider a series of pulse inputs such as might be generated by repeated hammer blows, a positive displacement pump, etc. Such a pulse sequence f(t) is shown below; the pulse amplitude is F, the pulse width is T, and the period is mT. Thus, T/(mT) = 1/m defines the fraction of the pulse period occupied by the pulse. For example, if m=20, then the width of each pulse will be only 1/20th of the time between pulses.

[Figure: pulse series f(t) with amplitude F, pulse width T, and period mT.]

An m-file for pulse functions is provided with the other m-files at the end of this notebook and has been used to generate a pulse series with T=0.25, F=1, and m=6. N=7, where N is the number of pulses to be generated, and NumPPP=10, where NumPPP is the number of data points per pulse; the larger NumPPP, the sharper the corners of the pulses will be. The period of this pulse series is 1.5 seconds, which is the time between pulses. For the mass/spring/damper system in section 2.1.9.1, 1.5 seconds is approximately equal to the inverse of the first mode frequency in Hertz (4.34/2π).
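Since the PulseSeries m-file itself is listed only at the end of the notebook, the sketch below is a hypothetical re-creation of what it produces, written in Python/NumPy for illustration (the function name and exact output layout are assumptions, not the course m-file): each period consists of NumPPP samples at amplitude F followed by (m-1)*NumPPP samples at zero, repeated N times.

```python
import numpy as np

def pulse_series(T, m, F, N, num_ppp):
    """Hypothetical re-creation of the course PulseSeries m-file:
    N rectangular pulses of amplitude F and width T, period m*T,
    sampled with num_ppp points per pulse width."""
    dt = T / num_ppp
    one_period = np.concatenate([F * np.ones(num_ppp), np.zeros((m - 1) * num_ppp)])
    f = np.tile(one_period, N)          # repeat the period N times
    t = np.arange(f.size) * dt          # matching time vector
    return f, t

f, t = pulse_series(T=0.25, m=6, F=1.0, N=7, num_ppp=10)
print(t[-1], f.mean())  # spans about 10.5 s; the mean is F/m since the duty cycle is 1/m
```

The mean value F/m is a quick check that the duty cycle of the generated series really is 1/m.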
>> [z]=lsim(H,f,t); >> plot(t,z,'k','LineWidth',2) >> xlabel('time, sec.') >> ylabel('displacement z per unit of pulse amplitude') >> title('mass/spring/damper response with pulse series input') 3.5x 10-3 3 2.5 e dutmass/spring/damper response with pulse series input2 ilpma e1.5 slup fo ti1 nu rep 0.5 z- 1  tneme0 calpsid-0.5 -1.5 0 2 4 6 8 10 12 time, sec. Notice that after about five time constants, the amplitude of the response is approximately  constant.  40 2.2.5 Using the Command “initial” The MATLAB command ‘initial’ allows one to get an initial condition response for a state space  system with no input. Note, for an in depth understanding on state variables, see Chapter 3. ��̇ = ���� + 0 �� = ���� + 0 ��(0−) ≠ 0 Consider the following example noting that B and D must be defined with some arbitrary values  to enter a state space system. The initial conditions for the state variables are x1=2 and x2=-4. >> A=[0 1;-8 -4]; >> B=[0;0]; >> C=[1 0]; >> D=; >> G=ss(A,B,C,D); >> X0=[2;-4]; >> initial(G,X0) 2 Response to Initial Conditions 1.5 1 edutilpmA0.5 0 -0.5 0 0.5 1 1.5 2 2.5 3 Time (seconds) Problems utilizing this concept: H29 2.3 Using MATLAB to Get the System Response Using Numerical Integration There are many numerical integration algorithms available in MATLAB; the selection depends on the complexity of the differential equations being solved. For 90% of the problems, ode45  with either a variable step or fixed step will suffice. There is a vast amount of documentation  available: ‘help ode45’ and ‘help odeset’. I prefer the variable step option which is the default  but am required to use the fixed step option if using a generated input data sequence with a fixed  time increment between data points; this is almost always the case for input data that has been  generated prior to the simulation process such as with random or stochastic inputs. 
In general, numerical integration algorithms require that the differential equations be expressed in state variable format. See Chapter 3 for in-depth coverage of obtaining different state variable formats depending on the specific problem.

2.3.1 Example Problems using ode45 in MATLAB

At least two m-files are required: one to start the simulation and plot the output, and a second that contains the derivative equations for the state variables. Consider the following example.

Deterministic Input

Use the 'ode45' command in MATLAB to generate the solution v(t) of the following differential equation:

2v''' + 18v'' + 66v' + 130v = 260,   v(0−) = 1,  v'(0−) = 0.5,  v''(0−) = 0.2

Then repeat the solution for the same differential equation but with 66(v')³ in it instead of 66v'. Plot the responses on the same graph for comparison.

% First m-file to start the simulation and plot the results
[t,x]=ode45(@ex1diffeqns,[0 2.5],[1 .5 .2]);
% Tells ode45 the location of the derivative eqns for the first differential equation
[t,xn]=ode45(@ex2diffeqns,[t],[1 .5 .2]);
% Tells ode45 the location of the derivative eqns for the 2nd differential equation,
% using the same time vector
plot(t,x(:,1),'r',t,xn(:,1),'k:','Linewidth',2) % Plots the first state variable of each solution
legend('linear','nonlinear')
xlabel('time, s')
ylabel('v(t)')
title('Comparing solution to linear and nonlinear equations')

% ex1diffeqns: m-file that contains the derivative equations for the linear version
function dx=ex1diffeqns(t,x)
dx=zeros(3,1);
dx(1)=x(2);
dx(2)=x(3);
dx(3)=130-65*x(1)-33*x(2)-9*x(3);
end

% ex2diffeqns: m-file that contains the derivative equations for the nonlinear version
function dx=ex2diffeqns(t,x)
dx=zeros(3,1);
dx(1)=x(2);
dx(2)=x(3);
dx(3)=130-65*x(1)-33*x(2)^3-9*x(3);
end

[Figure: v(t) for the linear and nonlinear equations over 0 to 2.5 seconds.]

Example of ode45 for a random input or for a data series for the input

Repeat the nonlinear portion above, but this time change the input to a random input u instead of a step input. See section 8.3.3 for generating random inputs.

2v''' + 18v'' + 66(v')³ + 130v = 260u,   v(0−) = 1,  v'(0−) = 0.5,  v''(0−) = 0.2

% m-file for generating the solution with a random input
global U step % Transports values to the other m-files
N=1024; % Number of random input points to be generated (must be a power of 2)
step=.02; % Time step between input data
[U,t]=rinput(N,step);
t=t'; % Changes the time array to a row vector for ode45
[T,X]=ode45(@example3,[t],[1 .5 .2]); % Designates specific time values
plot(T,X(:,1))
xlabel('Time, sec.')
ylabel('v')

% m-file demonstrating how to get and use the random input values
% in the derivative equations
function dx=example3(t,x)
global U step
dx=zeros(3,1);
n=fix(t/step+1); % Determines the current point in time
u=U(n,1); % Defines u as the input amplitude at the current time
dx(1)=x(2);
dx(2)=x(3);
dx(3)=130*u-65*x(1)-33*x(2)^3-9*x(3);
end

% m-file defining a power spectral density function (referenced by the random
% input generation; see section 8.3.3)
function [GY]=dpsd(f)
GY=0.0025/f^2;
end

[Figure: v(t) for the random input over 0 to 25 seconds.]
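A simulation like the deterministic one above can be sanity-checked against the steady state the equation itself predicts: with all derivatives zero, 130v = 260 gives v(∞) = 2, and the characteristic roots of s³ + 9s² + 33s + 65 = (s+5)(s²+4s+13), namely -5 and -2 ± 3j, say the transient dies out within a few seconds. A fixed-step RK4 sketch in Python/NumPy (my own cross-check, not part of the notes) of the linear version:

```python
import numpy as np

def deriv(x):
    # State form of 2v''' + 18v'' + 66v' + 130v = 260 (divided through by 2);
    # x = [v, v', v'']
    return np.array([x[1], x[2], 130.0 - 65.0 * x[0] - 33.0 * x[1] - 9.0 * x[2]])

dt, x = 0.001, np.array([1.0, 0.5, 0.2])  # initial conditions v=1, v'=0.5, v''=0.2
for _ in range(int(5.0 / dt)):            # integrate out to 5 s (ten slow time constants)
    k1 = deriv(x)
    k2 = deriv(x + 0.5 * dt * k1)
    k3 = deriv(x + 0.5 * dt * k2)
    k4 = deriv(x + dt * k3)
    x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(x[0])  # should be very close to the predicted steady state 260/130 = 2
```

Knowing the approximate answer before running the solver, as the preface urges, makes errors in the derivative equations obvious at a glance.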
2.3.2 Example Simulation Using Options and Event Functions with ODE45

Type 'help odeset' and 'doc odeset' to get detailed instructions for 'event' functions.

% This m-file starts the numerical integration solution
options=odeset('events',@StopSim);
[t,x,Te,Ye,Ie] = ode45(@SimEqns,[0 10],[0 0],options);
plot(t,x(:,1))

% This function contains the differential equations
function dx=SimEqns(t,x)
dx=zeros(2,1);
dx(1)=x(2);
dx(2)=-x(1)-x(2)+1;
end

% This function determines when to stop the simulation process.
function [Val,Ister,Dir]=StopSim(t,x)
% Val(1) is the first event. More than one event can be used: Val(2), etc.
Ister(1)=1; % stops simulation when Val(1)=0
Val(1)=x(1)-0.5; % stops simulation when x(1)-0.5=0
Dir(1)=0; % stops simulation when Val(1) goes to zero from either direction
% Or Dir(1)=+1: stops when Val(1) is increasing and passes through zero
% Or Dir(1)=-1: stops when Val(1) is decreasing and passes through zero
end

For a second example involving event functions, consider the following differential equation:

2v''' + 18v'' + 66v' + 130v = 22u' + 260u,   v(0−) = 1,  v'(0−) = 0.5,  v''(0−) = 0.2

Putting the differential equation in state variable format (simultaneous 1st order differential equations) requires absorbing the input derivative u' into a state variable. Dividing through by 2 gives v''' + 9v'' + 33v' + 65v = 11u' + 130u, and defining

x1 = v
x2 = v'
x3 = v'' − 11u

gives

x1' = x2
x2' = x3 + 11u
x3' = v''' − 11u' = 130u − 65x1 − 33x2 − 9(x3 + 11u) = 31u − 65x1 − 33x2 − 9x3

Create an m-file named 'StartSim'; actually, it can be any name you choose. The simulation is to run from t = 0 to 2.5 sec. Note below how the start and end times for the simulation are entered, as well as how the initial conditions are entered. Also note that it references the m-file 'DerEqns', where the derivative equations for the state variables have been entered; the name of this m-file can also be any name you choose.
% StartSim
[t,X]=ode45(@DerEqns,[0 2.5],[1 .5 .2]); % Returns a vector of t values and a matrix X
% containing the values of the state variables at each t value. The first column
% of X is x(1), the 2nd column of X is x(2), etc.
% You must define the output of interest, y:
y=X(:,1); % Assigns y to be x(1), i.e., all of the values in the first column
% For a plot of v', use y=X(:,2). To plot v'', use y=X(:,3)+11*2 (since v''=x3+11u and u=2).
plot(t,y)
title('Numerical Simulation Example')
xlabel('Time, sec.')
ylabel('v, m')

Note, when computing the product of two variables represented by vectors, element-by-element (term-by-term) multiplication, written with a dot, is required. For example, if the output of interest is v², use y=X(:,1).*X(:,1) or y=X(:,1).^2. The dot is also required for dividing term by term: y=X(:,1)./X(:,2).

Create a 2nd m-file named 'DerEqns':

function dx=DerEqns(t,x)
N=3; % Number of state variables to be numerically integrated
dx=zeros(N,1); % Tells the computer how many dx equations to look for
u=2; % Defines the input, which can be a function of time t such as u=sin(3*t) or u=exp(-6*t)
dx(1)=x(2);
dx(2)=x(3)+11*u;
dx(3)=31*u-65*x(1)-33*x(2)-9*x(3);
end

To run the simulation, type 'StartSim' in the command window:

>> StartSim

To stop the simulation at a specified event, such as an increasing x(1) reaching a value of 4, you need:

options=odeset('events',@StopSim);
[t,x]=ode45(@DerEqns,[0 2.5],[1 .5 .2],options);
plot(t,x(:,1)) % Plots all of the values of x(1). For x(2), use x(:,2).
title('Numerical Simulation Example')
xlabel('Time, sec.')
ylabel('v, m')

% The following m-file must be saved under the name 'StopSim'
function [Val,Ister,Dir] = StopSim(t,x)
Val(1)=x(1)-4;
Ister(1)=1;
Dir(1)=1;
end

Problems utilizing this concept: H7, H15, H16, H18

2.4 Using MATLAB to Get the System Steady State Response for Periodic Inputs

2.4.1 Frequency Response Analysis (see Section 2.1.8.2)

Any periodic function of time can be represented by a Fourier series, an infinite sum of sines and cosines, i.e.,

F(t) = a0 + a1 sin(ω1 t) + a2 sin(ω2 t) + ... + b1 cos(ω1 t) + b2 cos(ω2 t) + ...

Consequently, most inputs will contain combinations of many different frequencies; it is of interest to determine the effects of different frequencies on the performance of dynamic systems. Such a study is called frequency response analysis.

Recall for a system with output y(t), transfer function H(s), and input F(t), we can write

Y(s) = H(s)F(s)

Suppose F(t) is a sine wave with magnitude F0 and frequency ω0, i.e.,

F(t) = F0 sin(ω0 t)

Thus,

F(s) = F0 ω0/(s² + ω0²)

and

Y(s) = H(s) F0 ω0/(s² + ω0²)

Assuming that the poles of H(s) all have negative nonzero real parts, the time response of each one of the modes corresponding to these poles will go to zero as t → ∞. Thus,

y(t) = [terms that go to zero] + F0 |H(jω0)| sin(ω0 t + φ),   φ = angle[H(s)] evaluated at s = jω0

By plotting |H(s)| and angle[H(s)] evaluated at s = jω as functions of ω, it is possible to determine how the frequency affects the dynamics of a system.

2.4.2 Examples of Frequency Response Plots

[Figure: mass/spring/damper system with input u(t) and output y(t); spring k, mass m, damper c.]

In normalized form, the equation of motion for this system gives

[s² + 2ζωn s + ωn²]Y(s) = [2ζωn s + ωn²]U(s)

so that

Y(s) = H(s)U(s) = [(2ζωn s + ωn²)/(s² + 2ζωn s + ωn²)]U(s)

The MATLAB command 'bode' provides both the magnitude and phase plots as a function of frequency.
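The gain |H(jω)| and phase that 'bode' plots can also be evaluated directly from the polynomial coefficients by substituting s = jω. The sketch below is a Python/NumPy cross-check (my own, not part of the notes) for the normalized second-order system with ωn = 1 used in the bode example that follows:

```python
import numpy as np

def freq_resp(zeta, w):
    """H(jw) for H(s) = (2*zeta*s + 1)/(s^2 + 2*zeta*s + 1), i.e. omega_n = 1."""
    s = 1j * np.asarray(w)
    return (2 * zeta * s + 1) / (s**2 + 2 * zeta * s + 1)

w = np.logspace(-1, 1, 2001)  # 0.1 to 10 rad/s, matching the frequency range in the bode call
for zeta in (0.2, 0.6, 0.9):
    H = freq_resp(zeta, w)
    mag_db = 20 * np.log10(np.abs(H))    # magnitude in dB, as bode plots it
    phase_deg = np.degrees(np.angle(H))  # phase in degrees
    print(f"zeta={zeta}: peak {mag_db.max():.2f} dB at w = {w[np.argmax(mag_db)]:.3f} rad/s")
```

For the lightly damped case (ζ = 0.2) the peak falls just below ω/ωn = 1, consistent with the resonance behavior described in the text.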
The magnitude and phase plots for the mass/spring/damper system above are shown in the figure below for different values of the damping ratio. The magnitude is in decibels (dB), which by definition is 20 log10|H(jω)|. The frequency has been normalized by the natural frequency. Note for the lightly damped case, the peak of the magnitude plot occurs at approximately the normalized frequency of 1, which corresponds to ωn.

>> h1=tf([2*.2 1],[1 2*.2 1]);
>> h2=tf([2*.6 1],[1 2*.6 1]);
>> h3=tf([2*.9 1],[1 2*.9 1]);
>> bode(h1,'r',h2,'k:',h3,'r.',{0.1 10}) % The numbers in {} limit the freq. range for resolution
>> legend('damping ratio = 0.2','damping ratio = 0.6','damping ratio = 0.9','Location','SouthWest')
>> xlabel('normalized frequency, \omega/\omega_n (rad/s)') % Note how to get Greek symbols

[Figure: Bode magnitude and phase plots for damping ratios 0.2, 0.6, and 0.9 versus normalized frequency ω/ωn.]

For a second frequency response example, consider the mass/spring/damper system shown below.

[Figure: three-mass/spring/damper system driven by force F(t); M=10 kg, c=100 Ns/m, k=1000 N/m; displacements z, v, and w.]

Transfer function:

            100 s^4 + 4000 s^3 + 70000 s^2 + 600000 s + 3e006
H(s) = -----------------------------------------------------------------------
        1000 s^6 + 5e4 s^5 + 1.1e6 s^4 + 1.3e7 s^3 + 9e7 s^2 + 3e8 s + 1e9

>> bode(H)
>> bode(H,{1,10})

[Figure: Bode plots of H over the default and the 1 to 10 rad/s frequency ranges.]

Repeating this example but this time using much smaller damping:

[Figure: the same three-mass system with M=10 kg, c=5 Ns/m, k=1000 N/m.]

            100 s^4 + 200 s^3 + 40075 s^2 + 30000 s + 3e006
H(s) = ---------------------------------------------------------------------------------
        1000 s^6 + 2500 s^5 + 501500 s^4 + 600125 s^3 + 6.008e007 s^2 + 1.5e007 s + 1e009

>> bode(H)

[Figure: Bode plots of the lightly damped system, showing sharp resonant peaks.]

It is of interest to compare the resonant peaks in the magnitude plot with the eigenvalue frequencies. For lightly damped modes, these peaks occur at approximately the mode frequencies.

>> damp(H)

   Eigenvalue                 Damping      Freq. (rad/s)
   -4.95e-002 + 4.45e+000i    1.11e-002    4.45e+000
   -4.95e-002 - 4.45e+000i    1.11e-002    4.45e+000
   -3.89e-001 + 1.25e+001i    3.12e-002    1.25e+001
   -3.89e-001 - 1.25e+001i    3.12e-002    1.25e+001
   -8.12e-001 + 1.80e+001i    4.50e-002    1.80e+001
   -8.12e-001 - 1.80e+001i    4.50e-002    1.80e+001

Problems utilizing this concept: 4.34, 5.33, H13, H28, H39, E5b, E6b, Section 2.1.8.2B

2.5 Using MATLAB to Get Lower Order Approximations for Transfer Functions

1. Modal approximation: Drop the least dominant modes and, if the original dc gain is a constant, rescale the transfer function approximation to recover the original dc gain.

2. Routh approximation (Hutton, M. & Rabins, M.J., "Simplification of High-Order Mechanical Systems Using the Routh Approximation," Journal of Dynamic Systems, Measurement, and Control, pp. 383-392, Dec. 1975).

3. Inverse frequency approximation (Hullender, ASME publication).

2.5.1 Examples of Modal Approximation

Example 1: Recall from section 2.1.9 that the partial fractions of a transfer function are the modes of the system. Consider the following mass/spring/damper system; its transfer function has been expressed in partial fraction format.
[Figure: three-mass/spring/damper system driven by force F(t); M=10 kg, c=100 Ns/m, k=1000 N/m; displacements z, v, and w.]

H(s) = 0.05431/(s² + 1.981s + 19.81) + 0.03493/(s² + 15.55s + 155.5) + 0.01076/(s² + 32.47s + 324.7)

Dropping the two least significant modes and then rescaling to get the original dc gain of 0.003 gives

Happ(s) = [0.05431/(s² + 1.981s + 19.81)]·(0.003)(19.81/0.05431)

>> H3
Transfer function:
      0.05431
---------------------
s^2 + 1.981 s + 19.81

>> Happ=H3*0.003*19.81/0.05431
Transfer function:
      0.05943
---------------------
s^2 + 1.981 s + 19.81

>> dcgain(Happ)
ans =
    0.0030

>> [y,t]=step(H,4); % this gives the step response for H out to 4 sec
>> [Y]=step(Happ,t); % this gives the step response for Happ using the same time series
>> plot(t,y,'r',t,Y,'k.','LineWidth',2)
>> title('First mode step response accuracy')
>> xlabel('time, sec.')
>> ylabel('displacement z, meters')
>> legend('full transfer function','rescaled first mode approximation')

[Figure: step responses of the full transfer function and the rescaled first mode approximation, nearly identical over 0 to 4 seconds.]

Example 2: Consider the transfer function for the suspension system shown below.
[Figure: vehicle suspension with input U and output Z.]

Transfer function:

                      9 s^3 + 145 s^2 + 1365 s + 2664
Z(s) = G(s)U(s) = [ ----------------------------------------- ] U(s)
                    s^4 + 24 s^3 + 337 s^2 + 1236 s + 3172

>> Num=[9 145 1365 2664];
>> Den=[1 24 337 1236 3172];
>> pfract(Num,Den)
Remainder K = []
Transfer function from input 1 to output:
     5 s + 36
----------------
s^2 + 20 s + 244
Transfer function from input 2 to output:
    4 s + 9
--------------
s^2 + 4 s + 13

Thus, expressing our transfer function in modal form gives

G(s) = (5s + 36)/(s² + 20s + 244) + (4s + 9)/(s² + 4s + 13)

Approximating G(s) by the dominant of these two modes and then rescaling to make the approximation have the dc gain of the original transfer function gives

Ga(s) = [(4s + 9)/(s² + 4s + 13)]·[(9/13 + 36/244)/(9/13)] = (4.8525s + 10.918)/(s² + 4s + 13)

>> step(G,Ga)

Conclusion: The 2nd order modal approximation gives a fairly accurate step response in comparison to the step response of the original 4th order suspension model. Modal approximations don't always work well for systems that don't have dominant modes.

2.5.2 Transfer Function Approximations using Inverse Frequency Domain Analysis

Introduction

There is often the need to approximate the transfer function for a dynamic system by a lower order linear ordinary differential equation. This is especially true for time domain simulation of infinite order distributed parameter systems and for control system design problems where the order of the plant is very high or infinite. This paper introduces an approach for obtaining lower order rational polynomial transfer function approximations for high or infinite order systems using MATLAB algorithms for matching the frequency response characteristics of the original and approximating transfer functions.

The problem of obtaining lower order transfer function approximations has been researched extensively for many years.
For certain classes of problems, the Routh approximation algorithm [1]¹ has been one of the most effective and efficient approaches to this problem. Some of the more important advantages of the Routh method are that it can be programmed into a digital computer, it preserves the asymptotic stability properties of the original system, and the initial and final values of the transfer function approximation are the same as for the original transfer function. To utilize the Routh approach, the poles and zeros of the original transfer function to be approximated must be known; for cases such as distributed parameter systems, which may be infinite order, it is necessary to use series approximations to identify the dominant poles and zeros. This paper introduces a new approach to obtaining rational polynomial transfer function approximations that very accurately match the frequency response properties of the original transfer function over a designated frequency band. Not only are the significant properties of the original transfer function preserved as with the Routh method, but also the poles and zeros of the original transfer function do not have to be known.

¹Numbers in brackets refer to publications in the list of references.

Due to increases in digital computer speed and memory, many efficient and effective algorithms have evolved for system identification from frequency response data [2-12]. This paper pertains to the application of some of these algorithms to obtain transfer function approximations. The approach is to match the frequency responses using least squares. Levi formulated a technique based on a linear least-squares criterion to minimize output errors in the frequency domain between an actual function and a rational polynomial transfer function approximation. The actual function is represented by frequency response data of the system taken from measurements or generated directly from an analytical function.
The polynomial ratio is a Laplace domain transfer function with real coefficients. However, as indicated in Levi's work, the original technique was restricted to systems without poles on the imaginary axis. Sanathanan and Koerner resolved this problem by introducing an iterative search method which also eliminates the fitting errors at low frequencies commonly associated with a least-squares criterion. Further improvements in the iterative minimization approach were achieved using the damped Gauss-Newton method. These refined techniques are utilized in an m-file named 'invfreqs' available in the Signal Processing Toolbox within MATLAB. The algorithm presented in this paper utilizes this m-file for single input/single output systems. By formulating the new transfer function in the MATLAB environment, the results are available for further time domain simulations and analyses.

Two examples are presented in this paper. The first demonstrates the use of the algorithm to achieve lower order approximations for a linear system transfer function with a finite order numerator and denominator; the system is a lumped parameter model for the axial vibrations in a beam. A simple version of the inverse frequency algorithm, tfapprox.m, is used in the first example, which only requires knowledge of the original numerator and denominator, the desired reduced order of the approximation, and the frequency range over which to match the frequency response. The m-file tfapprox.m is particularly useful for reduced order control problems or reduced order simulation problems where higher frequency modes are relatively insignificant.

The second example pertains to achieving transfer function approximations for a distributed parameter system with an infinite number of poles and zeros; a somewhat more complex version of the inverse frequency algorithm is used which attempts to determine the necessary frequency range for the desired finite order model.
This version of the algorithm is particularly useful for obtaining an ordinary differential equation model for time domain simulations from infinite order frequency domain models normally derived from solving a set of partial differential equations. For these types of problems, the importance is placed on time domain simulation accuracy instead of simply obtaining a reduced order model as in Example 1. It is important to note that tips and suggestions for simplifying the least-squares curve fitting process are provided at the end of this section of the notebook.

Example 1

Consider the continuous beam shown in figure 1 with input force Fi and output force Fo. The input force might typically be provided by a pile driver, and the output force represents the load force associated with driving the pile into the ground. This beam has an infinite number of modal frequencies. However, if the beam model is approximated by five lumped mass-spring-damper systems as shown in figure 2, then the model is only 10th order.

[Figure 1: Schematic of a continuous beam with input force Fi and load force Fo.]

[Figure 2: Schematic of a lumped parameter model approximation for the beam in figure 1: five masses mi connected by springs ki and dampers bi.]

Using ki=8 N/m, mi=2 kg, and bi=0.8 Ns/m, the transfer function between the input force and the output force in terms of the Laplace operator 's' can be shown to be
To estimate the  minimum order approximation that will achieve this objective, it is helpful to either know the  modal frequencies of the original transfer function or to examine its frequency response.  Factoring the denominator polynomial of the original transfer function reveals the following  modal frequencies: 0.569, 1.66, 2.60, 3.32, and 3.77 rad/s.  In general, the order of the approximation must be at least two times the number of second  order modes to be matched plus one or two additional orders to allow for a low frequency match  as well as a match at the necessary modes. For cases where the poles and zeros of the original  transfer function are known, the inputs to the MATAB command include the numerator and  denominator of the original transfer function, the desired approximation order, the lower  frequency bound (usually a small frequency but not zero), and the upper frequency bound. The  MATLAB m-file ‘tfapprox.m’ is provided with the other m-files referenced in this notebook.  This m-file is good for any transfer function to be approximated when the original numerator and  denominator polynomials are known.  For a demonstration of using “tfapprox.m”, assume it is desired to match as accurately as  possible the first two modes of the original 10th order system represented by (1). A 6th order  approximation is chosen over the frequency range of 0.1 to 2.0; the numerator and denominator  of (1) are denoted by n10 and d10 respectively. 
The corresponding MATLAB command and the resulting approximation are as follows:

[n6,d6]=tfapprox(n10,d10,6,0.1,2.0)

Fo(s)    0.005428s^5 + 0.105s^4 + 0.1439s^3 + 0.6507s^2 + 1.617s + 5.697
----- ≈ -----------------------------------------------------------------   (2)
Fi(s)    s^6 + 0.8465s^5 + 9.623s^4 + 3.806s^3 + 20.68s^2 + 1.62s + 5.694

If it is desired to match only the first mode, then the command and the results for a 4th order approximation from 0.1 rad/s out to an upper frequency bound of 1.0 rad/s are given by

[n4,d4]=tfapprox(n10,d10,4,.1,1)

Fo(s)    0.01219s^3 + 0.1875s^2 + 0.1467s + 0.7693
----- ≈ -------------------------------------------   (3)
Fi(s)    s^4 + 0.2471s^3 + 2.706s^2 + 0.1466s + 0.7697

Comparisons of the frequency responses of these two approximations with the original transfer function are shown in figure 3. It is of interest to note in figure 3 that the 4th order approximation is essentially identical to the original in magnitude and phase out to the specified upper frequency bound of 1.0 rad/s, and the 6th order approximation is likewise identical to the original out to the specified 2.0 rad/s.

A more quantitative method for evaluating the accuracy of the approximations is to compare the mode frequencies and damping ratios. The comparisons are shown in tables 1 and 2 for both this least-squares method and for the Routh method. It is of interest to note that the least-squares curve fit method matches the mode frequencies and damping ratios exactly over the specified frequency range.

[Figure 3: Frequency response comparison plots for Example 1.]

Impulse response comparisons are shown in figure 4 for the least-squares approximation method. The accuracy of the approximations in the time domain is quite evident.
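The least-squares entries in the tables that follow can be spot-checked by factoring the denominator of approximation (3); a NumPy sketch (my own cross-check, outside MATLAB) that recovers the mode frequencies and damping ratios from the polynomial roots:

```python
import numpy as np

# Denominator of the 4th order least-squares approximation, eq. (3)
d4 = [1, 0.2471, 2.706, 0.1466, 0.7697]
poles = np.roots(d4)

# Keep one pole from each underdamped pair, sorted by natural frequency
for p in sorted(poles[poles.imag > 0], key=abs):
    wn = abs(p)              # mode frequency, rad/s
    zeta = -p.real / abs(p)  # mode damping ratio
    print(f"wn = {wn:.3f} rad/s, zeta = {zeta:.3f}")
```

The two modes land at roughly 0.569 and 1.54 rad/s with damping ratios near 0.028 and 0.070, matching the 4th order least-squares rows of the tables.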

                           ω1       ω2       ω3       ω4       ω5
4th order Routh           0.572    2.23
4th order least-squares   0.569    1.54
6th order Routh           0.569    1.71     3.17
6th order least-squares   0.569    1.66     2.51
10th order Exact          0.569    1.66     2.60     3.32     3.77


Table 1 Mode frequencies of original and approximating transfer functions.

                          ζ1      ζ2      ζ3      ζ4      ζ5
4th order Routh           0.029   0.443
4th order Approx.         0.028   0.070
6th order Routh           0.028   0.115   0.422
6th order Approx.         0.028   0.083   0.107
10th order Exact          0.028   0.083   0.131   0.168   0.192


Table 2 Mode damping ratios for the original and approximating transfer functions.

Figure 4 Impulse response comparisons of the least-squares approximations.

Example 2

The second example pertains to an engineering system with an infinite number of poles and zeros. Specifically, a pressure transducer is connected by means of a fluid line to a pressure source to be measured. A schematic of the line and pressure transducer is shown in figure 5, where Ps denotes the pressure source and Pt denotes the pressure at the pressure transducer. The motivation for this study is to understand the time domain effects of the fluid line on the recorded data. The availability of a finite order rational polynomial transfer function makes it possible to determine the time domain distortion in the recorded pressure due to the line fluid transients. The transfer function approximation of the infinite order system should be accurate out through the frequency bandwidth of the pressure changes to be measured. For a step change in the source pressure, the approximation should be accurate to frequencies greater than the transducer bandwidth. Unlike Example 1, the end result here is not simply a low order linear transfer function but, instead, an accurate time response for analyzing the effects of the line and transducer volume.

Figure 5 Schematic of a pressure transducer connected to the source pressure, Ps, to be measured by a line with fluid transients; Qt is the flow into the transducer and Pt is the pressure actually recorded by the transducer, which is distorted due to the transient effects of the line and the volume of the transducer.

The equation for the fluid transients in lines is formulated and discussed in Section 5.2 (see also the references at the end of this section).
For this example, it can be shown that the pressure and flow changes at the two ends of the line are related by

ΔPs(s) = cosh(Γ(s)) ΔPt(s) + Z(s) sinh(Γ(s)) ΔQt(s)   (2)

ΔQs(s) = cosh(Γ(s)) ΔQt(s) + [1/Z(s)] sinh(Γ(s)) ΔPt(s)   (3)

where for liquids the propagation operator and the line impedance are

Γ(s) = (L s / c) [1 − B(s)]^(−1/2)   (4)

Z(s) = Zo [1 − B(s)]^(−1/2)   (5)

The Bessel function ratio B is as follows:

B(s) = 2 J1(j r √(s/ν)) / [ j r √(s/ν) J0(j r √(s/ν)) ]   (6)

where J0 and J1 are zero and first-order Bessel functions of the first kind. Note that the Bessel functions are functions of complex variables.

The equation for the pressure transducer is assumed to be a lumped fluid capacitance represented by a fixed volume of fluid (see Chapter 5). The Laplace transform of the differential equation for pressure changes in the volume can be shown to be

ΔQt(s) = (V/β) s ΔPt(s)   (7)

Combining (2) and (7) and eliminating ΔQt(s) results in the transfer function between the true pressure changes at the source, ΔPs(s), and changes in the recorded pressure, ΔPt(s), i.e.

ΔPt(s)/ΔPs(s) = 1 / [cosh(Γ(s)) + (V s/β) Z(s) sinh(Γ(s))]   (8)

The Bessel and hyperbolic functions can be represented by infinite order product series; thus, the transfer function in (8) has an infinite number of poles and zeros. Regardless, it is straightforward to generate its frequency response and then perform the least-squares curve fit to obtain a rational polynomial linear system transfer function approximation. As mentioned above, obtaining a linear transfer function will enable time domain studies to be conducted. The m-file, 'PressureTransducer.m', for generating the frequency response of this transducer and line system is listed in the appendix. A 26th order transfer function is desired for a line of length 3 m and radius 0.00635 m; the volume of the transducer is 3.2e-6 m3.
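The frequency response of (8) can also be generated outside MATLAB. The sketch below is a hedged Python analog of part of what 'PressureTransducer.m' does (that m-file is not reproduced here); the function name line_tf is mine, the Γ and Z expressions coded are the forms reconstructed above, and scipy.special.jv is used for the complex-argument Bessel functions with the parameter values from the Nomenclature.

```python
import numpy as np
from scipy.special import jv

# Fluid and line parameters from the Nomenclature (SI units)
L, r, V = 3.0, 0.00635, 3.2e-6
beta, rho, nu = 2e9, 855.0, 50e-6
c = np.sqrt(beta / rho)            # speed of sound in the fluid
Zo = rho * c / (np.pi * r**2)      # characteristic impedance of the line

def line_tf(w):
    """Frequency response of eq. (8), dPt/dPs, at frequencies w in rad/s."""
    s = 1j * np.asarray(w, dtype=complex)
    x = 1j * r * np.sqrt(s / nu)                 # Bessel function argument
    B = 2 * jv(1, x) / (x * jv(0, x))            # eq. (6)
    root = 1.0 / np.sqrt(1.0 - B)
    Gam = (L * s / c) * root                     # eq. (4), as reconstructed
    Z = Zo * root                                # eq. (5), as reconstructed
    return 1.0 / (np.cosh(Gam) + (V * s / beta) * Z * np.sinh(Gam))

w = np.logspace(0, 5, 2000)
mag = np.abs(line_tf(w))
print(mag[0])             # DC gain is essentially unity
print(w[np.argmax(mag)])  # dominant resonant peak of the line/volume system
```

The unity DC gain is expected physically: with no steady flow there is no steady pressure drop along the line, so the transducer eventually reads the source pressure exactly.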
Since the purpose of this example is to generate a time response, compared to the previous example where a lower order approximation was desired, a relatively high order transfer function is needed in this example in order to obtain an accurate time response. The fluid properties are specified in the m-file.

Executing the m-file PressureTransducer.m gives the eigenvalues for the transfer function approximation, frequency domain comparisons, and the time response for a step change in the source pressure. The frequency response comparisons, figures 6 and 7, reveal that the transfer function is very accurate out through eleven second order resonant modes and four first order modes. The time response for a unit change in the source pressure is shown in figure 8; the results show that the response is lightly damped (100% overshoot) with a time delay at the beginning of td ≈ 0.002 sec. The distortion in the recorded pressure is obvious. DCGain = 1.0001.

>> PressureTransducer

 Eigenvalues              Damping    Frequency      Time Constant
                          ratio      (rad/second)   (seconds)
 -3.61e+02                                          2.77e-03
 -2.28e+01 + 7.72e+02i    2.96e-02   7.72e+02       4.38e-02
 -2.28e+01 - 7.72e+02i    2.96e-02   7.72e+02       4.38e-02
 -3.91e+01 + 2.34e+03i    1.67e-02   2.34e+03       2.56e-02
 -3.91e+01 - 2.34e+03i    1.67e-02   2.34e+03       2.56e-02
 -5.02e+01 + 3.92e+03i    1.28e-02   3.92e+03       1.99e-02
 -5.02e+01 - 3.92e+03i    1.28e-02   3.92e+03       1.99e-02
 -5.22e+03                                          1.92e-04
 -5.93e+01 + 5.50e+03i    1.08e-02   5.50e+03       1.69e-02
 -5.93e+01 - 5.50e+03i    1.08e-02   5.50e+03       1.69e-02
 -6.72e+01 + 7.08e+03i    9.49e-03   7.08e+03       1.49e-02
 -6.72e+01 - 7.08e+03i    9.49e-03   7.08e+03       1.49e-02
 -7.42e+01 + 8.66e+03i    8.57e-03   8.66e+03       1.35e-02
 -7.42e+01 - 8.66e+03i    8.57e-03   8.66e+03       1.35e-02
 -8.06e+01 + 1.02e+04i    7.87e-03   1.02e+04       1.24e-02
 -8.06e+01 - 1.02e+04i    7.87e-03   1.02e+04       1.24e-02
 -8.66e+01 + 1.18e+04i    7.32e-03   1.18e+04       1.16e-02
 -8.66e+01 - 1.18e+04i    7.32e-03   1.18e+04       1.16e-02
 -9.21e+01 + 1.34e+04i    6.87e-03   1.34e+04       1.09e-02
 -9.21e+01 - 1.34e+04i    6.87e-03   1.34e+04       1.09e-02
 -9.75e+01 + 1.50e+04i    6.51e-03   1.50e+04       1.03e-02
 -9.75e+01 - 1.50e+04i    6.51e-03   1.50e+04       1.03e-02
 -1.13e+02 + 1.61e+04i    6.98e-03   1.61e+04       8.88e-03
 -1.13e+02 - 1.61e+04i    6.98e-03   1.61e+04       8.88e-03
 -3.67e+04                                          2.72e-05
 -2.14e+05                                          4.68e-06

Figure 6 Frequency response comparison (magnitude, dB, versus frequency in rad/sec) of the 26th order transfer function approximation of the infinite order transfer function.

Figure 7 Frequency response comparison (phase angle, degrees, versus frequency in rad/sec) of the 26th order transfer function with the infinite order system frequency response.

Figure 8 Unit step response of the normalized recorded pressure in the pressure transducer. If there were no distortion, the transducer pressure divided by the source pressure would be unity at all times.

Summary and Conclusions

An approach for generating lower order rational polynomial transfer functions for high or infinite order systems has been demonstrated. The approach is based on a least-squares curve-fitting algorithm originally formulated for identifying transfer functions from frequency response data associated with unknown systems. Since the curve-fitting algorithm is available in the MATLAB Signal Processing toolbox, it is a straightforward procedure to generate the lower order approximations and then perform simulations with the results while operating in the MATLAB environment. Two examples were presented.
The first example demonstrated how to obtain a lower order approximation of a linear system transfer function starting with the numerator and denominator polynomials of the original transfer function. The second example corresponded to an infinite order transfer function obtained from the solution to a set of partial differential equations for a distributed parameter system; the objective was to obtain an accurate finite order rational polynomial transfer function that can be used for time domain analyses. In both cases, the frequency response of the approximation matched that of the original transfer function very accurately over the frequency range of interest.

Suggestions for Doing Inverse Frequency Problems

The MATLAB inverse frequency command is as follows:

[NumCoeffs,DenCoeffs]=invfreqs(FreqResp,Freq,NumOrder,DenOrder,Wt)

where 'FreqResp' are the complex frequency response values at the frequencies 'Freq', 'NumOrder' and 'DenOrder' are the desired orders of the polynomials in the transfer function, and 'Wt' are optional weighting coefficients. A maximum frequency and a minimum frequency are used to generate the values 'Freq'. The following suggestions are recommended:

1. The first step is to estimate upper and lower normalized frequencies and generate the frequency response of the total system model. Examination may reveal the presence of resonant frequencies and the low frequency requirements for an accurate DC gain. Consider the following statements used to generate the frequencies for the curve fit; I use 1000 points per decade and all weighting terms of one.

wmin=.00001; wmax=120000;
nd=ceil(log10(wmax)-log10(wmin)); % Number of decades in the frequency range, rounded up to the next integer
w=logspace(log10(wmin),log10(wmax),1000*nd); % Generates at least 1000 points per decade

2. If resonant frequencies exist, then it is possible to estimate the order of the transfer function that will be required to accurately curve fit the response out to an upper frequency; at least 2 orders are required for each resonant peak, and possibly additional orders for first order modes that are not apparent from examining the response. I keep increasing the order until there is little change in the eigenvalues and the time response.

dorder=23; % Desired order for the denominator of the transfer function

3. For a specific desired upper frequency, it is better to start with a lower order trial value and work up to a higher order with each trial until an accurate fit is obtained out to that upper frequency.

4. The order of the numerator should always be one less than the order of the denominator.

norder=dorder-1; % Desired order for the numerator of the transfer function

5. With each attempt, the DC gain of the transfer function should be checked. There will always be a trade-off in the accuracy at the lower and upper frequencies. The lower frequency should be decreased until the desired DC gain is achieved.

G=tf(NumCoeffs,DenCoeffs);
Dcgain=dcgain(G)
step(G)

Nomenclature

c    Speed of sound in fluid, √(β/ρ), m/s
L    Length of line, 3 m
r    Internal radius of fluid line, 0.00635 m
s    Laplace operator
V    Volume of pressure transducer cavity, 3.2e-6 m3
Zo   Characteristic impedance of line, ρc/(πr²)
β    Bulk modulus of fluid, 2e9 N/m2
ρ    Density of fluid, 855 kg/m3
ν    Kinematic viscosity of fluid, 50e-6 m2/s

References

1. Hutton, M.F. and Rabins, M.J., “Simplification of High-Order Mechanical Systems Using the Routh Approximation,” J. Dynamic Systems, Measurement and Control, v. 97, pp. 383-392, Dec. 1975.
2. Ljung, L., “Some Results on Identifying Linear Systems Using Frequency Domain Data,” 32nd IEEE Conference on Decision and Control, San Antonio, Texas, pp. 3534-3538, 1993.
3. De Vries, D.K. and Van den Hof, P.M.J.,
“Frequency Domain Identification with Generalized Orthonormal Basis Functions,” Proceedings of the 34th IEEE Conference on Decision and Control, New Orleans, Louisiana, 1995.
4. Bayard, D.S., “High-Order Multivariable Transfer Function Curve Fitting: Algorithms, Sparse Matrix Methods and Experimental Results,” Automatica, v. 30, pp. 1439-1444, 1994.
5. Leblond, J. and Olivi, M., “Weighted H2 approximation of transfer functions,” Mathematics of Control, Signals, and Systems, v. 11, n. 1, pp. 28-39, 1998.
6. Djaferis, T. and Gong, W., “Order reduction of real rational functions with error bounds,” Proceedings of the IEEE Conference on Decision and Control, v. 4, pp. 3189-3194, 1993.
7. Bayard, D.S., “High-Order Multivariable Transfer Function Curve Fitting: Algorithms, Sparse Matrix Methods and Experimental Results,” Automatica, v. 30, pp. 1439-1444, 1994.
8. Levi, E.C., “Complex-Curve Fitting,” IRE Transactions on Automatic Control, AC-4, pp. 37-44, May 1959.
9. Kim, J.-S., Song, C.-K., Jeon, B.-S., Ryu, J.-W., Jang, Y.-S., Kim, S.-S., and Lee, S.-H., “A frequency domain identification method using total least squares,” IEEE International Symposium on Industrial Electronics, v. 3, pp. 1855-1859, 2001.
10. Spanos, J.T. and Mingori, D.L., “Newton algorithm for fitting transfer functions to frequency response measurements,” Journal of Guidance, Control, and Dynamics, v. 16, n. 1, pp. 34-39, Jan-Feb 1993.
11. Shirvani, M., Inagaki, M., and Shimizu, T., “Simplification study on dynamic models of distributed parameter systems,” AIChE Journal, v. 41, n. 12, pp. 2658-2660, Dec. 1995.
12. Sanathanan, C.K. and Koerner, J., “Transfer Function Synthesis as a Ratio of Two Complex Polynomials,” IRE Transactions on Automatic Control, v. AC-8, pp. 56-58, January 1963.
13. Goodson, R.E. and Leonard, R.G., “A Survey of Modeling Techniques for Fluid Transmission Line Transients,” Journal of Basic Engineering, ASME Transactions, Series D, v. 94, June 1972.
14. Wongputorn, P., Hullender, D., Woods, R., and King, J., “Application of MATLAB Functions for Simulation of Systems with Lines with Fluid Transients,” accepted for publication in the ASME Journal of Fluids Engineering, FE-03-1080, 2004.
15. Hullender, D. and Healey, A.J., “Rational Polynomial Approximation for Fluid Transmission Line Models,” Fluid Transmission Line Dynamics, ASME Special Publication, New York, 1981.

2.6 Converting Linear Continuous Differential Equations to Discrete Time Equations

Most modern real time monitoring and control systems utilize digital computers for computation and analysis. In addition, estimation and identification algorithms rely on digital data inputs compatible with discrete time equations. Consequently, it is important to be able to convert continuous differential equation models to discrete format and to understand how to work with discrete time equations.

2.6.1 Conversion Using the Inverse Laplace Transform

Consider a first order differential equation for x(t) with input w(t); the time constant is τ:

τẋ + x = w

The solution to this equation for w a constant and initial condition x(0−) can be shown to be

x(t) = x(0−) e^(−t/τ) + (1 − e^(−t/τ)) w

Suppose instead of starting at time t = 0, we start at time t = kT where T is a time increment, usually relatively small compared to τ but not required to be. The solution after an additional time increment of T is as follows:

x((k+1)T) = x(kT) e^(−[(k+1)T − kT]/τ) + (1 − e^(−[(k+1)T − kT]/τ)) w

or

x_{k+1} = x_k e^(−T/τ) + (1 − e^(−T/τ)) w_k

where w_k is the value of the constant input at time t = kT. Note that w does not need to be the same constant for all time, only constant over each time interval T. Thus, w is said to be piecewise constant over time intervals of T. The accuracy of this discrete formulation thus depends on the magnitude of T compared to how rapidly w is changing, if at all.
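The recursion above is exactly the zero-order-hold discretization of the first order system, so it can be cross-checked numerically. Here is a short sketch in Python using scipy.signal.cont2discrete (assumed available; it plays the role of the MATLAB 'c2d' command discussed in the next subsection), with numeric choices τ = 10 and T = 1:

```python
import math
import numpy as np
from scipy.signal import cont2discrete

tau, T = 10.0, 1.0
# Continuous system: tau*xdot + x = w, i.e. X(s)/W(s) = 1/(tau*s + 1)
num_d, den_d, dt = cont2discrete(([1.0], [tau, 1.0]), T, method='zoh')

# Analytic recursion coefficients: x_{k+1} = a*x_k + b*w_k
a = math.exp(-T / tau)
b = 1.0 - math.exp(-T / tau)
print(np.squeeze(num_d), den_d)  # ZOH result matches the analytic coefficients
print(a, b)
```

Because the input is held constant over each interval, the zero-order-hold mapping is exact for piecewise constant inputs, not an approximation; the only approximation is treating the real input as piecewise constant.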
For T/τ = 0.1, the discrete equation above becomes

x_{k+1} = 0.9048 x_k + 0.09516 w_k

So, one way to convert a first order linear continuous differential equation to discrete format is to first obtain the solution to the continuous differential equation for a constant input and then select a time increment T that is small enough to be able to assume that the input is approximately piecewise constant over that small time interval. Then simply replace t in the solution with (k+1)T − kT, which gives the solution one time increment after time kT, denoted by x_{k+1}. The inverse Laplace approach can be used for higher order differential equations if the state space solution is used (Chapter 3).

2.6.2 Conversion Using the Continuous-to-Discrete MATLAB Command 'c2d'

A second way of converting a linear continuous differential equation to discrete format is to use the MATLAB command 'c2d'. If the zero-order-hold option in 'c2d' is used, then the input is assumed to be piecewise constant for time intervals of T. The accuracy of this approach may require T ≤ 1/(10λ) where λ is the largest system eigenvalue magnitude.

This MATLAB approach converts an nth order continuous transfer function in terms of the Laplace variable s into a discrete transfer function in terms of the Z-transform variable z. The discrete equation is obtained by finding the inverse of the 'c2d' generated Z-transform. By definition of the z transform, the inverse of X(z) is x(kT) = x_k, the inverse of zX(z) is x([k+1]T) = x_{k+1}, the inverse of z²X(z) is x([k+2]T) = x_{k+2}, the inverse of z⁻¹X(z) is x([k−1]T) = x_{k−1}, etc. This is best illustrated by means of an example.
The MATLAB commands for the continuous differential equation above with τ = 10 and T = 1 are

>> g=tf(1,[10 1]);
>> T=1;
>> G=c2d(g,T)

Transfer function:
 0.09516
----------
z - 0.9048

So

X(z) = [0.09516/(z − 0.9048)] W(z)

or

zX(z) − 0.9048X(z) = 0.09516W(z)

which can be inverted to the time domain, giving the same result as above:

x_{k+1} − 0.9048x_k = 0.09516w_k

The MATLAB approach becomes quite advantageous for higher order differential equations. For example, consider the following differential equation, which can be shown to have eigenvalues of −2 and −5 ± j4:

y⃛ + 12ÿ + 61ẏ + 82y = 2ü + 18u̇ + 492u

The largest eigenvalue magnitude is √(5² + 4²) = 6.403. Since T should be smaller than 1/64.03 = 0.0156, T is selected to be 0.015.

The MATLAB commands are as follows:

>> g=tf([2 18 492],[1 12 61 82])

Transfer function:
    2 s^2 + 18 s + 492
------------------------
s^3 + 12 s^2 + 61 s + 82

>> G=c2d(g,0.015)

Transfer function:
0.02956 z^2 - 0.05388 z + 0.02584
---------------------------------
z^3 - 2.823 z^2 + 2.658 z - 0.8353

Thus, in terms of the Z-transform,

z³Y(z) − 2.823z²Y(z) + 2.658zY(z) − 0.8353Y(z) = 0.02956z²U(z) − 0.05388zU(z) + 0.02584U(z)

Thus, the discrete representation for the continuous differential equation is

y_{k+3} − 2.823y_{k+2} + 2.658y_{k+1} − 0.8353y_k = 0.02956u_{k+2} − 0.05388u_{k+1} + 0.02584u_k

Note, this discrete equation can be rewritten to give the current solution in terms of past values of y and u, i.e.

y_k = 2.823y_{k−1} − 2.658y_{k−2} + 0.8353y_{k−3} + 0.02956u_{k−1} − 0.05388u_{k−2} + 0.02584u_{k−3}

Problems utilizing this concept: E3c, E4c

Chapter 3. Expressing Differential Equations in State Variable Format

3.1 Representing an nth Order System by n First Order Differential Equations

Numerical simulation algorithms generally require that the system of equations be expressed in the format of simultaneous first order differential equations.
In addition, many design algorithms are based on the use of state variable formats for a system's dynamics. For cases without input derivatives, conversion to state variable format is very simple since assigning definitions for the state variables does not involve the input.

Example: Express the following 2nd order differential equation in the form of two simultaneous 1st order differential equations:

z̈ + 4ż + 10z = 5u

where u is the input. Note, there are no input derivatives (u̇, ü, etc.) in this equation. We must define two new variables, x1 and x2, and then write the equations for their derivatives. Define x1 and x2 so that we can easily get equations for their derivatives in terms of only x1, x2, and u. For the equation above, the simplest choice is

x1 = z
x2 = ż

Thus, the derivative equations are

ẋ1 = x2
ẋ2 = 5u − 10x1 − 4x2

These two simultaneous 1st order differential equations are equivalent to the original 2nd order differential equation for z.

In general, before solving these state variable equations, we must define the output variable of interest, which is denoted by y. The definition for y can be any function of x1, x2, …, xn and u. For example, if our output of interest is z, then y = z = x1. If our output variable of interest is ż, then y = ż = x2. If our output variable of interest is 2ż + 5z − u, then y = 2x2 + 5x1 − u.

3.2 Simulation Diagram Approach When There are Input Derivatives

For differential equations with input derivatives, the most versatile approach, which allows for nonzero initial conditions, allows for unusual choices for the output of interest, and allows for equal order numerators and denominators, is the simulation diagram approach. One can either draw the diagram for the specific equation or use the tables for an nth order system in general found in Appendix G of Modeling and Simulation of Dynamic Systems by Robert Woods and Kent Lawrence.
If the input derivatives are no greater than 1st order, the process is very simple, not requiring a diagram or the use of a general set of equations.

The following example and steps demonstrate the process of drawing a simulation diagram. Consider the following differential equation:

y⃛ + 12ÿ + 5ẏ + 24y = 3ü + 2u̇ + 6u

The steps for drawing a simulation diagram are:

(1) Solve for the highest derivative of y and move all input derivatives to the left side of the equation.
(2) Initiate drawing the diagram by passing the terms on the left side of this equation through an integrator.
(3) Introduce a summer to cancel any non-derivative u terms.
(4) Pass the remaining terms through a second integrator.
(5) Repeat step 3.
(6) Repeat step 4.
(7) Continue repeating this process until only y remains.
(8) Introduce a summer at the very beginning and the feedback terms needed to get all the terms on the right side of the equation; note, "boot-strapping" may be required if a term is not available because of the presence of input derivatives.
(9) Redraw the diagram leaving only terms with u and assign the outputs of the integrators to be the state variables. By definition, the inputs to the integrators will be the derivatives of the state variables.
(10) Using the diagram, write the equations for the derivatives of the state variables in terms of the state variables and u.
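However the diagram is drawn, the resulting (A, B, C, D) can always be sanity-checked numerically: it must reproduce the transfer function of the original differential equation. A short sketch in Python (scipy assumed; the 2nd order example equation and the boot-strapped state definitions here are my own smaller instance of the same idea, not the 3rd order example worked below):

```python
import numpy as np
from scipy.signal import ss2tf

# Example (mine): y'' + 3y' + 2y = u' + 5u, i.e. Y/U = (s + 5)/(s^2 + 3s + 2).
# Boot-strapped choice of states: x1 = y, x2 = y' - u, which gives
#   x1' = x2 + u
#   x2' = -2*x1 - 3*x2 + 2*u
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(np.squeeze(num))  # -> [0. 1. 5.], the coefficients of s + 5
print(den)              # -> [1. 3. 2.]
```

If the recovered polynomials differ from the original equation's coefficients, a sign or gain was dropped somewhere in the diagram.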
For the 3rd order differential equation above:

Step 1: y⃛ − 3ü − 2u̇ = −12ÿ − 5ẏ − 24y + 6u

Steps 2-7: [simulation diagram: y⃛ − 3ü − 2u̇ is passed through an integrator to give ÿ − 3u̇ − 2u; a summer adds 2u to leave ÿ − 3u̇; a second integrator gives ẏ − 3u; a summer adds 3u to leave ẏ; a third integrator gives y]

Step 8: [simulation diagram: a summer at the very beginning forms the first integrator input from feedback of the available signals with gains 12, 5, and 24 and from the input u; because ÿ and ẏ themselves are not available, "boot-strapping" corrections to the u terms are required, which produce the −34u and +399u terms appearing below]

Step 9: [diagram redrawn leaving only u terms; the integrator outputs are assigned as the state variables, giving x1 = y, x2 = ẏ − 3u, and x3 = ÿ − 3u̇ + 34u]

Step 10:

ẋ1 = x2 + 3u
ẋ2 = x3 − 34u
ẋ3 = −24x1 − 5(x2 + 3u) − 12(x3 − 34u) + 6u = −24x1 − 5x2 − 12x3 + 399u

Problems utilizing this concept: H17, H18, E5c, E6c

3.3 General Matrix Format for State Variable Equations

The general matrix format used in MATLAB and numerous mathematical algorithms, assuming y is the output of interest, is as follows:

Ẋ = AX + Bu
y = CX + Du

For the example in section 3.1, if the output of interest is y = 2ż + 5z − u, then we have the following:

[ẋ1; ẋ2] = [0 1; −10 −4] [x1; x2] + [0; 5] u

y = [5 2] [x1; x2] + [−1] u

Thus,

A = [0 1; −10 −4]   B = [0; 5]   C = [5 2]   D = [−1]

3.4 State Variable Equations in Matrix Format if there are Input Derivatives

3.4.1 Only 1st Order Input Derivatives

Consider the following 2nd order differential equation:

z̈ + 4ż + 10z = 5u + 7u̇

Since the input derivative is only first order, the process of drawing the simulation diagram will lead to a simple result that can be achieved without drawing the diagram; the first step of the simulation diagram approach is essentially all that is necessary. Solving for z̈ and moving the u̇ term to the left side of the equation gives

z̈ − 7u̇ = 5u − 10z − 4ż

Since z̈ − 7u̇ is the derivative of ż − 7u, a logical choice for one of the state variables is ż − 7u because we know we have an equation for its derivative.
Thus, we can define the state variables as follows:

x1 = z
x2 = ż − 7u

Thus, we have the following derivative equations:

ẋ1 = x2 + 7u
ẋ2 = 5u − 10x1 − 4(x2 + 7u) = −10x1 − 4x2 − 23u

In matrix form this is

[ẋ1; ẋ2] = [0 1; −10 −4] [x1; x2] + [7; −23] u

If the output of interest is ż, then

y = [0 1] [x1; x2] + [7] u

3.4.2 Format Options for Input Derivatives Higher than 1st Order

There are an infinite number of ways of defining the state variables for a particular differential equation (or transfer function). The choice depends on the restrictions. If there are no nonzero initial conditions and the output of interest is the variable for which the differential equation has been written or for which you have the transfer function, the Phase Variable approach is very simple and easily applied. However, if there are nonzero initial conditions or if there is an unusual choice for the output of interest, then the Simulation Diagram approach will most likely be required.

3.4.2.1 Simulation Diagram Approach

Recall the example in section 3.2. The following Laplace transform comes from the differential equation for the case of zero initial conditions and y being the output of interest:

Y(s)/U(s) = (3s² + 2s + 6)/(s³ + 12s² + 5s + 24)

Using the simulation diagram approach, the following formulations are achieved:

x1 = y
x2 = ẏ − 3u
x3 = ÿ − 3u̇ + 34u

Ẋ = AX + Bu
y = CX + Du

A = ?   B = ?   C = ?   D = ?

A = [0 1 0; 0 0 1; −24 −5 −12]   B = [3; −34; 399]   C = [1 0 0]   D = 0

3.4.2.2 Phase Variable Approach

For the Phase Variable approach, the A, B, C, and D matrices can be written using the coefficients of the differential equation or transfer function directly. This is demonstrated below for the example in section 3.4.2.1. The last row of the A matrix contains the coefficients of the denominator of the transfer function and the C matrix contains the coefficients of the numerator polynomial.
The B matrix contains zeros except for the last term, which is 1. The D matrix is 0. The numerator must be lower order than the denominator.

This simple approach to getting the state variables is preferred as long as the initial conditions are zero and the output of interest is the dependent variable in the differential equation or transfer function, which in this case is y. Note that the matrices are different from those for the Simulation Diagram approach. This difference is due to different definitions for the state variables. The actual definitions of the Phase Variable state variables are not known; this is why there can be no nonzero initial conditions.

A = [0 1 0; 0 0 1; −24 −5 −12]   B = [0; 0; 1]   C = [6 2 3]   D = 0

3.4.2.3 MATLAB Approach

The following commands in MATLAB are useful for transitioning between transfer functions and state variables: "tf2ss" and "ss2tf". This method works even if the transfer function numerator order is equal to the order of the denominator. Consider the following transfer function:

Y(s)/U(s) = (3s² + 10s + 22)/(s² + 4s + 2)

>> n = [3 10 22]; d=[1 4 2];
>> [A,B,C,D]=tf2ss(n,d)

These commands give

A = [−4 −2; 1 0]   B = [1; 0]   C = [−2 16]   D = 3

These matrices will be totally different from the Simulation Diagram or Phase Variable matrices. To confirm the original transfer function,

>> [num,den]=ss2tf(A,B,C,D)

which gives num = [3 10 22] and den = [1 4 2].

3.4.2.4 Example Demonstrating the Different Methods

Express the following differential equation in state variable format using the three approaches in sections 3.4.2.1 - 3.4.2.3; note, z is the desired output in each case. Assume all initial conditions are zero.
z+11z+ 65z+123z = 2u+18u+ 246u (a) Simulation diagram (Note this method is required in general if there are nonzero initial  conditions or if the output of interest is not z) ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ 0 1 0 ⎢⎢⎢⎣⎡− − − ⎢⎢⎢⎣⎡⎥⎥⎥⎦⎤ x 1 ⎥⎥⎥⎦⎤ ⎢⎢⎢⎣⎡ 2 ⎥⎥⎥⎦⎤ x 2 = 0 0 1 x 2 + − 4 u x 3 123 65 11 x 3 160 ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ y = [ ] [ ]u 1 0 0 0 x 2 x 3 + (b) Phase Variables (note, since the order of the numerator of the transfer function for this  differential equation is less than the order of the denominator, the matrix coefficients come  directly from the transfer function coefficients) ⎢⎢⎢⎣⎡ x ⎥⎥⎥⎦⎤ 0 1 0 x 0 1 ⎢⎢⎢⎣⎡− − − ⎢⎢⎢⎣⎡⎥⎥⎥⎦⎤ 1 ⎥⎥⎥⎦⎤ ⎢⎢⎢⎣⎡ ⎥⎥⎥⎦⎤ x 2 x 3 = 0 0 1 123 65 11 x 2 x 3 + 0 1 u ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ y = [ ] [ ]u 246 18 2 0 x 2 x 3 (c) MATLAB << Num=[2 18 246]; + << Den=[1 11 65 123]; << [A,B,C,D]=tf2ss(Num,Den)74 ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ ⎢⎢⎢⎣⎡− − − 11 65 123 ⎢⎢⎢⎣⎡⎥⎥⎥⎦⎤ x 1 ⎥⎥⎥⎦⎤ ⎢⎢⎢⎣⎡ 1 ⎥⎥⎥⎦⎤ x 2 = 1 0 0 x 2 + 0 u x 3 0 1 0 x 3 0 ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ y = [ ] [ ]u 2 18 246 0 x 2 + x 3 3.4.2.5 Example of Phase Variable Approach when the Numerator and Denominator  Orders are Equal Use the phase variable approach to express the following differential equation in state variable  format; zis the output of interest. Note, you will need to first do long division to obtain a lower  order numerator transfer function plus a remainder term. z+11z+ 65z+123z = 4u+ 46u+ 278u+ 738u Z s ⎥⎦⎤ ⎢⎣⎡+ + + ⎤ 3 2 ⎢⎣⎡+ 2 4 46 278 738 ( )3 2 s s s + + + 2 18 246 ( ) s s = 3 2 s s s 11 65 123 ⎥ = U s ⎦ + + s s s + + + 11 65 123 4 ( ) U s The process is to apply the phase variable approach to the ratio of polynomials and then add the  4u term to the output of interest equation, i.e. ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ 0 1 0 ⎢⎢⎢⎣⎡− − − ⎢⎢⎢⎣⎡⎥⎥⎥⎦⎤ x 1 ⎥⎥⎥⎦⎤ ⎢⎢⎢⎣⎡ 0 ⎥⎥⎥⎦⎤ x x 2 3 = 0 0 1 123 65 11 x x 2 3 + 0 1 u ⎢⎢⎢⎣⎡ x 1 ⎥⎥⎥⎦⎤ y z = = [ ] [ ]u 246 18 2 4 x x 2 3 + Use “ss2tf” in MATLAB to confirm that you did the conversion to state variable format  correctly. 
A=[0 1 0;0 0 1;-123 -65 -11];
B=[0;0;1];
C=[246 18 2];
D=4;
[n,d]=ss2tf(A,B,C,D)

n = 4.0000e+000  4.6000e+001  2.7800e+002  7.3800e+002
d = 1.0000e+000  1.1000e+001  6.5000e+001  1.2300e+002

3.5 Maintaining Access to Natural Variables When Creating State Variable Equations from Block Diagrams

Consider the block diagram below with input r(t). Each of the blocks represents a particular component in the system; the outputs of the blocks are called the natural variables since they represent the output of a component which can potentially be observed, measured, and/or recorded. Design of feedback control systems often requires the measurement of these natural variables for feedback. The inputs to the blocks are also considered to be natural variables.

[Block diagram: r enters a summer that forms e = r − z; e passes through 10(s + 5)/(s + 10) to give u; u enters a second summer that forms v = u − z; v passes through (s + 2)/(s² + 3s + 100) to give z]

(1) What are the natural variables of this system?

(2) What is the transfer function relating Z(s) to V(s)? For this transfer function, define two state variables x1 and x2 and obtain equations for the derivatives of these state variables in terms of v; also, obtain the equation for z in terms of x1, x2, and v.

(3) What is the transfer function relating U(s) to E(s)? For this transfer function, define a state variable x3 and obtain the equation for its derivative in terms of e.

(4) Assume that the variable e is the output of interest. Using the equations for the summing junctions in the block diagram and the state variable derivative and output equations found in (2) and (3), eliminate the variables u, v, and z and obtain the following matrix state variable equations, remembering that y = e, the output of interest:

Ẋ = AX + Br
y = CX + Dr

A = ?   B = ?   C = ?   D = ?

Solution:

(1) The natural variables are z, v, u, and e; the input to the system is r.
(2) Using the phase variable approach to getting the state variable equations:

\[
Z(s) = \left[\frac{s+2}{s^2+3s+100}\right] V(s)
\]

where

\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -100 & -3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} v
\]
\[
z = \begin{bmatrix} 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\qquad v = u - z
\]

(3)
\[
U(s) = \left[\frac{10s+50}{s+10}\right] E(s) = \left[\frac{-50}{s+10} + 10\right] E(s)
\]

Thus,
\[
\dot{x}_3 = -10x_3 + e, \qquad u = -50x_3 + 10e, \qquad e = r - z
\]

Thus, the unknowns are x1, x2, x3, z, v, u, and e, and the equations for this system are

\[
\dot{x}_1 = x_2
\]
\[
\dot{x}_2 = -100x_1 - 3x_2 + v
\]
\[
\dot{x}_3 = -10x_3 + e
\]
\[
z = 2x_1 + x_2
\]
\[
v = u - z
\]
\[
e = r - z
\]
\[
u = -50x_3 + 10e
\]

Eliminating v, u, z, and e gives

\[
\dot{x} = Ax + Br, \qquad y = Cx + Dr
\]
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ -122 & -14 & -50 \\ -2 & -1 & -10 \end{bmatrix}, \quad
B = \begin{bmatrix} 0 \\ 10 \\ 1 \end{bmatrix}, \quad
C = \begin{bmatrix} -2 & -1 & 0 \end{bmatrix}, \quad
D = 1
\]

3.6 SIMULINK

Simulink is simulation software that runs as part of MATLAB and is included with the student version of MATLAB. There are numerous reference books and documents (type 'doc simulink') for learning to perform simulations. To start Simulink, click on the Home tab and then the Simulink Library tab in the toolbar of the command window. Then, under the File tab, click on 'New' and then on 'Model'.

Examples

1. Use Simulink with a "workspace" to obtain a plot of y(t) for r(t) a step input with magnitude of 30. The initial conditions are all zero except y(0-) = 3.

[Block diagram: step input r enters summing junctions with negative feedback; the forward path contains a gain of 4, the transfer function 4/(s+4), and an integrator 1/s whose output is y; the feedback gains are 10 and 20.]

[Simulink diagram: Step source, Add and Add1 blocks, Transfer Fcn 4/(s+4), Integrator 1/s, Gain blocks 10 and 20, a Clock, and a To Workspace block storing y.]

>> plot(y(:,1),y(:,2))
>> xlabel('time, sec.')
>> ylabel('y')
>> title('Homework #3, problem 1')

[Resulting plot titled 'Homework #3, problem 1': y versus time, sec., from 0 to 1 sec.; y starts at 3.]

2. Create a Simulink diagram for the following differential equation; the input u(t) will be band-limited white noise stored in a "workspace".
\[
\ddot{z} + \frac{b}{M}\dot{z} + \frac{k}{M}z = \frac{b}{M}\dot{u} + \frac{k}{M}u,
\qquad \frac{k}{M} = 100, \qquad \frac{b}{M} = 8
\]

[Simulink diagram: From Workspace block supplying u, a Clock, gain blocks k/M = 100 and b/M = 8, two integrators 1/s, and a To Workspace block storing y.]

>> [u(:,2),u(:,1)]=BLWNinput(128,.01,2);
>> plot(y(:,1),y(:,3),'k',y(:,1),y(:,2),'.r')

[Resulting plot: displacement versus time, sec., from 0 to 1.4 sec., showing the input u and the output z.]

Problems utilizing this concept: H32, H42
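BLWNinput is a course-supplied function, so a self-contained sketch of Example 2's model (k/M = 100, b/M = 8) can instead be driven with a unit-step input using SciPy; this is an illustrative stand-in for the Simulink simulation, under the assumption that SciPy is available. The response should settle near the DC gain 100/100 = 1.

```python
import numpy as np
from scipy.signal import lti

# z'' + 8 z' + 100 z = 8 u' + 100 u  ->  transfer function (8s + 100)/(s^2 + 8s + 100)
sys = lti([8, 100], [1, 8, 100])

# Unit-step response over the same 1.4 s window shown in the plot above
t, z = sys.step(T=np.linspace(0, 1.4, 500))
print(float(z[-1]))  # settles near the DC gain of 1
```

With poles at -4 ± 9.2j, the transient decays as e^(-4t), so by 1.4 s the response has essentially reached steady state.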