Fnd of Computatnl Sci (CSI 701)
This 65-page set of class notes was uploaded by Summer Kreiger on Monday, September 28, 2015. The notes belong to CSI 701 at George Mason University, taught by Juan Cebral in Fall. Since its upload, it has received 50 views.
Interpolation & Integration

Outline: curve fitting and interpolation; interpolation on structured and unstructured grids; numerical integration; random numbers.

Interpolation
Given a set of data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, we want a function $f$ which satisfies $y_i = f(x_i)$, $i = 1, \ldots, n$. We may also wish to constrain this continuous function so that some set of derivatives is continuous, i.e. $\lim_{\epsilon \to 0} \left[ f^{(k)}(x_i + \epsilon) - f^{(k)}(x_i - \epsilon) \right] = 0$ for all $i$ and for derivatives $k < m$.

Uses of interpolation
Interpolation is used throughout the sciences. The most common applications are: evaluating between data points; plotting data; approximating a continuous but difficult mathematical function; finding derivatives or integrals of tabular data; and the finite element method.

Interpolation vs. approximation
Interpolation is NOT approximation. The least squares method gives an optimal fit for an overdetermined system; interpolation exactly reproduces the data points, $y_i = f(x_i)$. The simple rule: if you know every point is exactly right and has virtually no error, you interpolate; if you know each point has a measurement uncertainty, you approximate.

Basis functions
One of the best ways to represent an interpolating function is to define a set of basis functions $\phi_j(x)$. The interpolating function is then a linear combination of these basis functions:
$f(x) = \sum_j y_j \, \phi_j(x)$.
We require $y_i = \sum_j y_j \, \phi_j(x_i) = f(x_i)$, $i = 1, \ldots, n$. Clearly there are an infinite number of functions which satisfy this requirement, for example any set of functions with $\phi_j(x_i) = \delta_{ij}$ (1 if $i = j$, 0 if $i \neq j$).

Polynomial interpolation
Polynomial: $p(x) = \sum_{i=0}^{N} a_i x^i$. Points: $x_1, x_2, \ldots, x_N$. Values: $y_1 = p(x_1), y_2 = p(x_2), \ldots, y_N = p(x_N)$.
- Polynomial evaluation: given a polynomial and a point, find the value of the polynomial at that point.
- Polynomial interpolation (the inverse problem): given a set of points and associated values, find the polynomial.
- Root finding: given a polynomial and a set of values, find the points.

Polynomial evaluation
Horner's method is an efficient way of evaluating polynomials.
For a given polynomial of degree $n-1$,
$p_{n-1}(x) = q_1 + q_2 x + q_3 x^2 + \cdots + q_n x^{n-1}$,
Horner's method consists of rewriting the polynomial as a series of nested multiplications:
$p_{n-1}(x) = q_1 + x\big(q_2 + x\big(q_3 + \cdots + x(q_{n-1} + x\,q_n)\big)\big)$.
This method involves only $n-1$ multiplications and $n-1$ additions.

Lagrange interpolation
The classic solution to the interpolation problem is Lagrange's interpolation formula, which can be derived by forming a system of equations:
$p(x) = \sum_{j=1}^{N} y_j \, \ell_j(x)$,
where the Lagrange basis functions are defined as
$\ell_j(x) = \prod_{k=1,\, k \neq j}^{N} \frac{x - x_k}{x_j - x_k}$.
These functions satisfy $\ell_j(x_i) = 1$ if $i = j$ and $\ell_j(x_i) = 0$ if $i \neq j$.

The problem with polynomial interpolation
High-order polynomials are very complex and can have many oscillations. In some cases the fit or interpolated data may have little to do with the underlying data distribution.
[Figure: original function compared with an $n = 5$ and an $n = 10$ polynomial fit.]

Piecewise polynomials
A global fit to all the points in a system can be very expensive and may lead to undesired oscillations. A piecewise polynomial is a function which uses only a local set of points to determine the value. Linear interpolation between points along a graph is a good example of this type of function.

Cubic splines
Here we use a cubic polynomial to interpolate the values in each interval $[x_i, x_{i+1}]$:
$p_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i, \quad i = 1, 2, \ldots, N-1$.
We must have $p_i(x_i) = y_i$ and $p_i(x_{i+1}) = y_{i+1}$ for $i = 1, \ldots, N-1$, because the spline must touch the knots. The spline must also curve smoothly through the knots, so the derivatives of adjacent spline polynomials must be equal there: $p'_{i-1}(x_i) = p'_i(x_i)$ for $i = 2, \ldots, N-1$. Two more conditions must be given at the spline ends to close the system of equations; the "natural" conditions specify zero second derivatives at the end points.

B-splines
Probably the most common and useful piecewise interpolation uses B-splines: a set of basis functions, defined recursively, which can be used to create interpolations of arbitrary order. A B-spline of degree zero is
$B_i^0(x) = 1$ if $t_i \le x < t_{i+1}$, and $0$ elsewhere.
To create higher-order functions we use the recursion formula
$B_i^k(x) = v_i^k(x)\, B_i^{k-1}(x) + \big(1 - v_{i+1}^k(x)\big)\, B_{i+1}^{k-1}(x)$,
where $v_i^k$ is an interpolating function, $v_i^k(x) = \dfrac{x - x_i}{x_{i+k} - x_i}$.
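Horner's nested-multiplication scheme above can be sketched in a few lines of Python. This is a minimal illustration, not part of the notes; the function name `horner` and the coefficient ordering (constant term first) are my own choices.

```python
def horner(coeffs, x):
    """Evaluate q1 + q2*x + ... + qn*x**(n-1) by nested multiplication.

    coeffs are ordered from the constant term up: [q1, q2, ..., qn].
    Only n-1 multiplications and n-1 additions are performed.
    """
    result = 0.0
    for q in reversed(coeffs):  # start from the highest-degree coefficient
        result = result * x + q
    return result

# p(x) = 1 + 2x + 3x^2 at x = 2 gives 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # -> 17.0
```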
[Figure: B-splines of order 0, 1, 2, and 3.]

Properties of B-splines
- If $x < x_i$ or $x > x_{i+k+1}$, then $B_i^k(x) = 0$ (local support).
- If $x_i < x < x_{i+k+1}$, then $B_i^k(x) > 0$.
- For every location $x$, $\sum_i B_i^k(x) = 1$ (partition of unity).
- A B-spline of order $k$ is continuously differentiable $k-1$ times.
- The $B_i^k$ are linearly independent functions on $[x_1, x_n]$, and any spline of degree $k$ with knots $x_i$ can be represented with this set over the interval $[x_1, x_n]$.

Interpolation on grids
Given: field values at the nodes of a Cartesian grid, and a position in space. Find: the field value at the given position. Procedure: find the cell (element) containing the given position, then linearly interpolate the field values from the cell nodes.

Interpolation on a Cartesian grid
- Bounding box $x_{min}, x_{max}, y_{min}, y_{max}$; number of grid points in each direction $n_x, n_y$; field values $f_{ij}$, $i = 1, \ldots, n_x$, $j = 1, \ldots, n_y$.
- Spacings: $\Delta x = (x_{max} - x_{min})/(n_x - 1)$, $\Delta y = (y_{max} - y_{min})/(n_y - 1)$.
- Cell that contains the given position $(x, y)$: $i = \operatorname{int}\!\big((x - x_{min})/\Delta x\big) + 1$, $j = \operatorname{int}\!\big((y - y_{min})/\Delta y\big) + 1$.
- Local cell coordinates: $\xi = (x - x_i)/\Delta x$, $\eta = (y - y_j)/\Delta y$.
- Bilinear field interpolation:
$f(x, y) = (1-\xi)(1-\eta)\, f_{i,j} + \xi(1-\eta)\, f_{i+1,j} + (1-\xi)\eta\, f_{i,j+1} + \xi\eta\, f_{i+1,j+1}$.

Interpolation on unstructured grids
The host element containing a given position cannot be found directly as on Cartesian grids; we need to search the mesh for the element containing the given position.

Element shape functions at a point
Coordinate interpolation inside a triangular element with nodes $a, b, c$:
$x = N_a x_a + N_b x_b + N_c x_c$.
The shape functions have the constant-sum property $N_a + N_b + N_c = 1$, so
$x - x_a = N_b (x_b - x_a) + N_c (x_c - x_a)$.
Defining the edge vectors $g_1 = x_b - x_a$ and $g_2 = x_c - x_a$, and writing $x_{pa} = x_p - x_a$ for the point $p$, we have
$x_{pa} = N_b\, g_1 + N_c\, g_2$.
Taking dot products with $g_1$ and $g_2$ gives, in matrix form,
$\begin{pmatrix} g_{11} & g_{21} \\ g_{12} & g_{22} \end{pmatrix} \begin{pmatrix} N_b \\ N_c \end{pmatrix} = \begin{pmatrix} x_{pa} \cdot g_1 \\ x_{pa} \cdot g_2 \end{pmatrix}$,
where $g_{ij} = g_i \cdot g_j$. This $2 \times 2$ system is solved with
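The bilinear Cartesian-grid interpolation above can be sketched as follows. This is a minimal illustration with my own function name and 0-based indexing, assuming a uniform grid stored row-by-row.

```python
def bilinear(field, xmin, ymin, dx, dy, x, y):
    """Bilinearly interpolate `field[i][j]` (uniform Cartesian grid) at (x, y).

    Node (i, j) sits at (xmin + i*dx, ymin + j*dy); 0-based indices.
    """
    i = int((x - xmin) / dx)          # cell indices (integer part)
    j = int((y - ymin) / dy)
    xi = (x - (xmin + i * dx)) / dx   # local cell coordinates in [0, 1]
    eta = (y - (ymin + j * dy)) / dy
    return ((1 - xi) * (1 - eta) * field[i][j]
            + xi * (1 - eta) * field[i + 1][j]
            + (1 - xi) * eta * field[i][j + 1]
            + xi * eta * field[i + 1][j + 1])

# f(x, y) = x + y sampled on a 3x3 grid with spacing 1 is reproduced exactly,
# since bilinear interpolation is exact for (bi)linear fields.
grid = [[i + j for j in range(3)] for i in range(3)]
print(bilinear(grid, 0.0, 0.0, 1.0, 1.0, 0.5, 1.5))  # -> 2.0
```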
$\det = g_{11} g_{22} - g_{12} g_{21}$, giving
$N_b = \dfrac{g_{22}\,(x_{pa} \cdot g_1) - g_{21}\,(x_{pa} \cdot g_2)}{\det} \equiv \xi$,
$N_c = \dfrac{g_{11}\,(x_{pa} \cdot g_2) - g_{12}\,(x_{pa} \cdot g_1)}{\det} \equiv \eta$,
and $N_a = 1 - \xi - \eta$.

Host element: conditions for inclusion in an element
$N_i \ge 0$ for $i = a, b, c$; explicitly, $\xi \ge 0$, $\eta \ge 0$, and $1 - \xi - \eta \ge 0$.

Interpolating the field values
Interpolation from the element nodes is linear:
$f = \sum_i N_i f_i = (1 - \xi - \eta)\, f_a + \xi\, f_b + \eta\, f_c$.

Brute-force search
Check whether the point is inside the mesh bounding box; then loop over all the elements, evaluate the element shape functions at the given point, and if the point is inside, the host is found. Cost of grid-to-grid interpolation: $N_p(\text{mesh 1}) \times N_p(\text{mesh 2})$.

Bin search
Load the grid points into a bin data structure; find the bin that contains the point (similar to Cartesian-grid interpolation); loop over the elements of this bin, evaluating the element shape functions at the given point; if the point is inside, the host is found. Cost of grid-to-grid interpolation: $N_{p1} \times \langle N_{p1}/\text{bin} \rangle$.

Octree search
Load the grid points into an octree (quadtree in 2-D) data structure; find the point closest to the given point; loop over the elements surrounding it, evaluating the element shape functions at the given point; if the point is inside, the host is found. Cost of grid-to-grid interpolation: $N_{p1} \times \log \langle N_{p2}/\text{surr.\ elem.} \rangle$.

Neighbor-to-neighbor search
Start at an initial guess element. While not found: evaluate the shape functions at the given point; if inside, the host is found; otherwise jump to the neighbor element opposite the node with the largest negative shape function. Performance depends on the initial guess, i.e. how many elements away the host element is.

Layered approach
Find an initial guess from the bin or octree data structures, then perform a neighbor-to-neighbor search. If the host is not found within a maximum number of steps, fall back to the brute-force algorithm. Initialize the guess for the next point in the grid from the current host: if the grid points are close in space, the host will be found in a few steps.

Curve fitting
We are given a set of data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, and the functional form of a function to fit the data, $f(x) = f(c_1, c_2, \ldots, c_M; x)$, where $c_1, \ldots, c_M$ are some parameters.
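The triangle shape-function evaluation and host-element test above can be sketched as plain Python over 2-D points. The names and the inclusion tolerance are my own choices.

```python
def shape_functions(a, b, c, p):
    """Shape functions (Na, Nb, Nc) of point p in triangle (a, b, c).

    Solves the 2x2 system for (Nb, Nc) by Cramer's rule, as in the
    derivation above. Points are (x, y) tuples; the matrix is symmetric,
    so g21 = g12.
    """
    g1 = (b[0] - a[0], b[1] - a[1])  # edge vectors from node a
    g2 = (c[0] - a[0], c[1] - a[1])
    xpa = (p[0] - a[0], p[1] - a[1])
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    g11, g12, g22 = dot(g1, g1), dot(g1, g2), dot(g2, g2)
    det = g11 * g22 - g12 * g12
    xi = (g22 * dot(xpa, g1) - g12 * dot(xpa, g2)) / det   # Nb
    eta = (g11 * dot(xpa, g2) - g12 * dot(xpa, g1)) / det  # Nc
    return 1.0 - xi - eta, xi, eta

def inside(a, b, c, p, tol=1e-12):
    """Host-element test: all shape functions non-negative."""
    return all(N >= -tol for N in shape_functions(a, b, c, p))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(shape_functions(*tri, (0.25, 0.25)))  # -> (0.5, 0.25, 0.25)
print(inside(*tri, (2.0, 2.0)))             # -> False
```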
The curve-fitting procedure is to find the choice of parameters that best matches the observed values at the given points. If the function were a polynomial and the values were exact, this would be interpolation; but now we are considering more general functions and inaccurate data. We will consider functions that can be written as a linear combination of simpler functions, with the unknown parameters being the coefficients:
$f(x) = c_1 f_1(x) + c_2 f_2(x) + \cdots + c_M f_M(x)$.

Uses of curve fitting
Curve fitting is used extensively in the sciences, both for data analysis (e.g. understanding experimental data) and for constructing phenomenological models.

The least squares problem
When we fit data to some functional form $f$, we generally want to find a parameter vector $x$ which satisfies
$\min_x \sum_{i=1}^{m} \big(y_i - f(t_i; x)\big)^2$.
This effectively minimizes the $L_2$ norm of the residual vector (the error: the difference between the data and the function evaluated at the given points). We could minimize another norm instead, but the least squares approach is the most commonly used and perhaps the easiest to implement.

Polynomial least squares
The least squares problem can be written in matrix form:
$\begin{pmatrix} 1 & t_1 & t_1^2 & \cdots & t_1^{n-1} \\ 1 & t_2 & t_2^2 & \cdots & t_2^{n-1} \\ \vdots & & & & \vdots \\ 1 & t_m & t_m^2 & \cdots & t_m^{n-1} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \approx \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}$
We are looking for a solution of the form $y = x_1 + x_2 t + x_3 t^2 + \cdots + x_n t^{n-1}$, where $t$ is now a continuous independent variable.

General least squares
We could use any set of functions instead of polynomials and make the same type of fit, as long as the coefficients $x$ are combined linearly with these forms:
$\begin{pmatrix} f_1(t_1) & f_2(t_1) & \cdots & f_n(t_1) \\ f_1(t_2) & f_2(t_2) & \cdots & f_n(t_2) \\ \vdots & & & \vdots \\ f_1(t_m) & f_2(t_m) & \cdots & f_n(t_m) \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \approx \begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix}$
We are looking for a solution of the form $y = x_1 f_1(t) + x_2 f_2(t) + \cdots + x_n f_n(t)$, where the $x_i$ are constants and $t$ is a continuous independent variable.

Numerical integration
Numerical integration is important as part of simulation codes, and in finite element methods to compute general elemental matrices.

Rectangular rule
Evaluating an integral is the same as computing the area under a curve. We can estimate the area under a curve by summing the areas of small rectangles that nearly fit under it.
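The general linear least squares fit above can be sketched with NumPy's `lstsq`. The particular basis (the functions 1 and $t$) and the data are an arbitrary example of mine, not from the notes.

```python
import numpy as np

# Fit y ~= x1*1 + x2*t to data lying exactly on y = 2 + 3t.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * t

# Design matrix: one column per basis function f_j, evaluated at each t_i.
A = np.column_stack([np.ones_like(t), t])

# Solve min ||A x - y||_2; lstsq handles overdetermined systems.
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # -> [2. 3.]
```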
Suppose we divide the integration interval $[a, b]$ into $N$ segments delimited by the points $x_1, x_2, \ldots, x_{N+1}$. For the height of each rectangle we could use $f(x_i)$, or $f(x_{i+1})$, or the function evaluated at the center of the segment:
$\int_a^b f(x)\,dx \approx \sum_{i=1}^{N} \Delta x_i \, f\!\left(\frac{x_{i+1} + x_i}{2}\right) \equiv R, \qquad \Delta x_i = x_{i+1} - x_i$.
[Figure: rectangles with height $f(x_i)$, $f(x_{i+1})$, or $f\big((x_i + x_{i+1})/2\big)$.]

Trapezoidal rule
A better approximation divides the area under the curve into trapezoids:
$\int_a^b f(x)\,dx \approx \sum_{i=1}^{N} \Delta x_i \, \frac{f(x_{i+1}) + f(x_i)}{2} \equiv T$.

Simpson's rule
The errors in the rectangular and trapezoidal rules can be computed as follows: expand $f(x)$ in a Taylor series about the midpoint of each interval, integrate, and sum over all intervals. After some algebra,
rectangular rule: $\int_a^b f(x)\,dx = R + \alpha\, \Delta x^3 e_3 + O(\Delta x^5)$;
trapezoidal rule: $\int_a^b f(x)\,dx = T - 2\alpha\, \Delta x^3 e_3 + O(\Delta x^5)$.
The idea of Simpson's rule is to eliminate the leading term in the error by combining the rectangular and trapezoidal rules: multiply the rectangular rule by 2, add the trapezoidal rule, then divide by 3:
$\int_a^b f(x)\,dx = \frac{2R + T}{3} + O(\Delta x^5)$,
which per segment gives
$\int_a^b f(x)\,dx \approx \sum_{i=1}^{N} \frac{\Delta x_i}{6} \left( f(x_i) + 4 f\!\left(\frac{x_i + x_{i+1}}{2}\right) + f(x_{i+1}) \right) \equiv S$.

Other quadrature methods
More complicated quadrature methods have been devised that gain accuracy by combining simpler methods with similar errors. The best known is Romberg integration, which uses two sets of subintervals for its two methods. Another approach is adaptive quadrature, for which an error estimator is needed in order to place more intervals in regions of large error.

Gaussian quadrature
In the formulas seen so far, the integral of a function was approximated by the sum of its values at a set of equally spaced points, multiplied by a set of properly selected weight coefficients. We saw that allowing more freedom in selecting the values of the coefficients produced higher-order approximations. The idea of Gaussian quadrature is to give ourselves the freedom to choose not only the weight coefficients but also the locations of the points where the function is evaluated.
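The composite trapezoidal and Simpson rules above can be sketched as follows; the test integrand ($\sin x$ on $[0, \pi]$, whose integral is exactly 2) is my own example.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    dx = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * dx) for i in range(1, n))
    return s * dx

def simpson(f, a, b, n):
    """Composite Simpson rule: (f_i + 4*f_mid + f_{i+1}) * dx / 6 per segment."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * dx, a + (i + 1) * dx
        total += (f(x0) + 4.0 * f(0.5 * (x0 + x1)) + f(x1)) * dx / 6.0
    return total

# The O(dx^5)-per-segment Simpson error is far smaller than the
# O(dx^3)-per-segment trapezoid error for the same n.
print(abs(trapezoid(math.sin, 0.0, math.pi, 16) - 2.0))  # trapezoid error
print(abs(simpson(math.sin, 0.0, math.pi, 16) - 2.0))    # Simpson error
```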
We thus have twice as many degrees of freedom at our disposal, and we obtain formulas with twice the order of the Newton-Cotes formulas.

Newton-Cotes quadrature
The points at which the function is to be evaluated are determined a priori, usually at equal intervals; a polynomial is passed through the values of the function at these points and integrated exactly. Since $n$ values of the function define a polynomial of degree $n-1$, the errors will be of order $O(h^n)$, where $h$ is the size of the intervals. This leads to the Newton-Cotes quadrature formula
$\int_a^b f(x)\,dx \approx \sum_{i=1}^{N} h_i \, f(x_i)$,
with $N = 1$ the rectangular rule, $N = 2$ the trapezoidal rule, and $N = 3$ Simpson's rule.

Gauss quadrature
Now the locations of the sampling points are also to be determined, in order to achieve better accuracy. For $n$ sampling points we have $2n$ unknowns ($w_i$ and $x_i$), and hence a polynomial of degree $2n-1$ can be constructed and integrated exactly:
$\int_a^b f(x)\,dx \approx \sum_{i=1}^{N} w_i \, f(x_i)$.
The error is now $O(h^{2n})$. The solution of the simultaneous equations is difficult, and can be obtained in terms of Legendre polynomials; thus this process is frequently known as Gauss-Legendre quadrature. Positions and weights have been tabulated for different $N$'s.

Multidimensional integrals
Integrals of functions of several variables over regions of dimension greater than one are not easy, for two reasons. First, the number of function evaluations needed to sample an $N$-dimensional space increases as the $N$th power of the number needed for a one-dimensional integral. Second, the region of integration in an $N$-dimensional space is defined by an $(N-1)$-dimensional boundary, which can be extremely complicated (it can be nonconvex or multiply connected).

Multidimensional integration alternatives
- If the boundary is simple and the function is smooth, then breaking the problem up into repeated one-dimensional integrals, or using multidimensional Gaussian quadratures, will be effective and relatively fast.
- If the boundary is complex, the function is not strongly peaked in very small regions, and relatively low accuracy is tolerable, then Monte Carlo integration is a good option.
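The tabulated Gauss-Legendre positions and weights mentioned above are available through NumPy; a minimal sketch (the mapping from $[-1, 1]$ to $[a, b]$ is standard, the test integrand is my own example):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule.

    numpy tabulates nodes/weights on [-1, 1]; map them to [a, b].
    """
    x, w = np.polynomial.legendre.leggauss(n)
    xm = 0.5 * (b + a) + 0.5 * (b - a) * x   # mapped sampling points
    return 0.5 * (b - a) * np.sum(w * f(xm))

# An n-point rule integrates polynomials up to degree 2n-1 exactly:
# the integral of x^5 on [0, 1] is 1/6, recovered with only 3 points.
print(gauss_legendre(lambda x: x**5, 0.0, 1.0, 3))
```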
- If the integrand is strongly peaked in small regions and you know where those regions are, break the integral into several regions where the integrand is smooth and do each separately. If you do not know where the strong peaks are, it is hopeless.

Gaussian quadrature on elements
A surface or volume integral can be computed as the sum of integrals over non-overlapping regions (elements) that cover the entire domain of integration. Gaussian quadrature formulas have been tabulated for several surface and volume elements, such as quadrilaterals, triangles, tetrahedra, and hexahedra.
[Figure: linear, quadratic, and cubic quadrature-point layouts on triangles and quadrilaterals.]

Monte Carlo integrals
The integration methods discussed so far are all based upon making polynomial approximations of the integrand. The Monte Carlo method is very different: it relies upon random numbers (the method was named after the famous casino). The average of a function in an interval $[a, b]$ is defined as
$\langle f \rangle = \frac{1}{b - a} \int_a^b f(x)\,dx$,
so the integral could be computed if we could estimate the average of the integrand. The average can be estimated from a list of $N$ uniformly distributed random points $x_i$:
$\langle f \rangle_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i)$.
Therefore the integral can be approximated as
$\int_a^b f(x)\,dx \approx (b - a)\, \langle f \rangle_N$.
The standard deviation of the mean gives us an estimate of the error:
$\sigma_N = \sqrt{\frac{\langle f^2 \rangle_N - \langle f \rangle_N^2}{N}}, \qquad \int_a^b f(x)\,dx \approx (b - a)\big(\langle f \rangle_N \pm \sigma_N\big)$,
with $\sigma_N \propto 1/\sqrt{N}$.

Monte Carlo integrals: pros and cons
Pros: easily extensible to complex integration regions in multidimensional spaces (e.g. use a rejection method to discard randomly selected points that fall outside the integration region in $n$-D); singularities do not affect the method too much; it can estimate integrals when no other method can.
Cons: slow convergence; many random numbers are needed, from a good (uniform, fast) random number generator; many function evaluations are needed.

Random numbers
Topics: definitions and properties; linear congruential generators; other generators; non-uniform deviates.
An arbitrary number is one where any number will do; a random number means every number should be equally likely to occur. There is no way to produce truly random numbers on a deterministic device.
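The Monte Carlo estimate and its error bar above can be sketched directly; the test integrand ($x^2$ on $[0, 1]$, whose integral is $1/3$) and the fixed seed are my own choices.

```python
import random

def monte_carlo(f, a, b, n, seed=0):
    """Estimate the integral of f on [a, b] from n uniform random samples.

    Returns (estimate, error): (b-a)*<f>_N and (b-a) times the standard
    deviation of the mean, which shrinks like 1/sqrt(N).
    """
    rng = random.Random(seed)
    samples = [f(rng.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    sigma = (var / n) ** 0.5
    return (b - a) * mean, (b - a) * sigma

est, err = monte_carlo(lambda x: x * x, 0.0, 1.0, 20000)
print(est, err)  # estimate near 1/3, with a small error bar
```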
A computer can, however, produce pseudorandom numbers: not truly random, but useful approximations with similar statistical properties. We will discuss uniform random numbers, where each number is equally likely; other distributions are possible, for example the normal (Gaussian) distribution.

Uses of random numbers
Cryptology; simulations (molecular modeling, galactic dynamics, climate dynamics); sampling (data analysis of large datasets); Monte Carlo integration; decision making among equally likely events; Monte Carlo simulations.

Good random number generators
- A random pattern of numbers (passes statistical tests of randomness).
- A long period before repeating.
- An optimized and efficient algorithm.
- Repeatable number sequences.
- Portability between machines.

Linear congruential generators
Linear congruential generators (LCGs) use a simple linear function to create random numbers:
$x_k = (a\, x_{k-1} + b) \bmod M$,
where $a$ and $b$ are integers and $M$ is a very large integer, about the size of the largest integer possible on the system. Real numbers are produced by dividing the integer $x_k$ by $M$. $x_0$ is known as the seed of the generator.

Properties of linear congruential generators
- Random patterns of numbers: yes, if $a$ and $b$ are chosen carefully.
- Long period before repeating: yes, if $a$ and $b$ are chosen carefully.
- Optimized and efficient algorithm: yes.
- Repeatable number sequences: yes, if the seed is chosen by the user.
- Portability between machines: yes, if there is a standard IEEE implementation of integers.

A bad choice: $a = 1$, $b = 19$, $M = 381$ with seed 0 produces the sequence $0, 19, 38, 57, \ldots$, a simple arithmetic progression: not a very random sequence of numbers between 0 and 380.

LCG implementations
- `rand` (a poor example): $a = 1103515245$, $b = 12345$, $M = 2^{32}$. Initially implemented on IBM computers with a byte swap; the byte swap destroys the randomness.
- `randu`: $a = 65539$, $b = 0$, $M = 2^{31}$. Each number is random individually, but there is no guarantee that more than one of them taken together is random.
- `ran0` (Park and Miller's "Minimal Standard"): $a = 16807$, $M = 2^{31} - 1$, $b = 0$. Long period, about $10^9$.
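The Park-Miller minimal-standard generator above fits in a few lines (Python's arbitrary-precision integers avoid the overflow tricks needed in C; the generator-function form is my own choice):

```python
def park_miller(seed):
    """Minimal-standard LCG: x_k = 16807 * x_{k-1} mod (2^31 - 1).

    Yields uniform deviates in (0, 1); the seed must lie in 1 .. 2^31 - 2
    (a seed of 0 would get stuck at 0).
    """
    m = 2**31 - 1
    x = seed
    while True:
        x = (16807 * x) % m
        yield x / m   # divide by M to get a real number in (0, 1)

gen = park_miller(seed=1)
print([round(next(gen), 6) for _ in range(3)])
```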
Fibonacci generators
Fibonacci (or subtractive, lagged) generators use a sequence such as
$x_k = x_{k-p} - x_{k-q}$
for suitable lags $(p, q)$. A special start-up routine must be used to fill the array. Some lags work well, others do not. Extra memory is required compared to LCGs, but generally longer periods are obtained, and the update is computationally cheaper than an LCG's.

Linear feedback shift registers
Start with a register filled with some arbitrary bit pattern. Shift it right, one step at a time, filling vacated positions from the left with a bit determined by the contents of the register. Example, a 4-bit linear feedback shift register:
1111 0111 0011 0001 1000 0100 1001 1100 0110 1011 0101 1010 1101 1110 1111
All possible nonzero bit patterns occur; the starting value repeats after 15 steps.

Shuffling
Generate a set of random numbers. Then generate a new random number and use it to pick an element of the set already generated. This trick can greatly increase the period of LCGs. You can apply this idea to any low-quality random number generator, and you can even use two different random number generators.

Non-uniform deviates
Many problems need random numbers that are not uniformly distributed between 0 and 1; we need to transform the uniform probability distribution into the one specified by the problem. There are two approaches:
- the transformation method, typically used for exponential and normal (Gaussian) deviates;
- the rejection method, typically used for gamma, Poisson, and binomial deviates.

Transformation method
Specify a probability distribution $p(y)$. Form the cumulative probability distribution by integration:
$F(y) = \int_0^y p(y')\,dy'$.
Assuming the original function is normalized correctly, the maximum value of the integral is 1. Pick a uniform random number $x$ between 0 and 1, associate it with a location on the dependent axis of the cumulative function, and transform (or interpolate) to find the corresponding value on the independent axis: $y = F^{-1}(x)$.
[Figure: a uniform deviate $x$ mapped through the cumulative distribution to a transformed deviate $y$.]
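The transformation method above, applied to the exponential distribution, can be sketched as follows; the rate parameter and sample count are my own example values.

```python
import math
import random

def exponential_deviate(rng, lam):
    """Exponential deviate via the transformation (inverse-CDF) method.

    For p(y) = lam * exp(-lam * y), the cumulative distribution is
    F(y) = 1 - exp(-lam * y), so inverting gives y = -ln(1 - x) / lam
    for a uniform deviate x in [0, 1).
    """
    x = rng.random()                 # uniform deviate
    return -math.log(1.0 - x) / lam

rng = random.Random(42)
samples = [exponential_deviate(rng, lam=2.0) for _ in range(50000)]
print(sum(samples) / len(samples))  # sample mean should be near 1/lam = 0.5
```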
Rejection method
Form a bounding function $f(x)$ that is above the original distribution $p(x)$ and is chosen to be analytically invertible. Pick a random number $y$ and associate it with the dependent axis of the (integrated) bounding function; invert this value to find a location on the independent axis, $x_0 = F^{-1}(y)$. Pick a second random number between zero and the bounding function value, $y' \in [0, f(x_0)]$. If this number is less than the value of the actual distribution function, accept the point; otherwise reject it and start over:
if $y' < p(x_0)$: accept; if $p(x_0) \le y' \le f(x_0)$: reject.
[Figure: bounding function $f(x)$ above $p(x)$, with $F(x) = \int_0^x f(x')\,dx'$, showing accepted and rejected points.]

Quasi-random numbers
Sometimes having a set of space-filling points is better than having a set of truly random numbers, e.g. for Monte Carlo integrals. Quasi-random numbers are picked to look random but evenly spaced; in some algorithms these work better than truly random values. Examples: the Halton sequence and the Sobol sequence.
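The rejection method above can be sketched with the simplest possible bounding function, a constant $f(x) = p_{max}$ (so the invertible proposal is just a uniform draw); the target density $p(x) = 2x$ on $[0, 1]$ is my own example.

```python
import random

def rejection_sample(rng, p, pmax, a, b):
    """Rejection method with a constant bounding function f(x) = pmax.

    Draw x uniformly in [a, b] (the trivial invertible proposal), then a
    second number y in [0, pmax]; accept x when y < p(x), else retry.
    """
    while True:
        x = rng.uniform(a, b)
        y = rng.uniform(0.0, pmax)
        if y < p(x):
            return x

rng = random.Random(7)
# Sample p(x) = 2x on [0, 1]; the mean of this distribution is 2/3.
samples = [rejection_sample(rng, lambda x: 2.0 * x, 2.0, 0.0, 1.0)
           for _ in range(20000)]
print(sum(samples) / len(samples))  # near 2/3
```

A constant bound wastes proposals where $p$ is small (here half of them); a tighter invertible $f(x)$ reduces the rejection rate, which is the point of the analytically invertible bounding function in the notes.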