# APPL DYN SYST I ME 215A

UCSB


These 164 pages of class notes for ME 215A (Applied Dynamical Systems I) at the University of California, Santa Barbara, taught by J. Moehlis in Fall, were uploaded by Daren Beatty Jr. on Thursday, October 22, 2015.


## Lecture 1 — 3 November 2008

Clancy Rowley, Department of Mechanical and Aerospace Engineering, Princeton University. Presented at UC Santa Barbara.

### Outline

1. Overview of this lecture series
   - The role of reduced-order models
   - Types of models we will consider
   - Outline of future lectures
2. Proper Orthogonal Decomposition and Galerkin projection
   - Galerkin projection onto basis functions
   - Empirical basis functions using Proper Orthogonal Decomposition
   - Properties of POD modes
   - Importance of the choice of inner product
   - Energy-based inner products imply stable models
3. Example: cavity flows
   - Description of flow physics
   - POD modes and low-order models

### Motivation

- Control of fluids:
  - The equations of fluid mechanics are well known (Navier-Stokes).
  - But they are nonlinear partial differential equations (PDEs), and difficult to solve.
  - The fields of dynamical systems and control theory provide powerful tools for analysis and for designing feedback laws, *if* we have a tractable model: a low-dimensional system of ordinary differential equations (ODEs).

**Overall goal:** obtain low-dimensional models of complex, multiscale systems that approximate the full dynamics in some sense, and that are suitable for analysis and control design.

- The examples here are oriented towards fluid mechanics, but the tools are appropriate for many different types of systems.

### Types of models we will consider

Consider dynamics on a state space $V$. The evolution of a variable $x \in V$ may be described by:

- Differential equations, $x(t) \in V$:
  $$\dot{x} = f(x, u),$$
  where $u(t)$ is an optional external input.
- Maps, $x_n \in V$:
  $$x_{n+1} = f(x_n, u_n).$$
  Note: a large numerical code may often be viewed as a map taking an initial condition $x_0$ to the next timestep $x_1$.
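The remark that a numerical code is itself a map can be made concrete with a toy sketch (everything below is illustrative, not from the lecture): one step of a forward-Euler solver for $\dot{x} = -x$ defines a map $x_{n+1} = F(x_n)$.

```python
import numpy as np

def f(x):
    # Right-hand side of the ODE xdot = f(x); here the linear example xdot = -x.
    return -x

def step(x, dt=0.1):
    # One forward-Euler step: the "numerical code" viewed as a map x_{n+1} = F(x_n).
    return x + dt * f(x)

x = np.array([1.0])
for _ in range(3):
    x = step(x)
# Three applications of the map give x3 = (1 - 0.1)^3 * x0 = 0.729
```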
**Model reduction:** given dynamics on a large space $V$, obtain dynamics on a much smaller space $W$ that approximate the original dynamics in some sense.

### Outline of future lectures

1. Proper Orthogonal Decomposition and Galerkin projection
   - Example: cavity flows
   - Method of snapshots; importance of the choice of inner product
2. Dynamically scaling POD modes
   - Symmetry groups, quotient spaces, template fitting and slices; equation for the scaling parameters
   - Examples: Burgers equation, free shear layer
3. Balanced truncation for linear systems
   - Linear input-output systems; controllability and observability Gramians; balancing transformations
   - Example: linearized channel flow at a single wavenumber
4. Approximate balanced truncation for large-scale systems
   - Approximate Gramians from linearized and adjoint simulations; method of snapshots; output projection
   - Example: 3D linearized channel flow
5. Approximate balanced truncation near unstable equilibria
   - Finding unstable equilibria using Newton-Krylov methods; decoupling stable and unstable subspaces
   - Example: control of separated flow past a flat plate

### Galerkin projection

- Dynamics evolve on a high-dimensional space $V$ (possibly infinite-dimensional):
  $$\dot{x} = f(x), \qquad x \in V.$$
- Approximate by projecting onto a lower-dimensional subspace $S \subset V$: consider $r \in S$, and define dynamics on the subspace by
  $$\dot{r} = P_S f(r),$$
  where $P_S : V \to S$ is an orthogonal projection.
- Two important choices:
  - the choice of subspace $S$;
  - the choice of inner product for the projection.

### Proper Orthogonal Decomposition (POD)

- POD represents data as a sum of orthogonal empirical eigenfunctions $\varphi_j(x)$; these span the subspace $S$:
  $$u(x, t) = \sum_j a_j(t)\,\varphi_j(x).$$
- For a fixed dimension $r$ and given data $u_j(x) = u(x, t_j)$, find an optimal basis: if $P_r$ is the orthogonal projection onto $\operatorname{span}\{\varphi_j\}$, minimize the error
  $$\sum_{j=1}^m \|u_j - P_r u_j\|^2.$$
- For fluid problems where $u$ is velocity, this maximizes the energy captured by the modes. (Note: low-energy modes may still be important.)
- The optimal modes $\varphi_j$ are computed by a singular value decomposition.
- Also known as Principal Component Analysis, or the Karhunen-Loève expansion.

### Low-dimensional models

- Suppose $u$ is approximated by an expansion in POD modes,
  $$u(x, t) = \sum_{j=1}^r a_j(t)\,\varphi_j(x),$$
  and we have dynamics given by the PDE $u_t = D(u)$, where $D$ is a differential operator.
- Galerkin projection gives
  $$\dot{a}_k(t) = \langle \varphi_k, D(u) \rangle, \qquad k = 1, \ldots, r.$$
- This is a set of $r$ ODEs that govern the dynamics of the coefficients $a_j(t)$.

### POD in more detail

**Theorem (Proper Orthogonal Decomposition).** Given a linearly independent set $\{u_k \in V : k = 1, \ldots, m\}$, the subspace $S$ of dimension $n < m$ which minimizes $\operatorname{Ave}\|u - P_S u\|^2$ has an orthonormal basis $\{\varphi_j \in V : j = 1, \ldots, n\}$, where the $\varphi_j$ are solutions of
$$R \varphi_j = \lambda_j \varphi_j,$$
where $R = \operatorname{Ave}(u \otimes u^*)$, and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$ are the largest $n$ eigenvalues of $R$. The eigenfunctions $\varphi_j$ are called POD modes.

Here $\operatorname{Ave}$ denotes an average over the snapshots. Computation of the modes thus reduces to an eigenvalue problem for an operator $R$ determined by the snapshots in the dataset.

### Sketch of the proof

Let $\{\varphi_j \in V : j = 1, \ldots, r\}$ be an orthonormal basis for the subspace $S$, and let $P_S$ denote the orthogonal projection onto $S$, given by
$$P_S u = \sum_{j=1}^r \langle u, \varphi_j \rangle\, \varphi_j.$$
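The equivalence used at this point in the argument (minimizing the projection error is the same as maximizing the captured energy) is easy to check numerically. A small illustration with made-up data (names and dimensions are mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 30, 100, 5
U = rng.standard_normal((n, m))          # columns are snapshots u_j

# An arbitrary r-dimensional orthonormal basis for S (via QR).
Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))

# P_S u = sum_j <u, phi_j> phi_j, applied to every snapshot at once.
PU = Phi @ (Phi.T @ U)

# Pythagoras: Ave||u||^2 = Ave||P_S u||^2 + Ave||u - P_S u||^2, so
# minimizing the error over subspaces = maximizing the captured energy.
total = np.mean(np.sum(U**2, axis=0))
captured = np.mean(np.sum(PU**2, axis=0))
error = np.mean(np.sum((U - PU)**2, axis=0))
```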
It is simple to show that minimizing $\operatorname{Ave}\|u - P_S u\|^2$ is equivalent to maximizing $\operatorname{Ave}\|P_S u\|^2$, or maximizing
$$\sum_{j=1}^r \operatorname{Ave} |\langle u, \varphi_j \rangle|^2$$
over orthonormal functions $\varphi_j$.

Using the Lagrange multiplier theorem, the desired $\varphi_j$ are thus extremals of
$$J[\varphi] = \operatorname{Ave} |\langle u, \varphi \rangle|^2 - \lambda\left(\|\varphi\|^2 - 1\right).$$
It is not obvious at this point that these extremals will be orthogonal, but we will see soon that they are.

### Sketch of the proof (cont'd)

Before finding these extremals, we introduce some notation.

- Recall that the dual space $V^*$ of a linear space $V$ is the space of all linear functionals on $V$. For any vector $v \in V$, its dual vector $v^* \in V^*$ is $v^* = \langle \cdot, v \rangle$. Note that $v^*$ depends on the inner product.
- For $v \in V$ and $a \in V^*$, the tensor product $v \otimes a : V \to V$ is the linear map given by
  $$(v \otimes a)(w) = a(w)\, v.$$
- In particular,
  $$(v \otimes w^*)(u) = \langle u, w \rangle\, v.$$

Recall that we wish to find extremals of
$$J[\varphi] = \operatorname{Ave} |\langle u, \varphi \rangle|^2 - \lambda\left(\|\varphi\|^2 - 1\right).$$

- Rewrite the quantity $|\langle u, \varphi \rangle|^2$ as
  $$|\langle u, \varphi \rangle|^2 = \langle u, \varphi \rangle \langle \varphi, u \rangle = \langle \langle \varphi, u \rangle u, \varphi \rangle = \langle (u \otimes u^*)\varphi, \varphi \rangle.$$
- Define $R = \operatorname{Ave}(u \otimes u^*)$, and note that $R$ is self-adjoint.
- The cost function may then be written
  $$J[\varphi] = \langle R\varphi, \varphi \rangle - \lambda\left(\|\varphi\|^2 - 1\right).$$
- A necessary condition for an extremal is that the first variation vanish:
  $$\left.\frac{d}{d\varepsilon} J[\varphi + \varepsilon\psi]\right|_{\varepsilon=0} = 0, \qquad \text{for all } \psi \in V.$$
- We have
  $$\left.\frac{d}{d\varepsilon} J[\varphi + \varepsilon\psi]\right|_{\varepsilon=0} = \langle R\psi, \varphi \rangle + \langle R\varphi, \psi \rangle - \lambda\left(\langle \psi, \varphi \rangle + \langle \varphi, \psi \rangle\right) = 2\operatorname{Re}\langle R\varphi - \lambda\varphi, \psi \rangle.$$
- If $\varphi$ is an extremal, this quantity must vanish for all variations $\psi$, so
  $$R\varphi = \lambda\varphi.$$
- In addition, taking an inner product with $\varphi$:
  $$\lambda = \langle R\varphi, \varphi \rangle = \operatorname{Ave}|\langle u, \varphi \rangle|^2.$$

### Example: finite dimensions

- Let $V = \mathbb{R}^p$, with the standard inner product $\langle x, y \rangle = x^T y$.
- Suppose the data is given as a collection of $m$ linearly independent vectors $u_k \in \mathbb{R}^p$, $k = 1, \ldots, m$.
- For $x, y \in \mathbb{R}^p$ we have $x \otimes y^* = x y^T$, the usual dyadic product, so the operator $R : \mathbb{R}^p \to \mathbb{R}^p$ is the matrix given by
  $$R = \operatorname{Ave}(u \otimes u^*) = \frac{1}{m} \sum_{k=1}^m u_k u_k^T,$$
  a $p \times p$ real, symmetric matrix.
- Our equation $R\varphi = \lambda\varphi$ is then a standard eigenvalue problem on $\mathbb{R}^p$.

### Example: scalar-valued functions

- Let $V = L^2(0, 1)$, the set of complex-valued, square-integrable functions on the interval $(0, 1)$, with the inner product
  $$\langle u, v \rangle = \int_0^1 \overline{v(x)}\, u(x)\, dx.$$
- For $u, v, \varphi \in L^2(0, 1)$,
  $$\big((u \otimes v^*)\varphi\big)(x) = \langle \varphi, v \rangle\, u(x) = \int_0^1 u(x)\, \overline{v(y)}\, \varphi(y)\, dy.$$
- The POD eigenvalue problem then becomes
  $$\int_0^1 \operatorname{Ave}\big[u(x, t)\,\overline{u(y, t)}\big]\, \varphi(y)\, dy = \lambda\, \varphi(x),$$
  a Fredholm integral equation with kernel $K(x, y) = \operatorname{Ave}\big[u(x)\,\overline{u(y)}\big]$.

### Example: vector-valued functions

- Let $V = C(\Omega, \mathbb{C}^3)$, the space of continuous functions from some spatial domain $\Omega \subset \mathbb{R}^2$ to the space $\mathbb{C}^3$ of complex 3-vectors. Define an inner product on $V$ by
  $$\langle u, v \rangle = \int_\Omega \bar{v}^T(x)\, A\, u(x)\, dx,$$
  where $A \in \mathbb{C}^{3 \times 3}$ is a positive-definite, Hermitian matrix.
- Then
  $$\big((u \otimes v^*)\varphi\big)(x) = \langle \varphi, v \rangle\, u(x) = u(x) \int_\Omega \bar{v}^T(y)\, A\, \varphi(y)\, dy.$$
- The POD eigenvalue problem then becomes
  $$\int_\Omega \operatorname{Ave}\big[u(x, t)\,\bar{u}^T(y, t)\big]\, A\, \varphi(y)\, dy = \lambda\, \varphi(x).$$

### Properties of POD modes

- The energy captured by each POD mode is given by the corresponding eigenvalue:
  $$\lambda_k = \operatorname{Ave}\|P_{\varphi_k} u\|^2 = \operatorname{Ave}\big\langle \langle u, \varphi_k \rangle \varphi_k,\; \langle u, \varphi_k \rangle \varphi_k \big\rangle = \operatorname{Ave}|\langle u, \varphi_k \rangle|^2.$$
- Each POD mode is contained within the span of the snapshots used to compute the modes. Computation of POD modes can often be sped up dramatically by representing the modes as
  $$\varphi = \sum_{k=1}^m c_k u_k$$
  and solving for the $c_k$. This is called the *method of snapshots*.
- Inherited properties: it follows that if every snapshot $u_k$ satisfies a particular linear constraint, say $L u_k = 0$, then the POD modes will each satisfy that constraint as well: $L\varphi = 0$. For instance, if each snapshot is divergence-free, then the POD modes will be divergence-free as well.
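The method of snapshots just described can be sketched in a few lines of numpy (random made-up snapshot data; the variable names are mine, not from the lecture). One solves a small $m \times m$ eigenvalue problem and recovers each mode as a linear combination $\varphi = \sum_k c_k u_k$ of the snapshots:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 10                      # state dimension >> number of snapshots
X = rng.standard_normal((n, m))      # data matrix, one snapshot per column

# Method of snapshots: eigenvectors of the small m x m matrix X^T X ...
lam, C = np.linalg.eigh(X.T @ X)
lam, C = lam[::-1], C[:, ::-1]       # sort eigenvalues descending

# ... give the POD modes as linear combinations of snapshots, phi = X c,
# normalized to unit norm (||X c||^2 = c^T X^T X c = lam for unit c).
Phi = X @ C / np.sqrt(lam)

# These solve the large n x n eigenproblem (X X^T) phi = lam * phi, without
# ever forming the n x n matrix.  (The 1/m factor in R only rescales lam.)
resid = np.linalg.norm(X @ (X.T @ Phi[:, 0]) - lam[0] * Phi[:, 0])
```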
### Computing modes: the method of snapshots

- Consider the finite-dimensional case, where data is given as snapshots $u_k \in \mathbb{R}^n$, $k = 1, \ldots, m$.
- Construct the data matrix
  $$X = \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix} \qquad (n \text{ rows}, m \text{ columns}).$$
- POD modes are eigenvectors of $R = \frac{1}{m} X X^T$, an $n \times n$ matrix.
- Suppose $m < n$. Since POD modes are linear combinations of snapshots, we can write $\varphi = Xc$ for $c \in \mathbb{R}^m$. Then we have
  $$X X^T \varphi = \lambda \varphi \iff X X^T X c = \lambda X c.$$
- It suffices to solve $X^T X c = \lambda c$, an $m \times m$ eigenvalue problem. When the number of snapshots is much smaller than the dimension of the state, this method is much more efficient.
- Note that POD modes are also left singular vectors of $X$.

### Choice of inner product

- Reduced-order models can behave unpredictably: the projection can even change the stability type of equilibria and periodic orbits (Rempfer, *Theoret. Comput. Fluid Dynamics*, 2000).
- Simple example: consider the system
  $$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 3 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$
- Projection onto the $x_1$ axis is $\dot{x}_1 = x_1$: unstable.
- One can at least fix this simple problem by changing the inner product used for the projection (equivalent to a non-orthogonal projection).

### Energy-based inner products imply stable models

- Consider a system with a stable equilibrium point at the origin:
  $$\dot{x} = f(x), \qquad f(0) = 0, \qquad x \in \mathbb{R}^n.$$
- Consider an inner product whose induced norm is a Lyapunov function ("energy-based"):
  $$\langle x, y \rangle = x^T Q y, \qquad Q > 0,$$
  where $V(x) = x^T Q x$ is a Lyapunov function:
  $$\dot{V} = 2 x^T Q f(x) \le 0, \qquad \forall x \in U.$$
- Reduced-order dynamics are given by an orthogonal projection $P$:
  $$\dot{r} = P f(r), \qquad P^2 = P, \qquad \langle x, Py \rangle = \langle Px, y \rangle \;\Rightarrow\; QP = P^T Q.$$
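The $2 \times 2$ example above, together with the energy-based fix, can be checked numerically. A minimal sketch (the Kronecker-product Lyapunov solve and all variable names are mine, not from the lecture):

```python
import numpy as np

# The 2x2 example from the slides: the full system is stable ...
A = np.array([[1.0, -1.0],
              [3.0, -2.0]])
eigs = np.linalg.eigvals(A)          # eigenvalues (-1 +/- i*sqrt(3))/2

# ... but orthogonal projection (standard inner product) onto the x1 axis
# gives xdot1 = A[0,0] * x1 = x1, which is unstable:
a_std = A[0, 0]

# Fix: use an energy-based inner product <x, y> = x^T Q y, with Q from the
# Lyapunov equation A^T Q + Q A = -I, so that V = x^T Q x decays along
# trajectories.  Solve the small Lyapunov equation via Kronecker products.
n = 2
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
Q = np.linalg.solve(M, -np.eye(n).flatten()).reshape(n, n)

# Galerkin projection onto e1 in the Q-inner product:
#   adot = (e1^T Q A e1) / (e1^T Q e1) * a
e1 = np.array([1.0, 0.0])
a_energy = (e1 @ Q @ A @ e1) / (e1 @ Q @ e1)   # negative: stable
```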
- Then $V$ is a Lyapunov function for the reduced-order system:
  $$\dot{V}(r) = 2 r^T Q P f(r) = 2 (Pr)^T Q f(r) = 2 r^T Q f(r) \le 0.$$

**Conclusion:** if an energy-based inner product is used, the origin is stable for the reduced-order system, regardless of the subspace used for the projection. Here "energy-based" means that the induced norm is a Lyapunov function (does not increase in time).

### Example: cavity flows

#### Cavity flow physics

- Consider the compressible flow past a cavity at Mach number $M = 0.6$.
- Often, oscillations occur:
  - The free shear layer amplifies disturbances.
  - Shear-layer disturbances produce acoustic waves at the downstream corner.
  - Acoustic waves propagate upstream and excite more shear-layer disturbances.
- This process gives rise to discrete resonant frequencies.
- Movie: cavity, $L/D = 2$, $M = 0.6$, $Re = 60$.

#### POD modes and low-order models

- Simulation of a flow past a cavity (movie).
- Full simulation: 400,000 gridpoints $\times$ 4 flow variables $=$ 1,600,000 states.
- Reduced-order model: projection onto 4 POD modes $=$ 4 states.
- Note: the flow has traveling waves, and the resulting POD modes come in pairs, like sine and cosine.
- General theorem: if the system has a translation invariance that gives rise to traveling waves, the optimal modes are Fourier modes.
- In the next lecture, we look at other ways to treat traveling waves.

#### Limitations

- A four-mode model is not bad, but can we do better by increasing the number of modes?
- Answer: sometimes, but beware. POD models are often very fragile, and can break when more modes are added.
## Lecture 2 — 12 November 2008

Clancy Rowley, Department of Mechanical and Aerospace Engineering, Princeton University. Presented at UC Santa Barbara.

### Outline

1. Approximate balanced truncation
   - Approximate Gramians from linearized and adjoint simulations
   - Computing the balancing transformation
   - Projection using direct and adjoint modes
   - Output projection for systems with a large number of outputs
2. Connections between POD and balanced truncation
   - POD modes are the most controllable states
   - Balanced truncation is POD with a non-standard inner product
3. Linearized channel flow
   - Single wavenumber: comparison with exact balancing
   - Multiple wavenumbers: comparison of POD and balanced truncation

### Setting: large linear systems

- Consider stable, linear input-output systems of the form
  $$\dot{x} = Ax + Bu, \qquad x(0) = x_0,$$
  $$y = Cx + Du,$$
  where $u$ is an input (e.g., an actuator or external disturbance) and $y$ is an output (e.g., a sensed quantity).
- Next lecture: unstable systems.

**Goal:** compute a reduced-order model using balanced truncation, in a way that is computationally tractable even when the dimension of the state $x$ is very large ($n \sim 10^5$).

### Balanced truncation

Recall what is involved in balanced truncation:

1. Compute the controllability and observability Gramians:
   $$W_c = \int_0^\infty e^{tA} B B^* e^{tA^*}\, dt, \qquad A W_c + W_c A^* + B B^* = 0,$$
   $$W_o = \int_0^\infty e^{tA^*} C^* C\, e^{tA}\, dt, \qquad A^* W_o + W_o A + C^* C = 0.$$
2. Find a transformation $T$ such that in the new coordinates $x = Tz$, the Gramians are equal and diagonal.
3. Truncate for a model of order $r$: set $z_2 = 0$ (where $z = (z_1, z_2)$ and $z_1$ collects the first $r$ coordinates), or *residualize*, setting its time derivative to zero.

- Problem: if the state dimension is large ($n \sim 10^5$), we cannot even store the Gramians or the transformation $T$, much less compute them. For instance, for $n = 10^5$, single-precision storage of $T$ alone requires 40 gigabytes.
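For a small system, the three steps above can be sketched directly in numpy. This is a minimal, assumed illustration (made-up $A$, $B$, $C$; the Kronecker Lyapunov solve and the Cholesky-plus-eigendecomposition construction of $T$ are standard choices, not necessarily the lecture's):

```python
import numpy as np

def lyap(A, P):
    # Solve A X + X A^T = -P via Kronecker products (fine for small n).
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(M, -P.flatten()).reshape(n, n)

# A small made-up stable single-input, single-output system.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Step 1: Gramians, from A Wc + Wc A^T + B B^T = 0, A^T Wo + Wo A + C^T C = 0.
Wc = lyap(A, B @ B.T)
Wo = lyap(A.T, C.T @ C)

# Step 2: balancing transformation x = T z.  Factor Wc = L L^T, then
# diagonalize L^T Wo L = U diag(sigma^2) U^T; T = L U diag(sigma^{-1/2}).
L = np.linalg.cholesky(Wc)
lam, U = np.linalg.eigh(L.T @ Wo @ L)
lam, U = lam[::-1], U[:, ::-1]            # descending order
sigma = np.sqrt(lam)                      # Hankel singular values
T = L @ U @ np.diag(sigma**-0.5)
Tinv = np.diag(sigma**0.5) @ U.T @ np.linalg.inv(L)

# In the new coordinates both Gramians equal diag(sigma); step 3 would keep
# only the first r rows/columns of Tinv @ A @ T, Tinv @ B, and C @ T.
```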
### Approximate controllability Gramian

- Main idea (Lall et al., 2002): instead of solving the Lyapunov equation
  $$A W_c + W_c A^* + B B^* = 0,$$
  approximate the Gramian by evaluating the integral via quadrature:
  $$W_c = \int_0^\infty e^{tA} B B^* e^{tA^*}\, dt.$$
- Suppose we have a single input; then $B$ is a single column.
- Consider the problem
  $$\dot{x} = Ax, \qquad x(0) = B, \qquad (1)$$
  which has solution $x(t) = e^{tA} B$. Note that if we have a numerical solver for $\dot{x} = Ax$, then we may compute this solution numerically, by starting with the initial condition $x(0) = B$.
- The Gramian may then be written
  $$W_c = \int_0^\infty x(t)\, x(t)^*\, dt. \qquad (2)$$
- If our numerical solution for $x(t)$ is given as snapshots $x_1, x_2, \ldots, x_m$, where $x_k = x(t_k)$, then we may approximate this integral via quadrature:
  $$W_c \approx \widetilde{W}_c = \sum_{k=1}^m x_k x_k^*\, \delta_k,$$
  where the $\delta_k$ are quadrature coefficients. For instance, if the snapshots are equally spaced in time, with $t_k - t_{k-1} = \Delta t$, then $\delta_k = \Delta t$.
- Construct the data matrix
  $$X = \begin{bmatrix} \sqrt{\delta_1}\, x_1 & \sqrt{\delta_2}\, x_2 & \cdots & \sqrt{\delta_m}\, x_m \end{bmatrix}.$$
- Then the approximate Gramian may be written $\widetilde{W}_c = X X^*$.

Some remarks:

- If we have more than one input, then we need to do one simulation for each input (one for each column of $B$). Stack all of the resulting snapshots into a data matrix $X$; as in the single-input case, $\widetilde{W}_c = X X^*$.
- Since $A$ is stable, the solution $x(t) = e^{tA} B$ decays to zero, and the infinite-time integral always converges. So in practice, one should compute the numerical solution of $x(t)$ until the transient completely decays.
- For a very large system, the number of snapshots $m$ needed to approximate the integral (2) is much smaller than the number of states $n$.
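The quadrature approximation of (2) is easy to verify on a small example, where the exact Gramian is available from the Lyapunov equation. A sketch with a made-up $2 \times 2$ system (the midpoint-rule spacing and eigendecomposition-based matrix exponential are my choices for the illustration):

```python
import numpy as np

# Empirical controllability Gramian: "simulate" xdot = A x from x(0) = B and
# approximate Wc = integral of x x^T dt by quadrature over snapshots.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([1.0, 1.0])

dt, m = 0.01, 2000                        # snapshot spacing and count
t = dt * (np.arange(m) + 0.5)             # midpoint rule, covering [0, 20]

# Snapshots x(t_k) = expm(A t_k) B, computed here by eigendecomposition.
lam, V = np.linalg.eig(A)
w = np.linalg.solve(V, B)
X = np.real(V @ (np.exp(np.outer(lam, t)) * w[:, None]))

# Quadrature: Wc ~ sum_k x_k x_k^T dt = (sqrt(dt) X)(sqrt(dt) X)^T.
Xs = np.sqrt(dt) * X
Wc_approx = Xs @ Xs.T

# Exact Gramian from A Wc + Wc A^T + B B^T = 0 (Kronecker solve, n = 2).
n = 2
M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
Wc_exact = np.linalg.solve(M, -np.outer(B, B).flatten()).reshape(n, n)
```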
- Thus, the rank of the approximate Gramian W̃c is typically less than n, and W̃c = X X* may be viewed as a low-rank approximation of Wc, in which the nearly uncontrollable modes are not included.

Approximate observability Gramian

- The observability Gramian may be computed similarly, from adjoint simulations. Recall

      Wo = ∫₀^∞ e^{tA*} C* C e^{tA} dt.

- Suppose we have a single output (C is a row vector), and consider the system

      ż = A* z,   z(0) = C*,

  with solution z(t) = e^{tA*} C*.
- Then

      Wo = ∫₀^∞ z(t) z(t)* dt.

- Stacking snapshots z_k = z(t_k) into a data matrix Y, we have Wo ≈ W̃o = Y Y*.
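The adjoint construction mirrors the direct one. A sketch on the same kind of made-up test system (A and C below are illustrative assumptions): integrate ż = A*z from z(0) = C*, stack weighted snapshots into Y, and compare W̃o = Y Y* against the exact Lyapunov solution:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Hypothetical stable system with a single output -- illustrative only.
A = np.array([[-1.0,  2.0],
              [ 0.0, -2.0]])
C = np.array([[1.0, 0.5]])        # one output => one adjoint simulation

dt, m = 0.01, 2000
Phi_adj = expm(A.T * dt)          # one-step propagator for zdot = A^T z
zs = [C.T]                        # z(0) = C^T
for _ in range(m - 1):
    zs.append(Phi_adj @ zs[-1])
Y = np.sqrt(dt) * np.hstack(zs)   # adjoint data matrix

Wo_approx = Y @ Y.T
Wo_exact = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Wo + Wo A = -C^T C
```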
Outline

- Approximate balanced truncation
  - Computing the balancing transformation

Computing the balancing transformation

Main idea: instead of transforming the whole coordinate system first and then truncating, transform and truncate in one step.

- Transform x = T z:

      ż = T⁻¹ A T z + T⁻¹ B u
      y = C T z + D u.

- Writing z = (z₁, z₂), with A' = T⁻¹ A T, etc.:

      ż₁ = A'₁₁ z₁ + A'₁₂ z₂ + B'₁ u
      ż₂ = A'₂₁ z₁ + A'₂₂ z₂ + B'₂ u
      y  = C'₁ z₁ + C'₂ z₂ + D u.

- Truncate: set z₂ = 0:

      ż₁ = A'₁₁ z₁ + B'₁ u
      y  = C'₁ z₁ + D u.

Computing the balancing transformation II

- Write T = [T₁  T₂] and T⁻¹ = S = [S₁; S₂]. Then

      A' = T⁻¹ A T = [ S₁ A T₁   S₁ A T₂ ;  S₂ A T₁   S₂ A T₂ ]
      B' = T⁻¹ B   = [ S₁ B ; S₂ B ]
      C' = C T     = [ C T₁   C T₂ ].

- The truncated equations are then

      ż₁ = S₁ A T₁ z₁ + S₁ B u
      y  = C T₁ z₁ + D u.

- Note that these depend only on T₁ and S₁. So we do not need to compute the whole transformation T: all we need is the first few columns T₁, and the first few rows S₁ of the inverse.

Computing the balancing transformation III

- We still need a method for computing T₁ and S₁ that does not involve computing the whole transformation T and its inverse.
- To do this, first compute a singular value decomposition of Y* X, where W̃c = X X* and W̃o = Y Y*:

      Y* X = U Σ V* = [U₁  U₂] [Σ₁  0; 0  0] [V₁*; V₂*] = U₁ Σ₁ V₁*.

- Next, define

      T₁ = X V₁ Σ₁^{-1/2},   S₁ = Σ₁^{-1/2} U₁* Y*.                 (3)
Theorem. Suppose Y* X has rank r = n. Then, with T₁ and S₁ defined above, T₁ is a balancing transformation and S₁ is its inverse. That is,

      S₁ W̃c S₁* = T₁* W̃o T₁ = Σ₁.

Computing the balancing transformation IV

- Our main interest is in large systems, for which the number of snapshots (and thus the rank of W̃c W̃o) is much smaller than n. The following theorem establishes that, in this case, Σ₁ also contains all nonzero Hankel singular values, T₁ contains the first r columns of the balancing transformation, and S₁ contains the first r rows of its inverse.

Computing the balancing transformation V

Theorem. Suppose Y* X has rank r < n, and define T₁ and S₁ as in (3). Then there exist matrices T₂ ∈ R^{n×(n−r)} and S₂ ∈ R^{(n−r)×n} such that, for T = [T₁  T₂] and S = [S₁; S₂], T is invertible with T⁻¹ = S, and

      S W̃c W̃o T = [Σ₁²  0; 0  0],

and furthermore

      S W̃c S* = [Σ₁  0; 0  M₁],   T* W̃o T = [Σ₁  0; 0  M₂].

Computing the balancing transformation VI

Notes:

- Computationally, this involves an SVD of the matrix Y* X, which has dimension (number of adjoint snapshots) × (number of direct snapshots). This is feasible as long as the number of snapshots is not too large (e.g., thousands, not millions).
- Columns of T₁ are linear combinations of the direct snapshots x_k; columns of S₁* are linear combinations of the adjoint snapshots z_k:

      T₁ = X V₁ Σ₁^{-1/2},   S₁ = Σ₁^{-1/2} U₁* Y*.
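The whole construction — direct and adjoint snapshots, SVD of Y*X, and the definitions (3) — can be sketched end to end. The system below is a made-up stable SISO example (illustrative assumption); the final checks verify biorthogonality S₁T₁ = I and that both approximate Gramians are balanced to Σ₁, as the first theorem asserts:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable SISO test system -- illustrative values, not from the lecture.
A = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -2.0,  1.0],
              [ 0.0,  0.0, -3.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

dt, m = 0.02, 500
Phi, Phi_adj = expm(A * dt), expm(A.T * dt)
xs, zs = [B], [C.T]
for _ in range(m - 1):
    xs.append(Phi @ xs[-1])           # direct snapshots of xdot = A x, x(0) = B
    zs.append(Phi_adj @ zs[-1])       # adjoint snapshots of zdot = A^T z, z(0) = C^T
X = np.sqrt(dt) * np.hstack(xs)
Y = np.sqrt(dt) * np.hstack(zs)

# SVD of Y^T X, then T1 = X V1 Sigma1^{-1/2}, S1 = Sigma1^{-1/2} U1^T Y^T
U, s, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)
r = 2                                 # truncation order
U1, s1, V1 = U[:, :r], s[:r], Vt[:r].T
T1 = X @ V1 / np.sqrt(s1)             # first r columns of the balancing transformation
S1 = (U1 / np.sqrt(s1)).T @ Y.T       # first r rows of its inverse

print(np.allclose(S1 @ T1, np.eye(r), atol=1e-6))                   # biorthogonality
print(np.allclose(S1 @ (X @ X.T) @ S1.T, np.diag(s1), atol=1e-6))   # Wc balanced
print(np.allclose(T1.T @ (Y @ Y.T) @ T1, np.diag(s1), atol=1e-6))   # Wo balanced
```

Note that the three checks are algebraic identities of X and Y (they follow directly from the SVD), so they hold regardless of quadrature error in the Gramians themselves.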
Outline

- Approximate balanced truncation
  - Projection using direct and adjoint modes

Projection using direct and adjoint modes

- We can think of the columns of T₁ as direct modes φ_k, and the columns of S₁* as adjoint modes ψ_j:

      T₁ = [ φ₁  …  φ_r ],   S₁* = [ ψ₁  …  ψ_r ].

- Since S₁ T₁ = I, we have

      ⟨ψ_j, φ_k⟩ = ψ_j* φ_k = δ_jk.

- The sets {φ₁, …, φ_r} and {ψ₁, …, ψ_r} are biorthogonal.

Projection using direct and adjoint modes II

- The transformation x = T₁ z is equivalent to

      x = Σ_{k=1}^r z_k φ_k,

  so we may think of this as expanding x in the modes φ_k; the z_k are the coefficients of the modes.
- The transformed, truncated equations are

      ż = S₁ A T₁ z + S₁ B u
      y = C T₁ z.

- Equivalently,

      ż_j = Σ_{k=1}^r ⟨ψ_j, A φ_k⟩ z_k + Σ_{k=1}^p ⟨ψ_j, b_k⟩ u_k
      y   = Σ_{k=1}^r (C φ_k) z_k.

Projection using direct and adjoint modes III

Implications:

- Computing the low-dimensional model is tractable, even for large systems.
- The projection may even be applied to nonlinear systems.
- The reduced-order model is

      ż = A' z + B' u,   y = C' z,

  with

      A'_jk = ⟨ψ_j, A φ_k⟩,   B'_jk = ⟨ψ_j, b_k⟩,   C'_jk = (C φ_k)_j,

  where B = [b₁ … b_p].
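A minimal sketch of these projection formulas on made-up matrices (every name below is a hypothetical stand-in; in balanced POD, Φ and Ψ would be T₁ and S₁* from the snapshot computation). The entrywise inner-product definitions agree with the matrix products S₁ A T₁, S₁ B, and C T₁:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 6, 2, 1
A = rng.standard_normal((n, n))        # hypothetical full-order matrices
B = rng.standard_normal((n, p))
C = rng.standard_normal((1, n))

Phi = rng.standard_normal((n, r))      # columns: direct modes phi_k (stand-ins)
Psi = np.linalg.pinv(Phi).T            # columns: adjoint modes psi_j, so Psi^T Phi = I

# Reduced-order model matrices via the biorthogonal projection.
A_red = Psi.T @ A @ Phi                # A'_jk = <psi_j, A phi_k>
B_red = Psi.T @ B                      # B'_jk = <psi_j, b_k>
C_red = C @ Phi                        # C'_jk = (C phi_k)_j
```

Here Ψ is built from the pseudoinverse only so that the biorthogonality ⟨ψ_j, φ_k⟩ = δ_jk holds for this toy example; it is not how the adjoint modes are obtained in practice.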
Projection of nonlinear equations

- Suppose we have biorthogonal sets of direct and adjoint modes:

      {φ₁, …, φ_r},   {ψ₁, …, ψ_r},   ⟨ψ_j, φ_k⟩ = δ_jk.

- Suppose we have nonlinear dynamics ẋ = f(x, u).
- Expand x in terms of the direct modes φ_j:

      x(t) = Σ_{j=1}^r z_j(t) φ_j.

- The projected equations are then

      ż_k = ⟨ψ_k, f(x, u)⟩.

- This may be viewed as a weighted residual method, in which the trial functions (the φ_j) are different from the test functions (the ψ_j).
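A sketch of this Petrov–Galerkin projection for an autonomous nonlinear system. The modes and the vector field below are made-up illustrative choices (Ψ is again constructed via a pseudoinverse purely so the biorthogonality condition holds):

```python
import numpy as np

# Hypothetical biorthogonal mode sets (r = 2 modes in R^3) -- illustrative only.
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])          # columns: trial (direct) modes phi_j
Psi = np.linalg.pinv(Phi).T           # columns: test (adjoint) modes psi_k; Psi^T Phi = I

def f(x):
    """Hypothetical nonlinear vector field: stable linear part plus a cubic term."""
    A = np.array([[-1.0,  0.5,  0.0],
                  [ 0.0, -2.0,  0.0],
                  [ 0.0,  0.0, -3.0]])
    return A @ x - x**3

def reduced_rhs(z):
    x = Phi @ z                       # reconstruct the state from mode coefficients
    return Psi.T @ f(x)               # zdot_k = <psi_k, f(x)>

zdot = reduced_rhs(np.array([0.1, -0.2]))
```

`reduced_rhs` can then be handed to any ODE integrator to evolve the r-dimensional model.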
Outline

- Approximate balanced truncation
  - Output projection for systems with a large number of outputs

Output projection for systems with a large number of outputs

- Recall that, in order to compute the controllability Gramian, we require one transient simulation for each input; and for the observability Gramian, one adjoint simulation for each output.
- If the number of inputs or outputs is large (e.g., if we are interested in modeling the entire state x), then this is not feasible.
- One solution: instead of the system

      ẋ = A x + B u
      y = C x,                                                    (4)

  consider the system

      ẋ = A x + B u
      y = P C x,                                                  (5)

  where P is an orthogonal projection with rank r.
- We wish to find a projection P such that the projected system (5) is as close as possible to the original system, in some norm.
Output projection II

- Suppose C is a q × n matrix; that is, the number of outputs is q.
- To compute the observability Gramian from snapshots, we need one adjoint simulation for each column of C*: q simulations.
- Write P = Θ Θ*, where Θ is a q × r matrix with Θ* Θ = I (this can always be done for an orthogonal projection).
- The observability Gramian of the projected system is then

      Wo = ∫₀^∞ e^{tA*} C* Θ Θ* C e^{tA} dt.

- To approximate this Wo by snapshots, we then need to run one adjoint simulation for each column of C* Θ: r simulations.

Output projection III

- What is the optimal choice of Θ?
- One choice: minimize the 2-norm of the error between the original and projected systems. If G(t) is the matrix impulse response, minimize

      ∫₀^∞ ‖G(t) − P G(t)‖² dt.

- Solution: P is a projection onto the POD modes of the impulse response G(t).
- In fact, if we write P = Θ Θ*, the columns of Θ are the POD modes.
- In practice, we already need to compute this impulse-response data in order to compute the controllability Gramian, so minimal extra computation is required to compute these POD modes.
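A sketch of constructing the output projection from impulse-response output snapshots. Random data stands in for G(t_k) here (an illustrative assumption; in practice the columns would be outputs y(t_k) = C x(t_k) from the same simulations used for the controllability Gramian):

```python
import numpy as np

# Stand-in for impulse-response output snapshots: columns play the role of
# y(t_k) = C x(t_k). Random data is illustrative only.
rng = np.random.default_rng(0)
q, m = 10, 50                          # q outputs, m snapshots
G_snaps = rng.standard_normal((q, m))

r = 3                                  # rank of the output projection
Theta, _, _ = np.linalg.svd(G_snaps, full_matrices=False)
Theta = Theta[:, :r]                   # POD modes of the output snapshots
P = Theta @ Theta.T                    # rank-r orthogonal projection on output space
```

With Θ in hand, the r adjoint simulations are initialized with the columns of C*Θ instead of the q columns of C*.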
Outline

- Connections between POD and balanced truncation
  - POD modes are the most controllable modes

POD modes are the most controllable modes

- A standard way of computing POD modes for the linear system ẋ = A x + B u would be to gather snapshots from impulse-response simulations (e.g., one simulation for each input) and compute the POD modes of this dataset.
- Using the notation from above, these snapshots are the columns of the data matrix X.
- The resulting POD modes are the dominant eigenvectors of R = X X*.
- But X X* = W̃c, the approximation of the controllability Gramian. Thus the standard POD modes are the most controllable modes. However, they do not take observability into consideration.
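The identification of POD modes with dominant eigenvectors of R = X X* is easy to check numerically; the sketch below (random stand-in snapshot data, illustrative only) computes them both ways:

```python
import numpy as np

# Stand-in snapshot matrix (columns are snapshots); random data for illustration.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 40))

# POD modes two equivalent ways: left singular vectors of X,
# or eigenvectors of R = X X^T (the approximate controllability Gramian).
U, s, _ = np.linalg.svd(X, full_matrices=False)
evals, evecs = np.linalg.eigh(X @ X.T)
order = np.argsort(evals)[::-1]        # eigh returns ascending order; sort descending
evals, evecs = evals[order], evecs[:, order]
```

The eigenvalues of R are the squared singular values of X, and the leading eigenvectors span the same subspace as the leading left singular vectors.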
Outline

- Connections between POD and balanced truncation
  - Balanced truncation is POD with a non-standard inner product

Balanced truncation is POD with a non-standard inner product

- Define an inner product on the state space by

      ⟨x₁, x₂⟩_W̃o = x₁* W̃o x₂,

  where W̃o = Y Y* is our approximation of the observability Gramian.
- The POD modes of a dataset X with respect to this inner product are the eigenvectors of R = X X* W̃o (see Lecture 1).
- These are thus the balancing modes (normalized differently): the balancing modes are eigenvectors of W̃c W̃o. Furthermore, the eigenvalues of R are the squares of the Hankel singular values.
- More details: see Rowley (2005).
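The claim that the eigenvalues of R = X X* W̃o are the squared Hankel singular values can be checked directly: the nonzero eigenvalues of X X* Y Y* coincide with the squared singular values of Y* X. Random stand-ins for the snapshot matrices (illustrative assumption) suffice to demonstrate this:

```python
import numpy as np

# Random stand-ins for direct (X) and adjoint (Y) snapshot matrices -- illustrative.
rng = np.random.default_rng(2)
n, m = 5, 20
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

Wo = Y @ Y.T                            # approximate observability Gramian
R = (X @ X.T) @ Wo                      # POD operator in the Wo-weighted inner product
evals = np.sort(np.linalg.eigvals(R).real)[::-1]

# Hankel singular values: singular values of Y^T X (method of snapshots)
sv = np.linalg.svd(Y.T @ X, compute_uv=False)
```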
Outline

- Linearized channel flow
  - Single wavenumber: comparison with exact balancing

Single wavenumber: comparison with exact balancing

[Figure: plane channel geometry]

- Consider the linearized Navier–Stokes equations in a plane channel.
- Consider perturbations of the form

      q(x, y, t) = q̂(y, t) e^{iαx}.

- The system can be analyzed one wavenumber at a time; this effectively reduces it to a 1D system, for which exact balanced truncation is tractable.
- The system is stable up to Re = 5772, but we get large transient growth, which we wish to capture in a reduced-order model.

Single-wavenumber input and output

- The B matrix for this example is a single disturbance,

      B = b(y) e^{iαx},

  where b(y) is the optimal disturbance: the one that gives the highest transient energy growth for this wavenumber.
- The output is the entire state.
