# ANALYSIS OF LIN SYS E C E 801

Clemson



These 18-page class notes were uploaded by Eloy Ferry on Saturday, September 26, 2015. The notes are for E C E 801 at Clemson University, taught by Staff in Fall. Since upload they have received 53 views. For similar materials see /class/214315/e-c-e-801-clemson-university in ELECTRICAL AND COMPUTER ENGINEERING at Clemson University.


Date Created: 09/26/15

# Adaptive Camera Calibration

Hariprasad Kannan (hkannan@clemson.edu)

## 1 Objective

Obtain the constant internal and external camera calibration parameters using an adaptive estimator. We do this by moving a feature point in front of a camera (or by moving a camera about a fixed feature point) and using the video measurements to update the estimator.

## 2 Notation

We use the convention of [5]: a vector is specified as $\bar{x}^{a}_{b/c}$, which signifies the vector of frame $b$ relative to frame $c$, expressed in frame $a$. A rotation matrix is specified as $R^{t}_{f} \in SO(3)$, which transforms coordinates defined in frame $f$ to frame $t$. For a variable $v$, $\hat{v}$ denotes its estimated value and $\tilde{v} = v - \hat{v}$ gives the estimation error.

## 3 The Experimental Setup

Refer to Figure 1. The frames fixed to the robot's base, the robot's end effector, and the camera are denoted $B$, $E$, and $C$, respectively.

*Figure 1: The two types of setups (fixed-camera system and moving-camera system observing a feature point F).*

The fixed-camera system has a stationary camera looking at a feature on the end effector, and the robot is moved about to get enough images of the feature at various positions and orientations. The current location of the feature with respect to $B$ is always known: since the robot link lengths are known and the current joint angles can be measured, it is reasonable to assume that the end-effector feature can be located.

In the moving-camera case, the feature is stationary and the camera is moved about to record the images. We assume that the feature is located at a known distance from $B$; this too is a reasonable assumption. Suppose we want to calibrate a camera fixed to a robotic arm of a space station. When the camera needs calibration, all it has to do is turn back toward the space station and record images of some feature on it, such as a corner. The location of this feature is known because we know the dimensions of the space station.

## 4 What is Camera Calibration?

The camera is a mapping between the 3D world (object space) and a 2D image. Camera models are matrices with particular properties that represent this mapping. The simplest camera model is the pinhole camera model; refer to Figure 2. The following descriptions are for the fixed-camera case, but it is straightforward to write down the moving-camera equations. Define

$$
\bar{x} = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}, \qquad
\bar{x}_c = \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad
y = \begin{bmatrix} u \\ v \end{bmatrix}, \tag{1}
$$

where $\bar{x}(t) \in \mathbb{R}^3$ and $\bar{x}_c(t) \in \mathbb{R}^3$ are the Euclidean coordinates of the feature on the robot's end effector (i.e., the object that is being captured) with respect to $B$ and $C$, respectively, and $y(t) \in \mathbb{R}^2$ is the image coordinates of the feature.

*Figure 2: The pinhole camera model (image plane and optical axis).*

It is seen that a feature that is bottom-left with respect to the camera's point of view is captured top-right on the image plane. By observing the similar triangles in Figure 2, we get the following expressions for the coordinates on the sensor plane:

$$
x_s = -f\,\frac{X_c}{Z_c}, \qquad y_s = -f\,\frac{Y_c}{Z_c}, \tag{2}
$$

where $f$ is the focal length. The image sensor has some nonidealities: it is a parallelogram rather than a rectangle, and each pixel coordinate is scaled differently about each axis. From Figure 3 we get the following relationship between sensor coordinates and pixel coordinates:

$$
\begin{bmatrix} u \\ v \end{bmatrix} =
\begin{bmatrix} k_u & -k_u \cot\theta \\ 0 & \dfrac{k_v}{\sin\theta} \end{bmatrix}
\begin{bmatrix} x_s \\ y_s \end{bmatrix} +
\begin{bmatrix} u_0 \\ v_0 \end{bmatrix}. \tag{3}
$$

*Figure 3: The usual shape of the sensor.*

The origin of the camera is fixed to the image sensor at the pixel coordinates $(u_0, v_0)$. Hence the mapping of $\bar{x}_c(t)$ to $y(t)$ has the following form:

$$
Z_c \begin{bmatrix} y \\ 1 \end{bmatrix} = \Omega\,\bar{x}_c, \qquad
\Omega = \begin{bmatrix}
-f k_u & f k_u \cot\theta & u_0 \\
0 & -\dfrac{f k_v}{\sin\theta} & v_0 \\
0 & 0 & 1
\end{bmatrix}, \tag{4}
$$

where $\Omega \in \mathbb{R}^{3\times3}$ denotes the intrinsic calibration matrix.

We would also like to know the position and orientation of the camera frame $C$ with respect to the robot base frame $B$. Frame $B$ is a convenient, known, fixed world frame in our setup, so if we can get the transformation to $C$ with respect to $B$, we know where the camera is in the world. From Figure 1, $\bar{x}_c(t)$ is related to $\bar{x}(t)$ as follows:

$$
\begin{bmatrix} \bar{x}_c \\ 1 \end{bmatrix} = T \begin{bmatrix} \bar{x} \\ 1 \end{bmatrix}, \qquad
T = \begin{bmatrix} R^C_B & P^C_{B/C} \\ 0_{1\times3} & 1 \end{bmatrix}, \tag{5}
$$

where $R^C_B \in SO(3)$ is the rotation matrix transforming coordinates from frame $B$ to frame $C$, $P^C_{B/C} \in \mathbb{R}^3$ is the position of frame $B$ with respect to frame $C$ expressed in frame $C$, and $T \in \mathbb{R}^{4\times4}$ denotes the extrinsic calibration matrix, a homogeneous transformation to the camera frame from a fixed world frame. Read [9] to refresh your knowledge of these concepts.

Combining equations (4) and (5), we get

$$
y = \frac{1}{Z_c}\,A \left( R^C_B\,\bar{x} + P^C_{B/C} \right), \qquad
Z_c = r_3^T\,\bar{x} + P_3, \tag{6}
$$

where $A \in \mathbb{R}^{2\times3}$ is formed by the first two rows of $\Omega$, $r_3^T \in \mathbb{R}^{1\times3}$ is the last row of $R^C_B$, and $P_3$ is the last element of $P^C_{B/C}$. This form of $Z_c$ follows directly from (5).

To calibrate a camera is to find the individual parameters of the matrices $\Omega$ and $T$, i.e., to find $f$, $k_u$, $k_v$, $\theta$, $u_0$, $v_0$, $R^C_B$, and $P^C_{B/C}$. In the moving-camera case, finding the location of the camera means finding it with respect to the end effector (we know, or can measure, all the other distances and orientations in the system), i.e., finding the rotation and translation between frames $E$ and $C$. [1], [3], [4], and [7] are good references; have a look.

## 5 Estimator Design

Consider the following system:

$$
\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} =
\begin{bmatrix} c & ef \\ 0 & cd \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \tag{7}
$$

where $a_1(t), a_2(t), b_1(t), b_2(t) \in \mathbb{R}$ are measurable, known variables and $c, d, e, f \in \mathbb{R}$ are unknown constant system parameters. By multiplying out (7), we get

$$
a_1 = c\,b_1 + ef\,b_2, \qquad a_2 = cd\,b_2. \tag{8}
$$

The right-hand side of (8) can be rewritten in linearly parameterized form as follows:

$$
\begin{bmatrix} c\,b_1 + ef\,b_2 \\ cd\,b_2 \end{bmatrix} =
\begin{bmatrix} b_1 & b_2 & 0 \\ 0 & 0 & b_2 \end{bmatrix}
\begin{bmatrix} c \\ ef \\ cd \end{bmatrix} = W\,\Theta, \tag{9}
$$

where $W(t) \in \mathbb{R}^{2\times3}$ contains only known elements and $\Theta \in \mathbb{R}^3$ contains all the unknown elements. Such a separation of variables is not possible for all systems. By taking sufficient measurements of the known variables, a good estimate of $\Theta$ can be obtained.

Observe that some of the unknowns appear multiplied together in $\Theta$; that is acceptable in many cases, because apart from the algebraic relationships in (8) there will be other conditions on the system that impose more constraints on these variables, and we can end up finding all the variables separately. For example, in the above system $c$ is estimated straight away, from which $d$ can be estimated because we have an estimate of $cd$. To get $e$ and $f$ separately, one needs another algebraic relationship between $e$ and $f$; usually there will be such relationships. In our project, the rotation matrix $R$ has properties such as a determinant equal to 1 and an inverse equal to its transpose, and we will use these properties to get the individual parameters.

In our system, $y(t)$ in (6) can be written in linearly parameterized form as follows:

$$
y = \frac{W\,\Theta_1}{\Psi\,\Theta_2}, \tag{10}
$$

where $W(t)$ and $\Psi(t)$ are built from the measurable $\bar{x}(t)$, and $\Theta_1 \in \mathbb{R}^{p_1}$, $\Theta_2 \in \mathbb{R}^{p_2}$ collect products of the unknown calibration parameters; from (6), one may take $\Psi = [\bar{x}^T \;\; 1]$ and $\Theta_2 = [r_3^T \;\; P_3]^T$. Note that $\Psi\,\Theta_2 = Z_c$ is always greater than zero, since the camera is always in front of the feature. Hence there is a positive constant $\varepsilon \in \mathbb{R}$ such that the following condition always holds:

$$
\Psi\,\Theta_2 > \varepsilon. \tag{11}
$$

As said before, each of the individual camera calibration parameters can be obtained using the relationships in $\Theta_2$ and $\Theta_1$ and the properties of the system. To find accurate $\hat{\Theta}_2$ and $\hat{\Theta}_1$, we do the following mathematical manipulations. Define the estimated output

$$
\hat{y} = \frac{W\,\hat{\Theta}_1}{\Psi\,\hat{\Theta}_2}. \tag{12}
$$

Multiplying (10) by $\Psi\,\Theta_2$ and (12) by $\Psi\,\hat{\Theta}_2$, subtracting, and adding and subtracting $\hat{y}\,\Psi\,\Theta_2$ gives $(\Psi\,\Theta_2)(y - \hat{y}) = W\,\tilde{\Theta}_1 - \hat{y}\,\Psi\,\tilde{\Theta}_2$, i.e.,

$$
\tilde{y} \triangleq y - \hat{y} = \frac{1}{\Psi\,\Theta_2}\,\bar{W}\,\tilde{\Theta}, \tag{13}
$$

where $\tilde{\Theta} = [\tilde{\Theta}_1^T \;\; \tilde{\Theta}_2^T]^T$ and $\bar{W} = [W \;\; -\hat{y}\,\Psi]$. If $\tilde{y}$ goes to zero after enough updates, then $\hat{\Theta}$ is close to the true value. Here are the computational steps to achieve that objective:

1. Initialize $\hat{\Theta}_1(t)$ and $\hat{\Theta}_2(t)$ to random numbers.
2. Measure $y(t)$ and $\bar{x}(t)$: $y(t)$ is obtained from the camera, and $\bar{x}$ is measurable because it is just the location of a point on the robot's end effector with respect to its base, which, as said earlier, can be obtained.
3. Calculate $W(t)$ and $\Psi(t)$.
4. Calculate $\hat{y}$ using (12). Here we have to make sure $\Psi\,\hat{\Theta}_2 > 0$; this is achieved using the projection algorithm of Section 5.1. Hence obtain $\tilde{y}(t) = y(t) - \hat{y}(t)$.
5. Calculate $\bar{W}(t)$.
6. Update the estimation gain $\Gamma(t)$ as follows:
$$
\frac{d}{dt}\left(\Gamma^{-1}\right) = \bar{W}^T\,\bar{W}. \tag{14}
$$
$\Gamma$ is a square matrix whose side equals the length of $\hat{\Theta}$. By selecting suitable nonzero initial values, $\Gamma^{-1}(t)$ is ensured to be positive definite.
7. Obtain $\dot{\hat{\Theta}}(t)$ as follows:
$$
\dot{\hat{\Theta}} = \mathrm{Proj}(\tau), \qquad \tau = \alpha_1\,\Gamma\,\bar{W}^T\,\tilde{y}, \qquad \alpha_1 = \rho, \tag{15}
$$
$$
2\rho \geq 1 + \Psi\,\Theta_2, \tag{16}
$$
where $\rho(X, Y, Z) \in \mathbb{R}$ is a positive function defined such that the condition in (16) always holds.
8. Obtain the updated $\hat{\Theta}(t)$ by integrating (15).
9. Go back to step 2 until $\tilde{y}(t)$ is very small.

### 5.1 Projection Algorithm

We ensure that $\Psi\,\hat{\Theta}_2(t) > 0$ by defining a projection operator on $\dot{\hat{\Theta}}(t)$. To facilitate further development, an auxiliary function is defined and its gradient is also computed as follows:

$$
\mathcal{P}(\hat{\Theta}) \triangleq \varepsilon - \Psi\,\hat{\Theta}_2, \qquad
\nabla\mathcal{P} = \frac{\partial \mathcal{P}}{\partial \hat{\Theta}} = \begin{bmatrix} 0_{p_1} \\ -\Psi^T \end{bmatrix},
$$

where $\varepsilon$ is defined in (11). Two convex sets based on this function are defined as follows:

$$
\mathcal{A} \triangleq \left\{ \hat{\Theta} \in \mathbb{R}^{p_1+p_2} : \mathcal{P}(\hat{\Theta}) \leq 0 \right\}, \qquad
\mathcal{A}_\delta \triangleq \left\{ \hat{\Theta} \in \mathbb{R}^{p_1+p_2} : \mathcal{P}(\hat{\Theta}) \leq \delta \right\},
$$

where $\delta \in \mathbb{R}$ is a positive constant that is very close to zero. Let us denote the boundary of the set $\mathcal{A}$ by $\partial\mathcal{A}$ and its interior by $\mathring{\mathcal{A}}$. Given these definitions, the projection of $\tau$ is defined as follows:

$$
\mathrm{Proj}(\tau) =
\begin{cases}
\tau, & \hat{\Theta} \in \mathring{\mathcal{A}}, \text{ or } \hat{\Theta} \in \mathcal{A}_\delta \setminus \mathring{\mathcal{A}} \text{ and } \nabla\mathcal{P}^T \tau \leq 0, \\[4pt]
\left( I - c(\hat{\Theta})\,\dfrac{\nabla\mathcal{P}\,\nabla\mathcal{P}^T}{\|\nabla\mathcal{P}\|^2} \right) \tau, & \hat{\Theta} \in \mathcal{A}_\delta \setminus \mathring{\mathcal{A}} \text{ and } \nabla\mathcal{P}^T \tau > 0,
\end{cases}
$$

with $c(\hat{\Theta}) = \min\{1, \mathcal{P}(\hat{\Theta})/\delta\}$. It is helpful to note that $c = 0$ on $\partial\mathcal{A}$ and $c = 1$ on $\partial\mathcal{A}_\delta$. In the subsequent stability analysis, the following property of the projection operator is used (refer to [6] for the proof):

$$
-\tilde{\Theta}^T\,\Gamma^{-1}\,\mathrm{Proj}(\tau) \leq -\tilde{\Theta}^T\,\Gamma^{-1}\,\tau, \qquad \forall\,\hat{\Theta} \in \mathcal{A}_\delta,\; \Theta \in \mathcal{A}. \tag{17}
$$

## 6 Stability Analysis

**Theorem 1.** *The update law defined in (15) ensures that $\tilde{\Theta}(t) \to 0$ as $t \to \infty$, provided the following persistent excitation condition [8] is satisfied:*

$$
\gamma_1 I \leq \int_{t_0}^{t_0+T} \bar{W}^T(\tau)\,\bar{W}(\tau)\,d\tau \leq \gamma_2 I, \tag{18}
$$

*where $\gamma_1, \gamma_2 \in \mathbb{R}$ are positive constants and $I \in \mathbb{R}^{n\times n}$ is the $n \times n$ identity matrix.*

**Proof.** We consider the following Lyapunov function:

$$
V = \tilde{\Theta}^T\,\Gamma^{-1}\,\tilde{\Theta}. \tag{19}
$$

After taking the time derivative of (19) and substituting (14) and (15) (note that $\dot{\tilde{\Theta}} = -\dot{\hat{\Theta}}$, since $\Theta$ is constant), we obtain

$$
\dot{V} = -2\,\tilde{\Theta}^T\,\Gamma^{-1}\,\mathrm{Proj}(\tau) + \tilde{\Theta}^T\,\bar{W}^T\,\bar{W}\,\tilde{\Theta}.
$$

Using (17), and then (13) in the form $\bar{W}\,\tilde{\Theta} = (\Psi\,\Theta_2)\,\tilde{y}$, this expression can be upper bounded as follows:

$$
\dot{V} \leq -2\,\tilde{\Theta}^T\,\Gamma^{-1}\,\tau + \|\bar{W}\,\tilde{\Theta}\|^2
= -2\rho\,(\Psi\,\Theta_2)\,\|\tilde{y}\|^2 + (\Psi\,\Theta_2)^2\,\|\tilde{y}\|^2
= -(\Psi\,\Theta_2)\left(2\rho - \Psi\,\Theta_2\right)\|\tilde{y}\|^2.
$$

Based on (16), and then (11), $\dot{V}$ can be further upper bounded as follows:

$$
\dot{V} \leq -(\Psi\,\Theta_2)\,\|\tilde{y}\|^2 \leq -\varepsilon\,\|\tilde{y}\|^2. \tag{20}
$$

From (20) and (19) we can conclude that

$$
\int_0^\infty \varepsilon\,\|\tilde{y}(\tau)\|^2\,d\tau \leq V(0) - V(\infty). \tag{21}
$$

From (21) and the fact that $V(t)$ is non-negative, it can be concluded that $V(t) \leq V(0)$ for any $t$; hence $V(t) \in \mathcal{L}_\infty$, i.e., $V(t)$ is bounded. It can also be concluded from (21) that $\tilde{y}(t) \in \mathcal{L}_2$, i.e., the $\mathcal{L}_2$ norm of $\tilde{y}(t)$ is bounded; hence, from (11) and (13), $\bar{W}(t)\,\tilde{\Theta}(t) \in \mathcal{L}_2$. From (19), $\tilde{\Theta}^T(t)\,\Gamma^{-1}(t)\,\tilde{\Theta}(t) \in \mathcal{L}_\infty$. Since $\Gamma^{-1}(0)$ is positive definite and the persistent excitation condition in (18) is assumed to be satisfied, (14) can be used to show that $\Gamma^{-1}(t)$ is always positive definite; hence $\tilde{\Theta}(t) \in \mathcal{L}_\infty$.

Since $W(t)$ and $\Psi(t)$ are composed of measurable signals that are known to be bounded, $W(t), \Psi(t) \in \mathcal{L}_\infty$. Similarly $\Theta_1, \Theta_2 \in \mathcal{L}_\infty$, since they are composed of bounded physical quantities. Since $\hat{\Theta}(t) = \Theta - \tilde{\Theta}(t)$, $\hat{\Theta}(t) \in \mathcal{L}_\infty$. From (12), $\hat{y}(t) \in \mathcal{L}_\infty$, and from (13), $\tilde{y}(t) \in \mathcal{L}_\infty$; hence from (15), $\dot{\hat{\Theta}}(t) \in \mathcal{L}_\infty$, and therefore $\dot{\tilde{\Theta}}(t) \in \mathcal{L}_\infty$. The derivatives of $W(t)$ and $\Psi(t)$ are composed of rigid-body motion velocities, which are bounded for the motions of our system; hence $\dot{W}(t), \dot{\Psi}(t) \in \mathcal{L}_\infty$, and therefore $\dot{\bar{W}}(t) \in \mathcal{L}_\infty$. Since $\dot{\bar{W}}(t), \dot{\tilde{\Theta}}(t) \in \mathcal{L}_\infty$, it follows that $\frac{d}{dt}\big(\bar{W}(t)\,\tilde{\Theta}(t)\big) \in \mathcal{L}_\infty$. Hence $\bar{W}(t)\,\tilde{\Theta}(t)$ is uniformly continuous, and since $\bar{W}(t)\,\tilde{\Theta}(t) \in \mathcal{L}_2$, we can conclude (by Barbalat's lemma) that $\bar{W}(t)\,\tilde{\Theta}(t) \to 0$ as $t \to \infty$. It can then be shown that if the persistent excitation condition given in (18) is satisfied, $\tilde{\Theta}(t) \to 0$ as $t \to \infty$. ∎

## References

[1] J. Corso, "Camera Calibration," lecture notes from Computer Vision, Fall 2002, available from the author's pages at www.cs.jhu.edu.

[2] W. E. Dixon, A. Behal, D. M. Dawson, and S. Nagarkatti, *Nonlinear Control of Engineering Systems: A Lyapunov-Based Approach*, Birkhäuser, ISBN 081764265X, 2003.

[3] O. Faugeras, *Three-Dimensional Computer Vision*, The MIT Press, ISBN 0262061589, 1993.

[4] D. A. Forsyth and J. Ponce, *Computer Vision: A Modern Approach*, Prentice-Hall, ISBN 0130851981, 2002.

[5] T. I. Fossen, *Marine Control Systems: Guidance, Navigation and Control of Ships, Rigs and Underwater Vehicles*, Marine Cybernetics AS, Norway, ISBN 8292356010, 2002.

[6] M. Krstic, I. Kanellakopoulos, and P. Kokotovic, *Nonlinear and Adaptive Control Design*, John Wiley and Sons, New York, NY, 1995.

[7] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, *An Invitation to 3-D Vision*, Springer-Verlag, ISBN 0387008934, 2003.

[8] J. J. E. Slotine and W. Li, *Applied Nonlinear Control*, Prentice Hall, ISBN 0130408905, 1991.

[9] M. W. Spong and M. Vidyasagar, *Robot Dynamics and Control*, John Wiley and Sons, ISBN 047161243, 1989.
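As a numerical illustration of the intrinsic and extrinsic maps described in Section 4, the sketch below builds an intrinsic matrix and an extrinsic transform for made-up parameter values and projects a world point to pixel coordinates. The helper `project` and every number here are our own choices for illustration, not values from the notes.

```python
import numpy as np

# Hypothetical intrinsic parameters (all values made up for illustration).
f, ku, kv = 0.01, 80000.0, 80000.0    # focal length [m], pixels per meter
theta = np.pi / 2                     # skew angle: pi/2 means no skew
u0, v0 = 320.0, 240.0                 # principal point [pixels]

# Intrinsic calibration matrix Omega, following eq. (4).
Omega = np.array([
    [-f * ku, f * ku / np.tan(theta), u0],
    [0.0,    -f * kv / np.sin(theta), v0],
    [0.0,     0.0,                    1.0],
])

# Extrinsic calibration: camera frame C aligned with the base frame B,
# with B's origin 2 m in front of the camera (also made up).
R = np.eye(3)                   # rotation from frame B to frame C
P = np.array([0.0, 0.0, 2.0])   # position of B w.r.t. C, expressed in C

def project(x_world):
    """Map a feature's coordinates in frame B to pixel coordinates y = (u, v)."""
    x_c = R @ x_world + P       # eq. (5): coordinates in the camera frame
    Z_c = x_c[2]                # depth; must be positive (camera in front)
    uv1 = Omega @ x_c / Z_c     # eq. (4) in homogeneous pixel coordinates
    return uv1[:2]

print(project(np.array([0.0, 0.0, 0.0])))  # feature at the base origin
```

A feature on the optical axis lands at the principal point $(u_0, v_0)$, and the negative focal terms reproduce the image flip noted below Figure 2.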
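The toy system (7)–(9) of Section 5 can be verified numerically: stacking measurements of the known signals and solving a least-squares problem recovers $\Theta = [c,\; ef,\; cd]^T$, after which $d$ follows from the $c$ and $cd$ entries, exactly as the notes describe. The constants and signal model below are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
c, d, e, f = 2.0, 0.5, 3.0, 1.5           # hypothetical unknown constants
theta_true = np.array([c, e * f, c * d])  # Theta = [c, ef, cd]^T from eq. (9)

rows, rhs = [], []
for _ in range(20):                       # take sufficient measurements
    b1, b2 = rng.normal(size=2)           # known, measurable signals
    a1 = c * b1 + e * f * b2              # eq. (8), first component
    a2 = c * d * b2                       # eq. (8), second component
    rows.append(np.array([[b1, b2, 0.0],  # eq. (9): [a1 a2]^T = W Theta
                          [0.0, 0.0, b2]]))
    rhs.append([a1, a2])

W_stack = np.vstack(rows)
a_stack = np.concatenate(rhs)
theta_hat, *_ = np.linalg.lstsq(W_stack, a_stack, rcond=None)

c_hat = theta_hat[0]
d_hat = theta_hat[2] / c_hat              # d recovered from the cd estimate
```

In the noiseless case the estimate is exact; $e$ and $f$ individually remain coupled, which is where the extra structural relationships (e.g., the rotation-matrix properties) come in.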
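The computational steps 1–9 of Section 5 can be exercised in a small discrete-time simulation. Everything below is an illustrative sketch, not the notes' implementation: the dimensions and true parameters are invented, $\alpha_1$ is simply set to 1 rather than chosen via (16), and the smooth projection of Section 5.1 is replaced by a crude clipping step that keeps $\Psi\,\hat{\Theta}_2$ above $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)

theta1 = np.array([1.0, -0.5])   # made-up true numerator parameters
theta2 = np.array([0.8, 2.0])    # made-up true denominator parameters
eps = 0.1                        # lower bound on Psi @ Theta2, eq. (11)
dt = 0.05                        # integration step

th_hat = np.array([0.5, 0.5, 0.5, 1.0])  # step 1: rough initial guess
Gam_inv = np.eye(4)              # Gamma^{-1}(0) chosen positive definite

errs = []
for k in range(2000):
    # steps 2-3: "measure" the known signals W(t) and Psi(t)
    W = rng.normal(size=(1, 2))
    Psi = np.array([abs(rng.normal()) + 0.5, 1.0])
    y = (W @ theta1) / (Psi @ theta2)        # eq. (10), scalar output

    # crude stand-in for the Sec. 5.1 projection: keep Psi @ th_hat2 >= eps
    den = Psi @ th_hat[2:]
    if den < eps:
        th_hat[2:] += (eps - den) / (Psi @ Psi) * Psi
        den = eps

    # step 4: predicted output and output error, eq. (12)
    y_hat = (W @ th_hat[:2]) / den
    y_til = y - y_hat

    # step 5: regressor W_bar = [W, -y_hat * Psi], eq. (13)
    W_bar = np.hstack([W, -y_hat[:, None] * Psi[None, :]])

    # step 6: least-squares gain update, eq. (14)
    Gam_inv += dt * (W_bar.T @ W_bar)

    # step 7 (with alpha_1 = 1): th_hat_dot = Gamma W_bar^T y_til, eq. (15)
    th_hat = th_hat + dt * np.linalg.solve(Gam_inv, W_bar.T @ y_til)

    errs.append(abs(y_til[0]))

early, late = np.mean(errs[:100]), np.mean(errs[-100:])
```

Because (10) is unchanged when $\Theta_1$ and $\Theta_2$ are scaled together, only the output error $\tilde{y}$ is expected to vanish here; recovering the individual parameters additionally requires the persistent-excitation and structural arguments of Section 6.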
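The projection operator of Section 5.1 can be written directly from its case definition. The sketch below acts on the $\hat{\Theta}_2$ block only (which is all the constraint $\mathcal{P}(\hat{\Theta}) = \varepsilon - \Psi\,\hat{\Theta}_2$ involves); the function name `proj` and the test values are ours, and for simplicity the damping factor is capped at 1 even outside $\mathcal{A}_\delta$.

```python
import numpy as np

def proj(tau, th_hat2, Psi, eps, delta):
    """Projection of the update direction tau for the Theta_2 block.

    Constraint set A = {P <= 0} with P = eps - Psi @ th_hat2, i.e. we
    want Psi @ th_hat2 >= eps.  tau passes through unchanged in the
    interior of A, or when it points back into A; otherwise its
    component along the constraint gradient is damped by c in [0, 1].
    """
    P = eps - Psi @ th_hat2              # auxiliary function
    gradP = -Psi                         # gradient of P w.r.t. th_hat2
    if P <= 0.0 or gradP @ tau <= 0.0:   # interior, or moving inward
        return tau
    c = min(1.0, P / delta)              # c = 0 on bdry(A), 1 on bdry(A_delta)
    g = gradP / np.linalg.norm(gradP)
    return tau - c * (g @ tau) * g       # damp the outward component
```

On the boundary of $\mathcal{A}_\delta$ (where $c = 1$) an outward update is projected onto the tangent plane of the constraint, so the estimate cannot be pushed further out.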
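The persistent excitation condition (18) in Section 6 can be checked numerically along a recorded trajectory by approximating the integral with a Riemann sum and inspecting the extreme eigenvalues of the result. The helper `pe_bounds` and the two sample trajectories below are our own illustration.

```python
import numpy as np

def pe_bounds(wbar_samples, dt):
    """Riemann-sum approximation of the integral in eq. (18); returns its
    smallest and largest eigenvalues.  PE holds when the smallest eigenvalue
    stays bounded away from zero over every window of length T."""
    M = sum(w.T @ w for w in wbar_samples) * dt
    vals = np.linalg.eigvalsh(M)         # ascending eigenvalues
    return vals[0], vals[-1]

rng = np.random.default_rng(2)

# Rich trajectory: the regressor direction keeps changing.
rich = [rng.normal(size=(1, 3)) for _ in range(200)]
g1, g2 = pe_bounds(rich, dt=0.01)

# Poor trajectory: the regressor never leaves a single direction,
# so one subspace of parameter space is never excited.
flat = [np.array([[1.0, 0.0, 0.0]]) for _ in range(200)]
f1, f2 = pe_bounds(flat, dt=0.01)
```

For the rich trajectory the lower bound $\gamma_1$ is strictly positive; for the degenerate one the smallest eigenvalue is zero, so (18) fails and the unexcited parameter directions cannot converge.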
