Numerical Linear Algebra

by: Otilia Murray I

Numerical Linear Algebra MAT 229A



About this Document

Class Notes





These 17-page class notes were uploaded by Otilia Murray I on Tuesday, September 8, 2015. They belong to MAT 229A at the University of California, Davis, taught by Staff in Fall. Since their upload, they have received 73 views. For similar materials, see /class/187454/mat-229a-university-of-california-davis in Mathematics (M) at the University of California, Davis.


Date Created: 09/08/15
Chapter 1: Direct Methods for the Solution of Linear Systems

1.1 Introduction

Typical matrix computations:

1. Linear systems: $Ax = b$, $A \in \mathbb{C}^{n\times n}$.
2. Eigenvalue problems: $Ax = \lambda x$, $A \in \mathbb{C}^{n\times n}$.
3. Least-squares problems: $\min_x \|b - Ax\|_2$, $A \in \mathbb{C}^{m\times n}$ (Euclidean norm).
4. Linear programs: $\min c^T x$ subject to $Ax = b$, $x \ge 0$.

Large-scale case: $n$ is large (or $m, n$ are large). For all problems that actually arise in practice, the large matrices exhibit special structure.

Definition. A matrix computation problem is called large-scale if it can be solved only by methods that exploit its special matrix structure (sparsity, structured dense matrices).

1.1.1 Sparsity

Definition. A matrix $A = [a_{jk}] \in \mathbb{C}^{m\times n}$ is said to be sparse if only a small fraction of its entries $a_{jk}$ are nonzero.

Example (sparse matrices):

1. Discretization of elliptic partial differential equations, e.g.
\[ Lu = f \text{ on } \Omega, \qquad Lu = -\Delta u = -\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2}. \]
2. Network problems.
3. Web search, e.g. the Google matrix.

1.1.2 The graph of a sparse matrix

Let $Q = [q_{jk}] \in \mathbb{C}^{n\times n}$. We can associate with $Q$ a directed graph with nodes $N = \{1, 2, 3, \dots, n\}$ and edges $E = \{(j,k) \mid j,k \in N \text{ and } q_{jk} \neq 0\}$.

Example: a $5 \times 5$ matrix $Q$ whose pattern of potentially nonzero entries, each denoted $\times$, determines its graph. (Figure 1.1: $\mathcal{G}(Q)$.)

The Google matrix. View the worldwide web as a graph $G$ with nodes $N = \{1,2,3,\dots,n\}$, where $n$ is the number of websites, and edges $E = \{(j,k) \mid$ website $j$ links to website $k\}$. The corresponding sparse matrix is $Q = [q_{jk}] \in \mathbb{R}^{n\times n}$ with $q_{jk} \neq 0 \iff (j,k) \in E$. Result: the connectivity of the web corresponds to the sparsity structure of $Q$. What should the value of a nonzero $q_{jk}$ be?

Definition. For each $j \in N$, the out-degree $d_j$ of $j$ is the number of edges $(j,k) \in E$.

(Figure 1.2: in-links and out-links of a website $j$.)

For the Google matrix, set
\[ q_{jk} = \begin{cases} 1/d_j & \text{if } (j,k) \in E, \\ 0 & \text{otherwise.} \end{cases} \]
For our earlier example, all rows of the resulting $Q$ except row 4 sum up to 1, and all entries of row 4 are zero: there is no link from website 4 to any other website, and a visitor of website 4 will be stuck there. (Figure 1.3.)

In order to resolve the issue with zero rows, such as row 4 in the above example, the final Google matrix $A = [a_{jk}] \in \mathbb{R}^{n\times n}$ is obtained by setting
\[ a_{jk} = \begin{cases} q_{jk} & \text{if } d_j > 0, \\ 1/n & \text{if } d_j = 0. \end{cases} \]

The Google matrix $A$ is row stochastic:
\[ a_{jk} \ge 0 \quad \text{for all } j,k = 1,2,\dots,n, \qquad \sum_{k=1}^{n} a_{jk} = 1 \quad \text{for all } j = 1,2,\dots,n. \]
That is, $Ae = e$, where $e = [1\; 1\; \cdots\; 1]^T$, and hence $\lambda = 1$ is an eigenvalue of $A$ with eigenvector $e$. Note that $\lambda = 1$ is also an eigenvalue of $A^T$, since the eigenvalues of $A$ and $A^T$ are the same. So there exists an eigenvector $x \neq 0$ such that $A^T x = x$. One can show that, since $A$ is row stochastic, there is a unique eigenvector $x = [x_1\; x_2\; \cdots\; x_n]^T$ such that
\[ A^T x = x, \qquad x \ge 0, \qquad x_1 + x_2 + \cdots + x_n = 1. \]
Here $x_j$ is the stationary probability that a random walker is at node $j$. The entries of $x$ determine the PageRank of the websites.

If $A$ is row stochastic, one can further show that $\lambda_1 = 1$ is the dominant eigenvalue of $A$ (and of $A^T$), i.e., the eigenvalues $\lambda_1 = 1, \lambda_2, \lambda_3, \dots, \lambda_n$ satisfy $|\lambda_j| < 1$ for all $j = 2,3,\dots,n$.

1.1.3 Dense Structured Matrices

Definition. A matrix $T \in \mathbb{C}^{n\times n}$ of the form
\[ T = \begin{bmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & t_{-1} & \ddots & \vdots \\ t_2 & t_1 & t_0 & \ddots & t_{-2} \\ \vdots & \ddots & \ddots & \ddots & t_{-1} \\ t_{n-1} & \cdots & t_2 & t_1 & t_0 \end{bmatrix} \]
is called a Toeplitz matrix.

Remark.
1. $A = [a_{jk}] \in \mathbb{C}^{n\times n}$ is a Toeplitz matrix $\iff a_{jk} = t_{j-k}$ for all $j,k = 1,2,\dots,n$.
2. An $n \times n$ Toeplitz matrix is dense in general, but we only need to store the $2n-1$ numbers $t_{-(n-1)}, \dots, t_{-1}, t_0, t_1, \dots, t_{n-1}$ instead of $n^2$ entries.
3. There are algorithms that exploit the special structure of Toeplitz matrices and, as a result, have a lower complexity than algorithms for general matrices. For example, linear systems with $n \times n$ Toeplitz coefficient matrices can be solved with only $O(n^2)$ flops instead of the $O(n^3)$ flops required in the case of general linear systems.

Example. Let $\{x_j\}$ be a discrete-time stochastic process, with time $j = 1,2,\dots,n$. Let $E$ denote the expectation of the stochastic process; $m_j = E[x_j]$ is the mean at time $j$, and $\sigma_{jk} = E[(x_j - m_j)\overline{(x_k - m_k)}]$ is the covariance, $j,k = 1,2,\dots,n$. The covariance matrix is $A = [\sigma_{jk}]_{j,k=1,2,\dots,n} \in \mathbb{C}^{n\times n}$. The stochastic process is weakly stationary if $\sigma_{jk} = t_{j-k}$ for all $j,k = 1,2,\dots,n$.
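The PageRank construction above is easy to experiment with. The following NumPy sketch (the function names `google_matrix` and `pagerank` are mine, not from the notes) builds the row-stochastic matrix $A$ from a link set, replaces zero rows by the uniform row $1/n$ exactly as described, and approximates the stationary vector with $A^T x = x$ by power iteration:

```python
import numpy as np

def google_matrix(links, n):
    """Build the row-stochastic Google matrix A from a set of links.

    links: set of (j, k) pairs meaning website j links to website k
    (0-based indices). Rows with out-degree 0 are replaced by the
    uniform row 1/n, as in the notes.
    """
    A = np.zeros((n, n))
    for j, k in links:
        A[j, k] = 1.0
    for j in range(n):
        d = A[j].sum()                      # out-degree d_j
        A[j] = A[j] / d if d > 0 else np.full(n, 1.0 / n)
    return A

def pagerank(A, tol=1e-12, maxit=1000):
    """Power iteration on A^T: find x >= 0 with A^T x = x, sum(x) = 1."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(maxit):
        x_new = A.T @ x
        x_new /= x_new.sum()                # keep the probabilities normalized
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

# Tiny example: node 3 has no out-links (a "dangling" row, like row 4 above).
links = {(0, 1), (1, 2), (2, 0), (2, 3)}
A = google_matrix(links, 4)
x = pagerank(A)
```

The iteration converges at the rate $|\lambda_2|$; since $\lambda_1 = 1$ is dominant for this row-stochastic $A$, a few hundred iterations suffice here.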
That is, a stochastic process is weakly stationary if and only if its covariance matrix $A$ is a Toeplitz matrix.

1.2 Sparse Cholesky Factorization

Problem: solve $Ax = b$, where $A \in \mathbb{C}^{n\times n}$ is nonsingular and $b \in \mathbb{C}^n$.

1.2.1 Cholesky Factorization

This is the special case of LU factorization for $A > 0$.

Definition. A matrix $A \in \mathbb{C}^{n\times n}$ is said to be Hermitian positive definite, denoted by $A > 0$, if
1. $A = A^H$;
2. $x^H A x > 0$ for all $x \in \mathbb{C}^n$, $x \neq 0$.

Remark.
1. $(x^H A x)^H = x^H A^H x = x^H A x$, that is, $x^H A x \in \mathbb{R}$.
2. $A = [a_{jk}] > 0$ implies $a_{jj} = e_j^H A e_j > 0$ for all $j$, where $e_j$ is the vector with all zero entries except a 1 in the $j$-th entry.

Theorem. Let $A = [a_{jk}] \in \mathbb{C}^{n\times n}$. Then:
1. For any nonsingular $X \in \mathbb{C}^{n\times n}$: $A > 0 \iff X^H A X > 0$.
2. If $A > 0$, then $\tilde{A} = [a_{jk}]_{n_1 \le j,k \le n_2} > 0$ for all $1 \le n_1 \le n_2 \le n$.
3. $A > 0$ if and only if there exists a unique lower-triangular matrix
\[ L = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ l_{n1} & \cdots & l_{n,n-1} & l_{nn} \end{bmatrix} \]
with $l_{jj} > 0$, $j = 1,2,\dots,n$, such that $A = LL^H$. This is called the Cholesky factorization of $A$; note that no pivoting is needed.

Proof.
1. Let $X$ be nonsingular. Note that $A^H = A$ if and only if $(X^H A X)^H = X^H A^H X = X^H A X$. Now let $x \in \mathbb{C}^n$, $x \neq 0$, and set $y = X^{-1}x \neq 0$. Then
\[ x^H A x = y^H (X^H A X)\, y, \]
so positivity of one quadratic form implies positivity of the other.
2. Let $1 \le n_1 \le n_2 \le n$ and $\tilde{A} = [a_{jk}]_{n_1 \le j,k \le n_2}$. Observe that $A = A^H$ implies $\tilde{A} = \tilde{A}^H$. Let $\tilde{x} = [x_{n_1}, x_{n_1+1}, \dots, x_{n_2}]^T \neq 0$ be arbitrary, and set
\[ x = [\,0 \;\cdots\; 0 \;\; x_{n_1} \;\cdots\; x_{n_2} \;\; 0 \;\cdots\; 0\,]^T \in \mathbb{C}^n. \]
Note that $x \neq 0$. By the construction of $x$ and since $A > 0$, we have $0 < x^H A x = \tilde{x}^H \tilde{A} \tilde{x}$ for all $\tilde{x} \neq 0$. This completes the proof that $\tilde{A} > 0$.
3. We prove $A > 0 \iff A = LL^H$ by induction on $n$. For $n = 1$: $A = [a_{11}] > 0$ implies $a_{11} > 0$. Let $l_{11} = \sqrt{a_{11}} > 0$ and $L = [l_{11}]$. Then $a_{11} = l_{11}\bar{l}_{11}$, i.e., $A = LL^H$.

For the step $n-1 \to n$: let $A \in \mathbb{C}^{n\times n}$, $A > 0$, $n \ge 2$. Write
\[ A = \begin{bmatrix} a_{11} & w^H \\ w & A_{22} \end{bmatrix}, \]
where $a_{11} > 0$, $w \in \mathbb{C}^{n-1}$, $A_{22} = [a_{jk}]_{2 \le j,k \le n}$. Thus
\[ A = \begin{bmatrix} \sqrt{a_{11}} & 0 \\ w/\sqrt{a_{11}} & I \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & \hat{A}_{22} \end{bmatrix} \begin{bmatrix} \sqrt{a_{11}} & w^H/\sqrt{a_{11}} \\ 0 & I \end{bmatrix}, \]
where
\[ \hat{A}_{22} = A_{22} - \frac{1}{a_{11}}\, w w^H \in \mathbb{C}^{(n-1)\times(n-1)} \]
is called the Schur complement of $a_{11}$ in $A$. By the previous steps, $\begin{bmatrix} 1 & 0 \\ 0 & \hat{A}_{22} \end{bmatrix} > 0$ by part 1, which implies $\hat{A}_{22} > 0$. Applying the induction hypothesis to $\hat{A}_{22} \in \mathbb{C}^{(n-1)\times(n-1)}$, we have $\hat{A}_{22} = \tilde{L}\tilde{L}^H$, where
\[ \tilde{L} = \begin{bmatrix} l_{22} & 0 & \cdots & 0 \\ l_{32} & l_{33} & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ l_{n2} & \cdots & l_{n,n-1} & l_{nn} \end{bmatrix}, \qquad l_{jj} > 0, \; j = 2,3,\dots,n. \]
Thus
\[ A = \begin{bmatrix} l_{11} & 0 \\ \hat{l}_1 & \tilde{L} \end{bmatrix} \begin{bmatrix} l_{11} & \hat{l}_1^H \\ 0 & \tilde{L}^H \end{bmatrix} = LL^H, \]
where $l_{11} = \sqrt{a_{11}}$, $\hat{l}_1 = w/l_{11}$, and $L$ is lower triangular with $l_{jj} > 0$, $j = 1,2,\dots,n$. Moreover, the construction of the matrix $L$ shows that $L$ is unique. This completes the proof. □

The proof of part 3 can be restated as an actual algorithm. Since $A = A^H$, we only need the matrix entries $a_{jk}$, $j \ge k$, $j,k = 1,2,\dots,n$. The algorithm overwrites these entries of $A$ with the entries of the lower-triangular Cholesky factor $L$, column by column: after the first $k-1$ steps, the first $k-1$ columns of $L$ already contain the final values of their entries, while the remaining entries will in general still change in the remaining steps.

Notation: for $L = [l_{jk}] \in \mathbb{C}^{n\times n}$, write $l_{j:n,k} = [l_{jk},\, l_{j+1,k},\, \dots,\, l_{nk}]^T$.

Algorithm 1 (Cholesky factorization).
Input: the elements $a_{jk}$, $j \ge k$, of $A = [a_{jk}] \in \mathbb{C}^{n\times n}$, $A > 0$.
Set $l_{jk} = a_{jk}$ for all $j \ge k$, $j,k = 1,2,\dots,n$.
for $k = 1,2,\dots,n$:
    Set $l_{kk} = \sqrt{l_{kk}}$.
    Set $l_{k+1:n,k} = l_{k+1:n,k}\,/\,l_{kk}$.
    for $j = k+1, k+2, \dots, n$:
        Set $l_{j:n,j} = l_{j:n,j} - l_{j:n,k}\,\bar{l}_{jk}$.
    end
end
Output: the Cholesky factor $L = [l_{jk}] \in \mathbb{C}^{n\times n}$ of $A$.

1.2.2 Cholesky Factorization of Sparse Matrices

Let $A > 0$ be sparse. How can we ensure that the Cholesky factor $L$ is also sparse?

Example. Consider the so-called arrow matrices of the form
\[ A = \begin{bmatrix} \times & \times & \cdots & \times \\ \times & \times & & 0 \\ \vdots & & \ddots & \\ \times & 0 & & \times \end{bmatrix}, \qquad A > 0. \]
In general, such matrices $A$ have only $n + 2(n-1)$ nonzero entries. But when we apply Cholesky factorization to $A$, we obtain a Cholesky factor $L$ that is a dense lower-triangular matrix:
\[ L = \begin{bmatrix} \times & & & 0 \\ \times & \times & & \\ \vdots & \vdots & \ddots & \\ \times & \times & \cdots & \times \end{bmatrix}. \]
That is, all sparsity is lost.

Remedy: before applying Cholesky factorization, reorder the rows and columns $(1,2,\dots,n) \to (2,3,\dots,n,1)$, i.e., form
\[ P^T A P = \tilde{A} = \begin{bmatrix} \times & & 0 & \times \\ & \ddots & & \vdots \\ 0 & & \times & \times \\ \times & \cdots & \times & \times \end{bmatrix} \]
with the corresponding permutation matrix $P$. Then $\tilde{A} = \tilde{L}\tilde{L}^H$, where
\[ \tilde{L} = \begin{bmatrix} \times & & & 0 \\ & \ddots & & \\ 0 & & \times & \\ \times & \cdots & \times & \times \end{bmatrix}. \]
Thus all sparsity is preserved.

Algorithm 2 (Sparse Cholesky factorization).
Input: a sparse matrix $A > 0$.
Step 1 (Symbolic factorization): based on the sparsity structure of $A$, determine a permutation matrix $P$ such that the Cholesky factor $L$ of $P^T A P = LL^H$ is sparse, and determine the sparsity structure of $L$.
Step 2 (Numerical factorization): compute the entries of $L$.
Output: a permutation matrix $P$ and a lower-triangular matrix $L$ such that $P^T A P = LL^H$.

Question: how to find $P$?

Notation: for a matrix $A = [a_{jk}] \in \mathbb{C}^{n\times n}$, $\mathrm{nnz}(A)$ = number of nonzero entries $a_{jk} \neq 0$ of $A$.

Optimal choice of $P$: the Cholesky factor $L$ of $P^T A P$ is such that $\mathrm{nnz}(L)$ is minimal.

Theorem. The problem of determining the optimal $P$ is NP-complete.

Consequence: in practice, only heuristics for finding a good $P$ are feasible.

The problem of finding $P$ can be viewed as a graph problem. Let $A = A^H > 0$, so that $a_{jk} \neq 0 \iff a_{kj} \neq 0$. Connection: for $A = A^H$, view $\mathcal{G}(A)$ as an undirected graph.

First step of Cholesky factorization: $A \to \hat{A}_{22} = [\hat{a}_{jk}]_{j,k = 2,3,\dots,n}$, where
\[ \hat{a}_{jk} = a_{jk} - \frac{a_{j1}\bar{a}_{k1}}{a_{11}}, \qquad j,k = 2,3,\dots,n. \]
In general, $\hat{a}_{jk} \neq 0$ $\iff$ $a_{jk} \neq 0$, or $a_{j1} \neq 0$ and $a_{k1} \neq 0$. The entry $\hat{a}_{jk} \neq 0$ is called a fill-in element if $a_{jk} = 0$ but $a_{j1} \neq 0$ and $a_{k1} \neq 0$.

Example: a $6 \times 6$ sparsity pattern for which the first step of Cholesky factorization creates one fill-in element in $\hat{A}_{22}$. (The patterns of $A$ and $\hat{A}_{22}$ and the graphs $\mathcal{G}(A)$, $\mathcal{G}(\hat{A}_{22})$ are shown in Figure 1.4.)

1.2.3 Minimal degree algorithm

Notation: $d_j$ = out-degree of node $j$ = number of edges of the form $(j,k)$, $k \neq j$. Recall that the graph $\mathcal{G}(A)$ of a Hermitian matrix $A = A^H$ is undirected; thus the degree $d_j$ of a node $j$ is the same as the out-degree of $j$ or the in-degree of $j$.

In order to reduce the possibility of creating fill-in elements, we reorder the nodes such that the node corresponding to the $k$-th step of Cholesky factorization has minimal degree.

Example: see the earlier example with a $6 \times 6$ matrix again. Tie-breaker: if more than one node has minimal degree, use the node with the smallest index.

(Figure 1.4: $\mathcal{G}(A)$ and $\mathcal{G}(\hat{A}_{22})$; the fill-in edge $(2,5)$ is due to $a_{21} \neq 0$ and $a_{51} \neq 0$.)

The nodes are deleted in this order: 1, 3, 5, 2, 4, 6. Note that the fill-in element is $a_{34}$, not directly related to the edge $(2,5)$.

Reordering of $A$: $(1,2,3,4,5,6) \to (1,3,5,2,4,6)$, i.e., form $P^T A P$ with the corresponding permutation matrix $P$. In this case, $\mathrm{nnz}(L + L^H) = \mathrm{nnz}(A) + 2$.

General case: let $\mathcal{G} = (N, E)$ be an undirected graph and $i \in N$. Define the elimination graph $\mathcal{G}_i = (N_i, E_i)$, where $N_i = N \setminus \{i\}$ and
\[ E_i = \{(j,k) \in E \mid j \neq i,\ k \neq i\} \cup \{(j,k) \mid (j,i) \in E \text{ and } (i,k) \in E,\ j \neq k\}. \]

Algorithm 3 (Minimal degree algorithm).
Input: the undirected graph $\mathcal{G}_0 = (N_0, E_0)$ associated with a matrix $A \in \mathbb{C}^{n\times n}$, $A > 0$.
for $k = 1,2,\dots,n$:
    1. Determine a node $i_k \in N_{k-1}$ of minimal degree in $\mathcal{G}_{k-1}$.
    2. Set $\mathcal{G}_k = (\mathcal{G}_{k-1})_{i_k}$.
end
Output: a reordering of the $n$ rows and columns of $A$: $(1,2,3,\dots,n) \to (i_1, i_2, i_3, \dots, i_n)$.

Remark.
1. In general, the minimal degree ordering does NOT minimize $\mathrm{nnz}(L)$, i.e., it is not optimal.
2. There are many other heuristics for reordering the rows and columns of $A$.

1.3 Sparse LU Factorization

1.3.1 Review of LU Factorization

Let $A = [a_{jk}] \in \mathbb{C}^{n\times n}$ be nonsingular. Special case (no pivoting needed): $A = LU$, where
\[ L = \begin{bmatrix} 1 & & & 0 \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & & \vdots \\ & \ddots & \ddots & u_{n-1,n} \\ 0 & & 0 & u_{nn} \end{bmatrix} \]
are unit lower triangular and upper triangular, respectively.

Actual algorithm: $A \to U$, $I \to L$. After $k-1$ steps, the first $k-1$ columns of $L$ and the top $k \times n$ submatrix of $U$ contain the final values of the factors (no pivoting); the remaining entries will in general still change in the remaining steps of the algorithm.

Algorithm 4 (LU factorization without pivoting).
Input: $A \in \mathbb{C}^{n\times n}$.
for $k = 1,2,\dots,n-1$:
    if $u_{kk} = 0$: stop (pivoting is needed, or $A$ is singular).
    for $j = k+1, k+2, \dots, n$:
        Set $l_{jk} = u_{jk}/u_{kk}$.
        Set $u_{j,k:n} = u_{j,k:n} - l_{jk}\,u_{k,k:n}$.
    end
end
Output: $L$ and $U$ such that $A = LU$.

Remark (general case).
1. $u_{kk} = 0$ can occur even if $A$ is nonsingular.
2. Numerical instability can occur if $u_{kk} \neq 0$ but $|u_{kk}| \ll \max_i |u_{ik}|$.

Remedies (at each $k$-th step):
Partial pivoting: find $t \in \{k, k+1, \dots, n\}$ such that $|u_{tk}| = \max_{i \ge k} |u_{ik}|$, and interchange rows $t$ and $k$ of $U$. Final factorization: $PA = LU$, where $P$ is a permutation matrix.
Complete pivoting: find $t, c \in \{k, k+1, \dots, n\}$ such that $|u_{tc}| = \max_{i,l \ge k} |u_{il}|$, and interchange rows $t$ and $k$ along with columns $c$ and $k$. Final factorization: $PAQ = LU$, where $P$ and $Q$ are permutation matrices.

In both cases: the algorithm does not stop prematurely $\iff$ $A$ is nonsingular.

1.3.2 Sparse LU Factorization

Let $A \in \mathbb{C}^{n\times n}$ be sparse. Goal: an LU factorization $PAQ = LU$, where $L$ and $U$ are sparse. Difficulty: in general, $P$ and $Q$ cannot be determined by symbolic factorization alone, since we also need to pivot for stability. Instead, $P$ and $Q$ are determined during the actual factorization. Note: symbolic factorization (minimal degree) can be used as a preprocessing step: from $\mathcal{G}(A)$, obtain permutation matrices $P_0$ and $Q_0$, and then run sparse LU factorization on the reordered version $P_0 A Q_0$ of the original matrix $A$.

(Figure 1.5: step $k$ of sparse LU factorization. Any entry $u_{il} \neq 0$ of the submatrix $U^{(k)} = [u_{il}]_{i,l = k,k+1,\dots,n}$ is a candidate for the $k$-th pivot element.)

Markowitz criterion: define
$r_i = r_i^{(k)}$ = number of nonzero entries in the $i$-th row $u_{i,k:n}$ of $U^{(k)}$;
$c_l = c_l^{(k)}$ = number of nonzero entries in the $l$-th column $u_{k:n,l}$ of $U^{(k)}$.
If $u_{il}$ is used as the pivot element, then in the worst case $(r_i^{(k)} - 1)(c_l^{(k)} - 1)$ fill-in elements are created in step $k$. Basic idea: choose $(i,l)$ so that $(r_i^{(k)} - 1)(c_l^{(k)} - 1)$ is minimal.

Example: in step $k$, for the $6 \times 6$ pattern shown in the notes, Gaussian elimination with the current $(1,1)$ pivot would create fill-in at 5 locations, while $\min_{i,l}(r_i - 1)(c_l - 1) = 1 \cdot 1$ with equality for $i = 3$ and $l = 2$. Swapping rows 3 and 1 and columns 2 and 1 leaves only 1 fill-in location, $(2,3)$.

General case: to guarantee numerical stability, we also need to make sure that $|u_{il}|$ is not too small.

Practical Markowitz criterion: among all $u_{il} \neq 0$, $i,l \in \{k, k+1, \dots, n\}$, with
\[ |u_{il}| \ge \alpha \max_{k \le j \le n} |u_{ij}| \quad \text{and} \quad |u_{il}| \ge \alpha \max_{k \le j \le n} |u_{jl}|, \]
choose $u_{il}$ such that $(r_i^{(k)} - 1)(c_l^{(k)} - 1)$ is minimal. Tie-breaker: choose the smallest $i$ and $l$. Here $0 < \alpha \le 1$ is a parameter; a typical choice is $\alpha = 0.1$. Note: there are many variants of this basic criterion.

1.4 Storage of Sparse Matrices

Let $A = [a_{jk}] \in \mathbb{C}^{m\times n}$ or $\mathbb{R}^{m\times n}$ be sparse, with $\mathrm{nnz} = \mathrm{nnz}(A)$ the number of nonzero entries $a_{jk} \neq 0$.

Coordinate (COO) format (used in Matlab): three arrays.
VA: complex or real array of length nnz that contains the values $a_{jk} \neq 0$, in any order.
JA: integer array of length nnz that contains the row indices $j$, in the same order as VA.
KA: integer array of length nnz that contains the column indices $k$, in the same order as VA.
Then the potentially nonzero entries $a_{jk} \neq 0$ of $A$ are given by $a_{JA(i),KA(i)} = VA(i)$, $i = 1,2,\dots,\mathrm{nnz}$.

Example:
\[ A = \begin{bmatrix} 0 & 12.7 & -1.5 & 0 & 3 \\ -23 & 0 & 0 & 7.5 & 0 \\ 0 & -7.1 & 0 & 0 & 2 \\ 3.3 & -2 & 0 & 1.2 & 0 \end{bmatrix} \in \mathbb{R}^{4\times 5}, \qquad \mathrm{nnz} = 10, \]
\[ VA = [-23,\ 7.5,\ 12.7,\ 1.2,\ 3.3,\ -7.1,\ -1.5,\ 2,\ -2,\ 3], \]
\[ JA = [2,\ 2,\ 1,\ 4,\ 4,\ 3,\ 1,\ 3,\ 4,\ 1], \]
\[ KA = [1,\ 4,\ 2,\ 4,\ 1,\ 2,\ 3,\ 5,\ 2,\ 5]. \]
The order in COO is arbitrary.

Compressed sparse row (CSR) format. Same example as before:
\[ VA = [12.7,\ -1.5,\ 3,\ -23,\ 7.5,\ -7.1,\ 2,\ 3.3,\ -2,\ 1.2], \]
\[ KA = [2,\ 3,\ 5,\ 1,\ 4,\ 2,\ 5,\ 1,\ 2,\ 4], \]
\[ IA = [1,\ 4,\ 6,\ 8,\ 11]. \]
Three arrays:
VA: complex or real array of length nnz that contains the values $a_{jk} \neq 0$, stored row by row (within each row, the order can be arbitrary).
KA: integer array of length nnz that contains the corresponding column indices $k$, in the same order as VA.
IA: integer array of length $m+1$ that contains pointers to the beginning of each row in VA and KA: $IA(j)$ = index of the first stored entry of row $j$, $j = 1,2,\dots,m$; $IA(m+1) = IA(1) + \mathrm{nnz}$. We have $IA(j+1) - IA(j)$ = number of potentially nonzero entries in the $j$-th row of $A$, $j = 1,2,\dots,m$. The potentially nonzero entries in row $j$ are given by $a_{j,KA(i)} = VA(i)$, $i = IA(j), IA(j)+1, \dots, IA(j+1)-1$, for all $j = 1,2,\dots,m$.

1.5 Fast Elliptic Solvers

Large sparse systems $Av = b$ often exhibit special structures that can be exploited in their solution. Standard example (model problem): Poisson's equation on a very simple domain.

1.5.1 One-Dimensional Case

\[ -\frac{d^2 u(x)}{dx^2} = f(x), \quad 0 < x < 1, \qquad u(0) = u(1) = 0. \]
Grid: $h = \frac{1}{m+1}$, $x_j = jh$, $j = 0,1,2,\dots,m+1$.

Centered-difference approximation: $v_j \approx u(x_j)$,
\[ -\frac{d^2 u(x_j)}{dx^2} \approx \frac{2u(x_j) - u(x_{j-1}) - u(x_{j+1})}{h^2}, \qquad f_j = f(x_j). \]
Hence
\[ 2v_j - v_{j-1} - v_{j+1} = h^2 f_j, \quad j = 1,2,\dots,m, \qquad v_0 = v_{m+1} = 0 : \]
$m$ linear equations for the $m$ unknowns $v_1, v_2, \dots, v_m$. Compact form:
\[ \begin{bmatrix} 2 & -1 & & 0 \\ -1 & 2 & -1 & \\ & \ddots & \ddots & -1 \\ 0 & & -1 & 2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix} = h^2 \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_m \end{bmatrix}, \qquad \text{or} \qquad T_m v = h^2 f. \]
Note that $T_m > 0$.
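Algorithm 1 above translates almost line for line into NumPy. The following is a dense sketch for checking the column-by-column updates (a real sparse code would instead work on packed storage after a symbolic factorization); the function name `cholesky` is mine:

```python
import numpy as np

def cholesky(A):
    """Algorithm 1: overwrite the lower triangle of a copy of A with the
    Cholesky factor L, column by column (A Hermitian positive definite)."""
    n = A.shape[0]
    L = np.tril(A).astype(complex)        # l_jk = a_jk for j >= k
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k].real)   # l_kk = sqrt(l_kk)
        L[k+1:, k] /= L[k, k]             # scale column k below the diagonal
        for j in range(k + 1, n):         # update the trailing columns
            L[j:, j] -= L[j:, k] * np.conj(L[j, k])
    return L

# Example: a small symmetric positive definite matrix.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 2.0],
              [0.0, 2.0, 5.0]])
L = cholesky(A)
```

For this $A$ the factor is $L = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 2 & 0 \\ 0 & 1 & 2 \end{bmatrix}$, and $LL^H$ reproduces $A$ exactly; note how the arrow-matrix discussion above predicts which zeros of $A$ survive in $L$.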
Solution $v$ of this system: $v_j \approx u(x_j)$ with 2nd-order accuracy, $v_j = u(x_j) + O(h^2)$.

Lemma. The eigenvalues $\lambda_l$ and eigenvectors $z_l$ of $T_m$ are given by
\[ \lambda_l = 2\left(1 - \cos\frac{\pi l}{m+1}\right), \quad l = 1,2,\dots,m, \]
\[ z_l = \sqrt{\frac{2}{m+1}} \left[ \sin\frac{\pi l}{m+1},\ \sin\frac{2\pi l}{m+1},\ \dots,\ \sin\frac{m\pi l}{m+1} \right]^T \in \mathbb{R}^m. \]
Since $\lambda_l$ is an eigenvalue and $z_l$ the associated eigenvector of $T_m$, we have $T_m z_l = \lambda_l z_l$. The set $\{z_l\}$ is orthonormal:
\[ z_l^T z_j = \begin{cases} 0 & \text{if } l \neq j, \\ 1 & \text{if } l = j, \end{cases} \qquad \|z_l\|_2 = 1. \]
Compact formulation: $T_m Z = Z\Lambda$, $Z^T Z = Z Z^T = I_m$ ($m \times m$ identity matrix), where
\[ Z = [z_1\ z_2\ \cdots\ z_m] \in \mathbb{R}^{m\times m}, \qquad \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_m) \in \mathbb{R}^{m\times m}. \]
Corollary. $T_m = Z\Lambda Z^T$ and $Z^T T_m Z = \Lambda$.

1.5.2 Two-Dimensional Case

Model problem:
\[ -\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = f(x,y), \quad 0 < x,y < 1, \qquad u = 0 \text{ on the boundary.} \]
Let $h = \frac{1}{m+1}$ and $(x_j, y_k) = (jh, kh)$. The centered-difference approximation at $(x,y) = (x_j, y_k)$ is given by
\[ 4v_{jk} - v_{j-1,k} - v_{j+1,k} - v_{j,k-1} - v_{j,k+1} = h^2 f_{jk}, \quad j,k = 1,2,\dots,m, \]
where $v_{0,k} = v_{m+1,k} = v_{j,0} = v_{j,m+1} = 0$ and $f_{jk} = f(x_j,y_k)$, $v_{jk} \approx u(x_j,y_k)$.

Compact formulation of the above system:
\[ T_m V + V T_m = h^2 F, \qquad (*) \]
where $F = [f_{jk}]_{j,k=1,2,\dots,m} \in \mathbb{R}^{m\times m}$ is given and $V = [v_{jk}]_{j,k=1,2,\dots,m} \in \mathbb{R}^{m\times m}$ is unknown. $(*)$ represents a system of $n = m^2$ linear equations:
\[ \begin{bmatrix} T_m + 2I_m & -I_m & & 0 \\ -I_m & T_m + 2I_m & \ddots & \\ & \ddots & \ddots & -I_m \\ 0 & & -I_m & T_m + 2I_m \end{bmatrix} \begin{bmatrix} v_{11} \\ \vdots \\ v_{m1} \\ v_{12} \\ \vdots \\ v_{m2} \\ \vdots \\ v_{mm} \end{bmatrix} = h^2 \begin{bmatrix} f_{11} \\ \vdots \\ f_{m1} \\ f_{12} \\ \vdots \\ f_{m2} \\ \vdots \\ f_{mm} \end{bmatrix}, \]
i.e., $T_{m\times m}\, v = h^2 f$ with $T_{m\times m} \in \mathbb{R}^{m^2 \times m^2}$ and $v, f \in \mathbb{R}^n = \mathbb{R}^{m^2}$.

Solution of $(*)$: recall that $T_m = Z\Lambda Z^T$ and $Z^T T_m Z = \Lambda$. Multiplying $(*)$ from the left by $Z^T$ and from the right by $Z$, and using the relation $I_m = ZZ^T$, it follows that
\[ Z^T T_m Z\, Z^T V Z + Z^T V Z\, Z^T T_m Z = h^2 Z^T F Z, \]
i.e., $\Lambda \hat{V} + \hat{V}\Lambda = h^2 \hat{F}$, where $\hat{V} = Z^T V Z$ and $\hat{F} = Z^T F Z$, or entrywise
\[ \lambda_j \hat{v}_{jk} + \hat{v}_{jk}\lambda_k = h^2 \hat{f}_{jk}, \qquad j,k = 1,2,\dots,m. \]
Thus
\[ \hat{v}_{jk} = \frac{h^2 \hat{f}_{jk}}{\lambda_j + \lambda_k} \quad \text{for all } j,k = 1,2,\dots,m. \]

Remark (algorithm for solving the two-dimensional model problem).
Input: $m$; $F = [f(x_j,y_k)]_{j,k=1,2,\dots,m}$.
1. Set $\hat{F} = Z^T F Z$.
2. Set $\hat{v}_{jk} = \dfrac{h^2 \hat{f}_{jk}}{\lambda_j + \lambda_k}$.
3. Set $V = Z\hat{V}Z^T$.
Output: approximations $v_{jk} \approx u(x_j, y_k)$, $j,k = 1,2,\dots,m$.
Accuracy: $v_{jk} = u(x_j, y_k) + O(h^2)$.

In order to make this fast, use FFTs to perform the matrix-matrix multiplications with $Z$ and $Z^T$. Note that for the model problem, with
\[ u(x,y) = \sin(L\pi x)\sin(M\pi y), \quad L, M \ge 1, \]
\[ -\frac{\partial^2 u(x,y)}{\partial x^2} - \frac{\partial^2 u(x,y)}{\partial y^2} = (L^2 + M^2)\,\pi^2 \sin(L\pi x)\sin(M\pi y). \]

Flop count (naive implementation):
1. In step 1, two matrix multiplications in $\mathbb{R}^{m\times m}$: $\approx 4m^3$ flops.
2. $3m^2$ flops.
3. Similar to step 1.
Total: $O(m^3) = O(n^{3/2})$ flops, where $n = m^2$. A totally naive implementation (no sparsity exploited) costs $O(n^3)$. An optimal algorithm for solving the two-dimensional model problem requires at least $Cn$ flops.

Recall that $Z = [z_{jk}]_{j,k=1,2,\dots,m} \in \mathbb{R}^{m\times m}$, where
\[ z_{jk} = \sqrt{\frac{2}{m+1}}\, \sin\frac{jk\pi}{m+1}. \]
Notes:
1. $Z = Z^T$.
2. $Z$ is related to the $(2m+2) \times (2m+2)$ DFT (discrete Fourier transform) matrix
\[ \Phi = [\phi_{jk}]_{j,k=0,1,2,\dots,2m+1} \in \mathbb{C}^{(2m+2)\times(2m+2)}, \qquad \phi_{jk} = e^{-\frac{ijk\pi}{m+1}} = \cos\frac{jk\pi}{m+1} - i\sin\frac{jk\pi}{m+1}. \]
In fact,
\[ Z = -\sqrt{\frac{2}{m+1}}\; \mathrm{Im}\,\tilde{\Phi}, \qquad \tilde{\Phi} = [\phi_{jk}]_{j,k=1,2,\dots,m}, \]
and note that $\mathrm{Im}(\Phi)\,\hat{x} = \mathrm{Im}(\Phi\hat{x})$ if $\hat{x}$ is real.

Proposition. For any $x \in \mathbb{R}^m$, the matrix-vector product $y = Zx = Z^T x$ can be computed as follows:
1. Set $\hat{x} = [\,0,\ x_1, \dots, x_m,\ 0, \dots, 0\,]^T \in \mathbb{R}^{2m+2}$ ($m+1$ trailing zeros).
2. Compute $\hat{y} = \Phi\hat{x}$, where $\Phi$ is the $(2m+2)\times(2m+2)$ DFT matrix.
3. Partition $\hat{y}$ and let $\tilde{y} \in \mathbb{C}^m$ consist of the entries $1, 2, \dots, m$ of $\hat{y}$.
4. Set $y = -\sqrt{\frac{2}{m+1}}\; \mathrm{Im}\,\tilde{y}$.

Corollary. The product $y = Zx = Z^T x \in \mathbb{R}^m$ can be computed with $O(m\log m)$ flops using the FFT.

Remark.
1. In Matlab: $\hat{y} = \mathrm{fft}(\hat{x}) \iff \hat{y} = \Phi\hat{x}$.
2. For any $F \in \mathbb{R}^{m\times m}$, we can compute $\hat{F} = Z^T F Z$ with $O(m^2 \log m) = O(n\log n)$ flops using FFTs ($n = m^2$): let $G = Z^T F$; then $\hat{F} = GZ = (Z^T G^T)^T$.
3. With these fast DFT computations, the above algorithm for the two-dimensional model problem requires $O(n\log n)$ flops. The algorithm is a fast elliptic solver.
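The whole fast solver above fits in a few lines of NumPy. In this sketch (function names `dst_mult` and `poisson_solve` are mine), `dst_mult` implements the proposition step by step — zero-pad to length $2m+2$, apply the FFT, and take $-\sqrt{2/(m+1)}$ times the imaginary part of entries $1,\dots,m$ — with `np.fft.fft` playing the role of Matlab's `fft`:

```python
import numpy as np

def dst_mult(X):
    """Apply Z (= Z^T) to each column of X via a length-(2m+2) FFT,
    following the proposition above."""
    m = X.shape[0]
    pad = np.zeros((2 * m + 2,) + X.shape[1:])
    pad[1:m + 1] = X                          # step 1: x_hat = [0, x, 0, ..., 0]
    Y = np.fft.fft(pad, axis=0)               # step 2: y_hat = Phi x_hat
    return -np.sqrt(2.0 / (m + 1)) * Y[1:m + 1].imag   # steps 3-4

def poisson_solve(F, h):
    """Fast solver for T_m V + V T_m = h^2 F on the m x m interior grid."""
    m = F.shape[0]
    lam = 2.0 * (1.0 - np.cos(np.pi * np.arange(1, m + 1) / (m + 1)))
    Fhat = dst_mult(dst_mult(F).T).T          # F_hat = Z^T F Z  (Z symmetric)
    Vhat = h * h * Fhat / (lam[:, None] + lam[None, :])
    return dst_mult(dst_mult(Vhat).T).T       # V = Z V_hat Z^T

# Check against u(x, y) = sin(pi x) sin(2 pi y), for which
# -Laplace(u) = 5 pi^2 u (the L = 1, M = 2 case above).
m = 63
h = 1.0 / (m + 1)
x = np.arange(1, m + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
F = 5 * np.pi**2 * U
V = poisson_solve(F, h)
err = np.abs(V - U).max()
```

The measured error for this smooth $u$ is consistent with the $O(h^2)$ accuracy claim above, and each call costs $O(m^2 \log m) = O(n \log n)$ flops.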

