# PETE 604

Texas A&M


These 142 pages of class notes for PETE 604 at Texas A&M University, taught by Robert Wattenbarger in Fall, were uploaded by Lenore Medhurst on Wednesday, October 21, 2015. Since upload they have received 75 views.


CHAPTER 3

Fully Implicit Simulators

Stability
Finite Difference Equations
The Tridiagonal Template
Components of Matrix A
Direct Solution with GBAND
Root Finding Methods
Numerical Determination of Jacobian Matrix

Stability

Instability leads to a simulator "blowing up," or at least oscillating with solutions that are not physically realistic. Instability is usually associated with an oscillating solution. An oscillation will sometimes get worse, but occasionally will disappear as the simulation progresses. There is often confusion because instability can come from different sources. We will identify the following types of instability:

1. Timestep sequence stability
2. Newton iteration stability
3. Matrix solution stability

Timestep sequence instability. This is the most common analysis that is made. It refers to the oscillation that occurs as timesteps are computed: one timestep the solution will be too high, then the next timestep it will be too low. We are familiar with the Von Neumann analysis for linear diffusion equations. This shows the cause of instability and shows that explicit pressure gradients are stable only as long as

    Δt ≤ φ μ c_t Δx² / (2 k)

for a 1-D problem. Other instabilities involve nonlinearities in the finite difference equations. IMPES uses explicit k_r and P_c and has the following limits on timestep sizes.

For P_c, a limit of the form (T is the interblock transmissibility; the mobility combination is the harmonic average of the phase mobilities):

    Δt ≤ V_p / [ T max|P'_c| (k_ro/μ_o)(k_rw/μ_w) / (k_ro/μ_o + k_rw/μ_w) ]

For k_r:

    Δt ≤ V_p / (q_t max f'_w)

IMPES is practical for many field simulation problems in spite of these limitations. We can define a Courant-Friedrichs-Lewy (CFL) number as

    N_CFL = q_t Δt f'_w / V_p

It is observed from experience that the simulator is unstable if N_CFL > 2.0. If N_CFL < 1.0 the simulator is stable. For 1.0 < N_CFL < 2.0 the simulator will oscillate but not blow up. For miscible displacement the same stability criterion can be used with f'_w = 1.0; then N_CFL = 1.0 is equivalent to a throughput of 1.0 pore volume during the timestep. The following figure
shows numerical experiments for a 5-spot waterflood. It shows that the solution oscillates when 1.0 < N_CFL < 2.0 but does not blow up.

[Fig. 7 — 5-spot waterflood results calculated with IMPES and AIM: WOR, maximum CFL number, and maximum change versus pore volumes injected. From Young and Russell, SPE 25245, 1993.]

A fully implicit simulator is unconditionally stable if, at each timestep, the solution converges to the exact solution of the finite difference equations. If a fully implicit simulator is unstable, it is because we are not converging to the finite difference equations at each timestep.

Newton iteration instability. Newton iteration (or some other iteration scheme) is used to solve the nonlinear finite difference equations. Newton iteration sometimes does not converge, for a variety of reasons. It is possible for the solution to "blow up" during the Newton iteration. It is also possible for the timestep sequence to be unstable if the Newton iteration does not converge at each timestep.

Matrix solution instability. Another source of instability is in the matrix solution itself. For a number of reasons, the matrix solution may not solve the linearized equations and may blow up during the matrix solution.

Summary. There are a number of reasons for a simulator blowing up. It is important to know whether the matrix solution is solving the linearized equations properly and whether the nonlinear Newton iteration is converging. Verifying these is an important step in debugging simulators.
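The CFL screening rule above is easy to apply as a quick diagnostic. A minimal sketch in Python (the codes in these notes are Fortran; the function and variable names here are illustrative only):

```python
def cfl_number(qt, dt, dfw_max, vp):
    """Courant-Friedrichs-Lewy number N_CFL = qt * dt * f'w / Vp for a cell."""
    return qt * dt * dfw_max / vp

def cfl_status(n_cfl):
    """Classify per the rule of thumb quoted in the notes."""
    if n_cfl < 1.0:
        return "stable"
    if n_cfl <= 2.0:
        return "oscillates"
    return "unstable"

# Miscible case (f'w = 1): half a pore volume of throughput per timestep
n = cfl_number(qt=100.0, dt=0.5, dfw_max=1.0, vp=100.0)
```

With f'_w = 1.0 the CFL number is simply the pore volumes of throughput per timestep, which is why the two statements above are equivalent.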
Finite Difference Equations

1-D oil-water model; implicit k_r; ignore P_c and gravity; compressibilities c_o, c_w, c_f.

Mass balance equations:

    Δ(T_o Δp)^(n+1) = -(V_p / (B_o Δt)) δS_w + q_o        (1)
    Δ(T_w Δp)^(n+1) =  (V_p / (B_w Δt)) δS_w + q_w        (2)

where δS_w = S_w^(n+1) - S_w^n.

Implicit k_r. Within the timestep, the relative permeability at Newton iterate (k+1) is expanded about iterate (k):

    k_r^(k+1) = k_r^(k) + k'_r (S_w^(k+1) - S_w^(k)) = k_r^(k) + k'_r δS_w

where k'_r = dk_r/dS_w, evaluated at S_w^(k) taken upstream (or symmetric).

Recalling Eq. (1), assume flow to the right, so that the upstream k_r for the interface between cells i-1 and i is evaluated at cell i-1. The transmissibility coefficients a_oE then carry the implicit k_r expansion. Substituting the expansion into Eq. (1) and collecting terms in the Newton updates

    δp_i = p_i^(k+1) - p_i^(k),    δS_wi = S_wi^(k+1) - S_wi^(k)

gives, after reversing signs to obtain a positive main diagonal, the final linearized form of the oil equation for cell i:

    -a_oE,i-1 δp_{i-1} - a'_oE,i-1 (p_{i-1}^(k) - p_i^(k)) δS_w,i-1
      + (a_oE,i-1 + a_oE,i) δp_i
      + [ V_p/(B_o Δt) - a'_oE,i (p_{i+1}^(k) - p_i^(k)) ] δS_wi
      - a_oE,i δp_{i+1}  =  r_o,i                                   (12)

where a'_oE = ∂a_oE/∂S_w and r_o,i is the oil-equation residual of Eq. (1) evaluated at iterate (k). With flow to the right there is no δS_w,i+1 term: each interface's saturation derivative occurs at its upstream cell. The water equation is linearized the same way.
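The implicit-k_r expansion used in this derivation can be checked numerically. A small Python sketch, assuming a Corey-type k_rw = S_w² purely for illustration:

```python
def krw(sw):
    # Assumed Corey-type water relative permeability (illustration only)
    return sw ** 2

def dkrw_dsw(sw):
    return 2.0 * sw

def krw_implicit(sw_k, dsw):
    """One Newton linearization: kr^(k+1) ~ kr^(k) + kr'(Sw^(k)) * dSw."""
    return krw(sw_k) + dkrw_dsw(sw_k) * dsw

approx = krw_implicit(0.5, 0.01)   # linearized value at Sw = 0.51
exact = krw(0.51)                  # true value
err = abs(approx - exact)          # O(dSw^2) linearization error
```

The error is second order in δS_w, which is why the expansion converges quadratically as the Newton iteration shrinks δS_w.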
The Tridiagonal Template

Each cell i contributes two equations (oil and water) in the two unknowns per cell, δp and δS_w. The Newton equations for cell i couple cells i-1, i, and i+1 through 2x2 blocks aW, aC, aE:

    unknowns:   δp_{i-1}  δS_w,i-1    δp_i   δS_wi    δp_{i+1}  δS_w,i+1

    oil:      [ aW11  aW12 ]   [ aC11  aC12 ]   [ aE11  aE12 ]
    water:    [ aW21  aW22 ]   [ aC21  aC22 ]   [ aE21  aE22 ]

In written form, for the oil equation with flow to the right:

    aW11 = -a_oE,i-1
    aW12 = -a'_oE,i-1 (p_{i-1}^(k) - p_i^(k))

Note that a'_oE,i-1 (p_{i-1}^(k) - p_i^(k)) > 0 and occurs upstream, so

    aW12 = -a'_oE,i-1 (p_{i-1}^(k) - p_i^(k)) < 0

Components of Matrix A

    A = T + T' + D' + P'_c + Q'

T — pressure (transmissibility) terms. For cell i the oil row has entries -a_oE,i-1, (a_oE,i-1 + a_oE,i), -a_oE,i in the δp columns; the water row has the same pattern with a_wE.

T' — derivatives of the upstream transmissibilities with respect to S_w. If flow is in the right direction, the entries fall in the upstream δS_w columns:

    oil row:    -a'_oE,i-1 (Φ_o,i-1 - Φ_o,i)  in column δS_w,i-1
                -a'_oE,i   (Φ_o,i+1 - Φ_o,i)  in column δS_wi
    water row:   a'_wE,i-1 (Φ_w,i-1 - Φ_w,i), etc.

If flow is in the left direction, the same entries shift to the columns δS_wi and δS_w,i+1, since the upstream cells are then on the right.

D' — accumulation derivatives. Each cell contributes a 2x2 diagonal block:

                 δp_i column                     δS_wi column
    oil:     (V_p b_o S_o / Δt)(c_f + c_o)      -(V_p b_o / Δt)
    water:   (V_p b_w S_w / Δt)(c_f + c_w)      +(V_p b_w / Δt)

where b = 1/B is the reciprocal formation volume factor.

P'_c — capillary pressure derivatives. These fall in the water-equation δS_w columns, following the same tridiagonal pattern applied to P_c: entries of the form a_wE,i-1 P'_c,i-1 and a_wE,i P'_c,i+1 off the diagonal. Note that P'_c is negative.

Q' Submatrix

Constant p_wf case, with q_o = J (k_ro/(B_o μ_o)) (p_i - p_wf):

    Ac11 term:   ∂q_o/∂p_i  = J k_ro/(B_o μ_o)                               (13)
    Ac12 term:   ∂q_o/∂S_wi = J [d(k_ro/(B_o μ_o))/dS_w] (p_i - p_wf)        (14)

Another approach uses the fractional flow f_w(S_w):

    q_o = (1 - f_w) J λ_t (p_i - p_wf),   λ_t = k_ro/μ_o + k_rw/μ_w          (15, 16)

    ∂q_o/∂p_i  = (1 - f_w) J λ_t                                             (17)
    ∂q_o/∂S_wi = J [ (1 - f_w) λ'_t - f'_w λ_t ] (p_i - p_wf)                (18)

In each case the Q' submatrix has the following meaning for the oil-water equations:

    Q' = [ ∂q_o/∂p_i  ∂q_o/∂S_wi ]  =  [ ac11  ac12 ]
         [ ∂q_w/∂p_i  ∂q_w/∂S_wi ]     [ ac21  ac22 ]

Constant water injection rate q_wi: no variation in q_wi, and q_o = 0, so ac11 = ac12 = ac21 = ac22 = 0.

Constant p_wf: the derivatives of Eqs. (13)-(14) for oil, and the corresponding k_rw/(B_w μ_w) expressions for water (Eqs. 19-26).

Constant q_o and constant q_t: the remaining cases (Eqs. 27-37) follow the same pattern; with a fixed total rate q_t the phase split goes through the fractional flow, q_w = f_w q_t and q_o = (1 - f_w) q_t, so the saturation derivatives involve f'_w (Eq. 37).
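The template above can be sketched as an assembly routine. The following Python sketch builds the dense 2N x 2N matrix from per-cell 2x2 blocks aW, aC, aE (the block values below are made up; a real simulator stores only the band, as GBAND does):

```python
# Assemble a block-tridiagonal Jacobian for a 1-D system with two
# unknowns per cell (dp, dSw), from per-cell 2x2 blocks aW, aC, aE.

def assemble_dense(aW, aC, aE):
    """Return the full 2n x 2n matrix as a list of lists."""
    n = len(aC)
    A = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for r in range(2):
            for c in range(2):
                A[2 * i + r][2 * i + c] = aC[i][r][c]          # center block
                if i > 0:
                    A[2 * i + r][2 * (i - 1) + c] = aW[i][r][c]  # west block
                if i < n - 1:
                    A[2 * i + r][2 * (i + 1) + c] = aE[i][r][c]  # east block
    return A

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
# Three cells; identity/zero blocks stand in for real coefficients
A = assemble_dense(aW=[Z2, I2, I2], aC=[I2, I2, I2], aE=[I2, I2, Z2])
```

Every nonzero lands within the half-band M = 2 Neq - 1 = 3 of the main diagonal, which is the 1-D band size quoted for GBAND below.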
Direct Solution with GBAND

Subroutine BAND1D2(A, D, X, NROW, LMAX) puts the template coefficients into the band array A for GBAND, for the 1-D, 2-equation (implicit) case. For each row it locates IM, the index of the main diagonal of A, accounting for the right half-band of the row above and the left half-band of the current row, and then stores the blocks (AW terms skipped for I = 1, AE terms skipped for I = IMAX):

    C  First equation of cell I
       A(IM-2) = AW11(I)
       A(IM-1) = AW12(I)
       A(IM)   = AC11(I)
       A(IM+1) = AC12(I)
       A(IM+2) = AE11(I)
       A(IM+3) = AE12(I)
       D(IROW) = R1(I)
    C  Second equation of cell I
       A(IM-3) = AW21(I)
       A(IM-2) = AW22(I)
       A(IM-1) = AC21(I)
       A(IM)   = AC22(I)
       A(IM+1) = AE21(I)
       A(IM+2) = AE22(I)
       D(IROW) = R2(I)

SUBROUTINE GBAND2(A, D, X, N, M, EPS, IERR, IFIRST)   (working version 4/20/88, RAW)

Solution of a system of equations with a band matrix by standard elimination without pivoting.

    A       one-dimensional array containing the band of the matrix, stored by
            rows; the required dimension of A is N*(2M+1)
    M       number of diagonals above the main diagonal; the number of diagonals
            below the main diagonal is also M, so the total band width is 2M+1
    N       number of equations
    D       RHS vector
    X       solution vector
    EPS     each main-diagonal element is compared to EPS during elimination;
            if it is smaller, the counter IERR is incremented
    IERR    number of times the element on the main diagonal was smaller than EPS
    IFIRST  if IFIRST = 0, the matrix is factored and the factors are stored in
            place of the matrix; if IFIRST > 0, the routine assumes the matrix
            has been factored and calculates the solution corresponding to the
            new RHS vector. This option is useful when solving the same
            equations repeatedly with different RHS vectors.

The routine eliminates by columns: for each pivot column JPIVOT it walks down the pivot column, forms RATIO = A(KX)/A(KPIVOT) (incrementing IERR if |A(KPIVOT)| < EPS), and, when IFIRST = 0, subtracts RATIO times the pivot row from the current row and updates D. Back substitution then proceeds from the last row upward, forming X(IROW) from D(IROW) minus the already-known terms of the right half-band, divided by the main-diagonal element.

Matrix Size for GBAND

Nomenclature:

    I    = cells in i direction
    J    = cells in j direction
    K    = cells in k direction
    Neq  = number of equations per cell
    M    = matrix half-band size for GBAND
    N    = total number of equations (matrix rows)

    1-D:   M = 2 Neq - 1          N = I Neq
    2-D:   M = (I + 1) Neq - 1    N = I J Neq
    3-D:   M = (I J + 1) Neq - 1  N = I J K Neq

Band width (all cases) = 2M + 1.
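For the scalar tridiagonal case (Neq = 1, so M = 1 and band width 3), the GBAND procedure reduces to the Thomas algorithm: forward elimination without pivoting, then back substitution. A Python sketch:

```python
def thomas(aw, ac, ae, d):
    """Solve a tridiagonal system. aw/ac/ae are the sub-, main, and
    super-diagonals (aw[0] and ae[-1] unused); d is the right-hand side."""
    n = len(ac)
    c = ac[:]
    rhs = d[:]
    for i in range(1, n):               # forward elimination, no pivoting
        ratio = aw[i] / c[i - 1]
        c[i] -= ratio * ae[i - 1]
        rhs[i] -= ratio * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / c[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = (rhs[i] - ae[i] * x[i + 1]) / c[i]
    return x

# Classic (-1, 2, -1) diffusion stencil with unit right-hand side
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 1.0, 1.0])
```

The work is O(N) rather than the O(N M²) of a general band solve, which is why half-band size controls the cost of direct solution.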
Root Finding Methods

1. Simple substitution (direct substitution)
   a. Easy
   b. Unstable
2. Newton iteration
   a. Most common
   b. Works best with smoothness near the root
      x^(k+1) = x^k - f(x^k) / f'(x^k)
3. Binary halving (bisection)
   a. Straddles the root
   b. x^(k+1) = (x_high + x_low) / 2
4. Secant method (chord slope)
   a. Like Newton, but uses the last two iterates for the slope
   b. f'(x^k) ~ (f^k - f^(k-1)) / (x^k - x^(k-1))
5. Regula falsi
   a. Interpolate on the slope between the points straddling the root
6. Quasi-Newton method
   a. Re-uses the same slope
   b. Accurate evaluation of f(x) each iteration

Numerical Determination of Jacobian Matrix

The following is a sketch of a simple program which shows how the Jacobian coefficients could be calculated by perturbing each pressure, for an IMPES simulation:

       CALL PROPS
       CALL COSYM
       CALL RATES
       CALL RESIDS
       DO 10 J = 1, JMAX
       DO 10 I = 1, IMAX
         RSAVE(I,J) = R(I,J)
         PSAVE(I,J) = P(I,J)
    10 CONTINUE
       DO 20 J = 1, JMAX
       DO 20 I = 1, IMAX
         P(I,J) = P(I,J) + 1.0
         CALL PROPS
         CALL COSYM
         CALL RATES
         CALL RESIDS
         AS(I,J-1) = (R(I,J-1) - RSAVE(I,J-1)) / 1.0
         AE(I-1,J) = (R(I-1,J) - RSAVE(I-1,J)) / 1.0
         AC(I,J)   = (R(I,J)   - RSAVE(I,J))   / 1.0
         AW(I+1,J) = (R(I+1,J) - RSAVE(I+1,J)) / 1.0
         AN(I,J+1) = (R(I,J+1) - RSAVE(I,J+1)) / 1.0
         P(I,J) = PSAVE(I,J)
    20 CONTINUE
       DO 30 J = 1, JMAX
       DO 30 I = 1, IMAX
         R(I,J) = RSAVE(I,J)
    30 CONTINUE
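The perturbation sketch above generalizes to any residual function. A compact Python version (the residual here is a made-up stand-in for RESIDS):

```python
def numerical_jacobian(resid, x, dx=1.0e-6):
    """J[i][j] = dR_i/dx_j by one-sided perturbation of each unknown,
    mirroring the RSAVE/PSAVE bookkeeping in the sketch above."""
    n = len(x)
    r0 = resid(x)                       # residual at the base state
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = x[:]                       # save, perturb one unknown ...
        xp[j] += dx
        rj = resid(xp)
        for i in range(n):
            J[i][j] = (rj[i] - r0[i]) / dx
    return J                            # ... base state left untouched

# Illustrative residual with known Jacobian [[2x, -1], [1, 2y]]
def resid(v):
    x, y = v
    return [x * x - y, x + y * y]

J = numerical_jacobian(resid, [1.0, 2.0])
```

One-sided differences cost one extra residual evaluation per unknown and are first-order accurate in dx, which is usually sufficient for a Newton step.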
The same idea, organized into subroutines (2-D, up to three equations per cell):

COEFS — calculates the Jacobian matrix for the fully implicit method; it simply calls NJACOB.

NJACOB — calculates the Jacobian numerically. It saves the residuals ROLD(IR,I,K) for the oil, water, and (if present) gas equations, sets LPERT = .TRUE., and makes five passes, each pass perturbing one of every five cells:

       DO 20 IPASS = 1, 5
         CALL PERTUR(P,  DP,  1, IPASS)
         IF (LWATER) CALL PERTUR(SW, DSW, 2, IPASS)
         IF (LGAS)   CALL PERTUR(SG, DSG, 3, IPASS)
    20 CONTINUE

It then sets LPERT = .FALSE., restores the residuals, and calls BOUND.

PERTUR(VAR, DVAR, KC, IPASS) — calculates the effect of perturbing the variable VAR by an amount DVAR. The perturbation occurs simultaneously in one of every five cells:

       DO 10 K = 1, KMAX
         ISTART = 1 + MOD(2*K + IPASS, 5)
         DO 10 I = ISTART, IMAX, 5
           IF (KC .LE. 2) THEN
             VAROLD(I,K) = VAR(I,K)
             VAR(I,K) = VAR(I,K) + DVAR
           ENDIF
           IF (KC .EQ. 3) THEN
             (variable bubble point: if P(I,K) > PBUB(I,K), save and perturb
              PSAT(I,K) instead of the saturation)
           ENDIF
    10 CONTINUE

After the perturbation, PROPS and RESIDS are called and the Jacobian columns are formed from difference quotients, e.g.

       AC(IR,KC,I,K) = (R(IR,I,K) - ROLD(IR,I,K)) / DVAR

with the analogous AN, AW, AE, AS entries for the neighbors (I±1, K±1) of each perturbed cell. Finally the perturbed variables are restored from VAROLD.

TEMPL — a debug utility that prints the template contributions ADDC, ADDN, ADDW, ADDE, ADDS for a cell (I,K), accumulates them into the Jacobian arrays AC, AN, AW, AE, AS, and zeroes the ADD arrays.

BOUND — sets the constant-pressure boundary option for a constant-pressure bottom layer: for K = KMAX the off-diagonal blocks AN, AS, AW, AE are zeroed and the central block is set to the identity:

       IF (IR .NE. KC) AC(IR,KC,I,KMAX) = 0.0
       IF (IR .EQ. KC) AC(IR,KC,I,KMAX) = 1.0
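The reason PERTUR can perturb one of every five cells simultaneously is that, with a local difference stencil, cells far enough apart do not share residuals, so a single residual evaluation yields many Jacobian columns at once. A Python sketch of the pass bookkeeping for a 1-D grid (stride 5, as in the listing above; the stride needed in practice depends on the stencil width):

```python
def passes(imax, stride=5):
    """Group cell indices 0..imax-1 so that cells perturbed together are
    at least `stride` apart; each cell appears in exactly one pass."""
    return [list(range(ipass, imax, stride)) for ipass in range(stride)]

g = passes(12)   # 5 passes instead of 12 single-cell perturbations
```

The full Jacobian therefore costs a fixed number of residual evaluations per variable (5 here, 7 in the 3-D code below) instead of one per cell.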
Program for 3-D Evaluation of Jacobian Coefficients Using the Numerical Perturbation Method

subroutine coefs — calculates the matrix coefficients using the fully implicit method; the Jacobian matrix is calculated numerically. If the variable bubble point pressure is considered, then the variable substitution method will be used. Variables: p, sw, sg. Equations: oil mass balance, water mass balance, gas mass balance. The routine times itself (timer calls around the body), optionally prints 'entering coefs' when ldebug1 is set, and uses small increments for p, sw, and sg (dpp = 0.01, dsw = 0.0001, dsg = 0.0001, with dpp adjusted when the variable bubble point option lbbl is on). It saves the residual values prior to perturbation, then for each pass perturbs p, sw, and sg respectively:

    lpert = .true.
    do ipass = 1, 7
      call pertur(p,  dpp, 1, ipass)
      if (lwater) call pertur(sw, dsw, 2, ipass)
      if (lgas)   call pertur(sg, dsg, 3, ipass)
    enddo
    lpert = .false.

It then restores the residual values after the perturbation,

    do k = 1, kmax
      do j = 1, jmax
        do i = 1, imax
          do ir = 1, neq
            r(ir,i,j,k) = rold(ir,i,j,k)
          enddo
        enddo
      enddo
    enddo

considers boundary effects (call bound), and accumulates the CPU timing.

subroutine pertur(var, delta, n, ipass) — calculates the effect on the residuals of perturbing the variable var by an amount delta; the perturbation occurs simultaneously in one of every seven cells:

    do k = 1, kmax
      do j = 1, jmax
        istart = 1 + mod(2*j + 2*k + ipass, 7)
        do i = istart, imax, 7
          varold(i,j,k) = var(i,j,k)
          var(i,j,k) = var(i,j,k) + delta

For a pressure perturbation (n = 1) the routine first saves every property the perturbation touches — rs, bo, bw, bg, vo, vg, deno, denw, deng, and vp — in the work arrays temp1 through temp10.
It then re-evaluates the properties at the perturbed pressure. Table properties are interpolated with TABSEQ:

    call tabseq(ptab, bgtab, noptab, p(i,j,k), bg(i,j,k), nwarn, 'PERTUB', 'BG')
    call tabseq(ptab, vgtab, noptab, p(i,j,k), vg(i,j,k), nwarn, 'PERTUB', 'VG')

If the cell is saturated (p ≤ pb), rs, bo, and vo are also interpolated from the tables ('RS', 'BO', 'VO'); otherwise (undersaturated) they follow from the bubble-point values:

    rs(i,j,k) = rsb(i,j,k)
    bo(i,j,k) = bob(i,j,k) * exp(-coref*(p(i,j,k) - pb(i,j,k)))
    vo(i,j,k) = vob(i,j,k) * (1. + cvoref*(p(i,j,k) - pb(i,j,k)))

The remaining properties are updated from the compressibility relations (bwref, cwref, denost, dengsc, denwst, cfref and the reference pressures), for example

    deno(i,j,k) = (denost + rs(i,j,k)*dengsc) / bo(i,j,k)
    deng(i,j,k) = dengsc / bg(i,j,k)
    vp(i,j,k)   = porvol(i,j,k) * exp(cfref*(p(i,j,k) - pref))

For a water-saturation perturbation (n = 2) the routine saves rko, rkw, rkg, pcow, and denw, recomputes the three-phase relative permeabilities with Stone's method,

    call stone(swtab, rkowtb, rkwwtb, nowtab, sgtab, rkogtb, rkggtb, nogtab,
   1           sw(i,j,k), sg(i,j,k), rko(i,j,k), rkw(i,j,k), rkg(i,j,k))

interpolates pcow from the water table ('PC'), and updates denw.

For a gas-saturation perturbation (n = 3), if the bubble point is fixed (.not. lbbl) or the cell is saturated (p ≤ pb), the routine saves rko, rkw, rkg, and pcog, recomputes the Stone relative permeabilities, and interpolates pcog from the gas table. Otherwise (variable bubble point, undersaturated cell) the perturbed variable is the saturation pressure: the routine saves rs, bo, vo, and deno, restores var, sets psvar = ps(i,j,k) + delta, and interpolates the bubble-point properties

    call tabseq(ptab, rstab, noptab, psvar, rsbvar, nwarn, 'PERTUB', 'RSB')
    call tabseq(ptab, botab, noptab, psvar, bobvar, nwarn, 'PERTUB', 'BOB')
    call tabseq(ptab, votab, noptab, psvar, vobvar, nwarn, 'PERTUB', 'VOB')
from which

    rs(i,j,k)   = rsbvar
    bo(i,j,k)   = bobvar * exp(-coref*(p(i,j,k) - psvar))
    vo(i,j,k)   = vobvar * (1. + cvoref*(p(i,j,k) - psvar))
    deno(i,j,k) = (denost + rs(i,j,k)*dengsc) / bo(i,j,k)

After the perturbation loop, the subroutines affected by it are called:

    call dpoten
    call rates
    call cosym
    call resids

and the numerical Jacobian matrix is formed from difference quotients; for each perturbed cell (i,j,k) the entries for the cell and its six neighbors are

    if (i .gt. 1)    ae(ir,n,i-1,j,k) = (r(ir,i-1,j,k) - rold(ir,i-1,j,k))/delta
    if (j .gt. 1)    as(ir,n,i,j-1,k) = (r(ir,i,j-1,k) - rold(ir,i,j-1,k))/delta
    if (k .gt. 1)    ab(ir,n,i,j,k-1) = (r(ir,i,j,k-1) - rold(ir,i,j,k-1))/delta
                     ac(ir,n,i,j,k)   = (r(ir,i,j,k)   - rold(ir,i,j,k))  /delta
    if (k .lt. kmax) at(ir,n,i,j,k+1) = (r(ir,i,j,k+1) - rold(ir,i,j,k+1))/delta
    if (j .lt. jmax) an(ir,n,i,j+1,k) = (r(ir,i,j+1,k) - rold(ir,i,j+1,k))/delta
    if (i .lt. imax) aw(ir,n,i+1,j,k) = (r(ir,i+1,j,k) - rold(ir,i+1,j,k))/delta

Finally the perturbed values are restored (var and the saved temp arrays, according to n), and the routine returns.

CHAPTER 5

Iterative Matrix Methods

Some Basics of Linear Algebra
Conjugate Gradient Methods in Reservoir Simulation
Method of Steepest Descent
Classical Conjugate Gradient Method
Generalized Conjugate Residual Method
ORTHOMIN Method
Convergence Acceleration
Watts' Conjugate Gradient Method
Appendix A: Derivation of β
Appendix B: Some Computer Programs
Numerical Examples of Convergence Methods

Some Basics of Linear Algebra

Inner Product of Vectors

Let u = u1 i + u2 j + u3 k and v = v1 i +
v2 j + v3 k    (1)

be vectors in three-dimensional vector space, where i, j, and k represent the coordinate vectors. The vectors u and v can be written as column vectors

    u = [u1, u2, u3]^T,    v = [v1, v2, v3]^T    (2)

The standard inner product (sometimes called the dot product) of the above vectors in three-dimensional space is determined as

    (u, v) = u^T v    (3)

and is also given by

    (u, v) = Σ (i = 1 to n) u_i v_i    (4)

where n is the dimension of the vector space, equal to 3 in the above case. u^T is the transpose of the "matrix" u, obtained by interchanging the rows and columns of u. The standard inner product is also denoted u · v.

Example. Let u and v be column vectors in two-dimensional vector space such that

    u = [5, 1]^T,    v = [2, 3]^T

then the standard inner product of u and v is calculated as

    (u, v) = u^T v = (5)(2) + (1)(3) = 13

Linear Independence of Vectors

Two vectors v1 and v2 in a vector space are linearly independent if we can satisfy the condition

    a1 v1 + a2 v2 = 0    (5)

only when the constants a1 and a2 are equal to zero. For a more general case, let S = {v1, v2, ..., vk} be a set of distinct vectors in a vector space V. Then S is said to be linearly independent if

    a1 v1 + a2 v2 + ... + ak vk = 0    (6)

holds only when

    a1 = a2 = ... = ak = 0    (7)

It should be emphasized that for any set of distinct vectors S = {v1, v2, ..., vk} condition (6) will hold if we choose a1, a2, ..., ak all equal to zero. However, if in the given set of distinct vectors S we can express one vector in terms of the others, they are linearly dependent; in this case constants a1, a2, ..., ak exist that are not all zero.

Example. Check linear independence for the following set of vectors in a four-dimensional vector space:

    v1 = [1, 1, 0, 0]^T,  v2 = [1, 0, 1, 0]^T,  v3 = [0, 0, 1, 1]^T,  v4 = [0, 1, 0, 1]^T

In order to determine if the vectors are linearly independent we need to find the values of a1, a2, a3, and a4 such that

    a1 v1 + a2 v2 + a3 v3 + a4 v4 = 0

Substituting for the vectors, we have four simultaneous equations of the form

    1 a1 + 1 a2 + 0 a3 + 0 a4 = 0
    1 a1 + 0 a2 + 0 a3 + 1 a4 = 0
    0 a1 + 1 a2 + 1 a3 + 0 a4 = 0
    0 a1 + 0 a2 + 1 a3 + 1 a4 = 0

Solving as we usually do, we get a1 = 1, a2 = -1, a3 = 1, and a4 = -1, and the expression is

    v1 - v2 + v3 - v4 = 0

Thus we have found values for the constants, not all equal to zero, that satisfy condition (6). We conclude that the given vectors are linearly dependent, even though each pair is linearly independent.

Example. Check for linear independence of the following set of vectors:

    u = [1, 2]^T,    v = [2, 4]^T

The vectors are dependent, since v can be expressed in terms of u as v = 2u.

Orthogonality of Vectors

By definition, two vectors are orthogonal to each other if their standard inner product (dot product) is equal to zero, i.e.

    (u, v) = u · v = 0    (8)

Example. Check for orthogonality between vectors u and v in a two-dimensional vector space, where

    u = [-3, 1]^T,    v = [2, 6]^T

The inner product is equal to

    (u, v) = (-3)(2) + (1)(6) = -6 + 6 = 0

Thus the vectors u and v are orthogonal to each other.

Definition: p is A-conjugate to p̂ if (p, A p̂) = 0, i.e. p is orthogonal to A p̂, where A is a matrix.

Construct a vector p2 orthogonal to p1, given v, using the Gram-Schmidt method

Given two linearly independent vectors p1 and v, construct a vector p2 orthogonal to p1. Thus the inner product of p1 and p2 is (p2, p1) = 0. Let p2 be a linear combination of v and p1:

    p2 = v + β p1    (9)

Substituting for p2 in the condition for orthogonality,

    (v + β p1, p1) = 0
    (v, p1) + β (p1, p1) = 0

Solving for β,

    β = -(v, p1) / (p1, p1)    (10)

where β is the scaling factor multiplying the vector p1 in order to find a vector p2 orthogonal to it.

Example. Find a vector p2 orthogonal to vector p1, where

    p1 = [3, 2]^T,    v = [5, 1]^T

The scaling factor

    β = -(v, p1) / (p1, p1) = -[(5)(3) + (1)(2)] / [(3)(3) + (2)(2)] = -17/13

The vector p2 is

    p2 = v + β p1 = [5, 1]^T - (17/13) [3, 2]^T = [1.077, -1.615]^T
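The β computation of Eq. (10) and the example above can be verified directly. A Python sketch:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalize(v, p1):
    """Gram-Schmidt step: return p2 = v + beta*p1 with (p2, p1) = 0,
    where beta = -(v, p1)/(p1, p1)."""
    beta = -dot(v, p1) / dot(p1, p1)
    return [vi + beta * pi for vi, pi in zip(v, p1)]

p2 = orthogonalize([5.0, 1.0], [3.0, 2.0])   # the worked example above
```

The same one-line formula, applied against each previously constructed vector in turn, is what generates the mutually orthogonal (or, with the A-weighted inner product, A-conjugate) direction sets used by the conjugate gradient methods later in this chapter.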
Construct a transformed vector A p2 orthogonal to A p1, given v, such that v and p1 are independent

Let A be a coefficient matrix formed by the system of simultaneous equations to be solved; p is the unknown dependent variable, written as a column vector. In order to converge to a single value for p, the search is carried out starting with an initial guess at p, say p1. The next estimate of p, p2, is such that A p2 is orthogonal to A p1:

    p2 = v + β p1    (11)
    A p2 = A v + β A p1    (12)

Since (A p2, A p1) = 0, substituting for A p2 as in the previous derivation we get

    (A v + β A p1, A p1) = 0    (13)
    (A v, A p1) + β (A p1, A p1) = 0    (14)

Solving the above expression for β,

    β = -(A v, A p1) / (A p1, A p1)    (15)
    p2 = v + β p1    (16)

Example. Using the vectors given in the above example and the coefficient matrix

    A = [ 1  3 ]
        [ 2  4 ]

find a vector p2 such that A p2 is orthogonal to A p1. Here

    A p1 = [9, 14]^T,    A v = [8, 14]^T

The scaling factor

    β = -(A v, A p1) / (A p1, A p1) = -[(8)(9) + (14)(14)] / [(9)(9) + (14)(14)] = -268/277

The vector p2 is

    p2 = v + β p1 = [5, 1]^T - (268/277) [3, 2]^T = [2.097, -0.935]^T

Construct a vector p3 orthogonal to vectors p1 and p2

Given another vector v, let p3 be a linear combination of v, p1, and p2:

    p3 = v + β1 p1 + β2 p2    (17)

We know that the three vectors p1, p2, and p3 are mutually orthogonal; thus

    (p3, p1) = 0,    (p3, p2) = 0,    (p2, p1) = 0

Substituting for p3 in the first condition,

    (v + β1 p1 + β2 p2, p1) = 0    (18)
    (v, p1) + β1 (p1, p1) + β2 (p2, p1) = 0    (19)

In the above expression the term containing β2 is equal to zero, as (p2, p1) = 0. We now have the scaling factor β1 in terms of the vectors as

    β1 = -(v, p1) / (p1, p1)    (20)

Similarly, substituting for p3 in the second condition, we obtain an expression for β2:

    β2 = -(v, p2) / (p2, p2)    (21)

Example. Find a vector p3 orthogonal to p1 and p2, where

    p1 = [3, 2, 1]^T,    p2 = [-4, 5, 2]^T,    v = [5, 1, 2]^T

The scaling factors:

    β1 = -(v, p1)/(p1, p1) = -[(5)(3) + (1)(2) + (2)(1)] / [9 + 4 + 1] = -19/14
    β2 = -(v, p2)/(p2, p2) = -[(5)(-4) + (1)(5) + (2)(2)] / [16 + 25 + 4] = 11/45

The vector p3 is

    p3 = v + β1 p1 + β2 p2 = [-0.0492, -0.4921, 1.1317]^T

Conjugate Gradient Methods in Reservoir Simulation

(Taken from notes by R. "Chick" Wattenbarger, 1985.)

This section discusses the conjugate gradient (CG) and conjugate residual (CR) methods used in reservoir simulation. The conjugate gradient method was introduced in 1952 by Hestenes and Stiefel as an iterative technique for solving systems of linear equations. CG methods initially received a great deal of attention, but interest lessened in favor of successive over-relaxation (SOR). During the 1970's, the CG and CR methods gained popularity in reservoir
simulation as the petroleum industry attempted to simulate increasingly difficult problems.

Reservoir simulators model the behavior of petroleum reservoirs by solving partial differential equations that mathematically describe the relevant physical processes occurring in the reservoir. These partial differential equations are usually solved using finite differences. The finite difference solution creates a system of equations that may be expressed in the matrix form

    A x = b

where A is an N x N real matrix, x is the unknown solution vector, b is a real vector, and N is the number of equations to be solved. In reservoir simulation these matrices are sparse (containing mostly zero elements), banded, and conventionally diagonally dominant and nearly symmetric.

The methods used to solve these problems are generally classified as either direct or iterative. Direct methods usually employ some form of Gaussian elimination. The work and storage requirements of direct methods are proportional to the matrix bandwidth. The matrix bandwidth is determined by a product of the problem's two smallest dimensions, and is large enough in many problems to make direct methods impractical. The advantage of direct methods is that they always solve the matrix problem. The D4 ordering technique is a popular direct method in reservoir simulation.

Iterative methods are well suited to solving many reservoir simulation problems because they take advantage of the sparseness of the matrix being solved. The work requirement is proportional to the number of iterations required times the number of equations being solved. Iterative methods usually require less storage and less work than direct methods, but may fail to converge. Because iterative techniques can be solved to any tolerance of accuracy, and direct methods are subject to roundoff error, the relative accuracy of these techniques is not a central issue.

Initially, the standard iterative methods used in reservoir simulation were the Strongly Implicit Procedure (SIP) and the
Successive Over-Relaxation (SOR) method. For best results these methods require the trial-and-error choice of iteration parameters. These techniques work well for the type of problems encountered in early simulation of nearly homogeneous reservoirs using the IMPES (implicit pressure, explicit saturation) formulation. These problems produced positive definite, nearly symmetric matrices. However, for nonhomogeneous problems with high transmissibility ratios, SIP and SOR may fail to converge, or converge very slowly.

Conjugate gradient methods have become popular in reservoir simulation because, though essentially iterative, they are guaranteed to converge after N iterations. For this reason CG methods are called semi-iterative. The CG method proposed by Watts (1981) is a means of handling the nonhomogeneous but nearly symmetric problems resulting from the IMPES formulation. With the increased interest in EOR recovery, more difficult problems had to be solved. Thermal problems in particular are solved with formulations that yield ill-conditioned, highly non-symmetric matrices. Because the early CG methods required a symmetric A matrix, a number of variations have been developed. The most prominent of these are the conjugate residual (CR) methods. The basis of the CG and CR methods is described on the following pages.

Classification of Iterative Matrix Solution Techniques

Given Ax = b, we seek a sequence x^(k+1) = G_k(x^k) such that

    lim (k → ∞) x^k = x

Iterative techniques may be classified as follows:

    Degree m:    the number of previous iterates used to form x^(k+1)
    Stationary:  if G^k = G for all k
    Linear:      if G^k is a linear function of x^k

First-degree (m = 1) iterative techniques may be expressed as

    x^(k+1) = x^k + c^k p^k    or    x^(k+1) = T_k x^k + f

The Successive Over-Relaxation (SOR) and Strongly Implicit Procedure (SIP) techniques are the most commonly used methods based on this concept. The following discussion will use the first form, with c^k as a scalar α_k. Thus

    x^(k+1) = x^k + α_k p^k

The significance of this equation may be interpreted as follows: over the iteration from k to k+1, x^k proceeds in the p^k
direction for a distance α_k along the surface φ(x), where φ(x) is a measure of the error in x. The choice of α and p depends on the iterative technique; α is usually chosen to minimize φ(x) in the p^k direction. The choice of the direction vector p^k is often related to the gradient of φ(x).

Error Gradient Minimization

For the linear system Ax = b we can define the following measures of error at iteration k:

    e^k = x - x^k       (error vector)
    r^k = b - A x^k     (residual vector)

From these basic error measurements we can define a set of error functions φ(x) which can be expressed in the form

    φ(x) = (r, S r)

To minimize φ in the p^k direction we must choose α_k such that

    d/dα φ(x^k + α_k p^k) = 0

thus

    α_k = (A p, S r) / (A p, S A p)

The gradient of φ(x^k) is

    ∇φ = -2 A^T S r

Common choices of S (for symmetric A):

    1. S = A^-T A^-1:  φ = (e, e);    α_k = (p, A^-1 r) / (p, p)
    2. S = I:          φ = (r, r);    α_k = (A p, r) / (A p, A p);  ∇φ = -2 A^T r
    3. S = A^-1:       φ = (e, A e);  α_k = (p, r) / (p, A p);      ∇φ = -2 r

Choice 1 is not useful, since it involves A^-1. Choice 3 is only useful for positive definite (PD) matrices; otherwise the denominator (p, A p) may be zero. Results similar to choice 3 can be obtained using the quadratic function F(x) = (A x, x)/2 - (b, x).

Method of Steepest Descent

Algorithm: at the beginning of the timestep (i.e. at the start of iterations) let

    x^0 = arbitrary,    r^0 = b - A x^0

For iterations k = 0, 1, 2, ... the calculations are carried out as follows:

    p^k = r^k
    α_k = (r^k, p^k) / (p^k, A p^k)

The unknown vector and the residual vector are updated as follows:

    x^(k+1) = x^k + α_k p^k
    r^(k+1) = b - A x^(k+1) = r^k - α_k A p^k

The above steps are repeated until convergence. Thus the direction vector p^k is chosen in the direction of steepest descent, that is p^k = r^k, and α_k is chosen to minimize (e, A e) in the p^k direction, where e is the error vector. Although (e, A e) and F(x) are minimized locally in the most rapid manner, this method can be very slow, particularly for ill-conditioned matrices. This method tends to overshoot.

Conjugate Direction Techniques

The direction vectors p^0, p^1, ..., p^(N-1) are selected to be nonzero and mutually A-conjugate; that is,

    (A p^i, p^j) = 0    for i ≠ j

The Gram-Schmidt orthogonalization procedure can be used to generate A-conjugate vectors p^k from any set of linearly independent vectors v^i, i = 0, ..., k-1. Hestenes and Stiefel (1952) show
that the CG method is equivalent to Gaussian elimination when the v^i are chosen as the set of unit base vectors, i.e. (0, 0, ..., 0, 1), (0, ..., 1, 0), etc. The conjugate gradient method is a conjugate direction technique with the v^i chosen to be equal to the residual vectors.

Iterative Matrix Methods
Classical Conjugate Gradient Method

Algorithm: At the beginning of the timestep (i.e., at the start of iterations) let x^0 = arbitrary and p^0 = r^0 = b - A x^0. For iterations k = 0, 1, 2, ... the calculations are carried out as follows:

α_k = (p^k, r^k) / (p^k, A p^k)

The unknown vector and the residual vector are updated as follows:

x^{k+1} = x^k + α_k p^k
r^{k+1} = b - A x^{k+1} = r^k - α_k A p^k

The magnitude of the scaling factor is determined as

β_k = -(r^{k+1}, A p^k) / (p^k, A p^k) = (r^{k+1}, r^{k+1}) / (r^k, r^k)

The new value of the search direction vector is determined from the previous known value as follows:

p^{k+1} = r^{k+1} + β_k p^k

Notes:
- A must be symmetric positive definite (SPD).
- α_k is chosen to minimize (e, r) or F(x). Thus (e, r) decreases every iteration, but (r, r) may not [5].
- β_k is chosen such that (p^{k+1}, A p^j) = 0 for j <= k, i.e. the direction vector p^{k+1} is A-conjugate to the previous A p^j.
- The residual vectors r^0, r^1, ... are mutually orthogonal. Thus r^k = 0 for some k <= N; that is, the classical CG method converges in at most N iterations, neglecting roundoff errors.
- The larger the α_k, the more roundoff errors affect the orthogonality relations. However, the CG method is stable with respect to roundoff errors.
- The CG method varies from the method of steepest descent in its choice of direction vectors [2], making the CG method better suited to solving ill-conditioned matrices.

Iterative Matrix Methods
Generalized Conjugate Residual Method

Algorithm: At the beginning of the timestep (i.e., at the start of iterations) let x^0 = arbitrary and p^0 = r^0 = b - A x^0. For iterations k = 0, 1, 2, ... the calculations are carried out as follows:

α_k = (r^k, A p^k) / (A p^k, A p^k)

The unknown vector and the residual vector are updated as follows:

x^{k+1} = x^k + α_k p^k
r^{k+1} = b - A x^{k+1} = r^k - α_k A p^k

The magnitudes of the scaling factors are determined as

β_j^k = -(A r^{k+1}, A p^j) / (A p^j, A p^j),   j = 0, 1, 2, ..., k

The new value of the search direction vector is determined from the previous known values as follows:

p^{k+1} = r^{k+1} + Σ_{j=0}^{k} β_j^k p^j

Notes:
- A does not have to be symmetric, but A + A^T must be positive definite.
- α_k is chosen to minimize (r^{k+1}, r^{k+1}). Thus (r^k, r^k) decreases every iteration.
- The orthogonality coefficients β_j^k are chosen to make (A p^i, A p^j) = 0 for i different from j.
- Without roundoff error, CR gives the exact solution in N iterations or less.
- The work and storage grow with the number of iterations with the CR method. Each iteration a new set of β's must be calculated and a new search direction vector p calculated and stored; the number of β's in the set grows by one for each iteration.

Iterative Matrix Methods
ORTHOMIN Method

Algorithm: At the beginning of the timestep (i.e., at the start of iterations) let x^0 = arbitrary, r^0 = b - A x^0, and p^0 = v^0 = H^-1 r^0. For iterations k = 0, 1, 2, ... the calculations are carried out as follows. First a shift vector is calculated:

v^k = H^-1 r^k

where H is an approximation of A such that H^-1 is approximately equal to A^-1. This step is called preconditioning.

α_k = (r^k, A p^k) / (A p^k, A p^k)

The unknown vector and the residual vector are updated as follows:

x^{k+1} = x^k + α_k p^k
r^{k+1} = b - A x^{k+1} = r^k - α_k A p^k

The magnitudes of the scaling factors are determined as

β_j^k = -(A v^{k+1}, A p^j) / (A p^j, A p^j)

The new value of the search direction vector is determined from the previous known values as follows:

p^{k+1} = v^{k+1} + Σ_{j=max(0, k-m+1)}^{k} β_j^k p^j

CR requires large amounts of work and storage. This led Vinsome to propose a truncated version of CR called ORTHOMIN. Instead of making A p^{k+1} orthogonal to all the preceding direction vectors A p^j, ORTHOMIN makes A p^{k+1} orthogonal only to the last m vectors. This requires less storage and work than the classical CR; however, the convergence is no longer guaranteed in N iterations [1]. ORTHOMIN requires that a shift vector be computed each iteration by preconditioning, approximating x = A^-1 r with v = H^-1 r. Any preconditioner can be used which suits the problem being solved. ORTHOMIN is the most commonly used method in the industry for large problems.

Iterative Matrix Methods
Convergence
Acceleration

The convergence of the CG and CR methods depends on the eigenspectrum of the matrix A. The convergence may be accelerated by solving the equivalent system

H^-1 A x = H^-1 b

where H is sparse, easy to factor, and a good approximation of A. The closer H is to A, the faster the convergence. This is called preconditioning, and it should reduce the work requirements [4]. For the case where A is symmetric, A may be decomposed as L D L^T, where L is a lower triangular matrix and D is a diagonal matrix. For the nonsymmetric case, A may be factored as L U, where U is an upper triangular matrix. Generally these decompositions are not as sparse as A, but since H only needs to be an approximation of A, the factorization only needs to be approximate. Several papers describe decomposition methods which retain the sparseness of A to some degree. Most of these methods take advantage of the fact that the magnitudes of the elements in the decomposition matrices decrease with distance from the original bands. This allows a factorization of the A matrix which has a limited number of bands, the more insignificant bands being set to zero and compensated for by the remaining bands.

Conclusion

The CG methods have gained wide acceptance in the petroleum industry because of their robustness in solving difficult problems. The nontruncated versions of the CG and CR methods are guaranteed to converge within N iterations, but may do so slowly. However, of the preconditioned methods, only the ORTHOMIN procedure theoretically guarantees convergence in N iterations. ORTHOMIN and its variations seem to be the most widely used, even though the CG methods have now been adapted to handle nonsymmetric matrices. A majority of current research is devoted to developing preconditioning methods for faster, more efficient convergence.

Iterative Matrix Methods
Nomenclature

A      coefficient matrix
b      right-hand-side vector
CD     conjugate direction
CG     conjugate gradient
CR     conjugate residual
e^k    error vector, x - x^k
E(x)   error function, (r, S r)
F(x)   the quadratic function (A x, x)/2 - (b, x)
H      preconditioning matrix
N      number of equations being solved
PD     positive definite
p^k    search direction vector
r^k    residual vector, b - A x^k
v^i    vector used for orthogonalization
x      solution vector

GREEK LETTERS
α      minimization parameter
β      orthogonality scaling factor

LINEAR ALGEBRA DEFINITIONS
- A^T has elements a_ji where A has elements a_ij.
- A is diagonally dominant if |a_ii| >= Σ_{j=1, j≠i}^{N} |a_ij| for all i.
- A is PD if (A x, x) > 0 for all x ≠ 0.
- A is PD if A is symmetric and strictly diagonally dominant. If A is PD, then Ax = b has a unique solution.
- (u, v) = u^T v = Σ_{i=1}^{N} u_i v_i, the inner (or scalar) product of u and v.
- A set of vectors v_i is linearly dependent if there exist scalars λ_i, not all 0, such that Σ λ_i v_i = 0.
- If (u, v) = 0, u and v are orthogonal.

Note: Matrices are represented as uppercase letters. Vectors are represented as lowercase letters. Scalars are represented as lowercase Greek letters.

Iterative Matrix Methods
REFERENCES

1. Eisenstat, S., Elman, H., and Schultz, M.: "Variational Methods for Nonsymmetric Systems of Linear Equations," SIAM Journal on Numerical Analysis, Vol. 20, No. 2 (April 1983) 345-357.
2. Jea, K. C. and Young, D. M.: "Generalized Conjugate Gradient Acceleration of Nonsymmetrizable Iterative Methods," Linear Algebra and its Applications, 34 (1980) 159-194.
3. Vinsome, P.: "Orthomin, an Iterative Method for Solving Sparse Banded Sets of Simultaneous Linear Equations," paper SPE 5729 presented at the SPE-AIME Fourth Symposium on Numerical Simulation of Reservoir Performance, Los Angeles, CA, Feb. 1976.
4. Wallis, J. R.: "Incomplete Gaussian Elimination as a Preconditioning for Generalized Conjugate Gradient Acceleration," paper SPE 12265 presented at the Seventh SPE Symposium on Reservoir Simulation, San Francisco, 1983.
5. Watts, J. W.: "A Conjugate Gradient Truncated Direct Method for the Iterative Solution of the Reservoir Simulation Pressure Equation," SPE Journal (June 1981) 345-353.
6. Westlake, J. R.: A Handbook of Numerical Matrix Inversion and Solution of Linear Equations, John Wiley & Sons, Inc., New York, 1968.
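The classical conjugate gradient recurrences presented in this chapter translate directly into a short program. The sketch below is plain Python rather than the Fortran of Appendix B (the function name and the small SPD test system are illustrative only):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Classical CG for a symmetric positive definite A (lists of lists)."""
    n = len(b)
    if max_iter is None:
        max_iter = 10 * n
    x = [0.0] * n                                     # arbitrary starting guess x^0
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    p = r[:]                                          # p^0 = r^0
    rr = sum(ri * ri for ri in r)                     # (r, r)
    for _ in range(max_iter):
        if rr < tol:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))   # (r,r)/(p,Ap)
        for i in range(n):
            x[i] += alpha * p[i]                      # x^{k+1} = x^k + alpha p^k
            r[i] -= alpha * Ap[i]                     # r^{k+1} = r^k - alpha A p^k
        rr_new = sum(ri * ri for ri in r)
        beta = rr_new / rr                            # (r^{k+1},r^{k+1})/(r^k,r^k)
        p = [r[i] + beta * p[i] for i in range(n)]    # new search direction
        rr = rr_new
    return x
```

For the 2x2 system used in the numerical illustrations later in this chapter, A = [[10, 9], [9, 10]] and b = [11, 8], this routine reaches the solution x = (2, -1) within two iterations, consistent with the N-iteration convergence property noted above.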
Watts' Conjugate Gradient Method

Summary: A preconditioned CG method for nearly symmetric IMPES matrices. The preconditioning method efficiently handles coefficient anisotropy.

Solving the problem Ax = b, overview:

Step 1: construct a symmetric matrix A_s
Step 2: r^n = b - A x^n
Step 3: if ||r^n|| < tolerance, stop
Step 4: solve A_s δx = r^n with the PCG method
Step 5: x^{n+1} = x^n + δx
Step 6: go to Step 2

Step 1: For IMPES, A is almost symmetric, i.e. a_E(i) ≠ a_W(i+1), but nearly so. For our symmetric matrix A_s, let

a_E^s(i) = a_W^s(i+1) = [a_E(i) + a_W(i+1)] / 2

Step 4: Solution of A_s δx = r^n with the PCG method.

Step 4a (preconditioning step): Approximate H ≈ A_s. Solve H v = r, i.e. v = H^-1 r; v is the "shift vector".

Iterative Matrix Methods

Step 4b (conjugate gradient step): use Watts' Equations 13-19 to compute α, β, A p, r, etc.

Step 4c: repeat 4a and 4b until r -> 0. Note that r here is the residual for r^n - A_s δx, not for the original Ax = b.

Step 4a, the "preconditioning step": H is chosen so that H = LDU is easy to form. This is a "factorization". Once we have L, D, and U, then H^-1 r is easy:

H v = L D U v = r
D U v = L^-1 r        (a forward substitution)
U v = D^-1 L^-1 r     (a divide)
v = U^-1 D^-1 L^-1 r  (a backward substitution)

Finding H = LDU: Any matrix can be factorized into LDU (or LU), but finding the elements of L and U may not be easy, so H is selected so that it is reasonably easy to find L and U, while H remains reasonably close to A.

Finding U: If we do the forward elimination of Gaussian elimination, we define U:

Forward Elimination: A -> U

or L U v = b, so U v = L^-1 b. Of course this is time consuming, and by then we would already have solved the problem.

Incomplete Gaussian Elimination: Watts noticed that the "induced" elements of U get smaller as we get further from the outer diagonal; each is an "order" smaller than the element above. So he simply stops his Gaussian elimination after only a few bands have been calculated. He does a few steps of forward elimination and gets an approximate U, call it U', so that A ≈ L U'. He makes some adjustments to make U' more accurate, then assumes that L = (U')^T. He also sets the main diagonal of U' to 1 and uses a D matrix, so H = (U')^T D U'.
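The two-part iteration (preconditioning step 4a, conjugate gradient step 4b) can be sketched as follows. Watts builds H by incomplete Gaussian elimination; the sketch below substitutes a deliberately simple diagonal (Jacobi) preconditioner H = diag(A_s) so that the role of the shift vector v = H^-1 r stands out. The function name and the preconditioner choice are illustrative, not Watts' actual implementation:

```python
def preconditioned_cg(A, b, tol=1e-10, max_iter=100):
    """PCG sketch: H = diag(A) stands in for Watts' incomplete-LDU factor."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                       # r^0 = b - A x^0 with x^0 = 0
    v = [r[i] / A[i][i] for i in range(n)]         # step 4a: shift vector v = H^-1 r
    p = v[:]                                       # p^0 = v^0
    rv = sum(r[i] * v[i] for i in range(n))        # (r, v)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rv / sum(p[i] * Ap[i] for i in range(n))   # (r,v)/(p,Ap)
        for i in range(n):
            x[i] += alpha * p[i]
            r[i] -= alpha * Ap[i]
        if sum(ri * ri for ri in r) < tol:         # converged on A_s dx = r^n
            break
        v = [r[i] / A[i][i] for i in range(n)]     # step 4a again: preconditioning
        rv_new = sum(r[i] * v[i] for i in range(n))
        beta = rv_new / rv                         # (r^{k+1},v^{k+1})/(r^k,v^k)
        p = [v[i] + beta * p[i] for i in range(n)] # new search direction
        rv = rv_new
    return x
```

Swapping the diagonal solve for a forward/backward substitution against an incomplete LDU factor recovers the structure of Watts' method; the surrounding recurrences are unchanged.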
Anisotropy: Watts used D2 ordering. This seemed to handle coefficient anisotropy in any direction.

Step 4b (conjugate gradient step): Each time a preconditioner vector v is computed in Step 4a, a conjugate gradient step is taken. Together these comprise one iteration in solving A_s δx = r^n. The steps are as follows.

Initialize:

p^0 = v^0 = H^-1 r^0   (preconditioning)

Iterate, k = 0, 1, ...:

α_k = (r^k, v^k) / (p^k, A_s p^k)
x^{k+1} = x^k + α_k p^k
r^{k+1} = r^k - α_k A_s p^k
v^{k+1} = H^-1 r^{k+1}   (preconditioning)
β_k = (r^{k+1}, v^{k+1}) / (r^k, v^k)
p^{k+1} = v^{k+1} + β_k p^k

This iteration process is continued until convergence is reached for A_s δx = r^n.

Iterative Matrix Methods
Appendix A: Derivation of α_k

With x^{k+1} = x^k + α p^k, the error and residual after the step are

e^{k+1} = e^k - α p^k
r^{k+1} = r^k - α A p^k

Choosing α to minimize (e^{k+1}, r^{k+1}) = (e^{k+1}, A e^{k+1}), set

d/dα (e^{k+1}, A e^{k+1}) = 0

which for symmetric A gives

(e^k, A p^k) - α (p^k, A p^k) = 0

But we can show that (e^k, A p^k) = (A^T e^k, p^k). So if A is symmetric, (e^k, A p^k) = (A e^k, p^k) = (r^k, p^k), and

α = (p^k, r^k) / (p^k, A p^k)

which minimizes the error if A is symmetric and PD (i.e. (p, A p) > 0).

The following demonstrates, for a 2x2 matrix, that (A x, y) = (x, A^T y):

(A x, y) = (a11 x1 + a12 x2) y1 + (a21 x1 + a22 x2) y2
         = x1 (a11 y1 + a21 y2) + x2 (a12 y1 + a22 y2)
         = (x, A^T y)

So (e^k, A p^k) = (A^T e^k, p^k), as used above.

Iterative Matrix Methods
Appendix B: Some Computer Programs

- Conjugate Gradient (CG)
- Solve
- Orthomin
- MVprod
- DKR
- Fact2
- Ortho
- Block

SUBROUTINE CG (N, TOL, A, B, X, AP, P, R)
DIMENSION A(N,1), B(1), X(1), AP(1), P(1), R(1)
C
C CG solves Ax = b using the conjugate gradient method
C
C INPUT / OUTPUT / WORK:
C N
7 number of equations X 7 solution vector AP 7 Ap vector TOL 7 tolerance P direction vector A 7 symmetric positive definite matrix R 7 residual vector B 7 rhs vector 0000000 initialize RR00 DO lOI lN XI00 RIBD PIRD RR RR RIRI INUE ITERITER1 C calc Ap DO 100 I lN API00 DO 1001 lN APOAPIALDPD lOOCONTINUE C calc App APP00 DO 200I lN APP APP APIPI 200CONTINUE C updatexandr calc rr ALPHARRAPP RR 00 DO 300I lN XI XI ALPHAPI RI RI 7 ALPHA API RR R RIRI 300 CONTINUE IF RR LT TOL RETURN BETA RRRRN C neW search direction DO 400I lN PI RI BETAPI 400 CONTINUE IF ITER LT N10 GO TO 50 WRITE 6 CG did not converge RETURN END 26 Iterative Matrix Methods 000 SOLVE SUBROUTINE solve EXAMPLE SUBROUTINE this solves the matrix problem Ax b using ORTHOMIN with any preconditioner This is just an example to show how ORTHOMIN is used north 5 index 0 DO k 11000 index index l this is the restart feature IF index GT northl index 1 CALL preconditioner CALL ORTHOMININDEX END DO RETURN END 27 Iterative Matrix Methods L ORTHOMIN SUBROUTINE orthorninxrVAVpApApiApirnaxneq amp nocellnindexnxynorthidebug C C This subroutine calculates the orthogonalizations for C any preconditioning method and any indexing method C C INPUT K C V shi Vector AV AV Vector C max no of cells in xdir p search Vector C neq no of eqns per cell Ap Ap Vector C nocell no of cells OUTPUT C n no of unknowns x solution Vector C index no of orthrestart r residual Vector REAL 4 xrvAVpnxynorth1Apnxynorth1 amp AplAp DO 100 I ln piindex VI 100 CONTINUE IF index GT1 THEN calculate AV A V CALL MVJJroductVAVimaxneqnocell orthogonalize With beta ApAVApAp DO 200j lindexl iAV 00 DO 1 ln ApiAV ApiAV ApII AVI END DO beta ApiAV ApiApGl0E20 DO 1 ln piindex piindex beta pij END DO 200 CONTINUE ENDIF Ap A p CALL MVJJroductp1indexAp1indexirnaxneqnocell alpha rApApAp Apir 00 ApiApindex 00 DO 300 1n Apir Apir Apiindex ri ApiApindex ApiApindex Apiindex Apiindex 300 CONTINUE Apir ApiApindex1 0E20 updatexandr DO 400 1n xi xi alpha piindex ri ri alpha 
Apiindex 400 CONTINUE RETURN END 28 Iterative Matrix Methods MVPROD 00 SU39BROUTINE MVPRODXXYYIMAXNEQNOCELL C This subroutine solves for the 2D matrix problem C AX y Where A is a 2D matrix with coefficients C ACGRJCJJ etc Calculate y for given x C C INPUT c XX gt input vector C IMAX gt no of cells in xdirection c NEQ gt no of equations per cell C NOCELL gt no of cells C OUTPUT c YY gt output vector C INCLUDE 39CGZDCMN DIMENSION XX1YY1 NESQ NEQNEQ IMNEQ IMAXNEQ DO 30 I 1NOCELL L 11NESQ LC I1NEQ LN LCIMNEQ LW LCNEQ LE LCNEQ Ls LCIMNEQ DO 20 II 1NEQ SUMN 00 SUMW 00 SUMC 00 SUME 00 SUMS 00 INDC L II DO 10 II 1NEQ SUMN SUMN ANNaNDCXXLNII SUMW SUMW AWWINDCXXLWH SUMC SUMC ACCINDCXXLCH SUME SUME AEEINDCXXLEH SUMS SUMS ASSINDCXXLSH INDC INDC NEQ 10 CONTINUE YYLCJJ SUMNSUMWSUMCSUMESUMS 20 CONTINUE 30 CONTINUE RETURN END 29 Iterative Matrix Methods subroutine dkrimax ijmax neq nocell iteri ldebug o dkr preconditioning method include 39paramcmn39 include 39cg73dcmn39 real8 tmplnxeq nxeq tmp2nxeq nxeq tmp3nxeq nxeq l tvplnxeq tvp2nxeq logical ldebugl if ldebugl then write 39entering dkr39 write8 39entering dkr39 endif calculate main diagonal d once if iteri eq 1 then do i l nocell it i ijmax if iw gt 0 then call mmimultawl l i dl l iw tmpl neq call mmimulttmpl ael l iw tmp2 neq else call m7eq stmp2 0D0 neq endif call mmisubacl l i tmp2 tmp3 neq if in gt 0 then call mmimultanl l i dl l in tmpl neq call mmimulttmpl asl l in tmp2 neq call mmisubtmp3 tmp2 tmp3 neq endif if it gt 0 then call mmimultatl l i dl 1 it tmpl neq call mmimulttmpl abl 1 it tmp2 neq call mmisubtmp3 tmp2 tmp3 neq endif call m7invtmp3 dl l i neq endd endif c forward it i ijmax if iw gt 0 then call mvimultawl l i zl iw tvpl neq 30 Iterative Matrix Methods o C C else call v7eq stvp1 0D0 neq endif if in gt 0 then call mvimu1tan1 1 1 21 in tvp2 neq call vviaddtvp1 tvp2 tvp1 neq endif if it gt 0 then call mvimu1tat1 1 1 21 it tvp2 neq call vviaddtvp1 tvp2 tvp1 neq endif call vvisubres1 i tvp1 y1 i neq call 
mvimu1td1 1 i y1 1 21 1 neq enddo do i noce11 1 1 ie i 1 is i imax ib i ijmax if ie 1e nocell then call mvimu1tae1 1 i v1 ie tvp1 neq e se call v7eq stvp1 0D0 neq endif if is 1e nocell then call mvimu1tas1 1 i v1 is tvp2 neq call vviaddtvp1 tvp2 tvp1 neq endif if ib 1e noce11 then call mvimu1tab1 1 i v1 ib tvp2 neq call vviaddtvp1 tvp2 tvp1 neq endif call vvisub 1 i tvp1 tvp1 neq call mvimu1td1 1 i tvp1 v1 i neq enddo write1 iterivi 1 i 1 neq 1 format1x 39iter 39 i4 3e2010 pause return end 31 Iterative Matrix Methods c fact2 subroutine fact2imax ijmax neq nocell iteri ldebug factorization 2 preconditioning metho 0 include 39paramcmn39 include 39cg73dcmn39 real8 tmplnxeq nxeq tmp2nxeq nxeq tmp3nxeq nxeq l tvplnxeq tvp2nxeq logical ldebugl if ldebugl then write 39entering fact239 write8 39entering fact239 endif c calculate diagonals dblb2b3b4b5b6clc2c3c4 once c c b5i atidipb2ip c c3i aniatidipc2ip c cli b5idinb4in atidipc4ip c bli awic3idilb4il atidipb6ip c b3i c3idilb2il b5idinc2in c c4i asiclidimabim c b4i blidijc4ijclidimb6im c b6i blidijabij c c2i c3idilabil b3idikb6ik c b2i aeib3idikc4ik b5idinabin c di laciblidijb2ij c amp b3i dikb4ik b5i dinb6in c amp c3idilc4ilclidimc2im c amp atidipabip c if iteri eq 1 then 0 i l nocell ij i l ik i imax 1 il i imax im iijmaximaxl ini ijmaxl ip i ijmax c calculate b5 if ip gt 0 then call mmimultatl l i dl 1 ip tmpl neq call mmimulttmpl b2l 1 ip b5l l i neq call misignb5l l i neq calculate c3 call mmimulttmpl c2l 1 ip tmpl neq e se 32 Iterative Matrix Methods call m7eq stmp2 0D0 neq endif call mmisuban1 1 i Uan c31 1 i neq c calculate c1 if ip gt 0 then call mmimultb51 1 i d1 1 in tmpl neq call mmimulttmp1 b41 1 in tmp2 neq call mmimultat1 1 i d1 1 ip tmpl neq call mmimulttmp1 c41 1 ip tmp3 neq call mmiaddtmp2 tmp3 c11 1 i neq call m7signc11 1 i neq endi c calculate b1 if 11 gt 0 then call mmimultc31 1 i d1 111 tmpl neq call mmimulttmp1 b41 1 il tmp2 neq else call m7eq stmp2 0D0 neq endif if ip gt 0 then call mmimultat1 1 i d1 
1 ip tmpl neq call mmimulttmp1 b61 1 ip tmp3 neq else call m7eq stmp3 0D0 neq endif call mmisubaw1 1 i tmp2 tmpl neq call mmisubtmp1 tmp3 b11 1 i neq c calculate b3 if 11 gt 0 then call mmimultc31 1 i d1 111 tmpl neq call mmimulttmp1 b21 1 il tmp2 neq if in gt 0 then call mmimultb51 1 i d1 1 in tmpl neq call mmimulttmp1 c21 1 in tmp3 neq else call m7eq stmp3 0D0 neq endif call mmiaddtmp2 tmp3 b31 1 i neq call m7signb31 1 i neq endi c calculate c4 if ip gt 0 then call mmimultc11 1 i d1 1 imtmp1 neq call mmimulttmp1 ab1 1 im tmp2 neq else call m7eq stmp2 0D0 neq 33 Iterative Matrix Methods endif call mmisubas1 1 i Uan c41 1 i neq c calculate b4 if ij gt 0 then callmmimultb11 1 i d1 1 ijtmp1 neq call mmimulttmp1 c41 1 ij tmp2 neq if ip gt 0 then call mmimultc11 1 i d1 1 imtmp1 neq call mmimulttmp1 b61 1 im tmp3 neq else call m7eq stmp3 0D0 neq endif call mmiaddtmp2 tmp3 b41 1 i neq call m7signb41 1 i neq endif c calculate b6 if ij gt 0 then callmmimultb11 1 i d1 1 ijtmp1 neq call mmimulttmp1 ab1 1 ij b61 1 i neq call m7signb61 1 i neq endif c calculate c2 if 11 gt 0 then call mmimultc31 1 i d1 111 tmpl neq call mmimulttmp1 ab1 1 il tmp2 neq call mmimultb31 1 i d1 1 iktmp1 neq call mmimulttmp1 b61 1 ik tmp3 neq call mmiaddtmp2 tmp3 c21 1 i neq call m7signc21 1 i neq endi c calculate b2 if 11 gt 0 then call mmimultb31 1 i d1 1 iktmp1 neq call mmimulttmp1 c41 1 ik Uan neq else call m7eq stmp2 0D0 neq endif if ip gt 0 then call mmimultb51 1 i d1 1 in tmpl neq call mmimulttmp1 ab1 1 in tmp3 neq else call m7eq stmp3 0D0 neq endif call mmisubae1 1 i tmp2 tmpl neq call mmisubtmp1 tmp3 b21 1 i neq 34 Iterative Matrix Methods calculate d 1f 1j gt 0 then call mmimultb11 1 1 d1 1 1 tmp2 neq call mmimulttmp2 b21 1 1 tmp3 neq else call m7eq stmp3 0D0 neq end1f call mmisubac1 1 1 tmp3 tmpl neq 1f 1k gt 0 then call mmimultb31 1 1 d1 1 1k tmp2 neq call mmimulttmp2 b41 1 1k tmp3 neq call mmisubtmp1 tmp3 tmpl neq 1f 11 gt 0 then call mmimultc31 1 1 d1 1 1l tmp2 neq call mmimulttmp2 c41 
1 1l tmp3 neq call mmisubtmp1 tmp3 tmpl neq 1f 1p gt 0 then call mmimultc11 1 1 d1 1 1m tmp2 neq call mmimulttmp2 c21 1 1m tmp3 neq call mmisubtmp1 tmp3 tmpl neq call mmimultb51 1 1 d1 1 1n tmp2 neq call mmimulttmp2 b61 1 1n tmp3 neq call mmisubtmp1 tmp3 tmpl ne call mmimultat1 1 1 d1 1 1p tmp2 neq call mmimulttmp2 ab1 1 1p tmp3 neq call mmisubtmp1 tmp3 tmpl neq end1f end1f end1f call m71nvtmp1 d1 11 neq en 0 end1f c forward subst1tut10n on the lower triangular matrix y1 res1b11z11b31z1 1max1 amp c31z1 1maxcl 1z11jmax1max 1 amp b51z11jmax1at1z11jmax 21 diYi cocoon 1k 1 1max 1 11 1 1max 1m 11jmax1max1 1n11jmax1 1 1 1jmax 1f 1j gt 0 then call mvimultb11 1 1 21 1 tvp2 neq else call v7eq stvp2 0D0 neq 3 5 Iterative Matrix Methods endif ca11 vvisubres1 i tvp2 tvp1 neq if ik gt 0 then call mvimu1tb31 1 1 21 ik tvp2 neq ca11vvisubtvp1 tvp2 tvp1 ne ca11mv7mu1tc31 1 1 21 11 tvp2 neq ca11vvisubtvp1 tvp2 tvp1 ne if ip gt 0 then ca11mv7mu1tc11 1 1 21 im tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq ca11 mvimu1tb51 1 1 21 in tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq ca11mv7mu1tat1 1 1 21 ip tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq endif endif ca11 v7eqtvp1 y1 i neq ca11mv7mu1td1 1 i y1 1 21 1 neq enddo c backsubstitution on the upper triangular matrix c c vi di ib2ivi1b4iviimax1 c amp c4iviimaxcZiviijmaximax1 c amp b6iviijmax1abiviijmax c do i noce11 1 1 lj i 1 1k i imax 1 11 i imax 1m iijmax imax1 1niijmax1 1p i ijmax if lj 1e noce11 then call mvimu1tb21 1 i v1 1j tvp2 neq else ca11 v7eq stvp2 0D0 neq endif ca11vvisub 1 i tvp2 tvp1 neq if 1k 1e noce11 then call mvimu1tb41 1 i v1 1k tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq ca11mv7mu1tc41 1 i v1 11 tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq if 1m 1e noce11 then ca11mv7mu1tc21 1 i v1 1m tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq if 1n 1e noce11 then call mvimu1tb61 1 i v1 1n tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq ca11mv7mu1tab1 1 i v1 1p tvp2 neq ca11vvisubtvp1 tvp2 tvp1 neq 36 Iterative Matrix Methods endif endif endif call mvimultd1 11 tvpl v1 i neq enddo 
return
end

Iterative Matrix Methods
BLOCK

SUBROUTINE BLOCK (NEQ)
C
C Put the matrix variables into a reduced form if phases
C are missing
C
INCLUDE 'COMEM'
NEQ = 1
IF (LWATER) NEQ = NEQ + 1
IF (LGAS) NEQ = NEQ + 1
DO 100 J = 1,JMAX
DO 100 I = 1,IMAX
JCX = 0
DO 100 JC = 1,3
IF (.NOT.LWATER .AND. JC.EQ.2) GO TO 100
IF (.NOT.LGAS .AND. JC.EQ.3) GO TO 100
JCX = JCX + 1
IRX = 0
DO 100 IR = 1,3
IF (.NOT.LWATER .AND. IR.EQ.2) GO TO 100
IF (.NOT.LGAS .AND. IR.EQ.3) GO TO 100
IRX = IRX + 1
AC(IRX,JCX,I,J) = AC(IR,JC,I,J)
AE(IRX,JCX,I,J) = AE(IR,JC,I,J)
AS(IRX,JCX,I,J) = AS(IR,JC,I,J)
AW(IRX,JCX,I,J) = AW(IR,JC,I,J)
AN(IRX,JCX,I,J) = AN(IR,JC,I,J)
100 CONTINUE
C residuals
DO 200 J = 1,JMAX
DO 200 I = 1,IMAX
IRX = 0
DO 200 IR = 1,3
IF (.NOT.LWATER .AND. IR.EQ.2) GO TO 200
IF (.NOT.LGAS .AND. IR.EQ.3) GO TO 200
IRX = IRX + 1
R(IRX,I,J) = R(IR,I,J)
200 CONTINUE
RETURN
END

Numerical Examples of Convergence Methods

Point Gauss-Seidel (PSOR with ω = 1)

This iterative solution converges as long as the matrix is diagonally dominant. The method is ideal for nonsymmetric matrices; however, the convergence is slowed when the degree of diagonal dominance is close to 1.

A x = b:

[ 10   9 ] [ x1 ]   [ 11 ]
[  9  10 ] [ x2 ] = [  8 ]

Use an arbitrary guess of x = (0, 0)^T; the final solution is x1 = 2, x2 = -1. (At the last iteration shown, the solution has not yet converged.)

[Figure: Point Successive Over-Relaxation Method, iteration path in the (x1, x2) plane]

Steepest Descent Method (SD)

This iterative solution is similar to PSOR: it converges as long as the matrix is diagonally dominant. It is also ideal for nonsymmetric matrices, and the convergence is slowed when the degree of diagonal dominance is close to 1. The same system, starting guess, and solution as above apply. (At the last iteration shown, the solution has not yet converged.)

[Figure: Steepest Descent Method, iteration path in the (x1, x2) plane]

Conjugate Gradient Method (CG)

This method guarantees convergence as
long as the matrix is symmetric. For this example CG converges on the third iteration. CG does not require diagonal dominance, but it diverges for nonsymmetric matrices.

A x = b:

[ 10   9 ] [ x1 ]   [ 11 ]
[  9  10 ] [ x2 ] = [  8 ]

Use an arbitrary guess of x = (0, 0)^T to get the final solution x1 = 2, x2 = -1.

[Figure: Conjugate Gradient Method, iteration path in the (x1, x2) plane]

Conjugate Residual Method (CR)

Unlike the conjugate gradient method, this method converges even if the matrix is not symmetric. However, the matrix A + A^T must be positive definite for convergence. The conjugate residual method converges on the third iteration for this example, with the same system, starting guess, and solution as above.

[Figure: Conjugate Residual Method, iteration path in the (x1, x2) plane]

Combination of Four Methods

This plot shows the four different convergence methods and the way they converge. The Steepest Descent, Conjugate Gradient, and Conjugate Residual methods have initial directions away from the correct answer, but they eventually converge; CG and CR converge on the third iteration. Steepest Descent takes a zig-zag path with more iterations. The PSOR method starts in the right direction but slows down as it approaches the correct answer because it takes successively smaller steps after each iteration.

[Figure: Combination of Four Methods, iteration paths in the (x1, x2) plane]

Appendix C: Debugging and Validation of a Fully Implicit Reservoir Simulator
Bryan Maggard 593

Introduction

The procedures discussed in this essay are directed toward the debugging and validation of fully implicit, control-volume-based finite difference reservoir simulators such as developed in PETE 604. Newton's Method will be the only nonlinear solution method discussed here; however, many of the ideas presented apply to IMPES reservoir simulators and to general numerical modeling.

Simulation Methodology

The goal of the calculations performed in reservoir
simulation is to advance the solution values of the independent variables in time by satisfying the finite difference equations (FDEs) over a sequence of timesteps. The FDEs of each and every timestep must be satisfied within tolerance for succeeding results to be valid. The FDEs are developed by combining conservation equations (e.g., a mass balance on the oil pseudo-component) in each grid block. One FDE is written for each independent variable in every grid block. Newton's Method is used to iteratively determine the solution of the nonlinear FDEs for each timestep. In applying Newton's Method, a residual formulation is commonly used; the FDEs are satisfied as the residuals and errors approach zero. For each Newton iteration a set of linear equations must be solved for the δx vector. The solution of the linear equations is used to update the solution of the nonlinear FDEs and the FDEs themselves.

Debugging and Validation Procedure

The procedure used to debug and validate a fully implicit simulator is presented below in outline format.

1. We need to know when we have solved an implicit timestep. Can we obtain the correct residual vector for the FDEs? In other words, are we calculating the RHS (right-hand side) of the matrix equation correctly? Confirm subroutine RESIDS:
   i. Are initial fluids in place correct (pore volumes, saturations, and solubilities)?
   ii. Are transmissibilities calculated correctly in subroutine TRANS?
   iii. Are rock and fluid properties calculated correctly in subroutine PROPS?
   iv. Are production/injection well rates calculated correctly in subroutine RATES?
   v. Are symmetric mass flow coefficients calculated correctly in subroutine COSYM?
      a. Are phase potentials between grid blocks calculated correctly and consistently?
      b. Are upstream phase relative permeabilities calculated and applied correctly? What about the case where the phase potential difference is zero?
      c. Are averaged/upstream phase formation volume factors and viscosities calculated and used correctly?
   vi. Are the residuals calculated in subroutine RESIDS correct?
      a. Are phase potentials between grid blocks calculated correctly and consistently?
      b. Are the volumetrically calculated mass-in-place values (e.g., XOIP) calculated correctly? Do they match hand calculations?
      c. Are interblock mass flow rates (e.g., OFLOW) calculated correctly?

2. Using Newton's Method for the solution of nonlinear equations requires the calculation of a Jacobian (i.e., the matrix of first partial derivatives of the residual vector with respect to each unknown independent variable). Can we calculate the Jacobian of the residual vector correctly?
   A. Analytical Jacobian:
      i. Have we differentiated correctly, including the chain rule and product rule terms? Note that it is possible to validate this using computer programs that perform symbolic mathematics, such as Mathematica, Macsyma, MathCad, and Maple, to name a few.
      ii. Check each matrix component separately (T, T', D, R, and Q').
   B. Numerical Jacobian:
      i. Have we saved the independent variables and original residuals before perturbing the system of equations?
      ii. Are we perturbing each coupled independent variable independently (remember the definition of a partial derivative)?
      iii. Are PROPS, COSYM, and RESIDS calculated correctly for each perturbation?
      iv. Is the first-order Taylor series approximation (chord slope) of each partial derivative calculated correctly?
      v. Are independent variables restored before the next perturbation of coupled independent variables is made?
      vi. Are residuals restored after the Jacobian calculations are made and before the matrix problem is solved?
      vii. Are the magnitudes of the perturbations acceptable? Note that the perturbation of an independent variable should not change upstream/downstream relationships, etc. Also note that the magnitude of the perturbation need not be the same for all equations.

3. Can we solve the matrix problems we generate in applying Newton's Method (i.e., can we invert the matrices)? Check the matrix solver.
One effective way to check the matrix solver is to set the RHS equal to the sum of the rows of the matrix If this is done the solution vector should be equal to 1000 This concept can be used to check for both errors and roundoff effects An advantage of this particular method is that we are checking the solvers effectiveness on the actual matrix we wish to solve A norm of the solution vector eg I xl I 2 would be a good indicator 4 Is our nonlinear solution method Newton s Method converging Check Newton s Method i Are the magnitudes of residuals errors becoming smaller as iterations proceed Examine norms of residualerror vectors ii Are we handling out of rance infeasible values of independent variables correctly ie SW gt lSor SW lt SWC etc iii If timestep cutbacks are used are we going back to the previous timestep s solution and trying again with a smaller timestep size iv Are we having overshoot problems due to extreme notlinearities residuals ipping signs and diverging very slow convergence large oscillating changes in 46 Iterative Matrix Methods independent variables between iterations Newton s Method has problems with extreme notlinearity Are there better methods Why aren t they commonly used V Is each timestep sufficiently converged before continuing with the next timestep One check is to look for oscillations and discontinuities in independent variables and matrix coeffiecients It is critical to converge each timestep before proceeding Continuing calculations with a garbage timestep solution can only yield more garbage a Norms of residual vector should approach 0 b Norms of error vector shold approach 0 c Material Balance Error MBE our accounting of all mass in all cells should approach 0 This includes local over last timestep raw local relative and cumulative relative and raw MBE relative to hydrocarbon production 5 Is our now running simulator giving realistic accurate results A TestingValidation Procedure i Always run the static case This includes now data 
sets on any reservoir simulator ii Compare results with available analytical solutions remember to match all assumptions used in developing the analytical solutions iii Compare with other reservoir simulators a Make sure both simulators are converging tight tolerances to insure accuracy b IMPES and Fully implicit simulators should both converge to a single answer as grid block size and timestep size both approach 0 c SPE comparative solution projects provide explicit data sets and extensive results to compare with A variety of simulation scenarios are covered These results have been widely supported by different independent research organizations B Testing Procedure During Continuing Modifications to Simulator i Collect a set oftest cases 47 Iterative Matrix Methods a Include some very simple test cases with most features turned off via data if possible b Add data sets that test features of incresing complexitydifficulty until all features have been tested ii As modi cations are made a Keep detailed notes of modifications as they are made b Document source code using comments as modifications are made c Systematically validate the simulator at each major modification step using the full set oftest cases iii Continue to add data level debugging output as modifications are made a Use LDEBUG switches instead of hardwired code Conditional compilation can be used and results in faster code but 1 Recompiling can becomew tedious for large programs and 2 End users must have source code access b In general every calculation made by the simulator will have to be validated Make is easy on yourself by programming debug output and documenting as you go C Consideration of Model Assumptions When analyzing simulator results it is necessary to consider the fundamental assumptions associated with the finite defference formulation this also applies to other model formulations i Only discrete values of independent variables are known While we calculate the pressure at the grid block 
center, we do not know the values of the continuous pressure surface between grid points.
ii. The values of a cell's independent variables are considered 0th-order discrete approximations. A cell is considered to be a stirred tank, with uniform properties throughout.
These assumptions, which are inherent to our control-volume-based finite difference model, can cause model behavior that is difficult to analyze. Examples include "burping" for adverse mobility ratios, numerical dispersion (especially in miscible displacement), and missing transient periods.

PETE 604 READING ASSIGNMENTS FOR EXAM A
1. Monograph 13, Chapter 6, Appendices A and B
2. Class notes, Chapters 1, 2, and 3
3. Aziz and Settari, Chapter 5
4. Selected papers on convergence:
a. Blair and Weinaug (BW68)
b. Coats (Coa68)
c. Letkeman and Ridings (LR70)
d. MacDonald and Coats (MC70)
e. Nolen and Berry (NB70)
f. Chappelear and Rogers (CR74)
5. Reprint No. 20:
a. Spillette et al. (sequential method)
b. Thomas and Thurnau (Adaptive Implicit Method, AIM)
6. Weinstein et al., SPE comparative case No. 2 (coning)

CHAPTER 1
Three-Phase Relative Permeability
Laboratory Tests
Stone's Method II
Relative Permeability Base
Conclusion

Three-Phase Relative Permeability
Three-phase relative permeability is commonly used for reservoir simulation. The most common method is Stone's Method II. This method was described in Stone's paper (JCPT, 1973). Before Stone's methods, there were no good methods for computing three-phase relative permeability.

Laboratory Tests
Steady-state laboratory methods were not very satisfactory because of the difficulty in laboratory experimentation. Instead, Stone's methods made use of two-phase experiments, which were then used to construct the three-phase relative permeability for oil. One experiment was water displacing oil, with no gas present. This gives experimental values for krw and krow. The second experiment was the displacement of oil by gas, with connate (irreducible) water present but not flowing. This gives experimental values for krg
and krog. These are both simple experiments because their relative permeabilities can be computed from dynamic laboratory data with the Welge method.

It is important to note that the first laboratory experiment is carried out with increasing water saturations. This is the usual direction which we expect water saturations to follow in the reservoir. If we expect the water saturation to decrease in the reservoir, we need to redesign the laboratory experiment. The same is true for gas saturation. We also expect the gas saturation to increase in the reservoir, so the laboratory test is designed to follow the same path. Again, if we expect gas saturation to decrease in the reservoir, we should redesign the laboratory experiments. Both of these phases are subject to hysteresis, of course, if the direction of saturation is reversed during the process. However, for the usual case it is assumed that these two laboratory experiments properly represent the relative permeabilities to water and gas.

Stone's Method II
It is assumed that krg depends only on Sg, and krw depends only on Sw, both independent of the oil saturation. This tends to be correct for a normal reservoir rock, because water is the most wetting phase and gas is the most non-wetting phase.

    krw = f(Sw) .......... (1)
    krg = f(Sg) .......... (2)

However, kro depends on both Sw and Sg, since oil is the phase of intermediate wettability. Consequently, Stone's method uses the two-phase experiments to "construct" a relative permeability for oil. This method is as follows:

    kro = (krow + krw)(krog + krg) - (krw + krg) .......... (3)

But if kro is negative, then set kro = 0.

Relative Permeability Base
This equation is not fully understood by many reservoir engineers. The confusion is that this equation assumes that the relative permeabilities are based on the effective permeability to oil with connate water present (kocw), not on absolute permeability. This method of defining relative permeability has become common in the industry. We must use caution to make
sure that we are using the data properly. The following sketches show two different ways to define relative permeabilities, and their corresponding relative permeability plots from laboratory experiments as mentioned above:

[Sketch: two definitions of relative permeability. (1) kr defined relative to kabs, kr = k/kabs. (2) kr defined relative to kocw, kr = k/kocw, where kocw is the effective oil permeability at Sg = 0 and Sw = Swc.]

The following two sets of curves show how the normal lab data look when each of the two kr bases is used.

[Curves reported in terms of base kabs: water-oil relative permeability, and gas-oil relative permeability (gas-oil run with Swc).]

[Curves reported in terms of base kocw (most common): water-oil relative permeability, and gas-oil relative permeability (gas-oil run with Swc); here krow = 1.0 at Swc.]

Consistency of Simulator Grid Data
The simulator must simulate effective permeabilities regardless of which base is used. The effective oil permeability can be given by either of the following:

    ko = kro (base kabs) x kabs .......... (4)
    ko = kro (base kocw) x kocw .......... (5)

There are two possibilities for the relative permeability data in the grid data.

Example: kabs = 100 md, kocw = 80 md.

    Grid k    Proper kr curves        ko at Swc
    100       kr base kabs            100 x 0.80 = 80 md
    80        kr base kocw            80 x 1.00 = 80 md
    100       kr base kocw (wrong)    100 x 1.00 = 100 md

Question: which kr base did Stone use? (This is not stated in the paper.)

Assume Stone's equation is written in terms of kr base kocw. Then, at Sw = Swc:

    kro = (1 + 0)(krog + krg) - (0 + krg) = krog .......... (6)

This traces the lab data. Also, at Sg = 0:

    kro = (krow + krw)(1 + 0) - (krw + 0) = krow .......... (7)

This also traces the lab data.

Aziz's Eqn. 2.119 (p. 36):

    kro = krocw [ (krow/krocw + krw)(krog/krocw + krg) - (krw + krg) ] .......... (8)

Note that Aziz and Settari's form also traces the lab data, but their krw and krg imply that a different kr basis was used for these.

CONCLUSION
1. Stone's original form should be used (Method II).
2. Lab data should be kr base kocw (Stone assumes this).
3. Aziz and Settari's Eq. 2.119 is not consistent with Stone's Method II.
4. k data in the grid data should be the effective oil permeability kocw, not kabs.

STONE II

    kro = (krow + krw)(krog + krg) - (krw + krg) .......... (9)

    kro/krocw = (krow/krocw + krw/krocw)(krog/krocw + krg/krocw) - (krw/krocw + krg/krocw) .......... (10)

Aziz and Settari (1977, p. 36):

    kro = krocw [ (krow/krocw + krw)(krog/krocw + krg) - (krw + krg) ] .......... (11)
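The difference between the two forms can be checked numerically. The following is a minimal sketch (the function names and sample kr values are hypothetical, chosen only to show that the two forms differ when krocw is not 1):

```python
# Sketch comparing Stone's Method II (original form, all kr taken
# relative to kocw) with the Aziz & Settari Eq. 2.119 variant.

def stone2(krow, krog, krw, krg):
    """Stone's Method II, original form:
    kro = (krow + krw)(krog + krg) - (krw + krg), truncated at zero."""
    kro = (krow + krw) * (krog + krg) - (krw + krg)
    return max(kro, 0.0)

def stone2_aziz(krow, krog, krw, krg, krocw):
    """Aziz & Settari form: only krow and krog are divided by krocw
    inside the product, and the result is scaled back by krocw."""
    kro = krocw * ((krow / krocw + krw) * (krog / krocw + krg)
                   - (krw + krg))
    return max(kro, 0.0)

# Hypothetical sample point with kr data on a kabs base (krocw = 0.8):
print(stone2(0.5, 0.7, 0.1, 0.05))            # approximately 0.3000
print(stone2_aziz(0.5, 0.7, 0.1, 0.05, 0.8))  # approximately 0.4165
```

With krocw = 1 the two functions coincide; with krocw = 0.8 they give about 0.300 and 0.4165, so the two relationships are not interchangeable.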
These are not the same relationships. However, it was noticed that some simulators used in the industry (such as VIP) are using the Aziz and Settari equations.

Problems
1. Calculate the effective permeability to oil using Stone's Method II for Sw = 0.50 and Sg = 0.10, given the following data: kabs = 40 md, ko(Swc = 0.20) = 30 md.

    Sw     krw    krow        Sg     krg    krog
    0.20   0.00   1.00        0.00   0.00   1.00
    0.50   0.10   0.60        0.10   0.00   0.80
    0.70   0.30   0.00        0.50   0.35   0.55
                              0.70   0.55   0.00

2. Calculate the effective permeability to oil using Stone's Method II for Sw = 0.50 and Sg = 0.05, given the following lab data: kabs = 40 md.

    Sw     krw    krow        Sg     krg    krog
    0.20   0.00   0.80        0.00   0.00   0.80
    0.50   0.10   0.50        0.05   0.00   0.70
    0.70   0.30   0.00        0.50   0.35   0.45
                              0.70   0.55   0.00

CHAPTER 6
Local Grid Refinement and Domain Decomposition
Taken from the Ph.D. dissertation of Liyan Zhao, August 1994

Introduction
Literature Review
Gridding Techniques
Local Grid Refinement Classification
Local Grid Refinement Construction
Numbering Scheme
Irregular Shaped Gridblocks
Errors Generated by Local Grid Refinement
Flow across Boundaries
Domain Decomposition Description
Domain Decomposition Classification
Matrix Level Domain Decomposition
Reservoir Level Domain Decomposition
Acceleration Methods
Proposed Methods
Nomenclature
References

Introduction
Local Grid Refinement (LGR) and Domain Decomposition (DD) are useful simulation techniques when greater detail is desired in a region of interest. The most important application of these techniques is in the simulation of horizontal wells. The interest in horizontal wells arises because these wells offer solutions to the problem of producing oil and gas in reservoirs where conventional technology fails. Some examples are thin reservoirs, reservoirs with gas and/or water coning problems, reservoirs with natural fractures, reservoirs with low permeability and high anisotropy, and reservoirs with poor sweep efficiency. Much work has been done on
horizontal wells since 1986. However, most of the reservoir engineering work published has been in specific areas of horizontal wells, such as pressure transient analysis, well productivity, and gas or water coning. Only a few publications deal with horizontal well simulation.

The specific requirement of simulating a horizontal well is to have more detail (fine grid) in the vicinity of the horizontal wellbore. This can be done by a conventional reservoir simulator with a conventional refined grid system. However, the gridlines inside the refined region have to extend all the way to the simulation boundaries, resulting in excessive, unnecessary gridblocks. Local grid refinement can solve the above problem by locally refining the desired regions without extending the gridlines to the simulation boundaries. This is very effective with respect to computer memory and computing speed. On the other hand, for reservoirs with natural fractures, it is also necessary to have more detail in the vicinity of fractures intersected by horizontal wells. These fractures can also be handled effectively with local grid refinement. Thus, LGR in reservoir simulation offers two main advantages over a conventional grid system: (1) reduction in the number of gridblocks necessary to model a reservoir containing special wells, zones, or layers of interest; and (2) improvement of the accuracy of the solution in those desired regions.

Unlike the usual banded matrix, which is the case in conventional reservoir simulation, the Jacobian matrix resulting from local grid refinement is no longer banded. If new locally refined grid regions are introduced, the bandwidth of the Jacobian matrix will increase. As a result, the computational work and computer memory will increase. This increase is particularly great if direct methods are used. Therefore, a special mathematical method called domain decomposition must be developed to solve the non-banded matrix. Domain decomposition is to decompose the overall domain into smaller
subdomains and gather the subdomain solutions to form the solution to the overall domain. The beauty of this method is its application to large-scale problems, saving both memory and computing time, and its potential for use with parallel computers.

Literature Review
The earliest investigator of local grid refinement was von Rosenberg [2]. He presented the concept of local grid refinement for reservoir simulation and illustrated its effectiveness. Pedrosa and Aziz [3] presented a hybrid grid approach utilizing different coordinate systems in different parts of the reservoir. Two matrix solutions related to this hybrid grid approach were recommended: one is an iterative approach, which is equivalent to domain decomposition, and the other is a direct approach, which is faster than the iterative approach. Forsyth and Sammon [4] presented a local grid refinement technique to model faults and pinchouts in complex reservoirs. They described two preconditioners, coupled with ORTHOMIN acceleration, to handle the Jacobian matrix. Quandalle and Besset [5] developed two new finite difference schemes (a nine-point scheme and a simplified five-point scheme) for calculating fluid flows at subgridding boundaries in a quite general composite grid. Example tests showed that the classical two-point scheme could not be used at subgridding boundaries because it induced a drastic grid effect, and this grid effect could be avoided with the new schemes. Wasserman [6] developed a static local grid refinement technique and implemented it in a 3D, 3-phase reservoir simulator. An efficient linear equation method was used which took full advantage of the domain decomposition. The time required to solve a multiwindow model increased linearly with the number of windows, not exponentially, as would be the case when no advantage was taken of the matrix structure. Killough and Wheeler [7] proposed a 3D domain decomposition algorithm for the numerical solution of the linear equations. In
comparisons on a class of difficult problems, the method was more efficient than other current techniques. Ewing et al. [8] presented accurate difference equations for irregular and composite grids which conserved mass. Patch grid refinement could be incorporated in an existing multiphase simulator without destroying the efficiency of the original code. Nacul et al. [9,10] introduced a new iterative domain decomposition technique to solve the matrix resulting from local grid refinement. Three domain decomposition techniques associated with the Jacobian matrix resulting from the local grid refinement technique were presented. In the first method, a relaxation in the unknowns is performed. In the second one, overlapping boundaries between the subdomains are used. In the last one, a coarse grid solution is obtained first, and it provides the boundary conditions for the fine grid solution. Quandalle [11] reported the results of the Eighth SPE Comparative Solution Project, which compared different flexible gridding techniques now available in some reservoir simulators. The problem is a 3D simulation of oil production associated with gas injection in a four-layer reservoir. The results showed that, with flexible or locally refined grids, all participants could reduce the number of grid blocks by a factor of four or more while keeping the simulation results close to those obtained with the regular Cartesian grid.

Gridding Techniques
Suppose that there is a reservoir, and two wells are completed in opposite corners of the reservoir. Four types of gridding techniques can be used to simulate this example problem, as shown in Fig. 1. These are: (1) coarse grid, (2) fully refined grid, (3) conventional refined grid, and (4) locally refined grid.

1. Coarse Grid. A coarse grid for reservoir simulation corresponds to a preliminary subdivision of the reservoir. It will be referred to as the base grid. Once the coarse grid is defined, it can be selectively refined by reducing the size of its grid blocks if
necessary. For the example problem, as shown in Fig. 1, a 3x3 coarse grid can be used, and the resulting Jacobian matrix has the size 9x9.

2. Fully Refined Grid. A fully refined grid is constructed by subdividing all gridblocks in the base (coarse) grid. For the example problem, to have more detail around the two wellbores and to achieve high simulation accuracy, a 9x9 fully refined grid can be applied, as shown in Fig. 1. The resulting Jacobian matrix, however, has the size 81x81. This matrix is much bigger than that resulting from the previous coarse grid.

[Fig. 1 panels: (a) coarse grid, 3x3; (b) fully refined grid, 9x9; (c) conventional refined grid, 7x7; (d) coarse grid, 3x3, with two locally refined 3x3 regions.]
Fig. 1 - Four types of gridding techniques that can be used in reservoir simulation.

3. Conventional Refined Grid. In the conventional approach to grid refinement, the areas of interest can be refined as shown in Fig. 1. However, the fine gridlines must be extended all the way to the external boundaries of the reservoir, resulting in excessive, unnecessary gridblocks. For the example problem, a 7x7 conventional refined grid is used, and the resulting Jacobian matrix has the size 49x49.

4. Locally Refined Grid. A locally refined grid can be considered a special case of a conventional refined grid. In this case, the areas of interest can be refined in such a way that the fine gridlines are not extended to the external boundaries of the reservoir, but contained within the areas of interest. Consequently, the number of gridblocks used in this case is smaller than the number used in the conventional refined grid. For the example problem, as shown in Fig. 1, a 3x3 coarse grid is first used as the base grid, and then two 3x3 locally refined grid systems are applied. The resulting Jacobian matrix has the size 25x25. This matrix is smaller than that resulting from the conventional refined grid. Therefore, use of local grid refinement may not only be very effective in terms of memory and computation cost,
but achieve reasonable simulation accuracy also.

Fig. 2 shows the Jacobian matrix resulting from local grid refinement for the above example problem. Note that natural ordering is used in this figure: the coarse grid region is ordered first, and then the two locally refined grid regions. Unlike the usual banded matrix, which is the case in most reservoir simulation, the resulting Jacobian matrix is no longer banded. Therefore, a special mathematical method called domain decomposition must be developed to decouple the non-banded Jacobian matrix into several banded submatrices and make most solution methods used in reservoir simulation available for each of these submatrices. The next chapter will describe this method.

[Fig. 2: sketch of the sparsity pattern ("x" marks nonzero elements) of the Jacobian matrix resulting from local grid refinement for Fig. 1: a banded coarse-grid block plus two refined-grid blocks coupled through off-band boundary terms.]
Fig. 2 - Sketch showing the Jacobian matrix resulting from local grid refinement for Fig. 1.

Local Grid Refinement Classification
According to the variation of gridblocks with time and the system of coordinates, local grid refinement techniques can be classified as (1) dynamic or static, and (2) Cartesian or hybrid.

1. Dynamic and Static. If the location of the locally refined regions changes with time, the LGR technique is called dynamic; otherwise it is called static. The dynamic LGR technique may be useful in simulating the front-tracking problems raised by waterflood or steam injection reservoirs, where the location of high-gradient regions changes with time. The static LGR technique is useful in simulating water and gas coning problems, fractures, faults, pinchouts, and flow in low-permeability or heterogeneous reservoirs. Due to complexity and lack of reliability, so far the dynamic LGR technique has proven to be of limited value in field applications [12]. Therefore, only the static LGR technique is considered in this research work.

2. Cartesian and Hybrid. Depending on
the system of coordinates, the LGR technique can be classified as Cartesian or hybrid, as shown in Figs. 3 and 4, respectively. In the Cartesian LGR technique, both the locally refined grid and the coarse grid regions have the same system of Cartesian coordinates. In the hybrid LGR technique, the locally refined region can be circular or elliptical, while the coarse grid is in the Cartesian coordinate system. The Cartesian LGR technique is especially useful in simulating reservoirs with high anisotropy and faults, as well as stratified reservoirs, while the hybrid LGR technique is useful in simulating the near-well region with high resolution and fewer gridblocks compared with the Cartesian LGR technique. In the numerical simulator developed for this research, the Cartesian LGR technique is currently available. We have programmed the transmissibility calculations for the hybrid LGR technique. We will incorporate the hybrid LGR technique into the simulator later, if necessary.

Fig. 3 - Diagram showing Cartesian local grid refinement.
Fig. 4 - Diagram showing hybrid local grid refinement.

Local Grid Refinement Construction
There are some restrictions on applying the LGR technique. Because of numerical approximation requirements and coding considerations, the construction of local grid refinement is subject to some restrictions. The following are the most important ones:
1. There is no limit on the number of locally refined grid regions. However, two different locally refined regions cannot have a common interface. This means that at least one coarse gridblock is needed between any two locally refined regions, as shown in Fig. 5.
2. A locally refined grid region can have any number of gridblocks, but the number of refined gridblocks in each of the corresponding coarse gridblocks must be constant, as shown in Fig. 5.
3. The gridlines for a locally refined grid region must include all gridlines corresponding to the coarse grid region, as shown
in Fig. 6.
4. Wells can be located anywhere. However, only one well per grid block is allowed, no matter whether it is in a coarse gridblock or in a locally refined gridblock, as shown in Fig. 6.
5. A well cannot be in two different grid levels. This means that a well can be only in either the coarse grid region or a locally refined grid region, as shown in Fig. 6.

[Fig. 5: diagrams illustrating restrictions on local grid refinement; panels (a)-(d), each marked "Allowed" or "Not Allowed".]
Fig. 5 - Diagrams illustrating restrictions on local grid refinement.
[Fig. 6: more diagrams illustrating restrictions on local grid refinement; panels (a)-(d), each marked "Allowed" or "Not Allowed".]
Fig. 6 - More diagrams illustrating restrictions on local grid refinement.

Numbering Scheme
Fig. 7 illustrates an example gridblock system which includes both Cartesian and hybrid locally refined grids. This example gridblock system will be referred to as the fundamental gridblock system. It consists of eight coarse gridblocks, three of which have been locally refined. As shown in Fig. 7, there are two locally refined regions. The first locally refined region has Cartesian refinement in two coarse gridblocks. The second locally refined region has cylindrical refinement in one coarse gridblock.

Fig. 7 - Diagrams illustrating the numbering scheme adopted in local grid refinement.

The principles of the numbering scheme adopted in this research are briefly described as follows:
1. Gridblocks are numbered by first numbering the first-level grid, i.e., the coarse grid. We also refer to it as the base grid. Gridblocks which have been locally refined in the base grid should be considered as "dummy" gridblocks, and they should also be numbered. These dummy gridblocks can be removed later if we want to. For the example problem shown in Fig. 7, though gridblocks 4, 5, and 6 have been locally refined, they are still numbered. Therefore, the number of gridblocks in the base grid is 8, as shown in Fig. 7.
2. Then start numbering each locally refined region in the second
level grid, sequentially. For the example system, the Cartesian locally refined region is numbered first, and then the cylindrical locally refined region is numbered.
3. In the Cartesian system of coordinates, the natural numbering scheme can be applied; i.e., the outermost loop corresponds to the z-direction, followed by the y-direction, and finally the x-direction. The positive z-direction is downward in this system of coordinates. In the z-direction, number from top to bottom; in the y-direction, from front to back; and in the x-direction, from left to right, as shown in Fig. 7. The number of gridblocks in the Cartesian locally refined region is 8.
4. In the cylindrical system of coordinates, since this local grid refinement is cylindrical, gridblocks in the locally refined region must be numbered in the z-, θ-, and r-directions sequentially. The numbering scheme in the z-direction is the same as in the Cartesian system of coordinates. In the θ-direction, number counterclockwise; in the r-direction, from the innermost radius to the outermost radius, as shown in Fig. 7. For the example gridblock system, the number of grid blocks in the cylindrical refinement region is 8.

Summing up all gridblocks in the coarse grid region and the two locally refined regions, the total number of gridblocks for this example gridblock system is 24. If we remove the three dummy gridblocks from the coarse grid region, then the total number of gridblocks will be 21.

Irregular Shaped Gridblocks
When hybrid grid refinement is used in a second-level grid region, the gridblocks between the coarse grid region and the refined cylindrical region have irregular shapes. This irregularity arises from the way the cylindrical refinement is inserted into a rectangular region. As shown in Fig. 8, these irregular gridblocks have some common interfaces with rectangular gridblocks. Since the flow across these interfaces is neither radial nor linear, specific treatment is needed to compute
transmissibility coefficients for these interfaces.

Fig. 8 - Diagram showing a well/reservoir-region border gridblock.

Fig. 9 illustrates that an irregular shaped gridblock can be considered as a pseudo-cylindrical gridblock, if flow is from a rectangular gridblock in the coarse grid region to a cylindrical gridblock in the locally refined region, or as a pseudo-rectangular gridblock, if flow is from a cylindrical gridblock to a rectangular gridblock. In both cases, perfect material balance is achieved because the volume of the irregular shaped gridblock is preserved.

[Fig. 9 panels: (a) pseudo-cylindrical grid block; (b) pseudo-rectangular grid block.]
Fig. 9 - Diagram showing irregular shaped gridblocks in the hybrid local grid refinement.

As shown in Fig. 9, the flow through the curvilinear surface of the irregular shaped gridblock can be considered radial for an isotropic medium around the well, and elliptical for an anisotropic medium. If the flow is radial, the transmissibility in the r-direction is defined by

    T_r = 0.00633 Δθ Δz / [ (1/k_r,i) ln(r(i+1/2)/r(i)) + (1/k_r,i+1) ln(r(i+1)/r(i+1/2)) ] .......... (1)

The only information in Eq. 1 which is not available is the external radius r(i+1/2) of this new pseudo-cylindrical gridblock and the corresponding radius r(i) of the grid point. The external radius can be calculated by

    r(i+1/2)² = 2 A(i,j) / Δθ + r(i-1/2)² .......... (2)

where A(i,j) is the actual area of the irregular shaped gridblock, and the radius of the grid point can be calculated by

    r(i) = r(i-1/2) exp[ (1/2) ln(r(i+1/2)/r(i-1/2)) ] .......... (3)

As also shown in Fig. 9, the flow through the rectilinear surface of the irregular shaped gridblock can be considered linear. If the flow is linear, the apparent length of the pseudo-rectangular gridblock can be calculated by

    L = A(i,j) / Δx .......... (4)

and its transmissibility is readily calculated by

    T_x = 0.00633 Δy Δz / [ Δx(i)/(2 k_x,i) + Δx(i+1)/(2 k_x,i+1) ] .......... (5)

where 0.00633 is a unit conversion factor. A similar procedure can be applied to determine the angular transmissibility coefficient between two adjacent irregular shaped gridblocks. The total angle of
the two gridblocks is kept constant, and the gridblocks are transformed into two pseudo-cylindrical gridblocks with the same total volume and the same rock properties, as shown in Fig. 10. The external radius of the two pseudo-cylindrical grid blocks can be obtained by

    r(i+1/2)² = 2 [A(i,j) + A(i,j+1)] / [Δθ(j) + Δθ(j+1)] + r(i-1/2)² .......... (6)

and the angular transmissibility coefficient can then be calculated from

    T_θ = 0.00633 Δz ln(r(i+1/2)/r(i-1/2)) / [ Δθ(j)/(2 k_θ,j) + Δθ(j+1)/(2 k_θ,j+1) ] .......... (7)

Notice that in both Figs. 9 and 10, the pore volumes of the irregular shaped gridblocks are preserved in the above transformation procedures. If one considers that these interfaces are far away from the wells (i.e., the potential gradient is small), then the errors introduced by the above flow approximation may be small. However, one possible weakness of this approach is that, to approximate an irregular shaped gridblock, the pseudo-rectangular gridblock and the pseudo-cylindrical gridblock have two different grid points, which are the mass centers of those pseudo-grid blocks.

[Fig. 10: sectors with angles Δθ(j) and Δθ(j+1) and areas A1 through A4.]
Fig. 10 - Diagram showing the transformation of two irregular shaped grid blocks to determine the angular transmissibility coefficient.

Errors Generated by Local Grid Refinement
Grid systems generated by local grid refinement have special features which do not exist in those generated by a conventional grid system. When using a conventional grid system, the gridlines must be extended all the way to the external boundaries of the reservoir, and gridlines in all directions are orthogonal. In local grid refinement, the gridlines of refined gridblocks are confined within the boundaries of the locally refined region. At these boundaries, one coarse gridblock connects to several locally refined gridblocks if the locally refined region is not on the edge of the reservoir. In addition to local truncation errors introduced by using different gridblock sizes in the conventional grid system, another type of discretization
error is generated by using gridblocks from two different refinement levels at the boundaries between the coarse grid and the locally refined grids. In some cases, the grid effect may be significant when the normal two-point finite-difference approximation is used. Therefore, in order to understand this undesired grid effect, it is necessary to analyze the errors generated by local grid refinement.

Mathematical notation is first introduced to represent the coarse grid and the locally refined grid, respectively. Indices (i,j) represent a coarse gridblock whose position is in row i and column j of the coarse grid region, while indices (i,j;i',j') represent a locally refined gridblock whose position is (1) in the coarse gridblock (i,j) of the coarse grid region and (2) in row i' and column j' of the locally refined grid region. Using this mathematical notation, each gridblock in the system shown in Fig. 11 can be assigned an index.

[Fig. 11: coarse grid, 3x3, with a locally refined 3x3 grid.]
Fig. 11 - Diagram showing the Cartesian local grid refinement with the mathematical notation used in this research.

Consider flow between gridblocks (2,1;1,1) and (2,2) in Fig. 12. Because the block boundary between these two gridblocks is not perpendicular to the line joining their grid points, some kind of potential gradient approximation must be considered at the block boundary. For example, the component of the potential gradient in the x-direction, Φx, should be approximated at point 0, located at the interface between grid points (2,1;1,1) and (2,2), as shown in Fig. 12. Another way to illustrate this problem is to assume a pseudo-grid block (2,2;1,1) inside gridblock (2,2) and evaluate the potential function Φ(2,2;1,1) at the new point (2,2;1,1). With this value of Φ(2,2;1,1) and Φ(2,1;1,1), the component of the potential gradient in the x-direction, Φx, can be evaluated by using the two-point finite-difference approximation. This pseudo-grid point is the only point aligned in the x-direction with the grid point of gridblock (2,1;1,1) and also aligned in the y-direction
with the grid point of gridblock (2,2). There are several finite-difference approximation methods available in the literature. Among those different methods, it is necessary to comment on the following three finite-difference approximation methods and to analyze the errors generated by them.

[Fig. 12: grid points of gridblocks (1,2) and (2,2), refined gridblocks (2,1;1,1), (2,1;2,1), and (2,1;3,1), point 0 at the interface between (2,1;1,1) and (2,2), and pseudo-grid point (2,2;1,1).]
Fig. 12 - Diagram showing the grid points of gridblocks and indices for a LGR system.

Two-Point Finite-Difference Approximation Method. In this method, only the two gridblocks (2,1;1,1) and (2,2) are used to approximate Φx at point 0, as shown in Fig. 12. In order to obtain an expression for this approximation, the potential functions Φ(2,1;1,1) and Φ(2,2) can first be expanded in terms of Φ(0) by Taylor series:

    Φ(2,1;1,1) = Φ(0) - [Δx(2,1;1,1)/2] Φx + [Δx(2,1;1,1)²/8] Φxx + O(h³) .......... (8)

and

    Φ(2,2) = Φ(0) + [Δx(2,2)/2] Φx + [Δy(2,2)/3] Φy + [Δx(2,2)²/8] Φxx
             + [Δx(2,2) Δy(2,2)/6] Φxy + [Δy(2,2)²/18] Φyy + O(h³) .......... (9)

Then, subtracting Eq. 8 from Eq. 9 gives

    Φ(2,2) - Φ(2,1;1,1) = {[Δx(2,1;1,1) + Δx(2,2)]/2} Φx + [Δy(2,2)/3] Φy + O(h²) .......... (10)

Next, solving this equation for Φx, we have

    Φx = 2 [Φ(2,2) - Φ(2,1;1,1)] / [Δx(2,1;1,1) + Δx(2,2)]
         - {2 Δy(2,2) / 3 [Δx(2,1;1,1) + Δx(2,2)]} Φy + O(h) .......... (11)

Finally, ignoring the influence of the first derivative Φy, the two-point finite-difference approximation for the first derivative Φx at point 0 becomes

    Φx|0 = 2 [Φ(2,2) - Φ(2,1;1,1)] / [Δx(2,1;1,1) + Δx(2,2)] + O(h⁰) .......... (12)

Eq. 12 indicates that the additional error due to the effect of different gridblock sizes is of order zero. This zero-order error means that this method may be inconsistent; i.e., the discretization error does not necessarily go to zero as the locally refined gridblocks become smaller and smaller.

If the coarse gridblock (2,2) is divided into three pseudo-grid blocks, then the value of the potential function Φ at the pseudo-grid point (2,2;1,1) can be expressed as

    Φ(2,2;1,1) = Φ(2,2) - [Δy(2,2)/3] Φy + O(h²) .......... (13)

Three-Point Finite-Difference Approximation Method. In this method, in addition to the two gridblocks (2,1;1,1) and (2,2), another gridblock, (1,2), is also considered to approximate Φx at point 0, as shown in Fig. 12. If the first
derivative Φx is approximated by the three-point finite-difference approximation method, there will be an improvement of one order in this kind of approximation. In order to obtain an expression for Φx at point 0, the potential function Φ(1,2) can first be expanded in terms of Φ(0) by Taylor series:

    Φ(1,2) = Φ(0) + [Δx(1,2)/2] Φx - [Δy(1,2)/2 + Δy(2,2)/6] Φy + (second-order terms) + O(h³) .......... (14)

Then, rearranging Eq. 8 for Φ(0):

    Φ(0) = Φ(2,1;1,1) + [Δx(2,1;1,1)/2] Φx - [Δx(2,1;1,1)²/8] Φxx + O(h³) .......... (15)

Next, substituting Eq. 15 into Eq. 9 and Eq. 14, respectively:

    Φ(2,2) - Φ(2,1;1,1) = {[Δx(2,1;1,1) + Δx(2,2)]/2} Φx + [Δy(2,2)/3] Φy
                          + (second-order terms) + O(h³) .......... (16)

and

    Φ(1,2) - Φ(2,1;1,1) = {[Δx(2,1;1,1) + Δx(1,2)]/2} Φx - [Δy(1,2)/2 + Δy(2,2)/6] Φy
                          + (second-order terms) + O(h³) .......... (17)

Notice that Δx(1,2) is equal to Δx(2,2) in the above two equations. To eliminate Φy, multiply Eq. 16 by [Δy(1,2)/2 + Δy(2,2)/6] and Eq. 17 by Δy(2,2)/3, add the resulting equations, and ignore the influence of the second derivatives Φxx, Φxy, and Φyy; the following equation can be obtained:

    [Δy(1,2)/2 + Δy(2,2)/6][Φ(2,2) - Φ(2,1;1,1)] + [Δy(2,2)/3][Φ(1,2) - Φ(2,1;1,1)]
        = { [Δy(1,2)/2 + Δy(2,2)/6][Δx(2,1;1,1) + Δx(2,2)]/2
          + [Δy(2,2)/3][Δx(2,1;1,1) + Δx(1,2)]/2 } Φx + O(h³) .......... (18)

Finally, rearranging Eq. 18 to express Φx in terms of Φ(2,1;1,1), Φ(1,2), and Φ(2,2), the three-point finite-difference approximation becomes

    Φx|0 = { [Δy(1,2)/2 + Δy(2,2)/6][Φ(2,2) - Φ(2,1;1,1)] + [Δy(2,2)/3][Φ(1,2) - Φ(2,1;1,1)] }
           / { [Δy(1,2)/2 + Δy(2,2)/6][Δx(2,1;1,1) + Δx(2,2)]/2
             + [Δy(2,2)/3][Δx(2,1;1,1) + Δx(1,2)]/2 } + O(h) .......... (19)

This three-point approximation method shows an improvement of the discretization error by one order compared with the two-point finite-difference approximation method.

Five-Point Finite-Difference Approximation Method. If the component of the potential gradient in the x-direction is approximated by a five-point finite-difference approximation method, i.e., using gridblocks (1,2), (2,2), (1,1;2,1), (2,1;1,1), and (2,1;2,1), there will be an improvement of one order in this kind of approximation over the three-point finite-difference approximation. This five-point approximation method is of order O(h²).

Following the same procedure as in the three-point approximation, the expression for Φx at
point 0 using the five-point approximation can be expressed (Eq. 20) as a linear combination of the potentials Φ_{1,2}, Φ_{2,2}, Φ_{1,2+1}, Φ_{2,1+1}, and Φ_{2,2+1}, with weights built from the geometric factors

A = (Δy_{1,2+1} + Δy_{2,2})/2 ..........(21)

B = (Δy_{2,2} + Δy_{2,2+1})/2 ..........(22)

C = (Δy_{1,2} + Δy_{2,2})/6 ..........(23)

and

D = Δy_{2,2}/2 ..........(24)

This five-point approximation method improves the discretization error by one order compared with the three-point finite-difference approximation method, and by two orders compared with the two-point finite-difference approximation method. It is more appropriate if the potential function is smooth. However, when there is a saturation front at the boundary between a locally refined grid region and the coarse grid region, it is usually better to use two-point approximations. Another problem with the five-point approximation is that it introduces extra nonzero elements into the Jacobian matrix, which may drastically increase memory and computational work during the matrix solution process.

The two-point finite-difference approximation can be an adequate approximation when the component of the potential gradient in the y-direction is relatively small, i.e., when the ratio between flow in the direction perpendicular to the interface (the x-direction) and flow in the direction parallel to the same interface (the y-direction) is large. This is due to the fact that the derivative Φ_y is assumed negligible in deriving the two-point approximation. In the simulator developed in this research, only the two-point finite-difference approximation method is incorporated.

The type of extra discretization error discussed above is inherent in local grid refinement. To reduce this error, our numerical experience indicates that (1) both the coarse grid system and the locally refined grid system should be as uniform as possible; (2) the refinement ratio (the number of refined gridblocks in a corresponding coarse gridblock) should be less than 10; and (3) each locally refined region should be large enough to account for most of the changes in pressure and saturations in the region, including both the refined region itself and the adjacent coarse gridblocks. A potential solution
to avoid this discretization error is to use a grid system whose block boundaries are orthogonal to the lines connecting the grid points.

Flow across Boundaries

As mentioned before, the two-point finite-difference approximation is used in this research to approximate the potential gradient at the boundaries between the coarse grid and the locally refined grid. In reservoir simulation, however, it is also important to take rock and fluid properties into consideration. Therefore, this two-point finite-difference approximation method must be modified to account for the variation of rock and fluid properties.

In order to derive the flow equations for calculating the fluid flow across these boundaries, locally refined gridblocks (22,13), (22,23), and (22,33) as well as coarse gridblock (2,3) in Fig. 11 are considered. First, the following two assumptions behind the two-point finite-difference approximation method must be made:

1. The gridblock pressure in the coarse gridblock (2,3) represents the average mass balance pressure of that gridblock.

2. The coarse gridblock (2,3) is divided into three pseudo-gridblocks whose gridlines match the gridlines of the adjacent refined gridblocks.

Under the above assumptions, by using the harmonic average of the permeabilities, the east transmissibility of locally refined gridblock (22,13) can be expressed as

T_{E,(22,13)} = 0.01266 h Δy_{(22,13)} / (Δx_{(2,3)}/k_{x(2,3)} + Δx_{(22,13)}/k_{x(22,13)}) ..........(25)

The east flow coefficient of gridblock (22,13) becomes

ΔT_{E,(22,13)} = T_{E,(22,13)} (k_{ro}/(μ_o B_o))_{up} ..........(26)

The fluid flow across the boundary between coarse gridblock (2,3) and locally refined gridblock (22,13) is obtained by

q_{o,(22,13)} = ΔT_{E,(22,13)} (Φ_{o,(2,3)} - Φ_{o,(22,13)}) ..........(27)

In a similar way, the west flow for coarse gridblock (2,3) can be calculated by summing up all the flow rates between the coarse gridblock and the corresponding locally refined gridblocks, i.e.,

q_{o,W(2,3)} = Σ ΔT_{E,(22,j3)} (Φ_{o,(22,j3)} - Φ_{o,(2,3)}), j = 1, 2, 3 ..........(28)

One of the advantages of treating the fluid flow across the boundaries in this way is that rock and fluid properties can vary in the locally refined grid region, which makes the simulator more flexible. For all the cases we tested, the two-point finite-difference approximation method worked well.
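The boundary-flow calculation above can be sketched numerically. The following is a minimal sketch, not the simulator's code: the helper name, block dimensions, permeabilities, potentials, and the mobility value are all made up for illustration; only the series (harmonic-average) combination of Eq. 25 and the summation of Eq. 28 are taken from the text.

```python
def boundary_transmissibility(h, dy_f, dx_c, kx_c, dx_f, kx_f):
    """Series (harmonic-average) combination of the two half-block
    resistances dx/kx across a coarse/refined interface (Eq. 25 form,
    field units, 0.01266 constant as in the text)."""
    return 0.01266 * h * dy_f / (dx_c / kx_c + dx_f / kx_f)


# Hypothetical coarse gridblock (2,3) facing three refined gridblocks;
# every number below is made up for the sketch.
h = 20.0                      # formation thickness, ft
dx_c, kx_c = 300.0, 50.0      # coarse-block width, ft / x-permeability, md
phi_c = 1600.0                # coarse-block flow potential, psia
mob_up = 0.8                  # upstream mobility kro/(mu_o*Bo), made up

refined = [                   # (dy, dx, kx, potential) of each refined block
    (100.0, 100.0, 200.0, 1500.0),
    (100.0, 100.0, 150.0, 1480.0),
    (100.0, 100.0, 100.0, 1460.0),
]

# Eq. 28: the west flow for the coarse block is the sum of the three
# boundary flows, each built from Eqs. 26-27.
q_west = 0.0
for dy_f, dx_f, kx_f, phi_f in refined:
    T = boundary_transmissibility(h, dy_f, dx_c, kx_c, dx_f, kx_f)
    q_west += mob_up * T * (phi_f - phi_c)
print(round(q_west, 2))
```

Because the two half-block resistances are combined in series, a low-permeability block on either side dominates its term of the sum, which is exactly the behavior the harmonic average is meant to capture.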
Domain Decomposition

Description

Unlike the usually banded matrix encountered in most reservoir simulation, the Jacobian matrix resulting from local grid refinement is not banded. Therefore, a special mathematical method called domain decomposition must be developed to solve this nonbanded Jacobian matrix.

Suppose there is a reservoir where a high resolution of pressure and saturations is desired in two specific regions, as shown in Fig. 13. In this figure, Ω_c refers to the coarse grid region, and Ω_1 and Ω_2 refer to the two locally refined grid regions. Fig. 14 shows the block elements of the resulting Jacobian matrix for this reservoir when local grid refinement techniques are applied in these two specific regions.

Fig. 13 - Diagram showing a reservoir with two locally refined grid regions and the remaining coarse grid.

In general, the Jacobian matrix A can be divided into four submatrices, as shown in Fig. 15. The upper left submatrix A_cc represents the coefficients of the equations for the coarse grid region. The lower right submatrix A_ff is a block diagonal matrix whose diagonal elements, A_11 and A_22, each represent the coefficients of the equations for a different locally refined region. The other two submatrices represent the elements of the equations that couple the coarse grid region with the locally refined grid regions.

Fig. 14 - Jacobian matrix resulting from the example reservoir of Fig. 13.

The lower left submatrix A_fc represents the elements of the boundary conditions for the locally refined grid regions, and the upper right submatrix A_cf represents the elements of the boundary conditions for the coarse grid region.

As mentioned before, the Jacobian matrix A resulting from local grid refinement is no longer a banded matrix. If new locally refined grid regions are introduced, the bandwidth of the matrix A will increase. Consequently, computation work and
computer memory will increase. This increase is particularly large if direct methods are used. However, all of the submatrices in the matrix A still preserve the banded structure of a normal Jacobian matrix. This specific matrix structure motivated the development of a new mathematical method called domain decomposition. As the name implies, domain decomposition decomposes the overall domain into smaller subdomains and gathers the subdomain solutions to form the solution of the overall domain. This idea is not new, but it has received recent attention because of its applications to large-scale problems, its saving of both computer memory and computing time, and its potential for use with parallel computers.

Fig. 15 - Generic Jacobian matrix resulting from local grid refinement.

Domain Decomposition Classification

Once domain decomposition is introduced, there will be two types of iteration loops: one is the domain iteration loop, and the other is the Newton iteration loop. The domain decomposition technique can be applied at two different levels of iteration, either at the inner loop or at the outer loop. Based on this idea, two types of domain decomposition techniques have been developed in reservoir simulation.

Matrix Level Domain Decomposition. Like most conventional solution methods, the matrix level domain decomposition technique solves the following linear system of equations:

A x̄ = b̄ ..........(29)

However, when local grid refinement is applied, the Jacobian matrix A is not banded. This method can be considered an extension of conventional solution methods for solving this particular linear system. The basic idea behind the method is to divide the Jacobian matrix into several matrix partitions based on its structure. Since each diagonal matrix partition is banded, efficient solution methods for banded linear systems can be applied. The solutions for the whole linear system can be obtained by means of domain
iteration. The domain iteration can be performed at the inner loop, where the Jacobian matrix is solved, i.e., at the matrix level, as shown in Fig. 16. In this domain decomposition technique, the Newton iteration is considered the outer loop. At each Newton iteration, it is first necessary to perform all of the domain iterations until convergence; the boundary conditions between the coarse grid region and the locally refined grid regions are treated at the inner loop. The simulator then advances to the next Newton iteration. This approach has been applied in most of the works presented in the petroleum engineering literature.12,15,19

Reservoir Level Domain Decomposition. Unlike most conventional solution methods, the reservoir level domain decomposition technique solves the following nonlinear system of equations:

M(x̄) x̄ = b̄ ..........(30)

where the matrix M is a function of the solution vector. This method decouples the entire reservoir into several subdomains and solves each of the subdomains independently. The nonlinearity is treated inside each subdomain. Different simulation techniques, such as IMPES (implicit pressure, explicit saturations), semi-implicit, and fully implicit, can be applied to different subdomains. The solutions for the entire reservoir can be obtained by means of domain iteration.

Fig. 16 - Diagram illustrating matrix level domain decomposition using block Gauss-Seidel, showing the parallel solution of the domains in the domain (linear) iteration inner loop within the Newton iteration outer loop.

The domain iteration can also be performed at the outer loop, as shown in Fig. 17. This represents a domain decomposition at the reservoir level. In this domain decomposition technique, the Newton iteration is considered the inner loop, i.e., the nonlinearity is treated inside each subdomain. At each domain iteration, it is first necessary to perform all of the Newton iterations for each subdomain until convergence. The boundary conditions between the coarse grid
region and the locally refined grid regions are treated at the inner loop. The simulator then advances to the next domain iteration. This approach was proposed by Nacul and Aziz.9,10,20 In the simulator developed for this research, the reservoir level domain decomposition technique has been used.

Fig. 17 - Diagram illustrating reservoir level domain decomposition, showing the parallel solution of the domains (each iterated to Newton convergence in the inner loop) within the domain iteration outer loop.

Matrix Level Domain Decomposition

Several methods based on the idea of matrix level domain decomposition are available in the petroleum literature for solving reservoir simulation problems with local grid refinement. The following gives a brief discussion of those methods.

Block Gaussian Elimination Method. This method is also called the Schur matrix method or the capacitance matrix method. Considering the above example reservoir with local grid refinement in two specific regions, as shown in Fig. 13, the resulting linear system of equations from the Newton-Raphson method has the following form:

| A_cc  A_c1  A_c2 | | x̄_c |   | b̄_c |
| A_1c  A_11       | | x̄_1 | = | b̄_1 | ..........(31)
| A_2c        A_22 | | x̄_2 |   | b̄_2 |

The block Gaussian elimination method can be outlined as follows:21

1. Compute the "capacitance" matrix C, which is equivalent to eliminating the matrices A_11 and A_22 from the block matrix A:

C = A_cc - A_c1 A_11⁻¹ A_1c - A_c2 A_22⁻¹ A_2c ..........(32)

2. Modify the right-hand side of Eq. 31:

b̃_1 = A_11⁻¹ b̄_1 ..........(33)

b̃_2 = A_22⁻¹ b̄_2 ..........(34)

and

b̃_c = b̄_c - A_c1 b̃_1 - A_c2 b̃_2 ..........(35)

The second and third terms of the right-hand sides of Eq. 32 and Eq. 35 correspond to the boundary conditions applied to the interfaces between the coarse grid region and the locally refined grid regions. At this point the block matrix A is in block lower triangular form.

3. Solve for the solution variables of the coarse grid region:

x̄_c = C⁻¹ b̃_c ..........(36)

4. Solve for the solution variables of the locally refined regions using
backsubstitution:

x̄_1 = b̃_1 - A_11⁻¹ A_1c x̄_c ..........(37)

and

x̄_2 = b̃_2 - A_22⁻¹ A_2c x̄_c ..........(38)

Since each submatrix has a banded structure, a band solver, "GBAND," can be used to invert the submatrices A_11 and A_22. This solver has a re-entry option which allows for the solution of multiple right-hand-side vectors without repeating the forward elimination process. In Eqs. 32 through 38, the submatrices A_11, A_22, and C are not actually inverted; the terms containing these submatrices are solved for column-wise, i.e., each post-multiplied matrix becomes a set of right-hand-side vectors for solution with GBAND. The forward elimination procedure is carried out only for the first column of the post-multiplied matrix; subsequent solutions are obtained by what amounts to a forward and backward substitution process.

This block Gaussian elimination method is straightforward and easy to code. However, the method cannot be used for large reservoir simulation problems with local grid refinement in several specific regions. This is because the Jacobian matrix resulting from such local grid refinement is particularly large, and the use of any direct solution method is impractical. On the other hand, although the "capacitance" matrix C belongs to a subspace of the original problem, it is in general much denser than the original submatrix A_cc and may have lost some of the banded structure that A_cc possessed.

Preconditioned Generalized Conjugate Gradient Method. Killough and Wheeler22 showed how an iterative solution for the Schur matrix method using the preconditioned generalized conjugate gradient method could be developed. For an application with local grid refinement, this technique can be outlined as follows:

1. Set the initial guess for the unknowns of the coarse grid region:

x̄_c⁰ = x̄_0 ..........(39)

where in most cases

x̄_0 = 0 ..........(40)

2. Solve for the unknowns of the locally refined grid regions:

x̄_1 = A_11⁻¹ (b̄_1 - A_1c x̄_0) ..........(41)

and

x̄_2 = A_22⁻¹ (b̄_2 - A_2c x̄_0) ..........(42)

3.
Compute the residual for the coarse grid region:

r̄_0 = b̄_c - A_c1 A_11⁻¹ (b̄_1 - A_1c x̄_0) - A_c2 A_22⁻¹ (b̄_2 - A_2c x̄_0) - A_cc x̄_0 ..........(43)

The right-hand side of Eq. 43 corresponds to

r̄_0 = b̃_c - C x̄_0 ..........(44)

As explained for the previous method, the evaluation of the matrix C is expensive in terms of work and storage, so the preconditioned generalized conjugate gradient method evaluates this matrix only indirectly. This avoids the additional work and storage required by the previous method.

4. Compute the shift vector:

M s̄_0 = r̄_0 ..........(45)

or

s̄_0 = M⁻¹ r̄_0 ..........(46)

where M is a preconditioning matrix, i.e., an approximation to the original matrix A.

5. Compute two matrix-vector products:

v̄_1 = A_11⁻¹ A_1c p̄_k ..........(47)

and

v̄_2 = A_22⁻¹ A_2c p̄_k ..........(48)

where

p̄_0 = s̄_0 ..........(49)

6. Evaluate the matrix-vector product:

C p̄_k = A_cc p̄_k - A_c1 v̄_1 - A_c2 v̄_2 ..........(50)

It is noticed that this method does not evaluate the matrix-vector product directly.

7. Compute the parameter a_k and update the solution and residual vectors:

a_k = < r̄_k, s̄_k > / < p̄_k, C p̄_k > ..........(51)

x̄_{k+1} = x̄_k + a_k p̄_k ..........(52)

and

r̄_{k+1} = r̄_k - a_k C p̄_k ..........(53)

8. Calculate the new shift vector,

M s̄_{k+1} = r̄_{k+1} ..........(54)

and compute the parameter b_k:

b_k = < r̄_{k+1}, s̄_{k+1} > / < r̄_k, s̄_k > ..........(55)

9. Calculate the new direction vector:

p̄_{k+1} = s̄_{k+1} + b_k p̄_k ..........(56)

10. Go back to Step 5 for k = 1, 2, ... until convergence.

It is noticed that the solution of each subdomain i at each iteration k is necessary in order to perform the matrix-vector product in Step 6. The initialization also requires the solution of a Dirichlet problem on each of the subdomains in Step 2. Although this method has shown excellent results for solving linear systems whose matrices are nonsymmetric, it still has some limitations in terms of applicability. One of these limitations is that the method was not originally developed for use with local grid refinement. Another limitation is that there is no effective method to generate the preconditioning matrix M for two- or three-dimensional problems.

Block Gauss-Seidel Method. An alternative way to solve Eq. 31 is to use the block Gauss-Seidel method.23 This
iterative procedure takes advantage of the banded structure of the submatrices and allows full vectorization of the algorithm This method is outlined as follows 1 Guess the initial estimations of solutions for the coarse grid region In most cases these initial guesses can be set to zero 9 5 57 32 Local Grid Re nement and Domain Decomposition where superscript k refers to domain iteration number 2 Solve the linear equation systems for locally re ned regions This is equivalent to shifting olT diagonal submatrices of the matrix A to the right hand side and considering the solutions for coarse grid region as the boundary conditions of each locally re ned grid region AME 4 1 AM 58 and A22 72 A2cfg 59 3 Solve the linear equation system for coarse grid region using the latest information from locally re ned grid regions Ami 439 ACNE ACNE 60 4 Go back to Step 2 for k 0 l 2 until convergence This method is simple and straight forward It takes advantage of the banded structure of the each submatriX and decomposes the Jacobian matrix in an ef cient way in terms of computer storage However the main disadvantage of this method is due to the fact that its convergence rate strongly depends on the dif culty of the problem Very slow convergence rate has been observed in most practical applications 33 Local Grid Re nement and Domain Decomposition Reservoir Level Domain Decomposition The methods described above perform domain decomposition at the matrix level ie at the inner loop These methods directly solve the linear system However the reservoir level domain decomposition technique is based on a diiTerent idea It performs domain decomposition at the reservoir level ie at the outer loop Nacul and Azizg39lo3920 investigated this idea in detail In their work the reservoir level domain decomposition technique and three acceleration techniques were presented The following will brie y describe the reservoir level domain decomposition technique Instead of decomposing the Jacobian matrix 
resulting from local grid refinement at the matrix level the reservoir level domain decomposition technique physically decouple the reservoir into several smaller regions locally refines these regions solves each of these regions separately in the inner loop and then links the individual results in the outer loop Since this method treats nonlinearity of the problem inside each of subdomains any kind of simulation technique can be used to simulate each of subdomains Thus it is more physically understandable Fig 18 illustrates the method and the principles of performing reservoir level domain decomposition Blc 2c 21 92 EA 1 Simulate each domain individually 2 Exchange information between domains with BC 3 Iterate between domains until BC converge Fig 18 Diagram physically illustrating the reservoir level domain decomposition and its principles 34 Local Grid Re nement and Domain Decomposition The potential advantages of reservoir level domain decomposition over matrix level domain decomposition can be explained as follows 1 It may be faster because it may not require the performance of the full set of Newton iterations until convergence After the first domain iteration the semiimplicit method can be applied to treat the nonlinearity of each subdomain because it is still necessary to achieve convergence in the outer iteration This is equivalent to performing a single Newton iteration in the inner loop 2 For reservoirs with large heterogeneity in speci c identifiable regions this method would be more useful because users can specify subdomains and simulation techniques For example for a reservoir where gas or water coning is important the gridblocks around the wellbore can be locally re ned The IMPES implicitpressure explicit saturations method may be applied to coarse grid region while the fully implicit method to locally refined grid regions 3 Some subdomains can be overlapped by other subdomains in order to accelerate the domain iteration convergence This is 
impossible for matrix level domain decomposition 4 This technique is relatively easy to incorporate into any conventional simulator without destroying the original code Since the reservoir level domain decomposition technique puts the Newton iteration in the inner loop and the domain iteration in the outer loop its mathematical notation will different from the matrix level domain decomposition technique Now considering the same example problem the following nonlinear equation system will be obtained for the reservoir level domain decomposition technique M5 M51 M52 is 7 M1 M11 1 71 61 M2 M22 62 72 It is noticed that the matrix structure of Eq 61 is same as that of Eq 31 However Eq 61 is a nonlinear system which directly comes from the finitedifference equations of the problem before the NewtonRaphson method is applied The mathematical representation of reservoir level domain decomposition can be achieved by shifting the offdiagonal submatrices to the right hand side of Eq 61 MC 1 51 M11 1 d1 62 M22 62 512 where j quotI71 39 M1556 35 Local Grid Re nement and Domain Decomposition d2 72 39 Mai and d5 39Fc39McifJ39Mch 65 The procedure for the reservoir level domain decomposition can be outlined as follows 1 Guess the initial estimates for coarse grid region This will lead to give the boundary conditions for each of the locally re ned grid regions Solve for the unknowns of each locally re ned region separately a 7 06 M11 xlk 39 d1 66 and M22 fgk 39 6 67 for k l 2 until Newton iteration converges If locally re ned grid regions have some common interfaces the boundary conditions may be updated alter the solution of each subdomain 3 Update the boundary conditions for coarse grid region 4 Solve for the unknowns of coarse grid region a1m k M a x 611 68 for k l 2 until Newton iteration converges 5 Update the boundary conditions for locally re ned grid regions 6 Go back to Step 2 and repeat until convergence of domain iteration From the above description it is clear that the 
basic ideal behind reservoir level domain decomposition is to treat each subdomain independently Handling boundary conditions between coarse grid region and the locally re ned grid regions will become more important First each locally re ned grid region can be solved sequentially by specifying its boundary conditions with coarse grid region Then the coarse grid region can be solved by using the latest information from all locally re ned grid regions At the rst domain iteration the Dirichlet boundary conditions should be used for all subdomains Then either Dirichlet or Neuman boundary conditions can be used If all locally re ned grid regions are separated from each other and the boundary conditions for those subdomains are at the same level of iteration the different subdomains can be treated independently and hence solved in parallel 36 Local Grid Re nement and Domain Decomposition Acceleration Methods As mentioned before the key to the reservoir level domain decomposition technique is to treat the nonlinearity inside each of subdomains Since the domain iteration deals with nonlinear systems it may be slow for some difficult problem Therefore techniques must be developed to accelerate the process The following will brie y describe the dynamic relaxation acceleration techniqueg39lo One possible way to accelerate convergence of the domain iteration is to apply some relaxation technique to unknowns for each subdomain The basic idea behind this method is not to accept the calculated unknowns as a nal solution but use them as an intermediate solution in domain iteration Then the following relaxation is performed Jane n ff wlnflfnlfln 69 where i refers to the iLh subdomain w is the relaxation factor and 1ltwlt2 The superscripts k n and refer to domain iteration time level and intermediate step during the domain iterative process respectively The key to success of such a scheme is to select the optimum value of w Unfortunately it is usually very difficult to obtain 
theoretical optimum value of w because this relaxation method was originally developed for solving the linear system of equations and not for the nonlinear system of equations However empirical approaches can be applied to obtain a value of w near optimum Nacul et al939103920 presented the following two empirical expressions to obtain the relaxation factor w 1 Dynamic Relaxation I 1 an i 1 39 w 70 where subscripts i and j refer to the iLh subdomain and the jLh gridblock respectively a is the error vector that can be calculated by 1 71 xx x53 aw T k xiii 39 xivU 71 The main disadvantage of this method is that it is necessary to perform at least three domain iterations to evaluate the error vector Therefore relaxation can be applied only after at least three domain iterations are performed 2 Dynamic Relaxation II 0 1 xx x11 an xiin1 x n 72 37 Local Grid Re nement and Domain Decomposition where xi represents the value obtained at the previous timestep level 99me represents the initial estimate at the rst domain iteration at the new timestep level and xia wj represents the value obtained at the end of the rst domain iteration In this method the relaxation factor w is calculated for each unknown in each gridblock using the information available at the end of previous timestep Therefore only two domain iterations are needed to obtain the relaxation factor It is necessary to note that Eq 72 is applied only at the end of rst domain iteration A er the rst domain iteration relaxation does not improve convergence rate The above two dynamic relaxation techniques have been incorporated into the simulator developed for this research work Our numerical experience indicates that this relaxation technique is highly sensitive to the dif culty of the problem For some easy problems it does improve the convergence rate of domain iteration But for some dif cult problems its very slow or may not even converge The reason is because the equation system for each of subdomains is nonlinear 
but the relaxation technique was originally developed for solving the linear system of equations 38 Local Grid Re nement and Domain Decomposition Proposed Methods Because of the limitations of the above acceleration technique it is worth investigating some other acceleration techniques In this research the following three techniques were proposed Solution Extrapolation An effective way to accelerate convergence of domain iteration is to perform solution extrapolation This improves the initial estimate of new solution at the new timestep level fron1 014 dominion 1 n1 am 4A nj q M 2quot J 74 73 where Eq 73 reduces not only number of timesteps and number of Newton iterations but number of domain iterations as well This is because convergence of domain iteration is equivalent to convergence of boundary conditions applied to the interface between coarse grid region and locally refined grid regions during domain iteration Eq 73 effectively improve the initial estimate of new boundary conditions at the new timestep level Fig 19 shows a sketch of pressure at the boundary gridblock in locally re ned grid region vs domain iterations for the conditions with and without solution extrapolation Our numerical experience also indicates that the use of solution extrapolation always reduces number of domain iterations no matter how difficult the simulated problem is 39 Local Grid Re nement and Domain Decomposition 1700 1680 E No Extrapolation 8 1660 a 8 1640 1620 I I I I I 0 5 10 15 20 25 Domain Iterations Fig 19 Sketch showing the boundary gridblock pressure vs domain iteration in conditions with and without solution extrapolation Dirichlet Neuman Boundary Conditions A usual way to perform reservoir level domain decomposition is to use Dirichlet boundary conditions for both coarse grid region and all locally re ned grid regions It is no doubt that this technique is the most reliable because it always works despite of its slow convergence rate for some dif cult problems An 
alternative is to use Dirichlet Neuman boundary conditions The Neuman boundary conditions can be obtained by knowing the ux information at boundaries Since the latest ux information at boundaries is unknown the ux information from the previous domain iteration is usually applied Nacul and Azizg39lo3920 found that DirichletDirichlet DD boundary conditions must be used for all subdomains at the rst domain iteration and a er then either Dirichlet or Neuman boundary conditions can be used except that one subdomain must keep using Dirichlet boundary conditions Our numerical experience indicates that the best way to accelerate convergence of domain iteration is to always use Dirichlet boundary conditions for all locally re ned grid regions and Neuman boundary conditions for the coarse grid region except the rst domain iteration For the first domain iteration Dirichlet boundary conditions should be used for both the coarse grid region and all locally re ned grid regions The main advantage of this technique is that it conserves mass during the domain iteration process As a result this technique results in much less material balance error than Dirichlet Dirichlet boundary conditions For all the cases we tested The DN technique always reduces the overall material balance error by at least two orders of magnitude Another advantage of this technique is the fact that it may be faster that the DD technique For most cases we tested the DN technique is faster than the DD technique Preconditioned General Conjugate Gradient Approach The preconditioned generalized conjugate gradient method was originally developed to solve the nonsymmetric linear system of 40 Local Grid Re nement and Domain Decomposition equations Later as described above it was modi ed to solve the linear system of equations from the matrix level domain decomposition What we have found is that this method is still available for solving the system of equations from the reservoir level domain decomposition after 
slight modi cations The reasons are based on the following observations by running numerous test cases 1 N For most cases we tested only one Newton iteration is needed after several domain iterations This indicates that the system approaches the linear system It is noted that occurrence of the linear system depends on the dif culty of the problem as well as the convergence criteria used for the Newton iteration For example if the convergence criteria for pressure is set to 10 psi for a given problem then the nonlinear system of equations will approaches linear behavior a er two or three iterations However if the convergence criteria is set to 01 psi then the system will approach linear behavior a er four or ve domain iterations During domain iteration the nonlinear system of equations will gradually approach a linear system In other words corresponding to convergence of solutions each matrix element in the resulting matrix will gradually approach constant The constant is diiTerent for different matrix element Fig 20 shows a sketch of a matrix element versus domain iterations for a given problem It is this feature that motivates us to extend the preconditioned generalized conjugate gradient method to reservoir level domain decomposition To make this approach succeed we must make sure that the convergence criteria for Newton iteration is small Only a er every subdomain approaches linear during domain iteration can we use the above described method Our numerous experience indicates that the convergence criteria should be even smaller for dif cult problems 1888 1886 1884 Matrix Element 1882 I I I 10 15 20 Domain Iterations Fig 20 Sketch showing a matrix illement vs domain iterations 1880 39 Local Grid Re nement and Domain Decomposition 42 Local Grid Re nement and Domain Decomposition Nomenclature FFWHWFUQOOQUJ U DgtDgt ZBT quotwwgwp o ow coef cient matrix area ft2 formation length it right hand side vector formation volume factor rbSTB rock compressibility psi391 
wellbore storage bblpsi capacitance matrix distance ft unit normal vector reservoir energy level formation thickness ft productivity index absolute permeability md relative permeability fraction oil relative permeability in the presence of connate water fraction horizontal well length it slope matrix unit outward normal vector cumulative oil production STB pressure psia bubble point pressure psia capillary pressure for oilwater system psia capillary pressure for oilgas system psia initial pressure psi saturation pressure psi bottom hole owing pressure psi production rate STB day latest production rate prior to shutin STBday sandface ow rate bblday production rate STB day mass ow rate rcfday radius ft equivalent radius ft wellbore radius ft residual vector solution gasoil ratio SCFSTB skin factor apparent skin factor saturation fraction surface area it time days 43 Local Grid Re nement and Domain Decomposition mlt W Subscripts QNKltgtlt gamg3 wozr W Q39H39Uothmow por Homer pseudoproducing time hours transmissibility volumetric ux vector ft day bulk volume rcf pore volume rcf volume at standard conditions SCF gas volume at standard conditions SCF elevation positive upword ratio of r and ri1 root of a nonlinear system solution extrapolation factor fraction reservoir boundary viscosity cp density at reservoir conditions lbmrcf density at standard conditions lbmscf porosity fraction osity at the reference pressure fraction ow potential psia relaxation factor domain bottom direction coarse grid east direction locally refined grid gas phase grid block indeX in the Xdirection grid block indeX in the ydirection grid block indeX in the zdirection upstream liquid north direction oil phase phase radial direction relative reservoir conditions standard conditions south direction top direction water phase west direction Xdirection ydirection zdirection angular direction 44 Local Grid Re nement and Domain Decomposition Superscripts Newton iteration index n timestep index Operators 
Δ  difference
∇  del
d  full derivative
CHAPTER 4
Methods of Solution

Introduction
Sequential Method (SEQ)
Adaptive Implicit Method (AIM)

Introduction

This chapter discusses some of the formulation options. Formulation refers to the type and structure of the equations, i.e., how the reservoir flow problems are reduced to and expressed in equations.

Method                kr     Pc
IMPES                 n      n
Linearized Implicit   n+1    n+1
Semi-Implicit         n+1    n+1
Fully Implicit        n+1    n+1
Sequential            n+1    n+1
AIM                   n+1    n+1

IMPES

In the IMPES method the saturations are eliminated by adding the individual phase material balance equations. The resulting equation has only one unknown, a phase pressure, which is obtained by simultaneous solution of a set of equations. The saturations are then determined explicitly by solving the material balance equations. The capillary pressure is assumed constant during a timestep.

Linearized Implicit (Letkeman and Ridings,2 MacDonald and Coats3)

In this method, to ensure stability, relative permeability and capillary pressure are extrapolated implicitly to the new time level.

Semi-Implicit

Nolen and Berry4 outline the development of a semi-implicit method in which relative permeability and capillary pressure are evaluated at time level n+1 using the chord slope.

Fully Implicit

Blair and Weinaug1 evaluate all quantities at the new time level, which
results in a completely implicit difference system. This method yields a stable solution for large timesteps.

Sequential Method

The sequential solution separates the solution of the pressure equation from the saturation equation. The first step is the solution of a set of pressure equations, i.e., this step is similar to IMPES. The second step is the solution of a set of saturation equations using the pressures from step one. An illustration for 2-D, two-phase flow is given below.

For N cells, the unknowns are initially ordered cell by cell as (p1, Sw1, p2, Sw2, ..., pN, SwN). Reorder the equations and unknowns so that all oil equations and pressure unknowns p come first and all water equations and saturation unknowns Sw come second:

    | A11  A12 | | p  |   | b1 |
    | A21  A22 | | Sw | = | b2 |

Multiply the oil equation by Bo and the water equation by Bw; the first set of equations are now two-phase volumetric equations. The upper left-hand matrix is the IMPES matrix, and the upper right-hand matrix contains the k'r, P'c, and Q' terms. Subtract the water equation from the oil equation in every gridblock. We can then solve

    A11 p = b1

followed by

    A22 Sw = b2 - A21 p

Adaptive Implicit Method

The adaptive implicit method operates with different levels of implicitness in adjacent gridblocks, rather than providing a fixed degree of implicitness in every gridblock at every timestep. These levels shift in space and time as needed to maintain stability. The advantage is a substantial reduction in computing time and storage requirements while still yielding unconditionally stable solutions.

Fig. 1 - Example problem

The structure of the final matrix problem can also be used as a criterion. The structure of the reduced matrix problem is shown in Fig. 2.

Fig. 2 - Reduced matrix problem

References

1. Blair, P.M. and Weinaug, C.F.: "Solution of Two-Phase Flow Problems Using Implicit Difference Equations," paper SPE 2185 presented at the 43rd Annual Fall Meeting, Houston, Sept. 29-Oct. 2, 1968.
2. Letkeman, J.P. and Ridings, R.L.: "A Numerical Coning Model," paper SPE 2812 presented at the Second Symposium on Numerical Simulation of Reservoir Performance,
Dallas, Feb. 5-6, 1970.
3. MacDonald, R.C. and Coats, K.H.: "Methods for Numerical Simulation of Water and Gas Coning," paper SPE 2796 presented at the Second Symposium on Numerical Simulation of Reservoir Performance, Dallas, Feb. 5-6, 1970.
4. Nolen, J.S. and Berry, D.W.: "Tests of the Stability and Time-Step Sensitivity of Semi-Implicit Reservoir Simulation Techniques," paper SPE 2981 presented at the SPE 45th Annual Fall Meeting, Houston, Oct. 4-7, 1970.
5. Thomas, G.W. and Thurnau, D.H.: "Reservoir Simulation Using an Adaptive Implicit Method," SPEJ (June 1983) 69-78.
6. Spillette, A.G., Hillestad, J.G. and Stone, H.L.: "A High-Stability Sequential Solution Approach to Reservoir Simulation," paper SPE 4542 presented at the 48th Annual Fall Meeting, Las Vegas, Sept.-Oct. 1973.
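The two-step sequential solve described above (A11 p = b1, then A22 Sw = b2 - A21 p) can be sketched numerically. This is a minimal illustration with a made-up 3-cell system: the matrices are placeholders, not coefficients from an actual simulator.

```python
import numpy as np

# Hypothetical 3-cell system: the block system [[A11, A12], [A21, A22]]
# acts on the reordered unknowns (p, Sw). A11 plays the role of the
# IMPES pressure matrix; all numbers here are placeholders.
rng = np.random.default_rng(0)
n = 3
A11 = (np.diag(np.full(n, 4.0))
       + np.diag(np.full(n - 1, -1.0), 1)
       + np.diag(np.full(n - 1, -1.0), -1))    # diagonally dominant, invertible
A21 = 0.1 * rng.standard_normal((n, n))
A22 = np.eye(n) + 0.1 * np.diag(rng.random(n))  # saturation block
b1 = rng.standard_normal(n)
b2 = rng.standard_normal(n)

# Step 1: solve the pressure equations, A11 p = b1
p = np.linalg.solve(A11, b1)

# Step 2: solve the saturation equations using the new pressures,
# A22 Sw = b2 - A21 p
Sw = np.linalg.solve(A22, b2 - A21 @ p)
```

Because the pressure solve never sees the saturation unknowns, each step works with an N x N system instead of the 2N x 2N fully implicit system; that is the main attraction of the sequential method.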
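AIM's gridblock-by-gridblock choice of implicitness needs a switching criterion. A common choice is a per-cell throughput (CFL-like) number. The function below is a hypothetical sketch of that idea; the function name and the threshold value are illustrative assumptions, not taken from the text.

```python
# Hypothetical AIM switching sketch: flag cells for fully implicit
# treatment when their per-timestep throughput (a CFL-like number)
# exceeds a threshold; the remaining cells stay on the cheaper IMPES
# treatment (explicit kr and Pc). The default threshold of 1.0 is
# illustrative only.
def select_implicit_cells(throughput, threshold=1.0):
    """Return True for cells that should be treated implicitly this timestep."""
    return [x > threshold for x in throughput]

flags = select_implicit_cells([0.2, 1.5, 0.8, 3.0])
# flags -> [False, True, False, True]
```

The flags would be re-evaluated every timestep, which is what lets the implicit region follow a moving front through the grid.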
