Review Sheet for EECS 678 with Professor Kulkarni at KU
Virtual Memory

Outline
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples

(EECS 678 Introduction to Operating Systems, Spring 2009)

Background
- Virtual memory: separation of user logical memory from physical memory
  - only part of the program needs to be in memory for execution
  - the logical address space can therefore be much larger than the physical address space
  - allows address spaces to be shared by several processes
  - allows for more efficient process creation
- Benefits of not requiring the entire program in memory before execution:
  - programs can be larger than the physical address space
  - more programs can be resident in memory at the same time
  - faster program startup

Virtual Address Space
- The space between the heap and the stack is unused unless the heap or stack grows
  - virtual memory does not require physical pages for such holes
- Shared memory:
  - pages can be easily shared through virtual memory
  - shared libraries can be mapped read-only into the address space of each process
  - shared-memory IPC can be implemented easily
  - parent pages can be shared with the child during fork
[Figure: a process address space with code, data, and heap at the bottom, the stack at the top (Max), and the unused hole in between]

Shared Library Using Virtual Memory
[Figure: two processes, each with stack, heap, data, and code, mapping the same shared-library pages]

Demand Paging
- Bring a page into memory only when it is needed:
  - less I/O needed
  - less memory needed
  - faster response
  - more applications simultaneously resident in memory
- A page is needed => there is a reference to it:
  - invalid reference => abort
  - not in memory => bring it into memory
- Lazy swapper: never swaps a page into memory unless the page will be needed
  - a swapper that deals with pages is a pager
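The demand-paging mechanics above (an all-invalid page table, a trap on first touch, a load, then a restart) can be sketched as a toy pager. This is an illustration only; the class and function names are mine, not from the slides:

```python
class PageFault(Exception):
    """Raised when translation hits an invalid (not-in-memory) entry."""

class PageTable:
    def __init__(self, npages):
        # All entries start invalid: nothing is loaded before it is referenced.
        self.valid = [False] * npages
        self.frame = [None] * npages

    def translate(self, page):
        if not self.valid[page]:
            raise PageFault(page)     # trap to the OS
        return self.frame[page]

def access(pt, page, next_free_frame):
    """Reference a page, handling the fault by paging it in and restarting."""
    try:
        return pt.translate(page)
    except PageFault:
        pt.frame[page] = next_free_frame   # "read page from disk" into a free frame
        pt.valid[page] = True              # set the valid-invalid bit to 'v'
        return pt.translate(page)          # restart the faulting access
```

A second access to the same page then hits the valid entry and never faults again, which is the whole point of loading lazily.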
Demand Paging System
[Figure: pages of programs A and B are swapped in and out between main memory and the backing store]

Valid-Invalid Bit
- Each page table entry is associated with a valid-invalid bit:
  - v => in memory, i => not in memory
  - initially set to invalid for all page table entries
- A valid-invalid bit found set to i during address translation causes a page fault
[Figure: a page-table snapshot with frame numbers and valid-invalid bits]

Page Table When Some Pages Are Not in Main Memory
[Figure: logical memory, a page table with valid-invalid bits, physical memory, and the backing store holding the non-resident pages]

Steps in Handling a Page Fault
[Figure: the page is on the backing store; the faulting reference traps to the operating system, which finds a free frame, brings in the missing page, resets the page table, and restarts the instruction]

Steps in Handling a Page Fault (2)
1. The first reference to data or an instruction on a page occurs
2. The memory access traps to the OS with a page fault
   - the operating system looks at another table to decide: invalid reference => abort, or the page is just not in memory
3. Find a free frame in physical memory
4. Read the page from disk into the free frame
5. Update the page table; set the valid-invalid bit to 'v'
6. Restart the instruction that caused the page fault

Performance of Demand Paging
- Page fault rate p, 0 <= p <= 1.0
  - if p = 0, there are no page faults
  - if p = 1, every reference is a fault
- Effective Access Time (EAT):
  EAT = (1 - p) x memory access time
        + p x (page fault overhead + swap page out + swap page in + restart overhead)

Demand Paging Example
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- EAT = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800 nanoseconds
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds, a slowdown by a factor of 40

Process Creation: Copy-on-Write
- Original process creation:
  - the fork system call duplicates the parent's address space for the child
  - demand paging with copy-on-write allows the address spaces to be shared instead
- Copy-on-Write (COW) lets the parent and child processes initially share the same pages in memory:
  - shared pages are marked as copy-on-write pages
  - if either process modifies a shared page, the page is copied
  - only the modified pages are copied
- COW allows more efficient process creation, since only modified pages are copied
- Free pages are allocated from a pool of zeroed-out pages

Process 1 Modifies Page C
[Figure: before, processes 1 and 2 share pages A, B, and C in physical memory; after, process 1 points to its own copy of page C]

Need for Page Replacement
- Review of demand paging:
  - separates logical memory from physical memory
  - allows logical address space > physical address space
  - enables a greater degree of multiprogramming => higher CPU utilization and throughput
  - allows fast process startup
- Drawbacks of demand paging:
  - may increase individual memory access times later
  - potential for over-allocation of physical memory
- Over-allocation of memory:
  - currently active processes may reference more pages than there is room for in physical memory
  - pages from active processes may need to be replaced

Page Replacement
- When a page fault occurs and there are no free frames to allocate, the OS can:
  - terminate the user process
  - swap out an entire process
  - find some page in memory (hopefully not really in use) and swap it out
- The same page may be brought into memory several times
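The demand-paging example above is easy to check numerically; a small sketch (the 200 ns and 8 ms figures are the slide's, the function name is mine):

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT in nanoseconds: (1 - p) * memory access + p * fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

eat = effective_access_time(1 / 1000)   # one access in 1,000 faults
slowdown = eat / 200                    # relative to a plain 200 ns access
```

With p = 0.001 this gives 8,199.8 ns, about 8.2 microseconds, i.e. roughly the factor-of-40 slowdown quoted on the slide.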
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
   - if there is a free frame, use it
   - if there is no free frame, use a page replacement algorithm to select a victim frame
3. Bring the desired page into the newly freed frame; update the page and frame tables
4. Restart the process
- Reducing page fault overhead:
  - associate a modify (dirty) bit with each frame
  - only write the victim page back to disk if it was modified

Basic Page Replacement (2)
[Figure: swap out the victim page and change its entry to invalid, swap in the desired page, and reset the page table for the new page]

Page Replacement Algorithms
- Criterion: achieve the lowest page fault rate
- Page replacement schemes:
  - first-in first-out (FIFO)
  - optimal page replacement
  - least recently used (LRU)
  - LRU approximation algorithms
  - counting-based replacement
- Evaluating a replacement algorithm:
  - simulate it on a particular string of memory page references
  - compute the number of page faults on the reference string
- In all our examples, the reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Expected Graph of Page Faults Versus the Number of Frames
[Figure: the number of page faults generally falls as the number of frames grows]

First-In First-Out (FIFO) Algorithm
- The simplest page replacement algorithm:
  - each page is associated with the time it was brought into memory
  - replace the oldest page during page replacement
  - implemented using a FIFO queue of all pages
  - the replacement victim is taken from the head of the queue; a new page is added to the tail
[Figure: FIFO on the reference string with 3 page frames produces 15 page faults]

Belady's Anomaly
- For some page replacement algorithms, the page fault rate may increase as the
number of frames increases
  - this can happen with the FIFO replacement scheme
- Example:
  - reference string: 1 2 3 4 1 2 5 1 2 3 4 5
  - with FIFO replacement, 3 frames incur 9 page faults but 4 frames incur 10
[Figure: page faults versus number of frames for this FIFO example]

Optimal Page Replacement
- The algorithm with the lowest page fault rate:
  - replace the page that will not be used for the longest period
  - provably optimal
  - does not suffer from Belady's anomaly
  - needs future information
[Figure: optimal replacement on the reference string with 3 page frames produces 9 page faults]

Least Recently Used (LRU) Algorithm
- Approximates the optimal page replacement algorithm:
  - detect and store when each page is used
  - replace the page that has not been used for the longest period
  - a very good replacement policy
  - requires hardware support to be accurate and fast
  - does not suffer from Belady's anomaly
[Figure: LRU on the reference string with 3 page frames produces 12 page faults]

LRU Implementation
- Counter implementation:
  - every page table entry has a time-of-use field
  - copy the clock into the time-of-use field on every page access
  - for replacement, find the page with the smallest time-of-use field
  - the replacement algorithm requires a search of the full page table
  - each memory access needs an additional write to the time-of-use field
  - counter overflow must be handled

LRU Implementation (2)
- Stack implementation:
  - keep a stack of page numbers in a linked list
  - move a page to the top of the stack when it is referenced
  - find the replacement page at the bottom of the stack
  - each update is more expensive due to the stack operations
  - no search is needed for replacement
[Figure: the stack before and after the next reference in the string 4 7 0 7 1 0 1 2 1 2 7 1 2]

LRU Approximation Algorithms
- Reference bit algorithm:
  - associate a reference bit with each page
  - set the bit when the page is referenced; clear all bits occasionally
  - replace a page whose reference bit is 0, if one exists
  - the order of accesses cannot be stored in one bit
- Record-reference-bit (aging) algorithm:
  - each page is associated with an 8-bit field for recording its reference bit
  - at regular intervals, right-shift each 8-bit field and move the reference bit into the highest-order bit
  - for example, a field of 01101110 with reference bit 1 becomes 10110111
  - the page with the lowest value in its 8-bit field is the LRU page

LRU Approximation Algorithms (2)
- Second-chance algorithm:
  - a single reference (RF) bit per page
  - basically the FIFO replacement algorithm, run over a circular queue of pages; a pointer indicates the next replacement candidate
  - to find a replacement page:
    - proceed if RF = 0
    - give the page a second chance if RF = 1: clear its reference bit and move on to the next page
  - what if all bits are set? the algorithm degenerates to FIFO

Counting-Based Algorithms
- Keep a counter of the number of references that have been made to each page
- Least frequently used (LFU):
  - replace the page with the smallest count
  - assumes the most active page will have the largest count
- Most frequently used (MFU):
  - replace the page with the largest count
  - assumes the page with the smallest count was just brought in and will be accessed next
- Neither is a very popular algorithm

Page Buffering Algorithms
- Optimizations that maintain a list of victim frames
- Scheme 1:
  - bring the new page into one of the victim frame slots
  - then copy the replaced page out to disk
  - the memory access to the new page does not wait for the copy-out
- Scheme 2:
  - copy modified pages out to disk whenever the paging device is idle
  - replacement is then faster
- Scheme 3:
  - remember which page is in each victim frame
  - if a newly needed page is still in one of the victim frames, use it directly
  - works well if the replacement algorithm sometimes replaces a page that is needed again soon afterwards
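The policies above can be compared empirically on the slides' reference string. A minimal sketch (function names are mine; `clock_faults` implements the second-chance circular queue):

```python
from collections import deque

REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

def fifo_faults(refs, nframes):
    """FIFO: evict the page that has been resident longest."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:           # no free frame: evict head
            resident.discard(queue.popleft())
        resident.add(page)
        queue.append(page)
    return faults

def lru_faults(refs, nframes):
    """Exact LRU via the stack implementation: MRU on top, victim at bottom."""
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            stack.remove(page)                 # move to top on every reference
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop(0)                   # bottom of stack is the LRU page
        stack.append(page)
    return faults

def clock_faults(refs, nframes):
    """Second chance: FIFO order, but a set reference bit buys one reprieve."""
    frames, refbit, hand, faults = [None] * nframes, [0] * nframes, 0, 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1
            continue
        faults += 1
        while frames[hand] is not None and refbit[hand]:
            refbit[hand] = 0                   # clear the bit and move the hand on
            hand = (hand + 1) % nframes
        frames[hand], refbit[hand] = page, 0
        hand = (hand + 1) % nframes
    return faults
```

With 3 frames this reproduces the slide figures: 15 faults for FIFO and 12 for LRU. On the string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO shows Belady's anomaly (9 faults with 3 frames, 10 with 4), while LRU does not.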
Allocation of Frames
- How many frames of physical memory should the OS allocate to each process?
- Minimum number of frames:
  - is there a minimum number of frames a process needs?
  - it depends on the maximum number of pages any instruction in the ISA can reference at a time
    (what can happen if we do not provide this minimum?)
  - direct addressing: a load/store needs 2 frames, one for the instruction and one for the memory reference
  - one-level indirect addressing needs 3 frames: one for the instruction, one for the address, and one for the memory reference
  - the PDP-11 needs 6 frames; the IBM 370 needs 8
  - should there be a maximum number?

Frame Allocation Algorithms
- Equal allocation:
  - divide m frames equally among n processes
  - each process gets about m/n frames
  - may give some processes more frames than they need
- Proportional allocation:
  - each process may need a different amount of memory
  - allocate memory in proportion to process size: a_i = (s_i / S) x m, where s_i is the size of process p_i, S = Σ s_i, and m is the total number of frames
  - Example: m = 64, s_1 = 10, s_2 = 127, so S = 137; a_1 = (10/137) x 64 ≈ 5 and a_2 = (127/137) x 64 ≈ 59

Other Frame Allocation Issues
- Global vs. local allocation:
  - global replacement: a process selects a replacement frame from the set of all frames
    - one process can take a frame from another
    - a process cannot control its own page fault rate
  - local replacement: each process selects only from its own set of allocated frames
    - may hinder progress by not letting a high-priority process take frames from a low-priority process
- Non-uniform memory access:
  - memory may be distributed on multiprocessor systems
  - how should frames be assigned in such situations?
  - assign memory that is close to where the process is running

Thrashing
- A process is thrashing if it spends more time paging than executing:
  - the process keeps swapping active pages in and out
  - this leads to a very high page fault rate
- Causes of thrashing:
  - a process does not have enough frames for its current memory-access behavior
- The vicious cycle of thrashing:
  - not having enough frames causes more page faults
  - page faults lower CPU utilization
  - the operating system thinks it needs to increase the degree of multiprogramming
  - another process is added to the system
  - even fewer frames per process lead to more page faults, and the cycle continues

Thrashing (2)
- Thrashing becomes more likely as the degree of multiprogramming increases
[Figure: CPU utilization rises with the degree of multiprogramming, then collapses once thrashing sets in]
- Can we prevent thrashing by using a local replacement algorithm?

Preventing Thrashing Behavior
- Give a process as many frames as it needs
  - but how do we know how many frames a process needs?
- Working-set model:
  - based on the locality model of process execution:
    - each program phase uses a small set of memory references (frames)
    - execution moves from one program phase to another
  - the number of frames required for a phase is called its working set
  - implementation:
    - assume a particular working-set window Δ
    - WSS_i, the working set of process P_i, is the total number of distinct pages referenced in the most recent Δ references
    - D = Σ WSS_i is the total demand for frames
    - if D > m => thrashing; suspend one of the processes

Working Sets and Page Fault Rates
- Page faults spike at the start of every new program phase (a new working set)
- The page fault rate then falls until the next program phase, and the behavior repeats
[Figure: page fault rate over time, spiking at each working-set transition]

Preventing Thrashing Behavior (2)
- Page-fault frequency (PFF) scheme:
  - the working-set model needs several assumptions and is complicated
  - instead, measure the page fault rate routinely and establish an acceptable range:
    - if the actual rate is too low (below the lower bound), the process loses a frame (or more processes may be admitted)
    - if the actual rate is too high (above the upper bound), the process gains a frame (or some processes are suspended)
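The working-set bookkeeping above can be stated directly in code; a sketch with hypothetical helper names:

```python
def working_set(refs, t, delta):
    """Distinct pages in the window of the most recent delta references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

def must_suspend(working_sets, m):
    """Thrashing test: suspend a process if total demand D = sum(WSS_i) exceeds m frames."""
    return sum(len(ws) for ws in working_sets) > m
```

For the reference stream [1, 2, 1, 3, 4, 4, 3, 2] with Δ = 4, the working set at time t = 7 is {2, 3, 4}: three frames would cover this phase.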
[Figure: page fault rate versus number of frames, with an upper bound above which frames are added and a lower bound below which frames are taken away]

Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
  - removes the need for read/write system calls
  - converts disk access into memory access
  - simplifies disk access for the user
- Mechanism details:
  - a file is initially read using demand paging
  - a page-sized portion of the file is read from the file system into a physical frame
  - subsequent reads and writes to and from the file are treated as ordinary memory accesses

Memory-Mapped Files (2)
- Several processes can map the same file, allowing the pages in memory to be shared
[Figure: the virtual memories of processes A and B mapping the same disk file blocks 1-6 through shared physical memory]
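Python exposes the same mmap idea described above; a small sketch showing that a write through the mapping is an ordinary memory access that lands in the file (the temp-file setup is just scaffolding):

```python
import mmap
import os
import tempfile

# Create a small file to map.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello world")

# Map the whole file and update it with a plain slice assignment.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        mem[0:5] = b"HELLO"          # memory write, no write() system call

with open(path, "rb") as f:
    data = f.read()                  # the change is visible in the file
os.remove(path)
```

The slice assignment replaces the first five bytes in place, so the file afterwards reads `HELLO world`.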
Memory-Mapped Files (3)
- Examples of OSs using memory-mapped files:
  - Solaris, Unix, and Linux provide the mmap system call to create a memory-mapped file
  - files accessed with open, read, and write are mapped to the kernel address space in Solaris
  - Windows NT and XP use memory-mapped files to implement shared memory

Memory-Mapped I/O
- Special I/O instructions may be available to transfer data and control messages to the I/O controller
- Memory-mapped I/O:
  - I/O device registers are mapped into the logical memory address space
  - convenient and fast
- There may be a control bit indicating whether data is available in the data register:
  - if the CPU polls the control bit, the method of operation is called programmed I/O
  - if the device sends an interrupt to the CPU to indicate data availability, the mode of operation is interrupt-driven I/O

Allocating Kernel Memory
- Kernel memory is often allocated from a free memory pool
  - not using demand paging
- Reasons for treating kernel memory allocation differently:
  - some kernel memory needs to be contiguous
  - to minimize memory waste due to internal fragmentation:
    - the kernel requests memory for structures of varying size
    - so the paging system is not used
- Strategies for managing free memory:
  - buddy system
  - slab allocator

Buddy System
- Power-of-2 allocator:
  - satisfies requests in units sized as powers of 2
  - a request is rounded up to the next highest power of 2
  - when a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
    - splitting continues until an appropriately sized chunk is available
- Example: 21 KB is requested from 256 KB of physically contiguous pages
  - 256 KB is split into 128 KB buddies A_L and A_R; A_L into 64 KB buddies B_L and B_R; B_L into 32 KB buddies C_L and C_R; the 32 KB chunk C_L satisfies the 21 KB request

Slab Allocator
- A slab consists of one or more physically contiguous pages
- A cache consists of one or more slabs
- There is a single cache for each unique kernel data structure
  - the cache is filled with instantiations of that data structure (objects)
- Slab allocation algorithm:
  - create a cache of objects in contiguous space and mark them free
  - store objects in free slots and mark them used
  - if the current slab is full of used objects, allocate the next object from an empty slab
  - if there are no empty slabs, allocate a new slab
- Benefits:
  - no fragmentation
  - fast satisfaction of memory requests

Slab Allocation (2)
[Figure: kernel object caches of 3 KB and 7 KB objects pointing to slabs backed by physically contiguous pages]

Other Virtual Memory Issues
- Prepaging:
  - reduces page faults at process startup
  - prepage all or some of a process's pages before they are referenced
    - I/O and memory are wasted if the prepaged pages go unused
- Issues in deciding a page size:
  - internal fragmentation:
    - a small page size is preferred, to reduce memory lost in the final page
  - page table size:
    - a large page size is preferred, to reduce the size of the page table
  - I/O overhead:
    - a large page size is preferred, to reduce disk latency and seek time
  - page faults:
    - a large page size reduces the number of page faults generated

Other Virtual Memory Issues (2)
- TLB reach:
  - the amount of memory accessible from the TLB
  - TLB reach = TLB size x page size
  - ideally, the working set of each process is stored in the TLB
  - improving TLB reach:
    - increase the page size
    - provide multiple page sizes
- I/O interlock:
  - pages must sometimes be locked into memory
  - consider I/O: pages used for copying a file from a device must be locked against being selected for eviction by a page replacement algorithm
  - provide a lock bit with every page frame

Other Virtual Memory Issues (3)
- Program structure: being aware of the paging structure can help performance
- Example: int data[128][128]; each row is stored in one page
- Program 1 walks the array column by column and causes 128 x 128 = 16,384 page faults:

  for (j = 0; j < 128; j++)
      for (i = 0; i < 128; i++)
          data[i][j] = 0;

- Program 2 walks the array row by row and causes only 128 page faults:

  for (i = 0; i < 128; i++)
      for (j = 0; j < 128; j++)
          data[i][j] = 0;
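The fault counts above can be verified with a quick simulation, assuming (as the slide implicitly does) that the process holds fewer frames than the 128 rows; each page is identified by its row index and replacement is FIFO. Names are mine:

```python
from collections import deque

def traversal_faults(order, rows=128, cols=128, nframes=1):
    """Count page faults when each row of the matrix occupies one page."""
    if order == "row-major":
        pages = (i for i in range(rows) for _ in range(cols))   # program 2
    else:
        pages = (i for _ in range(cols) for i in range(rows))   # program 1
    resident, queue, faults = set(), deque(), 0
    for page in pages:
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            resident.discard(queue.popleft())
        resident.add(page)
        queue.append(page)
    return faults
```

Row-major order faults once per row (128 faults); column-major order changes page on every access, giving 16,384 faults, and adding a few more frames does not help as long as fewer than 128 are available.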