Class Note for CMPSCI 377 at UMass(29)
CMPSCI 377 Operating Systems, Fall 2008
Lecture 17: November 4
Lecturer: Prashant Shenoy    Scribe: Shashi Singh

17.1 Demand Paging

Demand paging is an application of virtual memory. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it, i.e., if a page fault occurs. It follows that a process begins execution with none of its pages in physical memory, and many page faults occur until most of the process's working set of pages is located in physical memory. This is an example of a lazy loading technique.

The page table indicates whether a page is on disk or in memory using a valid bit. Once a page is brought from disk into memory, the OS updates the page table and sets the valid bit. For efficiency reasons, memory accesses must reference pages that are in memory the vast majority of the time. In practice this is mostly true because of locality: the working set of a process must fit in memory. The 90/10 rule states that 90% of the time, the program is executing only 10% of the code.

17.1.1 Advantages

- Demand paging does not load pages that are never accessed, so it saves memory for other programs and increases the degree of multiprogramming.
- There is less loading latency at program startup.
- There is less initial disk overhead because of fewer page reads.
- It needs no extra hardware support beyond what paging needs, since a protection fault can be used to signal a page fault.
- Pages can be shared by multiple programs until one of them modifies them; a technique called copy-on-write is used to save resources.
- Large programs can run on a machine even though it does not have sufficient memory to hold the whole program. This method is easier for a programmer than the old manual overlays.

17.1.2 Disadvantages

- Individual programs face extra latency when they access a page for the first time. Pre-paging, a method of remembering which pages a process used when it last executed and preloading a few of them, is used to improve performance.
- Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
- Memory management with page replacement algorithms becomes slightly more complex.

17.1.3 Pre-paging

The OS guesses in advance which pages the process will need and preloads them into memory. If the guess is right most of the time, fewer page faults occur, which in turn allows more overlap of CPU and I/O. Branches in the code make it difficult to guess the set of pages that should be pre-paged.

17.1.4 Implementation of Demand Paging

A valid bit of 1 in the page table indicates that the corresponding page is in memory; a valid bit of 0 indicates that the page is not in memory, i.e., either it is on disk or it is an invalid address. If the page is not in memory, the first reference traps to the OS. The OS, upon verifying that the address is valid, performs the following steps:

1. selects a page to replace (page replacement algorithm)
2. invalidates the old page in the page table
3. starts loading the new page into memory from disk
4. context switches to another process while I/O is being done
5. gets an interrupt indicating that the page is loaded in memory
6. updates the page table entry
7. continues the faulting process

17.1.5 Swap Space

Swap space is a portion of the disk reserved for storing pages that are evicted from memory. If a page containing code is removed from memory, we can simply discard it, since it is unchanged and can be reloaded from disk. If a page containing data is removed from memory, we need to save the data (possibly changed since the last time) so that it can be reloaded if the process it belongs to refers to it again. Such pages are stored in the swap space. At any given time, a page of virtual memory might exist in one or more of: the file system on disk, physical memory, or the swap space. The page table must be sophisticated enough to figure out where to find a page.

17.1.6 Performance of Demand Paging

In the worst case, a process could access a new page with each instruction. But typically, processes exhibit locality of reference.

- Temporal locality: if a process accesses an item in memory, it will tend to reference the same item again soon.
- Spatial locality: if a process accesses an item in memory, it will tend to reference an adjacent item soon.

If p is the probability of a page fault (0 <= p <= 1) and ma is the memory access time, then

    effective access time = (1 - p) * ma + p * page fault time

17.2 Page Replacement Algorithms

- MIN: This is the theoretically optimal page replacement algorithm. When a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. This algorithm cannot be implemented in a general-purpose operating system, because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on the system is known beforehand and amenable to static analysis of its memory reference patterns, or when only a class of applications allowing run-time analysis is allowed. Despite this limitation, algorithms exist that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program and uses those data to decide which pages to swap in and out on subsequent runs. This can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
- FIFO: The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name: the operating system keeps all the pages in memory in a queue, with the most recent arrival at the back and the earliest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practice and is thus rarely used in its unmodified form.
- LRU: The least recently used (LRU) page replacement algorithm keeps track of page usage over a short period of time. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory, it is rather expensive to implement in practice. There are several implementation methods for this algorithm that try to reduce the cost while keeping as much of the performance as possible.
- Random: The random replacement algorithm replaces a random page in memory. This eliminates the overhead of tracking page references. It usually fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice.
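To make the effective-access-time formula in 17.1.6 concrete, here is a small numeric sketch. The access and fault-service times below are illustrative assumptions, not figures from the lecture.

```python
# Effective access time (EAT) under demand paging:
#   EAT = (1 - p) * ma + p * page_fault_time
def effective_access_time(p, ma_ns, fault_ns):
    """p: page fault probability, ma_ns: memory access time,
    fault_ns: page fault service time (both in nanoseconds)."""
    return (1 - p) * ma_ns + p * fault_ns

ma = 100              # assumed memory access time: 100 ns
fault = 8_000_000     # assumed page fault service time: 8 ms
for p in (0.0, 0.000001, 0.001):
    print(f"p={p}: EAT = {effective_access_time(p, ma, fault):.1f} ns")
```

Note how even a fault probability of one in a thousand inflates the effective access time by almost two orders of magnitude, which is why the working set must fit in memory.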
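The page-fault handling steps in 17.1.4 can be sketched as a toy simulation. The class below is a hypothetical illustration (a page table of valid bits plus a FIFO victim queue), not a real kernel interface; the disk I/O and context-switch steps are elided.

```python
from collections import deque

class DemandPager:
    """Toy demand-paging simulator: fixed frame count, valid bits, FIFO victims."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.valid = {}            # page -> True if resident (the valid bit)
        self.resident = deque()    # arrival order of resident pages
        self.faults = 0

    def access(self, page):
        if self.valid.get(page):   # valid bit set: page is already in memory
            return
        self.faults += 1           # valid bit clear: trap to the "OS"
        if len(self.resident) == self.num_frames:
            victim = self.resident.popleft()  # 1. select a page to replace
            self.valid[victim] = False        # 2. invalidate the old entry
        # 3-5. disk I/O and context switch elided in this sketch
        self.valid[page] = True    # 6. update the page table entry
        self.resident.append(page) # 7. faulting process continues

pager = DemandPager(num_frames=3)
for p in [1, 2, 3, 1, 4, 1]:
    pager.access(p)
print(pager.faults)
```

Note that the process starts with every valid bit clear, so the first accesses all fault, exactly as the lecture describes for a freshly started process.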
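A short simulation contrasting the MIN, FIFO, and LRU policies of 17.2 on a single reference string. The reference string and frame count are made up for illustration; MIN is implementable here only because the simulator can look ahead at the whole string, which a real OS cannot.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    q, resident, faults = deque(), set(), 0
    for p in refs:
        if p in resident:
            continue
        faults += 1
        if len(q) == frames:               # evict the earliest arrival
            resident.discard(q.popleft())
        q.append(p)
        resident.add(p)
    return faults

def lru_faults(refs, frames):
    lru, faults = OrderedDict(), 0         # keys ordered oldest -> newest use
    for p in refs:
        if p in lru:
            lru.move_to_end(p)             # refresh recency on a hit
            continue
        faults += 1
        if len(lru) == frames:
            lru.popitem(last=False)        # evict the least recently used
        lru[p] = True
    return faults

def min_faults(refs, frames):
    resident, faults = set(), 0
    for i, p in enumerate(refs):
        if p in resident:
            continue
        faults += 1
        if len(resident) == frames:
            def next_use(q):               # index of q's next use, inf if none
                try:
                    return refs.index(q, i + 1)
                except ValueError:
                    return float('inf')
            # evict the page whose next use is farthest in the future
            resident.discard(max(resident, key=next_use))
        resident.add(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for name, fn in [("MIN", min_faults), ("FIFO", fifo_faults), ("LRU", lru_faults)]:
    print(name, fn(refs, 3))
```

On this particular loop-like string LRU actually faults more than FIFO, illustrating the lecture's point that looping references are a bad case for LRU even though LRU generally wins in practice; MIN remains a lower bound for both.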