by: Allie West II





These 76 pages of class notes were uploaded by Allie West II on Thursday, October 29, 2015. The notes belong to CSCI 3753 at the University of Colorado at Boulder, taught by Staff in Fall. Since their upload, they have received 32 views. For similar materials, see /class/232000/csci-3753-university-of-colorado-at-boulder in Computer Science at the University of Colorado at Boulder.





Date Created: 10/29/15
Device Management: Device I/O

Recall the von Neumann computer: CPU, primary memory (RAM), and devices, all connected by a bus.

Device Files

We have talked about how the file manager handles the secondary memory. The secondary memory can be thought of as a device, as shown in the diagram. In UNIX and Linux, all devices are manipulated through the abstraction of a file; you can look in the /dev directory for a listing of devices. We will discuss these in this lecture. To manipulate a device requires a device driver: a device driver bridges the gap between the software and hardware worlds.

Typical UNIX Device Names

  SVR4 Device           | Linux Device | Significance
  cd0 or dsk/c0t6d0s2   | cdrom        | CD-ROM
  fd0 or diskette       | fd0          | Floppy drive
  dsk/f0q18dt           | fd0H1440     | 1.44 MB floppy
  rdsk/f0q18dt          | fd0H1440     | 1.44 MB raw floppy
  hd00 or dsk/c0t0d0s2  | hda          | First hard disk
  hd10 or dsk/c0t3d0s2  | hdb          | Second hard disk
  lp0                   | lp0          | Printer
  rct0 or rmt0          | st0          | Tape drive
  term/1                | tty1         | Terminal
  tty1a                 | cua0         | Serial port 1
  tty2A                 | ttys1        | Modem port 2

  echo "This message goes to the terminal" > /dev/tty02

This is a list of device names that you might see in the /dev directory. Notice that UNIX treats devices like files. We have already seen redirection to a file; note that you can treat the character device /dev/tty02 as a file and redirect output to it, as in the command above.

Device Listing (System V)

  $ ls -l /dev
  total 52
  1 root    sys     51,  0   Aug 31 07:28  cd0     (CD-ROM)
  2 bin     bin      2, 64   Feb 23 1997   fd0     (floppy)
  2 sysinfo sysinfo  1,  0   May  7 1996   hd00    (first hard disk)
  2 bin     bin      6,  0   Dec  5 14:12  lp0     (printer)
  2 root    root    50,  0   Aug 10 07:28  rct0    (tape drive)
  2 zach    term     0,  0   Oct 15 10:23  tty01   (terminal)
  2 bin     bin      5,  0   Apr 24 1996   tty1a   (serial port 1)
  2 bin     bin      5, 128  Feb 23 1997   tty1A   (modem port 1)

The first character of the permissions field (not shown above) denotes the device type, either block (b) or character (c). The two numbers before the date are the major and minor device numbers. The set of routines needed to operate a device is known as the device driver. When a particular device is called, the kernel calls the right device driver and passes some parameters for it
to act properly. The device numbers tell us what is going on. The major device number represents the device driver (the type of device). The minor number indicates the parameters that the kernel passes to the device driver; often it indicates special characteristics of the device. If you had two floppy drives, the minor device number would be used to distinguish between them. Device files are a special type of file and are of zero length.

Significance of Device Names

You can often (not always) identify a device from its file name. For example, /dev/dsk/f0q18dt represents a 3.5" floppy block device: it is bootable, of quad density, and has 18 sectors per track, i.e., a 1.44 MB diskette.

Device Management Organization

  Application
  User layer (API)
  File manager
  Device driver
  Hardware layer: device controller (command register, data register, status register)

This is similar to the slide showing a conventional file system and device interface from the VFS lecture. We now concentrate on the hardware layer, where the device controller is accessed. To simplify the discussion, the registers we will concentrate on are the command, data, and status registers. The command register stores the command requested of the device by the device driver. The status register indicates the status of the device (busy or done). The data register is a location where requested data is put before it is passed back to the calling application.

Device I/O with Polling

In this slide we show the polling method of I/O, broken down into three steps:

1. The user application issues a read() system call.
2. The system read function accesses the status register via the device interface. If the device is busy, the system function continuously polls the status register until it is not busy. When it is free, the read command is placed into the command register of the device controller. The busy flag is then set high. The
device is then polled again until it is not busy.
3. When the device is done with the read, the busy flag is set low and the data placed in the data register is handed back to the user program.

Device I/O with Interrupts

In this slide we show the interrupt method of I/O, broken down into steps:

1. The user application issues a read() system call.
2. The system read function accesses the status register via the device interface. If the device is busy, the system function polls the status register until it is not busy. When it is free, the read command is placed into the command register of the device controller. The busy flag is then set high.
3. Instead of polling the status register, the device driver places the command and associated information into the device status table. This frees up the CPU for other use while the read process is waiting for the data from the device.
4. When the device is complete, it interrupts the CPU and the interrupt handler is called.
5. The job of the interrupt handler is to find out which device caused the interrupt. When found, the device handler is called.
6. The device handler gets the stored I/O information from the device status table.
7. The device handler gets the data from the device data register and places it into the process's data area.
8. Control is returned to the application process.

CPU/I/O Delay

  Problem (concept):
    z = read(device);   /* z used immediately: may not be filled in yet */

  Solution (concept):
    z = read(device);
    while (!ready) ;    /* wait until the device signals the data is in */
    /* now use z */

This slide shows the possible problem with using a value read from a device before the value has been put into the data register. If program execution is allowed to continue without blocking the I/O (causing the process to wait until the data is read), the variable's data may be out of sync with the process. The
solution idea is to somehow wait for the I/O to complete before execution can proceed.

Memory-Mapped I/O

Memory-mapped I/O extends the memory address space to include devices, so that a device address becomes a logical memory address. In contrast with the memory-mapped I/O approach, the original scheme has separate logical memory and device addresses; further, a device address has two components, where one is the device address and the other selects one of the command, data, and status registers. The key to memory-mapped I/O is for ordinary memory instructions to be able to interact with the device registers. In the original approach, more instructions are needed to access the device's registers.

Direct Memory Access

Usually the data read or written from a device has to be passed via a CPU register before it is put into primary memory. With a DMA controller, the data read or written from the device is handed to the device controller and then goes directly into primary memory. Instead of the interrupt handler doing the work, the DMA controller accesses primary memory via the system bus, having knowledge of the appropriate block of memory where the data address is located. In the diagram, the dotted arrow shows the "sneak route" for direct memory access.

Hardware Buffering

Do you remember readers/writers? Hardware buffering is a technique useful for taking advantage of the relative speeds of process execution vs. I/O operations. Since I/O is much slower, it would be nice to have some device data in a controller buffer before it is needed, so that any subsequent requests for data via the device driver will be handled quickly: if the necessary data is available ahead of time, the usual wait is eliminated. Shown is input buffering; output buffering is similar, except that the direction of the arrows is reversed.

Double Buffering

Obviously the concept of double buffering can be extended to multiple buffers. Here
the device driver decides to play the game with its own buffer. This has the effect of forestalling the occurrence of an empty buffer, since it is like a bucket brigade for data bits: the device driver buffer, when exhausted, can still deplete the controller buffer before the controller has to access the device.

Taking Advantage of Buffers

For the process execution shown above, the I/O buffers are being accessed independently by the controller/driver to fill the buffers while the process is using the CPU. For some programs, maybe ones that do all of their I/O up front, this scheme would not help that much. For example, an image processing program that loads an image into memory first, before doing image manipulation in memory, would not have a good mix of compute-bound vs. I/O-bound execution.

API

- Interface functions available to user programs
- Abstraction of the device interface
- Make interfaces as similar as possible: block vs. character, sequential vs. direct access
- The device driver contains the details of the particular device, hidden below the API layer

A device API is a library of function calls for accessing a device without having to deal with the details of the particular device commands. Things get ugly when we look at the details of a device driver.

What Is a Device Driver?

A device driver is a separate software module that the kernel uses to accomplish I/O with a specific device. It can sometimes be an API that contains functions or device-specific commands that together implement the functionality of the interface.

Class: "That makes no sense to me; how about an example?"
Zach: "OK, let's look at a printer driver."

Printer Driver Example: Command Sequence

- Send an ESC command to initialize the printer
- Set the line spacing (e.g., ESC 3 n, where spacing = n/216 inches)
- Set the printing area
- Repeat for each line; repeat for each page
- End printing by sending the ESC command

(ESC is ASCII character number 27.)

Printer Driver: Parallel Interface Signals

  Pin   | Return Pin | Signal    | Direction |
Description
  1     | 19    | STROBE    | IN  | STROBE pulse to read data (5 µs minimum width on receive)
  2-9   | 20-27 | DATA 1-8  | IN  | These signals represent bits 1 to 8 of the parallel data
  10    | 28    | ACKNLG    | OUT | LOW when data has been received and the printer is ready for more
  11    | 29    | BUSY      | OUT | HIGH when the printer is busy
  12    | 30    | PE        | OUT | HIGH when the printer is out of paper
  13    | -     | SLCT      | OUT | HIGH when the printer is on
  14    | -     | AUTO FEED | IN  | When LOW, paper is auto-fed one line after printing
  15    | -     | NC        | -   | Not used
  16    | -     | GND       | -   | Logic ground level
  17    | -     | CHASSIS   | -   | Printer chassis ground
  18    | -     | NC        | -   | Not used
  19-30 | -     | GND       | -   | Twisted-pair return signal ground level
  31    | 16    | INIT      | IN  | When LOW, the printer is reset
  32    | -     | ERROR     | OUT | Becomes LOW when the printer is out of paper
  33    | -     | GND       | -   | Same as for pins 19-30
  34    | -     | NC        | -   | Not used
  35    | -     | +5V       | OUT | 5 V through 3.3 kΩ resistance
  36    | -     | SLCT IN   | IN  | DC1/DC3 code valid when HIGH

Driver/Kernel Interface

- The kernel may use modules to access device drivers (Linux)
- An abstract function layer is used; drivers implement the abstract interface
- The VFS illustrated the idea of modules for file systems; a similar concept applies to device drivers

A Driver Interface (UNIX Style)

  open     Prepare the device for operation
  close    No longer using the device
  ioctl    Character-device-specific info
  read     Character device input operation
  write    Character device output operation
  select   Character device: check for data

These are familiar. As we saw with the virtual file system, one API can be used to access files and devices alike. Instead of opening a file, the open command might check that a device is on and send some initialization commands to the device to get it ready for operation.

Some Devices

- Rotating magnetic disk (hard drive): surfaces, cylinders, sectors
- SSM/I satellite image data
- Sequential magnetic tape
- Serial port (RS-232)

A Look at Rotating Disk Optimization

- Transfer time: disk access time, surface to memory
- Disk latency time: wait for the sector to spin into position
- Disk seek time: time for the read/write head to move into position (cylinder)
- Access time = transfer + latency + seek

Seek Time
Optimization

- Seek time dominates access time
- Attempt to minimize seek time
- Look at several strategies: FCFS, SSTF, Scan (regular and circular), Look (regular and circular)

Virtual Memory: Backing Store and 64-bit VM (CSCI 3753)

Outline

- Swapping motivation
- Backing store
- Anonymous memory vs. files
- Buffer cache
- Swap daemon
- 64-bit VM issues

mmap() Example

  int fd = open(".cshrc", O_RDONLY);
  char *id = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);   /* new VM area */

We return to the mmap() example given last lecture. A user memory-maps their .cshrc file. This creates a new virtual memory area; this memory area is associated with the vnode representing .cshrc.

All Together, Revisited

  00010000   304K  read/exec        tcsh
  0006A000    24K  read/write/exec  tcsh
  00070000   424K  read/write/exec  [heap]
  FF190000   656K  read/exec        libc.so.1
  FF232000    32K  read/write/exec  libc.so.1
  FF290000   512K  read/exec        libnsl.so.1
  FF2D4000    40K  read/write/exec  libnsl.so.1
  FF330000    16K  read/exec        libc_psr.so.1
  FF340000    16K  read/exec        libmp.so.2
  FF350000     8K  read/write/exec  libmp.so.2
  FF370000    32K  read/exec        libsocket.so.1
  FF396000    16K  read/write/exec  libsocket.so.1
  FF3A0000     8K  read/exec        libdl.so.1
  FF3B0000   120K  read/exec        ld.so.1
  FF3DC000     8K  read/write/exec  ld.so.1
  FFBE8000    32K  read/write/exec  [stack]

We return to the example address space of tcsh, which shows its memory areas, their protections, and the files they are associated with.

Swapping Motivation

- Don't want everything in memory
- Can't keep everything in memory
- Move data to large, cheap storage (disk)

Swapping is when the kernel moves unneeded (or hopefully unneeded) memory to disk. Swapping allows a user to run programs whose address spaces are larger than physical memory: the data that's in use will reside in memory, while data that hasn't been used recently can be temporarily stored on disk. When the data is needed again, it can be moved back into memory. Understandably, moving data
to disk isn't something to be done lightly: even if the disk transfer can be done in 1 ms (only a small seek is needed), this is equivalent to 300,000 cycles on a 300 MHz machine.

Swapping Distinctions

- Data with backing store: file system files, code
- Data without backing store (goes to the swap partition): heap, stack, i.e., anonymous memory

Since swapping is such a costly operation, a few distinctions can be used for better performance. There are two basic sorts of memory areas: those with a corresponding structure on disk (say, a file) and those without (allocated memory on the heap, the stack, etc.). The latter is often called anonymous, as there's no file that names it. The distinction is important because of where the data can be stored on disk. Data with backing store can just be written out to disk, as if a user had issued a write. Anonymous memory doesn't have a place already reserved for it on disk; therefore most systems have swap partitions on disk where data can be stored when needed. The swap partition has a format decided by the kernel, with which pages can be easily written out.

VM Area Tree

Returning to the memory map of tcsh, we can see that the data segments, since they don't have backing files, are actually all anonymous memory.

Backing Store

- Extremely slow in comparison to memory
- Nonvolatile
- If something is going to be accessed, keep it in memory
- Don't always need the entire file (e.g., the X server): demand paging

Backing store has very different characteristics than memory. First, it's much, much slower: instead of taking at most 100 or so cycles, a write to backing store can easily take millions of cycles. However, unlike memory, backing store is nonvolatile: if the machine loses power, the data will still be there.

Because of the costs of moving data to and from backing store, it follows that needed data should be kept in memory, while data that isn't being used is more likely to be safe to move to disk. Similarly, if the data in memory represents a file, it's very possible
that the entire file isn't needed at once. For example, X Window servers are often very large executables but only need a small amount of their program text at any given time. Correspondingly, the often-used code will be kept in memory, while the rarely used code can be kept on disk and not loaded until needed. This technique is known as demand paging, as a page is only brought into memory when explicitly demanded by a process.

Buffer Cache

- Abstraction of access to storage
- Bookkeeping: reference counts, etc.

The buffer cache is a kernel system that presents an abstraction of backing store. When a piece of data is needed (actually, the page the data resides on), the process can request the buffer from the buffer cache. If the buffer is already in memory, the buffer cache can return a pointer to it immediately; if the buffer isn't in memory, then the buffer cache finds it in backing store, loads it, and returns the newly allocated buffer to the process. Bringing in a page from backing store is called a page fault.

TLB Miss, Again

  if page table entry exists:
      drop in TLB entry
  else:
      find appropriate segment
      if segment exists:
          get page from vnode (can block, hitting disk)
          add page table entry
          drop in TLB entry
      else:
          SIGSEGV

Returning to the TLB miss pseudocode, we can see when the buffer cache is used. When a piece of data is needed, the process can obtain it through the vnode, which knows enough about the backing store to request the corresponding buffer from the buffer cache. Of course, since the buffer cache might need to hit disk, this operation can block; another process can run while the disk request is serviced.

TLB Miss to the Buffer Cache

Here we step through an example TLB miss to the buffer cache, starting from

  char *file = (char *) mmap(...);
  char first = file[0];

with the file not yet in memory. Understandably, the virtual address corresponding to file[0] is not in the TLB, so the TLB miss handler runs: it finds the memory map (and from it the vnode), asks the vnode for the page, and the vnode requests the corresponding buffer from the buffer cache, which loads it from backing store.
Swap Daemon

- Moves pages to disk
- Woken when memory is low
- Page replacement algorithms

The question arises as to how and when pages are moved to disk. Most UNIX systems accomplish this with a swap daemon. When a process tries to get more memory but no pages are free, the process wakes up the swap daemon and goes to sleep. The swap daemon examines the pages in memory and uses some algorithm to decide which to move to disk and which to keep in memory. Exactly how the swap daemon accomplishes this task is a bit of black magic and very complicated (how does one tell every process that references a page that the page is no longer in memory, update the PTE, etc.?); we won't go into it here.

Page Replacement Algorithms

- FIFO
- Least Recently Used (LRU)
- Least Frequently Used (LFU)
- Working Set
- Page Fault Frequency
- Two-bit algorithm (MacOS)
- Page weighting (FreeBSD)

There are a variety of algorithms that can be used to decide which pages should be moved to disk, ranging from the simple (FIFO) to the impossible (LRU) to the complex (page weighting).

FIFO

First in, first out. Really easy, really problematic.
FIFO is perhaps the easiest algorithm, but as we'll show, it has very troublesome behavior.

Belady's Anomaly

- Page faults are expensive; we want to minimize faults
- Adding more memory should reduce faults... not so: Belady's Anomaly
- See also John Knight's PhD thesis

FIFO page replacement exhibits a behavior known as Belady's Anomaly: with FIFO, there are situations in which adding more free pages (memory) to the system can lead to a greater number of page faults. Since page faults are very expensive, we want to avoid this behavior as much as possible.

Page Reference String

A page reference string is a sequence of page accesses, used to test the behavior of a page replacement algorithm, for example: 1 2 4 5 3 2 1 5 4 1 2 3.

Here we'll show an example of Belady's anomaly: try FIFO with 3 and then with 4 pages of memory on the reference string

  1 2 3 4 5 1 2 3 5 1 4 5 1 2 3 4

Stepping through the reference string on the slides, each red dot signifies a point at which the larger set of buffers has one more page fault than the smaller one.
Least Recently Used

- Doesn't suffer from Belady's anomaly
- Larger page caches hold a superset of smaller ones
- Stack-based algorithms

Another seemingly simple algorithm, which intuitively seems smart, is least recently used, or LRU. LRU does not exhibit Belady's anomaly; it is something called a stack-based algorithm. A stack-based page replacement algorithm is one for which the set of pages held with n+1 available pages is always a superset of the set of pages held with n available pages. Since you're assured of having the n most recently referenced pages when you have n+1 pages, this algorithm does not suffer from Belady's anomaly.

LRU Problems

- Can't log the time of every memory access
- Priority queue of pages

LRU has a significant problem: for it to be correctly implemented, every memory access must update the time stamp on its corresponding page. This isn't possible without a great deal of expensive help from the architecture. In addition, there's the issue of maintaining the queue of pages. Keeping the queue ordered requires log(n) operations, where n is the number of pages, and this cost could be present on every memory reference. Similarly, if the pages are only ordered when needed (when the swap daemon wakes up), the cost of freeing k of the n pages would be n + k log(n), which is a harsh penalty on systems with a lot of memory.

Least Frequently Used

- Compute an average of usage over time
- Get rid of infrequently used pages
- Doesn't have to be exact

An alternate algorithm, LFU, considers the rate at which pages are referenced. Pages that are referenced very often are kept in memory, while those that are only rarely referenced are moved out to disk. This calculation doesn't necessarily have to be exact; one can place pages in classes of reference frequency.

LFU Problems

Brand new pages haven't been accessed many times. Consider the reference string 1 1 1 2 2 2 3 4 3 4 3 4: if a page has just been brought into memory, chances are it hasn't been referenced many times. Correspondingly, LFU will think it's
a good candidate to move out to disk. This has obvious problems, as shown by the reference string above.

Working Set

- Per-process allocation of pages (global vs. local allocation)
- Differently sized working sets, based on behavior
- One active process can't hog the pages
- Within a working set, use another algorithm

The previous page replacement policies have a global policy, that is, one which treats all pages in memory as equal citizens. While this strategy leads to good system-wide performance, it means that processes can harm each other's behavior by stealing pages: a process with a large address space that it skips across could use up most of the pages in the system. The working set algorithm uses a local allocation policy instead. While some sort of global policy must be used when processes conflict, the page daemon tries to keep some number of pages (the working set) in memory for each process. The justification is that while a process might have a large address space, most execution has excellent temporal locality: the program tends to be using only a small part of its address space at any time. The algorithm tries to determine how large a given process's working set is and provides that number of pages to it. If there aren't enough pages for the working sets of all the processes, then some processes can be swapped out in their entirety, to be restored later.

Page Fault Frequency

- Processes that fault a lot may be thrashing; try allocating more pages to them
- Based on the idea of fair use, not performance (performance would call for heavily faulting processes to get fewer pages)
- Problems with shifts of the working set

Page fault frequency (PFF) is a slightly different local allocation policy. The kernel keeps track of the last page fault of a process. When a process page faults, if the time since the last fault is less than a threshold, the process is allocated another page; if the time is above another threshold, a page is revoked from the process's working set. The PFF algorithm
is based on the idea of fair use, that is, that every process should have a certain level of performance, not that the system should process as much as possible. The latter strategy would call for heavily faulting processes to have fewer pages in their working set: since they're going to fault anyway, those pages might be best allocated to another process's working set.

PFF also exhibits problems when the working set shifts (when locality in the address space changes). The process will have a series of rapid page faults, which will cause more pages to be allocated. After this brief period, however, the page fault frequency might fall back to a normal level, but one frequent enough to prevent the working set from shrinking. This could cause the process's working set to grow without bound. There are more complex algorithms that deal with this situation, such as variable-interval sampled working set (VSWS), but we won't go into them here.

Page Weighting (FreeBSD)

- Performs statistical analysis of page usage through TLB misses and read/write calls
- Pages start at a middle weight; over time they drop in weight; usage increases weight
- Swap out the lowest-weight pages

FreeBSD uses a rather complex global page replacement algorithm. The kernel estimates page usage through I/O calls and TLB faults. A page enters the system with a medium weight (5). Use of the page increases its weight, up to 10. Occasionally, every page's weight is decremented. Pages that aren't used much will drop in weight, while frequently used ones will keep a high weight. When pages need to be swapped out, the swap daemon chooses the lowest-weight pages first.

64-bit Address Spaces

- 64 bits of address is 18,446,744,073,709,551,616 bytes (16 exabytes)
- Assuming 8K pages, at least 6 levels of page tables; with SPARC v8-style tables, 11 levels
- Need a new page table organization

64-bit architectures are becoming more common as processors develop. A 64-bit address space presents a variety of problems for virtual memory; notably, current page table
structures present a tremendous performance cost, as referencing a byte of memory might cause up to 11 page faults. Also, with 64 bits of address, process address spaces can be much sparser than before. Correspondingly, new page table organizations are needed.

Clustered Page Tables

Clustered page tables group sets of pages together with a single hash table entry. Having a single entry per page would require 3 words of data for every page: one for the hash table entry tag, one for the link to the next entry, and one for the entry itself. This is a lot of overhead. By using small groups of pages, this overhead is amortized over the group; for example, if 16-page groups are used, there are 18 words of overhead, or 1.125 words per page, little more than a standard forward-mapped scheme. Since the hash table can grow as needed, not much memory need be wasted while still providing good performance.

