Operating Systems Design & Construction (CMPSC 473)
These Class Notes were uploaded by Isabel Eichmann on Sunday, November 1, 2015. They belong to CMPSC 473 at Pennsylvania State University, taught by Staff in Fall. Since upload, they have received 18 views. For similar materials see /class/233061/cmpsc-473-pennsylvania-state-university in Computer Science at Pennsylvania State University.
Operating Systems (CMPSC 473): Memory Management
March 4, 2008. Lecture 14. Instructor: Trent Jaeger.
Last class: deadlocks. Today: memory management.

Binding of Programs to Addresses

Address binding of instructions and data to memory addresses can happen at three different stages:
- Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
- Load time: relocatable code must be generated if the memory location is not known at compile time.
- Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).

Loading User Programs

[Figure: the path from source program to a process in memory, with binding possible at compile time (compiler or assembler), load time (loader), and execution/run time.]

Logical and Physical Addresses

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
- Logical address: generated by the CPU; also referred to as a virtual address.
- Physical address: the address seen by the memory unit.
Logical and physical addresses are the same in the compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Memory Management Unit (MMU)

[Figure: the CPU issues logical address 346; the MMU adds the relocation register (14000) to produce physical address 14346, which goes to memory.]

Need for Memory Management

Physical memory is limited:
- A single process may not all fit in memory.
- Several processes (their address spaces) also need to fit at the same time.

Swapping

[Figure: process P1 is swapped out of user space in main memory to a backing store, and process P2 is swapped in.]

If something (either part of a process, or one of multiple processes) does not fit in memory, it has to be kept on disk. Whenever it needs to be used by the CPU, to fetch the next instruction or to read or write a data word, it has to be brought into memory from disk; consequently, something else needs to be evicted from memory. Such transfer between memory and disk is called swapping.
- The OS must disallow one process from accessing another process's memory.
- Usually a separate portion of the disk, referred to as swap space, is reserved for swapping.
- Note that swapping is different from any explicit file I/O (reads or writes) that your program may contain; typically, swapping is transparent to your program.

Early Days: Overlays

[Figure: an overlay region in memory, with overlays kept on disk.] Overlays were managed explicitly by the application.

Even if a process fits entirely in memory, we do not want to swap whole processes in and out on every switch: context switching would be highly inefficient, and it defeats the purpose of multiprogramming.

Need for Multiprogramming

Say your program does explicit file I/O (reads or writes) for a fraction f of its execution time. Then, with p processes resident, CPU efficiency = 1 - f^p. To maintain high CPU efficiency we need to increase p. But, as we just saw, these processes cannot all be on disk: we need to keep as many of them in memory as possible. So even if we cannot keep all of each process in memory, keep the essential parts of as many processes as possible there. (We will get back to this at a later point.)

Memory Allocation

[Figure: main memory divided into allocated regions and free regions (holes), with a queue of waiting requests/jobs.]

Question: how do we perform this allocation? Goals:
- Allocation and free should be fairly efficient.
- We should be able to satisfy as many requests as possible at any time, i.e., the sum total of the holes should be close to 0 while requests are waiting.

Solution Strategies
- Contiguous allocation: the requested size is granted as one big contiguous chunk. Examples: first fit, best fit, worst fit, the buddy system.
- Non-contiguous allocation: the requested size is granted as several pieces, typically each of the same fixed size. Example: paging.

Contiguous Allocation

Data structures: a queue of requests of different sizes, plus lists of allocated regions and holes. To allocate, find a hole and carve the request from it; this may leave a smaller hole. Eventually you may get many holes that are so small that they cannot be allocated individually. This is called external fragmentation.
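The MMU translation described earlier (logical address 346 plus relocation register 14000 gives physical address 14346) can be sketched in a few lines. This is an illustrative sketch, not course code; the limit-register check and its value are assumptions based on the base-and-limit hardware mentioned under execution-time binding.

```python
# Sketch of execution-time address translation with base (relocation) and
# limit registers. 14000/346 mirror the lecture's figure; the limit value
# is an assumption.

RELOCATION_REGISTER = 14000   # base: where the process sits in physical memory
LIMIT_REGISTER = 16000        # assumed size of the process's logical space

def translate(logical: int) -> int:
    """Map a CPU-generated logical (virtual) address to a physical address."""
    if not 0 <= logical < LIMIT_REGISTER:
        # Real hardware would raise an addressing-error trap here.
        raise MemoryError(f"logical address {logical} outside limit")
    return RELOCATION_REGISTER + logical

print(translate(346))   # 14346, as in the figure
```

Because every access goes through the relocation register, the process can be moved: the OS only has to copy its memory and reload the register.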
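The efficiency formula in the notes, CPU efficiency = 1 - f^p (each of p resident processes off the CPU doing I/O for an independent fraction f of the time), can be checked numerically. The function name is mine, not the course's, and the independence of I/O waits is the standard simplifying assumption.

```python
# If every process is off the CPU (doing I/O) for an independent fraction f of
# the time, the CPU idles only when all p resident processes wait at once, so
#     efficiency = 1 - f**p

def cpu_efficiency(f: float, p: int) -> float:
    return 1.0 - f ** p

# Keeping more processes resident raises efficiency, which is the argument
# for multiprogramming made in the notes:
for p in (1, 2, 4, 8):
    print(p, round(cpu_efficiency(0.8, p), 3))
```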
Easing External Fragmentation

[Figure: compaction, shuffling the allocated regions together so the holes merge into one large free region.] Note that this can be done only with relocatable code and data (use indirect/indexed/relative addressing).

Contiguous Allocation: Choosing a Hole

Which hole should be allocated for a given request?
- First fit: search through the list of holes and pick the first one that is large enough to accommodate the request. Allocation is easy, but it may not be very efficient in terms of fragmentation.
- Best fit: search through the entire list to find the smallest hole that can accommodate the request. This requires searching the entire list, or keeping it in sorted order, and it can result in very small leftover holes, making it undesirable.
- Worst fit: pick the largest hole and break that. The goal is to keep the remaining holes as large as possible. Allocation is again quite expensive (search the entire list, or keep it sorted).

What do you do on a free? You need to check whether the nearby regions on either side are also free, and if so, merge them into a larger hole. This requires searching through the entire list, or at least keeping the holes sorted in address order.

Costs:
- Allocation: keep the list in sorted size order, or search the entire list each time.
- Free: keep the list in sorted address order, or search the entire list each time.

Buddy System

O(log N) cost for allocation and free.

[Figure: an example starting from a 1 MB block, with a sequence of requests (100K, 240K, 256K, ...) and releases (release B, release C, release E, release D, ...) showing blocks being split into power-of-two buddies and coalesced on release.]

Slab Allocator
- A slab is one or more physically contiguous pages; a cache consists of one or more slabs.
- There is a single cache for each unique kernel data structure, and each cache is filled with objects: instantiations of that data structure.
- When a cache is created, it is filled with objects marked as free; when structures are stored, objects are marked as used.
- If a slab is full of used objects, the next object is allocated from an empty slab; if there are no empty slabs, a new slab is allocated.
- Benefits include no fragmentation and fast satisfaction of memory requests.

[Figure: kernel objects of different sizes (e.g., 3 KB and 7 KB) served from caches, each cache consisting of slabs of physically contiguous pages.]

Summary: Memory Management
- Physical memory is a limited resource: keep key process pages in memory (swapping now; paging later).
- Memory allocation needs high performance and should minimize fragmentation.

Next time: paging.
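The three hole-selection policies described under contiguous allocation can be sketched over a simple free list. The (start, size) hole representation and the function names are illustrative assumptions, not the course's data structures.

```python
# First/best/worst fit over a list of free holes, each a (start, size) pair.

def pick_hole(holes, size, policy):
    """Index of the hole to carve the request from, or None if none fits."""
    fits = [i for i, (_, s) in enumerate(holes) if s >= size]
    if not fits:
        return None
    if policy == "first":
        return fits[0]                               # first big-enough hole
    if policy == "best":
        return min(fits, key=lambda i: holes[i][1])  # smallest big-enough hole
    if policy == "worst":
        return max(fits, key=lambda i: holes[i][1])  # largest hole
    raise ValueError(policy)

def allocate(holes, size, policy):
    """Return (start, new_holes); splitting may leave a smaller hole behind."""
    i = pick_hole(holes, size, policy)
    if i is None:
        return None, holes
    start, s = holes[i]
    rest = holes[:i] + holes[i + 1:]
    if s > size:                  # leftover sliver: the seed of fragmentation
        rest.insert(i, (start + size, s - size))
    return start, rest

holes = [(0, 100), (200, 50), (400, 300)]
print(allocate(holes, 40, "first"))  # carves the 100-unit hole at 0
print(allocate(holes, 40, "best"))   # carves the 50-unit hole at 200
print(allocate(holes, 40, "worst"))  # carves the 300-unit hole at 400
```

Best fit leaves a 10-unit sliver at 240 in this example, illustrating the notes' point that very small leftover holes make it undesirable.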
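A minimal buddy-system sketch, assuming power-of-two block sizes and the usual XOR trick for locating a block's buddy. The class and method names are hypothetical, and the request sizes echo the slide's 1 MB example rather than reproduce its exact trace.

```python
# Buddy allocator over a power-of-two arena: requests round up to a power of
# two, larger free blocks split into buddy pairs, and freed blocks coalesce
# with their buddy when possible, giving O(log N) allocate and free.

def round_up_pow2(n):
    size = 1
    while size < n:
        size <<= 1
    return size

class Buddy:
    def __init__(self, total):
        self.total = total               # arena size, a power of two
        self.free = {total: [0]}         # block size -> offsets of free blocks

    def alloc(self, n):
        size = round_up_pow2(n)
        s = size
        while s <= self.total and not self.free.get(s):
            s <<= 1                      # smallest free block that is >= size
        if s > self.total:
            return None                  # no block large enough
        offset = self.free[s].pop()
        while s > size:                  # split, freeing each upper buddy
            s >>= 1
            self.free.setdefault(s, []).append(offset + s)
        return offset, size

    def free_block(self, offset, size):
        while size < self.total:
            buddy = offset ^ size        # buddy's offset differs in one bit
            peers = self.free.get(size, [])
            if buddy not in peers:
                break
            peers.remove(buddy)          # coalesce into the parent block
            offset = min(offset, buddy)
            size <<= 1
        self.free.setdefault(size, []).append(offset)

arena = Buddy(1024 * 1024)               # 1 MB, as in the slide's example
a = arena.alloc(100 * 1024)              # 100K request -> 128K block
b = arena.alloc(240 * 1024)              # 240K request -> 256K block
print(a, b)
```

Rounding 100K up to 128K shows the price of the buddy system: internal fragmentation inside blocks, traded for cheap splitting and coalescing.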