Operating Systems (CS 4414)
University of Virginia, taught by Sang Son
Class notes (48 pages), uploaded by Mrs. Carolyne Abbott, September 21, 2015

CS 414 Operating Systems, University of Virginia, Department of Computer Science, Spring 2008

Topic 2: Processes

- Context switching: saving the state (to where?) and restoring it. What gets saved? Everything that the next process could damage:
  - program counter, processor status word, registers
  - all of memory, all disks, all the stuff on tape
- Options for memory contents: (a) trust the next process; (b) move everything to disk (Alto system); (c) memory protection.
- Why tricky? All machines provide some special hardware support:
  - Motorola 68000: hardware just moves the PC and status word to the stack; the OS then handles the rest of the state itself. Must be done carefully. Why?
  - Intel 432: hardware does all state saving and restoring into the process control block, and even dispatching (a hardware dispatcher). Desirable?
  - Ugly issue of performance: sometimes making dirty shortcuts.
- Birth of a process:
  A. Creating it from scratch:
    - Allocate memory.
    - Load code and data into memory.
    - Create an empty call stack.
    - Create and initialize the process control block.
    - Make the process known to the dispatcher.
  B. Forking: copying an existing process:
    - Make sure the process to be copied isn't running and has all state saved.
    - Make a copy of its code, data, and stack.
    - Copy the PCB into the new PCB.
    - Make the process known to the dispatcher. What's missing?
- Unix fork (when the user typed "ls"):

    pid = fork();
    if (pid < 0) {
        fprintf(stderr, "Fork Failed");
        exit(-1);
    } else if (pid == 0) {
        execlp("/bin/ls", "ls", NULL);
    } else {
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }

- Independent process: neither affects nor is affected by the rest of the universe.
  - Its state is not shared in any way with other processes.
  - Deterministic and reproducible: scheduling does not affect the result.
  - There are many different ways in which a collection of independent processes might be executed on a processor. How often are processes independent?
- Cooperating processes:
  - Cooperating processes share state (they may or may not actually be "cooperating").
  - Nondeterministic: the result depends on the relative execution sequence, so it is irreproducible.
  - Example: one process writes "ABC" to the monitor while another writes "CBA".
- Why allow processes to cooperate?
  - To share resources: one computer, many users; one file of checking-account records, many tellers.
  - To do things faster: read the next block while processing the current one; divide a job into sub-jobs and execute them in parallel.
- The basic assumption for cooperating-process systems is that the order of some operations is irrelevant: certain operations are independent of certain other operations, and only a few things matter.
  - Example: A = 1; B = 2; has the same result as B = 2; A = 1;.
  - Another example: A = B + 1; B = 2 * B; can't be reordered.
  - Another example: suppose A = 1 and A = 2 are executed in parallel; this is a race condition (see the sketch below).
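The race just mentioned is easy to demonstrate. Here is a minimal sketch (my addition, not from the notes) using POSIX threads; the variable name a and the two assignments mirror the example above:

    #include <pthread.h>
    #include <stdio.h>

    int a = 0;                                /* shared, unsynchronized */

    void *set1(void *arg) { a = 1; return NULL; }
    void *set2(void *arg) { a = 2; return NULL; }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, set1, NULL);
        pthread_create(&t2, NULL, set2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a = %d\n", a);                /* 1 or 2, depending on interleaving */
        return 0;
    }

Run it repeatedly and the answer can differ from run to run: exactly the nondeterminism the notes describe.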
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Fall 2005

Topic 14: Virtual Memory, Demand Paging, and Working Sets

- Readings for this topic: Ch. 9.
- Separation of the programmer's view of memory from the system's view, using a mapping mechanism: each sees a different organization. This makes it easier for the OS to shuffle users around, and it simplifies memory sharing between users.
- However, until now a user process had to be completely loaded into memory before it could run. Is that absolutely necessary?
- What is virtual memory? A technique that allows ___. Is it the same as demand paging? Introduce the third party (the backing store) into the scene.
- The idea is to produce the illusion of a disk as fast as main memory. The reason this works is ___. What is the 90/10 rule? Is the past a good predictor of the future? What if processes access pages randomly?
- Two issues:
  - How does it work? Mechanisms to handle a reference to a page that is only in the backing store.
  - Scheduling decisions: when to move which pages between memory and disk.
- Basic mechanism: what to do when the page is not in memory.
  - First, extend the page tables with an extra bit, "present". If present isn't set, a reference to the page results in a trap. What is this special trap called?
  - Any page not in main memory right now has the present bit cleared in its page-table entry.
  - When a page fault occurs: the operating system brings the page into memory; the page table is updated (the present bit is set); the process continues execution.
  - Continuing the process is very tricky. Why? Can the instruction just be skipped? Try to re-execute the instruction from the beginning; what problems arise? Without additional information from the hardware, it may be impossible to restart a process after a page fault (non-virtualizable machines).
- Scheduling decisions:
  - Page selection: when to bring pages into memory.
  - Page replacement: which pages should be thrown out, and when.
- Page selection algorithms:
  - Demand paging: start the process with no pages loaded; load a page when a page fault for it occurs, i.e., wait until it absolutely MUST be in memory.
  - Request paging: let the user say which pages are needed. What's wrong with this?
  - Prepaging: bring a page into memory before it is referenced. Hard to do effectively without a prophet; may spend a lot of time doing wasted work.
- Page replacement algorithms:
  - Random: pick any page at random (works surprisingly well).
  - FIFO: throw out the page that has been in memory the longest. The idea is to be fair and give all pages equal residency.
  - MIN: throw out the page that won't be used for the longest time into the future. This requires a prophet, so it isn't practical, but it is good for comparison. As always, the best algorithm arises if we can predict the future.
  - LRU: throw out the page that hasn't been used in the longest time; use the past to predict the future. With high locality it's a good approximation to MIN.
  - Example: try the reference string A B C A B D A D B C B, assuming three page frames of physical memory.
  - MIN is optimal (it can't be beaten), but the principle of locality states that past behavior predicts future behavior, so LRU should do just about as well.
  - Belady's anomaly (for non-stack algorithms).
- Implementing LRU: some form of hardware support is needed to keep track of which pages have been used recently.
  - Perfect LRU: keep a register for each page and store the system clock into that register on each memory reference. To replace a page, scan through all the registers to find the one with the oldest clock. This is expensive if there are a lot of memory pages.
  - In practice nobody implements perfect LRU. Instead we settle for an approximation that is efficient: just find an old page, not necessarily the oldest.
- Clock algorithm (also called the second-chance algorithm): keep a "use" bit for each page frame; the hardware sets the bit on every memory reference, and the operating system clears the bits from time to time to figure out how often pages are being referenced. To find a page to throw out, the OS circulates through the physical frames, clearing use bits, until it finds a frame whose bit is already zero; it uses that one.
  - Some systems also use a "dirty" bit to give preference to keeping dirty pages, because it is more expensive to throw out dirty pages: clean ones need not be written back to disk.
  - What does it mean if the clock hand is sweeping very slowly? Very fast?
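The clock sweep described above fits in a few lines of C. A minimal sketch (my addition; the frame count and the use_bit array are illustrative, and a real kernel would read the use bits out of page-table entries):

    #include <stdbool.h>

    #define NFRAMES 64

    static bool use_bit[NFRAMES];   /* set by "hardware" on each reference */
    static int hand = 0;            /* clock hand: next frame to examine  */

    /* Pick a victim frame, giving each recently used frame a second chance. */
    int clock_select_victim(void) {
        for (;;) {
            if (!use_bit[hand]) {          /* not referenced since last sweep */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            use_bit[hand] = false;         /* clear bit: second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }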
- Three different styles of replacement:
  - Global replacement: all pages from all processes are lumped into a single replacement pool, and each process competes with all the others for page frames.
  - Per-process replacement: each process has a separate pool of pages, and a page fault in one process can only replace one of that process's frames. This relieves interference from other processes.
  - Per-job replacement: lump all processes for a given user into a single replacement pool.
- In per-process and per-job replacement there must be a mechanism for slowly changing the allocation to each pool; otherwise memory can end up being used very inefficiently.
- Global replacement provides the most flexibility, but the least protection against "pigs".
- Thrashing: consider what happens when memory gets overcommitted.
  - Suppose there are many users and that, between them, their processes are making frequent references to 50 pages, but memory has 49 pages.
  - Each time one page is brought in, another page whose contents will soon be referenced is thrown out.
  - How serious is this? Compute the average memory access time.
  - The more serious, real problem: the system will spend all of its time reading and writing pages. It will be working very hard but not getting anything done.
  - Thrashing occurs because the system doesn't know when it has taken on more work than it can handle. LRU mechanisms order pages in terms of last access, but they don't give absolute numbers indicating which pages must not be thrown out.
  - What can be done?
    - If a single process is too large for memory, there is nothing the OS can do: that process will simply thrash.
    - If the problem arises from the sum of several processes: figure out how much memory each process needs, and change scheduling priorities to run processes in groups whose memory needs can be satisfied (shed load).
- Working sets (proposed by Peter Denning). An informal definition: the working set is the collection of pages that a process is working with, which must thus be resident if the process is to avoid thrashing. The idea is to use the recent needs of a process to predict its future needs.
  - Choose tau, the working-set parameter. At any given time, all pages referenced by a process in its last tau seconds of execution are considered to comprise its working set.
  - A process will never be executed unless its working set is resident in main memory; pages outside the working set may be discarded at any time.
- Working sets are not enough by themselves to make sure memory doesn't get overcommitted; we must also introduce the idea of a balance set.
  - If the sum of the working sets of all runnable processes is greater than the size of memory, refuse to run some of the processes for a while.
  - Divide the runnable processes into two groups, active and inactive. When a process is made active, its working set is loaded; when it is made inactive, its working set is allowed to migrate back to disk. The collection of active processes is called the balance set.
  - Some algorithm must be provided for moving processes into and out of the balance set. What happens if the balance set changes too frequently?
- Problem with working sets: the working-set information must constantly be updated. One of the initial ideas was to store a capacitor with each memory page: the capacitor would be charged on each reference, then discharge slowly if the page wasn't referenced; tau would be determined by the size of the capacitor.
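The tau rule above is simple enough to state in code. A minimal sketch (my addition; NPAGES, TAU, and last_ref are illustrative names, and a real OS would sample use bits rather than timestamp every reference):

    #define NPAGES 256
    #define TAU    10000UL          /* working-set parameter, in clock ticks */

    unsigned long last_ref[NPAGES]; /* updated on each reference to the page */

    /* A page is in the working set iff referenced within the last TAU ticks. */
    int in_working_set(int page, unsigned long now) {
        return now - last_ref[page] <= TAU;
    }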
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Fall 2005

Topic 7: Threads and Messages

- Threads
  - Each process has (a) a PC, a stack, an address space, and (b) a single thread of control.
  - Is it desirable to have multiple threads of control that share a single address space? Example: a file server and a database server.
  - Threads are like "mini" processes, called lightweight processes: each thread has its own PC and stack, and runs strictly sequentially.
  - Different threads are not as independent as different processes. Why?
  - What threads offer:
    - responsiveness via parallelism: respond while performing other tasks, retaining the idea of sequential processes using blocking system calls;
    - efficiency: fast thread creation and reduced context-switching overhead;
    - easy software development: coordinating processes;
    - sharing: easy to share data.
- User threads vs. kernel threads
  - User threads (Nachos): supported by user-level thread libraries; the kernel has no knowledge of them; fast to create and manage (no kernel support); all threads share their process's time slice.
  - Kernel threads (most OSes support them): handling preemption is easier at kernel level; thread creation/scheduling is more expensive; the number of threads is typically limited.
- Multithreading models: M-to-1, 1-to-1, M-to-M.
- Thread pools: thread creation is expensive even with user threads, so keep a pool of idle threads available; starting a "new" thread is faster, and the pool also limits the number of threads. Example: a web server.
- Thread libraries: Pthreads (IEEE POSIX standard), WIN32 for Windows, Java threads.
- Message communication between processes/threads
  - Communication using shared data vs. without shared data.
  - Components of message communication:
    - Message: a piece of information that is passed from one process to another.
    - Mailbox: a place where messages are stored until they are received.
  - Operations:
    - Send: copy a message into the mailbox. What if the mailbox is full?
    - Receive: copy a message out of the mailbox and delete it from the mailbox. What if the mailbox is empty?
  - Is there really no sharing?
- Two general styles of message communication:
  - 1-way: messages flow in a single direction (Unix pipes, or producer/consumer style).
  - 2-way: messages flow back and forth (remote procedure call, or client/server style).
- Producer & consumer example (1-way):

    /* Producer */
    int msg1[1000];                  /* area to prepare the stuff to send */
    while (true) {
        /* ...prepare msg1... */
        send(msg1, mbox);
    }

    /* Consumer */
    int msg2[1000];                  /* place to receive a message */
    while (true) {
        receive(msg2, mbox);
        /* ...process msg2... */
    }

- Client & server example (2-way):

    /* Client */
    char response[1000];
    send("read my_file", mbox1);
    receive(response, mbox2);

    /* Server */
    char command[100];
    char answer[1000];
    receive(command, mbox1);
    /* decode command, read my_file into answer */
    send(answer, mbox2);

- Note that this looks a lot like a procedure call & return. Analogs between procedure calls and message operations:
  - Parameters: the request message ("read my_file").
  - Result: the return message (the contents of my_file).
  - Name of procedure: mbox1.
  - Return address: mbox2. The return address is hardwired in this example; any problem?
- Why use messages?
  - Many applications fit into the model of processing a sequential flow of information.
  - The communicating parties can be totally separate, except for the mailbox. Less error-prone, because there are no invisible side effects.
  - They might not trust each other (OS vs. user); they might have been written at different times by different programmers; they might be running on different processors on a network.
- Message systems vary along several dimensions:
  - Relationship between mailboxes and processes: one mailbox per process (use the process name in send, no name in receive), or no strict mailbox/process association (use a mailbox name).
  - Extent of buffering: buffering (efficient transfers when sender and receiver run at different rates) or none (rendezvous protocols: simple, OK for call/return-type communication).
  - Waiting: blocking vs. non-blocking operations. Blocking receive: return a message; if the mailbox is empty, wait until a message arrives.
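The send/receive operations above are schematic. As a concrete 1-way example (my addition, not from the notes), a Unix pipe gives exactly this producer/consumer pattern, with the kernel's pipe buffer playing the mailbox role:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); exit(1); }
        if (fork() == 0) {                         /* consumer process */
            char msg[32];
            close(fd[1]);
            while (read(fd[0], msg, sizeof msg) > 0)   /* "receive" */
                printf("got: %s\n", msg);
            exit(0);
        }
        close(fd[0]);                              /* producer process */
        for (int i = 0; i < 3; i++) {
            char msg[32];
            snprintf(msg, sizeof msg, "item %d", i);
            write(fd[1], msg, sizeof msg);         /* "send" */
        }
        close(fd[1]);                              /* consumer's read returns 0 */
        wait(NULL);
        return 0;
    }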
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Fall 2005

Topic 8: CPU Scheduling

- Readings for this topic: Chapter 5.
- Resources fall into two classes: preemptible vs. non-preemptible. Is this distinction absolute? What is the real issue?
- The OS makes two related kinds of decisions about resources. What is the goal?
  - Allocation: who gets what? Implication?
  - Scheduling: how long can they keep it? Implication?
- Processes may be in any one of three general scheduling states: Running, Ready, or Blocked.
- Goals for scheduling disciplines:
  - Efficiency of resource utilization: keep the CPU and disks busy.
  - Minimize overhead (context switching).
  - Minimize response time. Define "response time".
  - Distribute cycles equitably. What does this mean?
- FIFO (also called FCFS): run until finished. Usually "finished" means "blocked". Problem? Solution: limit the maximum amount of time that a process can run without a context switch. This time is called a time slice.
- Round Robin: run a process for one time slice, then move it to the back of the queue. Each process gets an equal share of the CPU. What if the slice isn't chosen carefully?
  - Too long? Too small?
  - Originally Unix had 1-second time slices. Right size? Most systems today use time slices of 10,000 to 100,000 instructions.
- How do we implement priorities in RR? Is RR always better than FIFO?
- What is the best we can do? Is there a "perfect" scheduling algorithm? STCF: shortest time to completion first, with preemption. In what sense is it the best?
- Example: two processes, one doing 1 ms of computation followed by 10 ms of I/O, the other doing all computation.
  - Suppose we use a 100 ms time slice: the I/O process only runs at 1/10th speed, and the I/O devices are only utilized 10% of the time.
  - Suppose we use a 1 ms time slice: then the compute-bound process gets interrupted 9 times unnecessarily for each valid interrupt.
  - STCF works well. Why not use STCF?
- Rule of thumb: give the most to those who need the least. What's the idea here? The strategy?

Dynamic Storage Allocation: CS 414 Operating Systems, Spring 2008 (slides)

Memory Allocation
- Static allocation (fixed in size): sometimes we create data structures that are fixed and don't need to grow or shrink.
- Dynamic allocation (change in size): at other times we want to increase and decrease the size of our data structures to accommodate changing needs.
- Often, real-world problems mean that we don't know how much space to declare, as the number needed will change over time.

Static Allocation
- Done at compile time.
- Global variables: variables declared ahead of time, such as fixed arrays.
- Lifetime: the entire runtime of the program.
- Advantage: efficient execution time.
- Disadvantage: if we declare more static data space than we need, we waste space; if we declare less static space than we need, we are out of luck.

Dynamic Allocation
- Done at run time.
- Data structures can grow and shrink to fit changing data requirements. We can allocate (create) additional storage whenever we need it, and deallocate (free/delete) dynamic space whenever we are done with it.
- Advantage: we can always have exactly the amount of space required, no more and no less.
- For example, with pointers to connect them, we can use dynamic data structures to create a chain of data structures called a linked list.
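In C the two styles look like this (a minimal sketch of my own, not from the slides):

    #include <stdio.h>
    #include <stdlib.h>

    int table[100];               /* static allocation: size fixed at compile time */

    int main(void) {
        int n;
        if (scanf("%d", &n) != 1) return 1;
        int *dyn = malloc(n * sizeof *dyn);  /* dynamic: size chosen at run time */
        if (!dyn) return 1;
        for (int i = 0; i < n; i++) dyn[i] = i;
        printf("static holds %zu ints; dynamic holds %d\n",
               sizeof table / sizeof *table, n);
        free(dyn);                /* deallocate when done */
        return 0;
    }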
Memory Management of a Process
- The memory of a process is divided into the following parts (the logical memory components):
  A. A space for the code of the program.
  B. A space for the data (global variables).
  C. The stack, for local variables (for example, parameter passing during function calls).
  D. The heap, for the dynamic variables.
- Local variables vs. user-allocated variables:
  - Local variables: allocated on the stack within the scope of a method. Lifetime: the duration of a procedure activation, i.e., as long as that method is active. Advantage: efficient storage.
  - User-allocated variables: allocated on the heap. Lifetime: until the user deletes them or until they are garbage collected. Advantage: permits the creation and use of dynamic structures like linked lists, etc.

Why isn't static allocation sufficient?
- Recursive procedures; complex data structures.
- If all storage must be reserved in advance (statically), then it will be used inefficiently: enough will be reserved to handle the worst possible case.
- The OS cannot predict how many jobs there will be, which programs will be run, or when a process will come and ask for storage space.

Dynamic Memory Management
- Dynamic memory allocation is performed by the operating system when the program is executing and the program (user) sends a request for additional memory.
- Three issues need to be addressed by the operating system:
  - allocating variable-length dynamic storage when needed;
  - freeing up storage when requested;
  - managing the storage, e.g., reclaiming freed-up memory for later use.

Dynamic Storage Allocation Methods
- Stack allocation: used to allocate local variables; grown and shrunk on procedure calls and returns.
- Heap allocation: used to allocate dynamic objects; heap objects are accessed with pointers.

Stack-Based Allocation
- Memory allocation and freeing are partially predictable. Restricted, but simple and efficient.
- Allocation is hierarchical: memory is freed in the opposite order of allocation. If alloc(A), then alloc(B), then alloc(C), it must be free(C), then free(B), then free(A).
- Procedure calls: the main program calls X, which calls Y. Each call pushes another stack frame on top of the stack; each stack frame has space for variables, parameters, and return addresses.
- A stack-based organization keeps all the free space together in one place.
  [Slide figure: stack frames after the call to X, after the call to Y, after exiting Y, and after exiting X; each frame holds variables, parameters, and a return address.]

Heap Organization
- Allocation and release are unpredictable. Heaps are used for arbitrary list structures and complex data organizations. More general, but less efficient.
- Example: a payroll system. We do not know when employees will join and leave the company, and we must be able to keep track of all of them using the least possible amount of storage. What if we implement it using a stack?
- Memory consists of allocated areas and free areas (holes). Goal: reuse the space in holes, keeping the number of holes small.
- Problem: fragmentation! Holes may be too small to be useful for any allocation, giving inefficient use of memory.
  [Slide figure: a memory map with regions shaded "memory in use" and "memory not in use".]

Fragmentation and Compaction
- External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation: allocated memory may be slightly larger than the requested memory, leaving holes inside the allocated block.
- Compaction: reduce external fragmentation by shuffling memory contents to place all free memory together in one large block.

Heap Allocation Methods
- Typically, heap allocation schemes use a free list to keep track of the storage that is not in use. Algorithms differ in how they manage the free list.
- Best Fit: keep a linked list of free blocks; search the whole list on each allocation; choose the block that comes closest to matching the needs of the allocation; save the excess for later; merge adjacent free blocks during release operations.
- First Fit: keep a linked list of free blocks; scan the list for the first block that satisfies the needs of the allocation; merge adjacent free blocks during release operations.
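A compact sketch of the first-fit search just described (my code; the block-header layout is illustrative, and splitting the excess and merging on free are elided):

    #include <stddef.h>

    struct block {
        size_t size;             /* bytes available in this free block */
        struct block *next;      /* next block on the free list */
    };

    struct block *free_list;     /* assumed initialized elsewhere */

    /* First fit: take the first block big enough. */
    void *ff_alloc(size_t want) {
        struct block **pp = &free_list;
        for (struct block *b = free_list; b; pp = &b->next, b = b->next) {
            if (b->size >= want) {
                *pp = b->next;   /* unlink it; splitting elided for brevity */
                return (void *)(b + 1);   /* payload follows the header */
            }
        }
        return NULL;             /* no hole large enough: external fragmentation */
    }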
Heap Allocation Examples
[Slide figures: step-by-step memory layouts for the Best Fit and First Fit examples, showing a row of allocated and free blocks (sizes such as 8, 42, 8, 50, 12, 16, 16, 16, 48, 16, 16, 40, 24, 6, 18) before and after each request is satisfied; shading marks memory in use vs. not in use.]

Comparing BF and FF
- Is Best Fit always better than First Fit? It depends. Try the following scenario: memory contains two free blocks, of sizes 20 and 15.
  - Suppose the allocation requests are 10, then 20. Which will win?
  - Suppose the requests are 8, 12, then 12. Which will win?

Other Heap Allocation Methods
- Worst Fit: keep a linked list of free blocks; search the whole list on each allocation; choose the block that worst matches the request (the largest); save the excess for later; merge adjacent free blocks during release operations.
  [Slide figure: Worst Fit example on the same memory layout.]
- Next Fit: keep a linked list of free blocks; remember where the last search left off, and scan the list from there for the first block that satisfies the allocation; merge adjacent free blocks during release operations.
  [Slide figure: Next Fit example on the same memory layout.]
- Bit Map: used for allocation of memory in fixed-size chunks (e.g., disk blocks). Keep a large array of bits, one for every chunk, marking whether each chunk is in use or free. With fixed-size chunks there is no need to merge on later release operations. Problem: internal fragmentation.
  [Slide figure: a bit map laid over a picture of memory.]

Problems with BF, FF, and Bitmap
- Linear search: extra processing to maintain and search the free space.
- Fragmentation: external fragmentation for BF and FF; internal fragmentation for the bitmap.

Pools
- Keep a separate pool of free blocks for each popular object size (the slide uses sizes such as 64, 128, and 256). Allocation is fast.
- Any problems? One pool can run out of space while the others still have a lot of free space. Solution?
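The bit-map scheme from a few slides back is small enough to sketch concretely. A minimal version (my code; I use the convention that a set bit means the chunk is in use, which the slides leave ambiguous):

    #include <stdint.h>

    #define NCHUNKS 1024
    static uint8_t bitmap[NCHUNKS / 8];          /* one bit per fixed-size chunk */

    /* Find a free chunk, mark it used, return its index (-1 if full). */
    int chunk_alloc(void) {
        for (int i = 0; i < NCHUNKS; i++)
            if (!(bitmap[i / 8] & (1 << (i % 8)))) {
                bitmap[i / 8] |= 1 << (i % 8);   /* set bit: chunk in use */
                return i;
            }
        return -1;
    }

    void chunk_free(int i) {
        bitmap[i / 8] &= ~(1 << (i % 8));        /* clear bit; no merging needed */
    }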
Reclamation Methods
- How do we know when dynamically allocated memory can be freed?
  - Easy when a chunk is only used in one place (one pointer per object).
  - Harder when information is shared (multiple pointers): it can't be recycled until all of the sharers are finished with it.
- Dangling pointers: memory freed too early. Memory leaks: you forget to free.
- Goal: don't free up too soon, but don't forget to free up eventually. If memory leaks at 100 KB per hour, how long will it take to exhaust 24 MB? (About 240 hours, roughly ten days.) And when you run out of memory, is there a magic button to get more?

Reclamation Approaches
- Two general solutions:
  - Reference counts: keep track of the number of outstanding pointers to each chunk of memory. Works well for hierarchical structures; must be managed carefully by the system.
  - Garbage collection: no explicit free operations, which makes life easier for application programmers. Examples: LISP, Java.

Reference Counter Method
- The reference counts must be managed automatically by the system, so that no mistakes are made in incrementing and decrementing them:
  - Each time a new pointer refers to a block of memory, the block's reference count must be increased by 1.
  - Each time a pointer to the block is changed so that it no longer points to that block, the block's reference count must be decreased by 1.
  - When the reference count becomes zero, free the memory.
- Example: file descriptors in Unix.
  [Slide figures: a reference-counting example traced across three slides; when a process no longer needs block C, C's count drops and the block can be reclaimed.]
- Advantages:
  - Unused blocks of memory are returned to the available list as soon as they become unused.
  - The time spent on reclamation is spread out continually, unlike garbage collection, which happens irregularly and takes more execution time when it does.
- Disadvantages:
  - Requires additional overhead: storage for the reference count of each block, plus the count-updating work on every pointer change. Reference-count updating is still time-consuming, though more incremental than garbage collection.
  - Circular structures: in the example scenario where objects point to each other in a cycle, their counts never reach zero.

Garbage Collection
- At all times the heap may contain three kinds of dynamic data structures (memory blocks):
  1. Blocks that are in use.
  2. Blocks that are not in use and are on the current list of available memory space.
  3. Blocks that are not in use but are NOT on the list of available space: garbage.
- Blocks of types 2 and 3 should be merged together. The management, reclaiming, and recycling of garbage blocks is called garbage collection.
- Mechanism: there is no explicit free operation by the user. When the system needs storage, it searches through memory and collects the blocks that aren't being used.
  - Initialization: consider all memory blocks candidates to be freed.
  - Pass 1 (Mark): go through all the pointers, starting from the root set; mark each object pointed to, and recursively mark all objects it points to. This is the only way to reclaim circular structures.
  - Pass 2 (Sweep): go through all objects and free up those that aren't marked.
- Must be able to find all pointers to objects, and must be able to find all objects. Expensive, and incredibly difficult to program and debug.
  [Slide figures: a garbage-collection example showing the initial state of memory and memory after marking, with blocks shaded allocated / marked / freed.]
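Returning to reference counts: the bookkeeping described above fits in a few lines. A minimal sketch (mine; obj_retain and obj_release are illustrative names):

    #include <stdlib.h>

    struct obj {
        int refs;                    /* number of outstanding pointers */
        /* ... payload ... */
    };

    void obj_retain(struct obj *o) { o->refs++; }    /* new pointer created */

    void obj_release(struct obj *o) {                /* a pointer dropped */
        if (--o->refs == 0)
            free(o);                 /* last pointer gone: reclaim immediately */
    }

    /* Caveat from the notes: two objects pointing at each other keep
       refs >= 1 forever, so cycles are never reclaimed by counting alone. */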
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Fall 2005

Topic 3: Cooperating Processes and Synchronization

- Synchronization: using atomic operations to ensure correct cooperation between processes.
- The Too Much Milk problem:

    Time    Person A                           Person B
    3:00    Look in fridge: out of milk.
    3:05    Leave for store.
    3:10    Arrive at store.                   Look in fridge: out of milk.
    3:15    Leave store.                       Leave for store.
    3:20    Arrive home, put milk away.        Arrive at store.
    3:25                                       Leave store.
    3:30                                       Arrive home. OH NO!

- One of the most important things in synchronization is to figure out what you want to achieve. In this problem: somebody gets milk, but we don't get too much milk.
- Mutual exclusion: mechanisms that ensure that only one person (or process) is doing certain things at one time; others are excluded. E.g., only one person goes shopping at a time.
- Critical section: a section of code, or collection of operations, in which only one process may be executing at a given time. E.g., shopping.
- There are many ways to achieve mutual exclusion. Most involve some sort of locking mechanism: prevent someone from doing something. For example, before shopping, leave a note on the refrigerator.
- Three elements of locking:
  1. Must lock before using (leave note).
  2. Must unlock when done (remove note).
  3. Must wait if locked (don't shop if there is a note).
- 1st attempt at computerized milk buying (processes A & B run the same code):

    if (NoMilk) {
        if (NoNote) {
            Leave Note;
            Buy Milk;
            Remove Note;
        }
    }

  - Will it work? If not, why not?
  - Making the problem less likely to occur: that is typical of first attempts at solutions to synchronization problems.
  - What happens if we leave the note at the very beginning? Does this make everything work?
- 2nd attempt: change the meaning of the note. A buys if there's no note; B buys if there is a note.

    /* Process A */
    if (NoNote) {
        if (NoMilk) {
            Buy Milk;
        }
        Leave Note;
    }

    /* Process B */
    if (Note) {
        if (NoMilk) {
            Buy Milk;
        }
        Remove Note;
    }

  - Does this work? How can you tell?
- Ideally we shouldn't rely on intuition or informal reasoning when dealing with complex parallel programs; we should be able to prove that they behave correctly. Unfortunately, formal verification has only been successful on very small programs. For example, in the above example:
  - A note will be left only by A, and only if there isn't already a note.
  - A note will be removed only by B, and only if there is a note.
  - Thus there is either one note or no note.
  - If there is a note, only B will buy milk; if there is not a note, only A will buy milk. Thus only one process will buy milk.
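With the locks that later topics introduce, Too Much Milk has a direct solution. A sketch using a POSIX mutex (my addition; the notes' attempts deliberately predate locks):

    #include <pthread.h>
    #include <stdbool.h>

    pthread_mutex_t fridge = PTHREAD_MUTEX_INITIALIZER;
    bool milk = false;

    void buyer(void) {                   /* run by both A and B */
        pthread_mutex_lock(&fridge);     /* "leave note": others must wait */
        if (!milk) {
            /* buy milk */
            milk = true;
        }
        pthread_mutex_unlock(&fridge);   /* "remove note" */
    }

The lock makes check-and-buy a critical section, so the 3:10 "both look in the fridge" interleaving from the timetable above cannot happen.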
Real-Time Systems
Sang Hyuk Son, Department of Computer Science, University of Virginia, Charlottesville, Virginia 22903 (son@cs.virginia.edu)

Real-Time Systems
- A system whose basic specification and design correctness arguments must include its ability to meet its timing constraints.
- Its correctness depends not only on its logical correctness, but also on the timeliness of its actions.

[Slide figure: the real world supplies Input to a real-time system, which produces Output back to the real world. Input: current state (view update) and tasks to be performed by the real-time system. Output: actions to change the real-world situation, and information to be retrieved to support decision making.]

Real-Time Systems
- Timeliness and predictability; typically embedded in a large, complex system; dependability (reliability) is crucial; explicit timing constraints (soft, firm, hard).
- A large number of applications: aerospace and defense systems, nuclear systems, robotics, process control, agile manufacturing, stock exchanges, network and traffic management, multimedia computing, wireless sensor networks, and medical systems.
- Rapid growth in research and development: workshops, symposia, journals; standards (POSIX RT, RT SQL, RT CORBA, RT Java).

Misconceptions about Real-Time Computing
- "RT computing is equivalent to fast computing", i.e., meeting the timing requirements is just a matter of increasing system throughput sufficiently. In fact the objectives and computational structures differ:
  - minimizing average response time vs. satisfying individual timing requirements;
  - predictability and timeliness are the major goals;
  - computational structures appropriate for systems requiring bounded response time are different from those requiring high throughput;
  - higher system throughput does not guarantee timing constraints;
  - the demand for ever more computing power has always outstripped the supply.

Time Constraints
[Slide figure: value functions v(t) showing how the value of completing a task varies with time for different kinds of timing constraints.]

Trends in Real-Time Systems Applications
- Soft real-time requirements rather than hard ones: much wider applications; relates well with the notion of QoS; "soft" is harder to deal with than hard.
- Operation in unpredictable environments: WCET is too pessimistic, or there is high variance in execution time; unbounded arrival rates make overload unavoidable.
- Need to support multi-dimensional requirements: real time, power, size, cost, security, and fault tolerance; conflicting resource requirements and system architecture.
- Embedded and component-based.

Server Overload
- Performance-critical applications in open systems on the Internet (e-business servers, web hosting) and data-driven systems (real-time databases, smart spaces).
  [Slide figure: an example application in which server resources become congested as the user population grows; the sketch relates service delay, throughput, user population, and processing power, with issues of pricing, differentiation, heterogeneity, and multimedia.]
  [Slide: The Economist, April 28, 2007 issue: "When Everything Talks".]
  [Slide: large-scale wide-area real-time applications.]

Why Embedded?
- In 2000 there were roughly 100 million processors for workstations versus billions for embedded systems, so workstation processors were approximately 2% of the total; by 2008, approximately 0%.
- Ubiquitous/pervasive computing: seamless, invisible, pervasive, amorphous; wireless.
- Component-based and aspect-oriented.

Embedded Systems
- Aircraft control, smart spaces, wristwatches, sensor/actuator/CPU clouds with movable entities, Internet appliances, process control, air traffic control, mobile phones, cyber-physical systems; on the order of 64 processors in an automobile.

Smart Spaces
[Slide figure: pervasive, globally connected smart school, smart classroom, smart factory, and smart city.]

Sensor/Actuator Clouds
- Swarm computing: resource management, team formation, real-time mobility, power.
- Smart dust; heterogeneous sensors, actuators, and processors.
- Applications: battlefield awareness, earthquake response, tracking movements of animals, smart paint, MEMS in the human bloodstream.

Key Issues
- Enormous numbers of devices and amounts of software are needed: flexible and tailorable; interaction with a physical, distributed environment of greater heterogeneity (not just CPUs).
- Aggregation: the system as a whole must meet requirements; individual entities may not be critical.
- Multi-dimensional constraints: real time, power, mobility, wireless, size, cost, fault tolerance, security, and privacy.
- Timely management of real-time data: large volumes with temporal properties.

How the Problems Change
- Environment: connected to the physical environment; large numbers, massively parallel; faulty, highly dynamic, non-deterministic; wireless.
- Network: the structure is dynamically changing; sporadic connectivity; new resources entering and leaving; large amounts of redundancy; self-configure/re-configure.
- OS/middleware: control aggregate performance; move nodes to areas of interest; self-organizing; fuzzy membership and team formation; manage power/mobility/real-time/security trade-offs; geographically based.
- Data management: enormous amounts of real-time data; a notion of data QoS.

Traditional Approaches: RT Scheduling Paradigms
- Static: predictable; all is known a priori (WCET, invocation times or worst-case rates, resource needs, precedence, etc.); cyclic scheduling, rate monotonic, table-driven.
- Dynamic: a high degree of predictability; all is known except invocation times/rates; use WCET, admission control and planning, or EDF.
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Spring 2008

Topic 7: Message Communication

- Readings for this topic: Sections 3.4-3.6.
- Communication using shared data vs. without shared data.
- Components of message communication:
  - Message: a piece of information that is passed from one process to another.
  - Mailbox (port): a place where messages are stored until they are received.
- Operations:
  - Send: copy a message into the mailbox. What if the mailbox is full?
  - Receive: copy a message out of the mailbox and delete it from the mailbox. What if the mailbox is empty?
- Is there really no sharing?
- Two general styles of message communication:
  - 1-way: messages flow in a single direction (Unix pipes, or producer/consumer style).
  - 2-way: messages flow back and forth (remote procedure call, or client/server style).
- The producer & consumer (1-way) and client & server (2-way) code examples are the same as in the Fall 2005 Topic 7 notes above.
- Note that this looks a lot like a procedure call & return; compare the Local Procedure Call (LPC) facility in Windows XP (Ch. 22, pp. 804-805). Analogs between procedure calls and message operations:
  - Parameters: the request message ("read my_file").
  - Result: the return message (the contents of my_file).
  - Name of procedure: mbox1.
  - Return address: mbox2.
  - Any problem with this? Why?
- Why use messages?
  - Many applications fit into the model of processing a sequential flow of information.
  - The communicating parties can be totally separate, except for the mailbox. Less error-prone, because there are no invisible side effects.
  - They might not trust each other (OS vs. user); they might have been written at different times by different programmers; they might be running on different processors on a network.
- Different styles of message-passing systems vary along several dimensions:
  - Relationship between mailboxes and processes: one mailbox per process (use the process name in send, no name in receive), or no strict mailbox/process association (use a mailbox name).
  - Extent of buffering: buffering (efficient transfers when sender and receiver run at different rates) or none (rendezvous protocols: simple, OK for call/return-type communication). What are the desirable features of rendezvous protocols?
  - Waiting: blocking vs. non-blocking operations.
    - Blocking receive: return a message; if the mailbox is empty, wait until a message arrives.
    - Non-blocking receive: return a message; if the mailbox is empty, return a special "empty" value.
    - Blocking send: wait until the mailbox has space.
    - Non-blocking send: return "full" if there is no space in the mailbox.
  - Additional forms of waiting: many processes waiting on the same mailbox at the same time, or one process waiting on several mailboxes at once (e.g., select in UNIX; see the sketch below).
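The select mechanism just mentioned looks like this in Unix (a minimal sketch of my own; fd1 and fd2 stand in for the read ends of two "mailboxes", such as pipes):

    #include <sys/select.h>

    /* Wait until one of two mailboxes has a message; return the ready fd. */
    int wait_two(int fd1, int fd2) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(fd1, &rd);
        FD_SET(fd2, &rd);
        int maxfd = fd1 > fd2 ? fd1 : fd2;
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)   /* block here */
            return -1;
        return FD_ISSET(fd1, &rd) ? fd1 : fd2;   /* which one is ready */
    }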
Basic Compilation Control with Make
by P. N. Hilfinger (UC Berkeley), modified by M. Clancy (UCB) and C. Bono

Even relatively small software systems can require rather involved, or at least tedious, sequences of instructions to translate them from source to executable forms. Furthermore, since translation takes time (more than it should), and systems generally come in separately-translatable parts, it is desirable to save time by updating only those portions whose source has changed since the last compilation. However, keeping track of and using such information is itself a tedious and error-prone task if done by hand.

The UNIX make utility is a conceptually-simple and general solution to these problems. It accepts as input a description of the interdependencies of a set of source files and the commands necessary to compile them, known as a makefile; it examines the ages of the appropriate files; and it executes whatever commands are necessary according to the description. For further convenience, it will supply certain standard actions and dependencies by default, making it unnecessary to state them explicitly.

Though conceptually simple, the make utility has accreted features with age and use, and is rather imposing in the glory of its full definition. This document describes only the simple use of make. It is applicable both to the original make facility and to the newer gmake (GNU make) program, a copylefted and rather more powerful version.

Basic Operation and Syntax

The following is a sample makefile for compiling a simple editor program, edit, from eight .c files and three header (.h) files. (Adapted from "GNU Make: A Program for Directing Recompilation" by Richard Stallman and Roland McGrath, 1990, available as part of the GNU source code.)

    # Makefile for simple editor
    edit: main.o kbd.o commands.o display.o \
          insert.o search.o files.o utils.o
            gcc -g -o edit main.o kbd.o commands.o display.o \
                insert.o search.o files.o utils.o

    main.o: main.c defs.h
            gcc -g -c main.c
    kbd.o: kbd.c defs.h command.h
            gcc -g -c kbd.c
    commands.o: commands.c defs.h command.h
            gcc -g -c commands.c
    display.o: display.c defs.h buffer.h
            gcc -g -c display.c
    insert.o: insert.c defs.h buffer.h
            gcc -g -c insert.c
    search.o: search.c defs.h buffer.h
            gcc -g -c search.c
    files.o: files.c defs.h buffer.h command.h
            gcc -g -c files.c
    utils.o: utils.c defs.h
            gcc -g -c utils.c

This file consists of a sequence of nine rules. Each rule consists of a line containing two lists of names separated by a colon, followed by one or more lines beginning with tab characters. Any line may be continued, as illustrated, by ending it with a backslash-newline combination, which essentially acts like a space, combining the line with its successor. The # character indicates the start of a comment that goes to the end of the line.

The names preceding the colons are known as targets; they are most often the names of files that are to be produced. The names following the colons are known as dependencies of the targets. They usually denote other files, generally other targets, that must be present and up-to-date before the target can be processed. The lines starting with tabs that follow the first line of a rule we will call actions. They are shell commands that get executed in order to create or update the target of the rule (we'll use the generic term "update" for both).

Each rule says, in effect, that to update the targets, each of the dependencies must first be updated (recursively). Next, if a target does not exist (that is, if no file by that name exists), or if it does exist but is older than one of its dependencies, the actions of the rule are executed to create or update that target. The program will complain if any of the dependencies does not exist and there is no rule for creating it.

To start the process off, the user who executes the make utility specifies one or more targets to be updated. The first target of the first rule in the file is the default. In the example above, edit is the default target. The first step in updating it is to update all the object (.o) files listed as dependencies. To update main.o, in turn, requires first that main.c and defs.h be updated. Presumably main.c is the source file that produces main.o, and defs.h is a header file that main.c includes. There are no rules targeting these files; therefore, they merely need to exist to be up-to-date. Now main.o is up-to-date if it is younger than either main.c or defs.h; if it were older, it would mean that one of those files had been changed since the last compilation that produced main.o. If main.o is older than its dependencies, make executes the action "gcc -g -c main.c", producing a new main.o. Once main.o and all the other .o files are updated, they are combined by the "gcc -g -o edit ..." action to produce the program edit, if either edit does not already exist or any of the .o files are younger than the existing edit file.
To invoke make for this example, one issues the command

    make -f makefilename target-names...

where the target-names are the targets that you wish to update, and the makefilename given in the -f switch is the name of the makefile. By default, the target is that of the first rule in the file, and the makefile name is either makefile or Makefile, whichever exists. It is typical to arrange that each directory contain the source code for a single principal program. By adopting the conventions that the rule with that program as its target goes first and that the makefile for the directory is named makefile, you can arrange that issuing the command make with no arguments in any directory will update the principal program of that directory.

It is possible to have more than one rule with the same target, as long as no more than one rule for each target has an action. Thus, we can also write the latter part of the example above as follows:

    main.o: main.c
            gcc -g -c main.c
    kbd.o: kbd.c
            gcc -g -c kbd.c
    commands.o: commands.c
            gcc -g -c commands.c
    display.o: display.c
            gcc -g -c display.c
    insert.o: insert.c
            gcc -g -c insert.c
    search.o: search.c
            gcc -g -c search.c
    files.o: files.c
            gcc -g -c files.c
    utils.o: utils.c
            gcc -g -c utils.c

    main.o kbd.o commands.o display.o \
        insert.o search.o files.o utils.o: defs.h
    kbd.o commands.o files.o: command.h
    display.o insert.o search.o files.o: buffer.h

The order in which these rules are written is irrelevant; which order or grouping you choose is largely a matter of taste. The example of this section illustrates the concepts underlying make; the rest of make's features exist mostly to enhance the convenience of using it.

Variables

The dependencies of the target edit in the last section are also the arguments to the command that links them. One can avoid this redundancy by defining a variable that contains the names of all the object files:

    # Makefile for simple editor
    OBJS = main.o kbd.o commands.o display.o \
           insert.o search.o files.o utils.o

    edit: $(OBJS)
            gcc -g -o $@ $(OBJS)

The continued line beginning OBJS defines the variable OBJS, which can later be referenced as $(OBJS) or ${OBJS}. These later references cause the definition of OBJS to be substituted verbatim before the rule is processed. The special variable $@ is set in each rule to the target of that rule. It is somewhat unfortunate that both make and the shell use $ to prefix variable references; make defines $$ to be simply $, thus allowing you to send $'s to the shell where needed.

Variables may also be set in the command line that invokes make. For example, if the makefile contains

    main.o: main.c
            gcc $(DEBUG) -c main.c

then a command such as

    make DEBUG=-g

will cause the compilations to use the -g (add symbolic debugging information) switch, while leaving off the DEBUG=-g will not use the -g switch. Variable definitions in the command line override those in the makefile, which allows the makefile to supply defaults. Finally, variables not set by either of these methods may be set as UNIX environment variables. Thus, the sequence of commands

    setenv DEBUG -g
    make

for this last example will also use the -g switch during compilations.

Implicit rules

In the example from the first section, all of the compilations that produced .o files have the same form. It is tedious to have to duplicate them; it merely gives you the opportunity to type something wrong. Therefore, make can be told about, and for some standard cases already knows about, the default files and actions needed to produce files having various extensions. For our purposes, the most important is that it knows how to produce a file F.o given a file of the form F.c, and knows that the F.o file depends on the file F.c.
Specifically, make automatically introduces, in effect, the rule

    F.o: F.c
            $(CC) -c $(CFLAGS) F.c

when called upon to produce F.o when there is a file F.c present but no explicitly specified actions for producing F.o. As a result, the example may be abbreviated as follows:

    # Makefile for simple editor
    OBJS = main.o kbd.o commands.o display.o \
           insert.o search.o files.o utils.o
    CC = gcc
    CFLAGS = -g

    edit: $(OBJS)
            gcc -g -o $@ $(OBJS)

    main.o: defs.h
    kbd.o: defs.h command.h
    commands.o: defs.h command.h
    display.o: defs.h buffer.h
    insert.o: defs.h buffer.h
    search.o: defs.h buffer.h
    files.o: defs.h buffer.h command.h
    utils.o: defs.h

There are quite a few other such implicit rules built into make; the -p switch will cause make to list them, somewhat cryptically, if you are at all curious. We are most likely to be using the rules for creating .o files from .c files or from .s (assembler) files. It is also possible to supply your own default rules, and to suppress the standard rules; for details, see the full documentation.

Special actions

It is often useful to have targets for which there is never any corresponding file. If the actions for a target do not create a file by that name, it follows from the definition of how make works that the actions for that target will be executed each time make is applied to that target. A common use is to put a standard clean-up operation into each of your makefiles, specifying how to get rid of files that can be reconstructed if necessary. For example, you will often see a rule like this in a makefile:

    clean:
            rm -f *.o

Every time you issue the shell command make clean, this action will execute, removing all .o files.

Details of actions

By default, each action line specified in a rule is executed by the Bourne shell (as opposed to the C-shell, which is more commonly used here). For the simple makefiles we are likely to use, this will make little difference, but be prepared for surprises if you get ambitious. The make program usually prints each action as it is executed, but there are times when this is not desirable; therefore, a @ character at the beginning of an action suppresses the default printing. Here is an example of a common use:

    edit: $(OBJS)
            @echo Linking $@
            @gcc -g -o $@ $(OBJS)
            @echo Done

The result of these actions is that when make executes this final editing step for the edit program, the only thing you'll see printed is a line reading "Linking edit" and, at the end of the step, a line reading "Done".

When make encounters an action that returns a non-zero exit code (the UNIX convention for indicating an error), its standard response is to end processing and exit. The error codes of action lines that begin with a - sign (possibly preceded by a @) are ignored. Also, the -k switch to make will cause it to abandon processing only of the current rule, and of any that depend on its target, upon encountering an error, allowing processing of sibling rules to proceed.

Using makefiles with C++

One of the differences between make and gmake relates to the variables used in the implicit rules for compiling C++ programs. What I describe here only works with gmake. If you use the suffix .cc, gmake will create a .o file with the same prefix, using the compiler given by the variable CXX and the flags given by the variable CXXFLAGS. The default value for CXX is g++. Since that is the compiler we want to use, we don't have to set this variable in our makefile. Thus .cc, CXX, and CXXFLAGS for compiling C++ programs are analogous to .c, CC, and CFLAGS for compiling C programs (see the earlier section on implicit rules). Here is a simple makefile that creates the executable fracts from the source files Fraction.cc and testfracts.cc and the header file Fraction.h, which is included in both of the source files. Remember, this will only work with gmake.
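(The makefile itself did not survive in this copy of the notes. A version consistent with the description, reconstructed by me under the assumption that gmake's implicit .cc rule does the compiling, might look like:

    # Makefile for fracts (C++; gmake only)
    CXXFLAGS = -g

    fracts: Fraction.o testfracts.o
            $(CXX) $(CXXFLAGS) -o fracts Fraction.o testfracts.o

    Fraction.o testfracts.o: Fraction.h

The two dependency-only lines let the implicit rule compile Fraction.cc and testfracts.cc, while recording that both object files must be rebuilt if Fraction.h changes.)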
CS 414 Operating Systems, University of Virginia, Department of Computer Science, Fall 2005

Topic 10: Introduction to Storage Allocation

- Readings for this topic: Section 8.1.
- Information stored in memory is used in different ways. Some possible classifications:
  - Role in the programming language:
    - Instructions: specify the operations to be performed.
    - Variables: information that changes as the program runs.
    - Constants: information used as operands but never changed.
  - Changeability: read-only vs. read & write. Why is this important?
  - Initialized or not.
  - Addresses vs. data. Why is this important?
  - Binding time: when is its location decided?
    - Static: before the process starts to run (compile time, link time, or load time).
    - Dynamic: the arrangement is determined during execution.
  - Do the classifications overlap?
- When a process is running, what does its memory look like? It's divided up into areas called segments. In Unix, each process has three segments: code, data, and stack. Why distinguish between different segments of memory?
- Division of responsibility between various portions of the system:
  - Compiler: generates one object file for each source-code file, containing information for that file. The information is incomplete. Why?
  - Linker: combines all of the object files for one program into a single object file. Is this complete and self-sufficient?
  - Operating system: loads executable files into memory, and provides facilities for processes to get more memory after they've started running.
  - Run-time library: works together with the OS to provide dynamic allocation routines, such as malloc and free in C.
- Linkers, or linkage editors (ld in Unix), tie together many separate pieces of a program and reorganize storage allocation. They are often considered part of the OS. Three functions of a linker:
  - Collect all the pieces of a program.
  - Figure out a new memory organization so that all the pieces fit together (combine like segments).
  - Touch up addresses so that the program can run under the new memory organization.
  The result is a runnable program, stored in a new object file.
- Problems the linker must solve:
  - The compiler doesn't know where the things it's compiling will go in memory. It just assumes that things start at zero and lets the linker rearrange them. The compiler puts information in the object file to tell the linker how to rearrange things safely; this stuff is called relocation information. What makes rearrangement tricky?
  - The compiler doesn't know where everything is when compiling files separately. E.g., where is the printf routine? Where it doesn't know, the compiler just puts a zero in the object file and leaves an additional note in the object file telling the linker to fix things up.
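As a concrete illustration (my example, assuming a main.c that calls a helper() defined in helper.c), you can watch the unresolved reference and its resolution with nm:

    $ gcc -c main.c          # call to helper() compiled as 0 + relocation note
    $ nm main.o | grep helper
             U helper        # U: undefined, left for the linker to fix up
    $ gcc -c helper.c
    $ gcc -o prog main.o helper.o    # linker combines segments, patches addresses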
could predict the fu ture Deadlock can occur over separate resources as in semaphore example or over pieces of a single resource as in memory or even over totally separate classes of resources tape drives and memory Whenever there is waiting there s potential for deadlock Hard for OS to control 7917 a Four conditions for deadlock a Mutual exclusion resources cannot be shared a No preemption once given a resource cannot be taken away a Hold and wait processes don t ask for resources all at once ie multiple independent requests 0 Circular wait circularity in the graph of who has what and who wants what 0 Solutions to the deadlock problem fall into three general categories 0 Detection and resolution determine when the system is deadlocked and then take dras tic action Requires termination of one or more processes in order to release their resources Usually not very practical 0 Prevention organize the system so that it is impossible for deadlock ever to occur May lead to less ef cient resource utilization in order to guarantee no deadlocks a Avoidance divide into safe and unsafe state Banker s algorithm Each process de clare max resource requirement for each resource type and the system maintains neces sary information to avoid unsafe state a Deadlock prevention must nd a way to eliminate one of the four necessary conditions for deadlock a Don t allow exclusive access This is probably not reasonable for many applications 0 Create enough resources so that there s always plenty for all a Don t allow waiting phone company solution This punts the problem back to the user 0 Allow preemption Preempt student s thesis disk space 0 Attack hold amp wait Make process ask for everything at once Either get them all or wait for 7927

