Operating Systems - Week 7 Notes (CSC 4420)
These class notes were uploaded by Judy on Thursday, October 13, 2016, for CSC 4420 (Operating Systems) at Wayne State University, taught by Thair Judeh in Fall 2016.
CHAPTER 6 Continued

Optimization Criteria
o Maximize CPU utilization
o Maximize throughput
o Minimize overhead (context switching)
o Minimize turnaround time
o Minimize waiting time
o Minimize response time

Scheduling Algorithm Goals
o For all systems
  Fairness - each process gets a fair share of the CPU
  Policy enforcement
  Balance - keep all parts of the system busy
o For batch systems
  Throughput maximization
  Turnaround time minimization
  CPU utilization
o For interactive systems
  Response time
  Proportionality
o For real-time systems
  Meeting deadlines
  Predictability
o Goals differ based on the system type

Problem Solving for CPU Scheduling
o Given a set of processes, running times, arrival times, and an algorithm:
  Determine the schedule
  Draw the Gantt chart
  Compute waiting and turnaround times, etc.

First Come First Serve (FCFS)
o Each process runs until it finishes or blocks
o Limitation: the convoy effect - long CPU-bound processes block short I/O-bound processes

Throughput
o Number of jobs completed divided by elapsed time

Evaluation of First Come First Serve
o Easy to implement
o No starvation, though some processes may be kept waiting a long time
o Not much overhead; little to no context switching
o Response and turnaround times depend on how long short jobs must wait behind long ones

Shortest Job First (SJF)
o Schedules the process with the shortest next CPU burst first
o Optimal: gives the minimum average waiting time
o The difficulty is knowing the length of the next CPU burst
o Can naturally suffer from starvation of long jobs
o Non-preemptive: a process uses the CPU until it is done and only yields voluntarily
o Preemptive: the scheduler can stop the running process to take care of shorter processes before returning
  The delayed process is moved from the running state to the ready queue
  Has more overhead because of extra context switches
o The preemptive version of SJF is known as shortest remaining time first

Priority Scheduling
o A priority integer is associated with each process
o The scheme for assigning priorities may vary
o Usually preemptive, but it can have a non-preemptive version
o SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
o The
problem here is starvation
o The solution to the starvation problem is aging (gradually increasing the priority of long-waiting processes)

Round Robin
o Each process gets a small unit of CPU time, the time quantum q
o After the quantum passes, the process is preempted and added to the end of the ready queue
o When q is large, round robin behaves like FIFO (FCFS)
o When q is small, the overhead is too high
o Has a higher average turnaround time than SJF, but better response time
o q should be large compared to the context switch time
o A long time quantum makes it similar to FCFS
o A shorter time quantum means more overhead due to more context switching
o Turnaround time varies with the time quantum, but does not necessarily improve as the quantum grows
o Rule of thumb: 80% of CPU bursts should be shorter than the time quantum

Multilevel Queue
o Present in most commercial operating systems
o Design goals include:
  Responsiveness
  Low overhead
  Freedom from starvation
  Support for background tasks
  Fairness
o The ready queue is split into
  Foreground (interactive)
  Background (batch)
o Processes are permanently assigned to a given queue
o Each queue has its own scheduling algorithm
o Scheduling must also be done between queues
  Fixed-priority scheduling
  Time slice - each queue gets a certain amount of CPU time and can distribute it among its tasks

Multilevel Feedback Queue
o A process can move between queues
o Defined by:
  The number of queues
  The scheduling algorithm for each queue
  The methods used to promote and demote processes
  The method used to determine which queue a process enters

Thread Scheduling
o When threads are supported by the operating system, kernel threads are scheduled, not processes
o User threads are mapped to kernel-level threads
o Process contention scope: scheduling competition is limited to threads within the same process
o System contention scope: competition is among all threads in the system

Multiple Processor Scheduling
o Becomes more complex as more CPUs are added
o The focus here is on homogeneous processors
o Any available processor can be selected to run a process from the common queue
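The "determine the schedule" exercise mentioned earlier can be sketched in code. The following is a minimal simulator for FCFS, non-preemptive SJF, and round robin; all process names, arrival times, and burst lengths are hypothetical examples, not from the notes.

```python
from collections import deque

def fcfs(procs):
    """procs: list of (name, arrival, burst). Returns name -> (wait, turnaround)."""
    time, stats = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)            # CPU may sit idle until arrival
        time = start + burst
        stats[name] = (start - arrival, time - arrival)
    return stats

def sjf(procs):
    """Non-preemptive SJF: among arrived processes, run the shortest burst."""
    remaining = sorted(procs, key=lambda p: p[1])
    time, stats = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # idle until the next arrival
            time = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # shortest next CPU burst wins
        remaining.remove(job)
        name, arrival, burst = job
        time += burst
        stats[name] = (time - burst - arrival, time - arrival)
    return stats

def round_robin(procs, q):
    """Round robin with quantum q; assumes all processes arrive at t = 0."""
    bursts = {name: burst for name, _, burst in procs}
    left = dict(bursts)
    queue = deque(bursts)                      # FIFO ready queue
    time, stats = 0, {}
    while queue:
        name = queue.popleft()
        run = min(q, left[name])
        time += run
        left[name] -= run
        if left[name]:
            queue.append(name)                 # preempted: back of the queue
        else:
            stats[name] = (time - bursts[name], time)
    return stats

procs = [("P1", 0, 7), ("P2", 0, 4), ("P3", 0, 1)]
print(fcfs(procs))         # average wait 6
print(sjf(procs))          # average wait 2 (lowest, as the notes state)
print(round_robin(procs, q=2))  # average wait ~4.67, but better response time
```

Running the three algorithms on the same workload illustrates the trade-off from the notes: SJF minimizes average waiting time, while round robin trades some turnaround time for responsiveness.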
o Asymmetric multiprocessing - only one processor accesses the system data structures
o Symmetric multiprocessing (SMP) - each processor is self-scheduling; all processes are in a common ready queue

Processor Affinity
o Soft - try to keep a process running on the same processor
o Hard - a process may specify a set of processors it can run on
o The main memory architecture can affect affinity issues

Load Balancing
o Intended for SMP systems
o Keeps all CPUs loaded for efficiency and to fully utilize the benefit of multiple CPUs
o Attempts to keep the workload evenly distributed
o Push migration - a specific task periodically checks the load on each processor and pushes tasks from overloaded processors to less-busy ones
o Pull migration - an idle processor pulls waiting tasks from busy processors
o The two approaches are not mutually exclusive

Memory Stall
o Multithreaded cores help hide memory stalls (time the processor spends waiting for memory)
o This creates two levels of scheduling:
  Level 1 - the operating system decides which software thread to run on each hardware thread
  Level 2 - the core decides which hardware thread to run

Real-Time Operating Systems
o Typically used in embedded systems
o Used where there are rigid time requirements
o Well-defined, fixed time constraints

Real-Time CPU Scheduling
o Soft real time - no guarantee as to when the process will be scheduled
o Hard real time - a task must be serviced by its deadline

Event Latency
o Real-time systems are event driven
o They typically wait for a real-time event to occur
o The system must respond to and service these events as quickly as possible
o Different events have different latency requirements

Priority-Based Scheduling
o Required for real-time scheduling
o The scheduler must support preemptive priority scheduling

Periodic Processes
o Require the CPU at constant intervals

Admission Control
o A process may have to announce its deadline to the scheduler, which only admits it if the deadline can be met

Rate Monotonic Scheduling
o Schedules periodic tasks using a static-priority policy with preemption
o Each task's priority is assigned based on the inverse of its period
o Assigns higher priority to tasks that require the CPU more often (shorter periods)
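The notes do not spell out how admission control decides whether a rate-monotonic task set is feasible; one standard test is the Liu & Layland utilization bound, sketched below. The task set (burst, period) is a hypothetical example.

```python
# Rate-monotonic schedulability test (Liu & Layland utilization bound):
# n periodic tasks are guaranteed to meet their deadlines under RMS if
# total CPU utilization sum(burst/period) <= n * (2**(1/n) - 1).

def rms_guaranteed(tasks):
    """tasks: list of (burst, period). Returns (utilization, bound, ok)."""
    n = len(tasks)
    utilization = sum(burst / period for burst, period in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Shorter period => higher static priority under rate-monotonic scheduling.
tasks = [(20, 50), (35, 100)]    # hypothetical (CPU burst, period) pairs
u, bound, ok = rms_guaranteed(tasks)
print(f"U = {u:.2f}, bound = {bound:.3f}, guaranteed schedulable: {ok}")
# prints "U = 0.75, bound = 0.828, guaranteed schedulable: True"
```

Note that the bound is sufficient but not necessary: a task set exceeding it may still be schedulable, but the admission controller can no longer guarantee it.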