Study guide for exam 1 (C S 439)

by: Abelardo Notetaker

About this Document

Operating Systems: purpose, basic functions, history, system calls (and how they differ from other calls), dual mode execution. Processes: creation (fork, exec), cooperating, signals, process co...
Instructor: Norman, A
Type: Study Guide
Tags: OS, Computer Systems, process, thread, concurrency, Atomic operations

This 12 page Study Guide was uploaded by Abelardo Notetaker on Sunday February 21, 2016. The Study Guide belongs to C S 439 at University of Texas at Austin taught by Norman, A in Spring 2016. Since its upload, it has received 135 views. For similar materials see PRINCIPLES OF COMPUTER SYS-C S in Computer Science at University of Texas at Austin.

01 – Intro and Themes

An operating system wears three hats:
- Referee: manages shared resources; provides protection and communication for processes
- Illusionist: provides the illusion of infinite resources
- Glue: provides standard services that the hardware implements

Evaluating an OS:
- Reliability: works as expected
- Security: not vulnerable to attacks
- Portability: does not change as the hardware changes
- Performance: efficiency, overhead, fairness, latency, throughput, predictability

02 – History of OS

Phase 1: hardware expensive, humans cheap. Phase 2: hardware cheap, humans expensive. Phase 3: hardware very cheap, humans very expensive.

Phase 1:
1. One user on the console, one process at a time (1945-1955). Inefficient use of expensive components.
2. Batch processing: load program, run, output to tape, print results, repeat (1955-1965). Adv: the next job can be loaded immediately as the previous one finishes (pipelining). Disadv: no protection from bugs, and the computer is idle during I/O.
3. Overlap of I/O and computation using interrupts: the OS requests I/O, goes back to computation, and gets an interrupt when the I/O device finishes. Adv: performance improves because I/O and processing happen concurrently.
4. Multiprogramming: several programs run at the same time, sharing the machine (1965-1980). Keep several jobs in memory and multiplex the CPU between them; one job runs until it performs I/O, then another job gets the CPU. Requires memory protection and relocation.

Phase 2:
5. Interactive timesharing (1970-): cheap terminals let multiple users interact with the system at the same time. Requires more sharing, more protection, more concurrency. Process switching occurs much more frequently, which affects response time, and thrashing occurs. Introduces timer interrupts. New OS services: shell, file system, rapid process switching, virtual memory.
6. Personal computing: simplify the OS by eliminating multiprogramming, concurrency, and protection. Don't let programs crash each other.
7. Parallel and distributed computing: parallel systems have multiple processors in the same machine sharing resources; in distributed systems, multiple processors communicate via a network. Adv: increased performance, increased reliability, sharing of specialized resources.

The OS's three interfaces:
- Abstract Machine Interface (AMI): between the OS and apps; API + memory access model + legally executable instructions
- Application Programming Interface (API)
- Hardware Abstraction Layer (HAL): abstracts the hardware internally to the OS

Abstraction for protection: apps should have restricted rights, but without hindering functionality.

Process: a program during execution. Program = static file; process = program + execution state.

Dual Mode Execution
- User mode: access is restricted
- Kernel mode: access is unrestricted
- Supported by the hardware, which must provide at least three features:
  - Memory protection: in user mode, a process cannot access memory outside its own region
  - Timer interrupts: the kernel must be able to periodically regain control from a running process
  - Privileged instructions: instructions only available in kernel mode

Entering the kernel (three methods):
- Exceptions: the program misbehaves, e.g., attempts to perform a privileged instruction
- Interrupts: asynchronous exceptions; something interrupts the currently executing process
- System calls: a user program requests an OS service (a small system-call sketch follows this section)

Saving the state of the interrupted process: there is one interrupt stack per processor/process/thread, and an interrupt vector is used to determine the action the OS takes when an interrupt, exception, or system call occurs.

Privileged instructions: a user process may not address I/O directly, use instructions that manipulate OS memory, set the mode bits that determine user or kernel mode, disable and enable interrupts, or halt the machine. These can only be done in kernel mode. The kernel can return to user mode by calling RTI, creating a process, or switching between processes.
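To make the difference between a system call and an ordinary function call concrete, here is a minimal sketch, assuming a Linux system with glibc (the syscall() wrapper and SYS_getpid are Linux-specific). The getpid() wrapper and the raw syscall() both trap into the kernel, while a plain helper function runs entirely in user mode.

    /* Sketch: system calls vs. ordinary calls (Linux-specific; uses the
     * syscall() wrapper and SYS_getpid from glibc). */
    #define _GNU_SOURCE       /* expose syscall() under strict compilers */
    #include <stdio.h>
    #include <unistd.h>       /* getpid(), the usual wrapper */
    #include <sys/syscall.h>  /* SYS_getpid, raw syscall numbers */

    /* An ordinary function call: runs entirely in user mode, no trap. */
    static long add_one(long x) { return x + 1; }

    int main(void) {
        /* getpid() is a thin library wrapper that traps into the kernel. */
        pid_t via_wrapper = getpid();

        /* The same request made explicitly: syscall() switches to kernel
         * mode, the kernel runs the handler, then returns to user mode. */
        long via_raw = syscall(SYS_getpid);

        printf("wrapper: %d, raw syscall: %ld, ordinary call: %ld\n",
               (int)via_wrapper, via_raw, add_one(41));
        return 0;
    }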
03 – Process

Processes are created and managed through system calls. The OS tracks each process with a Process Control Block (PCB), a dynamic kernel data structure kept in memory that contains: the pid, PC, SP, and contents of the general-purpose registers; memory management information; the username of the owner; and any resources needed/open. A process is in the blocked state when waiting for I/O. Only a process can create other (child) processes.

Processes are created by fork():
- Copies a process into an (identical) process
- Returns twice: once to the parent and once to the child
- The pids are different, and the return value is different: in the parent it is the child's process id, in the child it is 0
- Both processes begin executing immediately following the call to fork()
- Each process has its own memory and its own copy of each variable

exec():
- Overlays a process with a new program; it does not create a new process
- The pid does not change; the code, stack, and heap are overwritten
- It only returns if it encounters an error

wait():
- Causes the parent process to wait for a child process to terminate
- Allows the parent to get the return value from the child, passed by exit() in the child and returned as the result of the wait call
- Zombie process: a process that has terminated but whose parent has not yet collected its status

exit():
- Terminates a process, deallocates its resources, and closes its files
- Checks whether the parent is alive

kill():
- Interprocess communication: sends a signal (a user-level interrupt whose handler can be defined by the user)
- Orphan children are either all killed or kept running

Unix shell: each command is a child of your shell process; & after a command runs it in the background; Ctrl-C sends SIGINT; Ctrl-Z sends SIGSTOP. A small fork/exec/wait sketch follows.
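The following sketch ties fork(), exec(), wait(), and exit() together the way a simple shell would, assuming a POSIX system; running /bin/ls is just an illustrative choice of command.

    /* Sketch of the fork/exec/wait pattern described above (POSIX). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();      /* returns twice: 0 in child, child's pid in parent */

        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* Child: overlay this process with a new program. Same pid;
             * code, stack, and heap are replaced. /bin/ls is only an example. */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");     /* exec only returns on error */
            exit(127);
        } else {
            /* Parent: block until the child terminates and collect its
             * exit status, so the child does not remain a zombie. */
            int status;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                printf("child %d exited with status %d\n",
                       (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }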
04 – CPU Scheduling

Boot sequence:
1. Load the boot program from ROM: examine the machine configuration, build a configuration structure, load the OS kernel, and give it the configuration structure
2. Operating system initialization: initialize kernel data structures, initialize the state of all hardware devices, and create a number of processes to start operation
3. Run the idle loop

Long-term scheduling determines the degree of multiprogramming, i.e., the number of jobs executing at once in primary memory. Short-term scheduling determines how the OS selects a process from the ready queue to execute.

The scheduler executes when a process switches from running to blocked, a process is created or terminated, or an interrupt occurs. A scheduling algorithm can be non-preemptive (runs only when a process blocks or terminates, not on hardware interrupts) or preemptive (the OS makes scheduling decisions during interrupts; requires an interrupt to give control back to the OS).

Keywords:
- Throughput: number of processes completing per unit of time
- CPU utilization: percentage of time that the CPU is busy
- Turnaround time: time to run a process from initialization (when it is added to the ready queue) to termination
- Response time: time between issuing a command and getting a result
- Waiting time: total amount of time that a process spends in the ready queue
- Context switch: changing between processes

Scheduling policies. A real CPU scheduler wants to: minimize response time, minimize the variance of average response time, maximize throughput (minimize OS overhead and context switching), and minimize waiting time (which improves average response time).

First Come First Served (FCFS): jobs run until they either complete or block on I/O. Non-preemptive.

Round Robin: adds a timer and a preemptive policy. Each job runs for a set time slice (aka time quantum); after each time slice, the running process moves to the back of the queue. Selecting the time slice: too large and waiting time suffers; too small and throughput suffers because too much time is spent context switching. Choose it so context switching takes roughly 1% of the time slice.

Shortest Job First (SJF): non-preemptive; runs the job with the least amount of work to do until its next I/O. Could starve long jobs.

Shortest Remaining Time First (SRTF): preemptive SJF.

Multilevel Feedback Queue: run jobs in the highest-priority queue first, in round-robin fashion; the round-robin time slice increases exponentially at lower priorities. Adjust priorities as follows:
1. Jobs start in the highest-priority queue
2. If a job's time slice expires, drop its priority one level
3. If a job's time slice does not expire (it makes an I/O request), raise its priority one level, up to the top priority level

Tradeoffs: SJF increases throughput but decreases fairness; Round Robin increases fairness but decreases throughput. You cannot have the best of both. Possible solutions:
1. Give each queue a fraction of the CPU time
2. Adjust the priority of jobs as they go without service. This avoids starvation, but average waiting time suffers when the system is overloaded because all jobs end up with high priority

The waiting-time sketch below makes the FCFS vs. SJF tradeoff concrete.
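As a concrete illustration of how the policy changes waiting time, here is a small sketch that computes the average waiting time for the same set of CPU bursts under FCFS (run in arrival order) and SJF (run shortest first). The burst lengths are made-up values, all jobs are assumed to arrive at time 0, and the model ignores I/O and context-switch cost.

    /* Sketch: average waiting time under FCFS vs. SJF for jobs that all
     * arrive at time 0. Burst lengths are illustrative, not from the notes. */
    #include <stdio.h>
    #include <stdlib.h>

    static double avg_waiting_time(const int *burst, int n) {
        /* Waiting time of job i = sum of bursts of all jobs scheduled before it. */
        double total_wait = 0, elapsed = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;
            elapsed += burst[i];
        }
        return total_wait / n;
    }

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int fcfs[] = {24, 3, 3};   /* run in arrival order */
        int sjf[]  = {24, 3, 3};   /* same jobs, sorted shortest-first below */
        int n = 3;

        qsort(sjf, n, sizeof sjf[0], cmp_int);

        printf("FCFS average waiting time: %.2f\n", avg_waiting_time(fcfs, n));
        printf("SJF  average waiting time: %.2f\n", avg_waiting_time(sjf, n));
        return 0;
    }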
05 – Threads

What is a thread? A thread is an abstract entity that executes a sequence of instructions. A thread is bound to a single process, and each process may have multiple threads. Multi-threaded programs allow us to:
- Better represent the structure of the task (the world is concurrent)
- Improve performance: one thread performs computation while another waits for I/O
- Schedule threads across different processors

thread_create(thread_function, arg0, arg1, ...): execution continues with the original thread in the main function, and execution starts at thread_function in the new thread, in parallel (concurrently).

Threads have the same states and life cycle as processes: new, ready, running, blocked, and terminated. A process defines an address space; its threads share that address space. Each thread has its own stack and exclusive use of the CPU registers while it executes. Thread creation, communication, and context switching are more lightweight than their process counterparts. Threads share global data and the heap, and the threads' stacks are not protected from one another. The Thread Control Block (TCB) contains: SP, PC, thread state, register values, and a pointer to the PCB.

Kernel-Level Threads
- Threads the OS knows about; switching between kernel-level threads of the same process requires a small "context switch"
- Scheduled by the OS (through system calls)
- Order of a context switch for kernel threads: the thread is running; the thread blocks, is interrupted, or voluntarily yields; mode switch to kernel mode; OS code saves the thread's state to its TCB; OS code chooses a new thread to run; OS code loads that thread's state from its TCB; mode switch to user mode; the new thread is running
- Adv: kernel-level threads can exploit parallelism, system calls do not block the whole process, switching threads within the same process is inexpensive, and there is only one scheduler

User-Level Threads
- Threads the OS does not know about; a thread library manages them and can define the scheduling policy
- Threads yield to other threads or voluntarily give up the processor
- Switching threads does not involve a context switch and is therefore faster
- Adv: customizable scheduler; faster to create and switch
- Disadv: all user-level threads in a process block on system calls, and the user-level scheduler can fight with the kernel-level scheduler

Independent vs. cooperating threads
- Independent threads do not share state with other threads: deterministic, reproducible, scheduling order does not matter
- Cooperating threads share state: non-deterministic, non-reproducible, but they give us concurrency

Race condition: an instance where the result changes based on scheduling. The sketch below shows one.
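A minimal sketch of a race condition using POSIX threads: two threads increment a shared global counter without synchronization, so the final value depends on how the increments interleave. The thread count and iteration count are arbitrary choices for illustration.

    /* Sketch of a race condition on shared data (POSIX threads).
     * Compile with: cc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 1000000

    static long counter = 0;   /* shared global: both threads read and write it */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++)
            counter++;         /* read-modify-write, not atomic: the race */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Expected 2*ITERS, but updates can be lost when the increments
         * interleave, so the printed value is usually smaller. */
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
        return 0;
    }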
Terminology:
- Atomic operation: an operation that is uninterruptible
- Synchronization: using atomic operations to ensure cooperation between threads
- Mutual exclusion: exactly one thread/process is doing a particular activity at a time; related to critical sections
- Critical section: a piece of code that only one thread can execute at a time
  - Entry section: code to attempt entry into the critical section
  - Critical section: code that requires isolation (with mutual exclusion)
  - Exit section: cleanup code after execution of the critical section

Four properties are required for correctness:
1. Safety: only one thread in the critical section
2. Liveness: if no threads are executing a critical section, and a thread wishes to enter a critical section, that thread must be guaranteed to eventually enter it
3. Bounded waiting: if a thread wishes to enter a critical section, then there exists a bound on the number of other threads that may enter the critical section before that thread does
4. Failure atomicity: it is okay for a thread to die in the critical section

06 – Locks and Semaphores

Our ideal solution satisfies the correctness properties, avoids busy waiting (spin locks), and is extendable to many threads (symmetric).

Locks prevent another process from doing something:
- Acquire the lock before entering a critical section or accessing shared data
- Release the lock when leaving the critical section or when access to the shared data is complete
- Wait if the lock is held
- Lock->Acquire(): wait until the lock is free, then grab it
- Lock->Release(): unlock, and wake up any thread waiting in Acquire()

We need mutual exclusion because of the scheduler. The simplest solution is disabling interrupts to prevent a context switch. Disadv: once interrupts are disabled, the thread can't be stopped, and the critical section can be very long. Re-enabling interrupts: the first thing a thread does when it starts to execute is enable interrupts; in the CPU scheduler, when the scheduler selects and starts the next running process, it can enable interrupts.

Read-modify-write atomic instruction: Test&Set reads a value from memory and writes 1 back to that memory location in a single atomic step.

Priority inversion occurs when the thread waiting for the lock has a higher priority than the thread holding the lock.

Semaphores are generalized locks:
- Each semaphore has a value associated with it and supports a queue of threads waiting to access a critical section
- Invented by Dijkstra in 1965
- Two atomic operations: up() signals waiting threads that the semaphore is available and increments the value; down() waits for the semaphore to be available and decrements the value
- Two types: binary semaphores (same as locks) and counted semaphores (which represent resources with many available units)
- Problem with semaphores: no control over or guarantee of proper usage

07 – Monitors

A monitor defines a lock and zero or more condition variables for managing concurrent access to shared data. The lock provides mutual exclusion for the shared data; condition variables enable threads to block waiting for an event inside a critical section, releasing the lock at the same time the thread is put to sleep. Rule: a thread must hold the lock when doing condition variable operations.

Condition variable (CondVar) operations:
- Wait(Lock lock): atomically release the lock, move the thread to the waiting queue, and suspend the thread. The thread re-acquires the lock when it wakes (before returning from wait). The thread will always block.
- Signal(Lock lock): wake up one waiting thread, if one exists
- Broadcast(Lock lock): wake up all waiting threads, if any exist

Resource variables: condition variables keep no state, so each condition variable should have a resource variable that tracks the state of that resource. Check the resource variable in a while loop before calling wait on the associated condition variable. Once the resource is available, claim it (subtract the amount you are using). Before signaling that you are finished with a resource, indicate that the resource is available by increasing the resource variable. The sketch after this section shows the pattern.

Whose turn is it?
- Mesa/Hansen style: the thread that signals keeps the lock; the waiting thread waits for the lock
- Hoare style: the thread that signals gives up the lock and the waiting thread gets the lock; the waiting thread executes and then releases the lock back to the signaling thread
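Here is a minimal sketch of the monitor pattern described above, using a POSIX mutex and condition variable: a resource variable (named available here purely for illustration) tracks state, waiters check it in a while loop, and the holder increments it and signals when done.

    /* Sketch of the lock + condition variable + resource variable pattern
     * (POSIX threads). The resource name and counts are illustrative. */
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int available = 1;              /* resource variable: units free */

    void acquire_resource(void) {
        pthread_mutex_lock(&lock);         /* hold the lock for condvar ops */
        while (available == 0)             /* always wait in a while loop */
            pthread_cond_wait(&cond, &lock);   /* releases lock while asleep */
        available--;                       /* claim the unit we are using */
        pthread_mutex_unlock(&lock);
    }

    void release_resource(void) {
        pthread_mutex_lock(&lock);
        available++;                       /* mark the resource free first */
        pthread_cond_signal(&cond);        /* wake one waiter, if any */
        pthread_mutex_unlock(&lock);
    }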
Difference between monitors and semaphores:
- Condition variables have no history: a call to wait() will always wait
- Semaphores do have history: a call to down() will not always wait
- In semaphores, down() and up() are commutative: the result is the same regardless of the order of execution
- Condition variable operations are not commutative: they must run inside a critical section to access the state variables and do their job

Readers/Writers problem: data is shared among several threads. To get correct results, we allow multiple readers at a time, but only one writer at a time.

08 – Deadlock

The Six Commandments:
1. Thou shalt always do things the same way: habits allow you to focus on the core problem
2. Thou shalt always synchronize with locks and condition variables: they make code clearer
3. Thou shalt always acquire the lock at the beginning of a function and release it at the end: build on this habit
4. Thou shalt always hold the lock when operating on a condition variable
5. Thou shalt always wait in a while loop: this protects against spurious wakeups
6. (Almost) never sleep(): never use sleep() to wait for an event

What is deadlock? Deadlock occurs when two or more threads are waiting for an event that can only be generated by those same threads. Deadlock can happen only if all of the following conditions hold:
1. Mutual exclusion
2. Hold and wait: at least one thread holds a resource and is waiting for other resources to become available, while a different thread holds those resources
3. No preemption
4. Circular wait

Preventing deadlock: impose a total/partial ordering on locks (see the sketch at the end of this section) or use advance synchronization. Three different techniques:
- Fine-grained locking: better for performance, but complex
- Two-phase locking. Basic example: a thread acquires all the locks it needs, makes the necessary changes, commits, and releases the locks; other threads cannot see the changes until it commits. Strict: all unlocks happen at the commit. Conservative: if all locks cannot be acquired, release any already acquired and begin again (no deadlock).
- Transactions: group actions together so they are atomic, serializable (transactions happen one after the other), and durable (once a transaction happens, it sticks). Durability is achieved as follows: if we get to the commit stage, we are okay; if not, roll back the operations as if the transaction never happened. Keep a write-ahead log on disk of all changes in the transaction.

When to use each: if serializability is necessary, use two-phase locking; if durability is necessary, use transactions.

09 – The Importance of Safety

Therac-25 linear accelerator: the beam was modified in one of two ways, with a multileaf collimator or with cones. Two major software problems: a timer overflow, and the fact that when the user reached the lower right corner of the form, the machine assumed the user was done with the settings, which created faults when the machine staff wanted to change a setting. Tons of bad software design and human failures:
- False alarms
- Errors reported by number only, with no documentation
- No clearinghouse for mistakes, and the company hid failures from other users
- No end-to-end consistency checks
- No quality control
- Hardware prevention locks were removed
- Defaulted to all leaves open

Lessons: complex systems fail for complex reasons. Be tolerant of inputs and strict on outputs. Assume buggy software and protect against it. People who are confident about their abilities tend to perform worse. The consequences of getting code wrong can be atrocious.

In this class: follow the six commandments, use coarse-grained locking, and order your locks.
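As a closing illustration of "order your locks", here is a sketch of acquiring two POSIX mutexes in a fixed global order (by address here, an arbitrary but consistent rule) so that circular wait, and hence deadlock, cannot arise between two threads that both need the pair. The lock names and the transfer() function are hypothetical.

    /* Sketch: preventing circular wait by always acquiring locks in a
     * fixed order (here, lowest address first). POSIX threads. */
    #include <pthread.h>
    #include <stdint.h>

    /* Two pieces of shared state, each protected by its own lock (illustrative). */
    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Every thread that needs both locks uses this helper, so no thread
     * can hold one lock while waiting for the other in the reverse order. */
    static void lock_both(pthread_mutex_t *x, pthread_mutex_t *y) {
        if ((uintptr_t)x < (uintptr_t)y) {   /* consistent global order */
            pthread_mutex_lock(x);
            pthread_mutex_lock(y);
        } else {
            pthread_mutex_lock(y);
            pthread_mutex_lock(x);
        }
    }

    static void unlock_both(pthread_mutex_t *x, pthread_mutex_t *y) {
        pthread_mutex_unlock(x);
        pthread_mutex_unlock(y);
    }

    void transfer(void) {
        lock_both(&lock_a, &lock_b);
        /* ... update the shared state guarded by both locks ... */
        unlock_both(&lock_a, &lock_b);
    }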

