Class Note for CMPSCI 377 at UMass(30)
These 3-page class notes were uploaded on Friday, February 6, 2015, for a course at the University of Massachusetts taught in Fall.
CMPSCI 377 Operating Systems
Queueing Systems

9.1 Queueing networks

When modelling a concurrent problem using queueing theory, we are basically interested in representing it as a sequence of tasks/jobs that have to be done. The general flow of items in a queueing system is that an item goes into the queue, is eventually serviced, and the result of that processing eventually comes out on the other end of the queue. Typically we model such systems by explicitly taking into account both the queue itself (a waiting line) and the server. For example, imagine a grocery store: most grocery stores nowadays have several different types of lines, such as the regular lines and the line for people with fewer than 15 items. Why would a grocery store want to have multiple types of lines? Because of efficiency, of course: the grocery store tries to use fewer cashiers, because they cost money, while at the same time trying not to annoy the customers with long waiting times. The tradeoff here is delicate: if the lines grow too big, the grocery store might lose clients, but if it hires lots of cashiers to keep the lines short, some of them might be idle, and then it is wasting money, and so on. The usual way to model this type of activity involves calculating an arrival rate λ, which represents how often new items are inserted into the queue. Sometimes it is also useful to know not only the average arrival rate but the probability distribution of λ itself (e.g., is λ a constant rate, or is it given by a Poisson distribution, etc.). Another interesting quantity, besides the arrival rate, is the servicing time: maybe the cashier always takes 1 minute to service an item, but with probability 20% the cashier can take up to 20 minutes. In general, the properties of the system as a whole depend on the size of the queue (how many people are in front of me in the line), on how fast the queue grows, on the type of data being processed (how many items do the people in front of me have in their carts), and on the speed of the
server (how fast is the cashier), etc. Notice that in the long run we expect the arrival rate λ to be equal to the departure rate; in this case we'd have a stable queueing system. If the average arrival rate is greater, however, then the queue could keep growing and the system would not be stable.

9.1.1 Latency, throughput, and Little's Law

Typically, concurrent systems are not composed of only one queue; they might actually be formed by a chain of queues. For example, Google can implement a frontend that takes the requests and sends them to another server that finds the relevant pages, and then this server sends those relevant pages to yet another server that renders the output; notice that each of these stages in the processing can be seen as a queueing system itself. In other words, it is possible to model this type of system as a system of connected servers. In this type of system we care about two quantities: the latency, which is the time for one item to get through a server, and the throughput, which is the service rate, i.e., the measure of how many items the system can process per second. For example, imagine a network with 3 nodes, each node being able to process 5 items per second. The throughput of the system in this case would be 5 items/sec. The latency, which is the time for one single item to get through, would be 3 × 1/5 = 3/5 of a second. Now suppose that all nodes are still able to process 5 items per second, except that the second node only processes 1 item per second; in this case the throughput would be the lowest throughput, i.e., it would represent the bottleneck of the system: 1 item/sec.

9.1.2 Little's law

Little's law is a useful mathematical result which says that the long-term average size of a queue in a stable system, N, is equal to the long-term average arrival rate λ multiplied by the long-term average time an item spends in the server (average latency), T: N = λT. Suppose that the arrival rate is 10 items per second and that the latency (time to process each item) is
0.5 seconds per item. This means that, on average, there will be 5 items in the queue. Now suppose that the system changes and suddenly the arrival rate doubles, i.e., goes up to 20 items per second. In order to deal with this, either the system has to be able to cope with a queue of size 10, or it has to somehow improve the latency to 0.25 seconds of service time per item. For more details on this subject, please check the slides or visit the Wikipedia entry for Little's Law: http://en.wikipedia.org/wiki/Little's_law.

9.2 Flux

When designing concurrent applications, it is hard to use threads, locks, etc. in a way that guarantees that our parallel code works correctly in all cases. In order to simplify the design of this type of application, one can use the Flux programming language. Flux is a domain-specific programming language for building high-performance server applications. It is a clean and simple language that lets you build fast, concurrent, and correct servers out of sequential, off-the-shelf C code. Flux abstracts away issues like threads, events, and synchronization, yielding code that runs fast and is free from deadlocks or race conditions. Flux allows the programmer to separate the logic of the problem (e.g., fetching pages, parsing them, etc.) from the concurrency control. Finally, Flux also makes profiling and performance prediction easier. Now let us discuss how Flux separates the logic of the program from the concurrency control. The approach taken by the developers of Flux is to allow the programmer to write a set of components in a language like C or Java, and then plug those components into a graph which gives the specification of the possible concurrent flows. Therefore each of the components represents a high-level description of one of the steps the program can take; the graph of components (or program flow), when used together with the piece of code associated with each component, is compiled (translated) by Flux into a real program, i.e., binary code. This real program is a
deadlock-free, runtime-independent server which makes full use of threads. Notice that the user didn't write any concurrent code; all he had to do was to put the components together in a graph in a way that specifies the intended program flow. To summarize, Flux is composed of components, flow, and atomicity:

- components: unmodified C, C++, or Java code
- flow: the path through the components
- atomicity: a high-level specification of mutual exclusion

In other words, Flux takes a graph and translates that graph into a parallel program. We can think of each node as having its own queue of items it has to process; then, if we decide to analyse the throughput of the system, we can make some simple measurements on the size of the queues, etc. This would allow the programmer, for instance, to identify bottlenecks, which would be much harder if the concurrency specification were not given independently of the actual programming logic. Flux also makes things easier for the programmer in the sense that the programmer doesn't have to worry about how many threads will be assigned to each node, and it makes it possible to specify atomicity constraints (e.g., only one thread can enter the checkcache node at a time). Ordering constraints come from the ordering of the nodes in the flow graph; finally, one can also specify reader/writer constraints, such as "no two writers can be on node X at the same time".
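The pipeline arithmetic from section 9.1.1 (latencies add up along the chain, throughput is capped by the bottleneck) and the Little's law example can be checked with a few lines of code. This is a minimal sketch of the formulas only, not a full queueing simulation; the function name is invented for illustration:

```python
def pipeline_stats(service_rates):
    """Latency and throughput of a chain of servers.

    service_rates: how many items per second each node can process.
    Latency is the sum of the per-node service times; throughput is
    capped by the slowest node, i.e., the bottleneck.
    """
    latency = sum(1.0 / r for r in service_rates)  # seconds for one item
    throughput = min(service_rates)                # items per second
    return latency, throughput

# The 3-node example from the text: every node handles 5 items/sec,
# so latency = 3 * 1/5 = 3/5 of a second and throughput = 5 items/sec.
latency, throughput = pipeline_stats([5, 5, 5])

# Slowing the middle node to 1 item/sec makes it the bottleneck:
# throughput drops to 1 item/sec even though latency barely grows.
_, bottleneck_throughput = pipeline_stats([5, 1, 5])

# Little's law, N = lambda * T: with lambda = 10 items/sec arriving and
# T = 0.5 s of latency per item, the queue holds N = 5 items on average.
N = 10.0 * 0.5
```

Doubling the arrival rate to 20 items/sec in the last line reproduces the tradeoff from the text: either N grows to 10, or T must shrink to 0.25 s.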
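To make the Flux idea concrete, here is a conceptual sketch in Python. This is emphatically not Flux: Flux wires unmodified C/C++/Java components into a flow graph and compiles a deadlock-free server, whereas this sketch only imitates the separation of concerns — sequential per-node functions, a declarative flow, and a runtime that supplies the concurrency. All names (check_cache, fetch_page, render) are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

cache = {}
cache_lock = threading.Lock()  # atomicity constraint on the cache nodes

def check_cache(url):
    with cache_lock:           # only one thread in this node at a time
        return url, cache.get(url)

def fetch_page(args):
    url, hit = args            # fetch only on a cache miss
    return url, hit if hit is not None else f"<contents of {url}>"

def render(args):
    url, body = args
    with cache_lock:
        cache[url] = body      # remember the page for next time
    return f"rendered {url}"

# The "flow graph": here just a linear path through the components.
FLOW = [check_cache, fetch_page, render]

def handle(request):
    item = request
    for node in FLOW:          # the per-request logic stays sequential...
        item = node(item)
    return item

# ...while the runtime, not the programmer, adds the concurrency.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, ["a.html", "b.html", "a.html"]))
```

As in Flux, the programmer writes only sequential components plus a flow and an atomicity annotation; swapping the thread pool size or the executor changes the concurrency without touching the components.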