Adv Operating Systems (CS 4210)
This Class Notes document was uploaded by Alayna Veum on Monday, November 2, 2015. The notes belong to CS 4210 at Georgia Institute of Technology - Main Campus, taught by Staff in Fall. Since its upload, it has received 9 views. For similar materials see /class/234153/cs-4210-georgia-institute-of-technology-main-campus in Computer Science at Georgia Institute of Technology - Main Campus.
Date Created: 11/02/15
User-Level IPC for SMPs

Based on the paper "User-Level Interprocess Communication for Shared-Memory Multiprocessors," Bershad et al. A mixed discussion of user-level threads and user-level RPC: cross-address-space calls, data transfer, and communicating single- and multithreaded processes (MT client, MT server) on an SMP with multiple address spaces.

Why separate address spaces?
- Small kernels: functionality is implemented in modules outside the kernel, and those modules need to communicate efficiently. The common case of communication is not across the network but across address spaces.
- In general we want separate address spaces: they promote modularity, fault isolation, flexibility, extensibility, and protection.
- We still need to cross address spaces, but that can hurt performance.
- Ideas in the paper: optimize cross-address-space communication, and combine user-level thread management with user-level communication.

Inter-Process Communication
- IPC needs to be efficient: the performance of a system depends on a fast IPC mechanism.
- Typical local IPC mechanisms: shared synchronization variables, shared memory.
- Typical local-or-remote IPC mechanisms: message passing (message queues, but also unstructured communication) and remote procedure calls (RPC).
- Our discussion will concentrate on local communication.

Remote Procedure Calls
- In RPC, communication between address spaces is structured like a procedure call. An RPC runtime hides the address-space crossing, type checking, transfer of procedure parameters and results, etc. from the higher layers.
- From the caller's perspective, RPCs are synchronous calls; they can have different failure semantics (details in a following lecture).
- RPC, and all communication, is closely related to thread management: e.g., a server thread may need to be scheduled when it receives a message, and a client thread may need to be descheduled while it is waiting for a response. As we mentioned previously, communication and synchronization are two faces of the same coin.

URPC
- Standard RPC is kernel-based; this is overkill for multithreaded applications. The argument is similar to the one for user-level threads: trapping into the kernel and switching the memory-management context are costly.
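To make the procedure-call illusion concrete, here is a minimal sketch of what an RPC client stub hides from the caller. All names here (wire_msg, transport_call, rpc_double) are invented for this illustration, and the transport is a plain function call standing in for the real cross-address-space channel.

```c
/* Hypothetical wire format: a procedure id plus one marshalled argument. */
struct wire_msg { int proc_id; int arg; };

/* Stand-in transport: a real RPC runtime would move this message to the
 * server's address space and block the caller until the reply arrives. */
static int transport_call(const struct wire_msg *req)
{
    /* "Server side": dispatch on the procedure id. */
    return req->proc_id == 1 ? req->arg * 2 : -1;
}

/* What the caller sees: an ordinary, synchronous procedure call.  The
 * stub marshals the argument, sends, blocks, and unmarshals the result. */
static int rpc_double(int x)
{
    struct wire_msg req = { 1, x };
    return transport_call(&req);
}
```

The point of the sketch is that the caller of rpc_double never sees the message at all; everything the runtime does happens behind an ordinary function signature.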
- URPC is a specialization of RPC for an SMP. In summary: shared memory is used for passing arguments and results without kernel invocation, and address-space switching can be avoided, or made less frequent, by a lazy address switch.
- This is valuable for SMPs, but also for uniprocessors running multithreaded programs.

IPC requirements: processor reallocation, thread management, data transfer.
- How can we do these without kernel involvement? The first two effect a control transfer from one address space to another.
- Is kernel involvement needed for all three requirements? Only processor reallocation needs kernel involvement; thread management can be done at user level. What about data transfer?

[Figure: an MT client and an MT server in separate address spaces, communicating through a shared mailbox; the integrity of the data must be preserved.]

Data transfer
- Who should do the mapping, and who should check what is being passed in the mailbox?
- Authentication is implied statically: the mapping is done once by the kernel, pairwise between client and server.
- Correctness is checked dynamically, on each call/return between client and server, by the URPC runtime. The runtime is responsible for putting data in the memory buffers and for inspecting type, range, etc.
- Upshot: cross-address-space calls can be implemented without involvement of the kernel; both send and receive can be done within the user-level library. Right? Not quite: what about processor reallocation (address switching)?

[Figure: an MT client and an MT server; client thread t1 is blocked on a call.]

Processor reallocation
- Why not pick a thread t4 in the server to run on P1 to execute the call? That takes a kernel operation to change the VM context and hurts cache and TLB performance; we want to avoid address switching.
- Solution: give P1 to some other ready thread in the client (thread management at user level). If the server is already executing on P2, then use P2 to run the server thread that handles the call, i.e., keep the OS out.
- Is this always possible? If an address space is underpowered, processor reallocation may be necessary.
- Solution: lazy address switching. The scheduler prefers to schedule threads from the same address space, until it really has to give the processor to another address space.
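As a rough sketch of the shared-memory channel used for data transfer above, assuming C11 atomics: a single mailbox guarded by a test-and-set lock, with send and receive done entirely at user level, no kernel trap on either side. The names (urpc_channel, channel_send, channel_recv) and the one-slot design are invented for illustration; the real runtime manages queues of typed buffers.

```c
#include <stdatomic.h>
#include <string.h>

#define MSG_BYTES 64

struct urpc_channel {
    atomic_flag lock;          /* test-and-set spin lock on the channel */
    int         full;          /* 1 when a message is pending */
    char        data[MSG_BYTES];
};

/* Sender: copy arguments into the shared mailbox, entirely at user level.
 * Returns 0 on success, -1 if the message is too big or the slot is full. */
static int channel_send(struct urpc_channel *ch, const void *msg, size_t n)
{
    if (n > MSG_BYTES) return -1;                /* runtime range check  */
    while (atomic_flag_test_and_set(&ch->lock))  /* spin: no kernel trap */
        ;
    if (ch->full) { atomic_flag_clear(&ch->lock); return -1; }
    memcpy(ch->data, msg, n);
    ch->full = 1;
    atomic_flag_clear(&ch->lock);
    return 0;
}

/* Receiver: poll the mailbox; returns 0 when a message was consumed. */
static int channel_recv(struct urpc_channel *ch, void *buf, size_t n)
{
    while (atomic_flag_test_and_set(&ch->lock))
        ;
    if (!ch->full) { atomic_flag_clear(&ch->lock); return -1; }
    memcpy(buf, ch->data, n);
    ch->full = 0;
    atomic_flag_clear(&ch->lock);
    return 0;
}
```

Note that only the one-time mapping of the shared region between the two address spaces needs the kernel; every call and return afterwards touches only this user-level code.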
The client's processor is given to the server via the kernel, and the processor is returned to the client by the server, upon completion of the call, again via the kernel. The thread-management system in each address space does this voluntary load balancing.

Lazy Address Switching in Detail
- Whenever a message is sent, the sender is descheduled from the processor, and another thread from the sender's address space is scheduled on the same processor.
- If no such thread exists, the processor runs a special low-priority thread that looks for underpowered receivers (underpowered: has messages to receive but is not scheduled on a processor).
- If an underpowered receiver exists, the processor is assigned to it.
- What would this mean for uniprocessors? The client address space keeps sending requests from different threads, and the receivers will only be scheduled after all the requests have been sent.

Example: the client is an editor with two threads, T1 and T2. T1 invokes a procedure in a window manager and then, upon return, invokes a procedure in a file cache manager. T2 invokes a procedure in the file cache manager. Initially the editor and the window manager are running on two processors, and the file cache manager is not scheduled on a processor. What is the sequence of events?

The URPC system consists of two software packages (Figure 2): FastThreads for thread management, and channel management plus the message primitives in the URPC layer. The solution bets on sufficient processor power in each address space; client-driven processor reallocation could be better than some fixed kernel policy.

[Fig. 2: the software components of URPC. Applications and servers call URPC message stubs, which run on FastThreads; processor reallocation goes through the kernel.]

URPC design decisions
- Processor reallocation without any scheduling decision; amortize the cost of kernel involvement over multiple cross-address-space calls.
- Data transfer via shared memory: no kernel copying, no dynamic protection checking.
- Message channels are protected by polling and by test-and-set locks.
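The scheduling preference behind lazy address switching can be sketched as a single decision function. The structures and names here (addr_space, after_send, the action enum) are invented for the sketch; a real implementation lives inside the user-level thread scheduler.

```c
struct addr_space {
    int ready_threads;      /* runnable threads in this address space   */
    int pending_msgs;       /* messages waiting to be received          */
    int has_processor;      /* currently scheduled on some processor?   */
};

enum action { RUN_LOCAL_THREAD, DONATE_PROCESSOR, IDLE_SCAN };

/* After a thread in 'sender' blocks on a call: prefer another local
 * thread (no address switch at all); only when the sender's space is
 * empty does the processor go to an underpowered receiver, and only
 * that donation involves the kernel. */
static enum action after_send(const struct addr_space *sender,
                              const struct addr_space *receiver)
{
    if (sender->ready_threads > 0)
        return RUN_LOCAL_THREAD;     /* stay in this address space      */
    if (receiver->pending_msgs > 0 && !receiver->has_processor)
        return DONATE_PROCESSOR;     /* reallocation, via the kernel    */
    return IDLE_SCAN;                /* low-priority scan for receivers */
}
```

The uniprocessor pathology above falls out of the first branch: as long as the client keeps producing ready sender threads, the receiver never wins the processor.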
- User-level threads are roughly 100 times cheaper than kernel threads for context switches.

Critique
The problem with lazy address switching is that its assumptions are optimistic: it assumes that a delay in processing a request will not degrade performance, and it assumes that the server of an RPC call has, or will soon have, a processor to run on. What about single-threaded applications, real-time applications, high-latency I/O operations, or high-priority invocations? A combination of lazy and eager address switching is necessary.

Performance
Will URPC always perform better than kernel-based RPC? Let P be the number of processors and T the number of threads. If T > P, then URPC may be worse. (Why?) If T < P, then no processor reallocation is needed, and URPC will be better.

Kernel-based IPC pays the cost of switching address spaces for synchronization and communication, and it suffers from a mismatch in the design of the thread packages: thread management at user level gives high performance, while communication amongst threads via the kernel gives low performance. Solution: pull IPC out of the kernel.
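A back-of-the-envelope model of the T-versus-P argument. The cost parameters are made-up numbers, not measurements from the paper, and charging one kernel reallocation per call whenever T > P is a deliberately pessimistic assumption of this sketch.

```c
/* Kernel-based RPC: every call traps into the kernel. */
static double kernel_rpc_cost(int calls, double trap_cost)
{
    return calls * trap_cost;
}

/* URPC: every call pays a cheap user-level switch; a kernel processor
 * reallocation is needed only when the callee may be underpowered,
 * which this sketch pessimistically charges on every call once threads
 * outnumber processors (T > P). */
static double urpc_cost(int calls, int threads, int processors,
                        double uswitch_cost, double realloc_cost)
{
    double reallocs = (threads <= processors) ? 0.0 : (double)calls;
    return calls * uswitch_cost + reallocs * realloc_cost;
}
```

With spare processors the cheap user-level path always wins; once reallocations dominate, URPC can lose to a single kernel trap per call, which is exactly the T > P caveat above.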