# Advanced Stochastic Simulation IT 735

This 15-page set of class notes was uploaded by Adonis Nader on Monday, September 28, 2015. The notes belong to IT 735 at George Mason University, taught by Chun-Hung Chen in Fall. Since its upload, it has received 21 views. For similar materials see /class/215218/it-735-george-mason-university in Information Technology at George Mason University.


Department of Systems Engineering & Operations Research, George Mason University
IT 735 / OR 735 Advanced Stochastic Simulation, Prof. Chun-Hung Chen, Spring 2007

## Benchmark Examples for the Term Project (updated March 11, 2007)

### Example 1: 10 Designs with Normal Distribution

$L(\theta_i, \xi) \sim N(i, 6^2)$, $i = 0, 1, \ldots, 9$.

We want to find the design with minimum expected mean. It is obvious that design 1 is the best.

### Example 2: 10 Designs with Uniform Distribution

We consider a non-normal distribution for the performance measure:

$L(\theta_i, \xi) \sim \text{Uniform}(i - 10.5,\; i + 10.5)$, $i = 0, 1, \ldots, 9$.

We want to find the design with minimum expected mean. It is obvious that design 1 is the best.

### Example 3: 10 Designs with Normal Distribution, Non-uniform Means and Unequal Variances

$L(\theta_i, \xi) \sim N(\text{mean}_i, \text{variance}_i)$, $i = 0, 1, \ldots, 9$.

We want to find the design with minimum expected mean. This example also compares normally distributed designs, but the means and variances were i.i.d. randomly generated from Unif(0, 10) and Unif(24, 48) distributions, respectively, leading to the following values:

Means: 8.34, 0.98, 6.49, 0.10, 8.03, 6.21, 9.78, 9.10, 1.32, 3.27
Variances: 34.35, 44.34, 28.12, 35.93, 44.49, 43.39, 39.72, 24.31, 42.10, 24.42

### Example 4: 100 Designs with Normal Distribution

To see how your algorithm performs in a bigger design space:

$L(\theta_i, \xi) \sim N(i/10, 1^2)$, $i = 0, 1, 2, \ldots, 98, 99$.

We want to find the design with minimum expected mean. It is obvious that design 1 is the best.

### Example 5: Buffer Allocation Problem in Communication Networks

We consider the 10-node network shown in Figure 1. There are 10 servers and 10 buffers, interconnected in a switching network.

*Figure 1 (not reproduced): the 10-node switching network, with class-dependent routing for two customer classes C1 and C2, and arrival/service distributions including Unif(1, 7), Exp(1), and Unif(2, 18).*

We assume that there are two classes of customers with different arrival distributions but the same service requirements. We consider both exponential and non-exponential (uniform) distributions in the network. Both classes arrive at any of Nodes 0-3 and leave the network after having gone through three different stages of service. Rather than probabilistic, the routing is class-dependent, as shown in Figure 1.
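As a quick sanity check of Example 1 (the same pattern works for Examples 2-4), one can estimate each design's mean from simulated samples and select the smallest. The per-design budget and seed below are arbitrary choices, not part of the assignment:

```python
import random

# Example 1: L(theta_i, xi) ~ N(i, 6^2), i = 0, ..., 9.
# Estimate each design's expected performance by Monte Carlo
# and select the design with the smallest sample mean.
random.seed(0)

N_SAMPLES = 5000  # per-design simulation budget (arbitrary choice)

def sample_mean(i, n=N_SAMPLES):
    """Average n noisy observations of design i."""
    return sum(random.gauss(i, 6.0) for _ in range(n)) / n

estimates = [sample_mean(i) for i in range(10)]
best = min(range(10), key=lambda i: estimates[i])
# `best` is index 0, i.e. the first design ("design 1" in the handout's
# 1-based numbering), since its true mean is the smallest.
```

With 5,000 samples per design, the standard error of each estimate (about 0.085) is far smaller than the gap of 1 between adjacent means, so the correct design is selected essentially every time; the interesting regime for the term project is much smaller budgets.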
Finite buffer sizes at all nodes are assumed, which is exactly what makes our optimization problem interesting. Specifically, we are interested in optimally distributing a given number of buffer spaces among the nodes so that the network throughput is maximized. A buffer is said to be full if and when it holds as many customers as its capacity allows, not including the customer being tended to in the server.

We consider the problem of allocating 12 buffer units among the 10 different nodes, numbered 0 to 9. We denote the buffer size of node $i$ by $\theta_i$. Specifically, $\theta_0 + \theta_1 + \cdots + \theta_9 = 12$, and each $\theta_i$ is a non-negative integer. The number of different combinations of $(\theta_0, \theta_1, \ldots, \theta_9)$ which satisfy the above constraint can be calculated as

$$\binom{12 + 10 - 1}{10 - 1} = \binom{21}{9} = 293{,}930.$$

### Example 6: Continuous Design Space

$L(x_1, x_2, \xi) \sim N(f(x_1, x_2), 1^2)$, where $f(x_1, x_2) = (x_1 + x_2 - 1)^2 + 3x_2^3$, for $-20 \le x_1 \le 20$ and $-20 \le x_2 \le 20$. In this example we want to find $(x_1, x_2)$ such that $E[L(x_1, x_2, \xi)]$ is minimized.

### Example 7: Continuous Design Space

$L(x_1, x_2, \xi) \sim N(f(x_1, x_2), 1^2)$, where

$$f(x_1, x_2) = \left[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \left[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$$

for $-2.5 \le x_1 \le 2$ and $-2.5 \le x_2 \le 2$. In this example we want to find $(x_1, x_2)$ such that $E[L(x_1, x_2, \xi)]$ is minimized. (This is the Goldstein-Price function.)

### More Examples

You can find several well-known examples in the literature. The working paper "A Model Reference Adaptive Search Method for Stochastic Global Optimization," coauthored by J. Hu, M. Fu, and S. Marcus, gives a nice description of four test functions: (a) Goldstein-Price, (b) Rosenbrock, (c) Pinter, (d) Griewank. The relevant pages are extracted and attached below.

---

# A Model Reference Adaptive Search Method for Stochastic Global Optimization

Jiaqiao Hu, Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, NY 11794

Michael C. Fu, Robert H. Smith School of Business & Institute for Systems Research, University of Maryland, College Park, MD 20742

Steven I. Marcus, Department of Electrical and Computer Engineering & Institute for Systems Research
University of Maryland, College Park, MD 20742

October 2005; revised August 2006

## Abstract

We propose a randomized search method called Stochastic Model Reference Adaptive Search (SMRAS) for solving stochastic optimization problems in situations where the objective functions cannot be evaluated exactly, but can be estimated with some noise or uncertainty, e.g., via simulation. The method is a generalization of the recently proposed Model Reference Adaptive Search (MRAS) method for deterministic global optimization, and is based on sampling from an underlying probability distribution (model) on the solution space, which is updated iteratively after evaluating the performance of the samples at each iteration. We show global convergence of SMRAS for both stochastic continuous and discrete (combinatorial) problems, and carry out numerical studies to illustrate the performance of the method.

Keywords: stochastic global optimization.

## 1 Introduction

Stochastic optimization problems arise in a wide range of areas, such as manufacturing, communication networks, system design, and financial engineering. These problems are typically much more difficult to solve than their deterministic counterparts, either because an explicit relation between the objective function and the underlying decision variables is unavailable, or because the cost of a precise evaluation of the objective function is too prohibitive. Oftentimes one has to use simulation or real-time observations to evaluate the objective function. In such situations, all the objective function evaluations will contain some noise, so special techniques are generally used (as opposed to deterministic optimization methods) to filter out the noisy components.

There are some obvious distinctions between the solution techniques for stochastic optimization when the decision variable is continuous and when it is discrete. Although some techniques can in principle be applied to both types of problems, they require some suitable modifications in
order to switch from one setting to the other.

## 4 Numerical Examples

In this section, we test the performance of SMRAS on several continuous and combinatorial stochastic optimization problems. In the former case, we first illustrate the global convergence of SMRAS by testing the algorithm on four multi-extremal functions; then we apply the algorithm to an inventory control problem. In the latter case, we consider the problem of optimizing the buffer allocations in a tandem queue with unreliable servers, which has been previously studied in, e.g., Vouros and Papadopoulos (1998) and Allon et al. (2005).

We now discuss some implementation issues of SMRAS. Since SMRAS was presented in a maximization context, the following slight modifications are required before it can be applied to minimization problems:

(i) $S$ needs to be initialized as a strictly decreasing function instead of strictly increasing. Throughout this section we take $S(y) = e^{ry}$ for maximization problems and $S(y) = e^{-ry}$ for minimization problems, where $r > 0$ is some predefined constant.

(ii) The sample $(1 - \rho)$-quantile will now be calculated by first ordering the sample performances $\bar{J}_k(x^i)$, $i = 1, \ldots, N_k$, from largest to smallest, and then taking the $\lceil (1 - \rho) N_k \rceil$-th order statistic.

(iii) The threshold function should now be modified as
$$\tilde{\chi}(y, \gamma) = \begin{cases} 0 & \text{if } y \ge \gamma + \varepsilon, \\ (\gamma + \varepsilon - y)/\varepsilon & \text{if } \gamma < y < \gamma + \varepsilon, \\ 1 & \text{if } y \le \gamma. \end{cases}$$

(iv) The inequalities at Steps 3a and 3b need to be replaced with their minimization counterparts, with the sense of the comparisons reversed and $\varepsilon$ subtracted rather than added.

(v) In the actual implementation of SMRAS, a smoothed parameter updating procedure (cf., e.g., De Boer et al. 2005, Rubinstein 1999) is used: first a smoothed parameter vector $\hat{\theta}_{k+1}$ is computed at each iteration $k$ according to $\hat{\theta}_{k+1} = \upsilon\, \theta_{k+1} + (1 - \upsilon)\, \hat{\theta}_k$, $\upsilon \in (0, 1]$, $\hat{\theta}_0 = \theta_0$, where $\theta_{k+1}$ is the parameter vector derived at Step 4 of SMRAS and $\upsilon$ is the smoothing parameter; then $f(\cdot, \hat{\theta}_{k+1})$ instead of $f(\cdot, \theta_{k+1})$ is used in Step 1 to generate new samples. Although this modification will not affect the theoretical convergence results, it may improve the empirical performance of the algorithm.

### 4.1 Continuous Optimization

For continuous problems, we use multivariate normal pdfs as the parameterized probabilistic model.
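The smoothed update in step (v) is a one-line convex combination; a minimal sketch, with parameter vectors as plain lists and $\upsilon = 0.5$ as used in the continuous experiments below:

```python
# Smoothed parameter update, step (v) above:
#   theta_hat_{k+1} = nu * theta_{k+1} + (1 - nu) * theta_hat_k
def smooth(theta_new, theta_hat_prev, nu=0.5):
    """Convex combination of the newly derived parameter vector and
    the previous smoothed parameter vector."""
    return [nu * a + (1.0 - nu) * b for a, b in zip(theta_new, theta_hat_prev)]

theta_hat = [0.0, 0.0]                     # theta_hat_0 = theta_0
theta_hat = smooth([4.0, 2.0], theta_hat)  # -> [2.0, 1.0]
theta_hat = smooth([4.0, 2.0], theta_hat)  # -> [3.0, 1.5]
```

If the same parameter vector keeps being derived, the smoothed vector converges to it geometrically at rate $(1 - \upsilon)$, which is why the modification does not affect the asymptotic convergence results.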
Initially, a mean vector $\mu_0$ and a covariance matrix $\Sigma_0$ are specified; then, at each iteration of the algorithm, it is easy to see that the new parameters $\mu_{k+1}$ and $\Sigma_{k+1}$ are updated according to the following recursive formulas:

$$\mu_{k+1} = \frac{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))\, x}{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))}, \qquad \Sigma_{k+1} = \frac{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))\, (x - \mu_{k+1})(x - \mu_{k+1})^T}{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))}.$$

By Corollary 3.2, the sequence of mean vectors $\{\mu_k\}$ will converge to the optimal solution, and the sequence of covariance matrices $\{\Sigma_k\}$ to the zero matrix. In subsequent numerical experiments, we will use $\mu_{k+1}$ to represent the current best solution found at iteration $k$.

#### 4.1.1 Global Convergence

To demonstrate the global convergence of SMRAS, we consider four multi-extremal test functions with additive noise, where the noise $\psi$ is normally distributed with mean 0 and variance 100. Graphs of the deterministic versions of these functions in two dimensions are given in Figure 2.

1. Goldstein-Price function with additive noise:
$$J_1(x, \psi) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right]\left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right] + \psi,$$
where $-3 \le x_i \le 3$, $i = 1, 2$. The function $\bar{J}_1(x) := E_\psi[J_1(x, \psi)]$ has four local minima and a global minimum at $x^* = (0, -1)^T$, $\bar{J}_1(x^*) = 3$.

2. Rosenbrock function with additive noise ($n = 5$):
$$J_2(x, \psi) = \sum_{i=1}^{n-1} \left[100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right] + 1 + \psi,$$
where $-10 \le x_i \le 10$, $i = 1, \ldots, n$. Its deterministic counterpart $\bar{J}_2(x) := E_\psi[J_2(x, \psi)]$ has the reputation of being difficult to minimize and is widely used to test the performance of different global optimization algorithms. The function has a global minimum at $x^* = (1, \ldots, 1)^T$, $\bar{J}_2(x^*) = 1$.

3. Pinter's function with additive noise ($n = 5$):
$$J_3(x, \psi) = \sum_{i=1}^{n} i\,x_i^2 + \sum_{i=1}^{n} 20\,i\,\sin^2\!\left(x_{i-1}\sin x_i - x_i + \sin x_{i+1}\right) + \sum_{i=1}^{n} i\,\log_{10}\!\left(1 + i\,(x_{i-1}^2 - 2x_i + 3x_{i+1} - \cos x_i + 1)^2\right) + 1 + \psi,$$
where $x_0 = x_n$, $x_{n+1} = x_1$, and $-10 \le x_i \le 10$, $i = 1, \ldots, n$. The global minimum is at $x^* = (0, \ldots, 0)^T$, $\bar{J}_3(x^*) = 1$.

4. Griewank function with additive noise ($n = 10$):
$$J_4(x, \psi) = \frac{1}{40}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 2 + \psi,$$
where $-10 \le x_i \le 10$, $i = 1, \ldots, n$. The global minimum is at $x^* = (0, \ldots, 0)^T$, $\bar{J}_4(x^*) = 1$.

*Figure 2 (not reproduced): the four test functions in two dimensions — (a) $J_1$ Goldstein-Price, (b) $J_2$ Rosenbrock, (c) $J_3$ Pinter, (d) $J_4$ Griewank.*
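The four test functions, as reconstructed above, can be coded directly. Note that the exact scaling and shift constants are inferred from a garbled source, so treat them as assumptions; dropping the $\psi$ argument recovers the deterministic versions and their stated minima:

```python
import math
import random

def goldstein_price(x1, x2, noise=0.0):
    """J1: global minimum 3 at (0, -1)."""
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b + noise

def rosenbrock(x, noise=0.0):
    """J2 (shifted by +1): global minimum 1 at (1, ..., 1)."""
    return sum(100*(x[i+1] - x[i]**2)**2 + (x[i] - 1)**2
               for i in range(len(x) - 1)) + 1 + noise

def pinter(x, noise=0.0):
    """J3 (shifted by +1): global minimum 1 at (0, ..., 0), circular indices."""
    n = len(x)
    total = 1.0
    for i in range(n):
        prev, nxt = x[i - 1], x[(i + 1) % n]   # x_0 = x_n, x_{n+1} = x_1
        total += (i + 1) * x[i]**2
        total += 20*(i + 1) * math.sin(prev*math.sin(x[i]) - x[i] + math.sin(nxt))**2
        total += (i + 1) * math.log10(1 + (i + 1)*(prev**2 - 2*x[i] + 3*nxt
                                                   - math.cos(x[i]) + 1)**2)
    return total + noise

def griewank(x, noise=0.0):
    """J4 (shifted): global minimum 1 at (0, ..., 0)."""
    s = sum(xi**2 for xi in x) / 40.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 2 + noise

# Noisy evaluation as in the paper: psi ~ N(0, 100)
noisy = goldstein_price(0.0, -1.0, noise=random.gauss(0, 10))
```

With noise variance 100, a single evaluation near the Goldstein-Price optimum (value 3) is dominated by noise, which is exactly what makes these benchmarks hard for direct search.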
For all four problems, the same set of parameters is used to test SMRAS: $r = 0.01$, $\varepsilon = 0.01$, mixing coefficient $\lambda = 0.01$, initial sample size $N_0 = 500$, initial $\rho = 0.1$, $\alpha = 1.04$, and the observation allocation rule $M_k = \lceil 1.05\, M_{k-1} \rceil$ with $M_0 = 10$; the smoothing parameter is $\upsilon = 0.5$; the initial mean vector $\mu_0$ is an $n \times 1$ vector with each component uniformly selected from the interval $[-3, 3]$ for $J_1$ and from $[-10, 10]$ for $J_2$, $J_3$, and $J_4$; and $\Sigma_0$ is initialized as an $n \times n$ diagonal matrix with all diagonal elements equal to 100, where $n$ is the dimension of the problem. Note that if the random observations obey the large deviations principle, then $\alpha$ and $M_k$ satisfy the relevant conditions in Theorem 3.1. We refer the reader to Hu et al. (2006) for a discussion of general guidelines for selecting $r$ and $\upsilon$.

For numerical comparison purposes, we also applied the simultaneous perturbation stochastic approximation (SPSA) algorithm (Spall 1992) and the simulated annealing (SA) algorithm (Corana et al. 1987) to all test cases. In SPSA, the gradient is estimated by averaging $q$ independent simultaneous perturbation approximations at each iteration, and whenever an update results in a solution that violates the constraint, we simply project the solution back into the feasible region. In our experiments we choose $q = 1$, i.e., the algorithm requires only two measurements of the objective function at each iteration. We have also used a standard gain sequence $a_k = a/k$ and a step-size sequence $c_k = c/k^{0.25}$, where $k = 1, 2, \ldots$ is the iteration counter, with $a = 5 \times 10^{-4}$ for $J_1$ and $J_2$ and $c = 5 \times 10^{-2}$ for $J_3$ and $J_4$.

The SA algorithm we considered is a simple stochastic version of standard SA, where each time the algorithm visits a solution, we allocate $L$ independent simulation observations to that solution, estimate the performance of that solution by averaging over the $L$ replications, and then use standard SA to solve the underlying problem. We used the following set of parameters: $L = 50$, initial temperature $T = 5 \times 10^4$, annealing factor $\eta = 0.85$, and the neighborhood of a solution $x$ to be searched at each iteration is taken to be $\{y : \max_{1 \le i \le n} |x_i - y_i| \le 0.5\}$.
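For reference, the two-measurement simultaneous perturbation gradient estimate that SPSA uses with $q = 1$ can be sketched as follows. The quadratic objective and the gain constants here are illustrative stand-ins of my own choosing, not the paper's test setup:

```python
import random

def spsa_step(x, f, k, a=0.1, c=0.1):
    """One SPSA iteration: perturb all coordinates at once with a
    Rademacher vector, estimate the gradient from just two (possibly
    noisy) measurements, and take a gradient-descent step."""
    delta = [random.choice([-1.0, 1.0]) for _ in x]
    fp = f([xi + c * di for xi, di in zip(x, delta)])
    fm = f([xi - c * di for xi, di in zip(x, delta)])
    g = [(fp - fm) / (2.0 * c * di) for di in delta]  # gradient estimate
    ak = a / (k + 1)                                  # decaying gain a_k
    return [xi - ak * gi for xi, gi in zip(x, g)]

random.seed(1)
f = lambda x: sum(xi**2 for xi in x)  # toy smooth objective (stand-in)
x = [5.0, -5.0]
for k in range(200):
    x = spsa_step(x, f, k)
# x has drifted toward the minimizer at the origin
```

The key point is that the measurement cost per iteration is 2 regardless of dimension, which is why SPSA can make fast early progress but, as the paper notes, may settle into whichever local optimum the initial solution leads to.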
For both SPSA and SA, the initial solution is uniformly selected from $[-3, 3]^n$ for $J_1$ and from $[-10, 10]^n$ for $J_2$, $J_3$, and $J_4$.

For each function, we performed 100 independent simulation runs of all three algorithms. The numerical results are reported in Table 1, where $\bar{J}_i$, $i = 1, 2, 3, 4$, are the averaged function values $\bar{J}$ evaluated at the final solutions found by the algorithms, and the corresponding optimal values $J^*$ are also included for reference. Our performance comparison is based on the same amount of simulation effort, where the total number of allowed function measurements is set to $3 \times 10^5$ for $J_1$ and $J_3$, $2 \times 10^6$ for $J_2$, and $10^6$ for $J_4$. In Figure 3 we also plot the average function values of the current solutions as a function of the number of function measurements.

Numerical results indicate convergence of all three algorithms. Since SPSA uses gradient information, it may quickly locate a local optimum using only a small number of function measurements; however, depending on the initial solutions used, the algorithm may converge to solutions that are far from optimal. Since SA permits uphill moves, it has the capability to escape from local optimal solutions. Therefore, except for the $J_2$ case, which is often considered a unimodal function, SA generally yields better performance than SPSA in the long run. SMRAS consistently outperforms SA in all test cases, and finds better solutions than SPSA does when the number of function measurements is large enough.

| Algorithm | $\bar{J}_1$ (std err) | $J_1^*$ | $\bar{J}_2$ (std err) | $J_2^*$ | $\bar{J}_3$ (std err) | $J_3^*$ | $\bar{J}_4$ (std err) | $J_4^*$ |
|---|---|---|---|---|---|---|---|---|
| SMRAS | 3.12 (0.01) | 3 | 1.37 (0.02) | 1 | 1.60 (0.03) | 1 | 1.75 (0.03) | 1 |
| SPSA ($q = 1$) | 31.2 (1.17) | 3 | 2.02 (0.20) | 1 | 116.4 (6.1) | 1 | 6.54 (0.15) | 1 |
| SA | 18.8 (3.2) | 3 | 4.21 (0.22) | 1 | 14.2 (1.5) | 1 | 3.89 (0.09) | 1 |

Table 1: Performance of SMRAS, SPSA, and SA on benchmark problems $J_1$-$J_4$, based on 100 independent simulation runs. Standard errors are in parentheses.

#### 4.1.2 An Inventory Control Example

To further illustrate the algorithm, we consider an $(s, S)$ inventory control problem with i.i.d.
exponentially distributed continuous demands, zero order lead times, full backlogging of orders, and linear ordering, holding, and shortage costs. The inventory level is periodically reviewed, and an order is placed when the inventory position (on hand plus on order) falls below the level $s$; the amount of the order is the difference between $S$ and the current inventory position.

*Figure 3 (not reproduced): average function value versus total sample size for SMRAS, SPSA, and SA on (a) the 2-D Goldstein-Price function, (b) the 5-D Rosenbrock function, (c) the 5-D Pinter function, and (d) the 10-D Griewank function.*

Formally, we let $D_t$ denote the demand in period $t$, $X_t$ the inventory position in period $t$, $p$ the per-period per-unit demand shortage penalty cost, $h$ the per-period per-unit inventory holding cost, $c$ the per-unit ordering cost, and $K$ the set-up cost per order. The inventory position $X_t$ evolves according to the following dynamics:

$$X_{t+1} = \begin{cases} S - D_{t+1} & \text{if } X_t < s, \\ X_t - D_{t+1} & \text{if } X_t \ge s. \end{cases}$$

The goal is to choose the thresholds $s$ and $S$ such that the long-run average cost per period is minimized, i.e.,

$$(s^*, S^*) = \arg\min_{(s, S)} \lim_{t \to \infty} J_t(s, S), \quad \text{where} \quad J_t(s, S) = \frac{1}{t} \sum_{i=1}^{t} \left[ I\{X_i < s\}\left(K + c\,(S - X_i)\right) + h\,X_i^+ + p\,X_i^- \right],$$

with $x^+ = \max(0, x)$ and $x^- = \max(0, -x)$. Note that the above objective function is convex; thus a natural choice of comparison algorithm is a gradient-based algorithm like SPSA. However, we will not exploit this structure in SMRAS.

The following four test cases, taken from Fu and Healy (1997), are used to test the performance of SMRAS and SPSA. The cost coefficients and the analytical optimal solutions are given in Table 2, each with $c = h = 1$ and exponentially distributed demands.

| Case | $E[D]$ | $p$ | $K$ | $J^*$ | $s^*$ | $S^*$ |
|---|---|---|---|---|---|---|
| 1 | 200 | 10 | 100 | 740.9 | 341 | 541 |
| 2 | 200 | 10 | 10000 | 2200.0 | 0 | 2000 |
| 3 | 200 | 100 | 100 | 1184.4 | 784 | 984 |
| 4 | 200 | 100 | 10000 | 2643.4 | 443 | 2443 |

Table 2: The four test cases.

In our simulation experiments with SMRAS, the initial mean vector is uniformly selected from $[0, 2000] \times [0, 4000]$ for all four cases, and the covariance matrices are initialized as diagonal matrices with all diagonal elements equal to $10^6$.
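A bare-bones simulation of the $(s, S)$ dynamics and cost above, using the Case 1 parameters: the horizon, seed, and the exact cost-accounting convention (order at review when $X < s$, then charge holding/shortage on the post-demand position) are modeling choices on my part:

```python
import random

def avg_cost(s, S, E_D=200.0, p=10.0, K=100.0, c=1.0, h=1.0,
             periods=20000, warmup=50, seed=42):
    """Estimate the long-run average cost per period of an (s, S) policy
    with i.i.d. exponential demands, zero lead time, full backlogging."""
    rng = random.Random(seed)
    X, total = S, 0.0
    for t in range(warmup + periods):
        cost = 0.0
        if X < s:                            # review: order up to S
            cost += K + c * (S - X)
            X = S
        X -= rng.expovariate(1.0 / E_D)      # demand for this period
        cost += h * max(0.0, X) + p * max(0.0, -X)
        if t >= warmup:
            total += cost
    return total / periods

good = avg_cost(341, 541)  # the analytically optimal thresholds for Case 1
bad = avg_cost(0, 100)     # a deliberately poor policy for comparison
```

The wide gap between the two estimates illustrates why $(341, 541)$ is a good policy; exact agreement with the analytical $J^* = 740.9$ depends on the precise cost-accounting convention, which the extracted text does not fully pin down.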
The other parameters are: $r = 0.01$, $\varepsilon = 0.01$, $\lambda = 0.01$, $N_0 = 100$, initial $\rho = 0.1$, $\alpha = 1.04$, $M_k = \lceil 1.05\, M_{k-1} \rceil$ with $M_0 = 10$, and smoothing parameter $\upsilon = 0.5$. For SPSA, we considered two cases, $q = 1$ and $q = 10$, where in both cases the initial solutions are uniformly selected from $[0, 2000] \times [0, 4000]$, and a gain sequence $a_k = 200/k$ and a step-size sequence $c_k = 200/k^{0.25}$ are used, which give reasonable performance for different starting values. In both SMRAS and SPSA, the average cost per period is estimated by averaging the accumulated cost over 50 periods after a warm-up length of 50 periods.

The average performances of both algorithms for all test cases are given in Table 3, where $N_p$ indicates the total number of periods (including the warm-up periods) simulated, and the entries represent the averaged function values $\bar{J}$ of the final sample solutions obtained for different choices of $N_p$, each based on 30 independent simulation replications. Since SPSA ($q = 10$) uses more precise gradient estimates, it can generally produce better solutions than SPSA ($q = 1$) within the same number of algorithm iterations. However, the performance gain of $q = 10$ over $q = 1$ is compromised by the additional simulation effort required to estimate the gradient, in the sense that for a fixed simulation budget the solutions found by SPSA ($q = 10$) are actually worse than those found by SPSA ($q = 1$). SMRAS does not explicitly exploit the gradient structure; nevertheless, the algorithm still does very well compared to an algorithm that is designed for convex problems and utilizes gradient information. We see that in all four cases SMRAS provides superior empirical performance over SPSA and finds solutions that are reasonably close to optimal. Moreover, the algorithm also shows a significant variance reduction over SPSA.

### 4.2 Combinatorial Optimization

To illustrate the performance of SMRAS on discrete problems, we consider the buffer allocation problem in a service facility with unreliable servers. The system consists of $m$
servers in series, which are separated by $m - 1$ buffer locations.

| Algorithm | Case | $N_p = 10^3$ | $N_p = 10^4$ | $N_p = 10^5$ | $N_p = 10^6$ | $J^*$ |
|---|---|---|---|---|---|---|
| SMRAS | 1 | — | 996.2 (26.2) | 820.1 (9.1) | 747.3 (1.0) | 740.9 |
| | 2 | — | 2342.3 (14.7) | 2263.3 (8.2) | 2216.6 (2.8) | 2200.0 |
| | 3 | — | 1310.8 (15.7) | 1291.4 (13.6) | 1219.5 (3.7) | 1184.4 |
| | 4 | — | 2737.3 (13.9) | 2690.2 (5.2) | 2663.5 (3.3) | 2643.4 |
| SPSA ($q = 1$) | 1 | 1399.7 (85.0) | 968.6 (37.9) | 836.8 (17.1) | 813.6 (17.1) | 740.9 |
| | 2 | 2903.7 (101.3) | 2447.1 (63.8) | 2312.9 (35.4) | 2242.2 (14.6) | 2200.0 |
| | 3 | 2204.8 (189.8) | 1774.6 (161.7) | 1559.3 (135.1) | 1378.3 (87.8) | 1184.4 |
| | 4 | 4358.2 (418.7) | 3123.0 (144.6) | 2943.7 (119.5) | 2819.2 (85.3) | 2643.4 |
| SPSA ($q = 10$) | 1 | — | 1498.1 (94.6) | 950.5 (33.3) | 829.6 (16.7) | 740.9 |
| | 2 | — | 2956.7 (97.3) | 2402.2 (34.8) | 2265.9 (11.8) | 2200.0 |
| | 3 | — | 2979.1 (500.8) | 1640.6 (117.8) | 1468.1 (87.3) | 1184.4 |
| | 4 | — | 4337.6 (518.3) | 2987.1 (101.0) | 2845.9 (76.8) | 2643.4 |

Table 3: Performance of SMRAS and SPSA on the four test cases, based on 30 independent simulation runs. Standard errors are in parentheses.

Each job enters the system from the first server, goes through all intermediate servers and buffer locations in sequential order, and finally exits from the last server. The service times at each server are independent and exponentially distributed with service rates $\mu_i$, $i = 1, \ldots, m$. The servers are assumed to be unreliable and are subject to random failures; when a server fails, it has to be repaired. The time to failure and the time for repair are both i.i.d. exponentially distributed, with respective rates $f_i$ and $r_i$, $i = 1, \ldots, m$. A server is blocked when the buffer associated with the server coming next to it is full, and is starved when no jobs are offered to it. Thus the status of a server (busy/broken) will affect the status of all other servers in the system. We assume that the failure rate of each server remains the same regardless of its current status.

Given $n$ limited buffer spaces, our goal is to find an optimal way of allocating these $n$ spaces to the $m - 1$ buffer locations such that the throughput (average production rate) is maximized. When applying SMRAS, we used the same technique as in Allon et al. (2005) to generate admissible buffer allocations; the basic idea is to use an $(m - 1) \times (n + 1)$ matrix $P$ whose $(i, j)$-th entry specifies the probability of allocating $j - 1$ buffer
spaces to the $i$-th buffer location, and then to generate allocations according to $P$. Please refer to their paper for a detailed discussion of how admissible allocations are generated. We define an allocation scheme $x$ as an $(m - 1) \times 1$ vector whose $i$-th element specifies the number of buffer spaces allocated to the $i$-th location. Thus, when parameterized by $P$, the probability of generating $x$ is

$$p(x; P) = \prod_{i=1}^{m-1} \prod_{j=1}^{n+1} \left(P_{i,j}\right)^{I\{x \in X_{i,j}\}} = e^{\theta^T \Gamma(x)},$$

where $X_{i,j}$ represents the set of allocation schemes in which $j - 1$ buffer spaces are allocated to the $i$-th buffer location, $\theta = (\ln P_{1,1}, \ldots, \ln P_{m-1,n+1})^T$, and $\Gamma(x) = \left(I\{x \in X_{1,1}\}, \ldots, I\{x \in X_{m-1,n+1}\}\right)^T$. Once the admissible allocations are generated, it is not difficult to see that the entries of the matrix $P$ are updated at the $k$-th iteration as

$$P^{k+1}_{i,j} = \frac{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))\, I\{x \in X_{i,j}\}}{\sum_{x \in \Lambda_k} S_k(\bar{J}_k(x))},$$

where $\Lambda_k = \{x^1, \ldots, x^{N_k}\}$ is the set of $N_k$ admissible buffer allocations generated and $\bar{J}_k(x)$ is the average throughput obtained via simulation when allocation $x$ is used. It is not difficult to see that a straightforward interpretation of Theorem 3.1 yields $\lim_{k \to \infty} P^k_{i,j} = I\{x^* \in X_{i,j}\}$ for all $i = 1, \ldots, m - 1$ and $j = 1, \ldots, n + 1$, which indicates that the sequence of stochastic matrices $\{P^k\}$ will converge to a matrix $P^*$ with all its mass at the optimal allocation scheme $x^*$, i.e., $P^*_{i,j} = 1$ if $x^* \in X_{i,j}$ and $P^*_{i,j} = 0$ otherwise.

For the numerical experiments, we consider two cases:

(i) $m = 3$, $n = 1, \ldots, 10$, service rates $\mu_1 = 1$, $\mu_2 = 1.2$, $\mu_3 = 1.4$, failure rates $f_i = 0.05$, and repair rates $r_i = 0.5$ for all $i = 1, 2, 3$;

(ii) $m = 5$, $n = 1, \ldots, 10$, service rates $\mu_1 = 1$, $\mu_2 = 1.1$, $\mu_3 = 1.2$, $\mu_4 = 1.3$, $\mu_5 = 1.4$, $f_i = 0.05$, and $r_i = 0.5$ for all $i = 1, \ldots, 5$.

Apart from their combinatorial nature, an additional difficulty in solving these problems is that different buffer allocation schemes (samples) have similar performances; thus, when only noisy observations are available, it can be very difficult to discern the best allocation from a set of candidate allocation schemes. Because of this, in SMRAS we used a relatively large $r$. The other parameters are as follows: $\varepsilon = 0.001$, $\lambda = 0.01$, initial sample size $N_0 = 10$ for case (i) and $N_0 = 20$ for case (ii), initial $\rho = 0.1$, $\alpha = 1.2$, observation allocation rule $M_k = \lceil 1.5\, M_{k-1} \rceil$ with $M_0 = 1$, and stopping control parameters $\tau = 10^{-4}$ and $l = 5$ (see Remark 7).
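The matrix-based sampling of admissible allocations can be sketched with simple rejection: draw each location's allocation from its row of $P$ and keep only draws that use exactly $n$ buffer spaces. This rejection shortcut is my simplification of the conditional generation scheme the paper borrows from Allon et al. (2005):

```python
import random

def sample_allocation(P, n, rng=random):
    """Draw one admissible allocation x (length m-1) with sum(x) == n.

    P is an (m-1) x (n+1) matrix; here row i's column j is read as the
    probability of giving j buffer spaces to location i (the paper
    indexes this as j-1 spaces in column j).  Rejection sampling:
    retry until the draw uses exactly n spaces.
    """
    while True:
        x = [rng.choices(range(len(row)), weights=row)[0] for row in P]
        if sum(x) == n:
            return x

# Uniform initial model, small instance of case (i): m = 3 servers
# (two buffer locations), n = 3 spaces to allocate.
m, n = 3, 3
P0 = [[1.0 / (n + 1)] * (n + 1) for _ in range(m - 1)]
random.seed(7)
x = sample_allocation(P0, n)
```

Rejection is wasteful when $m$ is large, since most unconstrained draws miss the total $n$; the sequential conditional scheme referenced in the text avoids that, but the accepted samples here follow the same constrained distribution.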
The smoothing parameter is $\upsilon = 0.7$, and the initial $P^0$ is taken to be a uniform matrix with each column sum equal to one. Steady-state throughputs are simulated after 100 warm-up periods and then averaged over the subsequent 900 periods; we start all simulation replications with the system empty. Note that we employed the sample reuse procedure (cf. Remark 1) in the actual implementation of the algorithm.

Tables 4 and 5 give the performances of SMRAS for cases (i) and (ii), respectively. In each table, $N_{avg}$ is the average number of simulations over 16 independent trials, Alloc is the true optimal allocation scheme, $N_{A^*}$ is the number of times the optimal allocation was found out of the 16 runs, $\bar{T}$ is the average throughput value calculated by the algorithm, and $T^*$ represents the exact optimal value (cf. Vouros and Papadopoulos 1998). We see that in both cases SMRAS produces very accurate solutions while using only a small number of simulations.

| $n$ | $N_{avg}$ (std err) | Alloc | $N_{A^*}$ | $\bar{T}$ (std err) | $T^*$ |
|---|---|---|---|---|---|
| 1 | 331 (4.9) | (1, 0) | 16 | 0.634 (4.06e-4) | 0.634 |
| 2 | 468 (3.2) | (1, 1) | 16 | 0.674 (6.35e-4) | 0.674 |
| 3 | 439 (1.5) | (2, 1) | 16 | 0.711 (6.11e-4) | 0.711 |
| 4 | 498 (3.5) | (3, 1) | 14 | 0.735 (6.47e-4) | 0.736 |
| 5 | 504 (3.7) | (3, 2) | 13 | 0.758 (1.06e-3) | 0.759 |
| 6 | 640 (6.3) | (4, 2) | 12 | 0.776 (1.39e-3) | 0.778 |
| 7 | 591 (4.3) | (5, 2) | 14 | 0.792 (1.04e-3) | 0.792 |
| 8 | 639 (4.8) | (5, 3) | 10 | 0.805 (1.20e-3) | 0.806 |
| 9 | 606 (3.5) | (6, 3) | 10 | 0.817 (6.53e-4) | 0.818 |
| 10 | 637 (5.7) | (7, 3) | 12 | 0.826 (9.88e-4) | 0.827 |

Table 4: Performance of SMRAS on the buffer allocation problem, case (i), based on 16 independent simulation runs. Standard errors are in parentheses.

| $n$ | $N_{avg}$ (std err) | Alloc | $N_{A^*}$ | $\bar{T}$ (std err) | $T^*$ |
|---|---|---|---|---|---|
| 1 | 1.02e2 (7.49) | (0, 1, 0, 0) | 16 | 0.523 (6.79e-4) | 0.521 |
| 2 | 1.29e2 (14.8) | (1, 1, 0, 0) | 16 | 0.555 (3.86e-4) | 0.551 |
| 3 | 1.75e2 (15.7) | (1, 1, 1, 0) | 16 | 0.587 (4.57e-4) | 0.582 |
| 4 | 2.51e2 (25.9) | (1, 2, 1, 0) | 11 | 0.606 (1.20e-3) | 0.603 |
| 5 | 3.37e2 (42.0) | (2, 2, 1, 0) | 10 | 0.626 (6.57e-4) | 0.621 |
| 6 | 4.69e2 (55.2) | (2, 2, 1, 1) | 8 | 0.644 (1.10e-3) | 0.642 |
| 7 | 4.56e2 (58.2) | (2, 2, 2, 1) | 7 | 0.659 (1.10e-3) | 0.659 |
| 8 | 4.45e2 (54.9) | (3, 2, 2, 1) | 7 | 0.674 (1.10e-3) | 0.674 |
| 9 | 5.91e2 (56.1) | (3, 3, 2, 1) | 6 | 0.689 (1.39e-3) | 0.689 |
| 10 | 5.29e2 (54.0) | (3, 3, 3, 1) | 8 | 0.701 (1.10e-3) | 0.701 |

Table 5: Performance of SMRAS on the buffer allocation problem, case (ii), based on 16 independent simulation runs. Standard errors are in parentheses.

## 5 Conclusions and Future Research

We have proposed a new randomized search method, called Stochastic Model Reference Adaptive Search (SMRAS), for solving both continuous and discrete stochastic global optimization problems. The method is shown to converge asymptotically to the optimal solution with probability one. The algorithm is general, requires only a few mild regularity conditions on the underlying problem, and thus can be applied to a wide range of problems with little modification. More importantly, we believe that the idea behind SMRAS offers a general framework for stochastic global optimization, based on which one can design and implement other efficient algorithms.

There are several input parameters in SMRAS. In our preliminary numerical experiments, the choices of these parameters were based on trial and error. For a given problem, how to determine a priori the most appropriate values of these parameters is an open issue. One research topic is to study the effects of these parameters on the performance of the method, and possibly to design a scheme that chooses them adaptively during the search process.

Our current numerical study with the algorithm shows that the objective function need not be evaluated very accurately during the initial search phase; instead, it is sufficient to provide the algorithm with a rough idea of where the good solutions are located. This has motivated our research into observation allocation rules with adaptively increasing rates during different search phases. For instance, during the initial search phase we could increase $M_k$ at a linear rate, or even keep it at a constant value, and exponential rates would only be used during the later search phase, when more accurate estimates of the objective function values are required. Some other research topics that would further enhance the performance of SMRAS include incorporating local search techniques into the algorithm and implementing a parallel version of
the method.

## References

G. Allon, D. P. Kroese, T. Raviv, and R. Y. Rubinstein (2005), "Application of the cross-entropy method to the buffer allocation problem in a simulation-based environment," Annals of Operations Research, Vol. 134, pp. 137-151.

M. H. Alrefaei and S. Andradottir (1995), "A modification of the stochastic ruler method for discrete stochastic optimization," European Journal of Operational Research, Vol. 133, pp. 160-182.

M. H. Alrefaei and S. Andradottir (1999), "A simulated annealing algorithm with constant temperature for discrete stochastic optimization," Management Science, Vol. 45, pp. 748-764.

S. Andradottir (1995), "A method for discrete stochastic optimization," Management Science, Vol. 41, pp. 1946-1961.

A. Corana, M. Marchesi, C. Martini, and S. Ridella (1987), "Minimizing multimodal functions of continuous variables with the simulated annealing algorithm," ACM Transactions on Mathematical Software, Vol. 13, pp. 262-280.

P. T. De Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein (2005), "A tutorial on the cross-entropy method," Annals of Operations Research, Vol. 134, pp. 19-67.

M. Dorigo and L. M. Gambardella (1997), "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, Vol. 1, pp. 53-66.

M. C. Fu and K. J. Healy (1997), "Techniques for simulation optimization: an experimental study on an (s, S) inventory system," IIE Transactions, Vol. 29, pp. 191-199.

M. C. Fu (2005), "Stochastic gradient estimation," Chapter 19 in Handbooks in Operations Research and Management Science: Simulation, S. G. Henderson and B. L. Nelson, eds., Elsevier.

P. Glasserman (1991), Gradient Estimation via Perturbation Analysis, Kluwer Academic Publishers, Boston, Massachusetts.

P. W. Glynn (1987), "Likelihood ratio gradient estimation: an overview," Proceedings of the 1987 Winter Simulation Conference, pp. 366-375.

W. J. Gutjahr (2003), "A converging ACO algorithm for stochastic combinatorial optimization," Proc. SAGA 2003 (Stochastic Algorithms: Foundations and Applications), Hatfield, UK, A. Albrecht and K. Steinhofel, eds., Springer LNCS 2827, pp. 10-25.

W. Hoeffding (1963), "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, Vol. 58, pp. 13-30.

L. J. Hong and B. L. Nelson (2006), "Discrete optimization via simulation using COMPASS," Operations Research, forthcoming.

Y. C. Ho and X. R. Cao (1991), Perturbation Analysis of Discrete Event Dynamic Systems, Kluwer Academic Publishers, Norwell, Massachusetts.

J. Hu, M. C. Fu, and S. I. Marcus (2006), "A model reference adaptive search method for global optimization," Operations Research, forthcoming.

J. Kiefer and J. Wolfowitz (1952), "Stochastic estimation of the maximum of a regression function," Annals of Mathematical Statistics, Vol. 23, pp. 462-466.

P. L'Ecuyer (1991), "An overview of derivative estimation," Proceedings of the 1991 Winter Simulation Conference, pp. 207-217.

G. C. Pflug (1989), "Sampling derivatives of probabilities," Computing, Vol. 42, pp. 315-328.

H. Robbins and S. Monro (1951), "A stochastic approximation method," Annals of Mathematical Statistics, Vol. 22, pp. 400-407.

R. Y. Rubinstein (1999), "The cross-entropy method for combinatorial and continuous optimization," Methodology and Computing in Applied Probability, Vol. 2, pp. 127-190.

R. Y. Rubinstein (2001), "Combinatorial optimization, ants and rare events," in Stochastic Optimization: Algorithms and Applications, S. Uryasev and P. M. Pardalos, eds., Kluwer, pp. 304-358.

R. Y. Rubinstein and D. P. Kroese (2004), The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte Carlo Simulation, and Machine Learning, Springer, New York.

R. Y. Rubinstein and A. Shapiro (1993), Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method, John Wiley & Sons.

L. Shi and S. Olafsson (2000), "Nested partitions method for stochastic optimization," Methodology and Computing in Applied Probability, Vol. 2, pp. 271-291.

J. C. Spall (1992), "Multivariate stochastic approximation using a simultaneous perturbation gradient approximation," IEEE Transactions on Automatic Control, Vol. 37, pp. 332-341.

M. Srinivas and L. M. Patnaik (1994), "Genetic algorithms: a survey," IEEE Computer, Vol. 27, pp. 17-26.
