In many problems of statistics, it is of interest to compute expectations with respect to a probability distribution. For certain situations, it is also necessary to estimate its normalizing constant. Specifically, let q(x) be a nonnegative function on a state space X and consider the probability distribution whose density is p(x) = q(x)/Z with respect to a baseline measure µ0, where Z is the normalizing constant ∫ q(x) dµ0. Monte Carlo is a useful method for solving the aforementioned problems, and typically has two parts, simulation and estimation, in its implementation. First, a sequence of observations x1, ..., xn is simulated from the distribution p(·). Then the expectation Ep(φ) of a function φ(x) with respect to p(·) can be estimated by the sample average or the crude Monte Carlo (CMC) estimator
\[
\frac{1}{n}\sum_{i=1}^{n} \phi(x_i). \tag{1.1}
\]
By letting φ(x) = q1(x)/q(x), the normalizing constant Z can be estimated by
\[
\left\{\frac{1}{n}\sum_{i=1}^{n} \frac{q_1(x_i)}{q(x_i)}\right\}^{-1}, \tag{1.2}
\]
where q1(x) is a probability density on X. This estimator is called reciprocal importance sampling (RIS); see DiCiccio, Kass, Raftery, and Wasserman (1997) and Gelfand and Dey (1994).

This article considers rejection sampling or Metropolis-Hastings sampling for the simulation part. Rejection sampling requires a probability density ρ(x) and a constant C such that q(x) ≤ Cρ(x) on X, which implies Z ≤ C (von Neumann 1951). At each time t ≥ 1,
• Sample yt from ρ(·);
• accept yt with probability q(yt)/[Cρ(yt)] and move to the next trial otherwise.
The second step can be implemented by generating ut from uniform(0, 1) and accepting yt if ut ≤ q(yt)/[Cρ(yt)]. Then the accepted yt are independent and identically distributed (iid) as p(·).

By comparison, Metropolis-Hastings sampling requires a family of probability densities {ρ(·; x) : x ∈ X} (Metropolis et al. 1953; Hastings 1970). At each time t ≥ 1,
• Sample yt from ρ(·; xt−1);
• accept xt = yt with probability 1 ∧ β(yt; xt−1) and let xt = xt−1 otherwise, where
\[
\beta(y; x) = \frac{q(y)\,\rho(x; y)}{q(x)\,\rho(y; x)}.
\]
The second step can also be implemented by generating ut from uniform(0, 1) and accepting xt = yt if ut ≤ 1 ∧ β(yt; xt−1). Under suitable regularity conditions, the Markov chain (x1, x2, ...) converges to the target distribution p(·). In the so-called independence case (IMH), the proposal density ρ(·; x) ≡ ρ(·) does not depend on x. The chain is then uniformly ergodic if q(x)/ρ(x) is bounded from above on X, and is not even geometrically ergodic otherwise (Mengersen and Tweedie 1996). The condition that q(x) ≤ Cρ(x) on X is assumed henceforth.

Neither rejection sampling nor Metropolis-Hastings sampling requires the value of the normalizing constant Z. However, each algorithm involves accepting or rejecting observations from proposal distributions.
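As a concrete illustration of the rejection sampler and the estimators (1.1) and (1.2), consider the following minimal sketch (not from the article; the unnormalized target q, the Cauchy proposal ρ, the envelope constant C, and all function names are illustrative assumptions). The target is the standard normal kernel, so Z = √(2π) is known and can be used to check the RIS estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def q(x):
    """Unnormalized target: standard normal kernel, so Z = sqrt(2*pi)."""
    return np.exp(-0.5 * x**2)

def rho(x):
    """Proposal density: standard Cauchy."""
    return 1.0 / (np.pi * (1.0 + x**2))

# Envelope constant: q(x)/rho(x) = pi*(1 + x^2)*exp(-x^2/2) is maximized
# at x = +/-1, giving C = 2*pi*exp(-1/2), so q <= C*rho holds on all of X.
C = 2.0 * np.pi * np.exp(-0.5)

def rejection_sample(n):
    """Draw n iid observations from p(.) = q(.)/Z by von Neumann rejection."""
    out = []
    while len(out) < n:
        y = rng.standard_cauchy()
        u = rng.uniform()
        if u <= q(y) / (C * rho(y)):   # accept with probability q(y)/[C*rho(y)]
            out.append(y)
    return np.array(out)

x = rejection_sample(10_000)

# Crude Monte Carlo estimator (1.1) of E_p(phi), here with phi(x) = x^2.
phi = lambda t: t**2
cmc = phi(x).mean()                    # close to 1 for the N(0, 1) target

# Reciprocal importance sampling estimator (1.2) of Z, taking q1 = rho.
ris = 1.0 / np.mean(rho(x) / q(x))     # close to sqrt(2*pi) ~ 2.5066

print(cmc, ris)
```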
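The independence Metropolis-Hastings sampler admits an equally short sketch under the same illustrative target and proposal (again an assumption for exposition, reusing q, rho, and rng from the block above, not the article's code). Because ρ(·; x) ≡ ρ(·) here, the ratio β simplifies to q(y)ρ(x)/[q(x)ρ(y)], and since q(x)/ρ(x) ≤ C the resulting chain is uniformly ergodic.

```python
def imh_sample(n, x0=0.0):
    """Independence Metropolis-Hastings: the proposal rho(.) ignores the state."""
    chain = np.empty(n)
    x_curr = x0
    for t in range(n):
        y = rng.standard_cauchy()
        # beta(y; x) = [q(y) * rho(x)] / [q(x) * rho(y)] in the independence case
        beta = (q(y) * rho(x_curr)) / (q(x_curr) * rho(y))
        if rng.uniform() <= min(1.0, beta):
            x_curr = y                 # accept the proposal
        chain[t] = x_curr              # on rejection, repeat the current state
    return chain
```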
Acceptance or rejection depends on uniform random variables. By integrating out these uniform random variables, Casella and Robert (1996) proposed a Rao-Blackwellized estimator that has no greater variance than the crude Monte Carlo estimator, but they mostly disregarded the increase in computational time incurred by Rao-Blackwellization.
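The basic idea of integrating out the acceptance uniforms can be conveyed by a simplified sketch (continuing the block above and reusing q, rho, C, phi, and rng; this is a plain self-normalized importance-sampling illustration of the idea, not Casella and Robert's exact conditional estimator, whose weights account for the stopping rule). Each proposal yt contributes its conditional acceptance probability q(yt)/[Cρ(yt)], rather than a 0/1 acceptance indicator, so rejected proposals are no longer discarded.

```python
def rao_blackwell_style(n_proposals):
    """Weight every proposal by its conditional acceptance probability,
    w_t = q(y_t)/(C*rho(y_t)) = E[accept indicator | y_t], instead of
    keeping only the accepted draws; normalizing the weights yields a
    consistent estimator of E_p(phi)."""
    y = rng.standard_cauchy(n_proposals)
    w = q(y) / (C * rho(y))            # integrated-out acceptance step
    return np.sum(w * phi(y)) / np.sum(w)
```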