Monte Carlo Estimator



Monte Carlo Method

The Monte Carlo method was developed by John von Neumann, Enrico Fermi and other scientists working on the Manhattan Project in the early 1940s. It tackles analytically "complicated" mathematical problems, such as evaluating integrals, by drawing replications from known distributions.
Let:

\[ \psi =\int f(x) \space dx = \int m(x) \space \space p(x) \space dx = E[m(x)]\]

where \(p(x)\) is a density function (continuous case) or a probability mass function (discrete case) from which pseudo-random values can be drawn;

then:

\[ \hat \psi = \frac{\sum_r m(x_r)}{R} \]

i.e., \(\psi\) is estimated by the empirical mean of \(m\) over \(R\) draws \(x_1, \dots, x_R\) from \(p\).
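As a minimal sketch (the function names and the toy integral \(\int_0^1 e^x \, dx = e - 1\), with \(m(x) = e^x\) and \(p = \text{Uniform}(0,1)\), are illustrative choices, not part of the article), the plain estimator can be written as:

```python
import math
import random

def mc_estimate(m, sampler, R):
    """Plain Monte Carlo estimate: empirical mean of m over R draws from p."""
    return sum(m(sampler()) for _ in range(R)) / R

random.seed(0)
# psi = integral_0^1 e^x dx = e - 1, with m(x) = e^x and p = Uniform(0, 1)
psi_hat = mc_estimate(math.exp, random.random, R=100_000)
print(psi_hat)  # should be close to e - 1 ≈ 1.71828
```

Any \(m\) and any sampler for \(p\) can be plugged in; the accuracy improves as \(R\) grows.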

Properties of the Monte Carlo Estimator

  • Unbiasedness: \[E[\hat \psi] = R^{-1}\sum_r E[m(x_r)] = \int m(x) \space \space p(x) \space dx = \psi \]
  • Its variance is \[ Var(\hat\psi) = R^{-1} \space Var(m(x_r)) = R^{-1} \bigg\{ \int m^2(x) \space \space p(x) \space dx - \psi^2 \bigg\}\]
  • It is asymptotically Normal; by the Central Limit Theorem this yields approximate 95% confidence intervals of the form \(\bigg(\hat\psi \pm 1.96 \space \sqrt{\hat{Var}(\hat\psi)}\bigg)\)
  • The variance of the estimator can be estimated from the simulated values: \[ \hat{Var}(\hat\psi)= R^{-1} \space \bigg\{ (R-1)^{-1} \sum_r \bigg(m(x_r)- \hat\psi\bigg)^2 \bigg\} \]
  • By the strong law of large numbers, \(\hat \psi\) converges almost surely to \(\psi\)
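The variance estimate and confidence interval above can be sketched as follows (again on the illustrative integral \(\int_0^1 e^x \, dx = e - 1\); the variable names are our own):

```python
import math
import random

random.seed(1)
R = 100_000
# Draws of m(X) = e^X with X ~ Uniform(0, 1); true psi = e - 1
ys = [math.exp(random.random()) for _ in range(R)]

psi_hat = sum(ys) / R
# Estimated variance of the estimator: R^{-1} times the sample variance of m(x_r)
var_hat = sum((y - psi_hat) ** 2 for y in ys) / (R - 1) / R
# Approximate 95% confidence interval from the CLT
lo = psi_hat - 1.96 * math.sqrt(var_hat)
hi = psi_hat + 1.96 * math.sqrt(var_hat)
print(f"psi_hat = {psi_hat:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```

Note that the interval width shrinks like \(R^{-1/2}\): quadrupling \(R\) halves the width.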



Reducing the Variance of the MC Estimator

Control Variates Method

Suppose there exists a function \(n(x)\), correlated with \(m(x)\), whose expectation under the same density, \(E[n(x)] = \mu\), is known. Then a lower-variance estimator of \(\psi\) is:

\[\hat\psi_c = R^{-1} \sum_r \bigg( m(x_r) + c \space \space (n(x_r) - \mu) \bigg)\]

where \(c\) is chosen to minimize the variance: \[ c= - \frac{Cov(m(x),n(x))}{Var(n(x))} \] This estimator has the same expected value as the "classical" Monte Carlo estimator but lower variance; in fact:

  • The expected value results:\[ E[\hat\psi_c] = E\bigg[ R^{-1} \sum_r \bigg( m(x_r) + c \space \space (n(x_r) - \mu) \bigg) \bigg] = \\\\ = R^{-1} \space \space R \space \space \bigg( E[m(x)] + c \space \space E[n(x_r) - \mu] \bigg) = \\\\= E[m(x)] = \psi \]
  • Its variance is: \[ Var(\hat\psi_c)= R^{-1} \space \bigg\{ Var(m(x)) - \frac{Cov(m(x),n(x))^2}{Var(n(x))} \bigg\} \]
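The control variates scheme can be sketched on the same toy problem: here \(m(x) = e^x\) is the target and \(n(x) = x\) serves as the control, since its mean \(\mu = 1/2\) under Uniform(0, 1) is known exactly. The coefficient \(c\) is estimated from the sample covariances (all names below are illustrative, not from the article):

```python
import math
import random

random.seed(2)
R = 100_000
xs = [random.random() for _ in range(R)]
# m(x) = e^x (target), n(x) = x (control variate with known mean mu = 1/2)
ms = [math.exp(x) for x in xs]
ns = xs
mu = 0.5

mbar = sum(ms) / R
nbar = sum(ns) / R
cov_mn = sum((a - mbar) * (b - nbar) for a, b in zip(ms, ns)) / (R - 1)
var_n = sum((b - nbar) ** 2 for b in ns) / (R - 1)
c = -cov_mn / var_n  # variance-minimising coefficient

# Control-variate estimator: average of m(x_r) + c * (n(x_r) - mu)
psi_c = sum(a + c * (b - mu) for a, b in zip(ms, ns)) / R
print(psi_c)  # should be close to e - 1 ≈ 1.71828
```

Because \(e^x\) and \(x\) are strongly positively correlated on \([0,1]\), most of the variance of \(m\) is explained by the control, and the residual variance is a small fraction of the plain estimator's.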

Antithetic Variates Method

Suppose there exists a function \(n(x)\), negatively correlated with \(m(x)\), such that \(E[n(x)]=\psi\). Then a lower-variance estimator of \(\psi\) is:

\[\hat\psi_a = R^{-1} \sum_r \bigg( \frac{m(x_r) + n(x_r)}{2} \bigg)\]

This estimator has the same expected value as the "classical" Monte Carlo estimator but lower variance; in fact:

  • The expected value results:\[ E[\hat\psi_a] = E\bigg[R^{-1} \sum_r \bigg( \frac{m(x_r) + n(x_r)}{2} \bigg) \bigg] = \\\\ = R^{-1} \space \space R \space \space \bigg( \frac{2 \space \space E[m(x)]}{2} \bigg) = \\\\= E[m(x)] = \psi \]
  • Its variance is (when \(Var(n(x)) = Var(m(x))\), as in the standard antithetic construction): \[ Var(\hat\psi_a)= R^{-1} \space \frac{Var(m(x))+ Cov(m(x),n(x))}{2}\] Since the correlation is negative, the covariance is negative as well, so the variance is smaller than that of the classical estimator.
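A minimal sketch on the same toy integral: with \(U \sim \text{Uniform}(0,1)\), the pair \(m(U) = e^U\) and \(n(U) = e^{1-U}\) are negatively correlated and both have expectation \(\psi = e - 1\), so averaging each pair gives the antithetic estimator (names and example are our own):

```python
import math
import random

random.seed(3)
R = 100_000
# Antithetic pairs: for U ~ Uniform(0, 1), m(U) = e^U and n(U) = e^{1-U}
# are negatively correlated and both have expectation psi = e - 1.
total = 0.0
for _ in range(R):
    u = random.random()
    total += (math.exp(u) + math.exp(1 - u)) / 2
psi_a = total / R
print(psi_a)  # should be close to e - 1 ≈ 1.71828
```

Each pair reuses a single uniform draw, so the variance reduction comes for free: the negative covariance between \(e^U\) and \(e^{1-U}\) cancels most of the per-draw variability.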