On this page, we discuss the **Monte-Carlo method** for numerical quadrature (i.e. the numerical evaluation of integrals). The Monte-Carlo method differs substantially from the standard quadrature methods described in [[Vectorized numerical quadrature]]: rather than evaluating the integrand on a fixed grid of points, it samples the integrand at random points. It has applications in many areas of research, and it is well worth understanding its basics, which is what we will go through here.
## Basics of Monte-Carlo
In the one-dimensional case (that is, for a function of a single variable), the Monte-Carlo method relies on the fact that:
$
\int_{a}^b f(x) dx = \lim_{ N \to \infty } \frac{b-a}{N} \sum_{k = 1}^N f(\overline x_{k})
$
where $\overline x_k \in [a, b]$ is a randomly-sampled point in the interval of integration, and $N$ is the total number of points sampled. Note that this may also be written in the form:
$
\int_{a}^b f(x) dx = (b-a)E[f], \quad E[f] = \lim_{ N \to \infty } \frac{1}{N} \sum_{k = 1}^N f(\overline x_{k})
$
where $E[f]$ is the **expectation value** (average value) of the function $f(x)$ over the same interval. The general proof requires an understanding of statistics (specifically the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers)). For a more detailed description of the mathematics, please see [this article](https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/monte-carlo-methods-mathematical-foundations/quick-introduction-to-monte-carlo-methods.html); while it is specifically about Monte-Carlo integration in computer graphics, it describes the same principle. Here, however, we will focus purely on the *application* of the theory rather than its theoretical foundations. To actually use the Monte-Carlo method in practice, we of course cannot sample infinitely many points, so we let $N$ be a large but finite number. The exact result then becomes an approximate one, given by:
$
\int_{a}^b f(x) dx \approx \frac{b-a}{N} \sum_{k = 1}^N f(\overline x_{k})
$
As a demonstrative example, consider the following integral:
$
I = \int_{0}^\pi \sin(x) dx
$
The analytical solution, of course, is $I = 2$. We may compare the exact result with the *approximate value* calculated with the Monte-Carlo method, as shown in the Python code sample below:
```python
import numpy as np
from numpy.random import default_rng
a = 0
b = np.pi
N = 100
# function to integrate
# can be changed to any function desired
f = lambda x: np.sin(x)
# analytical solution
# set this to None if it doesn't exist
exact_solution = 2.0
def monte_carlo_1d(f, a, b, N, random_seed=2):
    """
    Implements the basic Monte-Carlo
    quadrature method in 1D.
    Note: a seed must be provided for
    reproducible results.
    """
    E = 0 # running sum for the expectation value
    # random number generator
    rng = default_rng(seed=random_seed)
    for k in range(N): # sample exactly N points, not N - 1
        # draw a random x-value from [a, b]
        xk = rng.uniform(a, b)
        E += f(xk)
    E /= N # average of the N sampled values
    return (b - a)*E
```
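Since NumPy can evaluate `f` over a whole array of samples at once, the sampling loop can also be vectorized. Here is a minimal sketch, reusing the imports above (the name `monte_carlo_1d_vectorized` is our own):
```python
def monte_carlo_1d_vectorized(f, a, b, N, random_seed=2):
    # same estimator as above, but drawing all N samples at once
    rng = default_rng(seed=random_seed)
    xk = rng.uniform(a, b, size=N)
    return (b - a)*np.mean(f(xk))
```
This computes the same kind of estimate while avoiding the Python-level loop, which matters once $N$ grows large.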
## Why Monte-Carlo integration?
One nice advantage of the Monte-Carlo method is that it works for *any* (integrable) function. We do not need to require that the function take a specific form; the Monte-Carlo method *just works*, although, since the law of large numbers only guarantees convergence in the limit, it does require a fairly large number of randomly-sampled points to achieve an accurate result.
As a crude demonstration, consider the Python code shown earlier. We can perform a rough accuracy test by adding the following code:
```python
N_values = [5, 10, 50, 100, 500, 1000, 10000, 100000]
for sample in N_values:
    result = monte_carlo_1d(f, a, b, sample)
    print(f"\nN = {sample}")
    print(f"Approximated value of integral: {result:.3f}")
    if exact_solution is not None:
        abs_err = abs(result - exact_solution)
        rel_percent_err = 100*abs_err/abs(exact_solution)
        print(f"Absolute error: {abs_err:.3f} ({rel_percent_err:.1f}%)")
```
Again, this is plain Python with no attempt at rigorous benchmarking, but the results (shown below) nonetheless show a clear trend.
| $N$ | Monte-Carlo result | Relative error |
| ------- | ------------------ | --------------- |
| 5 | 1.492 | 25.4% |
| 10 | 1.749 | 12.5% |
| 50 | 2.157 | 7.9% |
| 100 | 2.005 | 0.2% |
| 500 | 1.975 | 1.3% |
| 1,000 | 2.005 | 0.3% |
| 10,000 | 1.981 | 1.0% |
| 100,000 | 1.997 | 0.1% |
_Note: the error oscillates slightly since Monte-Carlo methods are by nature stochastic (relying on randomness), so their convergence is an overall trend but occurs unevenly, with overshoots and undershoots common._
But by far the biggest advantage of Monte-Carlo quadrature is how it scales with dimension. The statistical error of a Monte-Carlo estimate shrinks as $O(N^{-1/2})$ in the number of samples $N$ *regardless of the dimension* of the integral, whereas a standard grid-based quadrature method with $n$ points per axis requires $n^d$ function evaluations in $d$ dimensions. The cost of standard methods at fixed accuracy therefore grows exponentially with dimension (the *curse of dimensionality*), and they become intolerably slow for anything beyond one or two dimensions. This means that the Monte-Carlo method is **vastly faster** for evaluating higher-dimensional integrals. For integrals of functions with a large number of variables (such as the [rendering equation](https://en.wikipedia.org/wiki/Rendering_equation) in computer graphics), the Monte-Carlo method has such an edge in performance that it becomes the *only* practical method of numerical integration.
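As a quick empirical check of the $O(N^{-1/2})$ rate, we can average the absolute error over many independent runs and watch it roughly halve each time $N$ quadruples. This sketch reuses the `monte_carlo_1d` function from earlier; the number of runs and the seeds are arbitrary choices:
```python
# average |error| over 200 independent runs for each N; by the
# O(N^(-1/2)) scaling, quadrupling N should roughly halve the error
for N in (100, 400, 1600, 6400):
    errors = [abs(monte_carlo_1d(f, a, b, N, random_seed=s) - exact_solution)
              for s in range(200)]
    print(f"N = {N:5d}, mean |error| = {np.mean(errors):.4f}")
```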
## The Monte-Carlo method in higher dimensions
We may generalize the Monte-Carlo method to any number of dimensions relatively easily. The generalized formula in $n$ dimensions is given by:
$
\int_\Omega f(\mathbf{x}) dV = \lim_{ N \to \infty } \frac{V}{N} \sum_{k = 1}^N f(\overline{\mathbf{x}}_{k})
$
where $\Omega$ is the domain of integration, $V$ is the volume of the space contained inside $\Omega$, $\mathbf{x} = (x, y, z, \dots)$ is now a set of $n$ coordinates for a function of $n$ variables $f(\mathbf{x})$, and lastly, $\overline{\mathbf{x}}_{k} \in \Omega$ is a randomly-sampled point in the domain. This allows us to perform integration in polar, cylindrical, and spherical coordinates just as easily as in Cartesian coordinates, so long as we sample from a uniform distribution of points over the domain.
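As a minimal sketch of this generalization, restricted to a rectangular box for simplicity (the function `monte_carlo_nd` and its interface are our own illustrative choices):
```python
import numpy as np
from numpy.random import default_rng

def monte_carlo_nd(f, bounds, N, random_seed=2):
    """
    Basic Monte-Carlo quadrature over an n-dimensional box.
    bounds is a sequence of (lower, upper) pairs, one per dimension.
    f must accept an array of shape (N, n) and return shape (N,).
    """
    rng = default_rng(seed=random_seed)
    lo = np.array([lower for lower, upper in bounds])
    hi = np.array([upper for lower, upper in bounds])
    V = np.prod(hi - lo) # volume of the box
    # N random points, uniformly distributed inside the box
    x = rng.uniform(lo, hi, size=(N, len(bounds)))
    return V*np.mean(f(x))

# example: integrate f(x, y) = x^2 + y^2 over the unit square,
# whose exact value is 2/3
f2 = lambda x: x[:, 0]**2 + x[:, 1]**2
print(monte_carlo_nd(f2, [(0, 1), (0, 1)], N=100_000))
```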
## Enhanced Monte-Carlo methods
So far we have given only a very basic description of Monte-Carlo methods, but there is far more to study about them. One of the most important topics we didn't mention is **importance sampling**: a collection of techniques that improve the rate of convergence by drawing samples from a non-uniform distribution, sampling the regions that contribute most to the integral more often than others, which also makes it easier to handle non-uniform domains. For a guide to importance sampling, please see [this article](https://lisyarus.github.io/blog/posts/multiple-importance-sampling.html). The computer graphics community in particular has developed extensive techniques for optimizing Monte-Carlo integration, including [next-event estimation](https://chunky-dev.github.io/docs/user_guides/introduction/next_event_estimation/) and [Russian roulette](https://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/Russian_Roulette_and_Splitting); for a general overview of Monte-Carlo methods as applied in computer graphics, see [this article](https://alain.xyz/blog/real-time-ray-tracing).
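To give a flavor of the idea, here is a minimal importance-sampling sketch for the earlier $\int_0^\pi \sin(x) dx$ example (the triangular sampling density and all function names here are our own illustrative choices, not an optimal or standard scheme): samples are drawn from a non-uniform density $p(x)$, and each sample of $f$ is weighted by $1/p(x)$ so the estimate remains unbiased.
```python
import numpy as np
from numpy.random import default_rng

def importance_sampled_integral(f, pdf, sampler, N, random_seed=2):
    """
    Monte-Carlo integration with importance sampling: draw samples
    from a non-uniform density and weight each one by 1/pdf, so that
    the expectation of f(X)/p(X) equals the integral of f.
    """
    rng = default_rng(seed=random_seed)
    x = sampler(rng, N) # N samples from the chosen distribution
    return np.mean(f(x)/pdf(x))

# triangular density on [0, pi] peaked at pi/2, roughly matching
# the shape of sin(x) so that important regions are sampled more
pdf = lambda x: np.where(x <= np.pi/2, 4*x/np.pi**2, 4*(np.pi - x)/np.pi**2)
sampler = lambda rng, N: rng.triangular(0, np.pi/2, np.pi, size=N)

print(importance_sampled_integral(np.sin, pdf, sampler, N=1000)) # ~2.0
```
Because the sampling density roughly matches the shape of the integrand, the weighted samples vary far less than uniform ones do, so the estimate converges with fewer samples.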