I was looking for some examples of Monte-Carlo methods, as the only one I had concretely in mind was the sausage method for pi. Applications? Not many.

I ran into a way to solve the (say, 2D) Laplace equation with Dirichlet boundary conditions. Writing out the zero-Laplacian condition on a square lattice can be read as saying that random walks there are isotropic in every direction. I think the method was derived by contemplating these two facts:
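To make that reading concrete: on a square lattice with spacing h, the standard five-point discretization of the Laplacian gives

```latex
(\Delta_h u)_{i,j} \;=\; \frac{u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}-4\,u_{i,j}}{h^2} \;=\; 0
\quad\Longleftrightarrow\quad
u_{i,j} \;=\; \tfrac{1}{4}\bigl(u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}\bigr)
```

so the solution at each interior point is the average of its four neighbors, which is exactly the expected value of u after one step of an isotropic random walk.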

  • In the special case of boundary conditions on a sphere (or a square in a square lattice, not sure; at any rate, a “ball”), if you start an isotropic random walk from the center and keep going until you hit the boundary, you hit every boundary point with equal probability.

  • The value of a harmonic function at the center of a sphere equals its average value over the boundary.

The method works like this:

  • To calculate the value of the solution at a point, simulate a lot of random walks starting from that point, terminating each one when it hits the boundary, and record the boundary function’s value at that hitting point.

  • The sought value is the average of those values.
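The two steps above can be sketched in Python. Everything here (the n × n grid, the `boundary` data with value 1 on the top edge and 0 elsewhere) is my own toy setup for illustration:

```python
import random

def boundary(i, j, n):
    # Made-up Dirichlet data: 1 on the top edge, 0 on the other three edges.
    return 1.0 if j == n - 1 else 0.0

def walk_estimate(i0, j0, n, walks=20000, rng=None):
    """Estimate the harmonic function at interior grid point (i0, j0)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(walks):
        i, j = i0, j0
        # Isotropic walk: each of the four neighbors with probability 1/4,
        # until the walk leaves the interior of the n x n grid.
        while 0 < i < n - 1 and 0 < j < n - 1:
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j, n)
    return total / walks
```

By symmetry, a walk from the center of the square hits each of the four edges with probability 1/4, so for this boundary data the estimate at the center should hover around 0.25.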

Now, this rings some bells, as I’ve been keeping track of some related observations:

  • The discrete Laplacian on a large square lattice is, up to sign and a diagonal shift, the lattice’s adjacency matrix.
  • The Hamiltonian of a free quantum particle is just a Laplacian (up to constants).
  • Electrical circuits are related to random walks on graphs, where conductances (inverse resistances) determine the Laplacian/adjacency matrix.
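The circuits observation can be made concrete with a random walk whose step probabilities are proportional to conductances; node voltages are then the expected fixed voltage at the walk’s hitting point. The tiny 3-node resistor network below is a made-up example:

```python
import random

# Hypothetical example network: edge conductances and fixed (Dirichlet) voltages.
conductance = {(0, 1): 1.0, (1, 2): 2.0}
fixed = {0: 0.0, 2: 1.0}

def neighbors(node):
    out = []
    for (a, b), c in conductance.items():
        if a == node:
            out.append((b, c))
        elif b == node:
            out.append((a, c))
    return out

def voltage(node, walks=20000, rng=None):
    """Estimate the voltage at a free node by random walks to fixed nodes."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(walks):
        v = node
        while v not in fixed:
            nbrs = neighbors(v)
            # Step to a neighbor with probability proportional to conductance.
            r = rng.random() * sum(c for _, c in nbrs)
            for w, c in nbrs:
                r -= c
                if r <= 0:
                    v = w
                    break
        total += fixed[v]
    return total / walks
```

Kirchhoff’s current law at node 1 gives the exact voltage (1·0 + 2·1)/(1 + 2) = 2/3, and the walk estimate should converge to the same value.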

How does the method generalize? To non-homogeneous equations, different differential operators, …

Edit: I found how! The Feynman-Kac formula relates solutions of parabolic PDEs to expectations of functionals of Wiener processes.
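A minimal sketch of the simplest Feynman-Kac case (the heat equation, no potential term; my own illustration): for u_t = ½ u_xx with initial data u(0, x) = f(x), the solution is u(t, x) = E[f(x + W_t)], where W_t is a Wiener process, i.e. W_t ~ Normal(0, t).

```python
import math
import random

def heat_mc(f, t, x, samples=100000, rng=None):
    """Monte-Carlo estimate of u(t, x) = E[f(x + W_t)] for u_t = (1/2) u_xx."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        total += f(x + rng.gauss(0.0, math.sqrt(t)))  # sample W_t ~ N(0, t)
    return total / samples
```

For f(x) = x², the exact solution is u(t, x) = x² + t, which gives an easy sanity check on the estimator.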