The Monte Carlo Method
Written by Mike James
Thursday, 27 July 2017

## Pi Program

The program is fairly easy. First we set up a loop that will repeat the random "shot" at the target:

```
var num = 1000;
var total = 0;
var hit = 0;
for (var i = 1; i <= num; i++) {
```

Next we generate two random numbers x and y and work out how far this point is from the center of the circle at (0.5, 0.5):

```
 var x = Math.random();
 var y = Math.random();
 var r = Math.sqrt(Math.pow(x - 0.5, 2)
                 + Math.pow(y - 0.5, 2));
```

If the point is closer than 0.5 then we have a hit:

```
 total++;
 if (r < 0.5) hit++;
 Text1.value = hit / total * 4;
}
```

If you run the program (using Firefox in this case) for a range of values you get something like:

```
num       Pi
10        3.2
100       3.16
1000      3.11
10000     3.1512
100000    3.141
1000000   3.143372
```

As you can see, convergence isn't fast, but generating random numbers is fairly cheap.
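To experiment with the convergence it helps to wrap the shot-counting loop in a function. This is just a sketch; estimatePi is our own name and isn't part of the article's program:

```javascript
// Estimate Pi by throwing num random points at the unit square and
// counting how many land inside the inscribed circle of radius 0.5.
// estimatePi is our own helper name, not part of the article's page.
function estimatePi(num) {
  var hit = 0;
  for (var i = 0; i < num; i++) {
    var x = Math.random();
    var y = Math.random();
    // comparing the squared distance to 0.25 avoids the square root
    if (Math.pow(x - 0.5, 2) + Math.pow(y - 0.5, 2) < 0.25) hit++;
  }
  return hit / num * 4;
}
```

The error of such an estimate falls roughly as 1/√num, which is why each extra digit of accuracy costs about a hundred times more samples.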

The complete program as an HTML page is:

```
<!DOCTYPE html>
<html lang="en">
 <head>
  <meta charset="utf-8" />
  <title>Random Pi</title>
 </head>
 <body>
  <input type="text" id="Text1"/>
  <script>
   var num = 1000000;
   var total = 0;
   var hit = 0;
   for (var i = 1; i <= num; i++) {
    var x = Math.random();
    var y = Math.random();
    var r = Math.sqrt(Math.pow(x - 0.5, 2)
                    + Math.pow(y - 0.5, 2));
    total++;
    if (r < 0.5) hit++;
    Text1.value = hit / total * 4;
   }
  </script>
 </body>
</html>
```

## More Than Just Areas

You might be convinced that you can work out areas, volumes and even Pi using random numbers, but where next?

The answer is that you can estimate the results of just about any numerical computation using randomness in much the same way. The only problem is that explaining how it works would take us ever deeper into mathematics, so a simple example will have to do.

You can use the Monte Carlo method to solve linear equations like

`Ax=b `

where b is a known vector, A is a known matrix and x is the vector to be found.

Usually this problem is solved by inverting the matrix or a similar numerical method, but when the matrix is large finding the inverse is expensive. Again randomness comes to the rescue and it is possible to estimate the x vector in much the same way as the needle dropping estimated Pi.

The actual steps to get to the solution are complicated, but what about just working out the matrix product y = Ab, where A and b are known?

This isn't such a big problem, but it is a step on the way to solving Ax = b, and at first it doesn't seem to have anything at all to do with random numbers.

See if you can work out how to do it first.

## Working out the matrix product

To work out the matrix product we have to form sums involving the matrix elements a(i,j) and the vector elements b(j). To be precise:

`y(i)=Σj a(i,j)*b(j)`

or in other words

` y(i)=sum over all j of a(i,j)*b(j)`

This isn't the place to be teaching matrix algebra, but it comes down to multiplying the elements of row i of the matrix by the corresponding elements of the vector and adding them up - usually said as "row times column".
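For comparison with the random approach, the row-times-column rule can be coded directly. This is a minimal sketch using plain arrays; matVec is our own helper name, not taken from the article:

```javascript
// Direct matrix-vector product y = A*b via the row-times-column rule.
// matVec is our own helper name, not taken from the article.
function matVec(A, b) {
  var y = [];
  for (var i = 0; i < A.length; i++) {
    var sum = 0;
    for (var j = 0; j < b.length; j++) {
      sum += A[i][j] * b[j];
    }
    y.push(sum);
  }
  return y;
}

// The example matrix and vector used later in the article:
var A = [[0.5, 0.3, 0.2],
         [0.2, 0.1, 0.7]];
var b = [4.3, 6.2, 3.2];
var y = matVec(A, b); // close to [4.65, 3.72]
```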

Computing this entirely non-random value using random numbers seems a very unlikely prospect, but it is fairly easy.

For the moment let's suppose that all the values in the matrix are positive and less than one and that each row adds up to one. If this isn't the case then we can modify the matrix to make it true, and the solution for the modified matrix is simply related to the solution for the original.

This condition allows us to treat the rows as probabilities of something happening.
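How the modification might work isn't spelled out here, but one simple scheme - our own illustration, not the article's - is to divide each row by its sum s(i). Sampling with the scaled entries then gives a mean of y(i)/s(i), so multiplying the estimate back by s(i) recovers y(i):

```javascript
// Illustration (our own, not from the article): scale a row of positive
// entries by its sum so the entries become probabilities that sum to one.
var row = [1.0, 0.6, 0.4];   // positive entries, sum s = 2
var s = 0;
for (var j = 0; j < row.length; j++) s += row[j];
var probs = [];
for (var k = 0; k < row.length; k++) probs.push(row[k] / s);
// probs is now (0.5, 0.3, 0.2); a Monte Carlo mean computed with these
// probabilities estimates y(i)/s, so the final answer is s times the mean.
```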

Focus for a moment on a single row - its entries give you the probability of some event occurring.

To simulate these events all you do is set up a number of random number generators R(i), one for each row of the matrix. Each random number generator is set up so that it returns b(j), i.e. a value in the b vector, with probability a(i,j).

For example, if the matrix A was

```
A = (0.5 0.3 0.2)
    (0.2 0.1 0.7)
```

and the vector b was

```
    (4.3)
b = (6.2)
    (3.2)
```

Then for the first row of the matrix we would create a random number generator R(1) that implemented the distribution of the first row. That is, R(1) produces 4.3 with probability 0.5, 6.2 with probability 0.3 and 3.2 with probability 0.2. For the second row we would create a random number generator R(2) that produced 4.3 with probability 0.2, 6.2 with probability 0.1 and 3.2 with probability 0.7.

That is, each random number generator returns the elements of the vector with probabilities given by the rows of the matrix - this can be achieved using the simple methods described earlier.
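The "size of interval" idea extends to any row by accumulating the probabilities. A sketch of a generator factory - makeSampler is our own name, the article writes R1 and R2 by hand - that builds R(i) from row i of A and the vector b:

```javascript
// Build a generator that returns values[j] with probability probs[j],
// using the cumulative "size of interval" method.
// makeSampler is our own name; the article writes R1 and R2 by hand.
function makeSampler(probs, values) {
  return function () {
    var r = Math.random();
    var cum = 0;
    for (var j = 0; j < probs.length; j++) {
      cum += probs[j];
      if (r < cum) return values[j];
    }
    return values[values.length - 1]; // guard against rounding error
  };
}

var b = [4.3, 6.2, 3.2];
var R1 = makeSampler([0.5, 0.3, 0.2], b); // first row of A
var R2 = makeSampler([0.2, 0.1, 0.7], b); // second row of A
```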

Now we can forget about matrix multiplication and simply run the random number generators - one for each row. We run the simulation and work out the average value produced by each of the random number generators. The resulting vector of averages, one from each random number generator, estimates the vector we want, i.e. y.

In terms of our earlier example, once we have R(1) and R(2) we would run them and work out the average value each produced. These averages are an estimate of the result of multiplying the vector b by the matrix A.

That is:

```
y = (y(1)) = (mean of R(1)) = Ab
    (y(2))   (mean of R(2))
```

I don't know if you'll be impressed by this demonstration of how randomness can be used to get an estimate of an answer that seems to have nothing to do with chance - but you should be.

Let's look at a simple JavaScript program to implement this algorithm to see how easy it really is. First we need two random number generators, R1 and R2:

```
function R1() {
 var r = Math.random();
 if (r < 0.5) return 4.3;
 if (r < 0.5 + 0.3) return 6.2;
 return 3.2;
}
function R2() {
 var r = Math.random();
 if (r < 0.2) return 4.3;
 if (r < 0.2 + 0.1) return 6.2;
 return 3.2;
}
```

These use the "size of interval" method described in the introduction to generate random numbers with the specified probabilities. Now all we have to do is to write a loop that generates the appropriate random numbers using R1 and R2 and work out the mean:

```
var num = 100;
var r1mean = 0;
var r2mean = 0;
var total = 0;
for (var i = 1; i <= num; i++) {
 r1mean = r1mean + R1();
 r2mean = r2mean + R2();
 total++;
 Text1.value = r1mean / total;
 Text2.value = r2mean / total;
}
```

If you try this out, using Firefox say, you should get results something like:

```
n        R1      R2
100      4.669   3.63799
1000     4.6696  3.75449
10000    4.6552  3.71641
100000   4.6479  3.71656
1000000  4.6510  3.72015
```

which should be compared to the exact result, R1 = 4.65 and R2 = 3.72.

The complete program as a simple HTML page is:

```
<!DOCTYPE html>
<html lang="en">
 <head>
  <meta charset="utf-8" />
  <title>Matrix Multiply</title>
 </head>
 <body>
  <input type="text" id="Text1"/>
  <br/>
  <input type="text" id="Text2"/>
  <script>
   function R1() {
    var r = Math.random();
    if (r < 0.5) return 4.3;
    if (r < 0.5 + 0.3) return 6.2;
    return 3.2;
   }
   function R2() {
    var r = Math.random();
    if (r < 0.2) return 4.3;
    if (r < 0.2 + 0.1) return 6.2;
    return 3.2;
   }
   var num = 100;
   var r1mean = 0;
   var r2mean = 0;
   var total = 0;
   for (var i = 1; i <= num; i++) {
    r1mean = r1mean + R1();
    r2mean = r2mean + R2();
    total++;
    Text1.value = r1mean / total;
    Text2.value = r2mean / total;
   }
  </script>
 </body>
</html>
```

This is the beginning of a wide range of computing applications that use random numbers to get at answers that would otherwise be too difficult to compute exactly.

Who needs quantum computers when you have Monte Carlo to play with…



Last Updated ( Thursday, 14 March 2019 )