At this point you might think that the problem is that finding a fast method of testing primes would compromise Internet security.
This is not so, but the reason it isn't is interesting in itself.
The truth of the matter is that we have had fast procedures that test primality for some time – and they are faster than the new algorithm!
If this seems confusing, I should make clear that these algorithms are very different in kind: they are randomised algorithms, and they only guarantee that a number is prime to whatever level of certainty you care to specify. This is a very strange idea, and it takes us away from the pure and exact results of the sort of mathematics we have been considering so far, so an example might be a good idea.
Consider the problem of trying to prove that two very complicated formulae, f1 and f2, are in fact identical. It might well be that the algebra needed to reduce one to the same form as the other would take years, but working out a specific value is, by comparison, easy.
Suppose we pick a value x1 and work out the value of f1 using it. If we worked out the value of f2 using x1 and found it was different, then we would have proved that f1 and f2 are not identical. On the other hand, what would we have learned if they were the same? Nothing certain, but the chance that two different formulae give exactly the same answer for a number picked at random is small.
That probability can be made even smaller by repeating the check using a different number, x2, picked at random.
This is a randomised test of the identity of the two formulae, and by picking more and more numbers to test we can become increasingly certain that the formulae are identical.
Of course this is not a proof but it is often good enough.
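The idea can be sketched in a few lines of Python. The formulae f1 and f2 here are illustrative stand-ins of my own choosing – any pair of expressions would do:

```python
import random

# Two hypothetical formulae, written differently but algebraically
# identical: (x + 1)^3 = x^3 + 3x^2 + 3x + 1.
def f1(x):
    return (x + 1) ** 3

def f2(x):
    return x ** 3 + 3 * x ** 2 + 3 * x + 1

def probably_identical(f, g, trials=20, lo=-10**6, hi=10**6):
    """Randomised identity test: a single mismatch proves f and g
    differ; repeated agreement only makes identity likely."""
    for _ in range(trials):
        x = random.randint(lo, hi)
        if f(x) != g(x):
            return False   # certain: the formulae are not identical
    return True            # probably identical, never certain

print(probably_identical(f1, f2))                # True
print(probably_identical(f1, lambda x: x ** 3))  # False - a mismatch is found
```

Note the asymmetry: False is a certain disproof, while True is only ever probabilistic evidence.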
Pick a prime, any prime…
To return to primality testing.
You might think that the simplest way to create a randomised test that n is prime is to simply pick random values between 2 and n - 1 and see if they divide n.
The more you pick and fail to find a divisor the more certain you are that n is prime. While this is a randomised primality test it isn’t a particularly good one and there are much, much better ones, both in terms of the amount of work they require you to do and the level of certainty they provide.
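To see why this naive approach is weak, here it is as a Python sketch (the function name and trial count are my own):

```python
import random

def naive_random_primality(n, trials=50):
    """Naive randomised test: look for a divisor of n at random.
    Finding one proves n composite; never finding one only
    suggests n is prime, and rather weakly at that."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        d = random.randint(2, n - 1)
        if n % d == 0:
            return False   # certain: d divides n, so n is composite
    return True            # probably prime, with low confidence
```

If n is the product of two large primes, only a handful of the n - 2 candidate values actually divide n, so random picks will almost never find a divisor and the test passes composites far too easily.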
A composite number that nevertheless passes such a randomised test is referred to as a "pseudoprime".
Currently one of the best known and most used tests is Rabin's strong pseudoprimality test, and it is interesting enough to explain in more detail.
If the number we want to test is n, assumed odd, first we have to find d and s such that

n - 1 = 2^s × d

where d is odd (this is easy to do – just keep halving n - 1 until the result is odd – try it).
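Finding s and d really is just repeated halving; a small Python sketch (the function name is mine):

```python
def split_power_of_two(n):
    """Write n - 1 as 2**s * d with d odd, by repeated halving."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    return s, d

print(split_power_of_two(221))  # (2, 55): 220 = 2^2 * 55
print(split_power_of_two(97))   # (5, 3):   96 = 2^5 * 3
```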
Next we pick an x smaller than n and form the sequence:
x^d, x^(2d), x^(4d), x^(8d), … , x^(n-1)

all calculated modulo n.
Calculating modulo n simply means using "clock arithmetic", with the clock wrapping back round to zero after n - 1. For example, counting mod 6 goes 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0 and so on, as the clock wraps from 5 back to 0 again.
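In most programming languages clock arithmetic is just the remainder operator; in Python, for instance:

```python
# "Clock arithmetic" mod 6: Python's % operator wraps from 5 back to 0
print([i % 6 for i in range(14)])
# [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1]
```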
Once you have the sequence of values – called a “Miller-Rabin sequence” – you can be sure that if n is a prime the sequence ends in a 1.
How does this help with a randomised primality test?
Simple, just pick a value of x less than n and compute the Miller-Rabin sequence. If it doesn’t end in a 1, then the number is composite and not a prime. Notice that if it does end in a 1 then this doesn’t prove it is a prime because some composite numbers produce Miller-Rabin sequences that end in a 1.
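Here is a Python sketch of the whole procedure, using the simplified "ends in a 1" criterion described in the text (the full strong test also looks at how the sequence first reaches 1; the function names are mine):

```python
import random

def miller_rabin_sequence(n, x):
    """The sequence x^d, x^(2d), x^(4d), ..., x^(n-1), all mod n,
    where n - 1 = 2^s * d with d odd."""
    d, s = n - 1, 0
    while d % 2 == 0:        # find s and d by repeated halving
        d //= 2
        s += 1
    seq = [pow(x, d, n)]     # three-argument pow is x^d mod n
    for _ in range(s):
        seq.append(pow(seq[-1], 2, n))   # each term is the square of the last
    return seq

def probably_prime(n, rounds=20):
    """A sequence not ending in 1 proves n composite; passing every
    round only makes primality very likely, never certain."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        x = random.randint(2, n - 2)
        if miller_rabin_sequence(n, x)[-1] != 1:
            return False   # certain: n is composite
    return True            # probably prime
```

Note that the heavy lifting is done by modular exponentiation, which is fast even for very large n.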
The reason why this is such a good test is that if n isn't a prime, at least three-quarters of all the x values you could pick at random produce Miller-Rabin sequences that don't end in a 1.
So if you pick one value at random and it gives a sequence that ends in a 1, the probability that n is really a composite non-prime is at most one in four.
Pick two and the probability goes down to at most one in sixteen. If you pick M numbers and every one gives a sequence that ends in 1, the probability that you have a non-prime is at most (1/4)^M.
For example, if you get sequences that end in 1 with 100 random values, then the probability that you have a non-prime number is at most (1/4)^100, which is less than 1/10^60 – so exceedingly unlikely that you are safe in concluding the number is indeed prime.
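The arithmetic behind these figures can be checked exactly with Python's Fraction type:

```python
from fractions import Fraction

print(float(Fraction(1, 4) ** 1))    # 0.25   - one passing round
print(float(Fraction(1, 4) ** 2))    # 0.0625 - two passing rounds

# After 100 passing rounds the bound drops below 1/10^60:
print(Fraction(1, 4) ** 100 < Fraction(1, 10 ** 60))   # True
```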
Such is the strange reasoning behind randomised testing for primes, and many other randomised algorithms. They never produce certainty, but what is the practical difference between a probability of 1 in 10^60 and a certain proof?
There is more chance of a machine malfunction giving you the wrong answer while computing a certain proof of primality than of the strong pseudoprimality test failing!
So if knowing that a number is probably prime is almost as good as knowing that it certainly is, what's the fuss about?
There is still the philosophical point that a mathematical proof reveals a deeper truth than some random chance that something is true.
What is more surprising is that the proof is so very simple and that it has remained undiscovered for so long.
Now consider what this means for the rest of the foundation of our cryptographic security. We assume that because something is difficult, and because mathematicians have been working on the problem for years, centuries even, the problem must really be difficult – intrinsically difficult. This is the assumption behind the factoring problem that many code systems rely on.
Suppose now that the next problem Agrawal, Kayal, and Saxena, or some similar group, turn their attention to is factoring. Perhaps what was always assumed to be difficult will turn out not to be, if a sufficiently fresh approach is taken to the problem.
Perhaps this has already happened.
Answer to the factor problem: 2760727302559 and 2760727302517.
What has modular arithmetic got to do with the real world? The answer any experienced programmer should give you is "a lot". Not only is it the basis for many an algorithm, it is part of the hardware. [ ... ]