Prime Numbers And Primality Testing
Written by Mike James
Thursday, 27 June 2019

## Primes

Primes are special in the world of integers because they have no non-trivial factors. In fact, we have already made use of two primes in the locker number example.

Most numbers have many factors, and yet I claimed that 137703491 had exactly two non-trivial factors – the key and the locker number. The reason I could be so sure is that the key is a prime, i.e. it has no other factors, and the locker number was conveniently chosen to be another prime. I can generate similar puzzles by taking two primes and multiplying them together, safe in the knowledge that the result's only non-trivial factors are the two primes I started with.
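To see this on a small scale – using a pair of small primes picked purely for illustration, not the ones behind the article's 137703491 puzzle – a few lines of Python confirm that the product has no other non-trivial factors:

```python
# Two small primes chosen for illustration only.
p, q = 10007, 10009
n = p * q  # 100160063

# Find every non-trivial factor of n by trial division up to sqrt(n),
# then pair each small factor with its cofactor.
factors = [d for d in range(2, int(n**0.5) + 1) if n % d == 0]
factors += [n // d for d in factors if n // d not in factors]

print(sorted(factors))  # the only non-trivial factors are p and q
```

Run it and the only factors that turn up are the two primes you multiplied, which is exactly what makes such numbers good puzzles.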

Large prime numbers are used to generate cryptographic keys, and they have grown increasingly important for this very reason. However, they fascinated mathematicians for many years before they became famous in this context.

Interest in primes goes back to ancient times. For example, the Greek mathematician Euclid proved that there are infinitely many primes, and since then many questions have been asked about these mysterious numbers that have no factors: how many primes are smaller than a given upper limit, what is the probability that a number chosen at random has just two prime factors, and so on.

## Primality Testing

The question of most interest to cryptographers is very similar to the factoring problem discussed earlier: if I give you a number, can you tell me whether or not it is a prime? Clearly, to prove that it isn't a prime all you have to do is find a non-trivial value, i.e. not 1 or the number itself, that divides it exactly. On the other hand, to prove that it is a prime you have to test all of the possible candidate divisors and show that none of them actually divide it.
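A direct transcription of this idea is trial division – a sketch, not a practical method, stopping at the square root because a composite number must have a factor no bigger than its square root:

```python
def is_prime_trial(n):
    """Prove or disprove primality by trial division.

    Finding any divisor d with 1 < d < n disproves primality;
    proving primality means ruling out every candidate up to sqrt(n).
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:       # a composite n must have a factor <= sqrt(n)
        if n % d == 0:
            return False    # found a non-trivial divisor: n is composite
        d += 1
    return True             # no divisor found: n is prime

print(is_prime_trial(97))   # a prime
print(is_prime_trial(91))   # 7 x 13, composite
```

Notice the asymmetry: one lucky divisor disproves primality instantly, but proving primality means exhausting every candidate.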

Clearly primality testing is difficult. In fact, around 200 BC Eratosthenes developed the first primality testing algorithm, but it has the problem of being slow.

His method “The Sieve of Eratosthenes” works like this:

• Suppose you want to test n for primality
• Make a list of the numbers from 2 to n – 1 and begin by putting a circle around 2 and then crossing off all of its multiples
• Next move to 3, put a circle around it and cross off all its multiples, and so on
• When you have finished, and every number is either crossed off or circled, all you have to do is see if any of the circled numbers divide n
• If none of them do then n is a prime

This works because all of the circled numbers are primes and this is the smallest set of numbers we need to test to see if n is also prime.

This algorithm works but it’s slow and it gets slower as n gets bigger. It simply isn’t a practical method of testing primes.
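The steps above can be put into code – a straightforward rendering of the sieve, not an optimized version:

```python
def sieve_primes(limit):
    """Sieve of Eratosthenes: circle each prime, cross off its multiples."""
    crossed = [False] * (limit + 1)
    primes = []
    for p in range(2, limit + 1):
        if not crossed[p]:        # p was never crossed off: circle it
            primes.append(p)
            for multiple in range(p * p, limit + 1, p):
                crossed[multiple] = True
    return primes

def is_prime_by_sieve(n):
    """Test n by checking whether any circled (prime) number below it divides it."""
    if n < 2:
        return False
    return all(n % p != 0 for p in sieve_primes(n - 1))

print(sieve_primes(20))           # [2, 3, 5, 7, 11, 13, 17, 19]
print(is_prime_by_sieve(97))
```

The inner crossing-off loop is what makes it slow for a single large n: the work grows with n itself, not with the number of digits of n.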

The problem solved in 2002 is whether or not there is a quick method of testing whether a number is prime – to be more precise, whether primality testing can be done in polynomial time, in other words in a time that increases only like some power of the size of the number n, i.e. its number of digits.

Until the discovery of the new algorithm, all known deterministic primality tests took longer than polynomial time and hence weren't practical once n got sufficiently big.

The AKS algorithm, found by Agrawal, Kayal and Saxena, is most simply summed up by the title of their paper, "PRIMES is in P". The whole algorithm, which tests numbers for primality deterministically in polynomial time, can be written down in about 13 lines.

## Still Secure

At this point you might think that finding a fast method of testing primes would compromise Internet security.

This is not so but the reason it isn’t the case is interesting in itself.

The truth of the matter is that we have had fast procedures that test primality for some time – and they are faster than the new algorithm!

If this seems confusing, I should make clear that these algorithms are very different in nature: they are randomized algorithms, and they only guarantee that a number is a prime to any given level of certainty you care to specify. This is a very strange idea, and it takes us away from the pure and exact results of the sort of mathematics we have been considering so far, so an example might be a good idea.

Consider the problem of trying to prove that two very complicated formulae, f1 and f2, are in fact identical. It might well be that the algebra needed to reduce one to the same form as the other could take years, but working out a specific value is, by comparison, easy.

Suppose we pick a value x1 and work out the value of f1 using it. If we then worked out the value of f2 using x1 and found it was different, we would have proved that f1 and f2 are not identical. On the other hand, what would we have learned if they were the same? Nothing certain, but the chance that two different formulae give exactly the same answer for a number picked at random is small.

That probability can be made even smaller by repeating the check using a different number, x2, picked at random.

This is a randomized test of the identity of the two formulae, and by picking more and more numbers to test we can become increasingly certain that the formulae are identical.
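The procedure just described is easy to sketch. Here f1 and f2 are hypothetical stand-ins – a product form and its expansion, chosen so that they really are identical:

```python
import random

# Hypothetical pair of formulae: f1 is a product form, f2 its expansion.
def f1(x):
    return (x + 1) * (x - 1)

def f2(x):
    return x * x - 1

def probably_identical(f, g, trials=10):
    """Randomized identity test: one differing value disproves identity;
    repeated agreement makes identity increasingly likely, never certain."""
    for _ in range(trials):
        x = random.randrange(-10**6, 10**6)
        if f(x) != g(x):
            return False      # proof that f and g differ
    return True               # no counterexample found: probably identical

print(probably_identical(f1, f2))
```

Each extra trial shrinks the chance that two genuinely different formulae have slipped through by agreeing on every value tested.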

Of course this is not a proof but it is often good enough.
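The fast randomized primality tests mentioned earlier work on exactly this principle. The article doesn't name a specific one, but the Miller–Rabin test is a well-known representative: each round either proves the number composite or leaves it "probably prime", and every extra round shrinks the doubt:

```python
import random

def miller_rabin(n, rounds=20):
    """Randomized primality test (Miller-Rabin, shown as a representative
    of the randomized tests described in the text).

    Returns False only for composites; returns True for primes and, with
    probability at most 4**-rounds, for an unlucky composite.
    """
    if n < 2:
        return False
    for small in (2, 3, 5, 7):
        if n % small == 0:
            return n == small
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation: a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is definitely composite
    return True                   # probably prime

print(miller_rabin(97))
print(miller_rabin(121))          # 11 x 11, composite
```

Note the asymmetry again: a "composite" verdict is a proof, while a "prime" verdict is only a probability statement – which is exactly the strange idea the formulae example was meant to illustrate.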
