The main trouble with fuzzy logic and similar theories is that they don't give you any sort of absolute way of interpreting levels of truth or belief.
For example, is my .7 certainty the same as your .7 certainty? In situations such as this most people turn to the subject of probability for an answer.
The most favoured theory of probability (yes there is more than one!) relates all probabilities to physical events. If you say that the probability of a coin landing heads is .5 this means that roughly half of all the landings come up heads. You can check that the probability is .5 by doing an experiment and counting the proportion of heads that you get. In this sense probabilities are objective and measurable.
Saying that the probability of a coin landing heads is .5 is very close to saying that your belief or certainty that the coin will land heads is .5 but in fact there is a world of difference.
If you equate probabilities with beliefs then you run into difficulties very quickly.
For example, what does my estimate of .8 for the probability of there being life on other planets mean? It certainly doesn't mean that I expect 80% of all planets to have life on them. If it means that the probability of finding life anywhere in the universe is .8 then in what sense can I repeat this event so that 80% of the time there is life and 20% of the time there isn't! You can go on to imagine parallel universes in 80% of which life develops on other planets but this is hardly a measurement that you could make in the same way as tossing a coin.
In short, the exact theory of probability doesn't really apply to estimates of beliefs or certainties, and once you realise this fact you might as well admit that fuzzy logic has just as much claim to be correct as probability theory.
This doesn't mean to say that there aren't applications within AI where probability theory applies. Many an expert system contains rules where the conclusion doesn't always follow from the conditions. For example, I may have noticed that a particular set of symptoms goes with a particular fault with a probability of .8:
IF symptoms THEN
fault=X with a probability of .8
This is a reasonable use of probability because I can measure the number of times the symptoms are associated with the fault - this isn't a question of belief or even opinion. In cases such as these you should use the laws of probability to work out final probabilities.
In practice this usually turns out to be far too difficult. For example, if you have a rule
IF A THEN B with a probability of .9
and you have concluded A, as the result of another rule, with a probability of .8 what probability do you assign to B? It turns out that using strict probability theory it is very difficult to say what the probability of B is. The difficulty is caused by needing to know lots of conditional or joint probabilities, not by anything deep in the theory. It is simply that you usually can't gather enough data to work out the probability of B if A is also uncertain.
What most expert systems do in this case is to multiply the two factors together, giving .72, and so abandon any interpretation that the figures have as probabilities. In the same way probabilities in rules such as
IF A AND B THEN C with probability P
are combined by taking the minimum of the probabilities of A and B. You might see the connection with fuzzy logic but this has very little to do with probability theory.
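In Prolog-like terms this ad hoc scheme can be sketched as follows. The predicate names and the particular factors here are illustrative, not taken from any real system, and the min function inside is is supported by most Prologs (e.g. SWI-Prolog):

```prolog
% A fact with an attached certainty factor between 0 and 1.
fact(a, 0.8).

% IF A THEN B with a factor of .9: chained rules are
% combined by multiplying the factors together.
conclude(b, F) :-
    fact(a, FA),
    F is 0.9 * FA.              % 0.9 * 0.8 gives 0.72

% IF A AND B THEN C with a factor of .95: the factors of
% the conditions are combined by taking their minimum.
conclude(c, F) :-
    fact(a, FA),
    conclude(b, FB),
    F is 0.95 * min(FA, FB).
```

Whatever the arithmetic, the resulting .72 is no longer a probability in any measurable sense; it is just a convenient number.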
Fuzzy factors in Prolog
If you do decide that you want to include fuzzy logic or probability in Prolog programs then you don't have to worry that Prolog is based on traditional logic.
All you have to do is include a numerical value in each clause giving the clause's truth value, confidence factor or probability depending on the system that you are trying to implement.
The definition of each clause should also include a predicate that calculates the new truth value as the clause is proved.
For example, if you want to work with fuzzy logic then you might use something like:
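A clause along these lines (the predicate names are purely illustrative):

```prolog
fault(fuel_system, F) :-
    battery_ok(F1),
    tank_full(F2),
    max(F1, F2, F).

% max/3 sets F to the larger of F1 and F2.
max(X, Y, X) :- X >= Y.
max(X, Y, Y) :- Y > X.
```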
where F1 is the truth value of 'is the battery OK', F2 is the truth value of 'tank full' and max sets F to the maximum of F1 and F2, so giving the truth value of 'fault=fuel_system'.
Notice that this isn't quite a full fuzzy logic reasoning system because using this method no clause would fail and so there would be no backtracking. The first clause tested for fault would return true with a truth value that might or might not be the largest possible. To enforce backtracking you either have to set a threshold at the end of each clause - e.g. a test such as F>.9 - or use the findall predicate to return all of the possible faults with their associated truth values.
In practice a combination of both methods is necessary because without failing clauses with small truth values findall would return far too many potential solutions for a normal computer to cope with.
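Putting the two methods together gives something like the following sketch, again with made-up predicates and truth values:

```prolog
battery_ok(0.95).
tank_full(0.3).

% The threshold test F > 0.9 makes a clause with a small
% truth value fail, so backtracking works as usual.
fault(fuel_system, F) :-
    battery_ok(F1),
    tank_full(F2),
    max(F1, F2, F),
    F > 0.9.

% max/3 sets F to the larger of F1 and F2.
max(X, Y, X) :- X >= Y.
max(X, Y, Y) :- Y > X.

% findall/3 collects every fault that survives the
% threshold, paired with its truth value.
all_faults(Faults) :-
    findall(Fault-F, fault(Fault, F), Faults).
```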
What all this means is that there is no good or accepted method of dealing with uncertainty, beliefs or any sort of vagueness within rules.
Currently expert systems adopt any method that seems to give reasonable results. Most of them call their measures of certainty 'confidence factors' or something other than probability. However, as most confidence factors range between 0 and 1, or 0 and 100, it is difficult for an innocent user not to be tempted into thinking that they are probabilities.
At the end of a consultation an expert system may give you a conclusion with a confidence factor of 80% but whatever this means it doesn't mean that it expects to be correct 80% of the time in giving this diagnosis on the basis of the facts that have been supplied. Confidence factors are not probabilities and the best you can do is come to some feeling that the conclusion has a high, low or medium confidence factor.
We clearly have a long way to go when it comes to building computer programs that reason like human experts.