Fuzzy Logic And Uncertainty In AI
Written by Mike James
Friday, 20 August 2021
Things get very messy when you move away from mathematically founded theories like probability. What does it mean to say that you are 70% sure of something? Can you create a theory of the credible versus the unlikely that lets programs reason like we do? Perhaps.
AI is a tough problem to crack. Even when you think you have a solution all it takes is a little exposure to the real world to demonstrate how far we have to go.
The best way to start is to simplify things by considering a much simpler and artificial version of the problem. For example, in the study of machine vision the problem can be reduced by working in the 'blocks world' - a table top collection of simple shapes.
Previous articles have looked at human reasoning and thinking, in particular at logic and expert systems. Now we broaden our horizons and take a look at the elements of human reasoning that have been ignored so far. It is almost certain that most of these 'real' aspects of reasoning are going to be beyond the methods of simple logic.
Most of the problems that arise in applying logic to humans stem from the fact that humans tend to believe rather than to know.
Logic assumes that once a fact is added to the list of known facts it stays added - truth is eternal!
In real life, however, facts tend to move onto and off the 'true' list as the evidence changes. You could say that the trouble with logic is that it doesn't have any sense of time or change.
There is also the problem that you may not be willing to state that something was true, just that it was likely or unlikely.
In this sense the status of facts can be rather vaguer than the two categories, true and false, of traditional logic allow for.
In other words, the real world just isn't run by pure logic.
There are two different ways of extending logic so that it can take account of the muddle of the real world.
You can admit that the facts do change their status from true to false and vice versa. This approach gives rise to what is generally referred to as 'non-monotonic logic'.
This may sound grand but monotonic means steadily increasing or steadily decreasing so non-monotonic simply means that the pool of known facts can both increase and decrease during the course of reasoning.
Non-monotonic logic allows for the vagueness of real life by allowing facts to change their status from true to false.
The alternative approach is to abandon the simple classification of facts into true and false and allow shades of meaning between true and false.
This approach is generally referred to as 'multi-valued logic'.
There are a great many varieties of both non-monotonic logic and multi-valued logic that have been constructed to deal well with particular cases. This is a reflection of the fact that, at the moment, no one really knows what sort of method is needed to deal with all of the vague reasoning techniques that humans employ.
I suppose you could say that we are a bit vague about how to be vague!
One of the biggest headaches in constructing an expert system is what to do if something is unknown.
Suppose that you have a rule in a knowledge base concerned with car fault diagnosis that depends on whether there is gas in the tank, and you know that the engine doesn't fire.
The next question to ask the user is
"Is there gas in the tank?"
In 99% of the cases the answer that you will get back is something equivalent to
"I don't know but I think so."
Given an answer of this sort what can you conclude from the rule?
In most cases, unless there is evidence to the contrary, you can assume that there is gas in the tank. In other words, we can use a default rule that says:
IF 'gas in tank' is unknown THEN assume 'gas in tank' is true
In practice a great many of the rules in a knowledge base should be concerned with what to assume if facts are unknown.
This sort of reasoning is non-monotonic because if at a later time the fact does become known or is deducible from what is known then the conclusion that you have reached has to be withdrawn.
In practice most expert systems do allow you to deal with the unknown by adding appropriate rules but they cannot retract a conclusion.
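The behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not the mechanism of any real expert-system shell, and all the names in it are made up for the example: a fact held true only by default is retracted the moment hard evidence arrives.

```python
# Minimal sketch of default reasoning with retraction (non-monotonic).
# All class and fact names here are illustrative, not from a real system.

class KnowledgeBase:
    def __init__(self):
        self.known = {}       # fact -> True/False, as asserted by the user
        self.assumed = set()  # facts currently held true only by default

    def status(self, fact):
        """Return True, False, or None (unknown)."""
        if fact in self.known:
            return self.known[fact]
        return True if fact in self.assumed else None

    def assume_default(self, fact):
        """Assume a fact true unless there is evidence to the contrary."""
        if fact not in self.known:
            self.assumed.add(fact)

    def assert_fact(self, fact, value):
        """Hard evidence arrives: record it and retract any clashing default."""
        self.known[fact] = value
        self.assumed.discard(fact)  # the default conclusion is withdrawn

kb = KnowledgeBase()
kb.assume_default("gas in tank")
print(kb.status("gas in tank"))       # True, but only by default
kb.assert_fact("gas in tank", False)  # the user checks: the tank is empty
print(kb.status("gas in tank"))       # False - the default was retracted
```

The key point is the last two lines: a conclusion reached by default is withdrawn when a real fact displaces it, which is exactly what makes the reasoning non-monotonic.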
Other types of default reasoning are very similar in that they all involve drawing conclusions that are by strict logic unwarranted.
For example, if you are using proof by resolution and you discover that you cannot prove something then you cannot conclude that it is false.
It is one of the asymmetries of logic that failure to prove true doesn't in any way imply that something is false.
You might not be able to prove something simply because you don't have all the necessary facts. The category 'unproven' is often added to 'true' and 'false' to produce a three-valued logic.
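A three-valued logic of this sort can be made concrete with Kleene's 'strong' connectives, using None to stand for 'unproven'. This is just one common choice of truth tables; other three-valued logics define the connectives differently.

```python
# A small three-valued logic: True, False, and None standing for 'unproven'.
# These are Kleene's strong connectives; other three-valued logics differ.

def and3(a, b):
    if a is False or b is False:
        return False   # one false conjunct settles the conjunction
    if a is True and b is True:
        return True
    return None        # otherwise the result is itself unproven

def or3(a, b):
    if a is True or b is True:
        return True    # one true disjunct settles the disjunction
    if a is False and b is False:
        return False
    return None

def not3(a):
    return None if a is None else not a
```

Notice how 'unproven' propagates only when it matters: `or3(True, None)` is True because one true disjunct is enough, while `and3(True, None)` stays None.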
An alternative approach is to allocate a default reasoning rule something like:
IF you cannot prove A THEN conclude B
Once again this produces a non-monotonic logic.
An extension of this idea of drawing conclusions when there is a lack of proof is to give up trying to prove something after a given amount of time. There are cases where the time taken to exhaustively search a collection of facts is so great as to be impractical. In these cases it is reasonable to suppose that if a proof hasn't turned up in a given time then one isn't very likely.
The default reasoning for this is:
IF you cannot prove A in time T THEN conclude B
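Both default rules, with and without the time limit, can be sketched with a tiny backward-chaining prover. The rules and facts below are invented for the example, and the time budget is handled crudely by checking a deadline on each recursive call:

```python
import time

# Sketch of 'IF you cannot prove A in time T THEN conclude B'.
# prove() does an exhaustive backward-chaining search over simple rules
# of the form conclusion <- [premises]; all names are illustrative.

RULES = {
    "engine fires": [["gas in tank", "spark present"]],
}
FACTS = {"gas in tank"}

def prove(goal, deadline):
    if time.monotonic() > deadline:
        raise TimeoutError
    if goal in FACTS:
        return True
    for premises in RULES.get(goal, []):
        if all(prove(p, deadline) for p in premises):
            return True
    return False  # exhausted the rules without finding a proof

def default_conclude(a, b, budget=0.1):
    """IF you cannot prove A in time T THEN conclude B."""
    deadline = time.monotonic() + budget
    try:
        proved = prove(a, deadline)
    except TimeoutError:
        proved = False  # ran out of time: treat A as not proved
    return b if not proved else None

# 'spark present' is not a known fact, so 'engine fires' cannot be proved
# and the default conclusion is drawn:
print(default_conclude("engine fires", "check the electrics"))
```

The non-monotonic character is visible here too: add "spark present" to FACTS later and the same call no longer yields the default conclusion.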
Given that you want to extend traditional logic to include non-monotonic systems how do you go about it?
The answer is surprisingly easy in theory but computationally very expensive in practice. All you have to do is keep lists of all the facts that you know or have deduced, together with their justifications. The justification for holding a particular fact to be true can vary from 'none needed', i.e. the fact is true no matter what, to lists of other facts that either have to be true or not known to be true. The facts that have to be true in the justification support the conclusion and the facts that are not known to be true are contradictions.
For example, you may have a rule:
the car is fast <- support(expensive, large CC), contradiction(diesel engined)
which means that you can conclude that a car is fast if 'expensive' and 'large CC' are both true and 'diesel engined' isn't known to be true. (A diesel engine generally gives less power than a gas engine of the same CC.)
Notice that you do not have to prove that "diesel engined" is false; it is enough not to have proved it true, as most cars have gas engines.
This looks deceptively easy but it implies that you have to keep a list of facts that you have not yet proved and their justifications so that you can propagate the effect of any fact changing its status.
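The 'fast car' rule can be written as a small justification record that is re-checked whenever a fact changes status. This is only a sketch of the idea, with invented names, and it leaves out the hard part, which is propagating a change through chains of dependent conclusions:

```python
# Sketch of a justification in the style of the 'fast car' rule:
# a conclusion holds while its supports are true and its contradictions
# are not known to be true. All names are illustrative.

class Justified:
    def __init__(self, conclusion, support, contradiction):
        self.conclusion = conclusion
        self.support = support              # facts that must be true
        self.contradiction = contradiction  # facts that must not be known true

    def holds(self, known):
        """known maps fact -> True/False; an absent fact is unknown."""
        supported = all(known.get(f) is True for f in self.support)
        blocked = any(known.get(f) is True for f in self.contradiction)
        return supported and not blocked

fast = Justified("the car is fast",
                 support=["expensive", "large CC"],
                 contradiction=["diesel engined"])

known = {"expensive": True, "large CC": True}
print(fast.holds(known))        # True: 'diesel engined' is merely unknown
known["diesel engined"] = True  # new evidence arrives...
print(fast.holds(known))        # False: the conclusion is withdrawn
```

A full truth-maintenance system would also record which conclusions depend on this one, so that a single status change ripples through everything built on top of it, and that bookkeeping is where the computational cost lives.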
In addition to this, justifications can become much more complex than just lists of facts. It might be that you need to list sets of rules, proofs and relationships necessary to support a conclusion.
At the moment practical non-monotonic logic needs machines that are more powerful than we can build. The theory might be good but the practice is more than we can manage.
If you suggest using a system of logic that has more than two truth values to the proverbial 'man in the street' then the likely result is a straitjacket!
To even contemplate that a fact can be somehow intermediate between true and false seems like the first step on the road to madness.
Of course it all depends what you use the intermediate value to represent.
As long as you can come up with a good interpretation for three truth values and a set of operations that don't lead to any contradictions, then what you have makes sense and is far from madness.
The only trouble is that there are many multi-valued logics, each one designed to overcome some particular problem.
Let's take a look at some of the best known.