Don’t raise an exception – EVER!
Exceptions are, in practice, the lazy programmer's way out of a hole. It would be better not to dig the hole in the first place, but if you do find yourself in such a hole then the advice is to stop digging and climb out - not drop someone else into the hole by throwing an exception.
We live in a world where, in the main, things break slowly.
You can hear the chair cracking and creaking long before it gives way under you. Most mechanical structures fail with some warning and the ones that only ever fail catastrophically we like to avoid as clearly dangerous.
It isn’t so with software and it isn’t the case with digital hardware.
If you have a nice old analog amplifier - I still think class A is best - then when it starts to go wrong you hear some music with perhaps distortion or noise. If the amplifier is digital it is generally an all-or-nothing situation. While it is true that some small error might produce a slight timing shift, a pop or a spike, generally anything that gets close to closing off the stream of bits is a complete failure.
In electronics the tendency to fail slowly - or, put another way, to remain usable under error - isn't quite as pronounced as in the case of physical structures.
Software, however, has almost no error tolerance.
It tends to fail abruptly and completely. A program saving some data to disk either completes the entire task or it fails completely, crashing to the point where it is unstable and has to be reloaded.
Of course this probably means that the user just lost all their work, but they should have been more cautious and saved more often and they should have had backups and …
And all this fell over because perhaps one bit failed to save correctly and a parity error occurred. If the user had been given a chance they might have opted to save the file with a parity error rather than face the loss of the entire work.
To be able to retrieve a file with perhaps a scrambled paragraph would be better than a complete loss and, who knows, the error might only be in a single character.
A whole document lost because of a single character error.
This is almost paradoxical because one of the reasons for abandoning analog and moving to digital is that it is possible to correct a degraded digital signal back to its original state. However once again this is an example of the all-or-nothing principle because there comes a point when you cannot recover the digital signal at all.
All or nothing
Software tends to perpetuate the all or nothing approach to failure that it inherits from the digital hardware that runs it.
This all goes much deeper, however, because the approach is built into the languages that we use to create the applications.
There are two approaches to what should happen when a programmer writes something that could be nonsense.
One group of languages simply throws up its hands (metaphorically) in horror and stops executing the program.
The sentiment is nicely captured in the phrase "Halt and catch fire".
The other, and far less common, approach is to attempt to complete the operation as best as possible and then continue with the rest of the program.
This might not make sense in all situations and it might even be dangerous in some other situations.
For example, if the file system is corrupt, even at the level of a single bit, then pressing on with an operation could in principle wipe the entire drive. This is a bit like noticing that a bridge has a single broken support and assuming it can continue to work - but when you drive onto it there is a catastrophic collapse, because it was the one support that actually mattered. Of course, bridges don't tend to be built in this way.
Just because you can invent horror stories doesn't mean the idea isn't worth considering. Add to the mix a sort of “do no harm” clause and things begin to look more reasonable.
You can even formulate a sort of Asimov's law of robotics:
Software should press on with the task unless the possibility of harming the wider system or itself is detected.
What it should not do, at all costs, is throw an exception, fail to handle the exception and come to a grinding halt as all the brakes are applied - usually bowing out with a
“something unexpected has just happened”
Consider for a moment what the world would be like if throwing an unhandled exception was the way everything was dealt with, even the slightest problems.
Your cooker might just switch off half way through the roast because it can’t quite get to the temperature you set, or the freezer might give up keeping things frozen because the water dispenser was blocked.
You generally expect your car to limp on if it has a minor problem, sufficient at least to get you home if possible. You would be fairly annoyed with a car that switched off the engine with a “Cylinder 3 not firing reliably” exception and then refused to move.
Of course with software finding a way into everyday hardware a car failing because of an exception isn't at all unlikely.
So software should try to follow the example set by analog hardware and just keep on working.
For the programmer this is most easily achieved by having the "keep going" attitude built into the language.
A good example of this is PHP which, in development mode, signals problems to the programmer with warnings that sound so severe they surely must cause a runtime error if not fixed. The truth of the matter is that they usually don't, and you can programmatically set the level of error that causes a PHP program to halt.
Yes, that's correct - you can write instructions that say: the next bit is slightly error prone, so just ignore anything that goes wrong.
The idea is that PHP is creating HTML for consumption by a web browser, which also follows the "keep going" pattern of work, so if the HTML sent is slightly wrong syntactically the page rendering shouldn't crash.
In fact web browsers are a really good example of the "keep going" principle.
They are implemented to swallow a lot of rubbish in a web page and they ignore anything they don't understand. The result is that web browsers are remarkably robust and they don't usually crash because of some HTML they are fed.
However, even if you are stuck working with a language that has a built-in tendency to stop on the slightest error you can do something about it. You can quite simply handle all of the exceptions that are raised - without exception - and stick to the rule that you will never write code that raises an exception.
In this way you keep the show on the road and keep the code running through the processor. If you raise an exception you allow the possibility that some other chunk of software will not handle it, and so you are creating a chance that the whole thing will come to a sudden halt.
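In a language like Python this rule can be sketched as a catch-all wrapper that never lets an exception escape. The following is a minimal sketch, not anything prescribed by the argument above - the `keep_going` name, the fallback value and the logging call are all my own assumptions:

```python
import logging

def keep_going(task, fallback=None):
    # Run the task; if anything at all goes wrong, log it and
    # press on with a fallback result rather than let the
    # exception propagate and halt the whole program.
    try:
        return task()
    except Exception as exc:  # handle all exceptions - without exception
        logging.warning("pressing on after: %s", exc)
        return fallback

print(keep_going(lambda: 1 / 0, fallback=0))  # prints 0, not a traceback
```

Note that in Python a bare `except Exception` deliberately leaves `KeyboardInterrupt` and `SystemExit` alone, so the user can still stop the program even though the code itself never gives up.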
Exceptions Not Considered Harmful
Let's get this clear.
I'm not saying exceptions are to be considered harmful - in theory.
In principle exceptions are a way of handling what should happen when things go wrong.
You may wonder why such things going wrong cannot be handled in standard code without having to invent the whole idea of an exception?
The reason you need exceptions is so that you can "unwind" the call stack.
It is often said, and there is some truth in it, that an exception is different from an error condition because it is unexpected. To borrow some well-known jargon - an error is a known unknown but an exception is an unknown unknown.
There is more to it than this. The reason that some conditions are best handled using an exception is that they need to "climb" back up the call stack to be fixed. If function A calls function B, which calls function C, and something goes wrong in function C, then if it can be fixed in function C it's an error condition. If it can't be fixed in function C, then the call stack has to be unwound and we have to go back to function B, or even A, to fix the problem - and this is an exception.
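The unwinding just described can be illustrated with a short Python sketch. The function names and the particular failure - a negative value that only the top-level caller knows how to replace - are invented purely for illustration:

```python
class NeedsNewInput(Exception):
    """Signals that the input must be chosen again further up the stack."""

def function_c(value):
    if value < 0:
        # C cannot fix this locally - raise, and unwind the stack.
        raise NeedsNewInput("value must be non-negative")
    return value ** 0.5

def function_b(value):
    return function_c(value) * 2  # B simply passes the problem upward

def function_a(value):
    try:
        return function_b(value)
    except NeedsNewInput:
        # Only A knows where the value came from, so only A
        # can retry with a corrected value.
        return function_b(abs(value))

print(function_a(-4))  # A's retry turns -4 into 4, giving 4.0
```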
Consider for example the dreaded divide by zero error - which isn't a good example of an exception but it is simple. This has been around since the first days of computing and yet we still don't handle it well.
Simple languages expect you to test for the possibility of a divide by zero before you do the divide - for example, by checking that a is non-zero before computing the quotient. Other languages will throw an exception if the result cannot be computed because a is zero.
In both cases we have put off the moment when a user sees an error message and the program stops.
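The two approaches - an explicit guard test versus an exception handler - can be sketched in Python. The choice of 0.0 as a fallback result is an assumption made purely for illustration:

```python
def divide_with_guard(b, a):
    # Guard style: test for zero before dividing.
    if a == 0:
        return 0.0  # hypothetical fallback value
    return b / a

def divide_with_exception(b, a):
    # Exception style: attempt the division and handle the failure.
    try:
        return b / a
    except ZeroDivisionError:
        return 0.0  # same hypothetical fallback
```

Either way the program keeps running - but neither version yet answers the harder question of what the right result actually is.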
What we haven't done is work out what should happen next.
In most cases some part of the program will have to be done again. The fact that we are trying to divide by zero means that somehow a got to be zero and it shouldn't be zero. We need to go back and have another go at getting a non-zero value for a.
This is where the real problem lies. Most programs have a forward flow of control. Functions call other functions and these deliver up their results. What usually doesn't happen is that a function says to the function that called it - "let's start again, forget you called me and do it over again". This is a bit like the tail wagging the dog and this is what exceptions are supposed to allow for.
When you call a function you need to prepare for something to go wrong and write an exception handler that tries the task over again.
This is generally very difficult.
The reason is that for a retry to be successful you have to figure out what caused the problem - in this case, what caused a to be zero. If a was determined by a function higher up the call stack then there may be no choice but to throw the exception one level higher, and so on.
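As a concrete sketch of retrying one level up, suppose the zero divisor comes from an empty data set, and only the caller knows where substitute data can be found. All the names here are invented for illustration:

```python
def average(values):
    # Raises ZeroDivisionError when values is empty - average()
    # itself has no way to fix that.
    return sum(values) / len(values)

def report(values, fallback_values):
    # One level up the call stack we know where the values came
    # from, so a retry with substitute data is possible here.
    try:
        return average(values)
    except ZeroDivisionError:
        return average(fallback_values)

print(report([], [1, 2, 3]))  # retries with the fallback data: 2.0
```

If the fallback data could itself be empty, `report` would have to throw the exception one level higher still - which is exactly the difficulty described above.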
This theory is great but in practice it usually doesn't work like this.