You write a function that tests for a divide by zero and you throw an exception when it occurs. Your assumption is that the function that called your function will work out why the divide by zero happened and call your function again with a correct, or at least reasonable, value.
What the programmer writing the function that calls your function (and yes, it could still be you) thinks is:
"That function just threw an exception - it isn't working and I don't know why or how. I will just throw the exception and see if something higher up knows how to handle it."
The exception handling mechanism is designed to allow you to unwind the call stack and try again, not pass the buck.
The exception isn't harmful but it does allow programmers to adopt a frame of mind that usually ends up with the user being the ultimate exception handler.
The goal is not to have the user act as the routine exception handler but the exception handler of last resort.
So how can the "keep going" mentality help?
Rather than throwing an exception as soon as, say, a divide by zero occurs and leaving someone else to deal with the problem, you really should consider your options.
Is there anything you can do which allows the process to continue in such a way that the user can get some value out of what is going on?
In the case of the divide by zero error it is tempting to think that the best fix is to set the result to the machine infinity - after all, that's what dividing by ever smaller numbers tends to. Sometimes the limiting argument is reasonable, but in many cases the result should be set to some sort of mean result and the offending input should be ignored.
For example, if you are working out a graphics layout then taking a reasonable value for the result lets the process continue. It might produce a poor layout, but at least the user gets to see the result and gets some feedback.
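As a minimal sketch of this idea (in Python, with hypothetical names; the fallback value is an assumption about what counts as "reasonable" for the layout), handling the divide by zero locally might look like:

```python
def safe_column_width(total_width, column_count, fallback=100):
    """Divide the available width among columns, but keep going on bad input.

    Rather than letting ZeroDivisionError propagate up the stack,
    substitute a reasonable default width so the layout can still be drawn.
    """
    try:
        return total_width / column_count
    except ZeroDivisionError:
        # Handle it locally: the layout may be imperfect,
        # but the user still gets to see something.
        return fallback

normal = safe_column_width(800, 4)      # usual case
degenerate = safe_column_width(800, 0)  # keeps going with the fallback
```

The caller never sees an exception; at worst it sees a slightly odd layout, which the user can judge for themselves.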
Ban The Error Message
It's not always easy to see what to do next when an error has occurred, but you can always look for a way to return control to the user. Not to simply present them with a
"Something unexpected happened this program is going to terminate now"
but to present them with the wreck of the task and see if they can work out what to do next.
After all, you may not be able to implement the artificial intelligence needed to solve the problem, but you can always rely on the user to supply some intelligence.
Try to keep as much of the application's state intact and return to the most general form of the user interface appropriate for the state. And yes you do have to explicitly manage state. To keep a program running when something goes wrong you need to have a good enough control of its state to be able to roll back to the last stable point that loses the smallest amount of data or work.
For example, the user has just "lost" a document due to a disk or communications error. Restore all state data so that the document is still editable and return to edit mode. The user can then decide what to do. You also need to try not to constrain the user in the choice of next action, so you might have to roll back some of the state - allowing the user to select a storage path or medium, for example.
Keep the state data but try to ensure that it can be subsequently changed.
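A toy sketch of the document example (Python, with entirely hypothetical names) shows the shape of explicit state management: snapshot the last stable point, attempt the risky operation, and roll back rather than terminate if it fails:

```python
import copy

class Editor:
    """Minimal sketch: a failed save returns the user to edit mode
    with the document intact, instead of terminating the program."""

    def __init__(self):
        self.document = "important work"
        self.dirty = True  # unsaved changes exist

    def save(self, write):
        # Snapshot the last stable point before changing any state.
        stable = copy.deepcopy((self.document, self.dirty))
        try:
            self.dirty = False   # optimistically update state
            write(self.document) # may raise OSError on a disk error
            return "saved"
        except OSError:
            # Roll back so the smallest amount of work is lost:
            # the document is still there and still marked unsaved.
            self.document, self.dirty = stable
            return "warning: save failed - try another location"
```

Notice that the method returns a warning the user interface can show while staying in edit mode; nothing is thrown at the caller.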
We could go on examining examples for a long while and each example would present a different set of difficulties - but so far I've yet to find one that defeats the basic desire to "keep going". It's more an attitude of mind than something inherent in the technology.
Digital hardware might be all or nothing but the software that runs on top of it is much more flexible.
The "keep going" philosophy has at least four slogans:
don't raise an exception - handle it locally if possible
always handle any exceptions that existing code beyond your control might raise
don't ever pass exceptions on - unless you really can't handle it
only ever generate warning messages - error messages are for quitters.
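Taken together, the slogans might look like this in practice. This is a Python sketch; the choice to fall back to the input's magnitude is an assumption about what is "reasonable" for the application, not a general rule:

```python
import math
import warnings

def robust_sqrt(x):
    """Handle the exception locally and issue a warning,
    rather than passing the exception on or printing an error."""
    try:
        return math.sqrt(x)
    except ValueError:
        # Slogan in action: a warning, not an error message,
        # and a value the rest of the program can keep working with.
        warnings.warn("negative input to sqrt; using its magnitude instead")
        return math.sqrt(abs(x))
```

The caller always gets a usable number back; the warning is there for whoever wants to investigate later.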
Of course, slogans are by their very nature not binding.