You can, and your program will call std::terminate if there’s already an exception in flight (i.e. during stack unwinding). Not exactly desirable if you’re trying to write code that ensures careful resource cleanup.
Also why it’s widely regarded as _wrong_ to ever throw in a destructor.
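A minimal sketch of the failure mode (the destructor has to be marked `noexcept(false)` to even attempt this, since destructors default to `noexcept` in C++11 and later):

```cpp
#include <stdexcept>

struct Cleanup {
    // Destructors are implicitly noexcept; opting out is the only way to
    // let an exception escape one at all.
    ~Cleanup() noexcept(false) {
        throw std::runtime_error("cleanup failed");
    }
};

int main() {
    try {
        Cleanup c;
        throw std::runtime_error("primary failure");
        // Unwinding destroys 'c'; its destructor throws while the first
        // exception is still propagating, so std::terminate() is called
        // and the catch block below is never reached.
    } catch (const std::exception&) {
    }
}
```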
IMO this is a design bug in C++. The authors couldn't agree on what to do in the exception-during-unwind scenario, so they chose the worst possible option: crash.
In most cases, a second exception raised while another exception is already propagating is merely a side-effect of the first exception, and can probably safely be ignored. If the idea of throwing away a secondary exception makes you uncomfortable, then another possible solution might have been to allow secondary exceptions to be "attached" to the primary exception, e.g. `std::exception::secondary()` could return the list of secondary exceptions that were caught. Obviously there's some API design thought needed here but it's not an unsolvable problem.
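A purely hypothetical sketch of what I mean (nothing like this exists in standard C++; the names are made up):

```cpp
#include <exception>
#include <vector>

// Hypothetical only: imagine the runtime attached exceptions raised during
// unwinding to the exception already in flight instead of terminating.
class exception_with_secondaries : public std::exception {
public:
    // Called (hypothetically) by the unwinder when a destructor throws
    // while this exception is propagating.
    void attach_secondary(std::exception_ptr p) { secondaries_.push_back(p); }

    // The accessor a handler would use to inspect what else went wrong.
    const std::vector<std::exception_ptr>& secondary() const noexcept {
        return secondaries_;
    }

private:
    std::vector<std::exception_ptr> secondaries_;
};
```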
If we could just change C++ to work this way, then throwing destructors would be no problem, it seems? So this looks like a C++-specific problem, not something fundamental to RAII.
That said, there is another camp which argues that it fundamentally doesn't make sense for teardown of resources to raise errors. I don't think you're in this camp, since you were arguing the opposite up-thread. I'm not in that camp either.
> If the idea of throwing away a secondary exception makes you uncomfortable, then another possible solution might have been to allow secondary exceptions to be "attached" to the primary exception, e.g. `std::exception::secondary()` could return the list of secondary exceptions that were caught.
Java has that for its pseudo-RAII "try-with-resources" statement: when an exception is thrown during cleanup of a try-with-resources block, and the cleanup itself was triggered by an exception (rather than by leaving the block normally), the cleanup exception is added to a "suppressed" list on the original exception. Java exceptions have had, since Java 7 (which added try-with-resources), both a "cause" field (for the exception that caused this exception, which has existed since Java 1.4) and a "suppressed" field (which records the exceptions suppressed while cleaning up after this exception).
I agree with your points about Java. I have direct experience with suppressed exceptions in Java 6 -- it was painful to debug ("where did my exception go???"). However, this works because Java forces everything thrown to be a sub-class of Throwable. (Please correct me if wrong.) C++ allows you to throw anything, including (bizarrely) null. I learned recently that C# allows the same -- you can throw null(!). How does C# handle suppressed exceptions?
In C#, if the finally-block of a try-finally throws, it replaces the current exception altogether; and using-statement desugars into try-finally.
And C# does not actually allow you to throw null. It does allow you to write "throw x" where x may be null, but that will just cause an immediate NullReferenceException at runtime.
Even if the standard settled on one way or another to package up secondary exceptions (or discard them), how likely is it that the calling code would be able to handle and recover from this case?
I am personally on team crash - I would rather my program exit and restart in a known state than be left in some weird and hard-to-replicate configuration.
So, I personally prefer to use exceptions for "panic" scenarios, like assertion failures, where the application has hit a state it doesn't expect and cannot handle.
Crashing makes sense in these scenarios if the application is only doing one thing. But I am usually working on multi-user servers. I would rather fail out the current request, but allow concurrent requests from other clients to continue.
Yes, I understand the argument: "But if something unexpected happened, your application could be left in a bad state that causes other requests to fail too. It's better to crash and come back clean."
This is not my experience in practice. In my experience, bad states that actually poison the application for other requests are extraordinarily rare. The vast, vast majority of exceptions only affect the current request and failing out that request is all that is necessary. Taking down the whole process is not remotely worth it.
Moreover, crashing on assertions has the unintended consequence of making programmers afraid to write assertions. In a past life, when I worked on C++ servers at Google, assertion failures would crash the process. Because of that, I saw some people argue that you should not use assertions in your code at all! Some argued for writing checks that would log an error and then return some sort of reasonable default that would allow the program to continue. In my opinion, this is an awful place to end up. Liberal use of asserts makes code better by catching problems, making the developer aware of them, and avoiding producing garbage output when something goes wrong.
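For what it's worth, a rough sketch of the kind of check I mean, assuming a server where each request runs under its own try/catch (the `CHECK` name and `assertion_error` type are illustrative, not any particular library's API):

```cpp
#include <sstream>
#include <stdexcept>

// An assertion that throws instead of aborting, so a per-request handler can
// catch it, log it, and fail only the offending request.
struct assertion_error : std::logic_error {
    using std::logic_error::logic_error;
};

#define CHECK(cond)                                                         \
    do {                                                                    \
        if (!(cond)) {                                                      \
            std::ostringstream oss_;                                        \
            oss_ << __FILE__ << ":" << __LINE__                             \
                 << ": assertion failed: " #cond;                           \
            throw assertion_error(oss_.str());                              \
        }                                                                   \
    } while (0)

void handle_request(int payload_size) {
    try {
        CHECK(payload_size >= 0);  // bug in this request fails only this request
        // ... normal handling ...
    } catch (const assertion_error&) {
        // log the message, return an error response; other requests keep running
    }
}
```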
> Moreover, crashing on assertions has the unintended consequence of making programmers afraid to write assertions. In a past life, when I worked on C++ servers...
The client-side rendition of this philosophy exists too. Some client engineers consider it bad form to let the user see the application crash. So much so that they'll actually advocate for harmful things like littering the codebase with default values, so that when something bad happens the application just keeps chugging along in a state nobody ever accounted for, doing who knows what to the user's data, because the errors were hidden behind default values. It's really, really sloppy.
I am definitely team let the user see the crash. Then they know something went wrong, can be alert, and can try again if needed. They can report the problem so the devs are aware, or the devs' crash tooling will do it automatically. And, ultimately, the issue will get fixed.
(The original version of this philosophy was probably "don't let the user see the app crash; instead, handle the error properly and show the user something helpful if necessary". But when adopted by time-constrained product engineering teams, sadly, nobody cares about properly handling error states.)
> Even if the standard settled on one way or another to package up secondary exceptions (or discard them), how likely is it that the calling code would be able to handle and recover from this case?
Not unlikely. Sometimes your unwind involves cleaning up things that throw for the same reason as the original failure - e.g. failure to communicate with some piece of hardware. But you still try going through that unwind, right? Eventually you leave the context of accessing your hardware device entirely and are back to just working with system memory, the standard streams and some files, which would probably work fine.
I have recently experienced this while writing wrappers for the CUDA API for GPU programming.
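A rough sketch of the situation (not the actual wrappers, and assuming C++17 for `std::uncaught_exceptions()`): an RAII owner for device memory whose cleanup call can itself report an error, which you can only log or swallow if the stack is already unwinding.

```cpp
#include <cuda_runtime.h>

#include <cstddef>
#include <exception>
#include <iostream>
#include <stdexcept>

class device_buffer {
public:
    explicit device_buffer(std::size_t bytes) {
        if (cudaError_t err = cudaMalloc(&ptr_, bytes); err != cudaSuccess)
            throw std::runtime_error(cudaGetErrorString(err));
    }

    device_buffer(const device_buffer&) = delete;
    device_buffer& operator=(const device_buffer&) = delete;

    ~device_buffer() noexcept(false) {
        cudaError_t err = cudaFree(ptr_);
        if (err == cudaSuccess)
            return;
        // If the device fell over, the operation that failed first is already
        // unwinding the stack; throwing here would call std::terminate, so the
        // cleanup error can only be logged. (Comparing against the count taken
        // at construction time is more precise, but this is the gist.)
        if (std::uncaught_exceptions() > 0)
            std::cerr << "cudaFree failed during unwind: "
                      << cudaGetErrorString(err) << '\n';
        else
            throw std::runtime_error(cudaGetErrorString(err));
    }

private:
    void* ptr_ = nullptr;
};
```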
Sort of, but nested_exception covers a different scenario. With nested_exception, the "attachment" is an exception which caused the exception it is attached to. In the scenario I'm talking about, the "attachment" is an exception which was caused by the exception it is attached to.
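For reference, a minimal sketch of the direction `std::nested_exception` does cover, via `std::throw_with_nested` and `std::rethrow_if_nested` (the attached exception is the cause):

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

void load_config() {
    try {
        throw std::runtime_error("disk read failed");        // low-level cause
    } catch (...) {
        // Wraps the in-flight exception inside the new, higher-level one.
        std::throw_with_nested(std::runtime_error("could not load config"));
    }
}

int main() {
    try {
        load_config();
    } catch (const std::exception& outer) {
        std::cout << outer.what() << '\n';
        try {
            std::rethrow_if_nested(outer);                    // re-raise the cause
        } catch (const std::exception& cause) {
            std::cout << "  caused by: " << cause.what() << '\n';
        }
    }
}
```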
Anyway, the key missing thing is not so much the exception representation, but the ability to have custom handling of what to do when an exception is thrown during unwind. Today, it goes straight to std::terminate(). You can customize the terminate handler, but it is required to end the process.
Goto + RAII aren't generally compatible unless you structure things very carefully.
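A small sketch of the two directions, using nothing beyond standard C++: jumping out of a scope runs destructors as usual, but jumping forward into the scope of an object with non-trivial initialization (the classic C "goto cleanup" pattern) is ill-formed.

```cpp
#include <cstdio>
#include <string>

void demo(bool bail) {
    {
        std::string s = "owned by RAII";
        if (bail)
            goto done;           // OK: leaving the block destroys 's' normally
    }
    {
        // if (bail) goto skip;  // ill-formed if uncommented: it would jump over
        std::string t = "hi";    // the initialization of 't' into its scope...
        // skip:                 // ...which the compiler rejects outright
        std::puts(t.c_str());
    }
done:
    return;
}

int main() {
    demo(true);
    demo(false);
}
```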