We do seem to have got into a race to make C and C++ compilers less and less useful while still following the letter of the rules (but certainly NOT the spirit).
I think modern C and C++ compilers are very useful. Then again, I know these languages are full of huge pitfalls, so I know what I'm getting into, and when I make a mistake I blame myself, not the compiler.
I also use the features of modern C/C++ compilers, such as warnings (-Weverything on clang, then turning off the warnings I'm not interested in) and sanitizers (UndefinedBehaviorSanitizer, AddressSanitizer).
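For what it's worth, a typical invocation along those lines looks something like this (the flag set is illustrative; which -Wno-... suppressions you pick is per-project taste):

```shell
# -Weverything turns on all of clang's warnings; then opt out of the
# ones you don't care about. -fsanitize=undefined,address enables
# UBSan and ASan together; -g keeps stack traces readable.
clang -Weverything -Wno-padded -Wno-covered-switch-default \
      -fsanitize=undefined,address -g -O1 \
      main.c -o main
```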
I was under the impression that the whole point of undefined behavior in the spec was to give implementers flexibility. Am I wrong? If not, then it sounds like it's very much in the spirit of the rules.
That's implementation-defined behaviour (the implementation chooses a behaviour and must document its choice) or unspecified behaviour (the implementation chooses from a set of permitted behaviours but doesn't have to document which one).
Undefined behaviour places no requirements on the implementation at all: the code may do anything, unreliably and unpredictably, and nothing has to be documented. Its purpose is to let compiler writers completely ignore the erroneous code that results in UB rather than spend effort diagnosing it or giving it a meaning. Most compilers still make an effort to diagnose some UB anyway, though.
The original reason was to keep the standard compatible with existing compilers (i.e. to leave behaviour that varied between existing compilers unstandardized) and to allow hardware-native behaviour for things like integer overflow and extended shifts, not to enable aggressive optimizations.