CopyPastor

Detecting plagiarism made easy.

Score: 0.8041701316833496; Reported for: String similarity

Possible Plagiarism

Plagiarized on 2025-11-27
by Toby Speight


Original - Posted on 2011-11-08
by ssube

Let's start with a history lesson. The C language, in particular, has a long and active history of more than half a century.
The earliest C compilers did very little program analysis, performing a fairly simple translation from source code to assembly output. If you wanted to scan your code for common bugs, you'd make a separate pass with a _linter_; because that was time-consuming, you'd lint far less often than you compiled.
Your users wouldn't be expected to lint your code when they built it - that's something that would only happen after making changes.
Over time, the functions of the linter became available within compilers, so that common errors could be diagnosed at the same time that you compile. That's not always desirable, of course - not everyone has sufficiently fast hardware to accept the overheads of linting, and if you're simply building code you received without modifying it, then you don't care about the results.
Around this time, C became formally standardised by ANSI and ISO. Most compilers followed a default policy of emitting only the diagnostics required by the standard, and not linting the code unless explicitly requested by the user. This is helpful for the common case of building existing programs, which might have been written for earlier versions of the standard. If a new C standard mandated many additional diagnostics, we'd probably end up in a world where libraries tend to ossify and become too hard to use with new code.
C++ compilers tend to be closely aligned to C compilers, so the "warn only if requested" behaviour was natural for them, too. Additionally, many of the questionable constructs (e.g. narrowing conversions) of C were inherited by C++ to aid migration of codebases.
---

The recommendation to enable warnings (or, equivalently, to run a linter on the sources) is usually given when a program doesn't behave as expected. As a first step in debugging, the biggest return on effort is to have the machine identify likely problems before the more time-consuming human analysis begins.
If code has been developed without warnings enabled, then enabling warnings when needed for debugging is less helpful, as there's likely to be lots of warning spam that drowns out the "smoking gun" we're looking for.
Enabling warnings-as-errors is something I recommend for developers: it ensures that false-positive warnings are addressed while the code is still working, and forces you to investigate and rectify any newly-appearing warning. It's not usually a good choice for the recipients of your code, who might be using different compilers (perhaps warning about more potential problems); but having warnings enabled (as mere warnings) may still be helpful, particularly if they identify target-specific issues that don't affect the author's platform.
Because sometimes you know better than the compiler.
It doesn't happen often with modern compilers, but there are times when you need to do something slightly outside the spec, or be a little tricky with types, in a way that is safe in this particular case but not strictly correct. That will cause a warning, because technically it's usually mostly wrong some of the time, and the compiler is paid to tell you when you might be wrong.
It seems to come down to the compiler usually knowing best but not always seeing the whole picture or knowing quite what you *mean*. There are just times when a warning is *not an error*, and shouldn't be treated as one.
As you stray further from standard use, say hooking functions and rewriting code in memory, the inaccurate warnings become more common. Editing import tables or module structure is likely to involve some pointer arithmetic that might look a little funny, and so you get a warning.
Another likely case is when you're using a nonstandard feature that the compiler warns about. For example, in MSVC10, this:

    enum TypedEnum : int32_t { ... };

will give a non-standard extension warning. Completely valid code when you're coding to your compiler, but it still triggers a warning (under level 4, I believe). A lot of features now in C++11 that were previously implemented as compiler-specific extensions fall into this category (totally safe, totally valid, still a warning).
Another safe case that gives a warning is forcing a value to bool, like:

    bool FlagSet(FlagType flags) { return (flags & desired); }
This gives a performance warning. If you know you want that, and it doesn't cause a performance hit, the warning is useless but still exists.
Now, this one is sketchy as you can easily code around it, but that brings up another point: there may be times when there are two different methods of doing something that have the same results, speed and reliability, but one is less readable and the other is less correct. You may choose the cleaner code over the correct code and cause a warning.
There are other cases where there is a potential problem that may occur, which the warning addresses. For example, MSVC C4683's description literally says "exercise caution when..." This is a warning in the classic sense of the word, something bad could happen. If you know what you're doing, it doesn't apply.
Most of these have some kind of alternate code style or compiler hint to disable the warning, but the ones that don't may need it turned off.
Personally, I've found that turning up the warnings and then fixing them helps get rid of most little bugs (typos, off-by-one, that sort of thing). However, there are spots where the compiler doesn't like something that *must* be done one particular way, and that's where the warning is wrong.

        