Have you ever had an
assert fire, only to find the resulting core dump useless because of missing variable information or an invalid call stack? Common factors in selecting a C or C++ compiler are availability, correctness, compilation speed, and application performance. A factor that is often neglected is debug information quality, which symbolic debuggers use to map executable state back to the source-code form familiar to most software engineers. When a production build of an application fails, the level of access to program state directly determines a software engineer's ability to investigate and fix a bug. If the compiler has optimized a variable out, or is unable to express to a symbolic debugger how to reconstruct the variable's value, the investigation is significantly impaired: the engineer must try to reproduce the problem, iterate through speculative fixes, or resort to prohibitively expensive debugging techniques such as reconstructing program state through analysis of the executable code.
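To make this concrete, here is a minimal sketch (not from the original post; names and values are illustrative) of the kind of program where an optimized build loses variable information. Compiled with `gcc -O2 -g`, the local `sum` may live only in a register or be folded away entirely, so inspecting the core dump produced by the `abort()` often shows it as `<optimized out>` in gdb, depending on compiler and version.

```c
/* optimized_out.c -- hypothetical example.
 * Build:  gcc -O2 -g optimized_out.c -o optimized_out
 * Run it, then open the resulting core dump in gdb and try
 * `print sum` in frame compute(): it is frequently reported
 * as <optimized out> under -O2, while -O0 shows the value.
 */
#include <stdlib.h>

static int compute(const int *values, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += values[i];
    /* With -O2, `sum` may never be spilled to memory, leaving the
     * debugger with no location to read its value from at this point. */
    if (sum < 0)
        abort();   /* triggers the core dump we want to inspect */
    return sum;
}

int main(void)
{
    int values[] = { 1, -2, 3, -4 };  /* sums to -2, so abort() fires */
    return compute(values, 4) > 0 ? 0 : 1;
}
```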
Debug information quality is, in fact, not proportional to the quality of the generated executable code, and it varies wildly from compiler to compiler. This blog post compares debug information quality between two popular compilers, gcc and clang. It introduces the topic of optimizations and highlights examples of their impact on debuggability. This post is part of a longer series; in the next one, we'll do a finer-grained analysis directly comparing gcc and clang on real-world and synthetic programs.