Have you ever had an assert trigger, only to result in a useless core dump with missing variable information or an invalid call stack? Common factors in selecting a C or C++ compiler are availability, correctness, compilation speed, and application performance. A factor that is often neglected is debug information quality, which symbolic debuggers rely on to reconcile application executable state with the source-code form that is familiar to most software engineers. When a production build of an application fails, the level of access to program state directly affects a software engineer's ability to investigate and fix a bug. If a compiler has optimized out a variable, or is unable to express to a symbolic debugger how to reconstruct the value of a variable, the engineer's investigation is significantly impaired: they must either attempt to reproduce the problem, iterate through speculative fixes, or resort to prohibitively expensive debugging, such as reconstructing program state through executable code analysis.
Debug information quality is, in fact, not proportional to the quality of the generated executable code, and it varies wildly from compiler to compiler. This blog post compares debug information quality between two popular compilers: gcc and clang. We will introduce the topic of optimization and highlight examples of its impact on debuggability. This post is part of a longer series; in the next post, we'll do finer-grained analysis, directly comparing gcc and clang on real-world and synthetic programs.
Depending on your application, crash reports may come in at a rate of a few to many thousands a day. Regardless of scale, fatal errors slip through the cracks, and even for the ones you do catch, it's difficult to understand crash impact and to differentiate unique crashes from crash groups. Your ability to quickly triage and prioritize these fatal errors is crucial to acting on them with urgency. Triage and prioritization rely on determining impact, such as which users are affected by a crash or which crash has the greatest effect on revenue. Backtrace helps you do this effectively through our deduplication systems.
Our more perceptive clients may have noticed that, historically, we've written robust release notes for our server components but haven't included release notes for our web UI, which we call Console. Now that more of our users are adopting Console as the primary means to manage, understand, and resolve crashes with Backtrace, we've decided to diligently update our community with Console release notes.
Check them out to find the latest updates to our web UI.
Backtrace now includes a completely new storage and indexing subsystem that lets engineers easily slice and dice hundreds of attributes in real time, so they can better triage and investigate errors across their ecosystem, all from the comfort of their command-line environment. When an application crash occurs, there are hundreds of data points that may be relevant to the fault, ranging from application-specific attributes such as version or request type, to crucial fault data such as crashing address, fault type, and garbage collector statistics, to environment data such as system memory utilization.
Read on to learn more about how you can interact with Backtrace from the command line for crash report and error investigation.