Static Analysis on ’Roids
Faced with a large code base and a high-risk environment?
Logiscope’s three tools help you sort your code into different
buckets of quality—all it takes is a spirited leader and deep pockets.
By Allan McNaughton
Unlike the rest of us, developers of mission-critical software
know that the lives and welfare of people are often in their hands.
Imagine the tragic consequences of a bug appearing in the avionics
system of a fighter jet hurtling along at Mach 2. Or consider the
millions of dollars that could be lost by a software malfunction
that shuts down the power grid.
To improve software quality, developers of such applications turn
to tools that employ standardized quality metrics and coding rules
to identify the modules and individual lines of code most likely to
contain bugs. Telelogic’s Logiscope 6.1 offers a robust feature set
that can result in more reliable, maintainable and testable C, C++,
Ada and Java applications.
Logiscope consists of three quality improvement tools (licensed
separately) that run within the context of Logiscope Studio (an
IDE-like interface that’s also used by Telelogic Tau):
- RuleChecker checks code against a predefined set of
programming rules to detect violations of good coding practices.
In addition to more than 370 coding and naming rules, it also
allows you to implement your own rules.
- Audit compares code to a standardized quality model and
generates numerous metrics and graphs, so you can diagnose
problems and make decisions based on quantitative information.
- TestChecker measures code coverage and reveals
uncovered source code paths. It collects this information
dynamically by instrumenting source code and monitoring the
application at runtime.
Static analysis is at the heart of Logiscope. The capabilities of
each tool rest upon a detailed knowledge of an application’s code
structure and control flow. However, to obtain this information, you
must first tell Logiscope where your source code is located and what
you want to accomplish.
This process can be started from within Logiscope Studio or
through its integration with Microsoft Visual Studio .NET (or 6.0).
After creating a new workspace, you select the project type (a
workspace contains one or more projects) based on the language
you’re analyzing and the tool you wish to use (for example, a C++
TestChecker project versus a Java Audit project).
Then Logiscope Studio pops up a language- and project-specific
wizard to guide you through the remaining steps. Clicking Finish
invokes Logiscope’s static analysis engine, which rapidly parses
through code to build the appropriate data files for each tool.
Well, that’s the easy part. The challenge is figuring out what to
do with all the information that Logiscope collects. If terms like
cyclomatic complexity, vocabulary frequency and
program volume seem strange to you now, they’ll be old
friends by the time you’re an accomplished Logiscope user.
Follow the Rules
Logiscope’s simplest tool,
RuleChecker, is best described as a souped-up, multilanguage,
customizable lint-type tool with a graphical UI. Although Ada and
Java are supported, most of RuleChecker’s predefined rules are
intended for C and C++ code—not surprising considering the
anything-goes nature of C (and to a lesser extent, C++) programming.
Since C code has so many rules, RuleChecker groups them into the following categories:
- Coding rules to restrict how code is presented (for example,
requiring one declaration per line).
- Complexity rules to restrict the way the language is used (for
instance, addressing structure fields via pointers, such as
ptr->field).
- Control flow rules to restrict how the language is used (for
example, not permitting the use of goto statements).
- Naming rules to govern the way application entities are
identified (for instance, requiring that enumeration constants be
written in uppercase).
- Portability rules to restrict how the language is used to prevent
porting problems (for example, not applying the right-shift
operator to signed integers).
- Resource rules to restrict how application resources are used
(for instance, requiring that a variable be declared and
initialized before use).
RuleChecker provides similarly appropriate rules for the other
supported languages. Many of the standard rules for C++, Ada
and Java are customizable, and entirely new rules can be created
for C. While this feature allows RuleChecker to be further tailored
to organizational coding guidelines, it’s not a task for the faint
of heart—a custom TCL script must be written that traverses the
parse tree to determine whether a rule has been violated.
RuleChecker’s analysis is presented in a number of ways,
including the very useful rule violations report. This HTML report
orders coding violations by the file or by the rule. If you’re
searching for a particular type of problem, the “by rule” listing is
appropriate; otherwise, the “by file” listing is more useful, as it
helps you clean up one module at a time.
What’s a Quality Model?
While RuleChecker deals with
code conformance issues, Logiscope Audit measures how well code is
written—and estimates where it’s likely to break. If you’re a
newcomer to quality modeling, you’ll first need to grasp some key concepts.
A quality model should quantify the maintainability of code,
based on criteria such as analyzability, changeability, stability,
usability, specializability and testability. But how can Logiscope
measure the level of effort it takes to do something as complex as
modifying code? It computes more than 190 metrics and combines them
in meaningful ways. For example, the level of effort necessary to
modify a function (its changeability) is the summation of the following metrics:
- PARA, the number of parameters in the function.
- LVAR, the number of local variables in the function.
- VOCF, the complexity of the vocabulary (operators and
operands) used in the function.
- GOTO, the number of goto statements in the function.
The most complex components are usually those that rank highest on these combined metrics.
Logiscope Audit runs the application
through its default quality model and then lets you do all sorts of
interesting things with the results. The journey of discovery begins
when a quality report is created.
The quality report shows how the code ranks against the quality
model. Results can be scoped at the application, class, function or
source file level. The resulting criteria, such as changeability,
testability and so on, are displayed in a pie chart, which is
divided into sections for excellent, good, fair and poor results.
Clicking on each section brings up a list of results that fall
within that ranking.
While the quality report points out which components are complex,
it doesn’t reduce complexity in those components. Ultimately, that’s
the job of the programmer—Logiscope doesn’t rewrite code. That task
can be made easier, however, by better understanding how application
components fit together.
The Logiscope Viewer assists developers on this front by
displaying a call graph of the application (Sorry, folks, this
feature isn’t available for Java). You can navigate the call graph
to examine relationships between caller and callee components, and
further insight can be gleaned by viewing the control flow graph. A
word of warning: While the control flow graph helps illustrate the
logic within a component, the results can be hard to interpret for complex components.
[Figure: Logiscope in Action. The Viewer presenting quality metrics, with the Kiviat graph in the lower right corner.]
An alternative to reading the manual is to click through the
graph and watch the source code window update accordingly (it shows
the matching line of code). Complex graphs can also be “reduced” for
improved clarity: control graph reduction collapses deeply nested
structured subgraphs into a single parent node, letting you focus on
the overall structure of the code.
The Viewer also presents quality metrics in different ways—the
most useful is the Kiviat graph (see “Logiscope
in Action”). This chart presents multiple metrics, helping you
recognize patterns and hence detect the presence of some outlying
metric that would otherwise be elusive.
While RuleChecker and Audit use
static analysis to determine how well code is written, Logiscope
TestChecker uses the same information to show how well it’s tested,
by instrumenting source code and monitoring application runtime.
Uncovered source code paths are captured at a number of levels:
- Instruction block (IB) level details sequential instructions
in a component such that execution of the first instruction block
leads to execution of the last.
- Decision-to-decision paths (DDP) include a sequence of
instructions whose origin is the entry point of the function or a
decision (an if, while and so on) and whose end is
the exit point of the function or the next decision.
- Modified condition/decision coverage (MC/DC) evaluates
conditionals to determine that every entry and exit point has been
invoked at least once, and each decision has switched to all
possible outcome values at least once (available only for C and C++).
These approaches provide varying degrees of measurement
precision; which one you use depends on the criticality of the
software to be tested and your objective. For example, in a trivial
application, IB coverage of 100 percent may be sufficient, whereas
critical applications may warrant DDP coverage nearing 100 percent,
and very critical applications may strive for almost 100 percent MC/DC coverage.
The simplest way to review TestChecker results is to generate a
test coverage report in Logiscope Studio. This HTML report
highlights uncovered source code paths and shows coverage details
down to the lowest level, such as exactly which parts of a
conditional expression weren’t tested.
TestChecker comes with its own viewer for sorting and displaying
coverage data. This tool lets you browse the results of concluded
test runs, or more interestingly, see coverage data as it’s
captured. This unique feature is especially valuable during test
development—you can easily see when a test needs further steps to
achieve desired coverage levels.
Overall, I was impressed with Logiscope
6.1. It helps to find coding problems that compilers miss and is
especially valuable when you’re faced with a large code base.
Logiscope sorts efficiently through code and sifts it into different
buckets of quality, enabling you to focus your efforts accordingly.
It’s also a surprisingly capable coverage analysis tool that can
prove useful during test development.
Although Logiscope lives up to its claims of enabling you to find
bugs early in the development process, it’s not for everyone. The
decision to adopt Logiscope will depend largely on whether your
organization is a fan of quality modeling and, of course, your budget.
Telelogic Logiscope 6.1
9401 Jeronimo Rd.
Irvine, CA 92618
Tel: (877) 275-4777
Fax: (949) 830-8023
Pricing: Audit, $2,174 per seat to $7,500 for a multi-site floating
license. RuleChecker, $1,130 per seat to $3,900 for a multi-site
floating license. Reviewer (Audit and RuleChecker), $2,500 per seat
to $8,625 for a multi-site floating license, or $31,500
(batch-enabled). TestChecker, $2,000 per seat to $5,700 for a
multi-site floating license, or $18,810 (batch-enabled). Prices do
not include an 18% maintenance fee.
Platforms: Microsoft Windows NT 4 SP6, Windows 2000 SP1, Windows XP;
Sun Solaris 2.6 and later.
Rating: 3 stars
Pros:
- Logiscope estimates code quality based on accepted quality metrics.
- The quality model and coding rules are customizable to your
organization’s standards.
- The product presents a real-time view of test coverage.
Cons:
- Logiscope’s learning curve is too steep for those
unfamiliar with quality modeling.
- Its documentation lacks clarity and needs better
organization and a tutorial.
- Its batch-enabled tools (script-friendly) are pricey.
Allan McNaughton, a long-time developer and writer, is the
principal at Technical Insight LLC. He can be reached at email@example.com.