Coverage Cookbook/Coverage Metrics and process (Theory)/Code Coverage Metrics

In this section, we introduce various coverage metrics associated with a design model's implicit implementation coverage space. In general, these metrics are referred to as code coverage or structural coverage metrics.

Benefits:

Code coverage, whose origins can be traced back to the 1960s, is one of the first methods invented for systematic software testing.[1] One of the advantages of code coverage is that it automatically describes the degree to which the source code of a program has been activated during testing, thus identifying structures in the source code that have not been activated during testing. One of the key benefits of code coverage, unlike functional coverage, is that creating the structural coverage model is an automatic process. Hence, integrating code coverage into your existing simulation flow is easy and does not require a change to either your current design or verification approach.

Limitations:

In our section titled What is coverage, we discussed three important conditions that must occur during simulation to achieve successful testing. They were:

  1. The testbench must generate proper input stimulus to activate a design error.
  2. The testbench must generate proper input stimulus to propagate all effects resulting from the design error to an output port.
  3. The testbench must contain a monitor that can detect the design error that was first activated then propagated to a point for detection.

Code coverage is a measurement of structures within the source code that have been activated during simulation. One limitation with code coverage metrics is that you might achieve 100% code coverage during your regression run, which means that your testbench provided stimulus that activated all structures within your RTL source code, yet there are still bugs in your design. For example, the input stimulus might have activated a line of code that contained a bug, yet the testbench did not generate the additional stimulus required to propagate the effects of the bug to some point in the testbench where it could be detected. In fact, researchers have studied this problem and found cases where a testbench achieved 90% code coverage, yet only 54% of the covered code would be observable during a simulation run.[2] That means a bug could exist on a line of code that had been marked as covered, yet the bug was never detected due to insufficient input stimulus to propagate its effects to an observability point.
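
As a hypothetical illustration of this observability gap, consider the following SystemVerilog sketch (the module and signal names are invented for this example): line coverage marks the buggy assignment as covered as soon as it executes, but the wrong value only reaches the output when the select input steers it there.

  // Hypothetical sketch: a covered line whose bug may never be observed.
  module obs_example (
    input  logic       clk,
    input  logic       sel,
    input  logic [7:0] a, b,
    output logic [7:0] y
  );
    logic [7:0] sum;

    always_ff @(posedge clk) begin
      sum <= a - b;      // BUG: should be a + b; executing this line marks it covered
      if (sel)
        y <= sum;        // the bug is observable only if sel is eventually driven to 1
      else
        y <= a;          // otherwise the wrong sum never propagates to the output
    end
  endmodule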

Another limitation of code coverage is that it does not provide an indication of exactly what functionality defined in the specification was actually tested. For example, you could run into a situation where you achieve 100% code coverage and then assume you are done. Yet, there could be functionality defined in the specification that was never tested, or even functionality that was never implemented! Code coverage metrics will not help you find these situations.

Even with these limitations, the automatic aspect of code coverage makes it a relatively simple way to identify input stimulus deficiencies in your testbench. It is also a great first choice of coverage metric as you start to evolve your advanced verification process capabilities.

Types of Code Coverage Metrics

Toggle Coverage

Toggle coverage is a code coverage metric used to measure the number of times each bit of a register or wire has toggled its value. Although this is a relatively basic metric, many projects have a testing requirement that all ports and registers, at a minimum, must have experienced a zero-to-one and one-to-zero transition.

In general, reviewing a toggle coverage analysis report can be overwhelming and of little value if not carefully focused. For example, toggle coverage is often used for basic connectivity checks between IP blocks. In addition, it can be useful to know that many control structures, such as a one-hot select bus, have been fully exercised.
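
The following sketch (the module and signal names are hypothetical) shows the kind of situation toggle coverage exposes: if the stimulus only ever drives the low values of a mode bus, the upper bits never experience a zero-to-one transition.

  // Hypothetical sketch: toggle coverage counts 0->1 and 1->0 transitions per bit.
  module toggle_example (
    input  logic       clk,
    input  logic       rst_n,
    input  logic [3:0] mode,
    output logic [3:0] mode_q
  );
    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n)
        mode_q <= '0;
      else
        mode_q <= mode;  // if stimulus only drives modes 0-3, bits [3:2] of mode_q
    end                  // never toggle from zero to one
  endmodule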

Line Coverage

Line coverage is a code coverage metric we use to identify which lines of our source code have been executed during simulation. A line coverage metric report will have a count associated with each line of source code indicating the total number of times the line has executed. The line execution count value is not only useful for identifying lines of source code that have never executed, but also useful when the engineer feels that a minimum line execution threshold is required to achieve sufficient testing.

Line coverage analysis will often reveal that a rare condition required to activate a line of code has not occurred due to missing input stimulus. Alternatively, line coverage analysis might reveal that the data and control flow of the source code prevented the line from executing, either due to a bug in the code or because the line is dead code that is not needed in the current IP configuration. For unused or dead code, you might choose to exclude or filter this code during the coverage recording and reporting steps, which allows you to focus only on the relevant code.
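
A hypothetical example of such a rare condition is shown below; the error-counting line only executes when the stimulus actually injects a parity error, so a zero execution count on that line points directly at a stimulus gap.

  // Hypothetical sketch: a line guarded by a rare input condition.
  module line_cov_example (
    input  logic       clk,
    input  logic       valid,
    input  logic       parity,
    input  logic [7:0] data,
    output logic [7:0] err_cnt
  );
    always_ff @(posedge clk) begin
      if (valid && (parity != ^data))  // requires an injected parity error
        err_cnt <= err_cnt + 8'd1;     // an execution count of zero here is a stimulus hole
    end
  endmodule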

Statement Coverage

Statement coverage is a code coverage metric we use to identify which statements within our source code have been executed during simulation. In general, most engineers find that statement coverage analysis is more useful than line coverage, since a statement often spans multiple lines of source code, or multiple statements can occur on a single line of source code.

A metrics report used for statement coverage analysis will have a count associated with each statement indicating the total number of times it has executed. This statement execution count is not only useful for identifying statements that have never executed, but also useful when the engineer feels that a minimum statement execution threshold is required to achieve sufficient testing.
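
The difference is easiest to see when several statements share one line, as in the hypothetical snippet below: line coverage records a single hit for the line, while statement coverage tracks each assignment separately.

  // Hypothetical sketch: two statements on one line of source code.
  module stmt_cov_example (
    input  logic clk,
    input  logic en,
    output logic a, b
  );
    always_ff @(posedge clk) begin
      if (en) begin a <= 1'b1; b <= 1'b0; end  // statement coverage counts the two
    end                                        // assignments individually
  endmodule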

Block Coverage

Block coverage is a variant on the statement coverage metric which identifies whether a block of code has been executed or not. A block is defined as a set of statements between conditional statements or within a procedural definition, the key point being that if the block is reached, all the lines within the block will be executed. This metric prevents unscrupulous engineers from achieving higher statement coverage by simply adding more statements to their code.
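
In the hypothetical snippet below, the two assignments between the begin and end form a single block; if the conditional is taken, both statements necessarily execute, so the block is counted once.

  // Hypothetical sketch: one block of statements counted as a unit.
  module block_cov_example (
    input  logic       clk,
    input  logic       start,
    output logic       busy,
    output logic [3:0] cnt
  );
    always_ff @(posedge clk) begin
      if (start) begin   // a single block: reaching it executes every statement inside
        busy <= 1'b1;
        cnt  <= cnt + 4'd1;
      end
    end
  endmodule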

Branch Coverage

Branch coverage (also referred to as decision coverage) is a code coverage metric that reports whether Boolean expressions tested in control structures (such as the if, case, while, repeat, forever, for and loop statements) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators.
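
For example, in the hypothetical decision below, branch coverage is satisfied once the whole expression has evaluated to true at least once and to false at least once, regardless of which individual operand caused each outcome.

  // Hypothetical sketch: the entire expression is one decision for branch coverage.
  module branch_cov_example (
    input  logic clk, req, grant, abort,
    output logic active
  );
    always_ff @(posedge clk) begin
      if (req && grant && !abort)  // one true-or-false predicate
        active <= 1'b1;
      else
        active <= 1'b0;
    end
  endmodule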

Expression Coverage

Expression coverage (sometimes referred to as condition coverage) is a code coverage metric used to determine whether each condition evaluated to both true and false. A condition is a Boolean operand that does not contain logical operators. Hence, expression coverage measures the Boolean conditions independently of each other.
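
Using the same style of decision as in the branch coverage example (the names are again hypothetical), expression coverage looks inside the expression and asks whether each operand has been observed both true and false.

  // Hypothetical sketch: each operand is measured independently.
  module expr_cov_example (
    input  logic req, grant, abort,
    output logic active
  );
    // Expression (condition) coverage is complete only when req, grant, and abort
    // have each been seen both true and false during evaluations of this expression.
    assign active = req && grant && !abort;
  endmodule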

Focused Expression Coverage

Focused Expression Coverage (FEC), which is also referred to as Modified Condition/Decision Coverage (MC/DC), is a code coverage metric often used by the DO-178B safety-critical software certification standard, as well as the DO-254 airborne electronic hardware certification standard. This metric is stronger than condition and decision coverage. The formal definition of MC/DC as defined by DO-178B is:

Every point of entry and exit in the program has been invoked at least once, every condition in a decision has taken all possible outcomes at least once, every decision in the program has taken all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision's outcome. A condition is shown to independently affect a decision's outcome by varying just that condition while holding fixed all other possible conditions. [3]

It is worth noting that completely closing Focused Expression Coverage can be non-trivial.
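
As a small worked example (the decision itself is hypothetical), the comment table below lists one minimal set of vectors that satisfies MC/DC for a three-condition decision; each condition is shown to independently affect the outcome by a pair of vectors that differ only in that condition.

  // Hypothetical sketch: a minimal MC/DC (FEC) vector set for one decision.
  module fec_example (
    input  logic a, b, c,
    output logic d
  );
    assign d = (a && b) || c;
    // One minimal set of four vectors (n + 1 vectors for n = 3 conditions):
    //   a b c | d
    //   1 1 0 | 1   with (0 1 0): only a differs, d differs -> a acts independently
    //   0 1 0 | 0
    //   1 0 0 | 0   with (1 1 0): only b differs, d differs -> b acts independently
    //   1 0 1 | 1   with (1 0 0): only c differs, d differs -> c acts independently
  endmodule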

Finite-State Machine Coverage

Today's code coverage tools are able to identify finite state machines within the RTL source code, which makes it possible to automatically extract FSM code coverage metrics, such as the number of times each state of the state machine was entered, the number of times the FSM transitioned from one state to each of its neighboring states, and even sequential arc coverage to identify sequences of state transitions.
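
The hypothetical state machine below illustrates what these tools extract: each enumerated state, and each arc between states, becomes a coverage item, so a stimulus set that never asserts the error input leaves the ERROR state and both of its arcs uncovered.

  // Hypothetical sketch: a small FSM from which state and arc coverage is extracted.
  module fsm_cov_example (
    input  logic clk, rst_n, start, done, error,
    output logic busy
  );
    typedef enum logic [1:0] {IDLE, RUN, ERROR} state_t;
    state_t state;

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n)
        state <= IDLE;
      else
        case (state)
          IDLE:  if (start) state <= RUN;
          RUN:   if (error)     state <= ERROR;   // never asserting 'error' leaves
                 else if (done) state <= IDLE;    // ERROR and both of its arcs uncovered
          ERROR: state <= IDLE;
        endcase
    end

    assign busy = (state == RUN);
  endmodule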

Typical Code Coverage Flow

The objective of gathering and analyzing code coverage metrics is to identify portions of the source code that have not been exercised by the current verification environment. From a project perspective, it is generally best to wait until the implementation of the RTL is close to complete before seriously starting to gather and analyze code coverage results. Otherwise, you can waste a lot of cycles trying to make sense of a moving target from the changing RTL code. With that said, we recommend that you at least run a few simulations that capture coverage metrics early in the project cycle (that is, prior to seriously gathering coverage metrics) to work out any potential issues in your coverage flow.

From a high-level perspective, there are generally three main steps involved in a code coverage flow, which include:

  1. Instrument the RTL code to gather coverage
  2. Run simulation to capture and record coverage metrics
  3. Report and analyze the coverage results

Part of the analysis step is to identify coverage holes, and determine if the coverage hole is due to one of three conditions:

  1. Missing input stimulus required to activate the uncovered code
  2. A bug in the design (or testbench) that is preventing the input stimulus from activating the uncovered code
  3. Unused code for certain IP configurations, or code that is expected to be unreachable under normal operating conditions

The first condition requires you to either write additional directed stimulus or adjust random constraints to generate the required input stimulus that targets the uncovered code. The second condition obviously requires the engineer to fix the bug that is preventing the uncovered code from being exercised. The third condition can be addressed by directing the coverage tool to exclude the unused or unreachable code during the coverage recording and reporting steps. Formal tools can be used to automate the identification of unreachable code, and then automatically generate the exclusion files.
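
The third condition often arises from configuration parameters, as in the hypothetical sketch below: with ENABLE_ECC tied off, the ECC branch can never execute, so it is a legitimate candidate for exclusion (or for a formal unreachability proof) rather than a stimulus problem.

  // Hypothetical sketch: code that is unreachable in one IP configuration.
  module cfg_example #(
    parameter bit ENABLE_ECC = 1'b0
  ) (
    input  logic       clk,
    input  logic [7:0] data,
    output logic [7:0] data_q,
    output logic       ecc_err
  );
    always_ff @(posedge clk) begin
      data_q <= data;
      if (ENABLE_ECC)
        ecc_err <= ^data;  // placeholder check; never executes when ENABLE_ECC = 0,
      else                 // so exclude it from coverage for this configuration
        ecc_err <= 1'b0;
    end
  endmodule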

References

[1] J. Miller, C. Maloney, "Systematic mistake analysis of digital computer programs," Communications of the ACM, vol. 6, no. 2, pp. 58-63, February 1963.

[2] F. Fallah, S. Devadas, K. Keutzer, "OCCOM: Efficient Computation of Observability-Based Code Coverage Metrics for Functional Verification," Proceedings of the Design Automation Conference, pp. 152-157, 1998.

[3] DO-178B, "Software Considerations in Airborne Systems and Equipment Certification," RTCA, December 1992, pp. 31, 74.

[4] M. Stuart, D. Dempster, Verification Methodology Manual for Code Coverage in HDL Designs, TransEDA, August 2000.