Coverage/What is coverage
As the saying goes, "What doesn't get measured might not get done." And that is certainly true when trying to determine a design project's verification progress, or trying to answer the question "Are we done?" Whether your simulation methodology is based on a directed testing approach or constrained-random verification, to understand your verification progress you need to answer the following questions:
- Were all the design features and requirements identified in the testplan verified?
- Were there lines of code or structures in the design model that were never exercised?
Coverage is the metric we use during simulation to help us answer these questions. Yet, once coverage metrics become an integral part of our verification process, it opens up the possibility for more accurate project schedule predictions, as well as providing a means for optimizing our overall verification process. At this stage of maturity we can ask questions such as:
- When we tested feature X, did we ever test feature Y at the exact same time? (A covergroup sketch below shows one way to record this.)
- Has our verification progress stalled for some unexpected reason?
- Are there tests that we could eliminate to speed up our regression suite and still achieve our coverage goals?
Hence, coverage is a simulation metric we use to measure verification progress and completeness.
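To make the feature X / feature Y question above concrete, SystemVerilog functional coverage can record whether two conditions were ever exercised in the same cycle using a cross. The sketch below is illustrative only; the signal names (clk, wr_en, fifo_full) are assumptions, not part of any particular design.

```systemverilog
// Inside a module or interface where clk, wr_en, and fifo_full
// are visible (hypothetical names).
covergroup feature_cross_cg @(posedge clk);
  cp_write : coverpoint wr_en     { bins active = {1}; }
  cp_full  : coverpoint fifo_full { bins full   = {1}; }
  // Hit only when a write and a full FIFO occur in the same cycle
  write_x_full : cross cp_write, cp_full;
endgroup

feature_cross_cg feature_cross = new();
```

If the write_x_full cross never scores a hit across the entire regression suite, then feature X and feature Y were never tested at the same time.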
Observability and Controllability
Fundamental to the discussion of coverage is understanding the concepts of controllability and observability. Informally, controllability refers to the ability to influence or activate an embedded finite state machine, structure, specific line of code, or behavior within the design by stimulating various input ports. Note that, while in theory a simulation testbench has high controllability of the design model's input ports during verification, it can have very low controllability of an internal structure within the model. In contrast, observability refers to the ability to observe the effects of a specific internal finite state machine, structure, or stimulated line of code. Thus, a testbench generally has limited observability if it only observes the external ports of the design model, because internal signals and structures are often not directly visible at those ports.
To identify a design error using a simulation testbench approach, the following conditions must hold:
- The testbench must generate proper input stimulus to activate a design error.
- The testbench must generate proper input stimulus to propagate all effects resulting from the design error to an output port.
- The testbench must contain a monitor that can detect the design error that was first activated then propagated to a point for detection.
It is possible to set up a condition where the input stimulus activates a design error that does not propagate to an observable output port. In these cases, the first condition cited above is met, but the second is not, as illustrated in Figure 1.
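The sketch below shows how such masking can occur. The module and signal names are invented for illustration: the buggy add path is activated on every cycle, but if the stimulus only ever selects the logic-operation path, the error never propagates to the output port where a monitor could observe it.

```systemverilog
// Hypothetical RTL sketch: the buggy adder result is activated,
// but the error is masked before it reaches an output port.
module alu (
  input  logic [7:0] a, b,
  input  logic       sel_logic_op,  // 1: result comes from the AND path
  output logic [7:0] y
);
  logic [7:0] add_r, and_r;

  assign add_r = a + b + 8'd1;  // BUG: spurious carry-in (activated)
  assign and_r = a & b;

  // If the stimulus only drives sel_logic_op = 1, the bad add_r
  // value never propagates to y, so no output monitor can see it.
  assign y = sel_logic_op ? and_r : add_r;
endmodule
```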
In general, coverage is a metric we use to measure the controllability quality of a testbench. For example, code coverage can directly identify lines of code that were never activated due to controllability issues with the simulation input stimulus. Similarly, functional coverage can identify expected behaviors that were never activated during a simulation run due to poor controllability.
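As a concrete illustration of the code coverage point, consider the hypothetical module below: if no test ever drives an opcode value of 2'b11, branch coverage reports the default arm as uncovered, pointing directly at a controllability gap in the stimulus.

```systemverilog
// Hypothetical sketch: an unexercised branch shows up as a
// code coverage hole.
module mini_alu (
  input  logic [1:0] opcode,
  input  logic [7:0] a, b,
  output logic [7:0] result,
  output logic       err_flag
);
  always_comb begin
    err_flag = 1'b0;
    case (opcode)
      2'b00:   result = a + b;
      2'b01:   result = a - b;
      2'b10:   result = a & b;
      default: begin
        result   = '0;
        err_flag = 1'b1;  // never exercised => branch coverage hole
      end
    endcase
  end
endmodule
```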
Although our discussion in this section is focused on coverage, it's important to note that we can address observability concerns by embedding assertions in the design model to facilitate low-level observability, and creating monitors within and on the output ports of our testbench to facilitate high-level observability.
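As a sketch of the low-level observability idea, assuming a FIFO with a write-enable, a full flag, and an error flag (the signal names are hypothetical): an assertion embedded next to the suspect logic flags the error at the moment it is activated, whether or not it later propagates to an output port.

```systemverilog
// Hypothetical embedded assertion: flag an illegal write the cycle
// it happens, instead of waiting for corrupted data to reach an
// output port many cycles later.
property p_no_write_when_full;
  @(posedge clk) disable iff (!rst_n)
    (wr_en && fifo_full) |-> fifo_err;
endproperty

a_no_write_when_full : assert property (p_no_write_when_full)
  else $error("write accepted while FIFO was full");
```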
Summary
So what is coverage? Simply put, coverage is a metric we use to measure verification progress and completeness. Coverage metrics tell us what portion of the design has been activated during simulation (that is, the controllability quality of a testbench). More importantly, coverage metrics identify portions of the design that were never activated during simulation, which allows us to adjust our input stimulus to improve verification.
There are different kinds of coverage metrics available to you, and the process of how to use them is discussed in the Coverage Cookbook examples.
Coverage/Kinds of coverage
No single metric is sufficient to completely characterize the verification process. For example, we might achieve 100% code coverage during our simulation regressions. However, this would not mean that 100% of the functionality was verified. The reason is that code coverage does not measure the concurrent interaction of behavior within, or between, multiple design blocks, nor does it measure the temporal sequences of functional events that occur within a design. Similarly, we might achieve 100% functional coverage yet only achieve 90% code coverage. This might indicate either a problem with the fidelity of our functional coverage model (that is, an important behavior of the design was missing from the coverage model), or possibly that some functionality was implemented that was never specified (for example, perhaps the specification and testplan need to be updated with a late-stage change in the requirements). Hence, to get a complete picture of a project's verification progress we often need multiple metrics.
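A temporal example makes the gap concrete. In the hypothetical sketch below, every line of the design might be executed by isolated write and read transactions, yet only the cover property records whether a read ever immediately followed a write to the same address; the interface signals (clk, wr_en, rd_en, addr) are assumptions for illustration.

```systemverilog
// Hypothetical sketch: code coverage cannot tell us whether a read
// ever immediately followed a write to the same address; a cover
// property can.
property p_write_then_read;
  logic [7:0] addr_q;
  @(posedge clk) (wr_en, addr_q = addr) ##1 (rd_en && addr == addr_q);
endproperty

c_write_then_read : cover property (p_write_then_read);
```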
Coverage Classification
To begin our discussion on the kinds of coverage metrics, it is helpful to first identify various classifications of coverage. In general, there are multiple ways in which we might classify coverage, but the two most common are to classify it either by its method of creation (explicit versus implicit) or by its source of origin (specification versus implementation).
For instance, functional coverage is an example of an explicit coverage metric, which has been manually defined and then implemented by the engineer. In contrast, line coverage and expression coverage are two examples of implicit coverage metrics, since their definition and implementation are automatically derived and extracted from the RTL representation.
Coverage Space Classification
Coverage associated with the two categories we just described can be combined to form a coverage space, which is often referred to as a coverage model [1]. For instance, an explicit specification coverage space consists of coverage metrics that are manually created by an engineer and derived from a design's requirements document or specification. Another kind of explicit coverage is instrumentation created by an engineer that is based on the behavior encapsulated by the design implementation, such as the filling or emptying events associated with a particular FIFO in an RTL model.
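A minimal sketch of such implementation-derived instrumentation, assuming a FIFO of depth 16 with a fill_count signal (both assumptions for illustration): the covergroup is written by hand, making it explicit coverage, but the events it samples come from the RTL implementation rather than the specification.

```systemverilog
// Hypothetical covergroup for a FIFO of depth 16: did the tests ever
// drive the FIFO empty, full, and through the levels in between?
covergroup fifo_level_cg @(posedge clk);
  coverpoint fill_count {
    bins empty = {0};
    bins low   = {[1:5]};
    bins mid   = {[6:10]};
    bins high  = {[11:15]};
    bins full  = {16};
  }
endgroup

fifo_level_cg fifo_level = new();
```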
Similarly, an implicit implementation coverage space consists of coverage metrics that are automatically extracted by a tool (such as a simulator) and derived from a design implementation (such as an RTL model). Another part of the coverage space, the implicit specification coverage space, consists of coverage metrics that are automatically extracted by a tool and derived from the design specification. This part of the coverage space is currently an area of academic research, although a few EDA tools have recently emerged that attempt to automatically extract higher-level coverage properties by observing the effects of simulation patterns on an implementation (such as an RTL model). Note that these higher-level functional behaviors cannot be automatically extracted from the implementation alone, which is why they fall into the implicit specification coverage space.
Coverage Metrics
There are two primary forms of coverage metrics in production use in industry today:
- Code Coverage Metrics (implicit coverage)
- Functional Coverage/Assertion Coverage Metrics (explicit coverage)
References
[1] A. Piziali, Functional Verification Coverage Measurement and Analysis, Kluwer Academic Publishers, 2004.