
Coverage Cookbook/Coverage Metrics and process (Theory)

From Wiki
Revision as of 13:34, 15 February 2013 by ANA (talk | contribs)


Coverage/What is coverage

As the saying goes, "What doesn't get measured might not get done." And that is certainly true when trying to determine a design project's verification progress, or trying to answer the question "Are we done?" Whether your simulation methodology is based on a directed testing approach or constrained-random verification, to understand your verification progress you need to answer the following questions:

  • Were all the design features and requirements identified in the testplan verified?
  • Were there lines of code or structures in the design model that were never exercised?

Coverage is the metric we use during simulation to help us answer these questions. Moreover, once coverage metrics become an integral part of our verification process, they open up the possibility of more accurate project schedule predictions and provide a means for optimizing the overall verification process. At this stage of maturity we can ask questions such as:

  • When we tested feature X, did we ever test feature Y at the exact same time?
  • Has our verification progress stalled for some unexpected reason?
  • Are there tests that we could eliminate to speed up our regression suite and still achieve our coverage goals?

Hence, coverage is a simulation metric we use to measure verification progress and completeness.

Observability and Controllability

Fundamental to the discussion of coverage is an understanding of the concepts of controllability and observability. Informally, controllability refers to the ability to influence or activate an embedded finite state machine, structure, specific line of code, or behavior within the design by stimulating its input ports. Note that, while in theory a simulation testbench has high controllability of the design model's input ports during verification, it can have very low controllability of structures internal to the model. In contrast, observability refers to the ability to observe the effects of a specific internal finite state machine, structure, or stimulated line of code. Thus, a testbench generally has limited observability if it observes only the external ports of the design model, because internal signals and structures are hidden from it.

To identify a design error using a simulation testbench approach, the following conditions must hold:

  1. The testbench must generate proper input stimulus to activate a design error.
  2. The testbench must generate proper input stimulus to propagate all effects resulting from the design error to an output port.
  3. The testbench must contain a monitor that can detect the design error that was first activated then propagated to a point for detection.

It is possible to set up a condition where the input stimulus activates a design error that does not propagate to an observable output port. In these cases, the first condition cited above applies; however, the second condition is absent, as illustrated in Figure 1.

Figure 1. Poor observability & controllability misses bugs
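The activate/propagate/detect distinction can be mimicked in a few lines of ordinary software. The sketch below is plain Python rather than SystemVerilog, and the design, inputs, and bug are invented for illustration: it models a DUT whose internal adder is corrupted by one particular stimulus (activation), while an output-stage mux can mask the corruption so it never reaches an observable port (no propagation).

```python
# Toy "design under test": an internal adder with an injected bug.
# The names and the bug are illustrative, not from any real design.

def buggy_design(a: int, b: int, select: bool) -> int:
    """Model of a DUT: internal adder feeding an output mux."""
    internal_sum = a + b
    if a == 3 and b == 5:          # the bug: activated only by this stimulus
        internal_sum += 1          # corrupted internal state
    # Output stage: the mux propagates the sum only when 'select' is high.
    return internal_sum if select else 0

# Condition 1 (activation) without condition 2 (propagation):
# the bug fires internally, but the output is the expected mux value,
# so no monitor on the ports can see it.
masked = buggy_design(3, 5, select=False)      # returns 0, bug invisible

# All three conditions met: the corrupted value reaches an output port,
# where a monitor comparing against the expected sum can flag it.
observed = buggy_design(3, 5, select=True)
assert observed != 3 + 5                       # monitor detects the mismatch
```

The final comparison plays the role of the testbench monitor: only when the stimulus both activates the bug and steers its effect to an output can the monitor detect it.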

In general, coverage is a metric we use to measure the controllability quality of a testbench. For example, code coverage can directly identify lines of code that were never activated due to controllability issues with the simulation input stimulus. Similarly, functional coverage can identify expected behaviors that were never activated during a simulation run due to poor controllability.
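The core idea behind functional coverage can be sketched in a few lines of Python (a hedged illustration only: the opcodes, bins, and sampling scheme here are invented, not real tool output). The stimulus records which combinations of behaviors it actually reached, and the holes in the bin set are exactly the behaviors never activated:

```python
# Minimal sketch of functional-coverage bookkeeping: sample which "bins"
# (here, a cross of opcode and an operand property) the stimulus hit.
import random

covered_bins = set()

def sample(opcode: str, operand_is_zero: bool) -> None:
    """Record one functional-coverage sample (a cross of two features)."""
    covered_bins.add((opcode, operand_is_zero))

# Constrained-random stimulus:
random.seed(0)
for _ in range(20):
    sample(random.choice(["ADD", "SUB", "MUL"]), random.choice([True, False]))

all_bins = {(op, z) for op in ["ADD", "SUB", "MUL"] for z in [True, False]}
holes = all_bins - covered_bins   # behaviors never activated: poor controllability
print(f"coverage: {len(covered_bins)}/{len(all_bins)} bins, holes: {sorted(holes)}")
```

Any bin left in `holes` tells us which input stimulus to add or constrain next, which is precisely how coverage feeds back into the verification process.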

Although our discussion in this section is focused on coverage, it's important to note that we can address observability concerns by embedding assertions in the design model to facilitate low-level observability, and creating monitors within and on the output ports of our testbench to facilitate high-level observability.
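The observability benefit of embedded assertions can also be sketched in plain Python (the FIFO model and names are invented for illustration): the assertion flags the corrupted internal state at its source, instead of waiting for the corruption to propagate, possibly masked, to an output port.

```python
# Sketch: an embedded assertion provides low-level observability by
# checking an internal invariant the moment it is violated.

def fifo_push(fifo: list, item: int, depth: int = 4) -> None:
    """Push into a bounded FIFO; the assertion guards an internal invariant."""
    assert len(fifo) < depth, "internal invariant violated: push to a full FIFO"
    fifo.append(item)

f = []
for i in range(4):
    fifo_push(f, i)              # legal pushes
try:
    fifo_push(f, 99)             # overflow caught immediately at the source
except AssertionError as e:
    print("assertion fired:", e)
```

Without the embedded check, the overflow would silently corrupt internal state, and a port-level monitor could only catch it later, if the effect propagated at all.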

Summary

So what is coverage? Simply put, coverage is a metric we use to measure verification progress and completeness. Coverage metrics tell us what portion of the design has been activated during simulation (that is, the controllability quality of a testbench). More importantly, coverage metrics identify the portions of the design that were never activated during simulation, which allows us to adjust our input stimulus to improve verification.

There are different kinds of coverage metrics available to you, and the process of how to use them is discussed in the Coverage Cookbook examples.