Coverage Cookbook/Testplan to functional coverage
- Coverage Metrics and Processes
- Coverage Examples (Practice)
- Requirements Writing Guidelines
Arriving at functional coverage closure is a process that starts with the functional specification for the design, which is analyzed to determine:
- What features need to be tested
- Under what conditions the features need to be tested
- What testbench infrastructure is required to drive and monitor the design's interfaces
- How the testbench will check that the features work
Deriving a functional coverage model is not an automatic process; it requires interpretation of the available specifications, and implementing the model requires careful thought.
The Process
The process that results in a functional coverage model is usually iterative and the model is built up over time as each part of the testbench and stimulus is constructed. Each iteration starts with the relevant and available functional specification documents which are analyzed in order to identify features that need to be checked by some combination of configuration and stimulus generation within the testbench.
In general terms, a testbench has two sides to it: a control path used to stimulate the design under test, getting it into the different states needed to check its features; and an analysis side used to observe how the design responds to the stimulus. A self-checking mechanism, usually referred to as the scoreboard, should be implemented in the testbench to ensure that the design is behaving correctly. The role of the functional coverage model is to ensure that the tests the DUT passes have checked the design features under all of the relevant conditions. The functional coverage model should be based on observations of how the design actually behaves rather than how it has been asked to behave, and should therefore be implemented in the analysis path. The easiest way to think about this is that the testbench, the stimulus that runs on it, and the scoreboard(s) have to be designed to test all the features of a design, and the functional coverage model is used to ensure that all the desired variations of those tests have been seen to complete successfully.
Verification is an incomplete process; even for "simple" designs it can be difficult to verify everything in the time available. For reasonably sized designs there is a trade-off between what could be verified and the time available to implement, run, and debug test cases, which leads to prioritization based on the technical and commercial background of the project. A wise verification strategy is to start with the highest priority items and work down the priority order, whilst being prepared to re-prioritize the list as the project progresses. The functional coverage model should evolve as each design feature is tested, and each additional part of the functional coverage model should be put in place before the stimulus.
Process Guidelines
The functional coverage model is based on functional requirements
The testbench is designed to test the features of the design. The role of the functional coverage model is to check that the different variants of those features have been observed to work correctly. Features may also be referred to as requirements or in some situations as stories.
For instance, a DUT generates a data packet with a CRC field. The CRC is based on the contents of the packet, which has, say, 10 variants. The testbench generates stimulus that makes the DUT produce the data packets, and the scoreboard checks the CRC field to make sure that the DUT has calculated it correctly. The role of the functional coverage monitor in this case is to ensure that all 10 packet variants are checked out.
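As a minimal sketch of what that coverage monitor might look like, assuming a 4-bit variant field and a crc_ok flag supplied by the scoreboard (both names are hypothetical, not taken from any particular design):

```systemverilog
// Hypothetical coverage class for the CRC packet example above.
// The names (variant, crc_ok) are illustrative only.
class crc_packet_coverage;

  bit [3:0] variant;  // which of the 10 packet variants was observed

  covergroup crc_packet_cg;
    // One explicit bin per packet variant - the goal is to see all 10 checked
    VARIANT: coverpoint variant {
      bins variants[] = {[0:9]};
    }
  endgroup

  function new();
    crc_packet_cg = new();
  endfunction

  // Sample only when the scoreboard has confirmed that the CRC check passed
  function void sample_packet(bit [3:0] pkt_variant, bit crc_ok);
    if (crc_ok) begin
      variant = pkt_variant;
      crc_packet_cg.sample();
    end
  endfunction

endclass
```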
Choose the most appropriate implementation
When deciding how to implement a functional coverage model, there is a choice of construct. The SystemVerilog language supports two main types of functional coverage:
| Coverage Type | Implemented As | Used for |
|---|---|---|
| Covergroup Modeling | SystemVerilog covergroups | Checking permutations of condition and state when a known result is achieved |
| Cover Property Modeling | SystemVerilog Assertions (sequences and properties) | Checking that a set of state transitions has been observed |
Covergroup functional coverage relies on sampling the value of one or more data fields to count how many times different permutations of those values occur.
Cover property, or temporal, coverage is based on counting how many times a particular sequence of states and/or conditions occurred during a test. Temporal coverage is usually used to get coverage on control paths or protocols where timing relationships may vary. Examples include:
- Whether a FIFO has been driven into an overflow or underflow condition
- Whether a particular type of bus cycle has been observed to complete
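For instance, the FIFO overflow condition above might be covered with a sketch like the following; the signal names (clk, rst_n, full, wr_en) are assumptions:

```systemverilog
// Hypothetical cover property: has a write ever been attempted while the FIFO was full?
module fifo_overflow_cover (input logic clk, rst_n, full, wr_en);

  // Completion of this property counts as coverage of the overflow condition
  COVER_FIFO_OVERFLOW: cover property (@(posedge clk) disable iff (!rst_n)
                                       full && wr_en);

endmodule
```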
The first step in developing a functional coverage model is deciding which of these two approaches should be taken for each of the areas of concern.
Functional Coverage should be based on observation
The stimulus side of the testbench should be used to drive the DUT; it should not be used for coverage, because the DUT or the stimulus may not work correctly, resulting in false coverage. Instead, functional coverage should be based on what is observed to happen at the DUT outputs in the testbench. In a UVM testbench, the functional coverage would be based on the content of analysis transactions. This has implications for the design of the testbench and the analysis transactions.
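In a UVM testbench this typically means a coverage subscriber connected to a monitor or scoreboard analysis port. The sketch below assumes a hypothetical bus_txn transaction with cmd and resp_ok fields:

```systemverilog
// Hypothetical UVM coverage collector fed from an analysis port.
// The transaction type and its fields (cmd, resp_ok) are assumptions.
import uvm_pkg::*;
`include "uvm_macros.svh"

class bus_txn extends uvm_sequence_item;
  rand bit [1:0] cmd;      // observed command type
  bit            resp_ok;  // set when the response has been checked and passed
  `uvm_object_utils(bus_txn)
  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass

class bus_coverage extends uvm_subscriber #(bus_txn);
  `uvm_component_utils(bus_coverage)

  bit [1:0] cmd;

  covergroup bus_cg;
    CMD: coverpoint cmd;   // permutations of observed commands
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    bus_cg = new();
  endfunction

  // Called for every transaction published on the connected analysis port
  function void write(bus_txn t);
    cmd = t.cmd;
    if (t.resp_ok) bus_cg.sample();  // only count observations that passed checking
  endfunction
endclass
```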
Functional Coverage Validity
Functional coverage data is only valid when a check passes. If you are not using automatic tools to merge coverage from different simulations, then you should ensure that the coverage model is only sampled when a test passes.
However, if you are using verification management tools to collect coverage from multiple simulations, then there is no need to do this since the coverage results from tests that fail would not be merged with the rest of the functional coverage data.
Collect coverage on negative testing separately from positive checking
If the verification plan calls for positive and negative testing of a feature, design and collect the functional coverage separately for both types of testing.
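One way to keep the two separate, sketched here with assumed cmd, err_injected and err_kind fields, is simply to maintain a covergroup for each type of testing and route observations to the appropriate one:

```systemverilog
// Hypothetical sketch: separate covergroups for positive and negative (error-injection) testing.
module pos_neg_coverage_example;

  // Coverage of normal (positive) traffic
  covergroup positive_cg with function sample(bit [1:0] cmd);
    CMD: coverpoint cmd;
  endgroup

  // Coverage of deliberately corrupted (negative) traffic
  covergroup negative_cg with function sample(bit [1:0] cmd, bit [1:0] err_kind);
    CMD      : coverpoint cmd;
    ERR_KIND : coverpoint err_kind;  // e.g. bad CRC, bad length, bad framing
  endgroup

  positive_cg pos_cg = new();
  negative_cg neg_cg = new();

  // Route each observation to the appropriate coverage model
  function void sample_txn(bit [1:0] cmd, bit err_injected, bit [1:0] err_kind);
    if (err_injected) neg_cg.sample(cmd, err_kind);
    else              pos_cg.sample(cmd);
  endfunction

endmodule
```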
Design the coverage model for analysis
A common mistake is to develop a thorough functional coverage model only to realize later that it is difficult to understand the results. Use the available language constructs to make it easier to understand where the missing coverage is. In other words, it is worth investing some time when creating the functional coverage model to save time later when you need to understand what the results are telling you. See the Coverage Design For Analysis page for more discussion about this topic.
Determining the appropriate level of fidelity
How closely does the coverage model need to match the detail of the design implementation? How much abstraction can be applied?
In simple cases where there are relatively few values to check, creating a coverage model that is very closely related to the design detail is a manageable problem. However, when there is a wide range of values, careful decisions need to be made about the level of detail. For instance, a device configuration with about 20 possible values can easily be modeled directly, whereas something like a 32-bit address range needs to be split into a set of interesting values or value ranges, which requires some abstraction.
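For example, a 32-bit address might be abstracted into a handful of regions of interest rather than 2^32 individual values; the region boundaries in this sketch are purely illustrative:

```systemverilog
// Hypothetical example: abstracting a 32-bit address into interesting regions
module addr_coverage_example;

  covergroup addr_cg with function sample(bit [31:0] addr);
    ADDR: coverpoint addr {
      bins low_boundary  = {32'h0000_0000};                       // first location
      bins code_region   = {[32'h0000_0004 : 32'h0FFF_FFFF]};
      bins data_region   = {[32'h1000_0000 : 32'h7FFF_FFFF]};
      bins periph_region = {[32'h8000_0000 : 32'hFFFF_FFFB]};
      bins high_boundary = {32'hFFFF_FFFC};                       // last word-aligned location
    }
  endgroup

  addr_cg cg = new();

endmodule
```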
Use a consistent coding style
Using a consistent coding style for implementing covergroups and assertion code is important since it makes them easier to understand and interpret. It also makes integration with verification planning and tracking tools easier.
See the SystemVerilog coding guidelines for an example covergroup coding style. Note that the actual style used is unimportant, but using a consistent style is important.
Covergroup based coverage considerations
When designing covergroup based coverage, there are a number of key points to consider based on the functionality and characteristics of the design.
Which values are important?
It is easy to collect information in an uninformed way by simply specifying that a variable be sampled by a coverpoint; this results in the automatic generation of coverage bins for up to 2**n values (where n is the number of bits in the variable), subject to the covergroup's auto_bin_max limit. For narrow fields this may be acceptable, but for wider fields it is unlikely to be very useful. In most cases there will be a number of values, or ranges of values, that are important to cover; these should be derived from the specification and coded into the coverpoint as a set of bins.
What are the dependencies between data variables?
When analyzing the different ways in which a design might be configured or a packet might be constructed, are there relationships between different variables? If such a relationship does exist, then a cross product between the variables should be specified in the covergroup.
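As a sketch, if a configuration's parity setting only matters in combination with its word length (both field names and encodings are assumptions), the dependency can be captured as a cross:

```systemverilog
// Hypothetical configuration coverage where the fields are inter-dependent.
module config_cross_example;

  covergroup uart_config_cg with function sample(bit [1:0] word_length, bit parity_en);
    WORD_LEN : coverpoint word_length;  // e.g. encodings for 5, 6, 7 or 8 bit characters
    PARITY   : coverpoint parity_en;
    // The cross records that every word length has been seen with and without parity
    LEN_X_PARITY : cross WORD_LEN, PARITY;
  endgroup

  uart_config_cg cfg_cg = new();

endmodule
```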
Are there boundary conditions that should be checked?
Are there particular values or combinations of values that should be checked because they are at the limits of operation or are at a known inflection point in the design? This will invariably require some "reading between the lines" of the specification and some design or implementation knowledge. Any boundary conditions identified should be added to the coverage model to ensure that they are tested.
Are there illegal conditions?
If there are conditions which should not occur, then the covergroup can have a term to trap those conditions. The term does not contribute to the functional coverage but it can help detect either a design or a testbench error.
Are there conditions that are not important?
Even the simplest of designs may have more ways of configuring it than are ever realistically going to be used. If there is a way to determine which modes are most likely to be used, then it is also likely that there will be some that are known to be either useless or very unlikely to be used; these can be omitted from the permutations of coverage values collected. There may also be a degree of prioritization here: certain configurations have to be tested first, and then later on, if there is time, lower priority configurations can be checked.
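Both of the last two questions map directly onto covergroup constructs: illegal_bins can trap conditions that should never occur, and ignore_bins can exclude modes that are known not to matter. A minimal sketch, with assumed mode encodings:

```systemverilog
// Hypothetical example of trapping illegal modes and excluding uninteresting ones
module mode_coverage_example;

  covergroup mode_cg with function sample(bit [2:0] mode);
    MODE: coverpoint mode {
      bins normal_modes[]   = {[0:4]};
      ignore_bins  unused   = {5};     // legal but known never to be used - excluded from coverage
      illegal_bins reserved = {6, 7};  // should never occur - flags a design or testbench error if hit
    }
  endgroup

  mode_cg cg = new();

endmodule
```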
When is the right time to sample the coverage?
The data coverage collection code needs to sample the data values it is referencing. The sample point needs to:
- Only occur when the associated check has passed
- Occur when the data values are valid
- Occur when the data values are stable
If sampling is based on receiving a UVM analysis transaction that comes directly from a monitor, the coverage collector may need a means of discriminating between valid and invalid analysis traffic. If the functional coverage collector is fed analysis transactions from a scoreboard, then the scoreboard should qualify that a check has passed before sending the analysis transaction.
Are there times when the data coverage sample is not valid?
If there are, then guards will have to be coded into the functional coverage implementation.
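One way to code such a guard is the iff qualifier on a coverpoint, as in this sketch (the data_valid flag is an assumption):

```systemverilog
// Hypothetical sketch of guarding coverage sampling against invalid data
module guarded_sample_example;

  covergroup data_cg with function sample(bit [7:0] data, bit data_valid);
    // The iff guard stops this coverpoint counting while the data is not valid
    DATA: coverpoint data iff (data_valid) {
      bins low  = {[8'h00 : 8'h7F]};
      bins high = {[8'h80 : 8'hFF]};
    }
  endgroup

  data_cg cg = new();

endmodule
```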
What information is required in the analysis transaction?
For testbenches based on a TLM methodology, such as OVM or UVM, the information required by the functional coverage needs to come from the analysis transaction. This implies that the analysis transaction has to have all the information that is going to be sampled, and this may well affect the transaction and the design of the monitor or scoreboard that is generating the transaction.
Summary
When considering how a design feature is to be tested, and what the covergroup based functional coverage model for that feature should be, remember to answer these questions:
| Coverage Criterion | Feature on which data coverage is to be collected |
|---|---|
| Which values are important? | Identify the important values to hit |
| What are the dependencies between the values? | Identify the important cross products between data values |
| Are there illegal conditions? | Identify values, or combinations of values, that should not occur |
| When is the right time to sample? | Specify a valid sampling point |
| When is the data invalid? | Identify conditions when the data should not be sampled |
Temporal coverage considerations
Control paths and protocols rely on timing relationships between signals to implement handshakes and transfers. These types of relationships are easiest to verify using temporal sequences and properties. Functional coverage information can be derived from these checks in several ways:
- If a property is asserted and never fails, but can be shown to complete, then it can be assumed that the functionality described in the property has been covered
- If a sequence completes, then the functional behavior encoded in that sequence has been observed and functional coverage can be assumed
- If a sequence completes, then a covergroup can be sampled to check under what conditions it completed (Hybrid coverage)
- If a property assertion passes, then the pass statement can be used to trigger sampling of a covergroup to check under what conditions the pass occurred (Hybrid coverage)
Checks should be specific
A property or a sequence should only check for one condition; checking multiple conditions in a single property or sequence leads to aliasing in the functional coverage.
If the sequence 'and' or 'or' operators are used to define a property, then to collect specific functional coverage on each potential path through the property, the property has to be unrolled and implemented as a separate sequence for each path.
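For example, a single cover property whose sequence ends in (ack_a or ack_b) only shows that one of the two paths completed; splitting it into one cover directive per path makes each path visible separately in the coverage results (the signal names are assumptions):

```systemverilog
// Hypothetical sketch of un-rolling an 'or' in a property for per-path coverage
module unrolled_cover_example (input logic clk, req, ack_a, ack_b);

  // A single property with an 'or' aliases the two completion paths together
  COVER_EITHER: cover property (@(posedge clk) req ##[1:4] (ack_a or ack_b));

  // Separate cover directives give distinct coverage for each path
  COVER_ACK_A:  cover property (@(posedge clk) req ##[1:4] ack_a);
  COVER_ACK_B:  cover property (@(posedge clk) req ##[1:4] ack_b);

endmodule
```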
Hybrid Coverage
There may be times when a hybrid of data coverage and temporal coverage techniques is required to collect specific types of functional coverage. For example, checking that all modes of protocol transfer have occurred is best done by writing a property or sequence that identifies when a transfer has completed successfully, and then sampling a covergroup based on the interesting signal fields of the protocol to check that all relevant conditions have been seen to occur.
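A minimal sketch of the hybrid approach, with signal names loosely modeled on an APB-style interface (all names and the address regions are assumptions):

```systemverilog
// Hypothetical hybrid coverage: a temporal condition detects a completed transfer
// and a covergroup records the conditions under which it completed.
module hybrid_coverage_example (input logic        pclk, presetn,
                                input logic        psel, penable, pready, pwrite,
                                input logic [31:0] paddr);

  covergroup transfer_cg with function sample(bit is_write, bit [31:0] address);
    DIR  : coverpoint is_write { bins read = {0}; bins write = {1}; }
    ADDR : coverpoint address  { bins low  = {[32'h0000_0000 : 32'h7FFF_FFFF]};
                                 bins high = {[32'h8000_0000 : 32'hFFFF_FFFF]}; }
    DIR_X_ADDR : cross DIR, ADDR;  // all transfer types seen in all address regions
  endgroup

  transfer_cg cg = new();

  // A transfer completes when the access phase ends with pready asserted
  sequence transfer_complete;
    psel && penable && pready;
  endsequence

  // The covergroup is sampled only when the temporal condition has been observed
  COVER_TRANSFER: cover property (@(posedge pclk) disable iff (!presetn) transfer_complete)
    cg.sample(pwrite, paddr);

endmodule
```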
The APB Bus protocol monitor contains an example implementation of using hybrid functional coverage.
Functional Coverage Examples
Different styles of design require alternative approaches to building the functional coverage model. Here are some worked examples that illustrate some common design categories:
| Design Type | Covergroups | Assertions | Functional Coverage modeling strategy | Link to example |
|---|---|---|---|---|
| Control based designs | Maybe | Yes | In this style of design there are timing relationships between different signals which need to be checked and seen to work | APB Bus Protocol Example |
| Peripheral style design, programmed via registers | Yes | Maybe | Most of the functional coverage can be derived from the content of the registers which are used to control and monitor the behaviour of the device. The register interface may also serve the data path. There may be scope for using assertions on signal interfaces. | UART Coverage Example |
| DSP datapath style design | Yes | No | In this class of design, the stimulus pumps data through the design datapath and compares the output against a reference model. The functional coverage is primarily about ensuring that the algorithm 'knobs' have been tested sufficiently. | Biquad Filter Example |
| Aggregator/Controller style, e.g. Memory Controller | Yes | Yes | Coverage of combinations of abstract stimulus on multiple ports, coverage of configuration registers, coverage of features of the target DDR specification | To be released |
| SoC with vertical reuse of UVM analysis components | Yes | Maybe | At the SoC level most functional coverage is use case driven, and only some interface or block level coverage can be reused | SoC Coverage Example |