2013-05-06

Bundling compliance tests, constrained-random tests, a coverage suite and an assertion suite along with VIPs is a challenging task.

But a complete verification solution should not only provide this core functionality; it should also be customization-ready. If it is not, customization can become a nightmare for both the customer and the vendor.

Unless customization is thought out in advance, deployment will fail. Customization is not just about getting the primary functionality right; deployment deserves equal attention. Customers need to make sure that the VIP solutions they purchase are deployment-ready.

Building a Verification solution that is Reusable, Configurable and Programmable is not optional but mandatory.

SystemVerilog's support for object-oriented programming, along with standard verification methodologies such as UVM, makes reuse and customization possible in an elegant way. At Arrow, we have applied the concepts of object-oriented programming to a wide range of problems beyond the primary VIP functionality.

Some of the customization problems we have solved are shared below.

Customization requirements:
[1] Register interface
[2] DUT Sideband signals
[3] Feature control
[4] Parameter randomization control
[5] Debug logging verbosity control
[6] Regress list generation
[7] Coverage trimming
[8] Code release mechanism
[9] DUT integration

Register interface

All interface standard specifications define a set of configuration, control and status attributes. Typically, an RTL design implements these as registers. The specification may or may not define the bus on which these registers are accessed, and it might not define the actual address map either. Different RTL designs may choose different buses and address maps based on system-level requirements.

The challenge for VIPs is to build tests and environments that are not affected by the physical bus or the register address map. UVM RAL addresses exactly this problem.

UVM RAL provides a way to model registers abstracted from the real address map and the physical bus implementation. Tests and sequences perform their reads and writes on this RAL register model. RAL provides a hook that is activated on every read and write; the BFM of the physical bus implements this hook and is attached to the RAL model. The address-translation mechanism provided by RAL captures the address-map specifics of the implementation.
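
To illustrate, this hook is typically realized as a uvm_reg_adapter attached to the register map. The sketch below assumes a hypothetical APB-style bus item (apb_rw) and sequencer standing in for the VIP's actual physical-bus classes; it is not Arrow's code.

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Translates generic RAL operations into items the physical-bus BFM can drive.
  class ral2apb_adapter extends uvm_reg_adapter;
    `uvm_object_utils(ral2apb_adapter)

    function new(string name = "ral2apb_adapter");
      super.new(name);
    endfunction

    virtual function uvm_sequence_item reg2bus(const ref uvm_reg_bus_op rw);
      apb_rw bus_item = apb_rw::type_id::create("bus_item");
      bus_item.write = (rw.kind == UVM_WRITE);
      bus_item.addr  = rw.addr;
      bus_item.data  = rw.data;
      return bus_item;
    endfunction

    virtual function void bus2reg(uvm_sequence_item bus_item, ref uvm_reg_bus_op rw);
      apb_rw item;
      if (!$cast(item, bus_item))
        `uvm_fatal("ADAPT", "Unexpected bus item type")
      rw.kind   = item.write ? UVM_WRITE : UVM_READ;
      rw.addr   = item.addr;
      rw.data   = item.data;
      rw.status = UVM_IS_OK;
    endfunction
  endclass

  // Hooked up once in the environment, so tests and sequences see only the register model:
  //   regmodel.default_map.set_sequencer(apb_sequencer, ral2apb_adapter::type_id::create("adapter"));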

Various register-access sequences may need DUT-specific customization. Take a basic sequence as an example: the DUT initialization sequence. Every DUT has its own requirements in terms of the order and values of the registers programmed during initialization. All standard register-access sequences (low-power entry, link startup, etc.) are enumerated and implemented per the specification.
Arrow VIP allows DUT-specific customization by overriding the RAL sequence type using UVM's factory type-override capability.
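
A minimal sketch of such an override, assuming hypothetical names (vip_init_seq, acme_init_seq, a regmodel handle and CLK_CFG/PHY_CTRL registers) rather than Arrow's actual classes:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Layers DUT-specific initialization on top of the standard RAL init sequence.
  class acme_init_seq extends vip_init_seq;
    `uvm_object_utils(acme_init_seq)

    function new(string name = "acme_init_seq");
      super.new(name);
    endfunction

    virtual task body();
      uvm_status_e status;
      // This DUT needs its clocks and PHY enabled before the generic flow runs
      regmodel.CLK_CFG.write(status, 32'h0000_0011);
      regmodel.PHY_CTRL.write(status, 32'h0000_0001);
      super.body();   // then the specification-defined initialization steps
    endtask
  endclass

  // In the test, before the environment starts the init sequence:
  //   vip_init_seq::type_id::set_type_override(acme_init_seq::get_type());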

DUT Sideband signals

A specification might require the DUT to implement various indications and events. It might not specify how these must be implemented, so designers often choose to expose them as sideband signals.

These specification-defined indications might need to be checked in the tests. Different DUTs may use different names for these signals and may follow different protocols to communicate the information on their sideband interfaces.

The customization challenge here is to insulate the test suite and environment from variations in the implementation of the sideband signaling. Arrow solves this problem by creating a placeholder API class with virtual tasks. This placeholder is instantiated in the environment, and its virtual tasks are called from the test suite and environment.

The base API class can be extended to create a DUT-specific API class that implements the API according to the specifics of the DUT's sideband-signal implementation. This mechanism supports different DUT sideband-signal handling without changes to the tests or the environment.
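
A sketch of this pattern, with illustrative names (sideband_api_base, acme_sideband_api, acme_sideband_if) standing in for the actual API:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Placeholder API: tests and checkers call these virtual tasks and never see signals.
  class sideband_api_base extends uvm_object;
    `uvm_object_utils(sideband_api_base)

    function new(string name = "sideband_api_base");
      super.new(name);
    endfunction

    virtual task wait_for_link_up();
      `uvm_warning("SIDEBAND", "wait_for_link_up() not bound to a DUT implementation")
    endtask

    virtual task wait_for_error_event();
      `uvm_warning("SIDEBAND", "wait_for_error_event() not bound to a DUT implementation")
    endtask
  endclass

  // DUT-specific binding: knows the actual signal names and handshake.
  class acme_sideband_api extends sideband_api_base;
    `uvm_object_utils(acme_sideband_api)

    virtual acme_sideband_if vif;   // set by the environment

    function new(string name = "acme_sideband_api");
      super.new(name);
    endfunction

    virtual task wait_for_link_up();
      @(posedge vif.link_up);       // this DUT exposes a level-sensitive indication
    endtask
  endclass

  // Selected via the factory, with no change to tests or environment:
  //   sideband_api_base::type_id::set_type_override(acme_sideband_api::get_type());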

Feature control

Many specifications include a set of advanced and optional features, which may or may not be supported by the DUT. Hence, a clean mechanism is needed to enable or disable the testing of these features. Arrow VIP solutions provide this control through a global config object, whose settings can be made either from the tests or from the command line.

Error-injection control is another major aspect of any verification environment. Hierarchical control is needed to cover requirements ranging from directed bring-up to fully random testing. Arrow VIP provides a wide range of flexible options, from injecting a specific error on the next transaction to a single percentage knob that enables all possible errors.
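
A sketch of such a global control object, with illustrative field and plusarg names rather than Arrow's actual knobs:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class vip_feature_cfg extends uvm_object;
    `uvm_object_utils(vip_feature_cfg)

    // Optional-feature enables, settable from the test or the command line
    bit enable_low_power = 0;
    bit enable_hot_reset = 0;

    // Error-injection controls: a one-shot for directed bring-up,
    // a single percentage knob for full-random regressions
    bit          inject_err_on_next_xfer = 0;
    int unsigned err_inject_pct          = 0;   // 0..100, applies to all error types

    function new(string name = "vip_feature_cfg");
      super.new(name);
      void'($value$plusargs("ENABLE_LOW_POWER=%d", enable_low_power));
      void'($value$plusargs("ERR_INJECT_PCT=%d", err_inject_pct));
    endfunction
  endclass

  // Published once by the test so every component sees the same controls:
  //   uvm_config_db#(vip_feature_cfg)::set(null, "uvm_test_top.env*", "feature_cfg", cfg);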

Parameter randomization control

Most specifications and designs provide a series of configurable parameters, each with a range of programmable values. At Arrow, we capture all of these in a global config object and add constraints for each field. Typically this config is randomized once per simulation and passed on to the verification environment for configuration. The config can be extended, and additional constraints added, to cater to DUT-specific requirements.

Protocol-level transactions contain many fields that can take a wide range of values. Arrow's transaction modeling follows the same approach: transactions can be extended and additional constraints added so that the traffic fed into the DUT matches its specific requirements.

The specification also defines a number of mechanisms for error detection, recovery and reporting. Error-injection information is captured in an error config, which specifies the type of error to inject along with the corrupted field values to use. The error config can be extended and additional constraints imposed to customize which errors are injected and which corrupted field values are used.

With this approach, the config, transactions and error config can be cleanly overridden with DUT-specific versions without affecting the tests and environment.
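
For example, a DUT-specific config can layer additional constraints on top of the VIP config and be swapped in through the factory; vip_link_cfg, num_lanes and max_payload_bytes below are illustrative names, not Arrow's actual fields:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class acme_link_cfg extends vip_link_cfg;
    `uvm_object_utils(acme_link_cfg)

    function new(string name = "acme_link_cfg");
      super.new(name);
    endfunction

    // This DUT only supports x1/x2 links and payloads up to 256 bytes
    constraint acme_lanes_c   { num_lanes inside {1, 2}; }
    constraint acme_payload_c { max_payload_bytes <= 256; }
  endclass

  // The environment then builds and randomizes the DUT flavour everywhere:
  //   vip_link_cfg::type_id::set_type_override(acme_link_cfg::get_type());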

Debug logging verbosity control

Most of the standard verification methodologies (VMM, OVM, UVM) do provide logging utility base classes. Using them as-is might not lead to optimal logging control, because these base classes have no understanding of the domain. For example, in a typical communication protocol, the VIP might have multiple layers, and it makes sense to give each layer its own control over logging verbosity. Additional infrastructure needs to be built, in advance, to address this requirement.

Apart from functional control, it's also important to tune the logging to the audience: user and developer needs must be addressed separately. Users of the verification components are typically not interested in their internal workings; they want the black-box view, and logging from that perspective. Developers, on the other hand, care about the flow of information through the various sub-blocks, queues, threads and so on. This distinction needs to be carefully designed and implemented. By default, the logging should provide basic information indicating what the test has done; during debug, it should offer controls that distinguish user verbosity from developer verbosity.
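
A sketch of this kind of layered, audience-aware control from a test, using standard uvm_component verbosity methods; the base test, the component handles (env, phy_agent, scoreboard) and the "SB/DEV" message ID are illustrative:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class acme_debug_test extends acme_base_test;   // acme_base_test builds env
    `uvm_component_utils(acme_debug_test)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    virtual function void end_of_elaboration_phase(uvm_phase phase);
      super.end_of_elaboration_phase(phase);
      // User view: what the test did, at default verbosity, everywhere
      env.set_report_verbosity_level_hier(UVM_MEDIUM);
      // Developer view: internals of the layer being debugged only
      env.phy_agent.set_report_verbosity_level_hier(UVM_DEBUG);
      // Or enable a single developer-tagged message ID
      env.scoreboard.set_report_id_verbosity("SB/DEV", UVM_DEBUG);
    endfunction
  endclass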

With this approach, users of the verification components are not bogged down by excessive information, which is the main cause of slow debug. With this kind of functionality- and audience-sensitive logging control, verification engineers can debug DUT failures quickly and effectively.

Regress list generation

Tests are run using command lines, and a group of command lines forms the regress list. Seeding runs the same command line with different seeds, producing stimulus that is different yet repeatable.

A regress-list generation mechanism must support two critical features so that regress lists can be customized to the DUT's requirements: the ability to override command-line arguments, which changes the values of functional arguments and thereby the behavior of the tests, and the ability to alter the seed count based on the importance of the feature being tested.

At the same time, the base command lines should not be modified, so that any change to them in newer releases is easily reflected in the seeded and customized command lines.

Arrow's CheckMate solutions are equipped with tools through which the user can specify seed counts and/or override command-line arguments without touching the base command lines. In this way, Arrow has applied object-oriented concepts to command-line management.

Coverage trimming

Compliance coverage covers all parameter values and scenarios defined by the specification. These may not all be supported by, or applicable to, the DUT, which means the compliance coverage might never reach 100%. Coverage needs to be trimmed to make it effective for the DUT, whether in terms of the total bins, the ranges covered by the bins or the applicability of the coverpoints themselves.
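
As an illustration, trimming knobs can be passed into the coverage model so that unsupported values, ranges and coverpoints drop out cleanly; cov_trim_cfg, link_item and the GEN1/GEN2/GEN3 speeds below are assumptions, not Arrow's actual classes:

  // Coverage class with DUT trimming knobs supplied at construction.
  class link_coverage;

    covergroup speed_cg (bit gen3_supported, int unsigned max_lanes)
                        with function sample (link_item tr);
      cp_speed : coverpoint tr.speed {
        bins speeds[] = {GEN1, GEN2, GEN3};                    // full compliance space
        ignore_bins no_gen3 = {GEN3} iff (!gen3_supported);    // trimmed for this DUT
      }
      cp_lanes : coverpoint tr.lane_count {
        bins lanes[] = {[1:max_lanes]};                        // bin range trimmed to the DUT
      }
    endgroup

    function new(cov_trim_cfg cfg);
      speed_cg = new(cfg.gen3_supported, cfg.max_lanes);
      // A coverpoint that does not apply to the DUT at all can be dropped from scoring:
      if (!cfg.lane_coverage_applicable)
        speed_cg.cp_lanes.option.weight = 0;
    endfunction
  endclass

  // The monitor or scoreboard then calls:
  //   cov.speed_cg.sample(observed_item);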

Arrow's CheckMate solutions provide tools through which the user can configure the coverage to meet the DUT's requirements, yet newer coverage updates can be loaded without conflicting with the user's modifications. This is yet another object-oriented concept that we, at Arrow, have applied to coverage management.

Code release mechanism

The same verification solution deployed at multiple customer sites requires slightly different customizations for each customer. This calls for a well-thought-out version-control mechanism to ensure that the main development trunk does not get broken.

For each of Arrow's customers, a branch of the CheckMate solutions is created to ensure stable and healthy releases without affecting the main trunk or other customers' branches. This lets Arrow's support staff fix customer issues quickly in the customer's own branch and buys time before the proper fix is made in the trunk.

Periodically, the main trunk is integrated into the customer branches, regressed and released to the customers. Every release is assigned a unique number consisting of the date and a version number, which makes it easy to communicate the release version when discussing issues and fixes.

DUT integration

As much as we would like to make things reusable in scalable ways, some things are not. For instance, the testbench top module, where the DUT is instantiated, and the top-level environment are not conducive to reuse in object-oriented ways. In such cases, a customer-specific copy is created in a folder named after the customer.

This ensures that customer-specific files are cleanly separated out without polluting the main code. We know when to stop.

For customers, it's critical to ensure that VIPs are not only "Verification-3D" ready but also go a step beyond: a step beyond to ensure the VIPs are deployment-ready and can stand the test of customization.

Is the next VIP you are licensing deployment-ready?
