Product Documentation
Virtuoso ADE Verifier User Guide
Product Version IC23.1, November 2023



Implementation Runs in ADE Verifier

After you have set up your design verification project with mapped requirements and implementations, you capture the simulation results of the implementation cellviews. Verifier uses these results to determine the verification status of the entire project, and for each requirement in the project. To capture the results, you can run the implementation cellviews from Verifier or load the existing run results in Verifier. You can also review the results.

You can set various aspects of the implementation runs that you initiate from Verifier. The main graphical user interface lets you enable or disable runs, whereas the run preferences let you control the run settings. You can also manage implementation runs that include ADE Assembler run plans.

You can organize implementation cellviews into implementation sets. An implementation cellview can belong to multiple sets. You can then run all the cellviews in a set.

Verifier lets you generate batch scripts to run implementation cellviews of your project from the command line interface. You can schedule these scripts to run as required. For example, you can set a batch script to run all the cellviews of an implementation set, say run_weekly. You can create a cron job to run this script once a week, or as required.
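For example, a generated batch script for the run_weekly set could be driven by a crontab entry such as the following. The script path and log location here are illustrative placeholders, not names that Verifier produces:

# Run the run_weekly implementation set every Monday at 02:00
0 2 * * 1 /projects/verif/scripts/run_weekly_batch.sh >> /projects/verif/logs/run_weekly.log 2>&1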

Related Topics

Implementation Simulation Flow

Setting Pre-run Scripts

Implementation Sets

Simulation of Implementations

Simulation of Implementations Externally

Environment Setup to Run Simulations in ADE Verifier

Simulation Run Results

Batch Scripts in ADE Verifier

Scheduling Runs

Implementation Simulation Flow

Verifier lets you simulate implementations to capture results, or load existing simulation results. It uses the results to determine the verification status of the requirements mapped to those implementations. The overall verification project status depends on the verification status of all the requirements.

The following is a sample flow for simulating implementations for verifying your design:

  1. Specify the implementations that you want to run from Verifier, and the implementations whose results you want to load in Verifier. For details, see Implementation Runs in ADE Verifier.
  2. Organize the implementations that you want to run from Verifier in implementation sets. For details, see Implementation Sets.
  3. Run the implementations from Verifier, or run the implementations from outside Verifier and load the results in Verifier. For details, see Simulation of Implementations.
    Verifier lets you generate batch scripts, which you can use to schedule the runs. For details, see Batch Scripts in ADE Verifier and Scheduling Runs.
  4. Review the results from Verifier, or directly in the main application of the implementations. For details, see Simulation Run Results.
    If required, you can modify the design and rerun the implementations.

When Verifier has the simulation results, you can review the verification reports. For details, see Managing Verification Results.

Setting Pre-run Scripts

You can use a pre-run script to update a set of implementations before running the simulation. For example, you can enable or disable tests or corners, use setup states, or change variables and parameters. A pre-run script is a script file that is executed before the simulation starts.

To add a pre-run script, do one of the following:

Additionally, you can specify a relative or absolute path to the script file, or modify the specified path. You can also use shell environment variables in the file path. For example, you can specify ${MYSCRIPTS}/prerun.il. To retrieve the name of a pre-run script file, use the SKILL function verifGetImpSetPreRunScript. To specify a new pre-run script file, use the SKILL function verifSetImpSetPreRunScript.
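As a sketch, assuming the functions take a Verifier session handle and an implementation set name as arguments (the exact signatures are documented in the Virtuoso ADE Verifier SKILL reference), the pair could be used as follows:

; Assumed arguments: a session handle and the implementation set name
verifSetImpSetPreRunScript(session "run_weekly" "${MYSCRIPTS}/prerun.il")
; Retrieve and print the currently assigned pre-run script
printf("Pre-run script: %s\n" verifGetImpSetPreRunScript(session "run_weekly"))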

You can use pre-run scripts within implementation sets only; they are not used if you run a simulation for an implementation outside of an implementation set. When you run the simulation for an implementation set, the pre-run script applies to all the implementations in the set, except for the implementations that are in Load mode.

During the simulation, the pre-run script in use is shown in the PreRunScript column of the Run tab for the individual implementations. Once the simulation is complete, the pre-run script information for the implementations is removed from the PreRunScript column.

When the simulation is complete, the Pre-Run Script field is populated in the Information assistant. The Information assistant does not display the Pre-Run Script row if the pre-run script was not run because of errors.

Examples

The following example script demonstrates changing variables, tests, corners, setup states, and model files.

printf("** PreRun Script Example **\n")
printf("** change global variable values **\n")
maeSetVar("vdd" "1.3")
printf("** change global temperature **\n")
maeSetVar("temperature" "25")
printf("** enable/disable tests **\n")
maeSetSetup(?tests '("AC") ?enabled nil)
maeSetSetup(?tests '("TRAN") ?enabled t)
printf("** Enable/Disable some corners **\n")
maeSetSetup(?corners '("myCorner1_0") ?enabled nil)
printf("** Load Setup State **\n")
maeLoadSetupState("test" ?tags list("vars") ?operation 'overwrite)
printf("** change some corner values **\n")
enCorners = maeGetSetup(?typeName "corners" ?enabled t)
foreach(cornerName enCorners
    when((cornerName != "nominal")
        maeSetVar("vdd" 1.4 ?typeName "corner" ?typeValue list(cornerName))
    )
)
printf("** Change model files **\n")
foreach(testName maeGetSetup(?typeName "tests" ?enabled t)
    maeSetEnvOption(testName ?options '(("modelFiles" (("gpdk045.scs" "tt")))))
    maeSetEnvOption(testName ?options '(("includePath" "./gpdk045_v_3_5/models/spectre")))
)

For information about the mae* functions, see Virtuoso ADE SKILL Reference.

Setting Up Run Preferences

To set the run options:

  1. Choose Edit – Preferences.
    The Preferences form appears.
  2. Click the Run Options tab.
  3. Edit the options as required.
    You can edit the following types of preference options organized in group boxes.
    • Implementations: Set options to show and run mapped implementations, and run implementations in debug mode.
    • Report Identical History: Set options to do pre-run checks on implementations and results.
    • Check the following items: Set options for tests, test states, views, model and definition files.
    • Run Controls: Set the maximum number of jobs that Verifier can run in parallel, and the default job policy that Verifier must use when running implementations.

  4. Click OK.

Verifier uses your run settings when it runs implementations.

Related Topics

Run

Enabling or Disabling Implementation Runs from Verifier

You can run an implementation cellview from Verifier to capture the results used for verification. Alternatively, you can run the implementation cellview from an external application and load its results in Verifier. In this case, you must select the cellview history where the results are stored and load them in Verifier.

To ensure that you can run an implementation cellview from Verifier:

  1. Click the Setup tab or the Run tab to view the implementations.
  2. Ensure that the Run check box is selected for the implementation cellview.
    If required, display the Run column. For this, right-click anywhere on the header area in the Implementations pane and select Run.

The option to run the implementation cellview becomes available in the Run tab.

To ensure that you can only load the run results of an implementation cellview:

  1. Click the Setup tab or the Run tab to view the implementations.
  2. Ensure that the Run check box is not selected for the implementation cellview.
  3. Select the history from the History drop-down list of the implementation cellview where the results are available.
    Verifier displays the tests and outputs of the selected history. If the History column is not displayed, click Show and select History.

The option to load the results becomes available in the Run tab.

Implementation Sets

You organize and run the implementation cellviews of your verification project using the features available in the Run tab.

As shown in the following figure, all the implementation cellviews appear under the Implementations node. You can optionally create implementation sets to group the implementation cellviews according to your needs. For example, you can group them according to their run frequency. You can create a set called run_daily to group the implementation cellviews that must be run every day. Similarly, you can create run_weekly to group the implementation cellviews that must be run once a week. You can then choose to run all the implementation cellviews that belong to an implementation set. An implementation cellview can be included in multiple sets.

In this figure:

The implementation sets follow a hierarchical methodology that helps you to define dependencies for runs. All implementations at a given level of hierarchy are considered equivalent.

For example, consider the following figure:

This figure shows two top-level implementation sets, run_weekly and run_daily, which can be run independently because there is no dependency between them.

The simplest case of a dependency is visible in run_daily, where implementation 7 will only run after implementation 6 has successfully completed.

The set run_weekly contains two implementations, implementation 1 and implementation 2, and a child set myGroup. These two implementations must finish successfully before the contents of myGroup, implementation 4 and implementation 5, are even considered for running. Specifically, for any child implementation set, the simulation of all sibling implementations of its parent set must finish before the implementations of the child set can be released to run.

Additionally, within run_weekly, implementation 3 is a dependent of implementation 2. This means that implementation 3 will be scheduled to run only when the simulation of implementation 2 finishes. The preference Maximum implementations to run in parallel determines the number of implementations that can run at any given time.

When a parent implementation or set fails or is stopped, all its dependencies are stopped. For example, in the given illustration, if implementation 6 fails, then implementation 7 is stopped. Similarly, if implementation 2 fails or is stopped, then implementations 3, 4, and 5 are stopped.

The Priority value also determines the order of simulation runs. However, Verifier honors this value at the level where the implementation exists in the hierarchy. For example, consider that implementation 2 has a higher priority than implementation 1. If you run the implementation set run_weekly, then implementation 2 runs first. Similarly, when you run myGroup, implementations 4 and 5 will run in the order of their specified Priority value. Again, if implementation 4 has the highest priority of all the implementations, it will run only after the simulation of its parent has finished.

The key points to remember about an implementation set hierarchy are as follows:

To run implementation sets in Batch Run mode and honor the Maximum implementations to run in parallel and Maximum batch runs to start in parallel settings, enable the distributeImpSet environment variable. To ensure that implementations within implementation sets are distributed according to these settings in batch mode, the implementation sets must not have any hierarchy or dependencies. For hierarchical implementation sets, all implementations of a branch must be included in the same batch run, which makes it impossible to distribute them across different runs.

Consider the following points when creating implementation sets:

Creating and Deleting Implementation Sets

To create an implementation set:

  1. Right-click the implementations area and choose Create New Implementation Set.
    If you want to create a subset in an existing implementation set, right-click the existing set and choose Create New Implementation Set.
  2. Type the name of the implementation set in the displayed form.
  3. Click OK.

The new set is added to the Implementation Sets node, which is added below the Implementations node if it is not already present.

To edit an implementation set name:

To add an implementation cellview to an implementation set:

  1. Right-click the implementation cellview in the Implementations node and choose Add to Implementation Set. You can select multiple implementation cellviews.
    A hierarchical list of existing implementation sets is displayed. For every set, a pair of sub-menu items is displayed: the first item adds the implementation to the set at the first level, and the second item adds the implementation as a dependency of existing implementations in the set.
  2. Choose the implementation set.
    You can add an implementation to multiple implementation sets.
    Alternatively, you can drag an implementation from its source location and drop it to the required destination.
  3. Click OK.

Verifier adds the selected implementation cellview to the specified implementation set.

To remove an implementation set, or an implementation cellview from an implementation set:

  1. Right-click the implementation cellview in the Implementation Sets node and choose Delete Implementation Set.
    Verifier displays a prompt to confirm the removal.
  2. Click OK.

The implementation set or cellview entry is removed from the Implementation Sets node.

Rearranging Implementation Sets

To rearrange an implementation set or cellview:

  1. Select the implementation set or cellview from the Implementation Sets node.
  2. Click the Up, Down, Left, or Right arrow buttons on the toolbar to change the sequential order.

When you choose to run the cellviews of an implementation set, the order in which they are run depends on their order in the Implementations node. Rearranging the cellviews under the Implementation Sets node is not reflected on the Setup page.

Simulation of Implementations

Verifier lets you simulate implementation cellviews individually, serially, or in parallel. You can use implementation sets to organize the implementation cellviews and their runs. You can configure how you want to run implementations.

It is possible to run implementations from another application and load the run results in Verifier. For details, see Implementation Runs in ADE Verifier and Implementation Sets.

For information on using a batch script to simulate implementations, see Batch Scripts in ADE Verifier.

If you want to make changes to your implementation cellview source when its simulation is running, ensure that the option to ignore the design changes made during the runs is enabled. For this, set the environment variable ignoreDesignChangesDuringRun to t in the .cdsenv or .cdsinit file.
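A minimal sketch of the two alternatives follows. The tool prefix used here (adexl) is an assumption, so verify the prefix for this variable against your .cdsenv documentation:

;; In .cdsinit (SKILL); the "adexl" tool prefix is an assumption
envSetVal("adexl" "ignoreDesignChangesDuringRun" 'boolean t)

or, as a line in .cdsenv:

adexl ignoreDesignChangesDuringRun boolean t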

You can view the current simulation status of implementation cellviews from their Status indicators. For details, see Simulation Status of Implementations. It is possible to stop a simulation that is in progress. The following figure shows the run controls of Verifier.

You can show or hide the columns of the implementations list in the Run tab. For this, right-click anywhere on the column heading in the implementations area and select the columns you want to show. Verifier hides the columns that are not selected.

To reorder columns, drag and drop the column headings to change the display order. Additionally, you can specify the default column order and hidden columns by using the environment variables runColumnOrder and runHiddenColumns.
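As an illustration, assuming these variables live under a verifier tool prefix and take space-separated column names (both assumptions; check the environment variable reference for the exact format):

; Tool prefix and value format are assumptions, shown for illustration only
envSetVal("verifier" "runColumnOrder" 'string "Status History Priority")
envSetVal("verifier" "runHiddenColumns" 'string "PreRunScript")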

Additional Information

You can also view the status of simulations that are currently running in the Run tab. Move the mouse pointer to the Status column of a simulation to see the number of simulation points submitted, completed, or failed.

Running and Stopping Simulations

To run all the implementations organized in an implementation set:

To run an individual implementation cellview:

To run multiple implementation cellviews:

  1. Do one of the following:
    • Ensure that you have not selected any cellview or implementation set if you want to run simulations for all the implementation cellviews.
    • Select the relevant implementation sets or implementation cellviews if you want to run simulations for only those cellviews.

    Implementation cellviews can run serially or in parallel, depending on the value set in the Maximum implementations to run in parallel preference option.
  2. Click Run on the toolbar.

Verifier runs the implementation cellviews and lets you monitor their status.

If the implementation cellview uses the integrated history management, you receive a popup asking you to convert the cellview to use the separate history management scheme.

For information on running simulations on referenced cellviews, see Simulation of Implementations Externally.

To stop a run that was started from Verifier, do one of the following:

To view the requirements associated with an implementation in the Run tab, right-click and select Show Details.

If an implementation cellview does not run properly, try to run it outside of Verifier. Also, check the run preferences, license requirements, and other settings in Verifier. For example, if the simulator path is not set properly, the implementation cellviews cannot run. You can attempt to run the cellview after enabling the preference option Debug mode, open implementation interface.

Simulation Status of Implementations

You can monitor the run status of your verification project using the Status column on the Run tab. This column displays each implementation cellview or custom implementation with its corresponding status indicators.

The following table describes these indicators.

Status Indicators Description

The summary of the simulation status is displayed.

The cellview is not simulated.

The cellview has been selected for simulation and is in queue.

The cellview simulation is no longer pending but waiting for the simulation run to start.

The cellview is being simulated, and the indicated percentage of simulation has been completed.

The cellview is being simulated. The indicated percentage of simulation has been completed and the indicated failed points have been detected.

The cellview simulation has completed successfully, and the results are available for verification.

(LSCS only) The simulation is running, and the status shows the progress of the simulation and netlist generation.

The cellview simulation is stopping.

The cellview simulation was stopped manually.

The pending cellview simulation was stopped manually.

The cellview simulation was started, but failed because of an error. When the error is encountered, a popup message appears. You can also view the details in the Virtuoso CIW.

The cellview was simulated previously. However, the available simulation results can be out of date because the cellview has changed since it was last simulated. The color of this indicator changes from green to yellow to indicate potential problems. Consider re-simulating the cellview to get the latest results.

The cellview simulation results are not loaded in Verifier and Verifier cannot initiate its simulation. You must load the results from a history.

The red background color of this indicator signals issues in loading the results. For example, the color becomes red when the simulation results are not found in the selected history.

The cellview simulation results are loaded from a history.

The cellview simulation results were loaded previously. However, the loaded results can be out of date because the cellview has changed since it was last simulated. The color of this indicator changes from green to yellow to indicate potential problems. Consider reloading the latest results in Verifier after running the cellview externally.

The external simulation results were partially available and were loaded from an incomplete history.

The simulation run finished with errors.

The external simulation results were loaded but contained errors.

The simulation is running. The indicated percentage of simulation and netlist generation has been completed and the indicated failed points have been detected.

Report Identical Histories

By default, the setup details are not compared with the loaded history before a simulation is run. Because the design of many verification blocks might not have changed, this can result in a number of redundant simulations with the same setup. To avoid this, choose Edit – Preferences and enable the Report identical histories before run check. This check does the following:

Simulation of Implementations Externally

In the normal run mode, the current Virtuoso session is used to launch ADE Assembler in the background and run simulations. This works well for short simulations; for simulations that run for longer durations, however, you might prefer to launch an independent Virtuoso session. The Batch Run feature launches an independent Virtuoso session, typically in nograph mode, which runs your simulations outside of the current Verifier session. The session launches without nograph mode if you select the Debug mode, open implementation interface preference. A batch run involves the following process:

  1. Creates a small Virtuoso batch script.
  2. Starts the batch script in the background. This batch script internally launches Virtuoso, which creates a new Verifier session. The new Verifier session runs the simulations.
  3. Triggers automatic loading of results when the batch run is finished. In addition, the status information is continuously sent to the current Verifier session to monitor the progress.

The main advantage of using the Batch Run is that the implementation simulation run and the Virtuoso GUI session are two independent Virtuoso processes. It minimizes the impact on the performance of Verifier. The Verifier or Virtuoso process can be closed without the risk of impacting the simulation run. The flow is particularly useful for long simulation runs. Splitting the simulation into multiple independent processes minimizes the risk of unexpected failures impacting the overall results. You can start a batch run by choosing Batch Run from the Run modes combo box on the toolbar and clicking the run button on the toolbar.

Every time you start a batch run, it automatically creates a script in the following format:

verifRun_<library>_<cell>_<view>_<id>.script

The corresponding log file is saved in the following format:

verifRun_<library>_<cell>_<view>_<id>.log

Here, <library>, <cell>, and <view> indicate the Verifier cellview that initiates the batch run, and <id> is an integer value starting from 0, which is reset at the end of the run.

For example, if the current Verifier cell name is TOP_verification, the library in use is Two_Stage_OpAmp, the view is verification, and the id is 21, the script file is named
verifRun_Two_Stage_OpAmp_TOP_verification_verification_21.script.

It is strongly recommended that you do not modify the script file because this file is created automatically and uses internal functions.

Running and Stopping Implementations in Batch Mode

To run an individual implementation cellview in batch mode, do the following:

  1. Select an implementation.
  2. Select Batch Run in the Run modes combo box on the toolbar.
  3. Click the Run button on the toolbar.

To run all the implementations organized in an implementation set in batch mode:

Verifier runs the implementation cellviews. You can monitor their status.

To stop a run that was started from Verifier, do one of the following:

Setting Up Pre and Post Simulation Operations in Batch Scripts

You can add pre- and post-simulation operations in your batch script file to automate operations for your batch runs. To do this, you must set the following SKILL variables in the CIW or the .cdsinit file before generating the batch script.

If the text is a symbol or function, it must take three arguments, n_session, n_runId, and t_mode, and return a string that is inserted into the script without any changes.

Specifying a string

  1. In the CIW or the .cdsinit file, specify the following:
    verifGetPreRunText = "; Before Run"
    verifGetPostRunText = "; After Run"
  2. On the Verifier toolbar, click Run.

The specified text is added to the script file, as shown below:

Specifying a lambda function

  1. In the CIW or the .cdsinit file, specify the following:
    verifGetPreRunText = lambda((sess id mode) sprintf(nil "info(\"User %s running cellview %s id=%L mode=%s\\n\")" getLogin() buildString(verifGetSessionCellView(sess) "/") id mode))

    verifGetPostRunText = lambda((sess id mode) sprintf(nil "; %L %L %L\n" sess id mode))
  2. On the Verifier toolbar, click Run.

The specified text is added to the script file, as shown below:

Distributing Batch Simulation Runs

You can start multiple implementations and run simulations in parallel in batch mode. Running simulations in batch mode launches an independent Virtuoso session in -nograph mode on the same host machine. These multiple runs imply that multiple ADE Assembler sessions are launched from the current session. To avoid overloading the current Virtuoso session, you can use load balancing commands to distribute the batch runs to different hosts. When using distributed processing of simulation runs, Assembler launches multiple Virtuoso ICRPs (IC Remote Processes). The batch runs are distributed to these ICRPs on different hosts on a server farm.

An ICRP (IC Remote Process) is essentially a virtuoso -nograph process launched by Assembler when you select Batch Run and click the Run button in Verifier. This process runs a simulation, evaluates the results, and sends the results back to Assembler. The job policy setup in Assembler manages the distribution of simulations.

Verifier lets you specify settings to launch the nograph session used by the batch run. This nograph session runs the implementations in multiple ADE Assembler processes. These ADE Assembler processes can use a job policy to further distribute the ICRPs.

If you have enabled the Optimize Single Point Run option in your Job Policy Setup form in ADE Assembler for a single point run, ADE Assembler optimizes the run by doing the following:

These tasks block the ADE Verifier process, and Message Processing System messages in a batch run, such as requests to restart or stop simulations, can be lost because of the blocking. For this reason, the Optimize Single Point Run option is automatically disabled, by setting the environment variable adexl.distribute useSameProcess to nil, during simulation runs started from Verifier.
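You can inspect the effective value from the CIW with the standard environment access functions; for example:

; Returns the value of useSameProcess in the adexl.distribute partition
envGetVal("adexl.distribute" "useSameProcess")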

To specify the batch run settings, use the preference option Executable or script for batch runs and do one of the following:

Alternatively, you can use all the above options with the environment variable binaryName.

Simulation Runs based on Priority and Estimated Run Time

You can specify Priority, EstRunTime, Maximum implementations to run in parallel, and Maximum batch runs to launch in parallel for an implementation in the Setup tab and schedule the simulation of runs based on the specified values. The Priority and EstRunTime columns are also available on the Run tab and are synchronized with the columns on the Setup tab.

These specified values influence factors such as the scheduling order and the queue for an implementation.

You can also set the Display estimated run time as: option to Seconds or Formatted Time.

In the Setup or Run tab, you can specify the estimated run time as follows:

Alternatively, you can specify the estimated run time format using the estRunTimeDisplayFormat environment variable.

Environment Setup to Run Simulations in ADE Verifier

You can access the environment setup for running simulations from Verifier by using the Environment Variables table in the Run tab of the Preferences form.

In this table, the following environment variables, which appear in a read-only state, are always enabled by default because the tool requires them to configure the simulation environment. These environment variables cannot be modified and are always displayed at the bottom of the table.

This section describes how you can modify the Verifier environment to run simulations. It covers the following topics:

The Environment Variables table contains the following columns:

Column Description

Enabled

If selected, the OverrideValue overwrites the CurrentValue of the environment variable before running a simulation from ADE Verifier.

Tool[.Partition]

The name of the tool and its optional partition.

A partition groups the related set of variables associated with a tool.

Name

The name of the environment variable.

Type

The type of the environment variable. Possible values: string, int, boolean, cyclic, float, toggle.

CurrentValue

The currently used value of the environment variable set through the .cdsenv or .cdsinit file or through the envSetVal SKILL function.

OverrideValue

The value of the environment variable used for simulations in ADE Verifier.

This value overwrites the CurrentValue before running the simulation and restores the CurrentValue after the simulation is complete.
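For example, a CurrentValue can originate from a .cdsinit call such as the following; the variable shown here, the ADE simulator selection, is only an illustration:

; Makes "spectre" appear as the CurrentValue of asimenv.startup simulator
envSetVal("asimenv.startup" "simulator" 'string "spectre")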

The table also offers the following shortcut menu commands:

Command Description

Add Environment Variable

Adds a new environment variable in the Verifier session. See Adding and Removing Environment Variables.

Modify Environment Variable

Modifies the values for Name, Tool[.Partition], or the OverrideValue of the selected environment variable with values that are not in use. See Modifying Environment Variables.

Remove Environment Variables

Removes the selected environment variables from the list. See Adding and Removing Environment Variables.

Import Environment Variables from CSV

Imports environment variables from a CSV file. See Creating a CSV File for Defining Environment Variables.

Export Environment Variables to CSV

Exports environment variables to a CSV file. See Exporting Environment Variables to a CSV File.

Set Environment Variable Override Value to Default Value

Resets the value of the selected environment variable to its default value. See Setting the Environment Variable Override Value.

Set Environment Variable Override Value to Current Value

Resets the value of the selected environment variable to its current value. See Setting the Environment Variable Override Value.

Creating a CSV File for Defining Environment Variables

To create a CSV file that defines environment variables in the current Verifier session:

  1. Set the envVarConfigFile environment variable.
  2. Create a new Verifier cellview.
  3. Choose Edit – Preferences – Run.

The table loads and displays the environment variables that you have defined in the CSV file.

If a CSV file is not specified, the table displays the following default rows:

Adding and Removing Environment Variables

To add a new environment variable to the current Verifier session:

  1. Right-click anywhere in the Environment Variables table and choose Add Environment Variable.
    The Add Environment Variable form appears.
  2. Select the Enabled check box to specify a new value to override the current value of an environment variable.
  3. In the Tool.[Partition] field, specify the tool and partition name of the environment variable that you want to add.
  4. In the Variable Name field, specify the name of the environment variable.
    On specifying valid values in the Tool.[Partition] and Variable Name fields, the values for Variable Type and CurrentValue are populated automatically.
  5. In the Override Value field, specify a value that will override the CurrentValue during simulations.

The specified environment variable is added.

To remove an environment variable in the current Verifier session:

  1. In the Environment Variables table, select one or more rows of values that you want to remove.
  2. Right-click and choose Remove Environment Variables.

The selected environment variables are removed.

Modifying Environment Variables

To modify an environment variable in the current Verifier session:

  1. Right-click an existing row in the Environment Variables table and choose Modify Environment Variable, or double-click anywhere on a row in the table.
    The Modify Environment Variable form appears.
  2. In the form, select or deselect the Enabled check box.
    Selecting the check box will allow overriding the current value of an environment variable with the specified value.
  3. In the Tool.[Partition] field, modify the tool and partition name of the environment variable.
  4. In the Variable Name field, modify the name of the environment variable.
    When you specify valid values in the Tool.[Partition] and Variable Name fields, the Variable Type and CurrentValue fields are populated automatically.
  5. In the Override Value field, modify the value that will override the CurrentValue during simulations.

Saving and Importing Pre-Defined Environment Variables from a CSV File

To create a CSV file that contains the environment variables defined in the current Verifier session:

  1. Set the envVarConfigFile environment variable.
  2. Create a new Verifier cellview.
  3. Choose Edit – Preferences – Run.

The table loads and displays the environment variables that you have defined in the CSV file. If a CSV file is not specified, the table displays a default row.

To import environment variables from an existing CSV file:

  1. Right-click anywhere in the Environment Variables table and choose Import Environment Variables from CSV.
    The Import Environment Variables from CSV file form appears in which you can browse and select a file that you want to add.
  2. Click Open.
    The environment variables are added to the Environment Variables table.
    To import environment variables from a CSV file, ensure that the number and names of the column headers in the CSV file match those of the Environment Variables table.

Exporting Environment Variables to a CSV File

To export environment variables to a CSV file:

  1. Right-click anywhere in the Environment Variables table and choose Export Environment Variables to CSV.
    The Export Environment Variables to CSV file form appears.
  2. Browse and specify a file in which you want to export the environment variables.
  3. Click Save.
    The environment variables are exported into the specified file.

Setting the Environment Variable Override Value

To set the override value of an environment variable to its default value:

  1. In the Environment Variables table, select one or more rows of environment variables for which you want to reset the override value.
  2. Right-click and choose Set Environment Variable Override Value to Default Value.

The Override Value of each selected environment variable is reset to its default value.

To set the override value of an environment variable to its current value:

  1. In the Environment Variables table, select one or more rows of environment variables for which you want to reset the override value.
  2. Right-click and choose Set Environment Variable Override Value to Current Value.

The Override Value of each selected environment variable is reset to its current value.

Simulation Run Results

After an implementation cellview is run successfully, you can review its run results from Verifier, Virtuoso CIW, and the default Virtuoso application of the implementation cellview. You can also access the simulation log generated by the simulator for an implementation test run.

Related Topics

Viewing Run Results Using the Information Assistant

Viewing Simulation Logs

Viewing Run Results Using the Information Assistant

To view the run results of an implementation cellview using the Information assistant:

  1. Open the Information Assistant pane.
    The assistant appears in the right pane.
  2. Do any of the following, as required:
    • Select the implementation cellview. The Information assistant displays the simulation run details, such as the run mode, tests, simulation corners, run history, and other details.
      • The selected implementation cellview can be a cellview that was run from Verifier, or a cellview whose results were loaded in Verifier.
      • You can copy the value of a result field. For this, right-click the result field and choose Copy String. You can also select the field and press Ctrl+C to copy the text.
      • When you right-click a result field, Verifier also provides the options to view and edit the results in the main application of the implementation cellview. These options are also available in the popup menu in the Setup tab.
    • Right-click the implementation cellview whose results can be loaded in Verifier and select View Implementation Results. Verifier launches the application of the implementation cellview, such as ADE Assembler, where the results are displayed. If the implementation cellview uses the integrated history management, you receive a popup asking you to convert the cellview to use the separate history management scheme.
    • View the simulation run status and details in the Virtuoso CIW and the simulation application log.

The following figure illustrates how to access the run results of an implementation cellview in the Information assistant.

Related Topics

Results of Failed Outputs or Tests

Viewing Results from an Incomplete History

Results of Failed Outputs or Tests

You can view the details of the tests or outputs that have failed simulation checks. These details are shown in the Failures row of the Information assistant. By default, the Failures row shows the first failure from the complete list; you can control the maximum number of failed tests or outputs that are shown by using the maxFailures environment variable.

The following figures show the Failures row in the Information assistant.

You can also choose to rerun any unfinished or failed points from the loaded implementation history of selections by choosing the Rerun Unfinished/Error Points menu command on the Run tab. This menu command is available for local and batch runs, referenced implementations, as well as single or multiple selections on the Run tab after the simulation run is complete. Alternatively, you can use the verifRun SKILL function.

Viewing Results from an Incomplete History

For a large design testbench or a large design with a run plan, simulations might run for several days. The results become available only after the simulation completes and the history is created. Occasionally, it becomes necessary to use an incomplete history to load and view the partially available results of such simulations. Viewing the available results also helps in determining result trends. Verifier lets you load and view the partial results of in-progress or incomplete simulations.

To load external results from an incomplete history:

  1. In the CIW, set the environment variable allowLoadingIncompleteHistory.
    Setting this variable shows the run status from the maestro view in the Information assistant as well as the tooltip of the Status column on the Run page.
  2. In ADE Assembler, run a simulation as follows:
    1. Open a maestro cellview.
    2. Click the Run button.
  3. In Verifier, view the simulation results as follows:
    1. Open a verifier cellview that contains the same maestro cellview as an implementation.
      Ensure that the corresponding requirements exist in the Requirements pane. If not, right-click the implementation and select Create Requirements Hierarchically.
    2. Click the Reload Simulation Results button.
      The external results are loaded.
    3. Click the Run tab.
      The Run page shows Incomplete Results Loaded in the Status column. When you click the implementation, the Information assistant displays Pending or Running in the Status column depending on the number of simulations or points that are still pending or running.
      Only implementations with the Run check box deselected are supported.
    4. Click the Results tab.
      The Results page lets you evaluate outputs with partial or incomplete results. When you click the implementation, the Information assistant shows the partially available results in the OverallStatus row.
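Step 1 above can be scripted in the CIW with the envSetVal SKILL function. The following is a sketch only; the partition name and variable type shown here are assumptions, so verify them against your installation before use:

```
; Assumption: the variable lives in a "verifier" partition and is boolean.
envSetVal("verifier" "allowLoadingIncompleteHistory" 'boolean t)
```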

Viewing Simulation Logs

The simulator application, such as Spectre Circuit Simulator, that runs implementation cellviews logs details of each test run. You can access such simulation logs from Verifier. The following figure illustrates how you can access the simulation log of a test.

You can access the simulation log of an implementation cellview if the simulation has completed and the results directory exists. Verifier must have access to the history of the run. The test must be defined and enabled in the history.

To view the simulation log of an implementation test:

  1. Do one of the following:
    • Select the implementation from the Setup or Run tab.
    • Select a requirement mapped to the implementation in the Setup or Results tab.

    You can also select a status value in the Run or Results tab.
  2. Right-click, select View Simulation Log of Implementation Test, and then select the test name.
    If the implementation cellview uses the integrated history management, you receive a popup asking you to convert the cellview to use the separate history management scheme.

The simulation log is displayed in a new text window.

Batch Scripts in ADE Verifier

You can generate a batch script, also referred to as a regression script, from Verifier. You can use this script to start Verifier from the command line in non-graphical mode with a cellview and run its implementation cellviews.

For information on simulating implementation cellviews from the Verifier graphical user interface, see Simulation of Implementations.

For a video overview of the Verifier batch scripts and how to set a cron job to run a script periodically, see Generating and Using Batch Scripts on Cadence Online Support.

Related Topics

Generating a Batch Script

Reviewing and Refining a Batch Script

Running and Stopping a Batch Script

Scheduling Runs

Generating a Batch Script

Before you generate a batch script, save the Verifier setup for which you want to generate the script to a cellview of type verifier.

To generate a batch script for the current cellview:

  1. Do one of the following:
    • Click the Create Batch Script button on the toolbar.
    • Choose Tools – Create Batch Script.

    The Batch Script File Name dialog box appears.
  2. Specify the location and filename of the batch script.
    The default location is the current directory and the default filename is in the format currentCellview_run. For example, TOP_verification_run.
    You can specify a new location by changing the directory name. If the specified directory does not exist, it is created automatically.
  3. Click Save.

Verifier saves the batch script in the specified location using the specified filename.

Reviewing and Refining a Batch Script

You can open the batch script you generated from Verifier in an editor and review the script.

The following example shows a sample script generated in ADE Verifier.

#!/bin/csh -f
#
# Script to run implementations of cellview opamp090.full_diff_opamp_AC:verification1
setenv vlib  "opamp090"
setenv vcell "full_diff_opamp_AC"
setenv vview "verification1"
virtuoso -nograph -log full_diff_opamp_AC_run.batch.log $* << SKILLSCRIPT
let(((retval 255))
  unwindProtect(
    let((sess failedReqs)
      when(sess = verifOpenCellView("$vlib" "$vcell" "$vview" ?mode "r" ?openWindow nil)
        verifRun(sess ?mode "Local Run" ?waitUntilDone t)
        verifSetOptionVal(sess "batchhtmlreport" "$vcell.html")
        verifPublishHTML(sess)
        failedReqs = setof(r verifGetReqs(sess ?mappable t) verifGetReqStatus(sess r) != "Pass")
        when(failedReqs
          info("The following requirements did not pass:\n")
          foreach(req failedReqs
            info("\t%L\t%L\n" req verifGetReqStatus(sess req))
          )
        )
        verifCloseSession(sess ?saveIfModified nil)
        retval = length(failedReqs)
      )
    )
    begin(
      info("Exiting Virtuoso (%d)\n" retval)
      exit(retval)
    )
  )
)
SKILLSCRIPT

The script used in this example is a standard csh script that sets shell environment variables to specify the library, cell, and view. The script launches Virtuoso in non-graphical mode by using the -nograph option, and sends the commands between the two SKILLSCRIPT markers, a shell here-document, to Virtuoso as SKILL commands.

The Verifier SKILL functions used in this SKILL script do the following:


verifOpenCellView

Opens the verifier cellview in read mode without any window.

verifRun

Runs all the simulations and waits until all runs have finished before proceeding.

A few options that you can use with verifRun are:

  • ?mode: Changes the run mode to Batch Run, Batch Run with SPACE, Local Run, or Local Run with SPACE.
  • ?implementation: Specifies an implementation or a list of implementations to run instead of the default.
  • ?impSet: Specifies the name of the implementation set to run.

verifSetOptionVal

Specifies where a batch report file will be generated.

verifPublishHTML

Publishes the HTML report at the location specified by verifSetOptionVal.

info

Prints informational messages; in this script, it reports each requirement that did not pass.

failedReqs

A local variable, not a function; it holds the list of requirements whose status is not Pass.

verifCloseSession

Closes the session without saving it.

exit

Exits with the number of failed requirements as the return value.
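For example, using the session handle sess from the script above, a batch run restricted to a named implementation set could be started as follows. This is a sketch; the set name run_nightly is illustrative:

```
; Run only the implementations in the "run_nightly" set in batch mode.
verifRun(sess ?mode "Batch Run" ?waitUntilDone t ?impSet "run_nightly")
```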

The use of unwindProtect ensures that the script always exits even if there is a SKILL error. In case of an error, the return value is 255.

If required, you can edit the batch script appropriately. For example, if you want to run the implementation cellviews that belong to a specific implementation set, add -run implementation_set_name to the script. For more details on refining the batch script for a scheduled run, see Scheduling Runs.

When a batch script run is completed, the overall verification status is displayed by default. The following snippet shows some example messages that Verifier displays at the end of a batch script run. The last message provides the overall verification status.

...
...
Verifier: 11:16:33: INFO (Verifier-80084) The cellview 'amsPLL/PLL_VCO_320MHZ_PD_tb/maestro/Active' run finished. Summary: Started 'Dec 15 11:16:09 2015'. Finished 'Dec 15 11:16:30 2015'. Took '21s' to run.
Verifier: 11:16:33: INFO (Verifier-80083) Finished running '3' job(s).
Verifier: 11:16:33: Created HTML report file './TOP_verification.html'.
Verifier: 11:16:33: The overall verification status of the project is: FAIL
Verifier: 11:16:33: Verifier is exiting.

If you want to view the overall verification status of only specific requirements, instead of the status of the project, use -returnStatus RequirementIDlist. You can specify a list of requirement IDs or an implementation set that contains the requirements.
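For example, assuming a generated script named TOP_verification_run and hypothetical requirement IDs, the invocation might look like the following sketch; verify the exact argument syntax for -returnStatus in your installation:

```
./TOP_verification_run -returnStatus "REQ_001 REQ_002"
```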

Running and Stopping a Batch Script

You can run a batch script from the command line. You can also schedule a batch script to run through a cron job. For details, see Scheduling Runs.

To run a batch script, execute the generated script file from the command line.

Verifier runs the specified batch script and displays appropriate messages.

When the batch script run is completed, it returns 0 to indicate the successful completion of the script, or a nonzero value to indicate failure. For example, the following return value indicates that the run was successfully completed:

$ echo $?
0
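The status check can be scripted. The following sketch uses `sh -c 'exit 3'` as a stand-in for a run that reports three failed requirements; a real invocation would run the generated batch script instead:

```shell
# Stand-in for the generated batch script: a real wrapper would execute the
# script that Verifier generated (its name is project-specific). Here,
# 'sh -c "exit 3"' simulates a run in which 3 requirements failed.
sh -c 'exit 3'
status=$?
if [ "$status" -eq 0 ]; then
  echo "Verification run passed"
else
  echo "Verification run failed with status $status"
fi
```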

To stop a running batch script:

  1. Press Ctrl + Z to suspend the run manually.
    Do not press Ctrl + C to stop the script. Ctrl + C stops the script, but the simulations continue to run in the Virtuoso session, and the Verifier session does not stop either.
  2. Run the following command to kill the Virtuoso and Verifier processes.
    $ kill -9 $$

The batch script run stops, along with Virtuoso and Verifier.

Scheduling Runs

You can schedule implementation cellview runs using batch scripts. For this, generate the batch script and set cron jobs to run those scripts, as required. For details on batch scripts, see Batch Scripts in ADE Verifier.

If you want to schedule the run of only specific implementation cellviews, organize those cellviews in an implementation set and edit the script to run that set. Then set a cron job to run the batch script.

The following steps describe how to set up a batch script and schedule its cron job:

  1. Open the Verifier cellview.
  2. Organize the implementation cellviews in implementation sets.
    For this example, assume that run_weekly has been created and some implementation cellviews are added to this implementation set. For more details, see Implementation Sets.
  3. Create a batch script called weekly_run_script.
    For more details, see Batch Scripts in ADE Verifier.
  4. Edit the script appropriately.
    For example, edit weekly_run_script to run the implementation cellviews in the set run_weekly. The following snippet shows a sample script that uses the run_weekly implementation set.
    #!/bin/csh -f
    #
    # Script to run implementations of cellview opamp090.full_diff_opamp_AC:verification1
    setenv vlib  "opamp090"
    setenv vcell "full_diff_opamp_AC"
    setenv vview "verification1"
    virtuoso -nograph -log full_diff_opamp_AC_run.batch.log $* << SKILLSCRIPT
    let(((retval 255))
      unwindProtect(
        let((sess failedReqs)
          when(sess = verifOpenCellView("$vlib" "$vcell" "$vview" ?mode "r" ?openWindow nil)
            verifRun(sess ?mode "Local Run" ?waitUntilDone t ?impSet "run_weekly")
            verifSetOptionVal(sess "batchhtmlreport" "$vcell.html")
            verifPublishHTML(sess)
            failedReqs = setof(r verifGetReqs(sess ?mappable t) verifGetReqStatus(sess r) != "Pass")
            when(failedReqs
              info("The following requirements did not pass:\n")
              foreach(req failedReqs
                info("\t%L\t%L\n" req verifGetReqStatus(sess req))
              )
            )
            verifCloseSession(sess ?saveIfModified nil)
            retval = length(failedReqs)
          )
        )
        begin(
          info("Exiting Virtuoso (%d)\n" retval)
          exit(retval)
        )
      )
    )
    SKILLSCRIPT
  5. Add the cron job entry, as required.
    Jobs started by cron do not load any environment settings automatically, such as sourcing .cshrc. Therefore, ensure that the required environment settings and the software and project setups are sourced, and that the current directory is changed appropriately, before the batch script is run.

The cron job runs the batch script at the scheduled time.
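A sample crontab entry that runs the script every Monday at 2:00 a.m. might look like the following sketch; the paths and the setup script name are illustrative:

```
# Run the weekly Verifier batch script every Monday at 02:00.
# cron does not source .cshrc, so the setup script must be sourced explicitly.
0 2 * * 1 csh -c 'cd /proj/opamp090/verif && source ./project_setup.csh && ./weekly_run_script'
```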

