Tuesday, 18 November 2014

Inject error values into DUT - SERIES 2

In SERIES 1, errors were driven by inserting error values into cfg_db and passing that instance of cfg_db into the respective testcase.

Another method is to use a nested transaction class inside the test class and override the original transaction class with this nested class. This method encapsulates the error injection within that particular test. It can be understood from the discussion below, taken from a Verification Academy forum thread.

***********************************************************************************
There are separate sequences running on multiple instances of a UVC. In a testcase, I want to override the transaction item in a particular instance of a sequence. I tried the below, which is not working. Any clue?
class baseseq extends uvm_sequence;
 
task body();
basetrans trans = basetrans::type_id::create("trans");
start_item(trans);
trans.randomize();
finish_item(trans);
endtask
 
endclass
 
class basetest extends uvm_test;
 task run;
   foreach (seq[id]) begin      // Let's say id is from 0 to 4
     seq[id]=baseseq::type_id::create($sformatf("seq_%0d",id), ,get_full_name());
     seq[id].start(seqr[id]);
    end
 endtask
endclass
 
class mytest extends basetest;
 
 class errtrans extends basetrans;
   // additional functionalities
 endclass
 
 task run;
   foreach (seq[id]) begin
     seq[id]=baseseq::type_id::create($sformatf("seq_%0d",id), ,get_full_name());
     seq[id].start(seqr[id]);
    end
   basetrans::type_id::set_inst_override(errtrans::get_type(), {get_full_name(), ".seq_4"});
   // I expect the override type should effect from now, but is not happening !!
   foreach (seq[id]) begin
     seq[id]=baseseq::type_id::create($sformatf("seq_%0d",id), ,get_full_name());
     seq[id].start(seqr[id]);
    end
 endtask
endclass
 
**********************************************************************************************
Instance overrides just require the create() name of the instance to match the pattern of the override. You have an override that looks like it is intended to override the sequence, not the transaction. It looks like you have followed most of the recommendations in the cookbook page Sequences/Overrides - just a couple of things need to be changed in your code to make it work:
(1) give your transaction instance some context when you create it in class baseseq, otherwise you can't easily match it with an instance override. Recommend you change the create() to add the 3rd context argument:

basetrans trans = basetrans::type_id::create("trans",,get_full_name());
                                                    ^^^^^^^^^^^^^^^^^
The transaction (that you wish to override) now has a context which is the get_full_name() OF THE SEQUENCE.
You need to match that up with (a) your create() of the sequence and (b) your instance override.
(2) in the sequence creation and override, use the get_full_name() of seqr[4], not of the test, and append the actual transaction name to the instance override (or a wildcard):

basetrans::type_id::set_inst_override(errtrans::get_type(), {seqr[4].get_full_name(), ".seq_4.trans"});
                                                             ^^^^^^^^^^^^^^^^^^^^^^         ^^^^^^
    foreach (seq[id]) begin
      seq[id]=baseseq::type_id::create($sformatf("seq_%0d",id), ,seqr[id].get_full_name());
                                                                 ^^^^^^^^^
BTW kudos for using a NESTED CLASS for your definition of errtrans, inside your test. That is a great technique for keeping all aspects of a testcase together in one file, especially when the override is as simple as e.g. adding a random constraint on the base transaction. Just remember that class still needs a `uvm_object_utils so that the factory knows about it, and you need to ensure all such nested classes have unique names!
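Putting that advice together, a minimal sketch of the nested-class pattern (the `crc_ok` field and constraint are assumed for illustration, not from the original thread):

```systemverilog
class mytest extends basetest;
  `uvm_component_utils(mytest)

  // Nested override type: keeps the error injection local to this test.
  class errtrans extends basetrans;
    `uvm_object_utils(errtrans)  // factory registration; keep the name unique

    // Hypothetical constraint that corrupts a CRC field of basetrans
    constraint bad_crc_c { crc_ok == 0; }

    function new(string name = "errtrans");
      super.new(name);
    endfunction
  endclass

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass
```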
 
***********************************************************************************
 
Gordon, Thanks for the solution. It did work :) I have also tried the below method previously

basetrans trans = basetrans::type_id::create("trans");
$display ("Full_NAME=%0s", trans.get_full_name());
 
This was to get the hierarchical path of the transaction item. It was giving the result "env.seqr_4.seq_4.trans" and in the test, I had done the following

basetrans::type_id::set_inst_override(errtrans::get_type(), "env.seqr_4.seq_4.*");
 
But it was not working. Now, after adding the context when creating the transaction item as shown below, it works:

basetrans trans = basetrans::type_id::create("trans", , get_full_name());
Why is it required to specify context while creating transaction item?
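A likely answer, based on how the UVM factory resolves overrides: the instance-override lookup happens inside type_id::create(), using only the context string passed to create() at that moment; the full path that get_full_name() reports later is attached by start_item(), after the object has already been constructed. Roughly:

```systemverilog
basetrans trans;

// Without a context argument the factory matches the override pattern
// against just "trans" -- so "env.seqr_4.seq_4.*" never matches, even
// though trans.get_full_name() shows the full path once the item has
// been started.
trans = basetrans::type_id::create("trans");

// With the context argument the factory lookup path becomes
// {get_full_name(), ".trans"}, which the instance override can match.
trans = basetrans::type_id::create("trans", , get_full_name());
```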
 

Tuesday, 4 November 2014

Inject error values into DUT using sequences - SERIES 1

Courtesy: Verification Academy:Sequences

A Configurable Sequence

The most generic way to configure sequences is to use the full hierarchical name by default, but allow any other name to be used by the creator of the sequence.
class my_bus_seq extends uvm_sequence #( my_bus_sequence_item );
  string scope_name = "";
 
  task body();
    my_bus_config m_config;
 
    if( scope_name == "" ) begin
      scope_name = get_full_name(); // this is { sequencer.get_full_name() , get_name() }
    end
 
    if( !uvm_config_db #( my_bus_config )::get( null , scope_name , "my_bus_config" , m_config ) ) begin
      `uvm_error(...)
    end
  endtask
endclass
Suppose that we have a sequence called "initialization_sequence" running on the sequencer "uvm_test_top.env.sub_env.agent1.sequencer". The scope_name in the code above then defaults to "uvm_test_top.env.sub_env.agent1.sequencer.initialization_sequence".
The most usual use case for sequence configuration is that we want to pick up the agent's configuration class which has been set on the agent and all its children.
class sub_env extends uvm_env;
  ...
  function void build_phase( uvm_phase phase );
    ...
    my_bus_config agent1_config;
    ...
    uvm_config_db #( my_bus_config )::set( this , "agent1*" , "my_bus_config" , agent1_config );
    ...
  endfunction
 
  task main_phase( uvm_phase phase );
     my_bus_sequence seq = my_bus_sequence::type_id::create("my_bus_sequence");
 
     seq.start( agent1.sequencer );
  endtask
  ...
endclass
Since we have used the sub_env as the context, the set and the default get in the configurable sequence will match and as a result the sequence will have access to the agent's configuration object.

Per Sequence Configuration

We can use the default mode of the configurable sequence above to configure distinct sequences differently, although we can only do this if the names of the sequences are different.
For example, the environment class might look something like this:
class sub_env extends uvm_env;
  ...
  function void build_phase( uvm_phase phase );
    ...
    my_bus_config agent1_config , agent1_error_config;
    ...
    agent1_config.enable_error_injection = 0;
    agent1_error_config.enable_error_injection = 1;
 
    // most sequences do not enable error injection
    uvm_config_db #( my_bus_config )::set( this , "agent1*" , "my_bus_config" , agent1_config );
 
    // sequences with "error" in their name will enable error injection
    uvm_config_db #( my_bus_config )::set( this , "agent1.sequencer.error*" , "my_bus_config" , agent1_error_config );
    ...
  endfunction
 
  task main_phase( uvm_phase phase );
     my_bus_sequence normal_seq = my_bus_sequence::type_id::create("normal_seq");
     my_bus_sequence error_seq = my_bus_sequence::type_id::create("error_seq");
 
     normal_seq.start( agent1.sequencer );
     error_seq.start( agent1.sequencer );
  endtask
  ...
endclass
Since the configurable sequence uses the sequence name to do a get, the normal sequence will pick up the configuration which does not enable error injection while the error sequence will pick up the error configuration.

Ignoring the Component Hierarchy Altogether

It is quite possible to completely ignore the component hierarchy when configuring sequences. This has the advantage that we can in effect define behavioural scopes which are only intended for configuring sequences, and we can keep these behavioural scopes completely separate from the component hierarchy. The configurable sequence described above will work in this context as well as those above which are based on the component hierarchy.
So for example in a virtual sequence we might do something like:
class my_virtual_sequence extends uvm_sequence #( uvm_sequence_item_base );
  ...
  task body();
     my_bus_sequence normal_seq = my_bus_sequence::type_id::create("normal_seq");
     my_bus_sequence error_seq = my_bus_sequence::type_id::create("error_seq");
 
     normal_seq.scope_name = "sequences::my_bus_config.no_error_injection";
     error_seq.scope_name = "sequences::my_bus_config.enable_error_injection";
 
     normal_seq.start( agent1.sequencer );
     error_seq.start( agent1.sequencer );
  endtask
  ...
endclass
The downside to this freedom is that every sequence and component in this testbench has to agree on the naming scheme, or at least be flexible enough to deal with this kind of arbitrary naming scheme. Since there is no longer any guarantee of uniqueness for these scope names, there may be some reuse problems when moving from block to integration level.
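For completeness, the set() side that would pair with the virtual sequence above might look like this (the test class and config field names are assumed); note the null context and the free-form scope strings, which never touch the component hierarchy:

```systemverilog
class my_test extends uvm_test;
  `uvm_component_utils(my_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    my_bus_config no_err_cfg = my_bus_config::type_id::create("no_err_cfg");
    my_bus_config err_cfg    = my_bus_config::type_id::create("err_cfg");

    no_err_cfg.enable_error_injection = 0;
    err_cfg.enable_error_injection    = 1;

    // Behavioural scopes: plain strings agreed upon by test and sequences
    uvm_config_db #(my_bus_config)::set(
      null, "sequences::my_bus_config.no_error_injection",
      "my_bus_config", no_err_cfg);
    uvm_config_db #(my_bus_config)::set(
      null, "sequences::my_bus_config.enable_error_injection",
      "my_bus_config", err_cfg);
  endfunction
endclass
```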

Wednesday, 29 October 2014

Understanding scheduling semantics from a simulator's perspective

Any simulator has an algorithm to execute code, and these algorithms are designed around the scheduling semantics of Verilog or SystemVerilog. The example below shows a small Verilog module consisting of two blocks that run concurrently (initial executes once; always is a never-ending loop).

module m1;
reg r1;
initial begin
r1 = 1'b1;
#5 r1 = 1'b0;
end
always@(r1) $display("Printing r1: %d",r1);
endmodule
 
 
 
Below is a way to think of how a simulator can execute this code:
  1. At time 0, prior to the execution of any events, r1 has its default initial value: 1'bx. Had you written reg r1 = 1'b1;, its initial value would have been 1'b1.
  2. All initial and always processes in the entire design are added to the active event queue. Processes implied by continuous assignments are also added to the queue. The order they are placed in the queue is indeterminate. You should never write any code that depends on any observed ordering. Also, the execution of individual statements within a process with respect to the event queue is not defined by the LRM. The LRM does guarantee ordering of statements within a begin/end block within one process, but the relative ordering of statements between multiple processes is not determinate.
  3. Execution of the active queue continues until the active queue is empty. As has been stated, either the initial or the always block may be picked to execute first. Let's start by assuming the initial block goes first.
  4. The statement attached to the initial block is a begin/end block. This means execute each statement inside the block serially.
  5. The first statement of the begin/end block is an assignment, r1 = 1'b1. The assignment creates an update event; any processes sensitive to this update will be added to the end of the active queue. If nothing is sensitive to the update, nothing gets scheduled.
  6. The next statement has a delay control, #5. This suspends the current process, which is scheduled to resume at the next assignment statement 5 time units later.
  7. The next event on the active queue is the always block. As mentioned before, there is no reason that this statement could not have started earlier. When simulators go through their optimizations, there is no way to predict the ordering between processes.
  8. The first statement in the always block has an event control @r1; wait for a change on r1. Since we assumed the initial block started first, the update to r1 has already happened, and this process will suspend waiting for another r1 event.
  9. The active event queue is now empty and all other queues in the current time slot are empty, so the simulator will advance time to the next scheduled time, 5 time units.
  10. The initial block process will be put on the active queue and resume executing.
  11. The next statement is an assignment which generates an update event on r1, and the always block process is put back on the active queue.
  12. There are no more statements in the initial block, so that process is terminated.
  13. The always block resumes, executing the $display statement.
  14. There are no more statements in the always block, so it goes back to the beginning of the block.
  15. The first statement in the always block has an event control @r1; wait for a change on r1. This process will suspend waiting for another r1 event.
  16. The active event queue is now empty and all other queues in the current time slot are empty, so the simulator will advance time to the next scheduled time, which does not exist, so the simulation terminates.
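To make the ordering sensitivity in steps 3 and 8 concrete: the two legal outcomes differ in whether the time-0 change of r1 is printed. A sketch of how to observe the change regardless of process ordering:

```systemverilog
// Two legal outcomes for module m1, depending on which process runs first
// at time 0:
//   initial first: only "Printing r1: 0" (at time 5) appears
//   always first:  "Printing r1: 1" (time 0) and "Printing r1: 0" (time 5)
//
// $monitor samples in the Postponed region, at the end of each time slot in
// which its arguments changed, so it sees the time-0 value either way:
module m1_monitored;
  reg r1;
  initial begin
    r1 = 1'b1;
    #5 r1 = 1'b0;
  end
  initial $monitor("Monitoring r1: %d", r1);
endmodule
```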
 

Wednesday, 15 October 2014

Difference between @cb and @posedge IF.clk

Interface handle : IF
clocking block : cb
clock used in clocking block: clk

Logically, both appear to have the same functionality, but there is a difference due to the clocking block.

When interacting with clocking blocks, use only the clocking block event @(IF.cb) as the synchronizing event. That means not using the wait() statement or any @(IF.cb.variable) expressions.

There are two reasons for this recommendation.

There is a race condition if you use @(posedge clk) and try to read a sampled clocking block variable: the race is between reading the old sampled value and the new sampled value. This is described in a note in section 14.4 (Input and output skews) of the 1800-2012 LRM.

Although not explicitly stated in the LRM, a consequence of eliminating the above race is that, because the clocking block event fires after all clocking block variables are updated, if you wait on a clocking block variable followed by a wait on the clocking block event, both events may occur in the same time slot. So write your code as
@(IF.cb iff !IF.cb.reset); // instead of wait (!IF.cb.reset);
...
@(IF.cb); // guaranteed to be 1 cycle later.
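For reference, a sketch of the kind of interface assumed by the snippet above (signal names are assumed):

```systemverilog
interface IF (input logic clk);
  logic reset;
  logic [7:0] data;

  // All testbench synchronization should use the cb event below.
  clocking cb @(posedge clk);
    default input #1step output #0;
    input reset;
    input data;
  endclocking
endinterface
```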

Thursday, 9 October 2014

How to control or check the status of threads generated by fork

In complex TB designs, when multiple fork-join blocks are involved, it becomes difficult to manage the threads generated by all the forks. SV provides a way to check the status of these threads and to suspend or kill them. (Usually we are aware only of the 'wait fork' and 'disable fork' constructs for controlling fork-join.)

A process is a built-in class that allows one process to access and control another process once it has started. Users can declare variables of type process and safely pass them through tasks or incorporate them into other objects. The prototype for the process class is as follows:

class process;
   typedef enum { FINISHED, RUNNING, WAITING, SUSPENDED, KILLED } state;
   static function process self();
   function state status();
   function void kill();
   task await();
   function void suspend();
   function void resume();
   function void srandom( int seed );
   function string get_randstate();
   function void set_randstate( string state );
endclass


Objects of type process are created internally when processes are spawned. Users cannot create objects of type process; attempts to call new shall not create a new process and shall instead result in an error. The process class cannot be extended. Attempts to extend it shall result in a compilation error. Objects of type process are unique; they become available for reuse once the underlying process terminates and all references to the object are discarded.
The self() function returns a handle to the current process, that is, a handle to the process making the call. The status() function returns the process status, as defined by the state enumeration:

— FINISHED means the process terminated normally.
— RUNNING means the process is currently running (not in a blocking statement).
— WAITING means the process is waiting in a blocking statement.
— SUSPENDED means the process is stopped awaiting a resume.
— KILLED means the process was forcibly killed (via kill or disable).
The kill() function terminates the given process and all its subprocesses, that is, processes spawned using fork statements by the process being killed. If the process to be terminated is not blocked waiting on some other condition, such as an event, wait expression, or a delay, then the process shall be terminated at some unspecified time in the current time step.

The await() task allows one process to wait for the completion of another process. It shall be an error to call this task on the current process, i.e., a process cannot wait for its own completion.

The suspend() function allows a process to suspend either its own execution or that of another process. If the process to be suspended is not blocked waiting on some other condition, such as an event, wait expression, or a delay, then the process shall be suspended at some unspecified time in the current time step.
Calling this method more than once, on the same (suspended) process, has no effect.

The resume() function restarts a previously suspended process. Calling resume on a process that was suspended while blocked on another condition shall resensitize the process to the event expression or to wait for the wait condition to become true or for the delay to expire. If the wait condition is now true or the
original delay has transpired, the process is scheduled onto the Active or Reactive region to continue its execution in the current time step. Calling resume on a process that suspends itself causes the process to continue to execute at the statement following the call to suspend.

The methods kill() , await() , suspend() , and resume() shall be restricted to a process created by an initial procedure, always procedure, or fork block from one of those procedures.
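A minimal sketch of suspend()/resume() on a spawned process (names and delays are illustrative):

```systemverilog
module suspend_demo;
  process p;

  initial begin
    fork
      begin
        p = process::self();
        forever #10 $display("[%0t] tick", $time);
      end
    join_none

    wait (p != null);  // make sure the child has started
    #25 p.suspend();   // ticker pauses (it was blocked in the #10 delay)
    #50 p.resume();    // ticker continues
    #30 p.kill();      // forcibly terminate the child and end the demo
  end
endmodule
```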

The following example starts an arbitrary number of processes, as specified by the task argument N. Next, the task waits for the last process to start executing and then waits for the first process to terminate. At that point, the parent process forcibly terminates all forked processes that have not yet completed.



task automatic do_n_way( int N );
   process job[] = new [N];

   foreach (job[j])
      fork
         automatic int k = j;
         begin
            job[k] = process::self(); ... ;
         end
      join_none

   foreach (job[j])
      wait( job[j] != null ); // wait for all processes to start

   job[1].await(); // wait for first process to finish

   foreach (job[j]) begin
      if ( job[j].status != process::FINISHED )
         job[j].kill();
   end
endtask

For more info, refer to the SV LRM.

Wednesday, 8 October 2014

Familiar with static/dynamic tasks? Then what is a static process or a dynamic process?

There are two kinds of processes in SystemVerilog: static and dynamic.
The SystemVerilog LRM defines a static process as one where "each time the process starts running, there is an end to the process." Another way of putting this is that static processes are created when the code is elaborated and persist until the end of simulation. Static processes come in several forms: each always, always_comb, always_latch, always_ff and initial procedure is a separate static process, as is every concurrent signal assignment.

On the other hand, dynamic processes are created at run time and execute as threads independent of the processes that spawned them. They can be waited upon or disabled. Dynamic processes come in the form of fork..join_any, fork..join_none, and processes created by concurrent assertions and cover properties. Dynamic processes allow a testbench to dynamically react to a design under test, control the flow of simulation, build high-level models, and respond to both testbench components and the design.
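A small sketch showing a dynamic process spawned by fork..join_none and controlled as a group with wait fork:

```systemverilog
module dyn_proc_demo;
  initial begin
    fork
      #10 $display("thread A done");
      #20 $display("thread B done");
    join_none  // parent continues immediately; children run independently

    $display("parent continues at time %0t", $time);  // still time 0
    wait fork; // block until both children have finished
    $display("all children done at time %0t", $time); // time 20
  end
endmodule
```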

Tuesday, 7 October 2014

Why a method using 'ref' as an argument should be automatic

As per SV rules, any method with a 'ref' argument must be made automatic; in fact the LRM makes passing by reference illegal for subroutines with a static lifetime. Below are some key points to understand why.

1. Methods inside a module/program block have a "static" lifetime by default.
2. Methods defined inside a class have an "automatic" lifetime by default.

Consider an example below.

program main();
int a;
 
initial
begin
#10 a = 10;
#10 a = 20;
#10 a = 30;
#10 $finish;
end
 
task pass_by_val(int i);
forever
@i $display("pass_by_val: I is %0d",i);
endtask
 
 
task pass_by_ref(ref int i);
forever
@i $display("pass_by_ref: I is %0d",i);
endtask
 
initial
pass_by_val(a);
 
initial
pass_by_ref(a);
 
endprogram
 
 
This example has two static tasks: both pass_by_val() and pass_by_ref() have static lifetimes. (Note that a compliant compiler will actually reject pass_by_ref() as written, since ref arguments require an automatic lifetime; the discussion below explains why.) You could change the second initial block to
initial begin
      pass_by_val.i = 5;
      $display(pass_by_val.i); // will display 5
      fork 
         #11 pass_by_val(a); // a = 10, copied to pass_by_val.i
         #12 $display(pass_by_val.i); // will display "10"
         #30 pass_by_val.i = 2; // will display "pass_by_val: I is 2"
      join
  end
The $display at time 12 shows 10 because that was the value passed to i. The assignment statement at time 30 causes the @i event control to trigger and execute the $display inside pass_by_val(). If you change the third initial block to
initial begin
      pass_by_ref.i = 5;
      $display(pass_by_ref.i);
      pass_by_ref(a);
      ...
 
What variable is i referencing? We haven't called pass_by_ref yet, so i does not reference 'a' yet. This is just one of many problems where pass_by_ref.i could reference something that does not exist yet, or something that did exist but no longer does.
 
So, if a task is static, we can access its variables hierarchically. But if its arguments are "ref" and the task is static, the compiler does not know where to take the ref value from. If the task is defined as automatic, the compiler does not bind the ref arguments until the task is called.

Hence it is mandatory to make tasks/functions automatic if they have ref arguments and are used in a module/program block (in a class, tasks/functions are automatic by default).
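The corrected version of the earlier example simply declares the task automatic, matching the LRM rule that ref arguments are only legal in subroutines with an automatic lifetime:

```systemverilog
program main_fixed;
  int a;

  // Automatic lifetime: the ref binding to the actual argument is
  // established fresh on every call.
  task automatic pass_by_ref(ref int i);
    forever
      @i $display("pass_by_ref: I is %0d", i);
  endtask

  initial begin
    #10 a = 10;
    #10 a = 20;
    #10 $finish;
  end

  initial pass_by_ref(a);  // i is bound to 'a' at call time
endprogram
```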

Thursday, 4 September 2014

Good way to model Slave sequences in any master-slave protocol or scenario

While developing verification environments for master-slave protocols, it is always confusing how to start slave sequences, since they have to be triggered based on DUT outputs, which are observed in the monitor. The webinar below gives a clear picture of how to do that.

Use Sequences to Model Multiple Behaviors in UVM

Often in a UVM agent, a sequencer is considered just to initiate stimulus on an interface, but an in-depth study of its behavior reveals that it can be modelled in multiple other ways; for example, as a responder to traffic from the DUT. In UVM, the functionality of sequencers can be encapsulated such that they can be controlled from the test without replacing the agent connected to the DUT, thus enhancing reuse. This is possible by having different types of sequencers (other than normal ones), such as slave and interrupt sequencers. I was impressed with a webinar on this presented by Tom Fitzpatrick, a renowned verification technologist at Mentor Graphics. Let's look at some details about how exactly this is done.



In general, a sequence item is generated by a sequence running on a sequencer and the stimulus is sent to the driver. The driver then converts the sequence item to pin wiggles and puts it on the bus to the DUT. A sample code for the handshake between sequencer and driver is shown on the right in the above picture. Look at the optional part; it applies if the driver provides a separate response object.



Now let’s consider a situation where a transaction is initiated by DUT; here the sequencer has to respond on the request filled by the driver. By exploiting this phenomenon and putting a phase shift between the request, driver, bus and response as shown on the right side of the above picture, the slave functionality of a sequencer can be easily realized.



Above is a sample code which implements the functionality of a slave driver and slave sequence. By using this out-of-phase mechanism, multiple start_item/finish_item pairs can be completed with a single request on the bus.
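The slide with the sample code is not reproduced in this post, so here is a rough sketch of the out-of-phase idea (the class, port and helper-task names are all assumed, not from the webinar): the sequence supplies the response data for the next DUT-initiated request before that request appears on the bus.

```systemverilog
class slave_driver extends uvm_driver #(slave_item);
  `uvm_component_utils(slave_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    slave_item item;
    forever begin
      // The sequence fills in response data for the NEXT DUT-initiated
      // request -- one phase ahead of the bus.
      seq_item_port.get_next_item(item);
      wait_for_dut_request(item);  // assumed: sample the request fields
      drive_response(item);        // assumed: drive the response onto the bus
      seq_item_port.item_done();   // request details go back to the sequence
    end
  endtask
endclass
```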



Above is a waveform representation where two transactions are filled in one bus cycle. The part shown in blue represents the objects in UVM.

Similarly, the code for the slave driver and slave sequence can alternatively be written with phase-level sequence items, slave setup and slave access (request and response objects), which are different but use the same base transaction type (uvm_sequence_item); hence the item has to be cast to either the request or response type as the case may be. The details can be found in the on-line webinar.

The slave sequence can be started like any other sequence, with a different sequence in each test, without any structural change to the environment. Alternatively, a slave sequence can be started from the environment by using a config object to pass in the slave sequence type/instance from the test, or by using the factory to override the slave sequence type.



Interrupt sequences are an interesting case where an existing sequence has to be blocked until the interrupt sequence is processed. Here, detecting an interrupt is an important task, done by using a virtual interface pointer in the config object. Separate code is then implemented for the virtual sequence and the ISR (Interrupt Service Routine) sequence. The virtual sequence contains code for starting the normal sequence, spawning blocking tasks to wait for interrupts, and starting the ISR sequence. The ISR sequence contains code for grabbing the sequencer, determining the cause of the interrupt, processing prioritized interrupts, and releasing the sequencer.

Thus the sequencer's behavior can be changed according to need without changing the testbench, which enhances the power of reusability in UVM. Verification engineers and professionals can go through the webinar for the actual details, presented very elaborately by Tom. The Verification Academy, constituted by Mentor, can also be referred to for more trainings, downloads, video courses and verification methodology cookbooks.

Wednesday, 27 August 2014

Methods to model UVM driver/sequence wrt pipelined or un-pipelined transactions

The article below gives good insight into how to write an OVM/UVM driver that models pipelined transactions. It is a good way of modeling pipelined behavior.

Courtesy: Verification Academy

original link: https://verificationacademy.com/cookbook/ovm/driver/pipelined


In a pipelined bus protocol, a data transfer is broken down into two or more phases which are executed one after the other, often using different groups of signals on the bus. This type of protocol allows several transfers to be in progress at the same time, with each transfer occupying one stage of the pipeline. The AMBA AHB bus is an example of a pipelined bus: it has two phases, the address phase and the data phase. During the address phase, the address and the bus control information, such as the opcode, are set up by the host, and then during the data phase the data transfer between the target and the host takes place. Whilst the data phase for one transfer is taking place on the second stage of the pipeline, the address phase for the next cycle can be taking place on the first stage of the pipeline. Other protocols such as OCP use more phases.
A pipelined protocol has the potential to increase the bandwidth of a system since, provided the pipeline is kept full, it increases the number of transfers that can take place over a given number of clock cycles. Using a pipeline also relaxes the timing requirements for target devices since it gives them extra time to decode and respond to a host access.
A pipelined protocol could be modelled with a simple bidirectional style, whereby the sequence sends a sequence item to the driver and the driver unblocks the sequence when it has completed the bus transaction. In reality, most I/O and register style accesses take place in this way. The drawback is that it lowers the bandwidth of the bus and does not stress test it. In order to implement a pipelined sequence-driver combination, there are a number of design considerations that need to be taken into account in order to support fully pipelined transfers:
  • Driver Implementation - The driver needs to have multiple threads running, each thread needs to take a sequence item and take it through each of the pipeline stages.
  • Keeping the pipeline full - The driver needs to unblock the sequencer to get the next sequence item so that the pipeline can be kept full
  • Sequence Implementation - The sequence needs to have separate stimulus generation and response threads. The stimulus generation thread needs to continually send new bus transactions to the driver to keep the pipeline full.

Recommended Implementation Pattern Using get and put

The most straight-forward way to model a pipelined protocol with a sequence and a driver is to use the get() and put() methods from the driver-sequencer API.

Driver Implementation

In order to support pipelining, a driver needs to process multiple sequence_items concurrently. To achieve this, the driver's run method spawns a number of parallel threads, each of which takes a sequence item and executes it to completion on the bus. The number of threads required is equal to the number of stages in the pipeline. Each thread uses the get() method to acquire a new sequence item; this unblocks the sequencer and the finish_item() method in the sequence so that a new sequence item can be sent to the driver to fill the next stage of the pipeline.
In order to ensure that only one thread can call get() at a time, and also to ensure that only one thread attempts to drive the first phase of the bus cycle, a semaphore is used to lock access. The semaphore is grabbed at the start of the loop in the driver thread and is released at the end of the first phase, allowing another thread to grab the semaphore and take ownership.
At the end of the last phase in the bus cycle, the driver thread sends a response back to the sequence using the put() method. This returns the response to the originating sequence for processing.
In the code example a two stage pipeline is shown to illustrate the principles outlined.
//
// This class implements a pipelined driver
//
class mbus_pipelined_driver extends ovm_driver #(mbus_seq_item);
 
`ovm_component_utils(mbus_pipelined_driver)
 
virtual mbus_if MBUS;
 
function new(string name = "mbus_pipelined_driver", ovm_component parent = null);
  super.new(name, parent);
endfunction
 
// The two pipeline processes use a semaphore to ensure orderly execution
semaphore pipeline_lock = new(1);
//
// The run() method
//
// This spawns two parallel transfer threads, only one of
// which can be active during the cmd phase, thus implementing
// the pipeline
//
task run();
 
  @(posedge MBUS.MRESETN);
  @(posedge MBUS.MCLK);
 
  fork
    do_pipelined_transfer;
    do_pipelined_transfer;
  join
 
endtask
 
//
// This task has to be automatic because it is spawned
// in separate threads
//
task automatic do_pipelined_transfer;
  mbus_seq_item req;
 
  forever begin
    pipeline_lock.get();
    seq_item_port.get(req);
    accept_tr(req, $time);
    void'(begin_tr(req, "pipelined_driver"));
    MBUS.MADDR <= req.MADDR;
    MBUS.MREAD <= req.MREAD;
    MBUS.MOPCODE <= req.MOPCODE;
    @(posedge MBUS.MCLK);
    while(MBUS.MRDY != 1) begin
      @(posedge MBUS.MCLK);
    end
    // End of command phase:
    // - unlock pipeline semaphore
    pipeline_lock.put();
    // Complete the data phase
    if(req.MREAD == 1) begin
      @(posedge MBUS.MCLK);
      while(MBUS.MRDY != 1) begin
        @(posedge MBUS.MCLK);
      end
      req.MRESP = MBUS.MRESP;
      req.MRDATA = MBUS.MRDATA;
    end
    else begin
      MBUS.MWDATA <= req.MWDATA;
      @(posedge MBUS.MCLK);
      while(MBUS.MRDY != 1) begin
        @(posedge MBUS.MCLK);
      end
      req.MRESP = MBUS.MRESP;
    end
    // Return the request as a response
    seq_item_port.put(req);
    end_tr(req);
  end
endtask: do_pipelined_transfer
 
endclass: mbus_pipelined_driver

Sequence Implementation

Unpipelined Accesses

Most of the time unpipelined transfers are required, since typical bus traffic emulates what a software program does, which is to access single locations: for instance, using the value read back from one location to determine what to do next in terms of reading or writing other locations.
In order to implement an unpipelined sequence that would work with the pipelined driver, the body() method would call start_item(), finish_item() and get_response() methods in sequence. The get_response() method blocks until the driver sends a response using its put() method at the end of the bus cycle. The following code example illustrates this:
//
// This sequence shows how a series of unpipelined accesses to
// the bus would work. The sequence waits for each item to finish
// before starting the next.
//
class mbus_unpipelined_seq extends ovm_sequence #(mbus_seq_item);
 
`ovm_object_utils(mbus_unpipelined_seq)
 
logic[31:0] addr[10]; // To save addresses
logic[31:0] data[10]; // To save data for checking
 
int error_count;
 
function new(string name = "mbus_unpipelined_seq");
  super.new(name);
endfunction
 
task body;
 
  mbus_seq_item req = mbus_seq_item::type_id::create("req");
 
  error_count = 0;
  for(int i=0; i<10; i++) begin
    start_item(req);
    assert(req.randomize() with {MREAD == 0; MOPCODE == SINGLE; MADDR inside {[32'h0010_0000:32'h001F_FFFC]};});
    addr[i] = req.MADDR;
    data[i] = req.MWDATA;
    finish_item(req);
    get_response(req);
  end
 
  foreach (addr[i]) begin
    start_item(req);
    req.MADDR = addr[i];
    req.MREAD = 1;
    finish_item(req);
    get_response(req);
    if(data[i] != req.MRDATA) begin
      error_count++;
      `ovm_error("body", $sformatf("@%0h Expected data:%0h Actual data:%0h", addr[i], data[i], req.MRDATA))
    end
  end
endtask: body
 
endclass: mbus_unpipelined_seq
Note: This example sequence has checking built in; this is to demonstrate how a read data value can be used. This specific type of check would normally be done using a scoreboard.

Pipelined Accesses

Pipelined accesses are primarily used to stress test the bus, but they require a different approach in the sequence. A pipelined sequence needs separate threads for generating the request sequence items and for handling the response sequence items.
The generation loop blocks on each finish_item() call until one of the threads in the driver completes a get() call. Once the generation loop is unblocked, it needs to generate a new item so that there is something for the next driver thread to get(). Note that a new request sequence item must be created on each iteration of the loop; if only one request item handle is reused, the driver would be attempting to execute its contents while the sequence is changing it.
In the example sequence there is no response handling; the assumption is that checks on data validity will be done by a scoreboard. However, with the get() and put() driver implementation, there is a response FIFO in the sequence which must be managed. In the example, the response handler is enabled using the use_response_handler() method, and the response_handler() function is then called every time a response is available, keeping the sequence's response FIFO empty. In this case the response handler keeps count of the number of transactions to ensure that the sequence only exits when the last transaction is complete.
//
// This is a pipelined version of the previous sequence with no blocking
// call to get_response();
// There is no attempt to check the data, this would be carried out
// by a scoreboard
//
class mbus_pipelined_seq extends ovm_sequence #(mbus_seq_item);
 
`ovm_object_utils(mbus_pipelined_seq)
 
logic[31:0] addr[10]; // To save addresses
int count; // To ensure that the sequence does not complete too early
 
function new(string name = "mbus_pipelined_seq");
  super.new(name);
endfunction
 
task body;
 
  mbus_seq_item req;
  use_response_handler(1);
  count = 0;
 
  for(int i=0; i<10; i++) begin
    // Create a new item for each transfer so that the driver is never
    // executing a handle that the sequence is about to change
    req = mbus_seq_item::type_id::create("req");
    start_item(req);
    assert(req.randomize() with {MREAD == 0; MOPCODE == SINGLE; MADDR inside {[32'h0010_0000:32'h001F_FFFC]};});
    addr[i] = req.MADDR;
    finish_item(req);
  end
 
  foreach (addr[i]) begin
    // A fresh item per transfer, as required for pipelined operation
    req = mbus_seq_item::type_id::create("req");
    start_item(req);
    req.MADDR = addr[i];
    req.MREAD = 1;
    finish_item(req);
  end
  // Do not end the sequence until the last req item is complete
  wait(count == 20);
endtask: body
 
// This response_handler function is enabled to keep the sequence response
// FIFO empty
function void response_handler(ovm_sequence_item response);
  count++;
endfunction: response_handler
 
endclass: mbus_pipelined_seq
If the sequence needs to handle responses, then the response handler function should be extended.
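Where responses do need to be processed, the handler can cast the response back to the transaction type and act on its fields. A minimal sketch, assuming the mbus_seq_item fields shown earlier; the read_data queue is a hypothetical member added purely for illustration:

```systemverilog
// Sketch only: an extended response handler that also collects read data.
// The read_data queue is a hypothetical addition, not part of the original.
logic [31:0] read_data[$];

function void response_handler(ovm_sequence_item response);
  mbus_seq_item rsp;
  count++;
  // Cast down from the base class to reach the mbus-specific fields
  if ($cast(rsp, response) && (rsp.MREAD == 1)) begin
    read_data.push_back(rsp.MRDATA);
  end
endfunction: response_handler
```

Note that response_handler() is a function, so it cannot block; any time-consuming processing of the collected data has to be done elsewhere, for example at the end of body().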

Alternative Implementation Pattern Using Events To Signal Completion

Adding Completion Events to sequence_items

In this implementation pattern, events are added to the sequence_item to provide a means of signalling from the driver to the sequence that the driver has completed a specific phase. In the example, an ovm_event_pool is used for the events, and two methods are provided to trigger, and to wait for, events in the pool:
//------------------------------------------------------------------------------
//
// The mbus_seq_item is designed to be used with a pipelined bus driver.
// It contains an event pool which is used to signal back to the
// sequence when the driver has completed different pipeline stages
//
class mbus_seq_item extends ovm_sequence_item;
 
// From the master to the slave
rand logic[31:0] MADDR;
rand logic[31:0] MWDATA;
rand logic MREAD;
rand mbus_opcode_e MOPCODE;
 
// Driven by the slave to the master
mbus_resp_e MRESP;
logic[31:0] MRDATA;
 
// Event pool:
ovm_event_pool events;
 
`ovm_object_utils(mbus_seq_item)
 
function new(string name = "mbus_seq_item");
  super.new(name);
  // Each item gets its own event pool so that the driver's triggers
  // relate to this specific transfer
  events = new("events");
endfunction
 
constraint addr_is_32 {MADDR[1:0] == 0;}
 
// Wait for an event - called by sequence
task wait_trigger(string evnt);
  ovm_event e = events.get(evnt);
  e.wait_trigger();
endtask: wait_trigger
 
// Trigger an event - called by driver
task trigger(string evnt);
  ovm_event e = events.get(evnt);
  e.trigger();
endtask: trigger
 
// do_copy(), do_compare() etc
 
 
endclass: mbus_seq_item

Driver Signalling Completion using sequence_item Events

The driver is almost identical to the get()/put() implementation, except that it triggers the phase completion events in the sequence item rather than using a put() call to signal to the sequence that a phase has completed and that response information is available via the sequence_item handle.
//
// This class implements a pipelined driver
//
class mbus_pipelined_driver extends ovm_driver #(mbus_seq_item);
 
`ovm_component_utils(mbus_pipelined_driver)
 
virtual mbus_if MBUS;
 
function new(string name = "mbus_pipelined_driver", ovm_component parent = null);
  super.new(name, parent);
endfunction
 
// the two pipeline processes use a semaphore to ensure orderly execution
semaphore pipeline_lock = new(1);
//
// The run() task
//
// This spawns two parallel transfer threads, only one of
// which can be active during the cmd phase, so implementing
// the pipeline
//
task run();
 
  @(posedge MBUS.MRESETN);
  @(posedge MBUS.MCLK);
 
  fork
    do_pipelined_transfer;
    do_pipelined_transfer;
  join
 
endtask
 
//
// This task has to be automatic because it is spawned
// in separate threads
//
task automatic do_pipelined_transfer;
  mbus_seq_item req;
 
  forever begin
    pipeline_lock.get();
    seq_item_port.get(req);
    accept_tr(req, $time);
    void'(begin_tr(req, "pipelined_driver"));
    MBUS.MADDR <= req.MADDR;
    MBUS.MREAD <= req.MREAD;
    MBUS.MOPCODE <= req.MOPCODE;
    @(posedge MBUS.MCLK);
    while(MBUS.MRDY != 1) begin
      @(posedge MBUS.MCLK);
    end
    // End of command phase:
    // - unlock pipeline semaphore
    // - signal CMD_DONE
    pipeline_lock.put();
    req.trigger("CMD_DONE");
    // Complete the data phase
    if(req.MREAD == 1) begin
      @(posedge MBUS.MCLK);
      while(MBUS.MRDY != 1) begin
        @(posedge MBUS.MCLK);
      end
      req.MRESP = MBUS.MRESP;
      req.MRDATA = MBUS.MRDATA;
    end
    else begin
      MBUS.MWDATA <= req.MWDATA;
      @(posedge MBUS.MCLK);
      while(MBUS.MRDY != 1) begin
        @(posedge MBUS.MCLK);
      end
      req.MRESP = MBUS.MRESP;
    end
    req.trigger("DATA_DONE");
    end_tr(req);
  end
endtask: do_pipelined_transfer
 
endclass: mbus_pipelined_driver
 

Unpipelined Access Sequences

Unpipelined accesses are made from sequences which block, after completing the finish_item() call, by waiting for the data phase completed event. This enables code in the sequence body() method to react to the data read back. An alternative way of implementing this type of sequence is to override the finish_item() method so that it does not return until the data phase completed event occurs.
//
// Task: finish_item
//
// Calls super.finish_item but then also waits for the item's data phase
// event. This is notified by the driver when it has completely finished
// processing the item. 
//
task finish_item( ovm_sequence_item item , int set_priority = -1 );
 
  mbus_seq_item req;
 
  // The "normal" finish_item()
  super.finish_item( item , set_priority );
  // Cast down from the base class to reach wait_trigger(), then
  // wait for the data phase to complete
  if (!$cast(req, item))
    `ovm_error("finish_item", "item is not an mbus_seq_item")
  else
    req.wait_trigger("DATA_DONE");
 
endtask
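The override above cannot live in ovm_sequence itself; it belongs in a user-defined sequence base class which the mbus sequences then extend. A minimal sketch, where the class name mbus_base_seq is an assumption for illustration, not from the original:

```systemverilog
// Hypothetical base class hosting the finish_item() override;
// pipelined and unpipelined mbus sequences would extend this class.
class mbus_base_seq extends ovm_sequence #(mbus_seq_item);

  `ovm_object_utils(mbus_base_seq)

  function new(string name = "mbus_base_seq");
    super.new(name);
  endfunction

  task finish_item(ovm_sequence_item item, int set_priority = -1);
    mbus_seq_item req;
    super.finish_item(item, set_priority);
    // Block until the driver signals the end of the data phase
    if ($cast(req, item))
      req.wait_trigger("DATA_DONE");
  endtask

endclass: mbus_base_seq
```

Sequences derived from this base class then call finish_item() as normal and automatically block until the driver has finished with the item.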
As in the previous unpipelined sequence example, the code shown includes a data integrity check; this is purely for illustrative purposes.
class mbus_unpipelined_seq extends ovm_sequence #(mbus_seq_item);
 
`ovm_object_utils(mbus_unpipelined_seq)
 
logic[31:0] addr[10]; // To save addresses
logic[31:0] data[10]; // To save data for checking
 
int error_count;
 
function new(string name = "mbus_unpipelined_seq");
  super.new(name);
endfunction
 
task body;
 
  mbus_seq_item req = mbus_seq_item::type_id::create("req");
  error_count = 0;
 
  for(int i=0; i<10; i++) begin
    start_item(req);
    assert(req.randomize() with {MREAD == 0; MOPCODE == SINGLE; MADDR inside {[32'h0010_0000:32'h001F_FFFC]};});
    addr[i] = req.MADDR;
    data[i] = req.MWDATA;
    finish_item(req);
    req.wait_trigger("DATA_DONE");
  end
 
  foreach(addr[i]) begin
    start_item(req);
    req.MADDR = addr[i];
    req.MREAD = 1;
    finish_item(req);
    req.wait_trigger("DATA_DONE");
    if(req.MRDATA != data[i]) begin
      error_count++;
      `ovm_error("body", $sformatf("@%0h Expected data:%0h Actual data:%0h", addr[i], data[i], req.MRDATA))
    end
  end
endtask: body
 
endclass: mbus_unpipelined_seq

Pipelined Access

The pipelined access sequence does not wait for the data phase completion event before generating the next sequence item. Unlike the get()/put() driver model, there is no response FIFO to manage, so in this respect this implementation model is more straightforward.
class mbus_pipelined_seq extends ovm_sequence #(mbus_seq_item);
 
`ovm_object_utils(mbus_pipelined_seq)
 
logic[31:0] addr[10]; // To save addresses
 
function new(string name = "mbus_pipelined_seq");
  super.new(name);
endfunction
 
task body;
 
  mbus_seq_item req;
 
  for(int i=0; i<10; i++) begin
    // Create a new item for each transfer so that the driver is never
    // executing a handle that the sequence is about to change
    req = mbus_seq_item::type_id::create("req");
    start_item(req);
    assert(req.randomize() with {MREAD == 0; MOPCODE == SINGLE; MADDR inside {[32'h0010_0000:32'h001F_FFFC]};});
    addr[i] = req.MADDR;
    finish_item(req);
  end
 
  foreach (addr[i]) begin
    // A fresh item per transfer, as required for pipelined operation
    req = mbus_seq_item::type_id::create("req");
    start_item(req);
    req.MADDR = addr[i];
    req.MREAD = 1;
    finish_item(req);
  end
endtask: body
 
endclass: mbus_pipelined_seq
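For completeness, the sequences above would typically be created and started on the agent's sequencer from a test. A minimal sketch, where the test class name and the m_seqr handle are assumptions for illustration, not part of the original example:

```systemverilog
// Sketch only: starting the pipelined sequence from a test's run task.
// The sequencer handle m_seqr is assumed to be assigned during build,
// e.g. from the mbus agent.
class mbus_pipelined_test extends ovm_test;

  `ovm_component_utils(mbus_pipelined_test)

  // Hypothetical handle to the agent's sequencer
  ovm_sequencer #(mbus_seq_item) m_seqr;

  function new(string name = "mbus_pipelined_test", ovm_component parent = null);
    super.new(name, parent);
  endfunction

  task run();
    mbus_pipelined_seq seq = mbus_pipelined_seq::type_id::create("seq");
    // start() only returns once body() completes, i.e. once all
    // pipelined transfers have finished on the bus
    seq.start(m_seqr);
    global_stop_request();
  endtask

endclass: mbus_pipelined_test
```

Because the pipelined sequence's body() blocks on its completion condition before returning, the test does not need any additional synchronisation with the driver before stopping the simulation.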