RE: [sv-bc] Suppression of unique/priority glitches

From: Steven Sharp <sharp_at_.....>
Date: Fri Oct 12 2007 - 18:48:54 PDT
From: "Seligman, Erik" <erik.seligman@intel.com>

>Steven-- The way I was thinking about it, a 'process' begins at the
>start of an always_comb block or similar construct, and ends when that
>block is exited.  Any form of suspension, forking, or delay would leave
>it as the same process; reaching the end of the defining block is the
>way to end the process, at least for the purpose of the glitch-free
>assertions we're defining.   

This is a common incorrect mental model among users.  For code in the
usual synthesizable subset, the correct model gives behavior close
enough to this that it doesn't matter much.  But if we are going to
talk about general language rules, we need to use the correct model.

Let's start with an initial block.  That has a process that is created
before simulation starts, and starts executing at the beginning of the
statement inside it.  If it reaches the end of that statement, it
terminates.

Your mental model of an always block seems to be the following: When
the event control at the top is triggered, it creates a process that
runs until it reaches the bottom, and then terminates.  The next time
the event control triggers, a new process is created that does the
same thing.  This model is incorrect.

The correct model of an always block is that "always" is equivalent
to writing "initial forever".  It has a single process that is created
before simulation starts, and starts executing at the beginning of
the statement inside it.  If it reaches the end of that statement, it
loops back to the beginning of the statement and executes it again,
forever.  The event control at the top does not launch a new process;
it stops the existing process until the event control occurs.
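The equivalence can be shown directly. The two blocks below behave identically (signal names are illustrative):

```systemverilog
// A single process, created before simulation starts, that executes
// its statement in an endless loop.
always @(posedge clk) q <= d;

// The equivalent spelled-out form: the event control does not launch
// a new process, it suspends the one existing process.
initial
  forever
    @(posedge clk) q <= d;
```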

For synthesizable constructs, you don't see much difference.  Since
the event control is usually at the top, the process stops there as 
soon as it starts to execute.  So it doesn't do anything until the event
control triggers the first time.  And when it reaches the bottom, it
jumps to the top, where it immediately stops again.  Since it doesn't
do anything at the top before stopping, it looks the same as if it
had stopped at the bottom.  The behavior closely matches the incorrect
model.

There are still some visible differences that sometimes take users by
surprise.  One of them is if an event in the event control occurs when
the always block is not back at the event control yet (which can occur
either because the always block contains a delay control, or because
the always block itself triggered the event).  With the incorrect model,
users expect a second process to be created by the event, so that the
always block always responds when that event occurs.  Their mental model
is that "always @e" means "whenever e happens".

The correct model is that there is a single process.  If it is not waiting
at the event control, it will not wake up when the event happens.  There
is no such construct as "always @e".  There is an "always" with a statement
inside it, which will be executed forever.  That statement can start with
"@e", which makes the statement wait until e occurs.
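A sketch of the consequence (hypothetical names and times): if the single process is not waiting at the event control when the event fires, that occurrence is simply lost.

```systemverilog
// Because there is only one process, the trigger of e at time 15 is
// missed -- the process is suspended in the #20 delay at that moment,
// not waiting at @e.
event e;

always begin
  @e;                              // wait here until e is triggered
  $display("saw e at %0t", $time);
  #20;                             // triggers of e during this delay are lost
end

initial begin
  #10 -> e;   // seen: the process is waiting at @e
  #5  -> e;   // missed: the process is inside the #20 delay
end
```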

BTW, there is a slight difference for always_comb.  The LRM specifies that
this will execute once at time zero before it starts waiting for any
events.  The best way to view that is actually that the implicit event
control is at the bottom of the loop instead of the top.  With this view,
execution does stop at the end of the block.  However, it is still not
conceptually running off the end and stopping.  It is still stopping
inside the loop, just at the bottom.  When it wakes up, it will jump to the
top.  You can't prove that it works this way, since there is no way to
insert anything between the implicit event control and the bottom, but
it is the simplest and most consistent model of what is happening.
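One way to picture that model (a conceptual sketch, not a rewriting the LRM itself gives):

```systemverilog
always_comb
  y = a & b;

// ...behaves as if the implicit event control were written at the
// bottom of an infinite loop:
initial
  forever begin
    y = a & b;    // executes once at time zero...
    @(a or b);    // ...then stops here, at the bottom, until a or b changes
  end
```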

If you want to verify some of this, you can try an always block with
no event controls, just a display statement.  You will see it loop
forever, printing, with no trigger.  You can put an event control
somewhere other than at the top, and see the code before it executes
at the start of simulation, before it stops the first time.  You can
put multiple event controls in the block, so it stops and waits at
multiple points.  You could use a counter and conditionally execute
the event control only at a certain count, so that the always block runs
from top to bottom multiple times before finally stopping at the event
control.
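The last experiment might look like this (a sketch; names are illustrative):

```systemverilog
// The block runs from top to bottom three times at time zero,
// printing on each pass, before finally stopping at the event
// control.  Under the incorrect model this would be impossible,
// since each "process" would stop at the bottom.
int count = 0;

always begin
  $display("pass %0d at time %0t", count, $time);
  count++;
  if (count >= 3)
    @(posedge clk);   // stop and wait only from the third pass on
end
```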

When not constrained by synthesis restrictions, Verilog code does not
fit into your definition of "process evaluation".  Any rules that we
specify need to be based on how Verilog is actually defined to work.

If you try to base it on execution reaching the bottom or top of the
block, this doesn't work.  It might work for always_comb.  But for an
always block with an explicit event control at the top, it reaches
the bottom and then the top just before it stops, not just after it
starts.  So the violations would always get discarded in this case
with this definition.

Gord is suggesting that it be defined based on the process waking up,
independently of where that happens and what the process executed
before it suspended itself.  That definition is more general, and fits
with how Verilog is actually defined to work.  However, if an always
block contains multiple event controls and/or delay controls, that
could mean there are multiple "executions" between the top and the
bottom of the always block.
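For instance, under that wake-up-based definition, a block like the following (hypothetical) would count as two "executions" per trip from top to bottom:

```systemverilog
always begin
  @(posedge clk) a = b;   // the process wakes here: one "execution"
  @(negedge clk) c = a;   // and wakes again here: a second "execution"
end
```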


Steven Sharp
sharp@cadence.com


Received on Fri Oct 12 18:49:32 2007

This archive was generated by hypermail 2.1.8 : Fri Oct 12 2007 - 18:50:02 PDT