RE: [sv-bc] concurrent assertions in procedural code

From: Korchemny, Dmitry <dmitry.korchemny_at_.....>
Date: Thu Mar 13 2008 - 09:37:40 PDT
Hi Steven,

The issues you mentioned are serious, but I think that there are still
big advantages in placing concurrent assertions in procedural code. You
pointed out the following problems:

1. The clock inference rules from the always block are not intuitive.
2. Concurrent assertions don't follow intermediate (blocking)
assignments.
3. They show variable values one clock behind what it is natural to
expect.
4. Wrapping code fragments into subroutines cannot be straightforward in
the presence of concurrent assertions in the procedural code.

I will first try to show the need for specifying concurrent assertions
in procedural code, and then address your issues.

When using an assertion-based validation (ABV) methodology, writing
concurrent assertions inside procedural code becomes absolutely
critical. Consider the following simplified example:

always @(posedge clk) begin
  for (int i = 0; i < 2; i++) begin
    if (en[i]) begin
      c1[i] <= c2[i];
      p: assert property (c1[i] == $past(c2[i]));
    end
  end
end

If you want to rewrite it without embedding concurrent assertions into
procedural code, you will have to write something like:

always @(posedge clk) begin
  for (int i = 0; i < 2; i++) begin
    if (en[i]) begin
      c1[i] <= c2[i];
    end
  end
end

generate
  for (genvar i = 0; i < 2; i++) begin : g
    p: assert property (@(posedge clk) en[i] |-> c1[i] == $past(c2[i]));
  end
endgenerate

In real life, instead of en, c1, and c2, you will have something like

always @(posedge clk) begin
  for (int i = 0; i < 2; i++) begin
    if (ABCbus1Enabler[i]) begin
      NewValueOfSomeVector[i] <= OldValueOfSomeVector[i];
      p: assert property (NewValueOfSomeVector[i] ==
                          $past(OldValueOfSomeVector[i]));
    end
  end
end

and more complex conditional expressions, probably nested, with
different assertions in different conditional clauses. If people have to
rewrite this code manually to specify standalone assertions, their lives
will become very difficult. Note that in the above example you cannot
replace the concurrent assertions with immediate/deferred ones.

That is, the ability to write concurrent assertions in procedural code
is crucial.

Unfortunately, the problems you mentioned really exist, and I don't know
how to provide immediate solutions, though in the long term satisfactory
solutions should be possible. It should be taken into account that
concurrent assertions cannot be completely transparent to the user, and
that they require a basic understanding of the underlying simulation
mechanism.


Now I would like to address the specific problems:

1. The clock inference rules from the always block are not intuitive.

As far as I understand, the rationale behind these rules is to infer the
clock, and not the reset conditions, from an always or always_ff
procedure. E.g., in the following case:

always @(posedge clk or negedge rst) begin
	if (!rst) ...
	else ...
end

a clever synthesis tool would infer that the clock is clk and the reset
is rst, and it would be able to do so even if the event list is written
as

always @(negedge rst or posedge clk)

Unfortunately, there is no clear way to define such a rule for clock
inference for assertions without relying on a standard for SV synthesis,
which P1800 is trying to avoid. Therefore the clock inference rule for
assertions is defined simply as taking the first event of this form in
the list. This definition is old; it already exists in LRM'05. Several
clarifications have been made recently, but they still refer to the
first event of a specific kind in the list.

2. Concurrent assertions don't follow intermediate (blocking)
assignments.

This is also a matter of definition, and this definition was accepted a
long time ago. The rationale is that with temporal assertions you
usually don't want to check states from the middle of one simulation
tick until the middle of another; rather, you want to sample the
simulation state either at the beginning of a simulation tick, as
defined in the LRM, or at the end of a simulation tick. Your example
provides a use case where the latter behavior is not intuitive.

I don't think this issue can be addressed right now; users just need to
know what they are doing. Nor do I see how writing standalone assertions
solves the problem: you still have to rewrite your assertion manually,
in your case by explicitly using {sel1, sel0} instead of a. You would
just have to duplicate large parts of your code.
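The mismatch itself can be modeled in a few lines of Python. This is a
minimal sketch (the dictionary-based state and function name are
illustrative assumptions, not any tool's implementation) of
Preponed-region sampling in your case example: the procedural code sees
the freshly computed a, while the assertion evaluates against the
snapshot taken at the start of the time step.

```python
# Why the embedded assertion sees a stale value: concurrent assertions
# evaluate against values sampled in the Preponed region, i.e. the
# values variables held at the start of the time step, not the values
# blocking assignments produce mid-step.

def simulate_tick(state, sel1, sel0):
    preponed = dict(state)          # snapshot taken before the step runs

    # Procedural code: blocking assignment, then a case on 'a'.
    state["a"] = (sel1 << 1) | sel0
    branch_taken = state["a"] == 1  # the procedure sees the new 'a'

    # The embedded concurrent assertion does NOT see the new 'a'; it is
    # evaluated later against the Preponed snapshot.
    assertion_sees = preponed["a"]
    return branch_taken, assertion_sees

taken, seen = simulate_tick({"a": 0}, sel1=0, sel0=1)
print(taken, seen)  # True 0: the branch is taken, but the assertion sees 0
```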

I can see two possible approaches to resolving the problem you pointed
out. One of them would be to introduce yet another kind of assertion,
similar to concurrent assertions but based on shadow variables. Your
example may be conceptually rewritten as:

  logic [1:0] a1_shadow, a2_shadow;
  always @(posedge mclk) begin
    a = {sel1, sel0};
    a1_shadow = a;
    case (a1_shadow)
      1:  begin q <= d1;
          r4_p: assert ??? (r4);
          end
      default: q1 <= d1;
    endcase
    a = something else;
    a2_shadow = a;
    case (a2_shadow)
    ...
  end

and an assertion of the new kind, "assert ???", would produce the
intuitive result (I am omitting here a discussion of shadow variable
initialization).
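In the same toy model as above, the shadow-variable idea can be
sketched as follows (all names are illustrative assumptions): the
procedural code copies a into a shadow variable at exactly the point
where each case statement consumes it, so even after a is reassigned
later in the block, the shadow still holds the value the hypothetical
"assert ???" should check against.

```python
# Shadow variables freeze the value a variable had at the point of use,
# so a later evaluation of the assertion can still see the intuitive
# value even though 'a' itself has moved on.

def tick(state, sel1, sel0):
    state["a"] = (sel1 << 1) | sel0   # a = {sel1, sel0};
    state["a1_shadow"] = state["a"]   # snapshot at the first case statement
    # ... first case statement and its assertion use a1_shadow ...
    state["a"] = 3                    # stand-in for "a = something else;"
    state["a2_shadow"] = state["a"]   # snapshot at the second case statement

state = {"a": 0, "a1_shadow": 0, "a2_shadow": 0}
tick(state, sel1=0, sel0=1)
print(state["a"], state["a1_shadow"], state["a2_shadow"])
# -> 3 1 3: 'a' has been overwritten, but a1_shadow keeps the value
#    the first case statement actually switched on
```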

Another approach would be not to introduce a new kind of assertion, but
to introduce a pseudo-function $shadow, to be used in a checker
declaration, that would return a reference to an appropriate shadow
variable. E.g., in your example, when using a checker instead of the
assertion, one could write:

checker r4(..., en = $shadow($inferred_enable));

This would be transparent to the user and would produce the expected
result.

3. They show variable values one clock behind what it is natural to
expect.

I don't see any feasible solution here. This is common for all
concurrent assertions, both standalone and embedded. The users will
eventually get used to this situation.

4. Wrapping code fragments into subroutines cannot be straightforward in
the presence of concurrent assertions in the procedural code.

I would prefer to be able to specify concurrent assertions in functions,
but I don't see any immediate solution for this. It should be noted,
however, that immediate/deferred assertions should be sufficient in
functions; concurrent assertions are temporal, and there is no
compelling reason to put them in functions or in the middle of code to
be wrapped later into a function. If a concurrent assertion is extracted
into a function, in most cases this indicates that it was not written in
an appropriate place, and the compilation error will only help to clean
up the design.

To summarize: the problems you raised are tough, and some of them may be
addressed in the long term. However, IMO avoiding assertion embedding in
the code now does not solve any problem, but only results in code
duplication, which makes the code more error-prone and less
maintainable.

Thanks,
Dmitry

-----Original Message-----
From: owner-sv-bc@server.eda.org [mailto:owner-sv-bc@server.eda.org] On
Behalf Of Steven Sharp
Sent: Saturday, March 08, 2008 4:41 AM
To: sv-bc@server.eda.org
Subject: [sv-bc] concurrent assertions in procedural code

My biggest concerns with checkers are their interactions with procedural
code.  These concerns do not apply just to checkers, but in general to
concurrent assertions in procedural code (and extending this further in
proposals such as Mantis 1995).

I have heard a number of members of this and other SV committees express
surprise when they heard that this was allowed, and the intended
semantics.
I was surprised and concerned when I heard about it.  I would like to
describe some of my concerns here.

The starting point of this was in IEEE Std 1800-2005 subclause 17.13.5
"Embedding concurrent assertions in procedural code".  It may look
rather
innocuous on first reading, but there are significant issues with it.

From the start it assumes that all procedural code is written in such a
way that cycle semantics can be inferred from it, which is an invalid
assumption.  It talks about "the clock from the event control of an
always block."  Note that it just assumes that there is exactly one
event control in an always block.  There appears to be another
assumption
that this event control must appear at the start of the always block,
though this is not as clear.

I understand that this is being clarified for the next revision of the
standard.  Rules are being created for when a clock should be inferred
and what to do if it cannot (presumably an error if there is a
concurrent
assertion in the always block).  From the viewpoint of the language in
general, such rules are arbitrary, and are a kludge.  Even from a
synthesis
view, these rules are unlikely to match those of any specific synthesis
tool.  Such rules would be more justifiable if they started by being
restricted to an always_ff construct, and the LRM defined restrictions
for an always_ff that ensured a clock could always be inferred.

A more serious concern is that the concurrent assertion is embedded in
the procedural code, but it does not execute when the flow of execution
reaches it.  It can refer to the same variables as the procedural code
around it, but it is not referencing the same values for those variables
as the procedural code.  Instead, it is executing in the Observed region
and referencing values sampled in the Preponed region.  This is
inconsistent
with anything else in procedural code, and seems likely to lead to much 
confusion and many mistakes.

As an example, the procedural code may assign a value to a variable, and
then have a concurrent assertion that appears to use that value.  But in
fact it will use a sampled value of that variable.  If the variable
value
changes again before it is next sampled, the assertion will never see
that
value.  Even if it doesn't, the assertion is using a value that is
running
one clock behind the rest of the procedural code it is in.

Assertion writers may take care to avoid referring to such intermediate
variables explicitly in assertions.  But what about inferred enabling
conditions?  One of the advantages of putting these assertions inside
procedural code seems to be that the enabling conditions will be
inferred
automatically from the procedural if and case conditions.  Just write
normal procedural code, and the inference process will take care of
the enabling conditions.  But what if those enabling conditions turn
out to be intermediate variables?  If you aren't explicitly specifying
the conditions, but are trusting the implicit rules, you may not notice
if this happens.

Take the case-statement example at the end of the subclause.  Suppose it
were written as

  always @(posedge mclk) begin
    a = {sel1, sel0};
    case (a)
      1:  begin q <= d1;
          r4_p: assert property (r4);
          end
      default: q1 <= d1;
    endcase
    a = something else;
    case (a)
    ...
  end
  
If I understand things correctly, this doesn't work as you would expect.
It works differently than if you had written "case ({sel1, sel0})".  The
assertion will sample the value of 'a' in the Preponed region, not at
the
point where execution reaches the concurrent assertion, where its value
has actually been computed.  So the assertion will have a stale value,
unless of course the value gets changed later in the procedural code, in
which case the assertion doesn't even get a stale value.

This looks like a serious problem to me.  You may be able to spot it
fairly easily in a simple case like this one.  But the advantages of
embedding assertions seem to be closeness to the related procedural
code and not having to explicitly specify the clock and enables.  For
those advantages to be significant, the procedural code must be large
and the enables complex.  So this problem will not be easy to spot.
Even if you are careful when you insert the assertion, somebody may
come along later and modify some of the procedural code to insert an
intermediate variable.

This all gets worse if Mantis 1995 is included.  My example of using an
intermediate variable and then re-using it for another value later may
have seemed unusual.  But if you are writing code inside a loop, it is
normal to have an intermediate variable that gets re-used each time
around
the loop.

Another minor issue I have with this involves the use of tasks and
functions.  Normally you can take procedural code and modularize it,
moving pieces into tasks and functions.  But if the procedural code
contains concurrent assertions, it cannot be moved into a subroutine.
There is no way to infer clocks and enables for a concurrent assertion
in a subroutine.  So the use of concurrent assertions in procedural
code will prevent modularizing it.  To me, the fact that procedural
code can normally be moved into tasks, but concurrent assertions cannot,
is another indication that concurrent assertions do not belong in
procedural code.

Steven Sharp
sharp@cadence.com


-- 
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.

---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.


Received on Thu, 13 Mar 2008 18:37:40 +0200

This archive was generated by hypermail 2.1.8 : Thu Mar 13 2008 - 09:54:39 PDT