Re: [sv-bc] Suppression of unique/priority glitches

From: Gordon Vreugdenhil <gordonv_at_.....>
Date: Thu Oct 11 2007 - 16:11:02 PDT
Steven Sharp wrote:
[..]
> Is there a consideration I am missing here?

No.  When we all talked about the fork..join_none issues
for continuous assigns (which have only a weak relationship
to this), people were concerned about various things, and it
wasn't clear to me whether those concerns would apply here too.


> The scenarios I am worried about are the more unusual ones: procedural
> continuous assigns, nonblocking assigns, $monitor, and so forth.  The
> only one that seems likely to come up in reasonable designs are the
> nonblocking assigns, but the behavior has to be specified for everything.

I don't think NBAs are a concern, since the evaluation is still
in the originating process.  The procedural continuous assigns
and $strobe, $monitor, etc. are a bit troublesome, but I think
they are troublesome under almost any model that I've heard
people discuss.
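
For concreteness, here is the sort of cross-context case I have
in mind (purely a sketch; decode(), grant, etc. are made-up
names):

  module glitch_demo (input  logic       clk,
                      input  logic [1:0] grant,
                      output logic       q, y);

    function automatic logic decode (logic [1:0] onehot);
      unique case (onehot)
        2'b01: decode = 1'b0;
        2'b10: decode = 1'b1;  // 2'b00/2'b11 violate the unique check
      endcase
    endfunction

    always_ff @(posedge clk)
      q <= decode(grant);      // NBA: the RHS is evaluated in this
                               // process, so a violation is
                               // attributable to it

    assign y = decode(grant);  // continuous assign: the check runs
                               // in the assign's implicit process

    initial
      $monitor("y=%b", decode(grant));  // $monitor re-evaluation: no
                                        // obvious originating process
  endmodule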

Proposal (off the cuff) -- unique/priority assertions should be
ignored if they occur during the evaluation of a procedural
continuous assign or any system task (which would include
argument evaluation for user-defined system tasks as well).

Would anyone worry about those?
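
To make that concrete, reusing the hypothetical decode() from
the sketch above:

  module pca_demo (input logic en, input logic [1:0] grant);
    logic r;

    always @(en)
      if (en)
        assign r = decode(grant);  // procedural continuous assign:
                                   // re-evaluates whenever grant
                                   // changes; under the proposal a
                                   // violation here is ignored
      else
        deassign r;

    initial
      $strobe("r=%b", decode(grant));  // system task argument
                                       // evaluation: also ignored
  endmodule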



>> I would support the same semantics for continuous assigns, but
>> would certainly consider other alternatives that were well formed.
>> Would you suggest treating unique/priority violations in the
>> current immediate manner in such scenarios and differently in
>> true "process" contexts?
> 
> No, I don't think it would make sense to treat them differently.  As
> I said, I think it would be well-behaved.  And I'm not happy about
> having to treat violations differently in the same function depending
> on what kind of process it was called from.


I too am a bit worried about the potential cost of this, but I
think with a bit of thought it may not be too bad.  This could
be considered a specific kind of "disable" of a child
assertion-reporting thread.  The cost then could be borne only
by processes that have (or better, "have done") at least one
unique/priority evaluation.  Basically, I think the cost could
be made to scale with the number of processes that do such
checks.  I don't see anything fundamental that would force a
seriously inefficient approach.  Obviously there may be
noticeable impacts on one's ability to optimize designs in such
cases, but I don't think this kind of thing can be done "for
free".  I certainly hope that users can be made to understand
that this kind of feature has implementation implications and
that it won't be a free lunch in any implementation.
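
For reference, the kind of glitch being suppressed, and where
the "disable" would apply (again just a sketch, not a proposed
implementation):

  module settle_demo (input  logic a_sel, b_sel,
                      input  logic a, b,
                      output logic y);
    always_comb
      unique case (1'b1)
        a_sel: y = a;
        b_sel: y = b;
      endcase
    // If {a_sel, b_sel} settles 01 -> 10 across two delta cycles,
    // an intermediate evaluation sees 11 (or 00) and would report a
    // spurious violation.  Pending that report and discarding it if
    // the process re-evaluates later in the same time step suppresses
    // the glitch, and only processes that contain unique/priority
    // checks pay for the bookkeeping.
  endmodule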

Once again, I don't see much of a better alternative -- all
of the other approaches seem to have much deeper potential
analysis and scalability issues.  I'm afraid that there
are some fundamental costs with any approach to this kind
of suppression.

I would love to find a cheap, simple-to-describe, scalable
approach; if you have any ideas for recasting things so that
the suppression could be less intrusive, that would be great!

I suspect that the user community cares primarily about the
"normal" circumstances so we should explore whether reducing
the generality will be acceptable to users and will help with
describing what happens in the weird cases.

Gord.
-- 
--------------------------------------------------------------------
Gordon Vreugdenhil                                503-685-0808
Model Technology (Mentor Graphics)                gordonv@model.com

