Re: [sv-bc] Question on ?: with "any" data type

From: Greg Jaxon <Greg.Jaxon_at_.....>
Date: Thu Feb 16 2006 - 12:41:57 PST
Thanks for your thoughts here; I feel a proposal coming on...
but I think a few issues need just a bit more discussion.

1) Blending of unpacked structures: I don't see any reason
    to impose "memberwise" blending as the default; in fact,
    that weakens the type system pretty dramatically.  Perhaps
    "bind" could overload the "blending" function so the
    unpacked struct types have some say in the production of
    new objects of their type.  That may require an error,
    rather than any default behavior, for 'X ? UPS1 : UPS2.
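
    For concreteness, a minimal sketch of the case in question
    (member names and widths are hypothetical):

        typedef struct { logic [3:0] a; logic [3:0] b; } UPS_t;
        UPS_t UPS1, UPS2, r;
        logic sel;

        r = sel ? UPS1 : UPS2;  // if sel is 'X: a memberwise blend
                                // of a and b, or a type error?

    Memberwise blending would quietly manufacture a struct value
    that neither arm ever held, which is exactly the weakening
    I mean.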

2) Result type for unpacked arrays.  This is not a big
    issue, since only $type() lets you inspect this result.
    The question, of course, is what the lower and upper
    bounds are on (all) the array dimensions.
    THEN and ELSE types need only be conformable, so one
    right answer might be that all the bounds become canonical
    ([0:width-1] or [width-1:0], for unpacked or packed).
    That's the only answer available for $type( A ? B : C ).

    ?: on unpacked arrays otherwise must occur in an
    assignment-like context, where there is a context-
    determined unpacked array type involved already.  This
    type would, e.g., inform any assignment patterns in
    the THEN or ELSE expressions.  Ordinarily that would
    be the result type.  But if you try to observe it
    with $type(), you perturb the context and get a different
    answer ;-)
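
    A sketch of that perturbation (the array types here are
    hypothetical):

        typedef logic [7:0] row_t;
        row_t B [1:8];
        row_t C [8:1];    // conformable with B, different bounds
        row_t D [3:10];
        logic sel;

        D = sel ? B : C;  // assignment-like context: the result
                          // takes D's unpacked array type
        // $type( sel ? B : C ) has no such context to consult,
        // so canonical bounds ([0:7]) are the only answer left.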

3) enumVAR = test ? enumTAG1 : enumTAG2;
    No user is going to appreciate a type mismatch error here.
    $cast( enumVAR, test ? enumTAG1 : enumTAG2 )  is not
    yet a widely implemented or used language feature
    (and I think it's pretty ugly).

    However, we have to handle
    intVAR = test ? enumTAG1 : enumTAG2  and
    '0 + (test ? enumTAG1 : enumTAG2), which both remove the
    enum type and probably expect bitwise blending.

    These observations are driving in the direction of
    another context-based rule, aren't they?  I hate to
    say it, but it looks like the LHS enum type informs
    the assignment-like context of these THEN and ELSE
    expressions, and hence the choice of blending function
    to apply when the test result is unknown.  I observe
    that an implementation that gets assignment patterns
    right here would have to work hard to avoid reaching
    this same conclusion.
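
    Pulling those cases together (the encodings are hypothetical,
    chosen so the two blends differ):

        typedef enum logic [1:0]
            { enumTAG1 = 2'b00, enumTAG2 = 2'b11 } enum_t;
        enum_t enumVAR;
        int    intVAR;
        logic  test;

        enumVAR = test ? enumTAG1 : enumTAG2;
            // LHS enum type informs the context: enum-aware
            // blending (or a coarse 'X) when test is unknown
        intVAR  = test ? enumTAG1 : enumTAG2;
        intVAR  = '0 + (test ? enumTAG1 : enumTAG2);
            // enum-ness removed: bitwise blending yields 2'bxx
            // in the expression (which the 2-state int then
            // coerces to zero)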

    Enums were made for FSM states.  The differences
    between an 'X state and a bitwise-blended state
    must be pretty profound in simulations.  But which
    direction is conservative?  A coarse 'X is honest:
    you don't yet know which state to be in.
    But the hardware you're simulating may already know
    a few bits because it cannot implement coarse-grain
    'X propagation (economically, anyway).
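
    A concrete contrast (state encodings hypothetical):

        typedef enum logic [2:0] { S_A = 3'b001, S_B = 3'b011 } fsm_t;
        // with the test unknown:
        //   coarse 'X     -> 3'bxxx  (honest: state not yet known)
        //   bitwise blend -> 3'b0x1  (the bits on which S_A and
        //                             S_B agree are already "known")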

    Semantically conservative (coarse 'X) and
    accuracy-conserving (bitwise blending) answers have to be a
    matter of the simulation designer's choice.  SV
    provides semantic strong typing to enable one side
    of that choice.  Designers are free to use raw
    4-state vectors for more accuracy.  We're not here
    to set coding policy, only to enable whatever policies
    the designers wish to self-impose.  The word may go
    out that using enums makes some insane designs look
    like they're sane.  There might be a subspecialist
    who does "enum-removal"... but isn't this why we have
    language?  To express designs en route to reality -
    even if they aren't 100% there yet?

Just pushing the definitions around as always... these
thoughts don't represent a Synopsys position, product
deliverables, attitudes, or anything except my attempts
to grasp the design problem and wrestle it to the ground.

Steven Sharp wrote:
>>From: Greg Jaxon <Greg.Jaxon@synopsys.com>
> 
> 
>>  I think P1800 section 8.18 extends ?: to non-integral types.
>>I too have some problems with that section's wording.
>>One thing I find ambiguous is that it does not
>>precisely specify a result type that is statically determinable
>>independent of the value of the conditional expression.
> 
> 
> You are right that the rules for the result type are not clearly
> stated.  However, they are implied by the bulleted rules.  If
> either or both of the expressions are integral type, then the
> operation proceeds as defined, so the result would be of integral
> type.  You kind of have to assume the result is a generic vector,
> not a specific integral type.
> 
> In all other cases, the expressions must be the same type.  We must
> assume that the result will be of that type, though it is not actually
> stated.
> 
> The rules for enums are unclear.  More on that later.
> 
> 
> 
>>  Another problem is that my copy (which may be old) says the
>>blending of unpacked types is done element-by-element.
> 
> 
> That is what the new copies say also.
> 
> 
> 
>>I'm not clear on how to do this for tagged union types,
> 
> 
> It isn't clear, but I can come up with a scheme that works.
> 
> First match the tags.  If those don't match, then use the default
> initialization value for the entire union, and you are done.  You
> can't combine the elements of the union if the tags don't match.
> If they do match, then that is the tag of the result.  And since both
> unions are now known to hold the same type, you can recurse and start
> combining their contents.
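
(Sketching your scheme concretely for a two-member union; the member
names are hypothetical, and note that (1'bx ? x : y) computes exactly
the bitwise blend:)

    typedef union tagged {
        logic [7:0] Byte;
        logic [7:0] Ctrl;
    } tu_t;

    function automatic tu_t blend (tu_t a, tu_t b);
        tu_t dflt;  // default-initialized: stands in for the
                    // whole-union default used when tags differ
        case (a) matches
            tagged Byte .x:
                case (b) matches
                    tagged Byte .y: return tagged Byte (1'bx ? x : y);
                    default:        return dflt;
                endcase
            tagged Ctrl .x:
                case (b) matches
                    tagged Ctrl .y: return tagged Ctrl (1'bx ? x : y);
                    default:        return dflt;
                endcase
        endcase
    endfunction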
> 
> 
>>for structure types (which don't have "elements"),
> 
> 
> It seems pretty obvious that you regard the members as the elements.
> 
> 
> 
>>or for multi-dimensional array types (which some view as having
>>subarray-shaped elements of one fewer dimension, and others view as
>>having a multitude of scalar elements).  Nor can I imagine any useful
>>application for propagating an X effect with a coarse granularity.
> 
> 
> Finer granularity gives less pessimistic results, but costs more.
> 
> 
>>There is a clear interest in letting enum type info propagate
>>up through the ?:, if no blending can happen.  Enum type info can
>>be implicitly discarded at any later stage where it becomes irrelevant.
> 
> 
> This is another case where the desire for strongly typed enums in
> testbenches conflicts with the desire for accurate enum behavior in
> the design itself (e.g. in state machines).
> 
> The behavior specified in the LRM for this is not entirely clear.
> Applying the bulleted rules in 8.18 to two enums, neither is technically
> an integral type.  So their types would have to match, and the result
> would be that type.  If the condition were unknown and their values
> did not match, the result would be the default initialization value for
> that type.  
> 
> This interpretation produces an enum result, which could then be assigned
> to an enum, in accordance with strong typing.  If we assume that a design
> will use a 4-state enum, then the default initialization value will be
> all-X.  This is more pessimistic than a bitwise blending, but at least
> it is pessimistic rather than optimistic.
> 
> However, I believe other LRM text disagrees with this interpretation.
> 4.10.4 says that an enum used as part of an expression is automatically
> cast to its base type.  So it has already been cast from an enum to an
> integral type before the ?: is applied.  Therefore, the result with an
> unknown condition will be bitwise blending.  This is a more accurate
> result for the design, but is no longer an enum.  Strong typing prevents
> direct assignment of the result to an enum variable, unless a cast is
> used.
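
(Under that 4.10.4 reading the testbench writer needs an explicit
cast back; the type and literal names here are hypothetical:)

    typedef enum logic [1:0] { IDLE = 2'b00, RUN = 2'b11 } st_t;
    st_t  ev;
    logic test;

    ev = st_t'(test ? IDLE : RUN);  // operands cast to the base
                                    // type before ?:, so the result
                                    // is a logic [1:0], not an st_t;
                                    // the cast restores strong typing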
> 
> 
> 
>>  User-defined structure types must match to be equivalent, so that
>>case is trivial.  There should be no "blending" effect between structures -
>>they are all or nothing THEN or ELSE.  If a 4-state conditional presents
>>an 'X, I'd be happy seeing a default or 'X-ful result.
> 
> 
> As I said before, I assume that element-by-element combining was intended
> to mean member-by-member combining for structs.  I prefer all-or-nothing
> for efficiency, but I doubt that was the intent.
> 
> 
> 
>>  Finally, I'd like this section to say something like:
>>
>>  Note: Bitwise blending of signal values is not part of the
>>  synthesizable subset because it presupposes the existence of
>>  hardware "X" detection.
> 
> 
> I don't see the need for this.  Presumably everything being synthesized
> is 4-state, or synthesis already can't match simulation behavior.  And
> if everything is 4-state, I think the bitwise blending is guaranteed
> to give pessimistic results.  To need this statement, there would have to
> be a case where the blending could give a definite 0 or 1, while the
> hardware could give the other.  I believe that the blending will not do
> this, but will give an X instead.  Can you give any counter-example?
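
(For reference, the bitwise blend that ?: performs on 4-state
operands when the condition is unknown:)

    0 blend 0 -> 0        1 blend 1 -> 1
    0 blend 1 -> x        anything blend x or z -> x

A definite bit emerges only where both arms already agree, in which
case the hardware's choice is the same bit, so the blend gives an X
rather than the opposite definite value.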
> 
> At any rate, this LRM does not specify the synthesizable subset of the
> language.  Any such statement would belong in another standard or
> sub-standard that did.
> 
> Steven Sharp
> sharp@cadence.com
> 