Re: [sv-ec] RE: [sv-bc] Resolving name resolution

From: Gordon Vreugdenhil <gordonv_at_.....>
Date: Tue Sep 04 2007 - 16:00:23 PDT
To expand on this a bit (well, maybe more than a bit), it
isn't just inline constraints that I have problems with.
I also think that examples such as:

    module m #(parameter type T = int);
       int x;
       class C extends T;
          function int get_from_env();
             return x;
          endfunction
       endclass
    endmodule

are going to be a real issue.  Mark's approach
defers binding "x" until elaboration time.  I want
to make sure that we have enough information at
compile time to know whether "x" will bind to
"super.x" or to "m.x".
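To make the ambiguity concrete, consider a sketch with two
hypothetical base classes (names are purely illustrative):

    class no_x;            // defines no "x"
    endclass
    class has_x;
       int x;              // defines an "x"
    endclass

    module m #(parameter type T = no_x);
       int x;
       class C extends T;
          function int get_from_env();
             return x;     // with T = no_x : m.x?
          endfunction      // with T = has_x: super.x?
       endclass
    endmodule

If binding is deferred to elaboration, the same source line
means two different things depending on the override of T.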

Mark had interpreted my position to mean that ALL
bindings (including through dotted names) had to resolve
early.  That is not what I was saying.

The basic point that I have been trying to argue is
based on what kind of local reasoning about name binding
is possible within the language.

If you explicitly have a hierarchical name, one can
inspect the local module and determine that you are
addressing something out of the module.  That reference
is an implicit part of the user's mental specification.

If you have something like:
    module m #(parameter type T = int);
        T x;
        initial x.a = 5;
    endmodule

You are still making a very explicit local specification
that the type T must admit ".a".  I am perfectly happy
to have such code in 1800.
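For instance, any actual parameter for T then has to supply an
".a" member; a struct type would do (illustrative only):

    typedef struct { int a; } pair_t;

    module top;
       m #(.T(pair_t)) u1 ();  // legal: pair_t admits ".a"
    endmodule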


However, when you have inline constraints or extend a
type parameter, you lose all locality of invariants.

Given my earlier example:

    module m #(parameter type T = int);
       int x;
       class C extends T;
          function int get_from_env();
             return x;
          endfunction
       endclass
    endmodule


A user might *expect* that the local invariant for
"return x" is "return m.x" but since "T" is opaque,
the user has NO ability to reason about the nature
of T nor to what "x" will bind.  Mark finds this
acceptable; I don't.

Going down this path leads to a great deal of uncertainty
when reasoning about design behavior and composition.

I would much rather follow a direction that at least leaves
a reasonable path to adding proper specifications to the
language, rather than relying on such very dynamic name
binding.

This ties directly into the import issues -- in Mark's
view if we have:

    package pkg;
       int x;
    endpackage
    module m #(parameter type T = int);
       import pkg::*;
       class C extends T;
          function int get_from_env();
             return x;
          endfunction
       endclass

       int x;
    endmodule

then even the error regarding a package name import
conflict can't be reported until elaboration when
the base type T is fully known.  Such a possible
conflict can lurk for a very long time until someone
finally removes "x" from some base class.  This
is likely going to require extreme care in designs,
particularly in IP that can live for a long time
and be used in circumstances that the user might
not have guarded against.  It would require full
qualification of *every* non-inherited name to be
safe and, in some situations, it isn't even clear
that such references would be legal (e.g.
qualified names for the formals of automatic routines).
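To sketch how such a conflict can lurk (the base class here
is hypothetical):

    class base;
       int x;   // while this member exists, "x" in C binds to
    endclass    // super.x and the wildcard import is never used

    // m #(.T(base)) u1 ();
    //
    // Now remove base::x: "x" in C falls through to the module
    // scope, where wildcard-imported pkg::x collides with the
    // later declaration of m.x -- an error that only surfaces
    // at elaboration, possibly years into the IP's life.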


So we have both the possibility of very late errors
for name *conflicts* (not just existence) and the
possibility of surprising bindings.  These are both
"names coming into existence" and "names going out
of existence" issues and the combination makes the design
composition space very fragile.

I think that permitting those in the language leads down
a very undesirable path.


I understand Mark's position (likely pragmatic as well as
philosophical) on the inline constraint change.  There is
an explicit statement in the LRM that requires, in that
scenario, the kind of dynamic binding that I find
objectionable -- the example of:

    function void f (int x);
       void'(some_object.randomize() with { x < y; });
    endfunction

which, given imports, obviously has all the same pitfalls
I raise above.

My suggestion has been that, *for this specific case*, we
allow an "item." prefix to be used within the inline
constraint to denote a required binding into the class.

That has a couple of benefits:
   1) it allows users to move to a more explicit syntax,
      particularly if the committee indicates that the
      implicit form may be subject to deprecation
   2) it allows for experience in highly dynamic systems
      to develop and inform (1)
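As a sketch, the explicit form might look like this (the
"item." prefix is my proposal, not current LRM syntax, and
Packet is illustrative):

    class Packet;
       rand int x;
       int y;
    endclass

    function void f (Packet p, int x);
       // implicit form: does "x" mean the formal or Packet::x?
       void'(p.randomize() with { x < y; });
       // proposed explicit form: item.x must denote Packet::x
       void'(p.randomize() with { item.x < item.y; });
    endfunction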

In fully static type systems, the inline constraint binding
is determinable at compile time and systems can warn about
potentially "surprising" bindings to ensure that users
understand the binding.  This would also make a good
lint-style check.

It is in the dynamic cases that I think we're playing with
fire and, to be blunt, I don't think that anyone has
much implementation experience in dynamic circumstances
involving inline constraints.  That makes it difficult
to argue that there is too much legacy code using such
dynamic inheritance to permit adding reasonable
restrictions.  And, as I've argued earlier, there is
ample precedent for such a change, even if it is
a compatibility issue.


I think (hope?) that both my position and Mark's are
reasonably clear.  I'd be happy to answer further
clarification questions, but I think it is very important
to hear from others, so for a while I'm going to try to
respond only to clarifications.

Gord.
-- 
--------------------------------------------------------------------
Gordon Vreugdenhil                                503-685-0808
Model Technology (Mentor Graphics)                gordonv@model.com


Received on Tue Sep 4 16:39:39 2007
