[sv-bc] Comments on P1800 Draft 4 initial ballot draft

From: Shalom Bresticker <Shalom.Bresticker_at_.....>
Date: Thu Mar 24 2005 - 01:16:45 PST
To the P1800 SV WG:

I don't have voting rights, but I want to submit comments as a non-voting
observer on the P1800 initial ballot (Draft 4).

I am concerned about one issue in the definition of P1800, where I think
the current definition is somewhat broken, and I discuss how it can be fixed.

This is the issue of the behavior at time 0 of initial, always, and
always_comb constructs, variable initializations, combinational UDPs,
continuous assignments, and port connections.

I emphasize that I am not comparing the P1800 definition to the
P1364-2001 definition. I do compare the P1800 definition to the way
existing implementations execute existing code, to what users expect
and want, and to what will be safest for them.

There may be more than one way to solve the problem.
The suggested solution may not be complete and/or
may not solve the problem completely. But I do believe
it would make the situation much better.
Better solutions are welcome.

=============================================================================
What does the D4 LRM say?

6.4 says about variable initializations:

"A variable can be declared with an initializer, for example: int i = 0;

In Verilog, an initialization value specified as part of the declaration
is executed as if the assignment were made from an initial block, after
simulation has started. Therefore, the initialization can cause an event
on that variable at simulation time zero.

In SystemVerilog, setting the initial value of a static variable as part
of the variable declaration (including static class members) shall occur
BEFORE any initial or always blocks are started, and so does NOT generate
an event. If an event is needed, an initial block should be used to
assign the initial values."
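
For illustration (a minimal example of my own, not from the LRM):

    int i = 5;       // P1800 D4: i gets 5 before any initial or always block
                     // starts, and NO event is generated on i
    int j;
    initial j = 5;   // this assignment executes during time 0 and DOES
                     // generate an event on j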


10.2 says about always_comb:

"The [always_comb] procedure is automatically triggered once at time
zero, AFTER all initial and always blocks have been started, so that the
outputs of the procedure are consistent with the inputs." and

"always_comb automatically executes once at time zero, whereas always @*
waits until a change occurs on a signal in the inferred sensitivity list."
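
Again for illustration (my own sketch, not from the LRM):

    bit   a;              // 2-state: 0 from the start, with no event on a
    logic b, c;
    always_comb b = ~a;   // executes once at time 0, so b becomes 1
    always @*   c = ~a;   // no event ever occurs on a, so c stays x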

===============================================================================

I present a brief and admittedly selective summary of the emails on this
subject since last September. Most are from the sv-bc mail list.
For brevity and sometimes emphasis, I sometimes rephrase the author's
words. I have tried to be fair.
I note the source so that others can review my interpretations of what
was written. Here and there I also inject some comments in the middle.
Since the discussion was long, the summary still comes out rather long,
unfortunately. Try to follow.

Summary of the problem and principles of the solution as I see it at the end.
You might want to skip to that before reviewing the detailed correspondence.


1. Steven Sharp raised the issue on Aug 31, 2004. He raised the issue as
SV initializer semantics being different from 1364-2001. As mentioned
above, my concern is different. I mention this source because this is how the
discussion started.


2. Brad Pierce, Sept 1:

The issue is that Verilog-2001 variable initializations are
nondeterministic. SV fixes this: Variable declaration assignments are
initialized before any initial constructs. This is totally backward compatible.


3. Jay Lawrence, Sept 1:

The new initialization semantic DOES create a backward compatibility
problem...
The argument made in favor of this change is that it simply makes the
use of variable initialization in a procedural context deterministic.
This argument has nothing to do with why we believe this is a
nonbackward compatible change. The problem with this change of
initialization is that in the Verilog-2001 method an event is generated.
In the SystemVerilog method, NO event is generated. This difference has
a severe impact on gate-level models and the behavior of continuous
assignments, not procedural contexts as argued.
...
                integer var_i = 1; // A variable with an initial value
                assign wire_i = var_i; // Continuously assign the wire
...
In Verilog 1364, the initial value on var_i is guaranteed to produce an
event. This event is critical because it causes the continuous
assignment to the wire wire_i to execute. Without this event, the
continuous assignment does not execute at time 0 and therefore the
initial value of the variable would not propagate to the wire, leaving
the wire at the default value of 32'bz. The exact same problem would
occur if a gate were substituted for the continuous assignment above.

In Verilog 1364 the code snippet above would produce a 1 on the wire_i,
in SystemVerilog a 32'bz would be produced. This is not a trivial
problem. The vast majority of Verilog modules have this style of code
wherein an internal value is calculated and stored in a register and
then the value is propagated either through a continuous assignment,
buffer, or port onto a wire. Any of these forms of interconnect would
not propagate the initial value in SystemVerilog. This would cause most
devices to propagate the default value of 'z' on a wire instead, leading
to catastrophic simulation failures.
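
[Shalom: For concreteness, Jay's scenario as a self-contained module (my own
sketch, not Jay's code):

    module top;
      integer var_i = 1;                  // variable with an initial value
      wire [31:0] wire_i;
      assign wire_i = var_i;              // continuous assignment driven by var_i
      initial #1 $display("wire_i = %h", wire_i);
      // 1364-2001 reading: the initializer generates an event, the continuous
      // assignment evaluates, and wire_i shows 00000001.
      // P1800 D4 reading: no event is generated, so the continuous assignment
      // is never triggered at time 0 and wire_i can remain zzzzzzzz.
    endmodule
]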


4. Cliff Cummings, Sept 1:

I believe Jay's analysis is not completely correct for the Verilog
execution.
...
I am pretty sure that at least xxx Verilog simulators (and perhaps all
simulators) execute initial
blocks and now declaration initializations AFTER all continuous
assignments, gates and always blocks become active, which is actually a
pretty smart thing to do and should probably be codified in the Verilog
and P1800 standards.
...
The fact that SystemVerilog initializes declaration variables BEFORE
continuous assignments and gates ... disagrees with how most if not all Verilog
simulators execute events at time 0 (and therefore probably a very bad
incompatibility).
...
Is it time to specify that declaration initialization happens in the
time-0 preponed region and then take the VHDL approach of requiring all
procedures, continuous assignments and gates to evaluate once at time-0?


5. Stuart Sutherland, Sept 1:
...
The LRM already resolves this issue.  If the design is dependent on a
time 0 event, then use an initial procedure instead of in-line initialization.

[Shalom: IMO, this is VERY bad. It says if the standard is broken, the
user has to use a workaround. Further, most users are not
scheduling/semantics/LRM experts, and they will always have to check whether
their designs depend on it.
Initial constructs are not synthesizable, either.]

On the other hand, SystemVerilog's rules for in-line initialization do ensure
race-free conditions between in-line initialization and time zero
assignments to a variable.
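
[Shalom: For example (my own sketch), the race in question:

    integer i = 1;    // declaration initializer
    initial i = 2;    // time-0 procedural assignment to the same variable
    // Under the Verilog-2001 rule (initializer executed as if from an initial
    // block), the final time-0 value of i depends on the order in which the
    // two assignments execute. Under the P1800 D4 rule, the initializer runs
    // first, so i is deterministically 2.
]
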
...
SystemVerilog rules for in-line initialization are
intuitive, and, in my opinion, the only right way simulation should behave.

I think there is a simple compromise that will make everyone happy --

I suggest that the fourth paragraph of Section 5.4 be changed FROM:

"In SystemVerilog, setting the initial value of a static variable as part
of the variable declaration (including static
class members) shall occur before any initial or always blocks are started,
and so does not generate an
event. If an event is needed, an initial block should be used to assign
the initial values."

TO (all caps used only to emphasize changes):

"In SystemVerilog, setting the initial value of a static variable as part
of the variable declaration (including static
class members) shall occur DURING SIMULATION TIME ZERO, before any
initial or always blocks are started."


6. Shalom Bresticker, Sept 2:

If the SV LRM needs to state that always_comb executes once at time 0, then I
would think it would need to say so for all other similar constructs, such as
continuous assignments, port connections, and primitive evaluations.


7. Shalom, Sept 2:

A related question:

When do supply0 and supply1 nets get their values?


8. Steven, Sept 2:

In the simulators I have access to (XL and NC), they transition from
X to 0 or 1 at time 0 during simulation.


9. Brad, Dec 3:

According to 3.5.3 of the following document (15 May 2003):

    http://www.eda.org/sv-ec/SV_3.1_Web/SVChairsChampionsResponse.pdf

"The latest example ... seems to imply that in order for the evaluation
of a continuous assignment, an event on the RHS must be generated,
including at time 0. However, a continuous assignment does not require
an event on its RHS. For example:

    module init;
      wire w;
      assign w = 1;
      initial #1 $display( "wire is %b", w );
    endmodule

The code above must also show '1' as the value of wire w, and no
event is generated by the constant 1.

At least two separate implementations of the SystemVerilog initialization
semantics exist and both are 100% backward compatible with respect to
continuous assignments."

[Shalom: I don't understand this response. 1364 does say that a
continuous assignment requires an event and 1800 does not change this (yet).
Further, if implementations exist (100% compatibility needs to be proved, by
the way, not just claimed), but they [have to] do something different from or
in addition to what the LRM says, that means that the LRM has a problem.]


10. Steven, Dec 3:

I believe that Verilog-XL gives special treatment to continuous
assignments where the RHS is a constant, and evaluates them specially.
For all others it relies on events to trigger them.  So the fact
that this particular case works in a simulator does not necessarily
imply that all cases where there is no event generated will work in
that simulator.


11. Shalom, Dec 6:

I think that first we should decide what is the Right Thing to do.
Only afterwards should we look at whether or not it is compatible with the past.

I feel the claim presented by Jay Lawrence is legitimate.

If there are implementations that "do it right", that does not prove that
the LRM gets it right. In fact, if implementations have to do something
additional in order for it to work, that is a sign that the LRM is not
complete.

I don't accept the approach of writing
"If an event is needed, an initial block should be used to assign the
initial values," unless there is no alternative.

That is doing something which you know is going to cause problems
and people are going to fall into traps.

Cliff wrote on Sep 1:

"I am pretty sure that at least xxx simulators (and perhaps all
simulators) execute initial blocks and now declaration initializations after all

continuous assignments, gates, and always blocks become active, which is
actually a pretty smart thing to do and probably should be codified."

This is opposite from what appears now in the 1800 LRM, that declarations
occur BEFORE always blocks.

Cliff also wrote,

"Is it time to specify that declaration initialization happens in the
time 0 preponed region and then take the VHDL approach of requiring all
procedures, continuous assignments and gates to evaluate once at time 0?"

Stu suggested that these declaration initializations occur "during
simulation time 0", although he left it as happening BEFORE initial and
always blocks, which I think does not solve the problem.

What Cliff writes about "always blocks" might not apply to all always
blocks.  While it certainly applies to always_comb and always @*, it might not
apply to always_ff and always_latch, and maybe not to a general always.

I previously noted that certain other constructs are equivalent to
continuous assignments, and therefore should be treated the same way,
namely always_comb, port connections, and combinational UDPs.

It seems to me that the current Verilog and SystemVerilog specifications
are buggy and cause problems for the user. Implementations have to do special
things not specified in the LRM in order for it to work as desired.
It should not be that way. The LRM should get it right.

I think Cliff's suggestion is the closest to being right and deserves
serious consideration.


12. Steven, Dec 8:

I suspect that the execution of (most) always blocks BEFORE initial
blocks may be the result of a common optimization.  The fact that combinational
always blocks will generally fall into this category gives a nice benefit
in event ordering.  It ensures that the always blocks will respond to the
initializers and have correct combinational output values starting from
time zero.  Some simulators might execute all always blocks BEFORE
initial blocks to get this benefit.  The 1364 LRM does not guarantee this,
but it allows it.

>This is opposite from what appears now in the 1800 LRM, that declarations
>occur BEFORE always blocks.

Which means that the 1800 LRM is forbidding simulators from using the
order that makes Verilog combinational always blocks work properly.

>"Is it time to specify that declaration initialization happens in the time 0
>preponed region and then take the VHDL approach of requiring all procedures,
>continuous assignments and gates to evaluate once at time 0?"

This could be done for continuous assignments and gates (and SystemVerilog
always_comb blocks), but not Verilog always blocks.  How can you define
what it means to "evaluate an always block" when they can contain event
controls and delay controls at arbitrary places, and the language
semantics require the processes to wait at those specific points in the code?
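
[Shalom: For example (my own illustration), a general always construct such as

    reg clk = 1'b0;
    reg enable;
    always begin
      #5 clk = ~clk;        // must suspend here for 5 time units
      @(posedge enable);    // then suspend here until an event occurs
    end

has no meaningful "evaluation once at time zero"; the semantics require it to
stop at the #5 and again at the @.]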

>I previously noted that certain other constructs are equivalent to
>continuous assignments, and therefore should be treated the same way,
>namely always_comb, port connections, and combinational UDPs.

Yes, those can be dealt with by requiring them to evaluate once at
time zero.  But Verilog combinational always blocks are just a subset
of general always blocks, and should behave consistently with how those
are defined.  There isn't a way to define "evaluation at time zero" for
always blocks in general.

I think the best thing you can define for them is to require them to
execute BEFORE initial blocks (and initializers) at time zero.  And the
P1800 LRM actually forbids that.


13. Michael McNamara, Dec 8:

I believe we should define what shall happen just before the big bang
of simulation; that the time has come to remove this useless ambiguity.

1: when execution begins, all of the wires are z and all of the
registers are x.  There is darkness on the land.

2: All processes that are in the active region are executed, and hence
become sensitized to any changes to the values of registers and wires.

3: at this point any tf_misc routines are called with
reason_begin_of_simulation, and the pli can execute, and the cli can
execute.

4: then the initial blocks are executed, in no particular order, and
an organized chaos emerges, transforming the values of the wires and
registers in the familiar dance we all recognize as SIMULATION!!!


14. Shalom, Dec 9:

I agree.


15. Mark Hartoog, Dec 9:

I think the issue here is that initial blocks execute in an arbitrary
order. There is no guarantee that all initial values will be set before
an initial block starts. For code like:

int x = 123;
int y;
initial y = x * 2;

We cannot be sure that 'x' will be initialized before the user-written
initial blocks start. Now a "reasonable" Verilog simulator probably
would always do this correctly, but it is less clear whether this is
true between different modules or program blocks.

I think the requirement is that all initial values must be assigned
throughout the design before any user initial block starts.

By the way, with the addition of 2 state variables, you have this problem
even without initial values. Consider:

int x;
logic [31:0] y;
logic [31:0] z;

always @(x) y = x;
always_comb z = x;

The always_comb block gets executed at time 0, so 'z' would be zero,
but the regular always block does not, so 'y' will be unknown.

Do you want the initial values of 2 state variables to also generate
events?


16. Shalom, Dec 9:

Steven,

When you write that always constructs should 'execute' BEFORE initial
blocks and initializations at time 0, that term 'execute' could be
misunderstood.

You mean if you have

always @(a or b) or always @(posedge clk), then you enter the always
and start waiting on the @(a or b) or @(posedge clk). For always_comb,
you mean to wait on the implicit sensitivity list.

Correct?


17. Steven, Dec 9:

Yes, misunderstandings about how always blocks are defined to work are
widespread enough that I should be careful to be clear.

>You mean if you have
>
>always @(a or b) or always @(posedge clk), then you enter the always
>and start waiting on the @(a or b) or @(posedge clk).

Yes.  The goal in executing the always blocks first is to get them
to reach the event control and start waiting there, so that if an
initial block causes one of those events, the always block will
be ready to respond.

The grouping in these partial examples might mislead someone into
thinking the event controls are somehow associated with the always,
when in fact they are associated with the statement that follows
the event control.  This is part of the most common misunderstanding
about always blocks.

Executing the always block means to start executing the statement
inside the always block (which might be a sequential block).  And
in these cases, the statement is preceded by an event control, so
execution would immediately suspend to wait for the event to occur.
If the statement were not preceded by an event control, execution
would continue until it reached an event control or delay control.

> For always_comb, you
>mean to wait on the implicit sensitivity list.

No, at least not the way you probably mean it.

The simplest way to view an always_comb, with its requirement to
execute the body once at time zero, is to recognize that this is
equivalent to putting the event control at the *bottom*, instead
of the top.

A typical Verilog combinational always block is coded like

always @(a)
  b = ~a;

which happens to be semantically equivalent to

initial
  forever begin
    @(a) b = ~a;
  end

A SystemVerilog always_comb, coded like

always_comb
  b = ~a;

is semantically almost equivalent to

initial
  forever begin
    b = ~a;
    @(a);
  end

Notice that here the event control appears at the bottom of
the loop implied by the always_comb.  This means that it will
execute the always block body once BEFORE it starts waiting for
input changes.  It is much like the difference between a while loop
and a do-while loop.  The fancy wording about "triggering once at
time zero" just comes down to this change in the position of the
implicit event control from the top to the bottom.

The reason I wrote "almost equivalent" instead of "equivalent", is
that always_comb also has a special rule requiring it to execute
*AFTER* all of the initial and always blocks.  This rule is not
actually needed to make it behave like combinational logic.  Putting
the event control at the bottom is sufficient to ensure that it will
behave correctly from time zero.  If it executed BEFORE the initial
blocks, the worst that could happen is that it evaluates again if an
initial block changes an input.  And if it really is combinational
logic, that should not cause any harm.

So anyway, executing an always_comb would not involve immediately
waiting on the implicit "sensitivity list" or event control.  It
would involve executing all of the statement inside the always_comb,
and then waiting on the implicit event control at the bottom.  But
an always_comb doesn't need to be executed BEFORE initial blocks
anyway.  It could be executed then, or when SystemVerilog says it
must be executed, and it will still work like combinational logic.

However, Verilog combinational always blocks, when coded the usual
way with the event control at the top, only work properly when inputs
change at time zero, if they start executing BEFORE the input changes.
It is the only way to make them act the way so many people believe
they do: sensitive to their inputs from the very start of simulation.
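
[Shalom: To make this concrete (my own sketch):

    reg a, b;
    initial a = 1'b1;      // time-0 change on the input
    always @(a) b = ~a;    // combinational always, event control at the top
    // If the always block reaches @(a) BEFORE the initial block assigns a,
    // it catches the event and b becomes 0 at time 0. If the initial block
    // runs first, the event is missed and b stays x until a changes again.
]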


18. Steven, Dec 9:

>I think the requirement is that all initial values must be assigned
>throughout the design before any user initial block starts.

That might be a desirable thing, and it might be reasonable to require
it. But it is definitely *not* desirable to have initial values assigned
BEFORE always blocks start.


>By the way, with the addition of 2 state variables, you have this problem
>even without initial values.

Yes, that is a problem when you take a language that was designed to work
properly with 4-state values and throw 2-state values in after the fact.
However, new SystemVerilog code has the option of using always_comb,
which will avoid this problem.

On the other hand, there is a lot of legacy Verilog code that doesn't use
always_comb.  It doesn't have a problem with 2-state variables, since
those didn't exist in Verilog either.  But it *does* have a problem if
the Verilog simulator used an execution order that made it work, but
SystemVerilog simulators are required to use an order that guarantees
that it won't work.

>Do you want the initial values of 2 state variables to also generate
>events?

We could discuss that, but it isn't necessary to making legacy Verilog
code work, since that code doesn't contain any 2-state variables.


19. Shalom, Dec 15:

You're correct about always_comb executing "once at time zero, after
all initial and always blocks have been started." I had forgotten that.
...
If we activated all the always blocks, including always_comb, at the start
(and the other stuff
we talked about, like continuous assignments, port connections, etc.)
and then did the initializations, wouldn't that be enough?
And then there would be no need to treat always_comb specially?
Conceptually, it does not make sense to treat always_comb and always @*
and continuous assignments differently.


20. Steven, Dec 15:

>If we activated all the always blocks, including always_comb, at the start
>(and the other stuff
>we talked about, like continuous assignments, port connections, etc.)
>and then did the initializations, wouldn't that be enough?

I am not clear whether you are suggesting that always_comb still evaluate
once at time zero.  If so, then yes, it is OK to execute them at the same
time as other always blocks.  However, I assume that you are suggesting
that they start execution early but not evaluate unconditionally at time zero.

It would cover most designs.  However, you could still contrive situations
where an always_comb block would not behave like combinational logic.  For
example, if the always block reads only constants, it has no sensitivity
and will never wake up.  Activating it early would not help.  It needs to
execute once unconditionally, like an always_comb does.
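
[Shalom: For example (my own sketch):

    logic [7:0] y, z;
    always @*   y = 8'hA5;  // inferred sensitivity list is empty: never executes
    always_comb z = 8'hA5;  // required to execute once at time 0: z becomes A5
]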

And while it would be weird, there is nothing to prevent someone from
replacing all of their initial blocks with always blocks that have the
same behavior (i.e. waiting on a constant at the end so that they don't
loop back).  In this case, the ordering requirements between initial and
always blocks would not ensure that combinational always blocks worked
properly.  This is an artificial situation, but there may be realistic
ones that have similar effects.  For example, always blocks are sometimes
used for clock generators, and there may be other situations where always
blocks are used to provide stimulus to combinational always blocks.
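
[Shalom: For example (my own sketch), a stimulus generator written as an always
block rather than an initial block:

    reg clk, q;
    always begin
      clk = 1'b0;             // time-0 assignment made from an always block
      forever #5 clk = ~clk;
    end
    always @(clk) q = ~clk;   // only sees the time-0 change on clk if it is
                              // already waiting at @(clk) when clk is assigned
]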

>And then there would be no need to treat always_comb specially?

I don't regard the "evaluation at time zero" part of always_comb as treating
them specially.  As I noted, it is just the same as regarding an always_comb
as an ordinary always block where the event control has been put at the
bottom.  Since the event control is implicit anyway, putting it there is
just as reasonable as putting it at the top.  I think it is the fault of the
LRM text for making this part of always_comb sound like special treatment.
In fact, they act like any other always block would if you put the event
control at the bottom.

The part about always_comb executing AFTER other blocks is special treatment,
and is also unnecessary to make them behave properly.  That part could be
eliminated.  If always blocks were required to execute early, always_comb
blocks could be executed at the same time.  The only effect might be that
they might get evaluated twice at time zero, but the final values would
still be the same.

>Conceptually, it does not make sense to treat always_comb and always @*
>and continuous assignments differently.

Well, unfortunately there are conceptual differences.  An always with an
@* at the top has an event control at a specified place in the procedural
code, and is required to stop and wait at that exact place.  It is a special
kind of event control, but it is still at a specific location.  The
advantage of always_comb is that there is no explicit event control, just
an implicit one that is part of the construct.  There is no syntax to say
where it must be inserted in the procedural code.  It is associated with
the always_comb looping construct as a whole, which surrounds the body of
the always_comb.  It can be inserted in the bottom part of that looping
construct just as readily as in the top part.
...
A continuous assignment also has implicit sensitivity, so it can also
be implemented to execute once and wait for input changes at the bottom
BEFORE looping back to the top to evaluate again.

This waiting at the bottom instead of the top makes both always_comb and
continuous assignments work properly like combinational logic, even if
the inputs change at the start of simulation or are constants.  The best
that can be done for ordinary always blocks is to execute them as early
as possible, hopefully BEFORE those initial input changes.  Unfortunately,
SystemVerilog forbids executing them BEFORE initializers, which could break
existing Verilog-2001 code.


21. Cliff, Jan 5, private email:

I believe the scheduling semantics has one gaping hole: it does not
describe in detail what should happen at time-0, and moving forward I think
this should be fixed. I take this into account in my comments below and I
even proposed a partial solution at the end of the quoted email (using new
SV event regions).
...
As a matter of fact, the standard does not say anything about execution of
continuous assignments at time-0, and in theory continuous assignments could
be activated AFTER all procedural blocks; the standard does not say whether
they would evaluate the RHS variables at time-0. (I believe continuous
assignments should evaluate the RHS variables at time-0, no matter what the
order of assignment initialization is; then they would always behave
correctly. This should be specified, and it would also ensure that variable
initialization would be recognized by continuous assignments.)

To me, continuous assignments represent combinational logic and they should
behave that way at time-0.
...
>Since the order is not specified in the Verilog Standard, the fact that
>SystemVerilog initializes declaration variables before continuous
>assignments and gates is indeed a fully backward compatible solution,
>albeit a solution that disagrees with how most if not all Verilog
>simulators execute events at time 0 (and therefore probably a very bad
>incompatibility).

I still think this is a very bad incompatibility but I think it is bad that
initialized variables trigger procedural blocks because time-0 ordering is
undefined in the Verilog-2001 Standard.
...
>Is it time to specify that declaration initialization happens in the time-0
>preponed region and then take the VHDL approach of requiring all
>procedures, continuous assignments and gates to evaluate once at time-0?

I still like this the best. Steve Sharp would argue that some testbenches
would execute differently at time-0 and he is probably right, but I believe
those testbenches are poorly coded for time-0 ambiguities.
...
IMO - SystemVerilog assertions (including unique and priority) should be
defined at time-0. My current thinking is that all assertions at time zero
should first evaluate in the observed region. An assertion at time-0 that
evaluates in the preponed region will almost always fail because almost all
variables are undefined in the time-0 preponed region.

===============================================================================

Summary of problem:

There are cases where a net or variable is intended to be a strict
combinational function of other nets/variables/constants. In some cases,
it does not work correctly as currently defined because the result is
evaluated only due to events/changes on the inputs, and the inputs in these
cases are constants or have an initial value which does not create an event,
and/or the construct is made sensitive to input events only AFTER an initial
time-0 event occurs.
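
For example, the combinational UDP case (a sketch of my own):

    primitive my_buf (out, in);
      output out;
      input  in;
      table
      // in : out
          0 :  0 ;
          1 :  1 ;
      endtable
    endprimitive

    module top;
      reg  d = 1'b1;       // declaration initializer: no event under P1800 D4
      wire q;
      my_buf u1 (q, d);    // combinational UDP driven by d
      initial #1 $display("q = %b", q);
      // If the UDP is evaluated only on input events, q can remain x at time 0
      // even though d is constantly 1.
    endmodule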

In all the correspondence, I did not see any refutation of the claim
that there is a problem.

Proposed Principles of Solution:

1. The solution needs to cover: always_comb, combinational always @*,
combinational always @(), non-collapsed port connections, continuous
assignments, combinational UDPs.

2. It is proposed to treat all of the above as identically as possible for
simplicity and consistency.

3. Since the value being assigned can be a simple constant (or parameter)
where surely no event is generated, the constructs must unconditionally
evaluate at time 0 even without an event. An exception is the simple
always, for which we can only activate it, but not unconditionally
execute its entire body.

4. The constructs must evaluate at time-0 AFTER any assignments which
do not generate events. Variable initializers currently behave that way.
However, regular always constructs have a problem with inputs whose
initial assignments do not create events.

5. It is accepted that variable initializers execute before initial
constructs. No one has argued against that.

6. 1364 does not define an order between always and initial constructs.
SV does not change that. However, it seems that in practice, simulators
activate always constructs before initial constructs. It would be wise
to adopt that officially. (It could be argued that the always_comb scheduling
has the same advantages but without the race between initial and always
constructs.)

7. Thus, I propose the following 2 solutions. Admittedly, there may be flaws.
That is why we have reviews. However, I think the general direction is
correct. I'm open to better solutions.

Proposal A:

Evaluate and activate all of the constructs in question (see Principle 1)
at time 0, BEFORE both variable initializations and initial constructs.

"Evaluate and activate" means start an execution. In the
case of an always construct with a delay or event control, it would wait
there. The others will execute unconditionally, then after the first
execution, wait for a change on the inputs, like always_comb now, but
executing at time 0 earlier than defined now for always_comb.

Variable initializers will still execute BEFORE initial constructs,
but they WILL generate events.
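
Applied to Jay's example from item 3, Proposal A would give (my reading):

    integer var_i = 1;
    wire [31:0] wire_i;
    assign wire_i = var_i;
    // Time 0 under Proposal A:
    //   1. The continuous assignment is evaluated and activated: wire_i takes
    //      the current value of var_i (still x) and starts waiting for changes.
    //   2. var_i is initialized to 1, and this time the initializer DOES
    //      generate an event.
    //   3. That event triggers the continuous assignment again, so wire_i
    //      ends up 1 at time 0.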


Proposal B:

Activate always constructs at time 0, BEFORE both variable initializations
and initial constructs.

Variable initializers will still execute BEFORE initial constructs,
but they WILL generate events.

All the other combinational constructs will unconditionally execute at
the end of time-0 and then wait for events on their inputs, just as
always_comb does now.


My apologies if I made some mistakes in this. I'm very tired...

Thanks for your serious consideration.

Sincerely,
Shalom Bresticker

--
Shalom.Bresticker @freescale.com                     Tel: +972 9  9522268
Freescale Semiconductor Israel, Ltd.                 Fax: +972 9  9522890
POB 2208, Herzlia 46120, ISRAEL                     Cell: +972 50 5441478
