Re: When is "volatile" used instead of "lock"?
Peter Ritchie [C# MVP] <PRSoCo@newsgroups.nospam> wrote:
>> It specifies how the system as a whole must behave: given a certain
>> piece of IL, there are valid behaviours and invalid behaviours. If you
>> can observe that a variable has been read before a lock has been
>> acquired and that value has then been used (without rereading) after
>> the lock has been acquired, then the CLR has a bug, pure and simple.
>> It violates the spec in a pretty clear-cut manner.
> That's not the same thing as saying use of Monitor.Enter and Monitor.Exit
> are what are used to maintain that behaviour.
Well, without that guarantee for Monitor.Enter/Monitor.Exit I don't
believe it would be possible to write thread-safe code.
> In 335, section 12.6.5 has "[calling Monitor.Enter]...shall implicitly
> perform a volatile read operation..." which says to me that one volatile
> operation is performed. And "[calling Monitor.Exit]...shall implicitly
> perform a volatile write operation." A write to what? As in this snippet:
>
> Monitor.Enter(this.locker);
> Trace.WriteLine(this.number);
> Monitor.Exit(this.locker);
It doesn't matter what the volatile write is to - it's the location in
the CIL that matters. No other writes can be moved (logically) past
that write, no matter what they're writing to.
> It only casually mentions "See [section] 12.6.7", which discusses acquire
> and release semantics in the context of the volatile prefix (assuming the C#
> volatile keyword is what causes generation of this prefix).
I don't see what's "casual" about it, nor why you should believe that
12.6.7 should only apply to instructions with the "volatile." prefix.
The section starts off by mentioning the prefix, but then talks in
terms of volatile reads and volatile writes - which are the same terms
as 12.6.5 uses.
> 12.6.7 only mentions "the read" or "the write"; it does not mention anything
> about a set or block of reads/writes. I think you've made quite a leap
> getting to: code between Monitor.Enter and Monitor.Exit has volatility
> guarantees.
mentions "the read" or "the write" it does not mention anything about a set
or block of read/writes. I think you've made quite a leap getting to: code
between Monitor.Enter and Monitor.Exit has volatility guarantees.
I really, really haven't. I think the problem is the one I talk about
above - you're assuming that *what* is written to matters, rather than
just the location of a volatile write in the CIL stream. Look at the
guarantee provided by the spec:
<quote>
A volatile read has "acquire semantics" meaning that the read is
guaranteed to occur prior to any references to memory that occur after
the read instruction in the CIL instruction sequence. A volatile write
has "release semantics" meaning that the write is guaranteed to happen
after any memory references prior to the write instruction in the CIL
instruction sequence.
</quote>
Where does that say anything about it being dependent on what is being
written or what is being read? It just talks about reads and writes
being moved in terms of their position in the CIL sequence.
So, no write that occurs before the call to Monitor.Exit in the IL can
be moved beyond the call to Monitor.Exit in the memory model, and no
read that occurs after Monitor.Enter in the IL can be moved to earlier
than Monitor.Enter in the memory model. That's all that's required for
thread safety.
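As a minimal sketch of how that guarantee is normally relied on (the type
and member names below are invented for illustration, and the comments
assume the usual expansion of the C# lock statement into Monitor.Enter and
Monitor.Exit):

class Publisher
{
    private readonly object padlock = new object();
    private int number;          // deliberately not volatile
    private bool available;

    public void Publish(int value)
    {
        lock (padlock)           // Monitor.Enter: the implied volatile read (acquire)
        {
            number = value;      // ordinary writes...
            available = true;
        }                        // Monitor.Exit: the implied volatile write (release);
                                 // neither write above may be moved past this point
    }

    public bool TryRead(out int value)
    {
        lock (padlock)           // acquire: the reads below cannot move before this
        {
            value = number;
            return available;
        }
    }
}

Every access to the shared fields goes through the same monitor, so the
acquire/release points come from Monitor.Enter/Monitor.Exit rather than from
any volatile modifier.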
> Writing a sample "that works" is meaningless to me. I've dealt with
> thousands of snippets of code "that worked" in certain circumstances
> (usually resulting in me fixing them to "really work").
I'm not talking about certain circumstances - I'm talking about
*guarantees* provided by the CLI spec.
I'm saying that I can write code which doesn't use volatile but which
is *guaranteed* to work. I believe you won't be able to provide any
example of how it could fail without the CLI spec itself being
violated.
> You're free to interpret the spec any way you want, and if you've gotten
> information from Chris or Vance, you've got their interpretation of the spec
> and, best case, you've got information specific to Microsoft's JIT/IL
> compilers.
Well, I've got information specific to the .NET 2.0 memory model (which
is stronger than the CLI-specified memory model) elsewhere.
However, I feel pretty comfortable having the interpretation of experts
who possibly contributed to the spec, or at least have direct contact
with those who wrote it.
> Based upon the spec, I *know* that this is safe code:
>
> public volatile int number;
> public void DoSomething() {
>     this.number = 1;
> }
>
> This is equally as safe:
>
> public volatile int number;
> public void DoSomething() {
>     lock(locker) {
>         this.number = 1;
>     }
> }
>
> I think it's open to interpretation of the spec whether this is safe:
>
> public int number;
> public void DoSomething() {
>     lock(locker) {
>         this.number = 1;
>     }
> }
Well, this is why I suggested that I post a complete program - then you
could suggest ways in which it could go wrong, and I think I'd be able
to defend it in fairly clear-cut terms.
> ...it might be safe in Microsoft's implementations; but that's not open
> information and I don't think it's due to Monitor.Enter/Monitor.Exit.
I *hope* we won't just have to agree to disagree, but I realise that
may be the outcome :(
> I don't see what the issue with volatile is, if you're not using "volatile"
> for synchronization. Worst case with this:
>
> public volatile int number;
> public void DoSomething() {
>     this.number = 1;
> }
>
> you've explicitly stated your volatility usage/expectation: more readable,
> makes no assumptions...
It implies that without volatility you've got problems - which you
haven't (provided you use locking correctly). This means you can use a
single way of working for *all* types, regardless of whether you can
use the volatile modifier on them.
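For instance, volatile cannot even be applied to some field types in C#
(long, double, decimal and most structs such as DateTime are rejected by the
compiler), yet the lock idiom covers them unchanged - a sketch with invented
names:

using System;

class Totals
{
    private readonly object padlock = new object();
    private long requestCount;      // 'volatile long' would be a compile-time error
    private decimal runningTotal;   // as would 'volatile decimal'
    private DateTime lastUpdated;   // and 'volatile DateTime'

    public void Record(decimal amount)
    {
        lock (padlock)
        {
            requestCount++;
            runningTotal += amount;
            lastUpdated = DateTime.UtcNow;
        }
    }

    public decimal CurrentTotal()
    {
        lock (padlock)
        {
            return runningTotal;
        }
    }
}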
> Whereas:
>
> public int number;
> public void DoSomething() {
>     lock(locker) {
>         this.number = 1;
>     }
> }
>
> ...best case, this isn't as readable because it uses implicit volatility
> side-effects.
If you're not used to that being the idiom, you're right. However, if
I'm writing thread-safe code (most types don't need to be thread-safe)
I document what lock any shared data comes under. I can rarely get away
with a single operation anyway.
Consider the simple change from this:
this.number = 1;
to this:
this.number++;
With volatile, your code is now broken - and it's not obvious, and
probably won't show up in testing. With lock, it's not broken.
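A sketch of that failure mode (names invented): two threads calling
IncrementUnsafe can both read the same value and both write back the same
result, losing an increment - volatile keeps each individual read and write
visible, but the read-modify-write as a whole still races; the locked
version has no such window.

class Counter
{
    private readonly object padlock = new object();
    private volatile int unsafeCount;
    private int safeCount;

    public void IncrementUnsafe()
    {
        unsafeCount++;        // load, add, store - not atomic, updates can be lost
    }

    public void IncrementSafe()
    {
        lock (padlock)
        {
            safeCount++;      // only one thread at a time executes this
        }
    }
}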
> What happens with the following code?
>
> internal class Tester {
>     private Object locker = new Object();
>     private Random random = new Random();
>     public int number;
>
>     public Tester()
>     {
>         DoWork(false);
>     }
>
>     public void UpdateNumber() {
>         Monitor.Enter(locker);
>         DoWork(true);
>     }
What happens here is that I don't let this method go through code
review. There have to be *very* good reasons not to use lock{}, and in
those cases there would almost always still be a try/finally.
I wouldn't consider using volatile just to avoid the possibility of
code like this (which I've never seen in production, btw).
>     private void DoWork(Boolean doOut) {
>         this.number = random.Next();
>         if(doOut)
>         {
>             switch(random.Next(2))
>             {
>                 case 0:
>                     Out1();
>                     break;
>                 case 1:
>                     Out2();
>                     break;
>             }
>         }
>     }
>
>     private void Out1() {
>         Monitor.Exit(this.locker);
>     }
>
>     private void Out2() {
>         Monitor.Exit(this.locker);
>     }
> }
>
> ...clearly there isn't enough information merely from the existence of
> Monitor.Enter and Monitor.Exit to maintain those guarantees.
It's the other way round - the JIT compiler doesn't have enough
information to perform certain optimisations, simply because it can't
know whether or not Monitor.Exit will be called.
Assuming the CLR follows the spec, it can't move the write to number to
after the call to random.Next() - because that call to random.Next()
may involve releasing a lock, and it may involve a write.
Now, I agree that it really limits the scope of optimisation for the
JIT - but that's what the CLI spec says.
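For comparison, here is a sketch (not the quoted code verbatim) of the
example reshaped the way the code review above would push it: the lock
statement expands to Monitor.Enter plus a try/finally around Monitor.Exit,
so the release point is explicit and the write to number clearly sits inside
the protected region.

using System;

internal class Tester
{
    private readonly object locker = new object();
    private readonly Random random = new Random();
    public int number;

    public void UpdateNumber()
    {
        lock (locker)                   // Monitor.Enter + try
        {
            number = random.Next();     // cannot be moved past the release below
        }                               // finally { Monitor.Exit(locker); }
    }
}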
>> Again you're treating atomicity as almost interchangeable with
>> volatility,
> No, I'm not. I said you don't need to synchronize an atomic invariant but
> you still need to account for its volatility (by declaring it volatile). I
> didn't say volatility was a secondary concern, I said it needs to be
> accounted for equally. I was implying that using the "lock" keyword is not
> as clear in terms of volatility assumptions/needs as is the "volatile"
> keyword. If I read some code that uses "lock", I can't assume the author
> did that for volatility reasons and not just synchronization reasons; whereas
> if she had put "volatile" on a field, I know for sure why she put that there.
I use lock when I'm going to use shared data. When I use shared data, I
want to make sure I don't ignore previous changes - hence it needs to
be volatile.
Volatility is a natural consequence of wanting exclusive access to a
shared variable - which is why exactly the same strategy works in Java,
by the way (which has a slightly different memory model). Without the
guarantees given by the CLI spec, having a lock would be pretty much
useless.
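To make the contrast concrete, a sketch (invented names) of a stop flag
shared between threads: the loop must see the other thread's write, and
either marking the field volatile or routing every access through the same
lock provides that visibility - the disagreement above is really about which
of the two states the intent more clearly.

using System.Threading;

class Worker
{
    private readonly object padlock = new object();
    private bool stopRequested;         // not volatile; documented as guarded by padlock

    public void Stop()
    {
        lock (padlock)
        {
            stopRequested = true;
        }
    }

    public void Run()
    {
        while (true)
        {
            lock (padlock)              // acquiring the lock forces a fresh read of the flag
            {
                if (stopRequested)
                {
                    return;
                }
            }
            Thread.Sleep(10);           // stand-in for the real work
        }
    }
}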
>> This *is* guaranteed, it's the normal way of working in the framework
>> (as Willy said, look for volatile fields in the framework itself)
> Which ones? Like Hashtable.version or StringBuilder.m_StringValue?
Yup, there are a few - but I believe there are far more places which
use the natural (IMO) way of sharing data via exclusive access, and
taking account of the volatility that naturally provides.
--
Jon Skeet - <skeet@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too