Consider the following sequence of writes to volatile
memory, which I've taken from David Chisnall's article at InformIT, "Understanding C11 and C++11 Atomics":
volatile int a = 1;
volatile int b = 2;
a = 3;
My understanding from C++98 was that these operations could not be reordered, per C++98 1.9:
conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below
...
The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions
Chisnall says that the constraint on order preservation applies only to individual variables, writing that a conforming implementation could generate code that does this:
a = 1;
a = 3;
b = 2;
Or this:
b = 2;
a = 1;
a = 3;
C++11 repeats the C++98 wording that
conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
but says this about volatiles (1.9/8):
Access to volatile objects are evaluated strictly according to the rules of the abstract machine.
1.9/12 says that accessing a volatile glvalue (which includes the variables a and b above) is a side effect, and 1.9/14 says that the side effects of one full expression (e.g., a statement) must precede the side effects of a later full expression in the same thread. This leads me to conclude that the two reorderings Chisnall shows are invalid, because they do not correspond to the ordering dictated by the abstract machine.
Am I overlooking something, or is Chisnall mistaken?
(Note that this is not a threading question. The question is whether a compiler is permitted to reorder accesses to different volatile variables in a single thread.)
Answer
IMO Chisnall's interpretation (as presented by you) is clearly wrong. The simpler case is C++98. The sequence of reads and writes to volatile data needs to be preserved, and that applies to the ordered sequence of reads and writes of all volatile data, not to each variable considered in isolation.
This becomes obvious if you consider the original motivation for volatile: memory-mapped I/O (MMIO). In MMIO you typically have several related registers at different memory locations, and the protocol of an I/O device requires a specific sequence of reads and writes to its set of registers - the order between registers matters.
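As a minimal sketch of that motivation (the register addresses, names, and protocol below are invented for illustration, not taken from any real device), consider a device that expects its data register to be written before its control register is poked:

#include <cstdint>

// Hypothetical memory-mapped registers; real addresses would come from the
// device's datasheet or the platform's headers.
volatile std::uint32_t* const DEV_DATA = reinterpret_cast<volatile std::uint32_t*>(0x40000000u);
volatile std::uint32_t* const DEV_CTRL = reinterpret_cast<volatile std::uint32_t*>(0x40000004u);

void send(std::uint32_t value) {
    *DEV_DATA = value; // first: place the payload in the data register
    *DEV_CTRL = 1;     // then: tell the device to act on it
    // If the compiler were free to reorder these two volatile stores because they
    // touch different objects, the device could be triggered before the payload is there.
}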
The C++11 wording avoids talking about an absolute sequence of reads and writes, because in a multi-threaded environment there is no single well-defined sequence of such events across threads - and that is not a problem if these accesses go to independent memory locations. But I believe the intent is that for any sequence of volatile data accesses with a well-defined order the rules remain the same as for C++98 - the order must be preserved, no matter how many different locations are accessed in that sequence.
It is an entirely separate issue what that entails for an implementation. How (and even if) a volatile data access is observable from outside the program and how the access order of the program maps to externally observable events is unspecified. An implementation should probably give you a reasonable interpretation and reasonable guarantees, but what is reasonable depends on the context.
The C++11 standard leaves room for data races between unsynchronized volatile accesses, so there is nothing that requires surrounding these by full memory fences or similar constructs. If there are parts of memory that are truly used as external interface - for memory-mapped I/O or DMA - then it may be reasonable for the implementation to give you guarantees for how volatile accesses to these parts are exposed to consuming devices.
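To make the "no fences implied" point concrete, here is a minimal sketch (the flag and payload names are mine): when two threads communicate through ordinary memory, a volatile flag is still a data race and implies no ordering of the payload, whereas std::atomic provides both:

#include <atomic>

int payload = 0;                        // ordinary data handed from one thread to another
volatile bool ready_v = false;          // volatile alone: still a data race, no fence implied
std::atomic<bool> ready_a{false};       // atomic: race-free, release/acquire orders the payload

void producer() {
    payload = 42;
    // ready_v = true;                  // undefined behaviour if another thread reads it concurrently
    ready_a.store(true, std::memory_order_release);
}

void consumer() {
    while (!ready_a.load(std::memory_order_acquire)) {
        // spin until the producer publishes
    }
    // payload is guaranteed to be 42 here; a volatile flag would not give that guarantee
}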
One guarantee can probably be inferred from the standard (see [intro.execution]): objects of type volatile std::sig_atomic_t must be observed with values compatible with the order of writes to them even in a signal handler - at least in a single-threaded program.
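A minimal sketch of that guarantee (handler and flag names are mine) is the classic single-threaded idiom of setting a flag from a signal handler and polling it from the main flow:

#include <csignal>

// volatile std::sig_atomic_t is one of the few things a standard C++
// signal handler is allowed to write to.
volatile std::sig_atomic_t interrupted = 0;

void on_interrupt(int) {
    interrupted = 1;                    // the main flow must observe this write in order
}

int main() {
    std::signal(SIGINT, on_interrupt);
    while (!interrupted) {
        // do work; each iteration re-reads the volatile flag
    }
    // reached only after the handler has run
}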