concurrency - Memory effects of synchronization in Java


The JSR-133 (Java Memory Model) FAQ says:

But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release.

I have also read that on a modern Sun VM, uncontended synchronization is cheap. I am a little confused by this claim. Consider code like:

  class Foo {
      int x = 1;
      int y = 1;
      ..
      void bar() {
          synchronized (aLock) {
              x = x + 1;
          }
      }
  }

Updates to x need the synchronization, but does acquiring the lock also clear the value of y from the cache? I can't imagine that to be the case, because if it were true, techniques like lock striping would not help. Alternatively, can the JVM reliably analyze the code to ensure that y is never modified in another synchronized block using the same lock, and therefore not dump the value of y from the cache when entering the synchronized block?
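To make the lock-striping point concrete, here is a minimal sketch (the StripedCounter class is my own illustration, not from any library): splitting state across two locks is only worthwhile as an optimization because acquiring one lock does not invalidate cached state guarded by the other.

  import java.util.concurrent.locks.ReentrantLock;

  // Hypothetical example: two counters guarded by two separate locks (lock striping).
  // If acquiring either lock really flushed or invalidated *all* cached state,
  // splitting the state across stripes would gain nothing; in practice only state
  // reached via the same lock gets the visibility guarantee.
  class StripedCounter {
      private final ReentrantLock lockX = new ReentrantLock();
      private final ReentrantLock lockY = new ReentrantLock();
      private int x = 1;
      private int y = 1;

      void incrementX() {
          lockX.lock();
          try {
              x = x + 1;   // visible to other threads that later acquire lockX
          } finally {
              lockX.unlock();
          }
      }

      void incrementY() {
          lockY.lock();
          try {
              y = y + 1;   // visible to other threads that later acquire lockY
          } finally {
              lockY.unlock();
          }
      }
  }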

The short answer is that the JSR-133 FAQ goes too far in its explanation.

The Java memory model is formally defined in terms of things like visibility, atomicity, happens-before relationships and so on, which explains exactly which writes threads must see, which actions must occur before which other actions, and other relationships, using a precisely (mathematically) defined model. Behavior that isn't formally defined could be random, or it could happen to be well defined in practice on some hardware and some JVM implementation - but of course you should never rely on that, because it might change in the future, and you could never really be sure it was well defined in the first place unless you wrote the JVM and understood the hardware semantics very well.
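To make the happens-before idea concrete, here is a small sketch of my own (not part of the formal specification): the unlock at the end of writer() happens-before a later lock of the same monitor in reader(), and that edge is what guarantees the reader sees the writer's updates.

  // Sketch: visibility established by synchronizing on the same monitor.
  class HappensBeforeExample {
      private final Object monitor = new Object();
      private int data = 0;
      private boolean ready = false;

      void writer() {                    // called by thread A
          synchronized (monitor) {
              data = 42;
              ready = true;
          }                              // unlock: everything above becomes visible...
      }

      int reader() {                     // called by thread B
          synchronized (monitor) {       // ...to any thread that later locks the same monitor
              return ready ? data : -1;  // if ready is seen as true, data is guaranteed to be 42
          }
      }
  }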

So the text you quoted is not formally describing what Java guarantees; rather, it describes how some hypothetical architecture with very weak memory ordering and visibility guarantees could satisfy the Java memory model's requirements by using cache flushing. Any actual discussion of cache flushing, main memory and so on is clearly not generally applicable to Java, since those concepts don't exist in the abstract language and memory model.

In practice, the guarantees made by the memory model are much weaker than a full flush - having every atomic, concurrency-related or lock operation flush the whole cache would be prohibitively expensive - and that is almost never done in practice. Rather, special atomic CPU operations are used, sometimes in conjunction with memory barrier instructions, which ensure the required memory visibility and ordering. So the apparent inconsistency between cheap uncontended synchronization and "fully flushing the cache" is resolved by noting that the first is true and the second is not - no full flush is required by the Java memory model (and no flush happens in practice).
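As a rough illustration of those "special atomic CPU operations" surfacing in Java (my own sketch, assuming a typical HotSpot-style JVM): an AtomicInteger increment is usually implemented as a single atomic read-modify-write instruction with the required ordering, not as any kind of cache flush.

  import java.util.concurrent.atomic.AtomicInteger;

  // Sketch: an atomic counter. On common JVMs getAndIncrement() compiles down to a
  // single atomic read-modify-write instruction (e.g. a CAS loop or LOCK XADD on x86)
  // with the ordering the memory model requires -- there is no "flush the whole cache" step.
  class AtomicCounterExample {
      private final AtomicInteger counter = new AtomicInteger();

      int next() {
          return counter.getAndIncrement();
      }

      public static void main(String[] args) {
          AtomicCounterExample c = new AtomicCounterExample();
          System.out.println(c.next()); // 0
          System.out.println(c.next()); // 1
      }
  }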

If the formal memory model is a bit too heavy to digest (you wouldn't be alone), you can also dig into this topic by taking a look at the JSR-133 Cookbook, which is in fact linked from the JSR-133 FAQ, but which comes at the issue from a concrete hardware perspective, since it is intended for compiler writers. There, they discuss exactly which barriers are needed for particular operations, including synchronization - and the barriers discussed there can fairly easily be mapped to real hardware. Much of the actual mapping is discussed right in the cookbook.
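If you want to see that barrier vocabulary reflected at the Java level, the explicit fence methods on VarHandle (Java 9+) are named after the same categories. This is my own illustration; ordinary application code rarely needs these, because synchronized and volatile already insert the required barriers for you.

  import java.lang.invoke.VarHandle;

  // Sketch (Java 9+): explicit fences whose names line up with the barrier
  // categories discussed in the cookbook.
  class FenceExample {
      static int a, b;

      static void demo() {
          a = 1;
          VarHandle.storeStoreFence(); // StoreStore: earlier stores ordered before later stores
          b = 1;

          VarHandle.fullFence();       // full barrier: orders all prior loads/stores
                                       // before all subsequent loads/stores
          System.out.println(a + b);
      }

      public static void main(String[] args) {
          demo();
      }
  }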

