
Monday, 3 June 2013

Why is double checked locking broken?

Introduction


Yesterday I was talking to a good friend in the park and the topic of double checked locking for lazy instantiating of singletons came up. (Why we talked about this on a nice sunny day in the park is a separate issue, which won't be addressed in the blog ;)) At the time I was sure it was broken but I couldn't explain the reasons behind this strange behaviour. So I decided to have a look at it and write a quick blog about it.

This is what double checked locking looks like in Java:

public class DoubleCheckedLocking {

    private static DoubleCheckedLocking singleton;
    private String a;
    private String b;

    private DoubleCheckedLocking() {
        this.a = "A";
        this.b = "B";
    }

    public static DoubleCheckedLocking getSingleton() {
      if(singleton == null) {                          //1
        synchronized(DoubleCheckedLocking.class) {     //2
          if(singleton == null) {                      //3
            singleton = new DoubleCheckedLocking();    //4
          }
        }
      }
      return singleton;                                //5
    }
}


The idea behind this pattern is that you want to lazily initialise your singleton but you want to avoid the performance penalty that comes from declaring the whole method synchronized. In this pattern the first thread enters the method and checks on line 1 that the singleton is null. It then synchronizes on the class's lock on line 2 and does a null check on the value of the singleton again on line 3. If the null check holds true the singleton is finally initialised on line 4. All further calls to the getSingleton method won't see the singleton variable as null and will just return the cached instance. Unfortunately this pattern is not thread-safe and is considered broken. What could possibly go wrong in this situation? The problem sits on line 4. Assigning a reference to a variable and running the constructor is not an atomic operation. Two things need to happen:

  • Run the constructor
  • Assign newly created instance to variable (singleton)

The JVM is allowed to use out-of-order processing (see Why shared mutable state is the root of all evil) and can assign the reference to the variable before the constructor returns. That means there is a window of vulnerability during which singleton points to a not fully initialised object. For example, if the constructor initialises the internal fields a and b, the unfinished object might have a initialised but not b. Clients of singleton could therefore see a field first as null and only later as properly initialised ("A"). It is only guaranteed that one thread sees the actions of another thread in the proper order if you use synchronization for both reads and writes of the shared mutable state.

What can you do?

The easiest way to fix this is to not use lazy instantiation at all or synchronize the whole method.

public class ThreadSafeLoading {
    private static final ThreadSafeLoading singleton = new ThreadSafeLoading();

    private ThreadSafeLoading() {
    }

    public static ThreadSafeLoading getSingleton() {
        return singleton;
    }
}

This works very well. The code that initialises a static variable runs during class initialisation, before any thread can use the class. The variable is declared final, so it is guaranteed that all clients calling getSingleton will refer to the same instance of the class.




What if you must use lazy loading and can't synchronize the whole method? You can also declare the variable singleton as volatile. That way the JVM will put memory barriers in place which rule out out-of-order processing. Each write to a volatile variable and each read from a volatile variable creates a memory barrier and ensures that two threads will see the actions of the other thread in the proper order.

public class DoubleCheckedLocking {

    private static volatile DoubleCheckedLocking singleton;
    private String a;
    private String b;

    private DoubleCheckedLocking() {
        this.a = "A";
        this.b = "B";
    }

    public static DoubleCheckedLocking getSingleton() {
      if(singleton == null) {                          //1
        synchronized(DoubleCheckedLocking.class) {     //2
          if(singleton == null) {                      //3
            singleton = new DoubleCheckedLocking();    //4
          }
        }
      }
      return singleton;                                //5
    }
}

Saturday, 1 June 2013

Why shared mutable state is the root of all evil.

Introduction


A while back I went to a talk by Martin Odersky introducing Scala. Martin is the lead designer of Scala and a very smart guy. You can have a look at the slides from the talk here.

He described how the effect of Moore's law (that the number of transistors on a reasonably priced CPU doubles roughly every 2 years) hasn't translated into higher clock speeds since the early 2000s. The number of transistors is still doubling, but the extra transistors are used to include additional cores per CPU and to increase L2 and L3 cache sizes. Before the 2000s programmers got a free lunch, meaning that the performance of their programs automatically improved with each new processor generation. This free lunch is now over and the performance of a single threaded program hasn't improved much over the last 10 years. To create programs with lower latency or higher throughput, parallel programming mechanisms are needed.

What I found really profound was this pseudo equation which Martin introduced:

       non-determinism = parallel processing + mutable state

This equation basically means that parallel processing combined with mutable state results in non-deterministic program behaviour. If you just do parallel processing and have only immutable state, everything is fine and it is easy to reason about programs. On the other hand, if you want to do parallel processing with mutable data you need to synchronize the access to the mutable variables, which essentially renders these sections of the program single threaded. This is not really new, but I haven't seen this concept expressed so elegantly. A non-deterministic program is broken.
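To make the equation a bit more concrete, here is a minimal sketch (not from the talk; the class name NonDeterministicDemo is mine): two threads update a plain shared counter without any synchronization, and the final value changes from run to run.

public class NonDeterministicDemo {

    private static long sharedCount = 0; // shared mutable state

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                sharedCount++; // unsynchronized read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 2000000, but the printed value varies between runs.
        System.out.println("sharedCount = " + sharedCount);
    }
}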

Why are parallel programs without proper synchronization broken?

There are a few different things which can go wrong when no proper synchronization policy is enforced: Race conditions, visibility issues and out-of-order processing.

Race condition

A race condition means that the outcome of an algorithm depends on its timing and on the interleaving of other threads. Let's have a look at the check-then-act race condition. In Java it would look something like this:

import java.util.HashMap;

public class BrokenMap<E, V> extends HashMap<E, V> {
    public V putIfAbsent(final E key, final V value) {
        if (!this.containsKey(key)) {
            return this.put(key, value);
        } else {
            return this.get(key);
        }
    }
}

In the putIfAbsent method we first check if the element is part of the map already. If this is not the case we add the element. Otherwise we just return the existing element. This code works perfectly fine in a single threaded environment. In a multithreaded environment, however, a second thread could jump in between the two calls to containsKey and put and put its value there first. Once thread one continues execution it would overwrite the already existing value of thread two in the map.

For the method putIfAbsent to be thread-safe it needs to be guaranteed that the two calls to containsKey and put always run uninterrupted. They need to be atomic. The easiest way to ensure this is to put both calls into a synchronized block, which essentially renders the program single threaded in this section.
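As a sketch of what that could look like (the class name SafeMap is mine, not from the original post), declaring putIfAbsent synchronized wraps the whole check-then-act sequence in one lock. In practice java.util.concurrent.ConcurrentHashMap already provides an atomic putIfAbsent.

import java.util.HashMap;

public class SafeMap<E, V> extends HashMap<E, V> {

    // Declaring the method synchronized makes the containsKey/put pair atomic:
    // no other thread can interleave between the two calls.
    // (Only putIfAbsent is guarded here; a fully thread-safe map would need to
    // guard every access.)
    public synchronized V putIfAbsent(final E key, final V value) {
        if (!this.containsKey(key)) {
            return this.put(key, value);
        } else {
            return this.get(key);
        }
    }
}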

Another form of race condition is read-modify-write. In Java it would look something like this:


public class BrokenCounter {
    private long count = 0;
    public long increment() {
        return ++count;
    }
}

The class BrokenCounter has a method increment. This class is not thread-safe because the call to increment the counter ++count is not just one operation. Under the hood it is actually 3 operations.
  1. Read the old value of count.
  2. Increment the value of count.
  3. Store the new value of count.
If a second thread interrupts the first thread somewhere between steps 1 and 3, it would read the same value as thread one, increment it by one and store it back into the count variable. Once the first thread resumes it would also increment the old value and store it back into the count variable. The result would be a counter which has only been incremented once even though increment was called twice.

For the method increment to be thread-safe it needs to guarantee that calls to ++count will not be interrupted. The easiest way to achieve this is by putting the call into a synchronized block which essentially renders this section of the program single threaded.
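As a sketch (the name SafeCounter is mine), the synchronized version could look like this; java.util.concurrent.atomic.AtomicLong would achieve the same thing without an explicit lock.

public class SafeCounter {

    private long count = 0;

    // synchronized makes the read-increment-store sequence atomic,
    // so two concurrent calls can no longer lose an update.
    public synchronized long increment() {
        return ++count;
    }
}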


Visibility

Visibility means essentially that an action in thread one is visible to a second thread. In a single threaded program, when you write a new value to a variable it is guaranteed that all future reads of this variable will return the new value. In a multithreaded program such a guarantee doesn't exist. Let's have a look at the example program BrokenServer:

public class BrokenServer {

    private boolean stopped=false;
    
    public void run() {
        while(!stopped) {
            /* do something useful */
        }
    }
 
    public void cancel() {
        this.stopped = true;
    }
}


In this program the run method keeps processing stuff until it gets interrupted by another thread calling the cancel method. It is a bit odd to imagine, but the thread in the run method may actually never see the updated value of stopped. This is because the variable stopped may be cached in a register by the JIT compiler or resolved from the L1/L2 cache of the core rather than from main memory. To fix this we need to tell the JVM to include a memory barrier after writing the stopped variable and another one before reading the stopped variable. The memory barrier guarantees that all writes to the variable happen before all subsequent reads of the same variable (like you would expect in a single threaded program).

The easiest way to achieve this is to use a synchronized block around reading and writing the variable stopped which once again renders this part of the program single threaded.
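A possible fix could look like the following sketch (the name StoppableServer is mine): both the read and the write of stopped go through synchronized methods, which provides the required memory barriers. Declaring stopped as volatile would work just as well here.

public class StoppableServer {

    private boolean stopped = false;

    public void run() {
        while (!isStopped()) {
            /* do something useful */
        }
    }

    // Reads and writes of stopped go through the same lock, so the memory
    // barriers guarantee that the running thread eventually sees the update.
    public synchronized boolean isStopped() {
        return stopped;
    }

    public synchronized void cancel() {
        this.stopped = true;
    }
}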


Ordering


If you made it this far you must be truly interested in Java concurrency. Let's have a look at one more case which is even weirder than the visibility issue above. The JVM and modern CPUs are allowed to execute code out of order if that helps to speed up the code or to improve memory access patterns.

Let's have a look at another piece of code:

public class BrokenOrder {
    int a = 0;
    int b = 0;
 
    public void assignment() {
        a = 1;
        b = 2;
    }
 
    public int getSum() {
        return a+b;
    } 
}

It is actually surprisingly difficult to reason about this very simple program. What would be the result of getSum if called in a multithreaded environment?

  • Returns 0 if getSum runs before assignment
  • Returns 3 if getSum runs after assignment
  • Returns 1 if getSum runs after the a=1 assignment but before the b=2 assignment
  • Returns 2 if getSum runs after the b=2 assignment but before the a=1 assignment

This last case can happen if the JVM / CPU decides to process the assignment method out of order. To prevent this kind of non-deterministic behaviour, once again we need a memory barrier to stop out-of-order processing. The easiest way to do this in Java is to use a synchronized block, which renders the program single threaded in this section.
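As a sketch (the name OrderedSum is mine), the synchronized version could look like this; because assignment and getSum use the same lock, getSum can only ever observe none or both of the writes, so the result is 0 or 3 but never 1 or 2.

public class OrderedSum {

    private int a = 0;
    private int b = 0;

    // Both methods synchronize on the same lock, so getSum can only run
    // before or after the complete assignment, never in the middle of it.
    public synchronized void assignment() {
        a = 1;
        b = 2;
    }

    public synchronized int getSum() {
        return a + b;
    }
}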

Conclusion


Shared mutable state and parallel processing don't go together at all. If you do not use proper synchronization you will create extremely hard-to-find bugs, and your programs are basically broken even if they appear to work just fine most of the time.