ReentrantLock implementation principle

When synchronized is used for synchronization, lock acquisition and release are implicit: the compiler inserts the corresponding bytecode instructions (monitorenter/monitorexit), and the JVM handles the rest.

ReentrantLock is a lock implementation built on AbstractQueuedSynchronizer (AQS for short). This article analyzes the implementation principle of ReentrantLock; I hope you find it helpful.

ReentrantLock is reentrant: a thread that has already acquired the lock can acquire it again without blocking itself.

AQS is an important basic framework for implementing locks and synchronization in the Java concurrency package.
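To make the rest of the analysis easier to follow, here is a minimal sketch (not from the JDK source) of a non-reentrant exclusive lock built on AQS. It shows the division of labour that ReentrantLock also relies on: the subclass only implements tryAcquire/tryRelease, while AQS provides the wait queue and the suspension/wakeup of threads.

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    // A minimal, hypothetical exclusive lock built on AQS, only to illustrate
    // the template pattern: the subclass supplies tryAcquire/tryRelease,
    // AQS supplies the queueing and the parking/unparking of threads.
    class SimpleMutex {
        private static final class Sync extends AbstractQueuedSynchronizer {
            @Override
            protected boolean tryAcquire(int acquires) {
                // state 0 -> 1 means the lock has been grabbed
                return compareAndSetState(0, 1);
            }

            @Override
            protected boolean tryRelease(int releases) {
                setState(0);   // no ownership or reentrancy checks in this sketch
                return true;
            }
        }

        private final Sync sync = new Sync();

        public void lock()   { sync.acquire(1); }
        public void unlock() { sync.release(1); }
    }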

Lock types

ReentrantLock is divided into fair locks and non-fair locks, and the specific type can be specified via the constructor.

    // default non-fair lock
    public ReentrantLock() {
        sync = new NonfairSync();
    }
    
    // Fair lock
    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }

The default is the non-fair lock (NonfairSync), which generally offers better efficiency and throughput than the fair lock (the exact reasons are analyzed later).
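For example, a fair lock is requested by passing true to the constructor:

    ReentrantLock nonFairLock = new ReentrantLock();      // default: non-fair
    ReentrantLock fairLock    = new ReentrantLock(true);  // fair: FIFO ordering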

Acquiring the lock

The usual way to use it is as follows:

    private final ReentrantLock lock = new ReentrantLock();

    public void run() {
        lock.lock();
        try {
            // do business
        } finally {
            // always release the lock, even if the business code throws
            lock.unlock();
        }
    }
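Note that lock() itself does not throw InterruptedException. If the acquisition should respond to interruption while waiting, lockInterruptibly() can be used instead; a small sketch (the method name is just for illustration):

    public void runInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();   // throws InterruptedException if interrupted while waiting
        try {
            // do business
        } finally {
            lock.unlock();
        }
    }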

Fair lock acquisition

First look at the process of acquiring a lock.

    public void lock() {
        sync.lock();
    }

You can see that lock() delegates to sync.lock(). This is an abstract method on Sync, implemented by its subclasses; the following is the fair lock ( FairSync ) implementation:

        final void lock() {
            acquire(1);
        }
        
        // acquire() in AbstractQueuedSynchronizer
        public final void acquire(int arg) {
            if (!tryAcquire(arg) &&
                acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
                selfInterrupt();
        }

The first step is to try to acquire a lock ( tryAcquire(arg) ), which is also implemented by its subclasses.

        protected final boolean tryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (!hasQueuedPredecessors() &&
                    compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0)
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }

The first step is to determine whether state in AQS is equal to 0. 0 means that no other thread has obtained a lock, so the current thread can try to obtain a lock.

Note: before attempting, the hasQueuedPredecessors() method checks whether other threads are already waiting in the AQS queue; if there are, the current thread does not attempt the acquisition (this is what makes the lock fair).

If the queue is empty, CAS is used to change state in AQS from 0 to 1, which means the lock has been acquired; on success the current thread is recorded as the exclusive owner of the lock ( setExclusiveOwnerThread(current) ).

If state is greater than 0, the lock is already held, so the next check is whether the holding thread is the current thread ( ReentrantLock supports reentrancy); if it is, state is incremented and the new value is written back.
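The effect of this counting can be observed directly with getHoldCount(); a small sketch:

    ReentrantLock lock = new ReentrantLock();
    lock.lock();                               // state: 0 -> 1
    lock.lock();                               // same thread re-enters, state: 1 -> 2
    System.out.println(lock.getHoldCount());   // prints 2
    lock.unlock();                             // state: 2 -> 1, lock still held
    lock.unlock();                             // state: 1 -> 0, fully released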

Writing to the queue

If tryAcquire(arg) fails to acquire a lock, you need to write the current thread to the queue with addWaiter(Node.EXCLUSIVE).

You need to wrap the current thread as a Node object ( addWaiter(Node.EXCLUSIVE) ) before writing.

The queue in AQS is implemented as a doubly linked list of Node objects.
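A simplified view of that Node (only the fields relevant here; the real nested class in AbstractQueuedSynchronizer has more):

    // Simplified sketch of AQS's internal Node, not the full class
    static final class Node {
        volatile int waitStatus;   // e.g. SIGNAL (-1) or CANCELLED (1), see below
        volatile Node prev;        // predecessor in the doubly linked queue
        volatile Node next;        // successor in the doubly linked queue
        volatile Thread thread;    // the thread this node represents
    }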

Wrapper code:

    private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }

First it checks whether the queue already has a tail node; if so, CAS is used to append the wrapped Node to the end of the queue. If the CAS fails because of a concurrent write, enq(node) is called to enqueue it.

    private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) { // Must initialize
                if (compareAndSetHead(new Node()))
                    tail = head;
            } else {
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }

This logic is a spin loop combined with CAS, which guarantees that the node will eventually be written into the queue.

Suspending the waiting thread

After the node has been written to the queue, the current thread needs to be suspended (via acquireQueued(addWaiter(Node.EXCLUSIVE), arg) ).

    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

First, node.predecessor() is used to fetch the previous node; if it is the head node, the thread tries to acquire the lock once more, and if that succeeds everything is fine.

If the predecessor is not the head node, or the acquisition fails, the thread is handled according to the waitStatus of the predecessor node ( shouldParkAfterFailedAcquire(p, node) ).

waitStatus is used to record the status of the current node, such as node cancelled, node waiting, etc.
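For reference, these are the waitStatus values defined on the Node class in AQS (JDK 8):

    // waitStatus values defined in AbstractQueuedSynchronizer.Node
    static final int CANCELLED =  1;   // the node's thread has cancelled waiting
    static final int SIGNAL    = -1;   // the successor's thread needs to be unparked
    static final int CONDITION = -2;   // the node is waiting on a Condition
    static final int PROPAGATE = -3;   // a shared-mode release should propagate
    // 0 means none of the above: the initial state of a newly created node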

shouldParkAfterFailedAcquire(p, node) returns whether the current thread needs to be suspended; if so, parkAndCheckInterrupt() is called:

    private final boolean parkAndCheckInterrupt() {
        LockSupport.park(this);
        return Thread.interrupted();
    }

It uses the park method of LockSupport to suspend the current thread until it is woken up.
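A minimal, self-contained illustration of park/unpark (the class and variable names are just for the example):

    import java.util.concurrent.locks.LockSupport;

    public class ParkDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                System.out.println("parking...");
                LockSupport.park();          // blocks here until unparked (or interrupted)
                System.out.println("woken up");
            });
            waiter.start();
            Thread.sleep(100);               // give the waiter time to park
            LockSupport.unpark(waiter);      // wake it up, as unparkSuccessor() does
        }
    }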

Non-fair lock acquisition

The difference between a fair lock and a non-fair lock is mainly in the acquisition of the lock.

A fair lock hands out the lock in queue order. A non-fair lock has no such rule: it works in preemption mode and directly attempts to acquire the lock.

Non-fair lock:

        final void lock() {
            //try to get the lock directly
            if (compareAndSetState(0, 1))
                setExclusiveOwnerThread(Thread.currentThread());
            else
                acquire(1);
        }

Fair lock:

        final void lock() {
            acquire(1);
        }

Another important difference is in the lock attempt, tryAcquire(arg): a non-fair lock does not check whether other threads are waiting in the queue, it simply tries to acquire the lock directly:

        final boolean nonfairTryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                // unlike the fair lock, there is no !hasQueuedPredecessors() check here
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0) // overflow
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }
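Barging can also be requested explicitly: according to the ReentrantLock javadoc, the no-timeout tryLock() always uses this non-fair path, even when the lock was constructed as fair. A small sketch:

    ReentrantLock lock = new ReentrantLock(true);   // fair lock
    if (lock.tryLock()) {        // still barges past any queued threads
        try {
            // acquired immediately without queueing
        } finally {
            lock.unlock();
        }
    }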

Releasing the lock

The release process is the same for fair and non-fair locks:

    public void unlock() {
        sync.release(1);
    }
    
    public final boolean release(int arg) {
        if (tryRelease(arg)) {
            Node h = head;
            if (h != null && h.waitStatus != 0)
                // wake up the suspended thread
                unparkSuccessor(h);
            return true;
        }
        return false;
    }
    
    //try to release the lock
    protected final boolean tryRelease(int releases) {
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        boolean free = false;
        if (c == 0) {
            free = true;
            setExclusiveOwnerThread(null);
        }
        setState(c);
        return free;
    }        

The first check is whether the current thread is the one holding the lock; and because the lock is reentrant, state has to be decremented all the way back to 0 before the lock is considered fully released.

After the lock has been fully released, unparkSuccessor(h) is called to wake up the suspended successor thread.
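The ownership check in tryRelease() also means that a thread which does not hold the lock cannot release it; a small sketch:

    ReentrantLock lock = new ReentrantLock();
    lock.lock();                     // held by the current (main) thread
    new Thread(() -> {
        try {
            lock.unlock();           // this thread is not the owner
        } catch (IllegalMonitorStateException expected) {
            System.out.println("only the owning thread may unlock");
        }
    }).start();
    lock.unlock();                   // the owner releases it normally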

Summary

Because a fair lock has to respect the queue, threads must acquire the lock in queue order, which causes a lot of thread suspension and context switching; a non-fair lock has no such restriction.

That explains why non-fair locks are generally more efficient than fair locks.