Synchronized
This is used to make methods or code blocks mutually exclusive. It is implemented by the JVM itself rather than by a dedicated OS lock primitive.
The JVM stores the lock state in the metadata (header) of the object on which the synchronized block is defined. When a thread enters the synchronized block, it acquires the lock on that object. No other thread can enter any synchronized block guarded by the same object until the lock is released.
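A minimal sketch of the behavior described above: two threads increment a shared counter inside a synchronized block, so the read-modify-write can never interleave. The class and method names here are illustrative, not from the original text.

```java
// Two threads increment a shared counter; synchronized makes the
// read-modify-write mutually exclusive on the lock object's monitor.
public class SyncCounter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) { // acquires the monitor stored in lock's header
            count++;          // only one thread at a time executes this
        }
    }

    public int getCount() {
        synchronized (lock) { return count; }
    }

    public static int demo() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.getCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(SyncCounter.demo()); // always 2000 with synchronized
    }
}
```

Without the synchronized blocks, the two `count++` operations could interleave and the final count could be less than 2000.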
A waiting thread can be handled either entirely by the JVM or handed off to the OS:
- Spin-wait - the thread keeps checking in a loop until the lock is released.
- Fat lock - if the wait is taking long, the JVM inflates the lock and delegates to the OS, which puts the thread to sleep and wakes it up when the lock is released.
The entire synchronized implementation is built on top of the CPU's atomic instructions. When a thread tries to acquire the lock, it uses an atomic instruction (such as compare-and-swap) to set the object's metadata. This ensures that only one thread can set the metadata at a time.
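The spin-wait acquisition above can be sketched with `AtomicBoolean.compareAndSet`, which compiles down to the same kind of atomic CPU instruction the JVM's monitor code relies on. This is a simplified illustration, not the JVM's actual lock implementation.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a spin lock: compareAndSet is an atomic CPU instruction,
// so only one thread can flip the flag from false to true at a time.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);
    }

    // Demo: two threads increment a shared counter under the spin lock.
    private static int count = 0;

    public static int demo() throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { count++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(SpinLock.demo());
    }
}
```

A real JVM adds the "fat lock" path on top of this: after spinning for a while, it parks the thread in the OS instead of burning CPU.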
Blocking Data Structures
In the case of blocking data structures such as BlockingQueue, the process is similar but uses the kernel's futex feature to implement it.
- The main thread tries to fetch data from the queue.
- If the queue is empty, the fetch method will call the futex_wait system call, which puts the thread to sleep until there is data in the queue.
- When the producer thread adds data to the queue, it will call the futex_wake system call, which wakes up one or more threads that are waiting on the queue.
There is no race between checking and sleeping: this is implemented purely in the kernel, which atomically verifies that the value is still the expected one (the queue is still empty) before putting the thread to sleep, so a wakeup cannot be missed in between.
Beyond that, the kernel does its regular job of putting a thread to sleep and waking it up whenever requested.
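The producer/consumer steps above map directly onto `BlockingQueue`: `take()` blocks the consumer (on Linux, ultimately via a futex wait) until `put()` wakes it. The class name below is illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Consumer blocks on take() until the producer calls put().
public class BlockingQueueDemo {
    public static String demo() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);    // queue stays empty for a while
                queue.put("hello");   // wakes the sleeping consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // The queue is empty here, so take() puts this thread to sleep
        // until the producer adds data.
        String item = queue.take();
        producer.join();
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(BlockingQueueDemo.demo());
    }
}
```

The sleeping and waking is invisible at the Java level; the JVM and kernel handle it underneath `take()` and `put()`.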
Distributed Systems
In the case of distributed messaging servers, the clients cannot share a futex with the server, since they run on different machines. Instead, the queue clients use standard socket I/O: they send a request to the server and block waiting for the response, and the server sends the response back over the same socket.
The threads are still put to sleep and woken up by the kernel as explained above: blocking on a socket read parks the thread until data arrives.
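A minimal sketch of that request/response pattern over a local socket: the client thread blocks inside `readLine()` until the server writes back, with the kernel parking and waking it. The names and the "ack:" protocol are invented for illustration.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Client sends a request and blocks on the socket until the server replies.
public class SocketRequestDemo {
    public static String demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // OS-assigned port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    // Echo the request back with a marker.
                    out.println("ack:" + in.readLine());
                } catch (IOException ignored) {
                }
            });
            serverThread.start();

            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("fetch");
                // The kernel parks this thread until response data arrives.
                String response = in.readLine();
                serverThread.join();
                return response;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(SocketRequestDemo.demo());
    }
}
```

From the client thread's point of view this looks just like a blocking queue fetch; the difference is that the wait is on socket data instead of a shared-memory futex.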