Curiosities of Java Virtual Threads pinning with synchronized
If you've heard about Java Virtual Threads, you've probably also heard about the pinning phenomenon. Yet, many articles I found online stop at "replace synchronized with ReentrantLock and don't use native methods or foreign functions". While I don't want to discuss the second part here, I think good old synchronized deserves a bit more attention.
Note: If you are not familiar with Virtual Threads pinning, I suggest starting with the official Virtual Threads documentation.
Thread pinning can be tricky. Using synchronized alone does not have to cause pinning. Removing synchronized from your code doesn't mean it's not being used under the hood. Some pinning cases can be hard to spot just by looking at the code. Surprisingly, pinning can even occur when using ReentrantLock!
Finally, the only reasonable way of knowing if your app is pinning-free is to actively monitor for its occurrences.
Let’s go through a couple of code snippets to see how it all works.
Methodology
To detect thread pinning in the presented examples, I always set the -Djdk.tracePinnedThreads=full parameter. This way, whenever the JVM detects pinning, the full stacktrace is printed out. Monitoring thread pinning deserves its own article; yet, for our purposes, this solution should be just enough.
For simplicity, I introduced two helper functions. log calls System.out.println under the hood, prefixing the message with the current thread's name. sleep calls Thread.sleep, hiding the InterruptedException from us.
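The helpers are not shown in the article, so here is a hedged reconstruction (the names log and sleep come from the text; the bodies and the wrapping class are my assumptions):

```java
// Hypothetical reconstruction of the article's helper functions
public class Helpers {

    // builds the "[threadName] message" prefix described in the text
    static String format(String message) {
        return "[" + Thread.currentThread().getName() + "] " + message;
    }

    static void log(String message) {
        System.out.println(format(message));
    }

    // Thread.sleep without the checked InterruptedException
    static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        log("hello"); // prints e.g. "[main] hello"
        sleep(10);
    }
}
```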
All the presented examples have been run on Java 21. Future Java versions may solve some of the described problems for us! I recommend watching Alan Bateman's talk "Java's Virtual Threads - Next Steps" to understand what may come next. I hope it convinces you that bumping the Java version regularly can be very beneficial.
synchronized alone is not enough
Let's start with a simple example. There are two Virtual Threads, logging from within a synchronized block. We've all heard how bad this keyword is for Virtual Threads… Time to see it in action!
@Test
void logsFromWithinSynchronizedBlock() throws InterruptedException {
    var cdl = new CountDownLatch(2);
    int iterations = 3;
    var obj = new Object(); // something to lock
    Thread.ofVirtual().name("vt1").start(() -> {
        for (int i = 0; i < iterations; i++) {
            synchronized (obj) {
                log("Iteration " + i);
            }
            sleep(100);
        }
        cdl.countDown();
    });
    Thread.ofVirtual().name("vt2").start(() -> {
        for (int i = 0; i < iterations; i++) {
            synchronized (obj) {
                log("Iteration " + i);
            }
            sleep(70);
        }
        cdl.countDown();
    });
    cdl.await();
}
Output:
[vt1] Iteration 0
[vt2] Iteration 0
[vt2] Iteration 1
[vt1] Iteration 1
[vt2] Iteration 2
[vt1] Iteration 2
There was no pinning detected, despite using synchronized! How is that possible?
The short answer: there were no attempts to park the Virtual Thread within the synchronized block. That's why synchronized alone is not enough to cause thread pinning. It is what happens within the synchronized block that really matters.
Virtual Thread parking has been described in detail on the Rock the JVM Blog. Here’s the bare minimum you should know about it now:
Every time the virtual thread reaches a blocking point, the state of the thread is changed to PARKING. The reaching of a blocking point is signaled through the call of the VirtualThread.park() method
(…)
Once in the PARKING state, the yieldContinuation() method is called. This method is the one that performs the actual parking of the virtual thread and tries to unmount the virtual thread from its carrier thread
When this happens from within a synchronized block, the virtual thread can't be unmounted from its carrier.
Adding a bit of I/O
Let's add a real-life I/O operation to our synchronized block. The blockingNetworkCall() method performs an HTTP GET call to the dev.java page and prints the length of the response (which is an HTML page). Under the hood, it uses the built-in HttpClient introduced in Java 11.
@Test
void ioFromWithinSynchronizedBlock() throws InterruptedException {
    var cdl = new CountDownLatch(2);
    var obj = new Object();
    int iterations = 3;
    Thread.ofVirtual().name("vt1").start(() -> {
        for (int i = 0; i < iterations; i++) {
            synchronized (obj) {
                blockingNetworkCall();
            }
        }
        cdl.countDown();
    });
    Thread.ofVirtual().name("vt2").start(() -> {
        for (int i = 0; i < iterations; i++) {
            synchronized (obj) {
                blockingNetworkCall();
            }
        }
        cdl.countDown();
    });
    cdl.await();
}
The output looks as follows:
Thread[#22,ForkJoinPool-1-worker-1,5,CarrierThreads]
java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:185)
java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
java.base/java.lang.VirtualThread.park(VirtualThread.java:592)
java.base/java.lang.System$2.parkVirtualThread(System.java:2639)
java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:54)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:219)
java.base/java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1864)
java.base/java.util.concurrent.ForkJoinPool.unmanagedBlock(ForkJoinPool.java:3780)
java.base/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3725)
java.base/java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1898)
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2072)
java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:931)
java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:133)
com.mikemybytes.playground.VirtualThreadsTest.blockingNetworkCall(VirtualThreadsTest.java:128)
com.mikemybytes.playground.VirtualThreadsTest.lambda$example3$0(VirtualThreadsTest.java:27) <== monitors:1
java.base/java.lang.VirtualThread.run(VirtualThread.java:311)
[vt1] Response length: 25522
[vt1] Response length: 25522
[vt1] Response length: 25522
Thread[#24,ForkJoinPool-1-worker-2,5,CarrierThreads]
java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:185)
java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
java.base/java.lang.VirtualThread.park(VirtualThread.java:592)
java.base/java.lang.System$2.parkVirtualThread(System.java:2639)
java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:54)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:219)
java.base/java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1864)
java.base/java.util.concurrent.ForkJoinPool.unmanagedBlock(ForkJoinPool.java:3780)
java.base/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3725)
java.base/java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1898)
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2072)
java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:931)
java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:133)
com.mikemybytes.playground.VirtualThreadsTest.blockingNetworkCall(VirtualThreadsTest.java:128)
com.mikemybytes.playground.VirtualThreadsTest.lambda$example3$1(VirtualThreadsTest.java:36) <== monitors:1
java.base/java.lang.VirtualThread.run(VirtualThread.java:311)
[vt2] Response length: 25522
[vt2] Response length: 25522
[vt2] Response length: 25522
No surprises this time. The JVM detected thread pinning because our blocking network call (parking the thread) was wrapped in a synchronized block.
It's worth noting that we would observe a similar situation after replacing blockingNetworkCall() with good old Thread.sleep(...). In fact, any blocking operation within a synchronized block could end up like this.
Switching to ReentrantLock
Let's verify the popular wisdom and replace synchronized with java.util.concurrent.locks.ReentrantLock. In our case, this should be rather straightforward.
@Test
void reentrantLockInsteadOfSynchronized() throws InterruptedException {
    var cdl = new CountDownLatch(2);
    var lock = new ReentrantLock();
    int iterations = 3;
    Thread.ofVirtual().name("vt1").start(() -> {
        for (int i = 0; i < iterations; i++) {
            lock.lock();
            try {
                blockingNetworkCall();
            } finally { // don't forget to unlock!
                lock.unlock();
            }
        }
        cdl.countDown();
    });
    Thread.ofVirtual().name("vt2").start(() -> {
        for (int i = 0; i < iterations; i++) {
            lock.lock();
            try {
                blockingNetworkCall();
            } finally {
                lock.unlock();
            }
        }
        cdl.countDown();
    });
    cdl.await();
}
The output looks much better this time:
[vt1] Response length: 25522
[vt1] Response length: 25522
[vt1] Response length: 25522
[vt2] Response length: 25522
[vt2] Response length: 25522
[vt2] Response length: 25522
As expected: no thread pinning has been detected! 🎉
Interestingly, the execution order seems to be fixed, no matter how many times we re-run this example. vt1 executes its requests first, while vt2 has to wait. The reason for that can be found in the ReentrantLock documentation.
Initializing our lock object with new ReentrantLock(true) would enable the fair ordering policy:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread.
There's a reason why the fair ordering policy is not enabled by default, and that reason is throughput. Also, non-fair locks are simply good enough in most cases. The timeless Java Concurrency in Practice explains this much better than I could.
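The fairness toggle from the quote above can be exercised directly; this standalone sketch (the class wrapper is mine, purely for illustration) shows the constructor flag and the isFair() accessor:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Fairness {
    public static void main(String[] args) {
        ReentrantLock defaultLock = new ReentrantLock();  // non-fair by default
        ReentrantLock fairLock = new ReentrantLock(true); // fair ordering policy

        System.out.println(defaultLock.isFair()); // false
        System.out.println(fairLock.isFair());    // true
    }
}
```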
The hidden traps
All our previous examples were rather obvious. It's not that hard to spot the potential thread pinning just by looking at their code. Let's try something a bit more tricky.
In our next attempt, we use ConcurrentHashMap to build a simple, DB-backed cache. Whenever the requested key is not found, our cache will load the appropriate value from the DB. Do you see anything suspicious?
@Test
void dbReadFromWithinComputeIfAbsent() throws InterruptedException {
    Map<String, Integer> cache = new ConcurrentHashMap<>();
    var cdl = new CountDownLatch(1);
    Thread.ofVirtual().name("vt").start(() -> {
        Integer cachedValue = cache.computeIfAbsent("foo", key -> {
            var calculated = readFromDb(key);
            log("Caching value for '" + key + "': " + calculated);
            return calculated;
        });
        log("Cached value: " + cachedValue);
        cdl.countDown();
    });
    cdl.await();
}

private static Integer readFromDb(String key) {
    sleep(100); // imagine this is a blocking read from DB
    return 44;
}
This is what the execution output looks like:
Thread[#22,ForkJoinPool-1-worker-1,5,CarrierThreads]
java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:185)
java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
java.base/java.lang.VirtualThread.parkNanos(VirtualThread.java:631)
java.base/java.lang.VirtualThread.sleepNanos(VirtualThread.java:803)
java.base/java.lang.Thread.sleep(Thread.java:507)
com.mikemybytes.playground.VirtualThreadsTest.sleep(VirtualThreadsTest.java:174)
com.mikemybytes.playground.VirtualThreadsTest.readFromDb(VirtualThreadsTest.java:42)
com.mikemybytes.playground.VirtualThreadsTest.lambda$example5$0(VirtualThreadsTest.java:30)
java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708) <== monitors:1
com.mikemybytes.playground.VirtualThreadsTest.lambda$example5$1(VirtualThreadsTest.java:29)
java.base/java.lang.VirtualThread.run(VirtualThread.java:311)
[vt] Caching value for 'foo': 44
[vt] Cached value: 44
Houston, we have thread pinning!
The stacktrace reported by the JDK is very explicit about the root cause: it's related to computeIfAbsent. A quick look at the ConcurrentHashMap source code leaves no doubt; the synchronized keyword is the culprit here. The lambda passed to computeIfAbsent is executed from within a synchronized block. That's why blocking operations like a DB read cause thread pinning.
You could observe the same phenomenon when using Caffeine, the most popular Java caching library, since ConcurrentHashMap can be found under the hood there as well.
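One possible workaround (my own sketch, not something from the article) is to cache a CompletableFuture instead of the value itself. Then computeIfAbsent only creates the future, and the blocking wait happens after computeIfAbsent has returned, outside the map's internal synchronized block:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class FutureCache {

    // stand-in for the article's blocking DB read
    static Integer readFromDb(String key) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 44;
    }

    static final Map<String, CompletableFuture<Integer>> cache = new ConcurrentHashMap<>();

    static Integer get(String key) {
        return cache
                // only creates the future; no blocking inside the monitor
                .computeIfAbsent(key, k -> CompletableFuture.supplyAsync(() -> readFromDb(k)))
                .join(); // the blocking wait happens out here
    }

    public static void main(String[] args) {
        System.out.println(get("foo")); // prints 44
    }
}
```

The tradeoff: a failed load leaves a completed-exceptionally future in the map, so real code would also need to evict failed entries.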
Down the rabbit hole
Time for the last example. Let's completely remove our DB read and just hardcode the "calculated" value. The only meaningful operation within computeIfAbsent will be… logging.
This time we will use a real logger, backed by a popular logging framework. Let's stick to the default configuration provided by Spring Boot 3.2.3, which depends on SLF4J 2.0.12 and Logback 1.4.14. This combo gives us a good old ConsoleAppender, logging entries to the standard out.
Can an ordinary log entry cause thread pinning?
public class VtLoggingTest {

    // slf4j + logback
    private static final Logger log = LoggerFactory.getLogger(VtLoggingTest.class);

    @RepeatedTest(5) // we want to be "lucky", so let's run it more than once
    void logsFromWithinCacheComputeIfAbsent() throws InterruptedException {
        Map<String, Integer> cache = new ConcurrentHashMap<>();
        CountDownLatch cdl = new CountDownLatch(2);
        Thread.ofVirtual().name("vt1").start(() -> {
            Integer cachedValue = cache.computeIfAbsent("foo", key -> {
                int calculated = 44;
                log.info("Caching value for '" + key + "': " + calculated);
                return calculated;
            });
            log.info("Cached value: " + cachedValue);
            cdl.countDown();
        });
        Thread.ofVirtual().name("vt2").start(() -> {
            Integer cachedValue = cache.computeIfAbsent("bar", key -> {
                int calculated = 7;
                log.info("Caching value for '" + key + "': " + calculated);
                return calculated;
            });
            log.info("Cached value: " + cachedValue);
            cdl.countDown();
        });
        cdl.await();
    }
}
Let’s run the test and prepare for a little surprise:
17:47:23.895 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'foo': 44
17:47:23.897 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 44
Thread[#25,ForkJoinPool-1-worker-2,5,CarrierThreads]
java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:183)
java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
java.base/java.lang.VirtualThread.park(VirtualThread.java:582)
java.base/java.lang.System$2.parkVirtualThread(System.java:2639)
java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:54)
java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:219)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:754)
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:990)
java.base/java.util.concurrent.locks.ReentrantLock$Sync.lock(ReentrantLock.java:153)
java.base/java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:322)
ch.qos.logback.core.OutputStreamAppender.writeBytes(OutputStreamAppender.java:200)
ch.qos.logback.core.OutputStreamAppender.writeOut(OutputStreamAppender.java:193)
ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:237)
ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:85)
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:426)
ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:386)
ch.qos.logback.classic.Logger.info(Logger.java:584)
com.mikemybytes.playground.VtLoggingTest.lambda$logsFromWithinCacheComputeIfAbsent$2(VtLoggingTest.java:34)
java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708) <== monitors:1
com.mikemybytes.playground.VtLoggingTest.lambda$logsFromWithinCacheComputeIfAbsent$3(VtLoggingTest.java:32)
java.base/java.lang.VirtualThread.run(VirtualThread.java:309)
17:47:23.895 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'bar': 7
17:47:23.903 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 7
17:47:23.910 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'foo': 44
17:47:23.910 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 44
17:47:23.910 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'bar': 7
17:47:23.910 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 7
17:47:23.912 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'foo': 44
17:47:23.912 [vt1] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 44
17:47:23.912 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Caching value for 'bar': 7
17:47:23.912 [vt2] INFO com.mikemybytes.playground.VtLoggingTest -- Cached value: 7
Now things got really intriguing. A thread has been pinned by our log statement! What's even more interesting, we also see a call to the ReentrantLock.lock method. Actually, it's the ReentrantLock that pinned our thread!
We already learned that ConcurrentHashMap uses synchronized under the hood. But why has the ReentrantLock pinned our thread? Analyzing the stacktrace brings us to the writeBytes method of Logback's OutputStreamAppender class:
// Logback OutputStreamAppender sources
// (...)
// this was executed from within the 'synchronized' (ConcurrentHashMap#computeIfAbsent)
private void writeBytes(byte[] byteArray) throws IOException {
    if (byteArray == null || byteArray.length == 0)
        return;
    streamWriteLock.lock(); // <- ReentrantLock!
    try {
        writeByteArrayToOutputStreamWithPossibleFlush(byteArray);
    } finally {
        streamWriteLock.unlock();
    }
}
// (...)
protected final void writeByteArrayToOutputStreamWithPossibleFlush(byte[] byteArray) throws IOException {
    this.outputStream.write(byteArray);
    if (immediateFlush) { // 'true' by default!
        this.outputStream.flush();
    }
}
The issue here is not the ReentrantLock itself, but the synchronized block wrapping it a few levels above. Logback's ConsoleAppender flushes the stream after each write (immediateFlush=true). While one thread waits for the flush to complete, the others wait to acquire the lock. When the lock is not available, the calling Virtual Thread gets parked. And thread parking within a synchronized block is, as we already know, a recipe for pinning. Mystery solved!
Most of the logging frameworks I know use synchronous appenders by default. Asynchronous appenders come with their own challenges - just take a look at Log4j 2 and Logback docs. Carefully consider their tradeoffs before enabling them. As I stated at the beginning of this article, future Java versions should make these pinning issues obsolete.
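For illustration only, a minimal Logback configuration wrapping the console appender in an AsyncAppender might look like the sketch below. This is not a drop-in recommendation; review the tradeoffs mentioned above (queueing, potential event loss on shutdown) before adopting it:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} -- %msg%n</pattern>
    </encoder>
  </appender>

  <!-- hands entries off to a worker thread instead of writing inline -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="CONSOLE"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>
```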
The need for monitoring
Thread pinning can be really hard to spot just by looking at the code. Most Java applications use various frameworks and tools that hide the implementation details from the people writing code on top of them. In fact, the whole Java standard library does exactly the same. I don't think we should drop all the tools developed through the years and start writing everything from scratch… It's 2024, we can do better!
Many of the Virtual Threads-related issues have been already fixed or should be fixed soon. Keeping our dependencies up-to-date has become more important than ever. Since the upcoming Java versions should bring further improvements, those using Virtual Threads should feel encouraged to update their Java version regularly. I believe that eventually, the issues described in this article will no longer be valid.
Until that happens, we should monitor our Virtual Threads applications, looking for thread pinning. The simple solution used in this article (passing -Djdk.tracePinnedThreads=full) is a good starting point, but it also has its drawbacks (like this JDK issue). When aiming for constant monitoring, JFR Event Streaming seems to be a much better solution. But that's also a topic for a separate article.
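As a teaser for that separate article, here is a minimal sketch of streaming the jdk.VirtualThreadPinned JFR event (available in Java 21); the wiring and the output format are my assumptions, not a production-ready monitor:

```java
import jdk.jfr.consumer.RecordingStream;

public class PinningMonitor {

    public static void main(String[] args) throws Exception {
        try (RecordingStream rs = new RecordingStream()) {
            // subscribe to the built-in pinning event
            rs.enable("jdk.VirtualThreadPinned").withStackTrace();
            rs.onEvent("jdk.VirtualThreadPinned",
                    event -> System.out.println("Pinned for " + event.getDuration()));
            rs.startAsync();

            // trigger pinning on purpose: park (sleep) inside a synchronized block
            Object lock = new Object();
            Thread.ofVirtual().start(() -> {
                synchronized (lock) {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException ignored) {
                    }
                }
            }).join();

            rs.stop(); // flush buffered events before closing the stream
        }
    }
}
```

In a real service, the onEvent callback would feed a metric or an alert instead of printing to stdout.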
Coda
Investigating real-life pinning scenarios was an interesting exercise. I hope the examples I provided clarify how synchronized can cause this phenomenon. Yet, being aware is not enough, as the only reasonable way of detecting pinning is to actively monitor for its occurrences. Some of the scenarios are far from evident in the code.
A bit of infrequent and short-lived pinning may not be an issue. In other words, you may not need to chase every single use case and eliminate it now. But, of course, your mileage may vary.
What I'd encourage you to do is keep your dependencies and Java version up-to-date. I really hope this article will turn into a detective story of the past thanks to ecosystem fixes.