
Mastering Concurrency In Java - Part 2: The Fundamentals

Yash Sachdeva
Author
Mastering Concurrency in Java - This article is part of a series.

In Part 1, we discussed the core concurrency hazards and control concepts in Java: race conditions, visibility, atomicity, deadlocks, starvation, livelock, contention, backpressure, interruption, and cancellation.

In this part, we will discuss the coordination primitives - synchronized, volatile, Atomics, Locks, Semaphores, Blocking Queues, and Concurrent Collections.

1. synchronized

Every Java object implicitly carries a monitor lock. Synchronized methods and blocks acquire and release that monitor around a critical section.

synchronized (lockObject) {
    // critical section
}

What does it guarantee?

  • Mutual Exclusion on the monitor: At most one thread at a time executes code guarded by the same monitor object.
  • Visibility and ordering: Exiting a synchronized block flushes writes to main memory, and entering a synchronized block invalidates the local cache, forcing a reload of variables from main memory, so the acquiring thread sees the latest values.
  • Reentrancy: The owning thread that holds the monitor can reacquire it multiple times without blocking itself.
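Reentrancy is what lets one synchronized method call another on the same object without deadlocking on its own monitor. A minimal sketch (the class and method names are illustrative):

```java
public class ReentrantExample {
    private int count = 0;

    public synchronized void incrementTwice() {
        increment(); // already holds this object's monitor...
        increment(); // ...and reacquires it without blocking itself
    }

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```

Without reentrancy, the call from incrementTwice() into increment() would block forever waiting for a monitor the thread itself holds.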

What does it not guarantee?

  • Fairness: There is no guarantee that threads will acquire the lock in the order they request it.
  • Timeout: There is no way to time out while waiting for the lock to be released; it simply blocks.
  • Deadlock or starvation freedom: It does not inherently prevent deadlocks or starvation.

When to choose synchronized?

  • The critical sections are small, the contention is moderate, and you want to keep the code simple.
  • There is no need for timed, interruptible, or fair acquisition; synchronized simply guards the in-memory invariants of the critical section.
public class SynchronizedCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}

2. volatile

The volatile keyword marks a field so that reads and writes go directly to main memory, with additional ordering guarantees.

What does volatile guarantee?

  • Visibility: A write to a volatile field by one thread is promptly visible to reads of that field by other threads; they do not see stale cached values.
  • Ordering fence: Reads and writes to a volatile field also preserve order, preventing instruction reordering around the access.

What does it not guarantee?

  • Atomicity: It does not make compound operations atomic. For example, incrementing a volatile int is still a non-atomic read-modify-write.
  • Mutual Exclusion: Multiple threads can read and write to a volatile variable concurrently. It does not serialize them like a lock.
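The classic consequence is the lost update: two threads incrementing a volatile int can interleave the read, add, and write steps, so increments get overwritten and the final value falls short. A demonstration sketch (class name and iteration counts are illustrative):

```java
public class VolatileLostUpdate {
    private volatile int count = 0; // visible to all threads, but ++ is NOT atomic

    public void increment() {
        count++; // read, add, write: three steps that can interleave
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileLostUpdate c = new VolatileLostUpdate();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Typically prints less than 200000 because some increments were lost
        System.out.println(c.get());
    }
}
```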

When to choose volatile?

  • One-writer, many-reader flags: Ideal for shutdown signals, configuration toggles, health indicators, or status fields where only one thread mutates and others only observe.

Example:

public class VolatileSingleWriterMultipleReadersExample {
    private volatile boolean running = true;

    public void startWorker() {
        Thread worker = new Thread(() -> {
            while (running) {
                doWork();
            }
        });
        worker.start();
    }

    public void stop() {
        running = false; // visible to worker without locks
    }

    private void doWork() {
        // perform unit of work
    }
}

3. Atomics

The java.util.concurrent.atomic package provides classes such as AtomicInteger, AtomicLong, and AtomicReference that offer lock-free, atomic read-modify-write operations implemented using Compare-And-Swap (CAS) primitives.

What do Atomics guarantee?

  • Atomic read-modify-write on a single variable.
  • Visibility of updates.
  • Non-blocking progress under contention: CAS-based updates avoid explicit blocking; threads retry on contention rather than sleeping on a monitor.

What do Atomics not guarantee?

  • Atomicity across multiple variables or multiple calls: a check with get() followed by a separate set() is not atomic as a whole; only the individual read-modify-write methods are.
  • Fairness or bounded retry delays for a given thread.
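Beyond prebuilt operations like incrementAndGet(), CAS lets you write custom retry loops: read the current value, compute a new one, and attempt compareAndSet(), retrying if another thread got in first. A sketch of a bounded counter built this way (class name and bound are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedAtomicCounter {
    private final AtomicInteger value = new AtomicInteger();
    private final int max;

    public BoundedAtomicCounter(int max) {
        this.max = max;
    }

    /** Increments unless the bound is reached; returns false if already at max. */
    public boolean tryIncrement() {
        while (true) {
            int current = value.get();
            if (current >= max) {
                return false; // bound reached, give up without modifying
            }
            if (value.compareAndSet(current, current + 1)) {
                return true; // our CAS won; the update was applied atomically
            }
            // Another thread updated the value first: loop and retry with a fresh read
        }
    }

    public int get() {
        return value.get();
    }
}
```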

Example:

import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounter {
    private final AtomicLong value = new AtomicLong();

    public void increment() {
        value.incrementAndGet();
    }

    public long get() {
        return value.get();
    }
}

4. Locks: ReentrantLock

The java.util.concurrent.locks package provides Lock implementations such as ReentrantLock that offer more control than synchronized, including timed and interruptible acquisition, explicit unlocking, and configurable fairness policies.

What do Locks guarantee?

  • Mutual Exclusion and visibility: At most one thread at a time executes code guarded by the same lock, and unlocking provides the same visibility guarantees as releasing a monitor.
  • Reentrancy: The owning thread that holds the lock can reacquire it multiple times. It must unlock the same number of times it acquired it.
  • Optional Fairness: ReentrantLock can be constructed with a fairness policy, so that waiting threads acquire the lock in FIFO order of arrival.

What do Locks not guarantee?

  • Automatic Release: Unlike synchronized, locks must be explicitly released by the caller, typically in a finally block. A forgotten unlock leaves the lock held forever, blocking every thread that later tries to acquire it.
  • Absolute Freedom From Starvation: They can implement a fairness policy but cannot control OS thread scheduling.

When to choose Locks?

  • Timed or interruptible lock acquisition is required.
  • A fairness policy is required.
  • Complex lock acquisition patterns (e.g., hand-over-hand locking) are required.

Example:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private final Lock lock = new ReentrantLock(true); // fair lock

    public boolean doWithLockOrTimeout() throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                // critical section
                return true;
            } finally {
                lock.unlock();
            }
        } else {
            // fall back: log, queue, or degrade functionality
            return false;
        }
    }
}
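The hand-over-hand (lock-coupling) pattern mentioned earlier acquires the next node's lock before releasing the current one, so different threads can traverse different parts of a linked structure concurrently. A simplified sketch over a singly linked list (class and field names are illustrative, and insertion is kept minimal):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HandOverHandList {
    private static final class Node {
        final int value;
        Node next;
        final ReentrantLock lock = new ReentrantLock();
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private final Node head = new Node(Integer.MIN_VALUE, null); // sentinel node

    public void addFirst(int value) {
        head.lock.lock();
        try {
            head.next = new Node(value, head.next);
        } finally {
            head.lock.unlock();
        }
    }

    public boolean contains(int value) {
        Node current = head;
        current.lock.lock();
        try {
            while (current.next != null) {
                Node next = current.next;
                next.lock.lock();      // grab the next node's lock...
                current.lock.unlock(); // ...before letting go of the current one
                current = next;
                if (current.value == value) {
                    return true;
                }
            }
            return false;
        } finally {
            current.lock.unlock(); // exactly one node is locked at this point
        }
    }
}
```

At every moment the traversing thread holds exactly one or two node locks, never the whole list, which is what allows concurrent traversals to overlap.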

5. Semaphores: Bounded Concurrency / Throttling

A Semaphore maintains a set of permits. acquire() blocks (or fails) when no permits are available, and release() returns a permit to the pool.

A semaphore with one permit behaves much like a lock; with more permits, it limits concurrent access to a resource, which maps directly to throttling and connection-pooling patterns.

What do Semaphores guarantee?

  • Upper bound on concurrency: At most N permits are outstanding at any time, so no more than N threads (each holding one permit) use the guarded resource concurrently.
  • Optional Fairness: The constructor takes a fairness flag to favor FIFO acquisition of permits, reducing starvation for long-waiting threads.
  • Mutual exclusion when permits = 1 (behaves like an unstructured lock).

What do they not guarantee?

  • Automatic permit return: Forgetting to call release() — including on an exception path — permanently leaks a permit and shrinks the effective pool.
  • Protection of critical sections: Semaphores only limit concurrency; they do not inherently protect the critical section itself from race conditions or visibility issues (unless permits = 1).
  • Backpressure: Semaphores do not inherently provide backpressure; they only limit the number of concurrent requests, not the rate at which they arrive.

When to choose Semaphores?

  • Limiting the number of concurrent calls to a downstream dependency while still allowing many application threads to exist.
  • Implementing bounded resource pools (e.g., database connections, file handles).

Example:

import java.util.concurrent.Semaphore;

public class BoundedResource {
    private final Semaphore semaphore = new Semaphore(3);  // 3 permits

    public void accessResource() {
        semaphore.acquireUninterruptibly();
        try {
            // Use resource
        } finally {
            semaphore.release();
        }
    }
}
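When blocking callers is undesirable, tryAcquire() offers a fail-fast variant: a caller that cannot get a permit immediately is rejected instead of queuing. A sketch (class name and permit count are illustrative):

```java
import java.util.concurrent.Semaphore;

public class FailFastThrottle {
    private final Semaphore permits = new Semaphore(3);

    /** Runs the task only if a permit is immediately available; otherwise rejects it. */
    public boolean tryRun(Runnable task) {
        if (!permits.tryAcquire()) {
            return false; // throttled: reject instead of blocking the caller
        }
        try {
            task.run();
            return true;
        } finally {
            permits.release(); // always return the permit, even if the task throws
        }
    }
}
```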

Note: Semaphores aren’t used for rate limiting because they don’t have any concept of time, only of how many concurrent threads are allowed to execute. They don’t work across distributed systems either.

6. Blocking Queues

java.util.concurrent.BlockingQueue represents a thread-safe FIFO queue that supports blocking operations for insertion and removal.

What do Blocking Queues guarantee?

  • Thread-safe insertion and removal: Multiple threads can interact with the queue without race conditions or visibility issues.
  • Blocking Semantics: Methods such as put() or take() block when the queue is full or empty, respectively, until space or an element becomes available.
  • Optional Bounded Capacity: Implementations like ArrayBlockingQueue and bounded LinkedBlockingQueue enforce a maximum size, preventing unbounded memory growth.

What do they not guarantee?

  • Strict fairness across producers / consumers.
  • Automatic load shedding: A bounded queue can block or reject producers, but deciding when to drop, buffer elsewhere, or apply rate limiting remains an application-level concern.

When to choose Blocking Queues? Blocking queues are the in-process analog of message queues. Use them when:

  • The design naturally decomposes into producers and consumers.
  • Backpressure and decoupling are desired.

Example:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerExample {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10);

    public void produce(String item) throws InterruptedException {
        queue.put(item); // blocks if full
    }

    public String consume() throws InterruptedException {
        return queue.take(); // blocks if empty
    }
}
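Load shedding, which as noted above stays an application-level concern, can be built on offer() with a timeout: instead of blocking indefinitely on a full queue, the producer waits briefly and then drops or redirects the item. A sketch (class name, capacity, and timeout are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class SheddingProducer {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

    /** Tries to enqueue; returns false (shedding the item) if the queue stays full. */
    public boolean produce(String item) throws InterruptedException {
        return queue.offer(item, 50, TimeUnit.MILLISECONDS);
    }

    public String consume() throws InterruptedException {
        return queue.take();
    }
}
```

The boolean result makes the shedding decision explicit: the caller can log, retry elsewhere, or degrade rather than block forever.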

7. Concurrent Collections

The java.util.concurrent package provides collection implementations designed for concurrent access, such as ConcurrentHashMap, ConcurrentLinkedQueue, and CopyOnWriteArrayList.

What do they guarantee?

  • Thread-safe access without global locking.
  • Atomic compound operations: ConcurrentMap defines methods that add, remove, or replace entries only if certain conditions hold (for example, “add if absent”), implemented atomically to avoid race conditions.
  • Non-blocking reads in many cases: Implementations such as ConcurrentHashMap are heavily optimized for concurrent reads, often allowing them without global locks.

What do they not guarantee?

  • Global atomicity across multiple operations.
  • Iterator snapshot consistency: Iterators are typically weakly consistent (or snapshot-based, as in CopyOnWriteArrayList), so writes made during iteration may or may not become visible to it.

When to choose Concurrent Collections?

  • Shared maps and queues accessed by many threads (ConcurrentHashMap, ConcurrentLinkedQueue); CopyOnWriteArrayList specifically when reads vastly outnumber writes (e.g., listener lists, configuration registries).

Example:

import java.util.concurrent.ConcurrentHashMap;

public class CacheExample {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    public void putIfMissing(String key, String value) {
        cache.putIfAbsent(key, value);
    }
}
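The atomic compound operations go beyond putIfAbsent(): merge() folds a read-modify-write into a single atomic step, which is the idiomatic way to maintain per-key counters without a lock. A sketch (class name is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCounter {
    private final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

    public void record(String word) {
        // Atomically inserts 1 for a new key, or adds 1 to the existing count
        counts.merge(word, 1, Integer::sum);
    }

    public int countOf(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```

A separate get() followed by put() would reintroduce the check-then-act race that merge() exists to avoid.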

Conclusion

Choosing the right concurrency primitive is about balancing safety, performance, and complexity.

  • Use synchronized or Atomics for simple state.
  • Use Locks or Semaphores when you need fine-grained control or resource throttling.
  • Use Blocking Queues for producer-consumer architectures.
  • Use Concurrent Collections to scale shared state access.
