1. Introduction
In Java multithreading, effective coordination between threads is crucial to ensure proper synchronization and prevent data corruption. Two commonly used mechanisms for thread coordination are CountDownLatch and Semaphore. In this tutorial, we’ll explore the differences between CountDownLatch and Semaphore and discuss when to use each.
2. Background
Let’s explore the fundamental concepts behind these synchronization mechanisms.
2.1. CountDownLatch
CountDownLatch enables one or more threads to wait until a set of tasks running in other threads has completed. It works by counting down from an initial value: each finished task decrements the counter, and once it reaches zero, all waiting threads are released.
2.2. Semaphore
Semaphore is a synchronization tool that controls access to a shared resource through the use of permits. In contrast to CountDownLatch, Semaphore permits can be released and acquired multiple times throughout the application, allowing finer-grained control over concurrency management.
3. Differences Between CountDownLatch and Semaphore
In this section, we’ll delve into the key distinctions between these synchronization mechanisms.
3.1. Counting Mechanism
CountDownLatch operates by starting with an initial count, which is decremented as tasks are completed. Once the count reaches zero, the waiting threads are released.
Semaphore maintains a set of permits, where each permit represents permission to access a shared resource. Threads acquire permits to access the resources and release them when finished.
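As a minimal sketch of the two counting models (the counts of 2 used here are arbitrary), the one-way countdown versus the two-way permit flow can be shown in a few calls:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class CountingDemo {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: one-way countdown from an initial value
        CountDownLatch latch = new CountDownLatch(2);
        latch.countDown(); // count: 2 -> 1
        latch.countDown(); // count: 1 -> 0, waiters are released
        latch.await();     // returns immediately since the count is zero

        // Semaphore: permits move both ways
        Semaphore semaphore = new Semaphore(2);
        semaphore.acquire(); // permits: 2 -> 1
        semaphore.release(); // permits: 1 -> 2

        System.out.println("Latch count: " + latch.getCount());              // 0
        System.out.println("Available permits: " + semaphore.availablePermits()); // 2
    }
}
```

The latch's count can only move toward zero, while the semaphore's permit count recovers as permits are returned.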
3.2. Resettability
Semaphore permits can be released and acquired multiple times, allowing for dynamic resource management. For example, if our application suddenly requires more database connections, we can release additional permits to increase the number of available connections dynamically.
With CountDownLatch, by contrast, once the count reaches zero it cannot be reset or reused for another synchronization event. It's designed for one-time use cases.
3.3. Dynamic Permit Count
The number of available Semaphore permits changes at runtime: acquire() takes permits, release() returns them, and calling release() more times than acquire() even raises the count above its initial value. This allows dynamic changes in how many threads may access a shared resource concurrently.
On the other hand, once CountDownLatch is initialized with a count, it remains fixed and cannot be altered during runtime.
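As an illustration (the starting pool size of 2 is a made-up number), releasing permits that were never acquired simply grows the available count at runtime:

```java
import java.util.concurrent.Semaphore;

public class DynamicPermitsDemo {
    public static void main(String[] args) {
        // Hypothetical resource pool that starts with 2 permits
        Semaphore connections = new Semaphore(2);
        System.out.println("Initial permits: " + connections.availablePermits());   // 2

        // release(3) is not paired with a prior acquire(), so the
        // available count grows beyond its initial value
        connections.release(3);
        System.out.println("After release(3): " + connections.availablePermits()); // 5
    }
}
```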
3.4. Fairness
Semaphore optionally supports fairness: when constructed with the fairness flag set to true, it serves threads waiting to acquire permits in the order they arrived (first-in-first-out). This helps prevent thread starvation in high-contention scenarios.
In contrast, CountDownLatch doesn’t have a fairness concept. It’s commonly used for one-time synchronization events where the specific order of thread execution is less critical.
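Fairness is chosen at construction time through a boolean flag; a minimal sketch:

```java
import java.util.concurrent.Semaphore;

public class FairnessDemo {
    public static void main(String[] args) {
        // Passing true as the second argument requests FIFO ordering of waiters
        Semaphore fair = new Semaphore(1, true);
        // The single-argument constructor creates a non-fair semaphore
        Semaphore nonFair = new Semaphore(1);

        System.out.println("fair.isFair(): " + fair.isFair());       // true
        System.out.println("nonFair.isFair(): " + nonFair.isFair()); // false
    }
}
```

Non-fair mode allows permit barging, which generally yields higher throughput; the fair mode trades some throughput for predictable ordering.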
3.5. Use Cases
CountDownLatch is commonly used for scenarios such as coordinating the startup of multiple threads, waiting for parallel operations to complete, or synchronizing the initialization of a system before proceeding with main tasks. For example, in a concurrent data processing application, CountDownLatch can ensure that all data loading tasks are completed before data analysis begins.
On the other hand, Semaphore is suitable for managing access to shared resources, implementing resource pools, controlling access to critical sections of code, or limiting the number of concurrent database connections. For instance, in a database connection pooling system, Semaphore can limit the number of concurrent database connections to prevent overwhelming the database server.
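To make the connection-limiting use case concrete, here's a minimal sketch; the limit of three, the doQuery() name, and the simulated work are illustrative assumptions, not a real JDBC pool:

```java
import java.util.concurrent.Semaphore;

public class ConnectionLimiter {
    static final int MAX_CONNECTIONS = 3; // illustrative limit
    static final Semaphore permits = new Semaphore(MAX_CONNECTIONS);

    // Hypothetical query method: at most MAX_CONNECTIONS threads run it at once
    static void doQuery(int id) throws InterruptedException {
        permits.acquire(); // blocks while all connections are in use
        try {
            System.out.println("Query " + id + " running");
            Thread.sleep(100); // stand-in for real database work
        } finally {
            permits.release(); // always return the connection
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[5];
        for (int i = 0; i < workers.length; i++) {
            final int id = i + 1;
            workers[i] = new Thread(() -> {
                try {
                    doQuery(id);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for every query to finish
        }
        System.out.println("Available permits after all queries: " + permits.availablePermits());
    }
}
```

The acquire-in-try, release-in-finally pattern guarantees that a permit is returned even if the guarded work throws.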
3.6. Performance
Since CountDownLatch primarily involves decrementing a counter, it incurs minimal overhead in terms of processing and resource utilization. Semaphore, in contrast, introduces overhead in managing permits, particularly when they are acquired and released frequently: each call to acquire() and release() involves additional processing to manage the permit count, which can impact performance in highly concurrent scenarios.
3.7. Summary
This table summarizes the key differences between CountDownLatch and Semaphore across various aspects:
| Feature | CountDownLatch | Semaphore |
|---|---|---|
| Purpose | Synchronizes threads until a set of tasks completes | Controls access to shared resources |
| Counting Mechanism | Decrements a counter | Manages permits (tokens) |
| Resettability | Not resettable (one-time synchronization) | Permits can be released and acquired repeatedly |
| Dynamic Permit Count | No (count is fixed at construction) | Yes (permits can be adjusted at runtime) |
| Fairness | No fairness concept | Optional (FIFO order when the fairness flag is set) |
| Performance | Low overhead (minimal processing) | Slightly higher overhead due to permit management |
4. Comparison in Implementation
In this section, we’ll highlight the differences between how CountDownLatch and Semaphore are implemented in syntax and functionality.
4.1. CountDownLatch Implementation
First, we create a CountDownLatch with an initial count equal to the number of tasks to be completed. Each worker thread simulates a task and decrements the latch count using the countDown() method upon task completion. The main thread waits for all tasks to be completed using the await() method:
int numberOfTasks = 3;
CountDownLatch latch = new CountDownLatch(numberOfTasks);

for (int i = 1; i <= numberOfTasks; i++) {
    new Thread(() -> {
        System.out.println("Task completed by Thread " + Thread.currentThread().getId());
        latch.countDown(); // signal that this task is done
    }).start();
}

latch.await(); // block until the count reaches zero
System.out.println("All tasks completed. Main thread proceeds.");
After all tasks are completed and the latch count reaches zero, attempting to call countDown() will have no effect. Additionally, since the latch count is already zero, any subsequent call to await() returns immediately without blocking the thread:
latch.countDown();
latch.await(); // This line won't block
System.out.println("Latch is already at zero and cannot be reset.");
Let’s now observe the program’s execution and examine the output:
Task completed by Thread 11
Task completed by Thread 12
Task completed by Thread 13
All tasks completed. Main thread proceeds.
Latch is already at zero and cannot be reset.
4.2. Semaphore Implementation
In this example, we create a Semaphore with a fixed number of permits NUM_PERMITS. Each worker thread simulates resource access by acquiring a permit with the acquire() method before using the resource. Note that a thread calling acquire() may be interrupted while waiting for a permit, so we must catch InterruptedException in a try-catch block to handle the interruption gracefully.
After completing resource access, the thread releases the permit using the release() method:
int NUM_PERMITS = 3;
Semaphore semaphore = new Semaphore(NUM_PERMITS);

for (int i = 1; i <= 5; i++) {
    new Thread(() -> {
        try {
            semaphore.acquire(); // block until a permit is available
            System.out.println("Thread " + Thread.currentThread().getId() + " accessing resource.");
            Thread.sleep(2000); // simulating resource usage
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            semaphore.release(); // return the permit
        }
    }).start();
}
Next, we demonstrate adjusting the permit count at runtime. After the worker threads have finished (and returned their permits), we call release(NUM_PERMITS) to add NUM_PERMITS more permits, raising the available count above its initial value. A Semaphore has no built-in reset operation; releasing extra permits is how the count is grown dynamically:

try {
    Thread.sleep(5000); // wait for the worker threads to finish
    semaphore.release(NUM_PERMITS); // add NUM_PERMITS more permits at runtime
    System.out.println("Semaphore permit count increased at runtime.");
} catch (InterruptedException e) {
    e.printStackTrace();
}

The following is the output after running the program:
Thread 11 accessing resource.
Thread 12 accessing resource.
Thread 13 accessing resource.
Thread 14 accessing resource.
Thread 15 accessing resource.
Semaphore permit count increased at runtime.
5. Conclusion
In this article, we’ve explored the key characteristics of both CountDownLatch and Semaphore. CountDownLatch is ideal for scenarios where a fixed set of tasks needs to be completed before allowing threads to proceed, making it suitable for one-time synchronization events. In contrast, Semaphore is used to control access to shared resources by limiting the number of threads that can access them concurrently, providing finer-grained control over concurrency management.
As always, the source code for the examples is available over on GitHub.