Introduction

This article uses the java.util.concurrent (JUC) package, contributed by Doug Lea in JDK 1.5, to explain how to use thread pools: setting the pool size parameters, creating worker threads, recycling idle threads, using blocking queues, task rejection strategies, thread pool hooks, and more. Along the way it covers the details of choosing between different parameters, different queues, and different rejection strategies, and the resulting impact on behavior, so that we can make better use of the thread pool.

An ExecutorService executes user-submitted tasks on pooled threads; ThreadPoolExecutor instances are usually created simply via the factory methods provided by Executors.

• Thread pooling improves system performance when executing a large number of asynchronous tasks, by reducing the per-task overhead that would otherwise be incurred each time a task runs.

• Thread pooling also provides a way to limit and manage the resources and threads consumed when batches of tasks execute. In addition, ThreadPoolExecutor keeps simple statistics, such as how many tasks have completed so far.

Quick Start

To make the thread pool suitable for a wide range of application contexts, ThreadPoolExecutor provides a number of configurable parameters and hooks that can be used to extend it. However, users can also quickly create ThreadPoolExecutor instances by using the factory methods provided by Executors. For example:

1. Executors#newCachedThreadPool quickly creates a thread pool that recycles idle threads automatically and places no upper bound on the number of threads.

If the instances created by these factory methods do not meet our needs, we can instantiate ThreadPoolExecutor ourselves with explicit parameters.
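As a sketch, a fully specified constructor call might look like the following (the sizing values and queue capacity here are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExplicitPool {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime for threads above core size
                new ArrayBlockingQueue<>(100),         // bounded work queue (illustrative capacity)
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy()); // rejection policy

        pool.execute(() -> System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Each of these parameters is discussed in the sections that follow.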

What you need to know about setting the thread count size parameter

When a task is submitted to the thread pool through the execute method, we need to know the following points.

• If the number of worker threads in the pool is currently less than corePoolSize, a new worker thread is created to execute the task, regardless of whether any threads in the worker set are idle.
• If the number of worker threads is at least corePoolSize but less than maximumPoolSize, the pool first tries to put the task into the queue, and two cases need to be distinguished here:
• If the task is put into the queue successfully, the pool checks whether a new thread needs to be started: a new thread is created only when the current thread count is 0, because the previous threads may have been removed for being idle or may have terminated.
• A new worker thread is created only if the task cannot be put into the queue.
• If corePoolSize and maximumPoolSize are the same, the thread pool has a fixed size.
• By setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE, we get a thread pool with no upper limit.
• In addition to setting these thread pool parameters through the constructor, we can also change them at runtime.
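For example, the size parameters can be adjusted after construction via the pool's setter methods; a minimal sketch:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RuntimeTuning {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Adjust the pool at runtime, e.g. in response to load metrics.
        pool.setCorePoolSize(4);
        pool.setMaximumPoolSize(8);
        pool.setKeepAliveTime(30L, TimeUnit.SECONDS);

        System.out.println("core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```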

By default, even core worker threads are created and started lazily, when new tasks arrive, but we can change this behavior by calling prestartCoreThread or prestartAllCoreThreads. A common scenario is to warm up the core threads when the application starts, so that tasks can be executed immediately upon arrival and the initial task-processing latency is reduced.
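The warm-up can be sketched as follows; prestartAllCoreThreads returns the number of threads it started:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WarmUp {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 3, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        System.out.println("before warm-up: " + pool.getPoolSize()); // threads are created lazily
        int started = pool.prestartAllCoreThreads();                 // start all core threads now
        System.out.println("started=" + started + " after warm-up: " + pool.getPoolSize());
        pool.shutdown();
    }
}
```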

If the number of worker threads in the pool is greater than corePoolSize, and the threads above that number have been idle for longer than keepAliveTime, those excess threads are terminated as a strategy to reduce unnecessary resource consumption. This parameter can be changed at runtime, and we can extend the same policy to core threads by calling allowCoreThreadTimeOut(true).
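A small sketch of core-thread timeout (the sleep gives the idle worker comfortably more than keepAliveTime to exit; the exact timing values are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeout {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 100L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // core threads may now time out as well

        pool.execute(() -> { });           // starts one core thread
        Thread.sleep(500);                 // well past keepAliveTime; the idle core thread should exit
        System.out.println("idle pool size: " + pool.getPoolSize());
        pool.shutdown();
    }
}
```

Note that allowCoreThreadTimeOut(true) requires a positive keepAliveTime.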

Choosing the right blocking queue

Any blocking queue can be used to hold tasks, but the choice of queue interacts with corePoolSize to produce different behavior.

When the number of worker threads in the pool is less than corePoolSize, a new worker thread will be created each time a task comes in.

When the number of worker threads in the pool is greater than or equal to corePoolSize, each incoming task is first offered to the queue rather than causing a new thread to be created directly.

If the queue refuses the task (it is full) and the number of threads in the pool is less than maximumPoolSize, then a new worker thread is created.
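These three rules can be observed directly. In the sketch below (sizes are illustrative), the first task creates the core thread, the second is queued, and the third finds the queue full and forces a second worker:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueThenGrow {
    public static void main(String[] args) throws InterruptedException {
        // core=1, max=2, queue capacity=1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        pool.execute(blocker); // 1st task: pool < core, so a core thread is created
        pool.execute(blocker); // 2nd task: pool >= core, so it is queued
        pool.execute(blocker); // 3rd task: queue is full, so a second worker is created
        System.out.println("pool size = " + pool.getPoolSize());

        hold.countDown();      // release the blocked tasks
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```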

The main queueing strategies behave as follows:

Unbounded Queues

Using an unbounded queue, such as a LinkedBlockingQueue with no maximum capacity specified, causes new tasks to wait in the queue whenever all core threads are busy. No more than corePoolSize threads will ever be created, so the maximumPoolSize parameter has no effect. This strategy suits workloads where tasks are independent of one another, for example a web server in which each thread processes a request independently. But when tasks arrive faster than they can be processed, the queue grows without bound.
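The "maximumPoolSize has no effect" point can be demonstrated with a sketch: ten blocked tasks, yet the pool never grows past the core size because the unbounded queue never reports itself full:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // maximumPoolSize=8 is irrelevant here: every offer to the queue succeeds
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        CountDownLatch hold = new CountDownLatch(1);
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try { hold.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        System.out.println("pool size = " + pool.getPoolSize()
                + ", queued = " + pool.getQueue().size());
        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```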

Bounded Queues

Bounded queues such as ArrayBlockingQueue help to limit resource consumption, but they are harder to tune: the queue capacity and maximumPoolSize trade off against each other. A large queue with a small maximumPoolSize reduces CPU usage, OS resources, and context-switching overhead, but can also reduce throughput; if tasks block frequently, for example on IO, the system could actually schedule more threads. A small queue usually calls for a larger maximumPoolSize, which keeps the CPU busier but adds thread-scheduling overhead that also cuts into throughput. In short: IO-intensive workloads can use more threads to keep the CPU utilized, while CPU-intensive workloads should use fewer threads to reduce scheduling overhead.
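A common sizing heuristic for this trade-off (not from this article; it is often quoted in the Java concurrency literature) estimates the thread count as cores × (1 + wait time / compute time). A minimal sketch with hypothetical timing numbers:

```java
public class PoolSizing {
    // Heuristic: threads = cores * (1 + waitTime / computeTime)
    static int suggestedThreads(int cores, double waitMs, double computeMs) {
        return (int) (cores * (1 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 4 cores; IO-heavy tasks wait 50 ms per 10 ms of CPU work
        System.out.println("cpu-bound: " + suggestedThreads(4, 0, 10));
        System.out.println("io-heavy: " + suggestedThreads(4, 50, 10));
    }
}
```

The point matches the text: IO-heavy workloads justify far more threads than cores, CPU-bound workloads roughly one thread per core.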

Choose a suitable rejection strategy

When new tasks arrive after the pool has been shut down, or when both the thread count and the queue have reached their limits, we need to decide how to reject these tasks. The common built-in strategies are described below.

ThreadPoolExecutor#AbortPolicy: This policy directly throws a RejectedExecutionException exception.

ThreadPoolExecutor#CallerRunsPolicy: This policy will use the Caller thread to execute the task, which is a feedback policy that slows down the task submission.

ThreadPoolExecutor#DiscardPolicy: This policy will simply discard the task.

ThreadPoolExecutor#DiscardOldestPolicy: This policy discards the task at the head of the queue and then retries the submission; if it is rejected again, the policy is applied again.

In addition to the above policies, we can also implement our own policies by implementing RejectedExecutionHandler.
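A hypothetical custom handler that logs and drops rejected tasks might look like this (RejectedExecutionHandler has a single abstract method, so a lambda works):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingRejection {
    public static void main(String[] args) throws InterruptedException {
        // Custom policy: log the rejection and silently drop the task
        RejectedExecutionHandler logAndDrop = (task, executor) ->
                System.out.println("rejected, pool size = " + executor.getPoolSize());

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1), logAndDrop);
        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        pool.execute(blocker); // occupies the single worker
        pool.execute(blocker); // fills the queue
        pool.execute(blocker); // rejected -> handler runs
        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```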

ThreadPoolExecutor provides protected hook methods that can be overridden, letting the user run logic before and after each task executes. We can use them to implement operations such as initializing ThreadLocals, collecting statistics, or logging. There is also a terminated hook that lets the user insert logic once the pool has fully shut down.

If a hook method throws an exception, the internal worker thread executing it may fail and terminate.
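Using the hooks means subclassing ThreadPoolExecutor; a minimal sketch overriding beforeExecute, afterExecute, and terminated:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HookedPool extends ThreadPoolExecutor {
    HookedPool() {
        super(1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        System.out.println("before task"); // e.g. set up ThreadLocals here
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        System.out.println("after task, error = " + t); // t is null on success
    }

    @Override
    protected void terminated() {
        super.terminated();
        System.out.println("pool terminated");
    }

    public static void main(String[] args) throws InterruptedException {
        HookedPool pool = new HookedPool();
        pool.execute(() -> System.out.println("running task"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```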

Accessible Queues

The getQueue method gives access to the work queue for statistics or debugging, but it is not recommended for any other purpose. The remove and purge methods can also be used to take tasks out of the queue.
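A sketch of inspecting the queue and purging a cancelled task (purge removes queued Future tasks that have been cancelled):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.FutureTask;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueInspection {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        CountDownLatch hold = new CountDownLatch(1);
        pool.execute(() -> {            // occupy the single worker
            try { hold.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        FutureTask<String> waiting = new FutureTask<>(() -> "never runs");
        pool.execute(waiting);          // queued behind the blocked task
        System.out.println("queued before: " + pool.getQueue().size());

        waiting.cancel(false);          // mark the queued task cancelled...
        pool.purge();                   // ...then remove cancelled Futures from the queue
        System.out.println("queued after: " + pool.getQueue().size());

        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```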