Fixes are available
8.5.5.9: WebSphere Application Server V8.5.5 Fix Pack 9
8.5.5.10: WebSphere Application Server V8.5.5 Fix Pack 10
8.5.5.11: WebSphere Application Server V8.5.5 Fix Pack 11
8.0.0.13: WebSphere Application Server V8.0 Fix Pack 13
8.5.5.12: WebSphere Application Server V8.5.5 Fix Pack 12
8.0.0.14: WebSphere Application Server V8.0 Fix Pack 14
8.5.5.13: WebSphere Application Server V8.5.5 Fix Pack 13
8.0.0.15: WebSphere Application Server V8.0 Fix Pack 15
8.5.5.14: WebSphere Application Server V8.5.5 Fix Pack 14
8.5.5.15: WebSphere Application Server V8.5.5 Fix Pack 15
8.5.5.17: WebSphere Application Server V8.5.5 Fix Pack 17
8.5.5.20: WebSphere Application Server V8.5.5 Fix Pack 20
8.5.5.18: WebSphere Application Server V8.5.5 Fix Pack 18
8.5.5.19: WebSphere Application Server V8.5.5 Fix Pack 19
8.5.5.16: WebSphere Application Server V8.5.5 Fix Pack 16
8.5.5.21: WebSphere Application Server V8.5.5 Fix Pack 21
APAR status
Closed as program error.
Error description
The property control_region_thread_pool_maximum_size, added in 8.5.5.2, introduced a dynamic way to create WebSphere worker threads in a controller region based on the workload demand for the controller. If this variable is set to 0 (the default), WebSphere calculates its own maximum value based on the number of servants and the number of worker threads per servant. If the configuration has a large number of servants and threads per servant region (SR), the maximum value can become very high, causing additional overhead in Java garbage collection, which in turn causes processing delays in the controller region (CR) and high CPU usage.

If the maximum number of CR threads has been created, the following message is issued with the maximum thread count:

BBOO0412I THE MAXIMUM NUMBER OF WORKER THREADS HAVE BEEN CREATED MAXIMUM=???

Symptoms may also include EC3/0413000x timeouts in the servants if the requests being processed are waiting for controller processing.

Examples of a thread in an SR waiting for work to be done in the CR:

com/ibm/ws390/xmem/proxy/XMemProxySRCppUtilities.flushHttpResponseFragmentBuffers
com/ibm/ws390/xmem/proxy/channel/XMemProxySRInboundHttpServiceContextImpl.sendResponseBodyCommon
...

Example of high-CPU threads executing GC in a CR:

pthread_cond_wait
monitor_wait_original
j9thread_monitor_wait
MM_ParallelTask::synchronizeGCThreadsAndReleaseMaster(MM_Env
MM_ParallelScavenger::completeBackOut(MM_EnvironmentStandard
MM_ParallelScavenger::workThreadGarbageCollect(MM_Environmen
MM_ParallelScavengeTask::run(MM_EnvironmentModron*)
MM_ParallelDispatcher::slaveEntryPoint(MM_EnvironmentModron*
...
MM_::fixupSubArea(J9Object*,J9Object*,bool,unsi
MM_CompactScheme::fixupObjects(MM_EnvironmentStandard*,unsig
MM_CompactScheme::compact(MM_EnvironmentStandard*,bool,bool)
MM_ParallelCompactTask::run(MM_EnvironmentModron*)
MM_ParallelDispatcher::slaveEntryPoint(MM_EnvironmentModron*
...

This APAR provides a cap on the maximum number of threads WAS will create if the calculated value described above is too high.

Additional documentation will also be provided explaining that the JVM heap size needs to be considered when setting control_region_thread_pool_maximum_size.
Local fix
Configure the server property control_region_thread_pool_maximum_size and set it to a value lower than the maximum reported in the BBOO0412I message. Keep in mind that the server starts with a default of 25 threads.
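As an illustration only (the exact mechanism depends on how the server is administered; on z/OS this property is typically defined as a server custom property or environment setting), the workaround amounts to pinning the pool size explicitly, for example:

```
# Hypothetical example: cap the controller worker thread pool at 50.
# The property name is from this APAR; the value 50 is an illustrative
# choice between the 25-thread startup default and the maximum
# reported in the BBOO0412I message for your configuration.
control_region_thread_pool_maximum_size=50
```

Any non-zero value disables the dynamic calculation described above, so pick a value sized to the controller's JVM heap rather than simply copying this example.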
Problem summary
****************************************************************
* USERS AFFECTED:  All users of IBM WebSphere Application      *
*                  Server for z/OS V8.0 and V8.5               *
****************************************************************
* PROBLEM DESCRIPTION: WebSphere Application Server for z/OS   *
*                      controller encountered delays when a    *
*                      high number of worker threads were      *
*                      created.                                *
****************************************************************
* RECOMMENDATION:                                              *
****************************************************************
When control_region_thread_pool_maximum_size is set to 0, WebSphere Application Server for z/OS calculates the maximum number of controller worker threads based on the number of servants and the number of worker threads per servant. If a configuration has a large number of servants and each servant has a large number of threads, the maximum number of threads can become very high, causing additional overhead in Java garbage collection, which in turn causes processing delays in the controller region.
Problem conclusion
When control_region_thread_pool_maximum_size is set to 0 and the calculated maximum number of threads is greater than 100, the code has been changed to limit the maximum number of threads to 100.

APAR PI50098 is currently targeted for inclusion in Fix Packs 8.0.0.13 and 8.5.5.9 of WebSphere Application Server.

Please refer to the Recommended Updates page for delivery information:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980

In addition, please refer to the following URL for Fix Pack PTF information:
http://www.ibm.com/support/docview.wss?rs=404&uid=swg27006970
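The APAR names the inputs (servant count, worker threads per servant) and the new 100-thread ceiling, but not IBM's exact formula. A minimal sketch of the capping behavior, assuming for illustration that the calculated value is simply servants multiplied by threads per servant, could look like:

```java
public class CrThreadPoolCap {
    // Hard ceiling introduced by APAR PI50098 when the property is 0.
    static final int CAP = 100;

    // Hypothetical helper: the APAR says the maximum is "calculated
    // based on the number of servants and the number of worker threads
    // per servant"; the simple product used here is an assumption for
    // illustration, not IBM's actual internal formula.
    static int effectiveMax(int configuredMax, int servants, int threadsPerServant) {
        if (configuredMax > 0) {
            return configuredMax; // an explicit property setting wins
        }
        int calculated = servants * threadsPerServant;
        return Math.min(calculated, CAP); // PI50098: cap runaway values
    }

    public static void main(String[] args) {
        // 8 servants x 40 threads would have yielded 320 CR threads
        // before the fix; with the cap it is limited to 100.
        System.out.println(effectiveMax(0, 8, 40));
        // A small configuration stays below the cap.
        System.out.println(effectiveMax(0, 2, 10));
        // An explicit property value is honored as-is.
        System.out.println(effectiveMax(50, 8, 40));
    }
}
```

The point of the fix is visible in the first case: a large servant topology no longer translates directly into an equally large controller thread pool and the GC pressure that comes with it.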
Temporary fix
Comments
APAR Information
APAR number
PI50098
Reported component name
WEBSPHERE FOR Z
Reported component ID
5655I3500
Reported release
850
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt
Submitted date
2015-10-07
Closed date
2015-11-13
Last modified date
2015-11-13
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
WEBSPHERE FOR Z
Fixed component ID
5655I3500
Applicable component levels
R850 PSY
UP
Document Information
Modified date:
28 April 2022