Fixes are available
8.5.0.2: WebSphere Application Server V8.5 Fix Pack 2
8.0.0.6: WebSphere Application Server V8.0 Fix Pack 6
7.0.0.29: WebSphere Application Server V7.0 Fix Pack 29
8.0.0.7: WebSphere Application Server V8.0 Fix Pack 7
8.0.0.8: WebSphere Application Server V8.0 Fix Pack 8
7.0.0.31: WebSphere Application Server V7.0 Fix Pack 31
7.0.0.33: WebSphere Application Server V7.0 Fix Pack 33
8.0.0.9: WebSphere Application Server V8.0 Fix Pack 9
7.0.0.35: WebSphere Application Server V7.0 Fix Pack 35
8.0.0.10: WebSphere Application Server V8.0 Fix Pack 10
7.0.0.37: WebSphere Application Server V7.0 Fix Pack 37
8.0.0.11: WebSphere Application Server V8.0 Fix Pack 11
7.0.0.39: WebSphere Application Server V7.0 Fix Pack 39
8.0.0.12: WebSphere Application Server V8.0 Fix Pack 12
7.0.0.41: WebSphere Application Server V7.0 Fix Pack 41
8.0.0.13: WebSphere Application Server V8.0 Fix Pack 13
7.0.0.43: WebSphere Application Server V7.0 Fix Pack 43
8.0.0.14: WebSphere Application Server V8.0 Fix Pack 14
7.0.0.45: WebSphere Application Server V7.0 Fix Pack 45
8.0.0.15: WebSphere Application Server V8.0 Fix Pack 15
7.0.0.29: Java SDK 1.6 SR13 FP2 Cumulative Fix for WebSphere Application Server
7.0.0.45: Java SDK 1.6 SR16 FP60 Cumulative Fix for WebSphere Application Server
7.0.0.31: Java SDK 1.6 SR15 Cumulative Fix for WebSphere Application Server
7.0.0.35: Java SDK 1.6 SR16 FP1 Cumulative Fix for WebSphere Application Server
7.0.0.37: Java SDK 1.6 SR16 FP3 Cumulative Fix for WebSphere Application Server
7.0.0.39: Java SDK 1.6 SR16 FP7 Cumulative Fix for WebSphere Application Server
7.0.0.41: Java SDK 1.6 SR16 FP20 Cumulative Fix for WebSphere Application Server
7.0.0.43: Java SDK 1.6 SR16 FP41 Cumulative Fix for WebSphere Application Server
APAR status
Closed as program error.
Error description
In WebSphere Application Server, consider a scenario where an MDB runs on a different server from the one hosting the messaging engine. The MDB polls at a regular interval to establish a connection to the messaging engine. If an attempt to establish a connection fails during this poll, it can result in a leak of Channel Framework chains.

During the attempt to register with the messaging engine, the resource adapter checks whether a successful connection to the messaging engine already exists. If not, it attempts to create one: it creates a new endpoint, clones it, and uses the clone to prepare an outbound connection to the messaging engine. During that connection attempt, the Channel Framework starts a chain.

The following stack illustrates the path described above:
--------------------------------------------------
com.ibm.ws.channel.framework.impl.WSChannelFrameworkImpl.startChainInternal(WSChannelFrameworkImpl.java:996)
com.ibm.ws.channel.framework.impl.ChannelFrameworkImpl.startChainInternal(ChannelFrameworkImpl.java:2794)
com.ibm.ws.channel.framework.impl.OutboundVirtualConnectionFactoryImpl.createConnection(OutboundVirtualConnectionFactoryImpl.java:114)
com.ibm.ws.channel.framework.impl.WSVirtualConnectionFactoryImpl.createConnection(WSVirtualConnectionFactoryImpl.java:36)
com.ibm.ws.sib.jfapchannel.framework.impl.CFWNetworkConnectionFactory.createConnection(CFWNetworkConnectionFactory.java:91)
com.ibm.ws.sib.jfapchannel.impl.octracker.ConnectionDataGroup.connect(ConnectionDataGroup.java:377)
com.ibm.ws.sib.jfapchannel.impl.octracker.OutboundConnectionTracker.connect(OutboundConnectionTracker.java:496)
com.ibm.ws.sib.jfapchannel.impl.ClientConnectionManagerImpl.connect(ClientConnectionManagerImpl.java:159)
com.ibm.ws.sib.comms.client.ClientSideConnection.connect(ClientSideConnection.java:243)
com.ibm.ws.sib.trm.client.TrmSICoreConnectionFactoryImpl.remoteAttach(TrmSICoreConnectionFactoryImpl.java:525)
com.ibm.ws.sib.trm.client.TrmSICoreConnectionFactoryImpl.connectFromInsideServer(TrmSICoreConnectionFactoryImpl.java:408)
com.ibm.ws.sib.trm.client.TrmSICoreConnectionFactoryImpl.localBootstrap(TrmSICoreConnectionFactoryImpl.java:323)
com.ibm.ws.sib.trm.client.TrmSICoreConnectionFactoryImpl.createConnection(TrmSICoreConnectionFactoryImpl.java:304)
com.ibm.ws.sib.trm.client.TrmSICoreConnectionFactoryImpl.createConnection(TrmSICoreConnectionFactoryImpl.java:222)
com.ibm.ws.sib.ra.inbound.impl.SibRaMessagingEngineConnection.createConnection(SibRaMessagingEngineConnection.java:1187)
com.ibm.ws.sib.ra.inbound.impl.SibRaMessagingEngineConnection.<init>(SibRaMessagingEngineConnection.java:661)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation$DestinationStrategy.connectUsingTrmNoTargetData(SibRaCommonEndpointActivation.java:1825)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation.connectUsingTrmNoTargetData(SibRaCommonEndpointActivation.java:612)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation.connectUsingTrm(SibRaCommonEndpointActivation.java:577)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation.connect(SibRaCommonEndpointActivation.java:486)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation.checkMEs(SibRaCommonEndpointActivation.java:366)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation.timerLoop(SibRaCommonEndpointActivation.java:325)
com.ibm.ws.sib.ra.inbound.impl.SibRaCommonEndpointActivation$1.run(SibRaCommonEndpointActivation.java:413)
-----------------------------------------
In the heap dump, the heap analyzer tool reports the leak suspect to be "com.ibm.ws.tcp.channel.impl.NioTCPChannel".
Example:
=======
Leak suspect report:
45,278 instances of "com.ibm.ws.tcp.channel.impl.NioTCPChannel", loaded by "com.ibm.oti.vm.BootstrapClassLoader @ 0x481c131a30" occupy 591,149,944 (56.58%) bytes.
...
Additional possible leak suspect: com.ibm.ws.tcp.channel.impl.AioTCPChannel
---------------------------------------------------
And here is an expanded view of the leak suspect:
2,483,456,608 (79.84%) [224] 15 com/ibm/ws/tcp/channel/impl/AioTCPChannel 0xe7dc4f0
|- 2,483,367,736 (79.83%) [40] 5 com/ibm/ws/tcp/channel/impl/WSTCPChannelFactory 0xc59f3e0
|- 2,483,353,256 (79.83%) [56] 1 java/util/HashMap 0xc5b8300
|- 2,483,353,200 (79.83%) [144] 9 array of java/util/HashMap$Entry 0x385b3a90
|- 2,483,063,336 (79.82%) [32] 3 java/util/HashMap$Entry 0xeaa6ad8
|- 2,483,063,304 (79.82%) [32] 3 java/util/HashMap$Entry 0x1010f6e0
|- 2,471,355,200 (79.45%) [32] 2 java/util/HashMap$Entry 0x302293f0
|- 2,471,354,976 (79.45%) [224] 15 com/ibm/ws/tcp/channel/impl/AioTCPChannel 0x30229310
|- 2,471,353,976 (79.45%) [528] 128 array of com/ibm/ws/tcp/channel/impl/TCPChannelLinkedList 0x2eee2158
|- 2,471,347,352 (79.45%) [24] 1 com/ibm/ws/tcp/channel/impl/TCPChannelLinkedList 0x2eee2f50
|- 2,471,347,328 (79.45%) [24] 2 java/util/LinkedList$Link 0x2eee3b50
|- 2,471,287,440 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3b68
|- 2,471,227,760 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3b80
|- 2,471,168,632 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3b98
|- 2,471,109,504 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3bb0
|- 2,471,050,376 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3bc8
|  |- 2,470,990,528 (79.43%) [24] 3 java/util/LinkedList$Link 0x2eee3be0
|  |  |- 2,470,930,552 (79.43%) [24] 3 java/util/LinkedList$Link 0x2eee3bf8
|  |  |- 59,952 (0%) [88] 9 com/ibm/ws/tcp/channel/impl/TCPConnLink 0x313fe1d8
|  |  |- 2,471,050,376 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3bc8
|  |- 59,824 (0%) [88] 9 com/ibm/ws/tcp/channel/impl/TCPConnLink 0x3140cc58
|  |- 2,471,109,504 (79.44%) [24] 3 java/util/LinkedList$Link 0x2eee3bb0
|- 59,104 (0%) [88] 9 com/ibm/ws/tcp/channel/impl/TCPConnLink 0x31412988
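The leak mechanism above can be reduced to a toy model: every failed connection attempt starts a freshly numbered chain that gets registered but is never released, so the registry grows without bound. This is a minimal, hypothetical sketch; the class and method names below are illustrative placeholders, not actual WebSphere internals.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the channel leak: each failed connection attempt
// starts a new, incrementally numbered chain (compare CHFW0019I:
// chain_0, chain_1, ...) that is registered but never removed.
class ChainRegistry {
    private final List<String> chains = new ArrayList<>();
    private int counter = 0;

    // Every attempt creates and registers a brand-new chain.
    String startNewChain() {
        String name = "chain_" + counter++;
        chains.add(name); // nothing ever removes this entry on failure
        return name;
    }

    int registeredChains() {
        return chains.size();
    }
}

public class LeakDemo {
    public static void main(String[] args) {
        ChainRegistry registry = new ChainRegistry();
        // The MDB polls on a timer; each failed attempt leaks one chain.
        for (int attempt = 0; attempt < 5; attempt++) {
            registry.startNewChain(); // connection fails, chain remains
        }
        System.out.println(registry.registeredChains()); // 5, and growing
    }
}
```

In a long-running server polling every few seconds, this unbounded growth is what eventually shows up in the heap dump as tens of thousands of channel instances.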
Local fix
Problem summary
****************************************************************
* USERS AFFECTED: Users of the default messaging provider for  *
* IBM WebSphere Application Server 7.0, 8.0, and 8.5           *
****************************************************************
* PROBLEM DESCRIPTION: In an environment where a Message       *
* Driven Bean (MDB) uses an activation specification to        *
* connect to a Service Integration Bus messaging engine, a     *
* failed connection to the messaging engine during MDB         *
* start-up leads to a leak of channels. Many CHFW0019I         *
* messages are observed in the logs.                           *
****************************************************************
* RECOMMENDATION:                                              *
****************************************************************
In an environment where a Message Driven Bean uses an activation specification to connect to a Service Integration Bus messaging engine, a failed connection to the messaging engine during MDB start-up leads to a leak of channels. When the creation of a connection to the messaging engine is unsuccessful, the Service Integration Bus closes the earlier connection and later creates a new one. However, when a connection is created for the first time it is not yet attributed to a channel, so the Channel Framework creates a new channel; the Channel Framework then has no way of knowing when to close that channel. This results in channels piling up. Many CHFW0019I messages are observed in the logs, for example:

CHFW0019I: The Transport Channel Service has started chain chain_0.
...
CHFW0019I: The Transport Channel Service has started chain chain_48152.

Each time a new chain is created, the chain number increases by 1. This leak is observed only on the client side.
Problem conclusion
To resolve the problem, the code has been modified so that when a connection is created without a designated channel, the channel created for it is cached and reused later, avoiding duplicate channel creation. The fix for this APAR is currently targeted for inclusion in fix packs 7.0.0.29, 8.0.0.6, and 8.5.0.2. Please refer to the Recommended Updates page for delivery information: http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980
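The fix described above amounts to a cache-and-reuse pattern: look up the channel by its configuration key, and only create one on a cache miss. The sketch below is a hypothetical illustration of that pattern under assumed names, not the actual WebSphere code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix: cache the channel created for a
// connection that has no designated channel, keyed by its configuration,
// and reuse it on later attempts instead of creating a duplicate.
class ChannelCache {
    private final Map<String, String> channelsByConfig = new HashMap<>();
    private int channelsCreated = 0;

    // Returns the cached channel for this configuration, creating it
    // only on the first request (a cache miss).
    String getOrCreate(String configKey) {
        return channelsByConfig.computeIfAbsent(configKey, key -> {
            channelsCreated++;
            return "chain_for_" + key;
        });
    }

    int channelsCreated() {
        return channelsCreated;
    }
}

public class FixDemo {
    public static void main(String[] args) {
        ChannelCache cache = new ChannelCache();
        // Repeated retries against the same endpoint now reuse one channel
        // instead of leaking a new one per failed attempt.
        for (int attempt = 0; attempt < 5; attempt++) {
            cache.getOrCreate("bootstrapEndpoint");
        }
        System.out.println(cache.channelsCreated()); // 1
    }
}
```

With this pattern, repeated failed connection attempts no longer accumulate channels, so the CHFW0019I chain numbers stop climbing on retry.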
Temporary fix
Comments
APAR Information
APAR number
PM72835
Reported component name
WAS SIB & SIBWS
Reported component ID
620800101
Reported release
300
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt
Submitted date
2012-09-13
Closed date
2012-12-31
Last modified date
2015-09-07
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
PM88714
Fix information
Fixed component name
WAS SIB & SIBWS
Fixed component ID
620800101
Applicable component levels
R300 PSY
UP
R800 PSY
UP
Document Information
Modified date:
29 October 2021