Fixes are available
7.0.0.27: WebSphere Application Server V7.0 Fix Pack 27
7.0.0.29: WebSphere Application Server V7.0 Fix Pack 29
7.0.0.31: WebSphere Application Server V7.0 Fix Pack 31
7.0.0.33: WebSphere Application Server V7.0 Fix Pack 33
7.0.0.35: WebSphere Application Server V7.0 Fix Pack 35
7.0.0.37: WebSphere Application Server V7.0 Fix Pack 37
7.0.0.39: WebSphere Application Server V7.0 Fix Pack 39
7.0.0.41: WebSphere Application Server V7.0 Fix Pack 41
7.0.0.43: WebSphere Application Server V7.0 Fix Pack 43
7.0.0.45: WebSphere Application Server V7.0 Fix Pack 45
8.0.0.6: WebSphere Application Server V8.0 Fix Pack 6
8.0.0.7: WebSphere Application Server V8.0 Fix Pack 7
8.0.0.8: WebSphere Application Server V8.0 Fix Pack 8
8.0.0.9: WebSphere Application Server V8.0 Fix Pack 9
8.0.0.10: WebSphere Application Server V8.0 Fix Pack 10
8.0.0.11: WebSphere Application Server V8.0 Fix Pack 11
8.0.0.12: WebSphere Application Server V8.0 Fix Pack 12
8.0.0.13: WebSphere Application Server V8.0 Fix Pack 13
8.0.0.14: WebSphere Application Server V8.0 Fix Pack 14
8.0.0.15: WebSphere Application Server V8.0 Fix Pack 15
8.5.0.2: WebSphere Application Server V8.5 Fix Pack 2
7.0.0.27: Java SDK 1.6 SR12 Cumulative Fix for WebSphere Application Server
7.0.0.27: Java SDK 1.6 SR13 FP2 Cumulative Fix for WebSphere Application Server
7.0.0.29: Java SDK 1.6 SR13 FP2 Cumulative Fix for WebSphere Application Server
7.0.0.31: Java SDK 1.6 SR15 Cumulative Fix for WebSphere Application Server
7.0.0.35: Java SDK 1.6 SR16 FP1 Cumulative Fix for WebSphere Application Server
7.0.0.37: Java SDK 1.6 SR16 FP3 Cumulative Fix for WebSphere Application Server
7.0.0.39: Java SDK 1.6 SR16 FP7 Cumulative Fix for WebSphere Application Server
7.0.0.41: Java SDK 1.6 SR16 FP20 Cumulative Fix for WebSphere Application Server
7.0.0.43: Java SDK 1.6 SR16 FP41 Cumulative Fix for WebSphere Application Server
7.0.0.45: Java SDK 1.6 SR16 FP60 Cumulative Fix for WebSphere Application Server
APAR status
Closed as program error.
Error description
In a high availability and high scalability messaging engine topology, message distribution to neighboring messaging engines (MEs) fails because neighborhood ME connection information is either stale or missing. The presence of either of the following messages in the ME trace is the match for this problem.

Due to stale neighborhood ME connection information, the message was sent to the wrong ME:

[11/3/11 14:50:36:773 IST] 0000002c MPIO 3 (com.ibm.ws.sib.processor.io.MPIO) [:] Ignoring message as in stopped state

or, due to missing neighborhood ME connection information, there is no information about the target ME, so the source ME cannot send the message to the neighboring (target) ME:

[1/23/12 15:07:34:713 IST] 00000036 MPIO < findMPConnection (com.ibm.ws.sib.processor.io.MPIO) [bus1:cluster1.000-bus1] Exit <null>

For example, consider a messaging setup with a cluster of four application servers as a bus member, with messaging queues Q1, Q2, Q3, and Q4. When an application sends 4000 messages with load balancing, in a failing scenario the final message counts per messaging engine (queue) are Q1=1000, Q2=1000, Q3=1000, and Q4=0. The problem cannot be recreated every time. Restarting the ME for Q4, or the entire cluster, makes the problem go away. The problem is specific to high availability, or combined high availability and high scalability, configurations.
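The trace signatures above can be checked for with a simple search. This is an illustrative sketch only; the trace file location is an example, not part of this APAR:

```shell
# Illustrative sketch: search an ME trace for the two diagnostic
# signatures of this problem. TRACE_FILE is an example path; point it
# at the trace.log of the server hosting the messaging engine.
TRACE_FILE=${TRACE_FILE:-trace.log}
if [ -f "$TRACE_FILE" ]; then
    # Stale neighborhood entry: message delivered to a stopped ME.
    # Missing neighborhood entry: findMPConnection exits with <null>.
    grep -E 'Ignoring message as in stopped state|findMPConnection' "$TRACE_FILE"
fi
```

A hit on either pattern suggests the stale-information or missing-information variant of the problem, respectively.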
Local fix
Problem summary
****************************************************************
* USERS AFFECTED: Users of the default messaging provider      *
*                 for IBM WebSphere Application Server         *
*                 Versions 7.0, 8.0 and 8.5                    *
****************************************************************
* PROBLEM DESCRIPTION: Messages are not distributed properly   *
*                      across load-balanced Service            *
*                      Integration Bus (SIB) messaging         *
*                      engines (MEs).                          *
****************************************************************
* RECOMMENDATION:                                              *
****************************************************************
In a high scalability setup, an ME can transition from one server to another during cluster startup: the ME is stopped on one server and the same ME is started on another to balance the number of MEs running on each application server. Such transitions occur during cluster startup because one application server might be fully started while another is still starting. This problem is specific to high availability, or combined high availability and high scalability, configurations, and cannot be recreated every time. The problem was caused by neighborhood ME information not being cleaned up properly when an ME stops (during an ME transition).
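As an illustrative sketch only (not WebSphere code), the effect of a stale neighborhood table on the 4000-message example from the error description can be modeled as follows; all class and function names here are hypothetical:

```python
# Illustrative sketch: how a stale neighborhood entry pointing at a
# stopped ME skews a round-robin distribution of 4000 messages across
# four messaging engines, leaving one queue empty (Q1-Q3 = 1000, Q4 = 0).

class MessagingEngine:
    def __init__(self, name):
        self.name = name
        self.started = True
        self.queue = []

    def deliver(self, msg):
        # Mirrors the trace line "Ignoring message as in stopped state":
        # a stopped ME silently discards what it receives.
        if self.started:
            self.queue.append(msg)

def distribute(messages, neighborhood):
    # Round-robin over the (possibly stale) neighborhood table.
    for i, msg in enumerate(messages):
        neighborhood[i % len(neighborhood)].deliver(msg)

engines = [MessagingEngine(f"ME{i}") for i in range(1, 5)]
# Simulate the defect: ME4 transitioned to another server during cluster
# startup, but the source ME's neighborhood table still references the
# stopped instance.
engines[3].started = False

distribute(range(4000), engines)
print({e.name: len(e.queue) for e in engines})
# ME1-ME3 each hold 1000 messages; ME4 holds 0 until it is restarted.
```

Restarting ME4 (setting it back to a started state and refreshing the neighborhood information, as the fix does on ME start) restores even distribution.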
Problem conclusion
Modified the source code to add cleanup of neighborhood ME information when an ME stops and a fresh initialization of that information when an ME starts. The fix for this APAR is currently targeted for inclusion in fix packs 7.0.0.27, 8.0.0.7, and 8.5.0.2. Please refer to the Recommended Updates page for delivery information: http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980
Temporary fix
Restart the messaging engine on which messages are not being distributed, or restart the application server JVM, or restart the entire cluster.
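The JVM restart above can be scripted with the standard WebSphere server commands. This is a sketch only; the profile path and server name are example values, not from this APAR:

```shell
# Sketch of the workaround: restart the application server JVM that
# hosts the affected messaging engine. PROFILE_HOME and SERVER below
# are example values; substitute your own cell's paths and names.
PROFILE_HOME=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
SERVER=server1
"$PROFILE_HOME"/bin/stopServer.sh "$SERVER"
"$PROFILE_HOME"/bin/startServer.sh "$SERVER"
```

In a high availability setup, restarting the JVM causes the ME to fail over or restart, which re-initializes its neighborhood connection information.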
Comments
APAR Information
APAR number
PM60436
Reported component name
WAS SIB & SIBWS
Reported component ID
620800101
Reported release
300
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt
Submitted date
2012-03-14
Closed date
2012-10-17
Last modified date
2012-10-17
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
WAS SIB & SIBWS
Fixed component ID
620800101
Applicable component levels
R800 PSY
UP
Document Information
Modified date:
28 October 2021