PM42959: WLM EXTERNAL CLIENT DATA PROPAGATION INCOMPLETE

Fixes are available

8.0.0.3: WebSphere Application Server V8.0 Fix Pack 3
7.0.0.23: WebSphere Application Server V7.0 Fix Pack 23
8.0.0.4: WebSphere Application Server V8.0 Fix Pack 4
7.0.0.25: WebSphere Application Server V7.0 Fix Pack 25
8.0.0.5: WebSphere Application Server V8.0 Fix Pack 5
7.0.0.27: WebSphere Application Server V7.0 Fix Pack 27
8.0.0.6: WebSphere Application Server V8.0 Fix Pack 6
7.0.0.29: WebSphere Application Server V7.0 Fix Pack 29
8.0.0.7: WebSphere Application Server V8.0 Fix Pack 7
6.1.0.47: WebSphere Application Server V6.1 Fix Pack 47
8.0.0.8: WebSphere Application Server V8.0 Fix Pack 8
7.0.0.31: WebSphere Application Server V7.0 Fix Pack 31
7.0.0.27: Java SDK 1.6 SR13 FP2 Cumulative Fix for WebSphere Application Server
7.0.0.33: WebSphere Application Server V7.0 Fix Pack 33
8.0.0.9: WebSphere Application Server V8.0 Fix Pack 9
7.0.0.35: WebSphere Application Server V7.0 Fix Pack 35
8.0.0.10: WebSphere Application Server V8.0 Fix Pack 10
7.0.0.37: WebSphere Application Server V7.0 Fix Pack 37
8.0.0.11: WebSphere Application Server V8.0 Fix Pack 11
7.0.0.39: WebSphere Application Server V7.0 Fix Pack 39
8.0.0.12: WebSphere Application Server V8.0 Fix Pack 12
7.0.0.41: WebSphere Application Server V7.0 Fix Pack 41
8.0.0.13: WebSphere Application Server V8.0 Fix Pack 13
7.0.0.43: WebSphere Application Server V7.0 Fix Pack 43
8.0.0.14: WebSphere Application Server V8.0 Fix Pack 14
7.0.0.45: WebSphere Application Server V7.0 Fix Pack 45
8.0.0.15: WebSphere Application Server V8.0 Fix Pack 15
6.1.0.43: Java SDK 1.5 SR13 Cumulative Fix for WebSphere Application Server
6.1.0.45: Java SDK 1.5 SR14 Cumulative Fix for WebSphere Application Server
6.1.0.47: Java SDK 1.5 SR16 Cumulative Fix for WebSphere Application Server
7.0.0.23: Java SDK 1.6 SR10 FP1 Cumulative Fix for WebSphere Application Server
7.0.0.25: Java SDK 1.6 SR11 Cumulative Fix for WebSphere Application Server
7.0.0.27: Java SDK 1.6 SR12 Cumulative Fix for WebSphere Application Server
7.0.0.29: Java SDK 1.6 SR13 FP2 Cumulative Fix for WebSphere Application Server
7.0.0.45: Java SDK 1.6 SR16 FP60 Cumulative Fix for WebSphere Application Server
7.0.0.31: Java SDK 1.6 SR15 Cumulative Fix for WebSphere Application Server
7.0.0.35: Java SDK 1.6 SR16 FP1 Cumulative Fix for WebSphere Application Server
7.0.0.37: Java SDK 1.6 SR16 FP3 Cumulative Fix for WebSphere Application Server
7.0.0.39: Java SDK 1.6 SR16 FP7 Cumulative Fix for WebSphere Application Server
7.0.0.41: Java SDK 1.6 SR16 FP20 Cumulative Fix for WebSphere Application Server
7.0.0.43: Java SDK 1.6 SR16 FP41 Cumulative Fix for WebSphere Application Server

APAR status

  • Closed as program error.

Error description

  • In the scenario in which a customer has an external client to
    the target cluster (defined as either a Java thin client, or a
    WebSphere Application Server process acting as a client that is
    either in a separate, non-bridged cell, or in the same cell but
    in a separate, non-bridged core group), the cluster data passed
    back to the client after certain events may be incomplete, or
    new data may not be passed back at all.  An example of this is
    shutting down a cluster member.  The information that the
    cluster member was shut down may not be passed back to the thin
    client.  In most scenarios this causes no more than minor
    issues, because the client-side workload management (WLM) code
    detects that the server is unavailable via an exception when
    attempting to route to that member, but this does not cause the
    client to permanently remove the downed cluster member from the
    routing list.  Occasionally the client attempts to reconnect to
    the downed member to see if it has become available again, at
    an interval controlled by the JVM property described in this
    article:
    
    http://www-01.ibm.com/support/docview.wss?uid=swg21380854
    
    Each attempt to the downed member takes a small amount of time,
    determined by the client connection and socket timeouts.  If
    the server has been shut down, the attempt usually returns
    quickly, in one second or less.  If the server is unavailable
    for another reason, for example a Solaris Zone shutdown, the
    attempt must wait for the socket-connection timeout, whose
    default is generally much longer.
    
    Overall, the problem a customer experiences in this scenario is
    reduced throughput caused by the additional attempts to the
    downed server.  In environments using the default WLM timeout
    and client connection parameters, this reduction in throughput
    is slight, if even measurable, but there are environments in
    which it can cause a significant degradation in throughput and
    response times.  The sketch following this description
    illustrates the retry behavior.
    
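    As a rough illustration, and not WebSphere's actual WLM
    implementation, the following Python sketch models the
    client-side retry behavior described above: a failed member is
    only marked temporarily unusable and is retried after a
    configurable interval.  The interval value of 300 seconds and
    the class and method names are assumptions made for the sketch.
    
        import time
        
        # Illustrative model only; not WebSphere's WLM code.  A failed
        # member is marked unusable rather than removed, and is retried once
        # the configured interval (assumed to mirror the JVM property from
        # the linked article, in seconds) has elapsed.
        UNUSABLE_INTERVAL = 300.0
        
        class RoutingList:
            def __init__(self, members):
                self.members = list(members)   # cluster member endpoints
                self.unusable_since = {}       # member -> time of last failure
        
            def candidates(self):
                """Members eligible for routing right now."""
                now = time.time()
                eligible = []
                for m in self.members:
                    failed_at = self.unusable_since.get(m)
                    # A downed member is skipped until the interval expires
                    # and is then retried; this retry is the source of the
                    # extra connect attempts (and socket-connect timeouts)
                    # described above.
                    if failed_at is None or now - failed_at >= UNUSABLE_INTERVAL:
                        eligible.append(m)
                return eligible
        
            def mark_unusable(self, member):
                """Called when routing to 'member' fails with an exception."""
                self.unusable_since[member] = time.time()
        
            def remove(self, member):
                """What a propagated shutdown update enables: permanent
                removal, so the member is not retried until re-advertised."""
                self.members.remove(member)
                self.unusable_since.pop(member, None)
    
    With the data propagation fixed by this APAR, the shutdown event
    reaches the client, which corresponds to calling remove() rather
    than cycling through mark_unusable() and repeated retry attempts.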

Local fix

Problem summary

  • ****************************************************************
    * USERS AFFECTED:  Users of java thin client applications and  *
    *                  IBM WebSphere Application Server Network    *
    *                  Deployment edition with clustering          *
    ****************************************************************
    * PROBLEM DESCRIPTION: Workload management (WLM) data          *
    *                      propagated to thin clients is           *
    *                      incomplete                              *
    ****************************************************************
    * RECOMMENDATION:                                              *
    ****************************************************************
    In the scenario in which a customer has an external client to
    the target cluster (defined as either a Java thin client, or a
    WebSphere Application Server process acting as a client that is
    either in a separate, non-bridged cell, or in the same cell but
    in a separate, non-bridged core group), the cluster data passed
    back to the client after certain events may be incomplete, or
    new data may not be passed back at all.  An example of this is
    shutting down a cluster member.  The information that the
    cluster member was shut down may not be passed back to the thin
    client.  In most scenarios this causes no more than minor
    issues, because the client-side WLM code detects that the
    server is unavailable via an exception when attempting to route
    to that member, but this does not cause the client to
    permanently remove the downed cluster member from the routing
    list.  Occasionally the client attempts to reconnect to the
    downed member to see if it has become available again, at an
    interval controlled by the JVM property described in this
    article:
    http://www-01.ibm.com/support/docview.wss?uid=swg21380854
    Each attempt to the downed member takes a small amount of time,
    determined by the client connection and socket timeouts.  If
    the server has been shut down, the attempt usually returns
    quickly, in one second or less.  If the server is unavailable
    for another reason, for example a Solaris Zone shutdown, the
    attempt must wait for the socket-connection timeout, whose
    default is generally much longer.
    Overall, the problem a customer experiences in this scenario is
    reduced throughput caused by the additional attempts to the
    downed server.  In environments using the default WLM timeout
    and client connection parameters, this reduction in throughput
    is slight, if even measurable, but there are environments in
    which it can cause a significant degradation in throughput and
    response times.
    

Problem conclusion

  • The code was modified to ensure that in the start and stop
    scenarios the cluster data is updated and propagated to the
    Java thin clients properly, allowing for correct routing
    behavior.
    
    To enable this function, you must set a cell-level custom
    property in the administrative console under:
    
    System Administration -> Cell -> Custom Properties
    
    Define a new custom property with the name:
    
    IBM_CLUSTER_ENABLE_THIN_CLIENT_UPDATES
    
    and a value of true.  Save the change and synchronize it to the
    nodes; the next time each cluster member is restarted, the
    custom property is picked up and the change in behavior takes
    effect.  A wsadmin alternative is sketched below.
    
    The fix for this APAR is currently targeted for inclusion in
    fix packs 6.1.0.43, 7.0.0.23, 8.0.0.3.  Please refer to the
    Recommended Updates page for delivery information:
    http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980
    
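    The same property can also be created with a wsadmin script
    instead of the administrative console.  The following Jython
    sketch assumes a single cell in the deployment manager profile
    and the availability of the AdminNodeManagement script library
    (WebSphere Application Server V7.0 and later); on V6.1,
    synchronize the nodes from the console or with syncNode instead.
    
        # enable_thin_client_updates.py
        # Run with:  wsadmin -lang jython -f enable_thin_client_updates.py
        
        propName = 'IBM_CLUSTER_ENABLE_THIN_CLIENT_UPDATES'
        
        # Locate the cell and add the custom property to its 'properties'
        # attribute, mirroring System Administration -> Cell -> Custom
        # Properties in the administrative console.
        cellId = AdminConfig.getid('/Cell:/')
        AdminConfig.create('Property', cellId,
                           [['name', propName], ['value', 'true']],
                           'properties')
        
        # Persist the change and push it to the active nodes; each cluster
        # member must then be restarted for the new behavior to take effect.
        AdminConfig.save()
        AdminNodeManagement.syncActiveNodes()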

Temporary fix

Comments

APAR Information

  • APAR number

    PM42959

  • Reported component name

    WEBS APP SERV N

  • Reported component ID

    5724H8800

  • Reported release

    610

  • Status

    CLOSED PER

  • PE

    NoPE

  • HIPER

    NoHIPER

  • Special Attention

    NoSpecatt

  • Submitted date

    2011-07-04

  • Closed date

    2011-10-31

  • Last modified date

    2011-10-31

  • APAR is sysrouted FROM one or more of the following:

  • APAR is sysrouted TO one or more of the following:

Fix information

  • Fixed component name

    WEBS APP SERV N

  • Fixed component ID

    5724H8800

Applicable component levels

  • R61A PSY

       UP

  • R61H PSY

       UP

  • R61I PSY

       UP

  • R61P PSY

       UP

  • R61S PSY

       UP

  • R61W PSY

       UP

  • R61Z PSY

       UP

  • R700 PSY

       UP

  • R800 PSY

       UP

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSEQTP","label":"WebSphere Application Server"},"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"610","Line of Business":{"code":"LOB45","label":"Automation"}}]

Document Information

Modified date:
27 October 2021