I have inherited a 'technology' stack that I have very little experience with and am hoping for help to resolve an issue. Using the OpenVMS Web Service Integration Toolkit to call an OpenVMS service from a web browser, I am receiving the following error:
'com.hp.wsi.WsiConnectionException: ERROR: Transceive failure EndPointLocate: %WSI-F-FAILED_IPC_INIT, Unexpected failure while initializing IPC context'
Switching on IPC debugging shows the following:
(wsi$$protocol_init) Initialized (once only), OK
(wsi$$protocol_list_new) OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_THREADS=1, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_STACKSIZE=2000000, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_INIT_F=0x00080328, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_TRANSCEIVE_F=0x00080348, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_DISCONNECT_F=0x00080368, OK
(wsi$$error_set) iError=0x0001004a (65610) (IPC,ERROR,9)
(wsi$$error_set) osError=0x0000045c (1116)
(wsi$$error_set) "I/O failure: SYS$ICC_OPEN_ASSOC() failed"
(_icc_init_assoc_locked) sys$icc_open_assoc(ICC$PID_00003E97_WSI) failed, st=1116
A successful connection shows:
(wsi$$protocol_init) Initialized (once only), OK
(wsi$$protocol_list_new) OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_THREADS=1, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_STACKSIZE=2000000, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_INIT_F=0x00080328, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_TRANSCEIVE_F=0x00080348, OK
(_set_srv_context) pctx=0x00f617f8, WSI$_SRV_DISCONNECT_F=0x00080368, OK
(_icc_init_assoc_locked) sys$icc_open_assoc(ICC$PID_000071BF_WSI), assoc=0x00010001, OK
(wsi$$protocol_binding_compose_d) ProtSeq="wsi_icc"
(wsi$$protocol_binding_compose_d) NetAddr="SVF"
(wsi$$protocol_binding_compose_d) EndPoint="ICC$PID_000071BF_WSI"
I am guessing this is a resourcing issue but have no idea what needs to be changed.
Any help very much appreciated.
TIA
You seem to be getting an SS$_SSFAIL (1116) error from sys$icc_open_assoc; that comes from the ICC (Intra-Cluster Communication) services:
SS$_SSFAIL Transport association name table is full, systemwide.
Perhaps some process does not exit cleanly, so the resources are not released.
Quoting from the doc:
SYS$ICC_OPEN_ASSOC
... The association name space is a controlled resource. For information about managing this resource, see the HP OpenVMS System Manager's Manual.
An attempt to open an association with a name not authorized as described in the HP OpenVMS System Manager's Manual will fail with the error SS$_NOPRIV returned to the caller. In addition to making entries in the system's local association name space, a call to $ICC_OPEN_ASSOC may also make an entry in a simple clusterwide registry of active associations.
An association may only be accessed from the mode in which it was opened. Inner modes are prevented from using the default association.
An application can open any number of associations subject to available process BYTLM quota. Currently, there is a systemwide limit of 512 open associations. There is no limit imposed clusterwide.
Gustogusto was spot on - there is an OpenVMS system-wide limit on the number of associations permitted, and the error 0x0000045c is thrown when the limit of 512 associations (connections) is reached (see details below). At this time there is no fix for this issue. A possible workaround could be to configure this machine into a cluster, since no limit is imposed on associations clusterwide. Another possible workaround may be to periodically stop and restart the WSIT software.
The IPC logs highlight that the SYS$ICC_OPEN_ASSOC() call fails with error 0x0000045c.
The SYS$ICC_OPEN_ASSOC() service opens an "association" with the Intra-Cluster Communications (ICC) facility, so that the process can receive incoming connections.
One way to verify this limit is through SDA (the System Dump Analyzer). The commands:
$ ANALYZE/SYSTEM
SDA> ICC SHOW ASSOCIATIONS
will display all open ICC associations.
Example Output:
ICC Associations
--- ICCPAB Summary Page ---
ICCPAB Addr Extended Process name State Conn Association Name
----------- ---PID--- --------------- ------- ---- ----------------
896771C0 00000433 WSI$MANAGER Open 0 WSI$MANAGER_REPTAR
896AB440 00000442 MATH_0442 Open 0 ICC$PID_00000442
896AA2C0 00000442 MATH_0442 Open 0 ICC$PID_00000442_WSI
896B1080 00000443 STOCK_0443 Open 0 ICC$PID_00000443
896AA140 00000443 STOCK_0443 Open 0 ICC$PID_00000443_WSI
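As a side note (not part of the original thread), a capture of that SDA listing can be tallied mechanically. Below is a hedged Python sketch that counts the rows whose State column reads "Open" and flags when the systemwide limit of 512 is being approached; the column layout is assumed from the sample output above, and the threshold of 500 is an arbitrary choice.

```python
# Count open ICC associations in a captured "SDA> ICC SHOW ASSOCIATIONS"
# listing and warn when the systemwide limit (512) is being approached.
# Column layout is assumed from the sample output above (illustrative only).

LIMIT = 512        # systemwide association limit (per the docs quoted above)
THRESHOLD = 500    # arbitrary alert threshold

def count_open_associations(listing: str) -> int:
    """Count data rows whose fourth column (State) reads 'Open'."""
    count = 0
    for line in listing.splitlines():
        fields = line.split()
        # Data rows look like: addr pid process-name state conn assoc-name
        if len(fields) >= 6 and fields[3] == "Open":
            count += 1
    return count

sample = """\
896771C0 00000433 WSI$MANAGER     Open    0 WSI$MANAGER_REPTAR
896AB440 00000442 MATH_0442       Open    0 ICC$PID_00000442
896AA2C0 00000442 MATH_0442       Open    0 ICC$PID_00000442_WSI
"""

n = count_open_associations(sample)
print(n, "open associations;", "ALERT" if n >= THRESHOLD else "OK")
```

The header and separator rows are skipped automatically because their fourth whitespace-separated field is never the literal word "Open".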
I then created this COM file to raise an alert when the limit is being approached, so that garbage collection can clear orphaned processes before the limit is hit:
$ SET NOON
$ SET NOVERIFY
$ SUBMIT/QUE='<batch que name here>'/AFTER="+00:10"/LOG=ICC.LOG ICC.COM
$ ANALYZE/SYSTEM
SET OUTPUT/SINGLE HPESUPPORT-ICC.TXT
ICC SHOW ASSOCIATIONS
EXIT
$ pipe sear HPESUPPORT-ICC.TXT "ws1_","open"/mat=and/stats/out=ICC.TXT | search sys$input "records matched"/out=t.t
$ OPEN/READ INFILE T.T
$ READ/END_OF_FILE=EOF INFILE Temp
$ CLOSE INFILE
$ TEMP = F$EXTRACT(30, 3, TEMP)
$ IF F$INTEGER(TEMP) .GE. 500
$ THEN
$ mail/subject="ICC limit alert" ICC.TXT "email address here"
$ ENDIF
$ EOF:
$ IF F$TRNLNM("INFILE") .NES. "" THEN CLOSE INFILE
$ PURGE/KEEP = 144 icc.log
$ PURGE/KEEP = 144 hpesupport-icc.txt
$ PURGE/KEEP = 144 t.t
$ !+
$ ! Show connection count chronologically.
$ !-
$ !type t.t;*/out=p.t
$ !sort/key=(pos:30,siz:3) p.t sys$output
$ !+
$ ! List connections in user name sequence.
$ !-
$ !sear hpesupport-icc.txt "Process Name: WS1_"/format=dump/out=p.t
$ !sort/key=(pos:42,siz:12) P.t sys$output
$ EXIT
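For anyone prototyping the alert step off-VMS first: the SEARCH/MATCH=AND plus threshold test above boils down to a simple count-and-compare. This is a hedged Python sketch of that logic only; the "ws1_"/"open" substrings and the threshold of 500 are carried over from the DCL, and the sample line is invented for illustration.

```python
# Off-VMS prototype of the COM file's alert decision: count lines in the
# captured SDA listing that contain both "ws1_" and "open" (case-blind),
# mirroring SEARCH/MATCH=AND, then compare against the threshold.
# Substrings and the 500 threshold are carried over from the DCL above.

def should_alert(listing_lines, threshold=500):
    """Return (alert?, matched-count) for a list of listing lines."""
    matched = sum(
        1 for line in listing_lines
        if "ws1_" in line.lower() and "open" in line.lower()
    )
    return matched >= threshold, matched

# Invented sample line, for illustration only.
lines = ["896AA2C0 00000442 WS1_MATH   Open    0 ICC$PID_00000442_WSI"]
alert, n = should_alert(lines)
print("records matched:", n, "alert:", alert)
```

Once the logic is settled, the DCL version above is what actually runs on the box.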
Now off to find out why processes are being orphaned, as fixing that will prevent the limit from being reached quite so quickly.
User contributions licensed under CC BY-SA 3.0