REP-50600 : Broadcasting is disabled. Please enable cosnaming for Reports discovery.

 

 


 

A few days ago, we faced an issue with a customer who could not start the Oracle standalone Reports Server after restarting their physical server.

 

Trying to start the Reports Server with the following command failed with this error:

[oracle@dev-wls bin]$ ./opmnctl startproc ias-component=RptSvr_dev-wls_asinst_1

opmnctl startproc: starting opmn managed processes...================================================================================

opmn id=dev-wls.rcf.gov.in:6701

Response: 2 of 3 processes started.

 

ias-instance id=asinst_1

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ias-component/process-type/process-set:

  RptSvr_dev-wls_asinst_1/ReportsServerComponent/RptSvr_dev-wls_asinst_1/

 

Error

--> Process (index=1,uid=1518988558,pid=30017)

  failed to start a managed process after the maximum retry limit

  Log:

  /u01/app/oracle/Middleware/asinst_1/diagnostics/logs/ReportsServerComponent/RptSvr_dev-wls_asinst_1/console~RptSvr_dev-wls_asinst_1~1.log

 

Upon checking console~RptSvr_dev-wls_asinst_1~1.log, we didn't find much information.

However, rwserver_diagnostic.log in the same location contained the following error message:

[2025-04-08T20:36:47.506+05:30] [reports] [INCIDENT_ERROR] [REP-50600] [oracle.reports.server] [tid: 10] [ecid: 0000POLw7MfFo2c5xjg8yW1bxJi2000001,0] REP-50600 : Broadcasting is disabled. Please enable cosnaming for Reports discovery.  [[

oracle.reports.RWException: IDL:oracle/reports/RWException:1.0

        at oracle.reports.utility.Utility.newRWException(Utility.java:1053)

        at oracle.reports.server.RWServer.startServer(RWServer.java:1123)

        at oracle.reports.server.RWServer.jniMain(RWServer.java:307)

 

 

Solution:-

This issue is generally observed when a firewall is enabled on the host.

In our case the firewall had been stopped, but the service was still enabled at boot, so it started again when the physical server was restarted. Stop it and disable it permanently:

#systemctl stop firewalld

#systemctl disable firewalld
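Since the root cause was the firewall service re-enabling itself at boot, it helps to verify both the runtime and the boot-time state in one place. A minimal sketch, assuming a systemd-based Linux host (the guard makes it a no-op elsewhere):

```shell
# Sketch (assumes systemd): ensure firewalld is stopped now AND disabled at
# boot, so a server restart does not silently bring it back. Run as root.
if command -v systemctl >/dev/null 2>&1; then
  systemctl stop firewalld 2>/dev/null || true
  systemctl disable firewalld 2>/dev/null || true
  # a non-zero exit from is-enabled means the unit will not start at boot
  systemctl is-enabled firewalld >/dev/null 2>&1 || echo "firewalld will not start at boot"
fi
echo "firewall check done"
```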

 

After executing the above commands, the Reports Server started successfully.

Step-by-step guide to patch the Oracle Grid and DB homes to 19.23 in Oracle RAC 19c on AIX 7.3

Environment details:-

RAC: 2-node cluster
ORACLE_HOME: /u01/app/oracle/product/19.3.0/dbhome_1
GRID_HOME: /u01/app/grid/19.3.0/gridhome_1
OS: AIX 7.3 on Power
DB version: 19.3.0
Grid version: 19.0.0.0



·         Download the patch from MOS (My Oracle Support)

We are applying Patch 36233126 - GI Release Update 19.23.0.0.240416; download it from MOS. This patch includes the release-update patches for both the Oracle Database and Grid Infrastructure homes.

 

Once downloaded, upload the patch to a shared path, referred to here as <UNZIPPED_PATCH_LOCATION>. This path should be accessible from both nodes; an NFS path works.

 

Ensure <UNZIPPED_PATCH_LOCATION> is empty, then unzip the patch inside this directory:

unzip p36233126_190000_AIX64-5L.zip
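The empty-directory requirement can be enforced with a small guard before unzipping. A sketch; the mktemp directory is a stand-in so the snippet is self-contained, whereas on the real system it would be the shared path such as /orabackup/patches:

```shell
# Sketch: refuse to unzip into a non-empty staging directory.
UNZIPPED_PATCH_LOCATION=$(mktemp -d)   # stand-in for the real shared path
if [ -n "$(ls -A "$UNZIPPED_PATCH_LOCATION" 2>/dev/null)" ]; then
  echo "ERROR: $UNZIPPED_PATCH_LOCATION is not empty" >&2
  exit 1
fi
echo "staging directory is empty; safe to unzip"
# unzip -q p36233126_190000_AIX64-5L.zip -d "$UNZIPPED_PATCH_LOCATION"
```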

 

·         OPatch version:-

Ensure the latest version of OPatch is installed in both the Grid and Oracle homes. Download the latest OPatch from MOS (patch number 6880880).

Before installing it, take a backup of the existing OPatch folder inside $ORACLE_HOME.

To validate the OPatch version, perform the following on both nodes.


Login as the Oracle user

-bash-5.2$ cd $ORACLE_HOME

-bash-5.2$ cd OPatch

-bash-5.2$ ./opatch version

OPatch Version: 12.2.0.1.42

 

OPatch succeeded.

 

 

Login as the Grid user

-bash-5.2# su - grid

-bash-5.2$ cd $ORACLE_HOME

-bash-5.2$ cd OPatch

-bash-5.2$ ./opatch version

OPatch Version: 12.2.0.1.42
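The patch README states a minimum required OPatch version; rather than eyeballing the dotted string, it can be compared numerically. A sketch using a hypothetical `version_ge` helper; the MIN_OPATCH value here is an assumption, so take the real minimum from the patch README:

```shell
# Sketch: compare dotted version strings field by field (portable awk).
version_ge() {
  # true (exit 0) if $1 >= $2
  awk -v a="$1" -v b="$2" 'BEGIN {
    n = split(a, x, "."); m = split(b, y, ".")
    len = (n > m) ? n : m
    for (i = 1; i <= len; i++) {
      ai = (i <= n) ? x[i] : 0; bi = (i <= m) ? y[i] : 0
      if (ai + 0 > bi + 0) exit 0
      if (ai + 0 < bi + 0) exit 1
    }
    exit 0
  }'
}

MIN_OPATCH=12.2.0.1.37   # assumed minimum; check the patch README
CURRENT=12.2.0.1.42      # as reported by "opatch version" above

if version_ge "$CURRENT" "$MIN_OPATCH"; then
  echo "OPatch $CURRENT meets the minimum ($MIN_OPATCH)"
else
  echo "Upgrade OPatch (patch 6880880) before continuing" >&2
fi
```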


·         Validation of Oracle Inventory

Check the consistency of the inventory information for the Grid home and each Oracle home to be patched. Run the following command as the respective home owner, on both nodes.

Login as grid user

$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

 

Login as oracle user

$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

 


·         Run OPatch to check for conflicts. Do this on both nodes.

Determine whether any currently installed one-off patches conflict with patch 36233126 as follows:

  • Login as the Grid home user:

export UNZIPPED_PATCH_LOCATION=/orabackup/patches

 

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36233263

 

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36240578

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36233343

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36460248

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36383196

 

 

Login as the oracle user and run the following checks:

export UNZIPPED_PATCH_LOCATION=/orabackup/patches

 

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36233263

 

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/36240578
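Since the same prereq command is repeated for each sub-patch, the five Grid-home checks can be generated in a loop instead of typed out one by one. This sketch only prints the commands for review; the sub-patch numbers are the ones shipped inside RU 36233126, as listed above:

```shell
# Sketch: print the five GI-home conflict-check commands in a loop.
UNZIPPED_PATCH_LOCATION=/orabackup/patches
for p in 36233263 36240578 36233343 36460248 36383196; do
  echo "\$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $UNZIPPED_PATCH_LOCATION/36233126/$p"
done
```

Piping the output through `sh` (or removing the echo) would actually run the checks.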

 


·         Run the OPatch system space check

Check whether enough free space is available on the ORACLE_HOME filesystem for the patches to be applied, as shown below.

Run this as the Grid user on both nodes.


echo > /tmp/patch_list_gihome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36233263  >> /tmp/patch_list_gihome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36240578  >> /tmp/patch_list_gihome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36233343  >> /tmp/patch_list_gihome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36460248  >> /tmp/patch_list_gihome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36383196  >> /tmp/patch_list_gihome.txt
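The same list file can be built in a loop; note that `: > file` truncates the file without writing the leading blank line that a bare `echo > file` leaves behind. A sketch, using the list path and sub-patch numbers from above:

```shell
# Sketch: build the GI-home patch-list file and sanity-check the entry count.
UNZIPPED_PATCH_LOCATION=/orabackup/patches
LIST=/tmp/patch_list_gihome.txt
: > "$LIST"    # truncate without adding a blank first line
for p in 36233263 36240578 36233343 36460248 36383196; do
  echo "$UNZIPPED_PATCH_LOCATION/36233126/$p" >> "$LIST"
done
echo "entries: $(wc -l < "$LIST")"   # expect 5
```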



Run the OPatch command on both nodes to check whether enough free space is available in the Grid home:

Run as Grid user on both nodes.

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt



Run the OPatch command on both nodes to check whether enough free space is available in the Oracle home:

Run as Oracle user on both nodes.


echo > /tmp/patch_list_dbhome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36233263 >> /tmp/patch_list_dbhome.txt

echo $UNZIPPED_PATCH_LOCATION/36233126/36240578 >> /tmp/patch_list_dbhome.txt


·         Run opatchauto in analyze mode to check for conflicts in both the 19c Grid home and the 19c DB homes, without applying anything.

Run as the root user on the 1st node.


$ORACLE_HOME/OPatch/opatchauto apply $UNZIPPED_PATCH_LOCATION/36233126 -analyze

 

-bash-5.2# /u01/app/grid/19.3.0/gridhome_1/OPatch/opatchauto apply $UNZIPPED_PATCH_LOCATION/36233126 -analyze

 

OPatchauto session is initiated at Tue Jun 11 13:03:21 2024

 

System initialization log file is /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchautodb/systemconfig2024-06-11_01-03-46PM.log.

 

Following home(s) will not be included as part of current opatchauto session as they do not run from the current host.

        Database Name: PROD

        Oracle Home: /u01/app/oracle/product/19.3.0/dbhome_1

        Host:

 db1

 db2

 

To complete the patching process for the above databases, execute it on host where the databases are running.

Session log file is /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/opatchauto2024-06-11_01-04-08PM.log

The id for this session is P4XT

 

Executing OPatch prereq operations to verify patch applicability on home /u01/app/grid/19.3.0/gridhome_1

Patch applicability verified successfully on home /u01/app/grid/19.3.0/gridhome_1

 

 

Executing patch validation checks on home /u01/app/grid/19.3.0/gridhome_1

Patch validation checks successfully completed on home /u01/app/grid/19.3.0/gridhome_1

 

OPatchAuto successful.

 

--------------------------------Summary--------------------------------

 

Analysis for applying patches has completed successfully:

 

Host:db1

CRS Home:/u01/app/grid/19.3.0/gridhome_1

Version:19.0.0.0.0

 

 

==Following patches were SUCCESSFULLY analyzed to be applied:

 

Patch: /orabackup/patches/36233126/36240578

Log: /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2024-06-11_13-04-41PM_1.log

 

Patch: /orabackup/patches/36233126/36233343

Log: /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2024-06-11_13-04-41PM_1.log

 

Patch: /orabackup/patches/36233126/36383196

Log: /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2024-06-11_13-04-41PM_1.log

 

Patch: /orabackup/patches/36233126/36460248

Log: /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2024-06-11_13-04-41PM_1.log

 

Patch: /orabackup/patches/36233126/36233263

Log: /u01/app/grid/19.3.0/gridhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2024-06-11_13-04-41PM_1.log

 

 

 

OPatchauto session completed at Tue Jun 11 13:05:57 2024

Time taken to complete the session 2 minutes, 12 seconds

-bash-5.2#


  • Before applying the patch, the readiness of the Grid home can be verified with the cluvfy utility.

Run this as the grid user from any cluster node:


bash-5.2$ cluvfy stage -pre patch

 

Verifying cluster upgrade state ...PASSED

Verifying Software home: /u01/app/grid/19.3.0/gridhome_1 ...PASSED

 

Pre-check for Patch Application was successful.

 

CVU operation performed:      stage -pre patch

Date:                         Jun 11, 2024 1:23:08 PM

CVU home:                     /u01/app/grid/19.3.0/gridhome_1/

User:                         grid



If it doesn't report any issues, we are good to go.


·         Execute the opatchauto utility to start the patching of the GI and DB homes. opatchauto is executed as the root user and applies the patch in a rolling fashion.

The utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the Grid home or Oracle RAC database home is on non-shared storage. The utility can be run in parallel on the cluster nodes, except for the first node.

Depending on the command-line options specified, one invocation of OPatchAuto can patch the Grid home, Oracle RAC database homes, or both Grid and Oracle RAC database homes of the same Oracle release version as the patch. You can also roll back the patch with the same selectivity.

Add the directory containing the OPatchAuto to the $PATH environment variable. For example:


export PATH=$PATH:<GI_HOME>/OPatch

 

Or, when using the -oh flag:

# export PATH=$PATH:<oracle_home_path>/OPatch

To patch the Grid home and all Oracle RAC database homes of the same version:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/36233126

To patch only the Grid home:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/36233126 -oh <GI_HOME>

To patch one or more Oracle RAC database homes:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/36233126 -oh <oracle_home1_path>,<oracle_home2_path>

To roll back the patch from the Grid home and each Oracle RAC database home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/36233126 

To roll back the patch from the Grid home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/36233126 -oh <path to GI home>  

To roll back the patch from the Oracle RAC database home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/36233126 -oh <oracle_home1_path>,<oracle_home2_path> 

 

 


·         In our setup, we use opatchauto to patch both the GI and DB homes with a single command.

Login to the 1st node as the root user and execute the following:


export PATH=$PATH:/u01/app/grid/19.3.0/gridhome_1/OPatch

 

-bash-5.2# opatchauto apply /orabackup/patches/36233126

 

Once the patches are applied on the 1st node, login to the 2nd node as the root user and execute the same commands:
 

export PATH=$PATH:/u01/app/grid/19.3.0/gridhome_1/OPatch

 

-bash-5.2# opatchauto apply /orabackup/patches/36233126

 

opatchauto itself shuts down the GI services, applies the patch, and runs the patch post-installation steps.

 

Login to each node as the oracle user and shut down the Oracle instance running on that node:


$ srvctl stop database -d PROD -i PROD1

$ srvctl status database -d PROD -i PROD1

 

export PATH=$PATH:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch

 

opatch apply -oh /u01/app/oracle/product/19.3.0/dbhome_1 -local /orabackup/patches/36233126/36233263

 

Once the patch is applied, start the database instance on node 1:

 

$ srvctl start database -d PROD -i PROD1

 

After the database instance has started on node 1, switch to node 2 and patch it using the same method described above.


Once the patch is applied on both database homes, run datapatch from any node:


 

-bash-5.2$ /u01/app/oracle/product/19.3.0/dbhome_1/OPatch/datapatch -verbose


Verify the patch details:

-bash-5.2$ opatch lsinventory | grep 19.23

Patch description:  "Database Release Update : 19.23.0.0.240416 (36233263)"