Showing posts with label Cluster. Show all posts

April 28, 2025

How to Configure Passwordless SSH Authentication Between Oracle RAC Nodes

 


During Oracle RAC installation (especially Grid Infrastructure), the installer and tools need to execute commands across all RAC nodes automatically, without human intervention.

Passwordless SSH is needed during the following installation steps:

  • Grid Infrastructure (GI) installation: The Oracle Universal Installer (runInstaller) copies files, runs scripts, and sets up ASM, Clusterware, and CRS services across all nodes.
  • Cluster verification (cluvfy): Verifies shared storage, network configuration, and user equivalence by connecting across nodes.
  • Running root.sh automatically: The installer needs to trigger root scripts remotely on all nodes.
  • Configuration of SCAN and VIPs: RAC configures network resources, which requires access to all nodes without prompting for a password each time.
  • opatchauto (patching GI or RAC): OPatchAuto connects to all nodes, stops CRS, applies patches, and restarts the stack, so it needs SSH access.
  • Database installation: The same applies to the database binaries when installing with RAC options.

📋 As per the official Oracle documentation:

"You must configure secure shell (SSH) for both the Oracle Grid Infrastructure software owner (grid) and the Oracle Database software owner (oracle) to enable passwordless SSH user equivalence across all cluster nodes. This is required for Oracle Universal Installer to copy and run scripts on all cluster nodes during installation."

  

Assumptions

  • Your RAC cluster has at least two nodes (here: testrac1 and testrac2)
  • You are configuring passwordless SSH for the following users:
    • grid user (for Grid Infrastructure)
    • oracle user (for RDBMS software)

(If you use a single user for both, the steps are the same.)

 

On node 1 (hostname: testrac1), as the grid user:

[grid@testrac1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:bLXitTwHBHpFSE88BWI8pA0Bwq3Z0ifj4hb+tGvXrV4 grid@testrac1

The key's randomart image is:

+---[RSA 3072]----+

| .....o=*+=o.    |

|  ...  *+*o      |

|   =  o o.+.     |

|  + = .o o .     |

|   o +  S +      |

|  o .  o + o     |

| o o.  ...E .    |

|  +.... ...o     |

| . o+o .o.       |

+----[SHA256]-----+

[grid@testrac1 ~]$ cd ~/.ssh

[grid@testrac1 .ssh]$ cat id_rsa.pub >> authorized_keys

 

On node 2 (hostname: testrac2), as the grid user:

[grid@testrac2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:BBrZIK8m2dhCjaH4jkPNwl9itHsx4QKVBM5PN754Hf8 grid@testrac2

The key's randomart image is:

+---[RSA 3072]----+

| o+o++.          |

|= =+.o..         |

|o* +o+  .        |

|o*B.= o.         |

|=+*O * .S        |

|.*+ * = o        |

|o .+ + . .       |

| .  o     .      |

|           E     |

+----[SHA256]-----+

 

[grid@testrac2 ~]$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys

 

Exchange Public Keys

From testrac1:

[grid@testrac1 .ssh]$ scp ~/.ssh/id_rsa.pub testrac2:/tmp/testrac1_id_rsa.pub

 

 

From testrac2:

[grid@testrac2 .ssh]$ scp ~/.ssh/id_rsa.pub testrac1:/tmp/testrac2_id_rsa.pub

 

 

Merge the keys into authorized_keys on both nodes

On testrac1:

[grid@testrac1 .ssh]$ cat /tmp/testrac2_id_rsa.pub >> ~/.ssh/authorized_keys

 

On testrac2:

[grid@testrac2 .ssh]$ cat /tmp/testrac1_id_rsa.pub >> ~/.ssh/authorized_keys

 

Perform the same steps as above for the oracle user if you are using oracle as the RDBMS software owner.
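One pitfall worth checking at this point: sshd silently ignores an authorized_keys file whose permissions are too open, which is a common reason passwordless SSH "doesn't work" even after the key exchange above. A minimal sketch that tightens the permissions OpenSSH expects (fix_ssh_perms is a hypothetical helper for this post, not an Oracle tool):

```shell
#!/bin/sh
# Tighten ~/.ssh permissions so sshd will honor authorized_keys:
# the directory must be 700 and the key file 600.
fix_ssh_perms() {
  dir="$1"
  chmod 700 "$dir"
  if [ -f "$dir/authorized_keys" ]; then
    chmod 600 "$dir/authorized_keys"
  fi
}

mkdir -p ~/.ssh
fix_ssh_perms ~/.ssh
```

Run this on both nodes as the grid user, and again as the oracle user if applicable.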

 

 

Test passwordless authentication from both servers:

From testrac2

[grid@testrac2 ~]$ ssh testrac1 date

Mon Apr 28 20:07:19 IST 2025

[grid@testrac2 ~]$ ssh testrac2 date

Mon Apr 28 20:07:24 IST 2025

 

From testrac1

[grid@testrac1 ~]$ ssh testrac1 date

Mon Apr 28 20:08:19 IST 2025

[grid@testrac1 ~]$ ssh testrac2 date

Mon Apr 28 20:08:29 IST 2025

 

This concludes the passwordless SSH configuration between the Oracle RAC nodes.
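Before starting the Grid Infrastructure installation, user equivalence can also be verified with the Cluster Verification Utility from the staged Grid software (the staging path below is an assumption for your environment):

```shell
# Verify SSH user equivalence across both nodes with cluvfy,
# run as the grid user from the unzipped Grid software directory.
cd /u01/stage/grid
./runcluvfy.sh comp admprv -n testrac1,testrac2 -o user_equiv -verbose
```

If this check passes for both the grid and oracle users, the installer's own SSH connectivity test should pass as well.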

 

How to Configure SCAN and VIPs in Oracle RAC 19c

 



1. What is SCAN and VIP in RAC? (Quick intro)

  • SCAN (Single Client Access Name):
    • A single name that clients use to connect to the database.
    • Behind the scenes, SCAN resolves to three IP addresses (round-robin via DNS or GNS).
    • Advantage: No need to change client connection strings if nodes are added/removed.
  • VIP (Virtual IP):
    • Each RAC node gets a VIP in addition to its public IP.
    • If a node fails, its VIP can quickly failover to another node.
    • Helps in fast TCP/IP failover without long TCP timeout delays.

2. Network Requirements

You'll need the following networks:

  • Public Network (for client/database communication)
  • Private Network (for cluster interconnect/heartbeat)
  • Optional: Backup Network (for redundancy)

Important IP planning:

  IP Type        Needed
  Public IPs     1 per node
  VIPs           1 per node
  Private IPs    1 per node
  SCAN IPs       3 total (shared across the cluster)
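As a quick sanity check, the table above implies a simple formula for the minimum number of addresses: three per node (public, VIP, private) plus the three shared SCAN IPs. A small sketch (the helper name is made up for illustration):

```shell
# Minimum IP count for an n-node RAC cluster, per the table above:
# 1 public + 1 VIP + 1 private per node, plus 3 shared SCAN IPs.
rac_ip_count() {
  nodes="$1"
  echo $(( 3 * nodes + 3 ))
}

rac_ip_count 2   # a 2-node cluster needs 9 addresses
```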


3. DNS Setup (for SCAN and VIPs)

Before RAC installation, ensure:

  • SCAN Name points to three IP addresses in DNS.
  • VIP Names are mapped separately in DNS (or /etc/hosts for non-production setups).

Example DNS Entries:

# SCAN entries (round-robin DNS)

testrac-scan.subnet09212030.vcn09212030.oraclevcn.com IN A 10.0.0.98

testrac-scan.subnet09212030.vcn09212030.oraclevcn.com IN A 10.0.0.177

testrac-scan.subnet09212030.vcn09212030.oraclevcn.com IN A 10.0.0.112

# Public IPs

testrac1.subnet09212030.vcn09212030.oraclevcn.com IN  A 10.0.0.127

testrac2.subnet09212030.vcn09212030.oraclevcn.com IN  A  10.0.0.158

# VIPs

testrac1-vip.subnet09212030.vcn09212030.oraclevcn.com  IN  A  10.0.0.36

testrac2-vip.subnet09212030.vcn09212030.oraclevcn.com  IN  A  10.0.0.178

 

 

Note:

  • SCAN should not resolve to a single IP; it must resolve to three IPs.
  • VIPs should be in the same subnet as the public IPs.
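If you maintain these entries in /etc/hosts rather than DNS, a quick scripted check that the SCAN name maps to exactly three addresses can help catch the single-IP mistake (a sketch; the helper name is an assumption, and the file path can be any hosts-format file):

```shell
# Count how many address lines in a hosts-format file map to a given name;
# for the SCAN name the result should be 3.
scan_ip_count() {
  awk -v name="$2" '$2 == name { n++ } END { print n + 0 }' "$1"
}

scan_ip_count /etc/hosts testrac-scan.subnet09212030.vcn09212030.oraclevcn.com
```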

4. /etc/hosts Example (if not using DNS)

# Public IPs

10.0.0.127 testrac1.subnet09212030.vcn09212030.oraclevcn.com testrac1

10.0.0.158  testrac2.subnet09212030.vcn09212030.oraclevcn.com  testrac2

 

# VIPs

10.0.0.36 testrac1-vip.subnet09212030.vcn09212030.oraclevcn.com testrac1-vip

10.0.0.178  testrac2-vip.subnet09212030.vcn09212030.oraclevcn.com  testrac2-vip

 

# Private Interconnect

192.168.16.18 testrac1-priv.subnet09212030.vcn09212030.oraclevcn.com testrac1-priv

192.168.16.19  testrac2-priv.subnet09212030.vcn09212030.oraclevcn.com  testrac2-priv

 

# SCAN IPs

10.0.0.98         testrac-scan.subnet09212030.vcn09212030.oraclevcn.com

10.0.0.177      testrac-scan.subnet09212030.vcn09212030.oraclevcn.com

10.0.0.112      testrac-scan.subnet09212030.vcn09212030.oraclevcn.com

 


5. During Grid Infrastructure Installation

When you install Grid Infrastructure:

  • Installer will prompt for SCAN name.
  • Installer will auto-detect SCAN IPs based on DNS.
  • It will also ask for:
    • Public Interface
    • Private Interface
    • VIP names for each node.

Oracle Clusterware will configure and manage SCAN listeners and VIP listeners automatically after installation.


6. Post-Installation Checks

After installation:

  • Check SCAN listeners:

$srvctl config scan

[oracle@testrac1 ~]$ srvctl config scan

SCAN name: testrac-scan.subnet09212030.vcn09212030.oraclevcn.com, Network: 1

Subnet IPv4: 10.0.0.0/255.255.255.0/enp0s5, static

Subnet IPv6:

SCAN 1 IPv4 VIP: 10.0.0.112

SCAN VIP is enabled.

SCAN 2 IPv4 VIP: 10.0.0.177

SCAN VIP is enabled.

SCAN 3 IPv4 VIP: 10.0.0.98

SCAN VIP is enabled.

 

$srvctl config scan_listener

[oracle@testrac1 ~]$ srvctl config scan_listener

SCAN Listeners for network 1:

Registration invited nodes:

Registration invited subnets:

Endpoints: TCP:1521

SCAN Listener LISTENER_SCAN1 exists

SCAN Listener is enabled.

SCAN Listener LISTENER_SCAN2 exists

SCAN Listener is enabled.

SCAN Listener LISTENER_SCAN3 exists

SCAN Listener is enabled.

 

  • Check VIPs:

[oracle@testrac1 ~]$ srvctl config nodeapps

Network 1 exists

Subnet IPv4: 10.0.0.0/255.255.255.0/enp0s5, static

Subnet IPv6:

Ping Targets:

Network is enabled

Network is individually enabled on nodes:

Network is individually disabled on nodes:

VIP exists: network number 1, hosting node testrac1

VIP Name: testrac1-vip.subnet09212030.vcn09212030.oraclevcn.com

VIP IPv4 Address: 10.0.0.36

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

VIP exists: network number 1, hosting node testrac2

VIP Name: testrac2-vip.subnet09212030.vcn09212030.oraclevcn.com

VIP IPv4 Address: 10.0.0.178

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true

ONS is enabled

ONS is individually enabled on nodes:

ONS is individually disabled on nodes:

 

You should see the VIPs and SCAN listeners enabled and running.
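In addition to the config output above, the runtime status can be checked with srvctl (assumes the grid environment, i.e. ORACLE_HOME and PATH, is set for the user running these):

```shell
# Runtime status of the SCAN VIPs, SCAN listeners, and node applications
# (VIPs, network, ONS) across the cluster.
srvctl status scan
srvctl status scan_listener
srvctl status nodeapps
```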


7. Troubleshooting Tips

  • If SCAN listeners aren’t working, check:
    • DNS settings (or /etc/hosts)
    • Firewall rules blocking SCAN IPs
    • Network interface binding issues
  • If VIP fails to start:
    • Ensure the VIP is in the correct subnet.
    • Check the public network card configuration.

 

April 27, 2025

ocrcheck commands in Oracle RAC 19c

 


 

In Oracle RAC (Real Application Clusters), ocrcheck is a command-line utility used to check the status of Oracle Cluster Registry (OCR) in a cluster environment. The OCR stores important configuration data for Oracle Clusterware, such as cluster node information, voting disk locations, and other configuration details.

Check OCR Status: To check the status of the OCR and verify its integrity, use the following command:

[root@testrac1 ~]# cd /u01/app/19.0.0.0/grid/bin/

[root@testrac1 bin]# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          4

         Total space (kbytes)     :     901284

         Used space (kbytes)      :      84572

         Available space (kbytes) :     816712

         ID                       : 2059301600

         Device/File Name         :      +DATA

                                    Device/File integrity check succeeded

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

         Cluster registry integrity check succeeded

 

         Logical corruption check succeeded

 

To check the Oracle Local Registry (OLR) on the node where you run the command, use the -local option:

[root@testrac1 bin]# ./ocrcheck -local

Status of Oracle Local Registry is as follows :

         Version                  :          4

         Total space (kbytes)     :     491684

         Used space (kbytes)      :      83324

         Available space (kbytes) :     408360

         ID                       : 1502315622

         Device/File Name         : /u01/app/grid/crsdata/testrac1/olr/testrac1_19.olr

                                    Device/File integrity check succeeded

 

         Local registry integrity check succeeded

 

         Logical corruption check succeeded

 

 

Check OCR Configuration: If you need more detailed information about the OCR configuration, including its location, you can use:

[root@testrac1 bin]# ./ocrcheck -config

Oracle Cluster Registry configuration is :

         Device/File Name         :      +DATA

 

Check the OLR location:

[root@testrac1 bin]# ./ocrcheck -local -config

Oracle Local Registry configuration is :

         Device/File Name         : /u01/app/grid/crsdata/testrac1/olr/testrac1_19.olr
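Alongside ocrcheck, the related ocrconfig utility is useful for verifying that Clusterware's automatic OCR backups exist and for taking an on-demand backup (run as root from the same Grid home bin directory):

```shell
# List the automatic OCR backups Oracle Clusterware keeps,
# then take an on-demand (manual) OCR backup.
./ocrconfig -showbackup
./ocrconfig -manualbackup
```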

 

 

September 10, 2016

How to install a single-node Hadoop cluster on CentOS 6

What is Hadoop?
Hadoop is an open-source framework to store and process Big Data in a distributed environment. It contains two modules:
MapReduce and the Hadoop Distributed File System (HDFS).


•MapReduce: A parallel programming model for processing large amounts of structured, semi-structured, and
unstructured data on large clusters of commodity hardware.

•HDFS: The Hadoop Distributed File System is part of the Hadoop framework, used to store and process datasets. It
provides a fault-tolerant file system that runs on commodity hardware.

Hostname: server1.soumya.com
OS: CentOS 6

Step 1: Install Java
Download the JDK:

[root@server1 ~]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u101-b13/jdk-8u101-linux-x64.tar.gz"
[root@server1 ~]# tar zxvf jdk-8u101-linux-x64.tar.gz

Step 2: Install Java with alternatives

After extracting the archive, use the alternatives command to register it. The alternatives command is available in the chkconfig package.

[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/java java /u01/jdk1.8.0_101/bin/java 2
[root@server1 jdk1.8.0_101]# alternatives --config java

There are 4 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
*+ 2           /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java
   3           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
   4           /u01/jdk1.8.0_101/bin/java

Enter to keep the current selection[+], or type selection number: 4

Now Java 8 has been installed. It is recommended to also set up the javac and jar command paths using alternatives:

[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/jar jar /u01/jdk1.8.0_101/bin/jar 4
[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/javac javac /u01/jdk1.8.0_101/bin/javac 4
[root@server1 jdk1.8.0_101]# alternatives --set jar /u01/jdk1.8.0_101/bin/jar
[root@server1 jdk1.8.0_101]# alternatives --set javac /u01/jdk1.8.0_101/bin/javac

Now check the Java version:
[root@server1 alternatives]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Step 3: Configure the environment variables:

# export JAVA_HOME=/u01/jdk1.8.0_101
# export JRE_HOME=/u01/jdk1.8.0_101/jre
# export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin

Add the same variables to ~/.bash_profile so they are set automatically at login.

[root@server1] vi ~/.bash_profile

export JAVA_HOME=/u01/jdk1.8.0_101
export JRE_HOME=/u01/jdk1.8.0_101/jre
export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin

:wq (save & exit)

Step 4: Create the hadoop user

[root@server1 ~]# adduser hadoop
[root@server1 ~]# passwd hadoop

After creating the user account, create an SSH key so the hadoop user can ssh into its own account without a password.

[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
c5:3e:25:c0:92:23:d0:17:fa:56:72:4c:79:72:4c:fe hadoop@server1.soumya.com
The key's randomart image is:
+--[ RSA 2048]----+
|  .o  .+o+.      |
|    o.=o+++      |
|    .o.o++= .    |
|     . + o +     |
|      o S o E    |
|     .     .     |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@server1 ~]$
[hadoop@server1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@server1 ~]$ chmod 0600 ~/.ssh/authorized_keys

Check the connectivity:
[hadoop@server1 ~]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[hadoop@server1 ~]$ exit
logout
Connection to localhost closed.
[hadoop@server1 ~]$

Step 5: Download Hadoop 2.6.0
[hadoop@server1 ~]$ wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
[hadoop@server1 ~]$ tar -zxvf hadoop-2.6.0.tar.gz
[hadoop@server1 ~]$ mv hadoop-2.6.0 /home/hadoop/hadoop

Step 6: Edit the .bash_profile of the hadoop user and add the following lines.
[hadoop@server1 ~]$ vi /home/hadoop/.bash_profile
export PATH
#Java Env Variables
export JAVA_HOME=/u01/jdk1.8.0_101
export JRE_HOME=/u01/jdk1.8.0_101/jre
export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin


#Hadoop Env Variables
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

:wq (save & exit)

Now apply the changes to the current running environment:
[hadoop@server1 ~]$ . /home/hadoop/.bash_profile

Now edit the following file and change the java path

[hadoop@server1 u01]# vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/u01/jdk1.8.0_101

:wq

Now Edit hadoop configuration files and add the following lines.
[hadoop@server1 u01]# cd $HADOOP_HOME/etc/hadoop

[hadoop@server1 u01]# vi core-site.xml
<configuration>
<property>
  <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
</configuration>

[hadoop@server1 u01]# vi hdfs-site.xml
<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>

:wq


[hadoop@server1 u01]# vi mapred-site.xml
<configuration>
 <property>
  <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
</configuration>


:wq


[hadoop@server1 u01]# vi yarn-site.xml
<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
</configuration>

:wq

Step 7: Format the NameNode using the following command.
[hadoop@server1 u01]# hdfs namenode -format

Sample output:

16/09/09 14:56:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = server1.soumya.com/192.168.2.12
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
...
...
16/09/09 14:56:25 INFO common.Storage: Storage directory /home/hadoop/hadoopdata/hdfs/namenode has been successfully formatted.
16/09/09 14:56:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/09/09 14:56:25 INFO util.ExitUtil: Exiting with status 0
16/09/09 14:56:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at server1.soumya.com/192.168.2.12
************************************************************/


Step 8: Start the Hadoop cluster

[hadoop@server1 ~]$ cd $HADOOP_HOME/sbin/
Now run the start-dfs.sh script:

[hadoop@server1 sbin]$ start-dfs.sh
16/09/09 15:07:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password:
localhost: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-server1.soumya.com.out
hadoop@localhost's password:
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-server1.soumya.com.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-server1.soumya.com.out
16/09/09 15:08:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Now run the start-yarn.sh script:

[hadoop@server1 sbin]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-server1.soumya.com.out
hadoop@localhost's password:
localhost: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-server1.soumya.com.out
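Once both scripts have run, the JVM daemons can be listed with jps (part of the JDK installed in Step 1) to confirm everything came up:

```shell
# On a healthy single-node setup, jps should list NameNode, DataNode,
# SecondaryNameNode, ResourceManager, and NodeManager alongside Jps itself.
jps
```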

Step 9: Check the Hadoop services from a browser

http://server1.soumya.com:50070/

To access information about the cluster and all applications:

http://server1.soumya.com:8088/

To get information about the secondary NameNode:
http://server1.soumya.com:50090/


Step 10: Test the Hadoop single-node setup

[hadoop@server1 sbin]$ hdfs dfs -mkdir /user
16/09/09 15:36:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@server1 sbin]$ hdfs dfs -mkdir /user/soumya
16/09/09 15:36:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
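To confirm the filesystem actually accepts data, a file can be written into the new directory and read back (the file name and contents are just an example):

```shell
# Put a local file into HDFS, list the directory, and read the file back.
echo "hello hadoop" > /tmp/test.txt
hdfs dfs -put /tmp/test.txt /user/soumya/
hdfs dfs -ls /user/soumya
hdfs dfs -cat /user/soumya/test.txt
```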

  Recently, one of my junior colleague had a requirement to clone a production database. While doing so he faced the following error while o...