Adding a new node in an existing HDP cluster monitored by Unravel
1. Generate and distribute Unravel's Hive Hook and Spark Sensor JARs
Create a directory, /usr/local/unravel-jars, for the JARs.

mkdir /usr/local/unravel-jars
chmod -R 775 /usr/local/unravel-jars/
chown root:hadoop /usr/local/unravel-jars/
Generate the JARs.

Tip

For unravel-host, specify the protocol (HTTP or HTTPS) and use the fully qualified domain name (FQDN) or IP address of Unravel Server. For example, https://playground3.unraveldata.com:3000.

For spark-version, use a Spark version that is compatible with this version of Unravel. For example:
- spark-2.0 for Spark 2.0.x
- spark-2.1 for Spark 2.1.x
- spark-2.2 for Spark 2.2.x
- spark-2.3 for Spark 2.3.x
- spark-2.4 for Spark 2.4.x

For hive-version, use a Hive version that is compatible with this version of Unravel. For example:
- HDP 3.x: 3.1.0 for Hive 3.1.0
- HDP 2.x: 1.2.0 for Hive 1.2.0 or 1.2.1; 0.13.0 for Hive 0.13.0
chmod +x /usr/local/unravel/install_bin/cluster-setup-scripts/unravel_hdp_setup.py
cd /usr/local/unravel/install_bin/cluster-setup-scripts/
sudo python2 unravel_hdp_setup.py --sensor-only --unravel-server unravel-host:3000 --spark-version spark-version --hive-version hive-version --ambari-server ambari-host
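For illustration only, here is what a complete invocation might look like. The Unravel URL comes from the tip above; the Spark version, Hive version, and Ambari host are placeholder values you must replace with your own:

cd /usr/local/unravel/install_bin/cluster-setup-scripts/
sudo python2 unravel_hdp_setup.py --sensor-only \
    --unravel-server https://playground3.unraveldata.com:3000 \
    --spark-version spark-2.4 \
    --hive-version 3.1.0 \
    --ambari-server ambari1.example.com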
After running the above command, the JAR files are stored in these two directories:
- /usr/local/unravel_client (Hive Hook JAR)
- /usr/local/unravel-agent/jars/ (resource metrics sensor JARs)
Note these directories because you must specify them later in Ambari.
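As an optional sanity check, you can list both directories to confirm the JARs were generated; exact file names vary with the Hive and Spark versions you chose:

ls -l /usr/local/unravel_client/          # Hive Hook JAR, e.g. unravel-hive-*-hook.jar
ls -l /usr/local/unravel-agent/jars/      # resource metrics (sensor) JARs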
Copy the files into the directory /usr/local/unravel-jars/.

cp /usr/local/unravel_client/unravel-hive-hive-version-hook.jar /usr/local/unravel-jars/
cp -pr /usr/local/unravel-agent/jars/* /usr/local/unravel-jars/

Distribute /usr/local/unravel-jars to all worker, edge, and master nodes that run the queries. For example:

scp -r /usr/local/unravel-jars root@hostname:/usr/local/

Make sure each node can reach port 4043 of Unravel Server.
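If several nodes need the JARs, a simple loop keeps the copies consistent. The host names below are placeholders, unravel-host stands for the Unravel Server FQDN, and the port check uses nc, which may not be installed on every node:

# Replace with your worker, edge, and master node host names.
for host in worker1.example.com worker2.example.com edge1.example.com; do
    scp -r /usr/local/unravel-jars root@${host}:/usr/local/
    # Confirm the node can reach port 4043 of Unravel Server.
    ssh root@${host} "nc -z -w 5 unravel-host 4043 && echo 'port 4043 reachable' || echo 'cannot reach port 4043'"
done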
2. For Oozie, copy the Hive Hook and BTrace JARs to the HDFS shared library path
If you are launching Spark actions:
- Copy the JAR for the Spark version you are using, for example, spark-2.3. If you copy multiple Spark JARs, Oozie won't be able to launch actions.
- Ensure that the Spark event log location is configured to the same directory that local Spark jobs use for event logs; Oozie must be able to locate the event log directory to store its event history logs.

Make sure that oozie.libpath for the Oozie shared library in HDFS is defined.

Copy the Hive Hook JAR and the BTrace JAR to oozie.libpath. If you don't do this, jobs controlled by Oozie 2.3+ fail.
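A sketch of the HDFS copy, assuming oozie.libpath points at a standard Oozie share-lib location and that the JAR names below match what was generated on your node (both are assumptions; check your Oozie configuration and the contents of /usr/local/unravel-jars):

# Hypothetical oozie.libpath value; use the one defined in your Oozie configuration.
OOZIE_LIBPATH=/user/oozie/share/lib/lib_20200101000000
# Hive Hook JAR (example built for Hive 3.1.0) and the BTrace sensor JAR (names may differ).
hdfs dfs -put /usr/local/unravel-jars/unravel-hive-3.1.0-hook.jar ${OOZIE_LIBPATH}/
hdfs dfs -put /usr/local/unravel-jars/btrace-agent.jar ${OOZIE_LIBPATH}/
# Run as a user with write access to oozie.libpath; kinit first on a Kerberized cluster.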
3. If you have changed your Kerberos tokens or principal, you must perform the following steps:
Update the following properties to ensure the latest Kerberos keytab file for Unravel is available on Unravel servers.
com.unraveldata.kerberos.principal=new-principal
com.unraveldata.kerberos.keytab.path=new-keytab-path
Make sure the new keytab file's ownership and permissions are restored to the original setup.
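A minimal sketch, assuming the properties are set in /usr/local/unravel/etc/unravel.properties and the keytab is owned by the unravel service user (the file path, principal, keytab location, and ownership below are all assumptions; adjust to your installation):

# Example property values; replace with your actual principal and keytab path.
#   com.unraveldata.kerberos.principal=unravel/unravel-host@EXAMPLE.COM
#   com.unraveldata.kerberos.keytab.path=/etc/security/keytabs/unravel.service.keytab
sudo vi /usr/local/unravel/etc/unravel.properties

# Restore the keytab's ownership and permissions (example values).
sudo chown unravel:unravel /etc/security/keytabs/unravel.service.keytab
sudo chmod 600 /etc/security/keytabs/unravel.service.keytab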
Restart all services.
sudo /etc/init.d/unravel_all.sh start