Setting Up a Multi-Node Hadoop Cluster Just Got Easy!


In this blog, we are going to walk through setting up a multi-node Hadoop cluster in a distributed environment.

So let's not waste any time and get started. Here are the steps you need to perform.


1. Download and install Hadoop 2.7.3 on your local machine (single-node setup), using Java jdk1.8.0_111.
2. Download Apache Spark, choosing Spark release 1.6.2 (pre-built for Hadoop 2.6).

1. Mapping the nodes

First of all, we have to edit the hosts file in the /etc/ folder on all nodes, specifying the IP address of each system followed by its host name.

# vi /etc/hosts
Enter the following lines in the /etc/hosts file, using the actual IP address of each node:

<ip-of-master>     hadoop-master
<ip-of-slave-1>    hadoop-slave-1
<ip-of-slave-2>    hadoop-slave-2

2. Passwordless login through ssh

Then we need to set up ssh passwordless login by configuring key-based authentication.
Set up ssh on every node so that the nodes can communicate with one another without any prompt for a password.

# su hduser 
 $ ssh-keygen -t rsa 
 $ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@hadoop-master 
 $ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@hadoop-slave-1 
 $ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@hadoop-slave-2

Note: the .ssh folder should have permission 700, authorized_keys should have permission 644, and hduser's home directory should have permission 755 on both the master and the slaves. (This is very important, as getting it wrong wasted a lot of my time 😉)
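The permission fixes above can be applied with a few commands; a sketch, run as hduser on every node (the key and path locations are the defaults assumed throughout this post):

```shell
# Ensure the files exist, then tighten permissions so sshd
# will accept key-based logins for hduser.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh                  # sshd rejects keys if .ssh is too open
chmod 644 ~/.ssh/authorized_keys
chmod 755 "$HOME"                 # home dir must not be group/world writable
```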

3. Set up the Java environment for master and slaves

The folder structure must be the same on both master and slaves.
Extract your Java into /home/hduser/software and set the path in hduser's .bashrc as:

export JAVA_HOME=/home/hduser/software/jdk1.8.0_111

4. Configuring Hadoop

  • Install your hadoop in /usr/local
  • Set $HADOOP_HOME in bashrc as:
export HADOOP_HOME=/usr/local/hadoop
  • Create a directory named hadoop_data in the /opt folder and a directory named dfs in $HADOOP_HOME
  • Inside dfs create a directory named name, and inside name create a directory named data
  • The permissions for name and dfs should be 777.
  • Make sure that the hadoop_data folder in /opt is owned by hduser and its permissions are 777
  • Your core-site.xml file should define a directory for Hadoop data and set HDFS as the file storage engine, i.e., the default filesystem URI.
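A minimal core-site.xml consistent with this setup might look like the following; the property values are assumptions, derived from the directories above and the hdfs://hadoop-master:54311 URI used later in this article:

```xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop_data</value>
    <description>directory for hadoop data</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:54311</value>
    <description>data to be put on this URI; use HDFS as file storage engine</description>
  </property>
</configuration>
```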
  • Your hdfs-site.xml file should set the replication factor and the name/data directories created above.
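A sketch of hdfs-site.xml matching the dfs/name and name/data directories created above; the replication factor of 2 is an assumption for this two-slave cluster:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/dfs/name/data</value>
  </property>
</configuration>
```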
  • Your mapred-site.xml should tell MapReduce to run on YARN.
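For Hadoop 2.x with YARN (which the yarn-site.xml step below implies), mapred-site.xml typically needs only the framework name; a minimal sketch:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```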

  • Your yarn-site.xml holds the site-specific YARN configuration properties; at minimum it should name the MapReduce shuffle service and the ResourceManager host.
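A minimal yarn-site.xml sketch; the shuffle service is the standard requirement for running MapReduce on YARN, and pointing yarn.resourcemanager.hostname at hadoop-master is an assumption based on this cluster's layout:

```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
</configuration>
```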

Now set JAVA_HOME in $HADOOP_HOME/etc/hadoop/hadoop-env.sh as:

export JAVA_HOME=/home/hduser/software/jdk1.8.0_111
  • In the master node, list the slaves in the $HADOOP_HOME/etc/hadoop/slaves file (one host name per line), and remove the localhost entry from that file:

hadoop-slave-1
hadoop-slave-2

Important Note: The location of Hadoop and Spark should be the same on the master and the slaves.

5. Configuring Spark

Install Spark in /home/hduser/software and set $SPARK_HOME in .bashrc as:

export SPARK_HOME=/home/hduser/software/spark-1.6.2-bin-hadoop2.6

1. Add the following line in $SPARK_HOME/conf/spark-env.sh, replacing the placeholder with the IP address of the master:

 export SPARK_MASTER_IP=<ip-of-master>

2. Copy your hdfs-site.xml and core-site.xml files from $HADOOP_HOME/etc/hadoop and put them in the $SPARK_HOME/conf folder.
3. In the master node, add the IP addresses of the slaves in the slaves file located in $SPARK_HOME/conf.

  • To run Hadoop:
    Go to $HADOOP_HOME in the master and run: hadoop namenode -format
    Then cd $HADOOP_HOME/sbin and run start-dfs.sh followed by start-yarn.sh.

Important Note: start-dfs.sh will start the NameNode, SecondaryNameNode, and DataNode on the master and a DataNode on all slave nodes. start-yarn.sh will start the NodeManager and ResourceManager on the master node and a NodeManager on the slaves.
Perform hadoop namenode -format only once; otherwise you will get an incompatible cluster_id exception. To resolve this error, clear the temporary data location for the DataNode, i.e., remove the files present in the $HADOOP_HOME/dfs/name/data folder.

Use the following command: rm -rf $HADOOP_HOME/dfs/name/data/*

  • Start Spark: go to $SPARK_HOME/sbin and run start-all.sh.
  • Start the thrift server and log into beeline using hduser as both the username and the password.
  • To start the thrift server, use the following command inside $SPARK_HOME:

./bin/spark-submit --master <spark-master-URL> --conf
spark.sql.hive.thriftServer.singleSession=true --class pathOfClassToRun pathToYourApplicationJar hdfs://hadoop-master:54311/pathToStoreLocation

Look for the Spark master URL on the master node at this address: hadoop-master:8080
If you face any issue, refer to the section below.
Look in the Hadoop logs folder located in $HADOOP_HOME/logs; if you find any of these issues:

1. Incompatible cluster ids: clear the temporary data location for the DataNode, as described above.
2. Failed to start database, with a stack trace mentioning org.apache.spark.sql.hive.client.IsolatedClientLoader: remove metastore_db/dbex.lck.
3. HiveSQLException: Permission denied: user=anonymous, access=WRITE: log in to beeline with hduser as both the user and the password.


