Setting Up a Multi-Node Hadoop Cluster Just Got Easy!


Knoldus

In this blog, we are going to embark on the journey of setting up a Hadoop multi-node cluster in a distributed environment.

So let's not waste any time and get started.
Here are the steps you need to perform.

Prerequisites:

1. Download and install Hadoop on your local machine (single-node setup) from http://hadoop.apache.org/releases.html – version 2.7.3, using Java jdk1.8.0_111.
2. Download Apache Spark from http://spark.apache.org/downloads.html – choose Spark release 1.6.2 (a quick smoke test to verify the setup follows this list).
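Once Hadoop and Spark are in place, it is worth sanity-checking the single-node setup before moving on to the cluster. Below is a minimal Scala sketch, assuming the Spark 1.6.2 jars are on your classpath; the object name SparkSmokeTest is just for illustration. It runs a tiny job in local mode and prints a known result:

import org.apache.spark.{SparkConf, SparkContext}

object SparkSmokeTest {
  def main(args: Array[String]): Unit = {
    // Run a tiny job in local mode to confirm the single-node setup works.
    val conf = new SparkConf().setAppName("hadoop-cluster-smoke-test").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val sum = sc.parallelize(1 to 100).reduce(_ + _)
    println(s"Sum of 1..100 = $sum") // expect 5050
    sc.stop()
  }
}

If this prints 5050 without errors, your local Hadoop/Spark installation is ready for the multi-node steps below.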

1. Mapping the nodes

First of all, we have to edit the hosts file in the /etc/ folder on all nodes, specifying the IP address of each system followed by its host name.

# vi /etc/hosts

Enter the following lines in the /etc/hosts file:

192.168.1.xxx hadoop-master
192.168.1.xxx hadoop-slave-1
192.168.56.xxx hadoop-slave-2
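To confirm the mapping took effect, you can try resolving each host name from every node. Below is a minimal Scala sketch; the host names match the /etc/hosts entries above, and the object name HostMappingCheck is just for illustration:

import java.net.{InetAddress, UnknownHostException}

object HostMappingCheck {
  // Host names as mapped in /etc/hosts; adjust to your own cluster.
  val hosts = Seq("hadoop-master", "hadoop-slave-1", "hadoop-slave-2")

  def main(args: Array[String]): Unit = {
    hosts.foreach { host =>
      try {
        val addr = InetAddress.getByName(host)
        println(s"$host resolves to ${addr.getHostAddress}")
      } catch {
        case _: UnknownHostException =>
          println(s"$host could not be resolved; re-check /etc/hosts on this node")
      }
    }
  }
}

Run it on each node; every host name should resolve to the IP address you entered above.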

