
Big Data - Apache Hadoop Multi Node

In the last blog we installed Hadoop in a single node environment. In this blog we will set up a multi node environment.
All daemons will be spread across different nodes.

Cluster Configuration:

2-10 nodes (small)
- Name Node, Job Tracker and Secondary Name Node on the same machine
- Data Node and Task Tracker on all other machines

10-40 nodes (medium/single rack)
- Name Node and Job Tracker on the same machine
- Secondary Name Node on a dedicated machine
- Data Node and Task Tracker on all other machines

100+ nodes (large/multi rack)
- Name Node, Job Tracker and Secondary Name Node each on a dedicated machine
- Rack awareness
- Network and HDFS optimization
- MapReduce optimization

Let's see the process of bringing up a multi node cluster. Once you have a bunch of machines with the OS and Hadoop installed on them, you need to:

- Generate an ssh key on the name node and distribute it to all our slaves/data nodes, as sketched below.
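
Here is a minimal sketch of that step, assuming the cluster runs under a user named hadoop and the slaves are reachable as datanode1 and datanode2 (hypothetical hostnames):

    # On the name node: generate a passwordless key if one does not exist yet
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # Authorize the key locally too, since the start scripts also ssh into the name node itself
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

    # Copy the public key to each slave/data node
    ssh-copy-id hadoop@datanode1
    ssh-copy-id hadoop@datanode2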

- Then configure the name node. On the name node we have the masters and the slaves file. In the masters file we configure our secondary name node; in the slaves file we list all our data nodes. An example of both files follows.
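
For example, with the secondary name node on a host called snamenode and two data nodes (hypothetical hostnames), the two files in Hadoop's conf directory would look like this:

    # conf/masters -- the host that runs the secondary name node
    snamenode

    # conf/slaves -- one data node / task tracker host per line
    datanode1
    datanode2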

NOTE: If you have the name node and job tracker on different machines, make sure their slaves files are kept synchronized.
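
One simple way to keep them synchronized, assuming the job tracker runs on a host called jobtracker (hypothetical) and Hadoop lives in ~/hadoop, is to push the file from the name node:

    # Push the name node's slaves file to the job tracker machine
    rsync -av ~/hadoop/conf/slaves hadoop@jobtracker:~/hadoop/conf/slaves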

- The third step is to configure our data nodes and task trackers. This is done by editing their site XML files, specifically the core-site.xml file (so HDFS points at the name node) and the mapred-site.xml file (so MapReduce points at the job tracker). A sketch of both files follows.
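
A minimal sketch of the two files on each slave, assuming the name node is on a host called namenode and the job tracker on jobtracker (hypothetical hostnames; the property names are the classic Hadoop 1.x ones):

    <!-- core-site.xml: tell every node where the name node (HDFS) lives -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://namenode:9000</value>
      </property>
    </configuration>

    <!-- mapred-site.xml: tell every node where the job tracker lives -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>jobtracker:9001</value>
      </property>
    </configuration>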

Once it is done, then

- run the start-dfs.sh script
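
Running it from the name node brings up the name node locally and a data node on every host listed in the slaves file; in classic Hadoop 1.x, start-mapred.sh does the same for the MapReduce daemons. A quick sketch, assuming $HADOOP_HOME points at the install directory:

    # On the name node: start HDFS across the cluster
    $HADOOP_HOME/bin/start-dfs.sh

    # On the job tracker machine: start the MapReduce daemons
    $HADOOP_HOME/bin/start-mapred.sh

    # On any node: list the running Hadoop daemons to verify
    jps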



We just need to follow this process and our multi node cluster is set.

The commands are listed here: Hadoop Commands
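
For quick reference, a few everyday commands against the running cluster (standard Hadoop 1.x CLI):

    # Report cluster capacity and live data nodes
    hadoop dfsadmin -report

    # Basic HDFS file operations
    hadoop fs -ls /
    hadoop fs -put localfile.txt /user/hadoop/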

Please refer to the single node cluster installation below to help you with the installation of the multi node cluster. I am not going to reinvent the wheel.

big-data-installing-hadoop-single-node
