Friday, March 4, 2011

Understanding What Hadoop Is

What is Hadoop:
Hadoop is a framework written in Java for running applications on large clusters of commodity hardware, and it incorporates features similar to those of the Google File System and of MapReduce. HDFS is a highly fault-tolerant distributed file system and, like the rest of Hadoop, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets (in the range of terabytes to zettabytes).

Who uses Hadoop:
Hadoop is mainly used by companies that deal with large amounts of data. They may need to process that data, perform analysis on it, or generate reports from it. Currently, leading organizations including Facebook, Yahoo, Amazon, IBM, Joost, PowerSet, the New York Times, and Veoh are using Hadoop.

Why Hadoop:
MapReduce is Google's secret weapon: a way of breaking complicated problems apart and spreading the pieces across many computers. Hadoop is an open-source implementation of MapReduce, together with its own file system, HDFS (the Hadoop Distributed File System).
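The MapReduce model described above can be sketched on a single machine in plain Java. This is an illustrative simulation, not Hadoop's actual API: a "map" step emits (word, 1) pairs from each input line, the framework groups the pairs by key, and a "reduce" step sums the counts. Hadoop runs these same two phases, but spread across a cluster.

```java
import java.util.*;
import java.util.stream.*;

public class WordCountSketch {

    // Map phase: one input line -> a list of (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce phase: group the pairs by word and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = List.of("hadoop stores data", "hadoop processes data");
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String line : input) {
            mapped.addAll(map(line));
        }
        System.out.println(reduce(mapped));
        // {data=2, hadoop=2, processes=1, stores=1}
    }
}
```

In real Hadoop the map and reduce functions run on many machines at once, and the grouping in between (the "shuffle") is done by the framework over the network.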

Hadoop beat a supercomputer at the terabyte sort:
A Hadoop cluster sorted 1 terabyte of data in 209 seconds, beating the previous record of 297 seconds in the annual general-purpose (Daytona) terabyte sort benchmark. The sort benchmark, created in 1998 by Jim Gray, specifies the input data (10 billion 100-byte records), which must be completely sorted and written to disk. This was the first time that either a Java program or an open-source program had won.

Europe’s Largest Ad Targeting Platform Uses Hadoop:
Europe's largest ad-targeting company receives over 100 GB of data daily. Using a classical solution such as an RDBMS, they needed 5 days to analyze it and generate reports, so they were constantly running a week behind. After a lot of research they adopted Hadoop, and now they are able to process the data and generate reports within one hour. That is the beauty of Hadoop.

Leading Distributions of Hadoop:

1. Apache Hadoop:
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
Apache Hadoop Offers:

  • Hadoop Common: The common utilities that support the other Hadoop subprojects.
  • HDFS: A distributed file system that provides high throughput access to application data.
  • MapReduce: A software framework for distributed processing of large data sets on compute clusters.
  • Avro: A data serialization system.
  • Chukwa: A data collection system for managing large distributed systems.
  • HBase: A scalable, distributed database that supports structured data storage for large tables.
  • Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
  • Mahout: A scalable machine learning and data mining library.
  • Pig: A high-level data-flow language and execution framework for parallel computation.
  • ZooKeeper: A high-performance coordination service for distributed applications.

2. Cloudera Hadoop:
Cloudera’s Distribution for Apache Hadoop (CDH) sets a new standard for Hadoop-based data management platforms. It is the most comprehensive platform available today and significantly accelerates deployment of Apache Hadoop in your organization. CDH is based on the most recent stable version of Apache Hadoop, and it includes useful patches backported from future releases, as well as improvements Cloudera has developed for its customers.

Cloudera Hadoop Offers:
  • HDFS – Self healing distributed file system
  • MapReduce – Powerful, parallel data processing framework
  • Hadoop Common – a set of utilities that support the Hadoop subprojects
  • HBase – Hadoop database for random read/write access
  • Hive – SQL-like queries and tables on large datasets
  • Pig – Dataflow language and compiler
  • Oozie – Workflow for interdependent Hadoop jobs
  • Sqoop – Integrate databases and data warehouses with Hadoop
  • Flume – Highly reliable, configurable streaming data collection
  • Zookeeper – Coordination service for distributed applications
  • Hue – User interface framework and SDK for visual Hadoop applications

Architecture of Hadoop:
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems; however, the differences are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data.

(Architecture diagram source: Apache)

Name Node:
The NameNode manages the file system namespace, metadata, and access control. There is exactly one NameNode in each cluster; the NameNode is the master and the DataNodes are its slaves. It holds all of the information about the data (i.e., the metadata).

Data Node:
A DataNode holds the actual file system data. Each DataNode manages its own locally attached storage (i.e., the node's hard disks) and stores a copy of some or all of the blocks in the file system. There are one or more DataNodes in each cluster.
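The division of labor between the NameNode and the DataNodes can be sketched as a plain-Java toy model (this is illustrative only, not Hadoop code; block size, node names, and the round-robin placement are made up for the example): a file is split into fixed-size blocks, each block is stored on several DataNodes (the replication factor), and the NameNode keeps the map from blocks to the nodes that hold them.

```java
import java.util.*;

public class HdfsLayoutSketch {

    // NameNode-style metadata: block id -> the DataNodes holding a replica.
    // Assumes replication <= dataNodes.size(), so replicas land on distinct nodes.
    static Map<Integer, List<String>> placeBlocks(long fileSize, long blockSize,
                                                  List<String> dataNodes, int replication) {
        int numBlocks = (int) ((fileSize + blockSize - 1) / blockSize); // round up
        Map<Integer, List<String>> blockMap = new LinkedHashMap<>();
        for (int b = 0; b < numBlocks; b++) {
            List<String> replicas = new ArrayList<>();
            // Place each replica on a different node, round-robin style.
            for (int r = 0; r < replication; r++) {
                replicas.add(dataNodes.get((b + r) % dataNodes.size()));
            }
            blockMap.put(b, replicas);
        }
        return blockMap;
    }

    public static void main(String[] args) {
        // A 200 MB file with 64 MB blocks needs 4 blocks (3 full + 1 partial).
        Map<Integer, List<String>> layout = placeBlocks(
                200L * 1024 * 1024, 64L * 1024 * 1024,
                List.of("datanode1", "datanode2", "datanode3"), 3);
        layout.forEach((block, nodes) ->
                System.out.println("block " + block + " -> " + nodes));
    }
}
```

Only the metadata (the block map) lives on the NameNode; the block contents themselves live on the DataNodes' local disks, which is why losing the NameNode is so much more serious than losing a single DataNode.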

Install / Deploy Hadoop:
Hadoop can be installed in 3 modes:
1. Standalone mode:
To deploy Hadoop in standalone mode, we just need to set the path of JAVA_HOME. In this mode there is no need to start the daemons and no need to format the NameNode, as data is saved on the local disk.
2. Pseudo-distributed mode:
In this mode all of the daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) run on a single machine.
3. Distributed mode:
In this mode the master daemons (NameNode, JobTracker, and optionally SecondaryNameNode) run on the master node, while the slave daemons (DataNode and TaskTracker) run on each slave node.
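As a sketch of what pseudo-distributed configuration looked like in the Hadoop releases of this era, the three conf files below are the conventional minimal setup (the port numbers shown are the customary defaults, not requirements):

```xml
<!-- conf/core-site.xml: point the default file system at a local HDFS instance -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml: only one DataNode, so keep a single replica per block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml: run the JobTracker locally as well -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

With these in place, the NameNode is formatted once and all five daemons are started on the one machine, giving a faithful miniature of a real cluster.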


