Saturday, July 6, 2013

What is Hadoop?

The Need for Hadoop:


Big Data:

Today we are surrounded by data; in fact, it would not be wrong to say that we live in the data age. The amount of data is growing exponentially, and as it grows it becomes more and more challenging for organisations to store and analyse it. The success of an organisation depends largely on its ability to extract valuable information from this huge amount of data. To deal with this exploding data, Hadoop takes the approach of scaling out rather than scaling up, i.e., using more computers rather than bigger computers.

Data Storage:

The access speeds of hard drives have not increased in proportion to their storage capacities over the years. As a result, it takes hours to read an entire hard drive, and even longer to write it. This problem can be addressed by dividing the data across multiple hard drives and reading from them in parallel.
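As a toy illustration of this idea, here is a minimal Java sketch that reads several chunk files concurrently using a thread pool. The chunk paths are hypothetical, and the sketch simply assumes each chunk sits on a separate physical drive; real systems like HDFS manage this placement for you.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelRead {
        public static void main(String[] args) throws Exception {
            // Hypothetical chunk files, assumed to sit on separate physical drives
            List<Path> chunks = Arrays.asList(
                    Paths.get("/drive1/data.part0"),
                    Paths.get("/drive2/data.part1"),
                    Paths.get("/drive3/data.part2"));

            // One thread per chunk, so the reads happen concurrently
            ExecutorService pool = Executors.newFixedThreadPool(chunks.size());
            List<Future<byte[]>> pending = new ArrayList<>();
            for (Path chunk : chunks) {
                pending.add(pool.submit(() -> Files.readAllBytes(chunk)));
            }

            long totalBytes = 0;
            for (Future<byte[]> f : pending) {
                totalBytes += f.get().length; // waits for that chunk to finish
            }
            pool.shutdown();
            System.out.println("Read " + totalBytes + " bytes from " + chunks.size() + " drives");
        }
    }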

Parallel read and write operations raise new issues:
  1. Handling hardware failures: Hadoop ships with its own distributed filesystem, called HDFS, which deals with hardware failures by replicating data across machines. We'll learn more about HDFS in upcoming posts.
  2. Combining data from different drives: Most analyses need data from many different hard drives. Hadoop uses the MapReduce programming model, which abstracts this problem away by transforming it into a computation over key-value pairs. We'll cover this programming model in upcoming posts; for now, all you need to know is that there are two phases of computation, mapping and reducing, and that the mixing of data (the "shuffle") happens at the interface between these two phases (see the word-count sketch below).
Thus, in short, Hadoop provides two components, HDFS and MapReduce, which together give us a reliable, shared storage and analysis system.
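To make the two phases concrete, here is a minimal sketch of the classic word-count job, written against Hadoop's Java MapReduce API. The mapper emits a (word, 1) pair for every word it sees, the framework shuffles all pairs with the same word to a single reducer, and the reducer sums the counts. Treat this as an illustrative sketch, not a production job; the input and output paths come from the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapping phase: emit (word, 1) for every word in the input
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducing phase: sum the 1s for each word delivered by the shuffle
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You would package this class into a jar and run it with something like "hadoop jar wordcount.jar WordCount /input /output" (the jar name and paths here are just placeholders).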

Hadoop Introduction:


Hadoop is a framework for distributed computing, designed to process big data. Some of the key features of Hadoop are:

  1. Accessibility: Hadoop runs on large clusters of commodity hardware.
  2. Robustness: Hadoop handles failures by replicating data (a short HDFS sketch follows this list).
  3. Scalability: Hadoop scales out linearly; adding machines adds capacity.
  4. Simplicity: Hadoop allows users to write parallel programs quickly.
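As a taste of the storage side, here is a minimal sketch that writes a small file to HDFS using the Java FileSystem API and asks for a replication factor of 3, so HDFS keeps three copies of each block on different machines. The NameNode address and the file path are assumptions for illustration only.

    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumption: the NameNode is reachable at hdfs://namenode:9000
            conf.set("fs.defaultFS", "hdfs://namenode:9000");
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical path; create the file with a replication factor of 3
            Path file = new Path("/user/demo/hello.txt");
            try (OutputStream out = fs.create(file, (short) 3)) {
                out.write("hello hdfs".getBytes("UTF-8"));
            }

            // Confirm the replication factor HDFS recorded for the file
            System.out.println("Replication: " + fs.getFileStatus(file).getReplication());
        }
    }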

The image below shows how users interact with a Hadoop cluster.


Client interaction with a Hadoop cluster

In my next post I'll show how Hadoop compares with other systems.
