Thursday, 26 December 2019

Hadoop - Understanding Hadoop Architecture


Hadoop is an open-source framework, written in Java, that allows distributed processing of large datasets across clusters of computers.
Hadoop is designed to scale from a single server up to thousands of machines, each offering local computation and storage.

The Hadoop architecture has four modules.

  1. Hadoop Common: the Java libraries that provide the filesystem and OS-level abstractions required to start Hadoop.
  2. Hadoop YARN: a framework for job scheduling and cluster resource management.
  3. Hadoop Distributed File System (HDFS): provides high-throughput access to application data and is suitable for applications with large data sets (see the short HDFS example after this list).
  4. Hadoop MapReduce: a YARN-based system for parallel processing of large data sets.
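
To make item 3 a bit more concrete, here is a minimal sketch of writing and reading a file through the HDFS Java client (org.apache.hadoop.fs.FileSystem). The file path used below is purely illustrative, and the NameNode address is assumed to come from the core-site.xml found on the classpath.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHelloWorld {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS (the NameNode address) from core-site.xml on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path path = new Path("/tmp/hello.txt"); // hypothetical path, for illustration only

    // Write a small file into HDFS.
    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back, line by line.
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}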

Question: What is MapReduce?
MapReduce is a software framework for easily writing applications that process large amounts of data in parallel on large clusters. It consists of a single master JobTracker and one slave TaskTracker per cluster node. The JobTracker is responsible for resource management and for scheduling tasks; the TaskTracker is responsible for executing them.
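
As an illustration, here is a rough sketch of the map and reduce sides of the standard word-count example, written against the org.apache.hadoop.mapreduce API (Hadoop 2.x). Nothing here is specific to a particular cluster: the mapper emits a (word, 1) pair for every word it sees, and the reducer sums those counts per word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map side: split each input line into words and emit (word, 1).
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce side: sum all the 1s emitted for the same word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
}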




Question: How does Hadoop work at a very basic level?
  1. A client submits a job to Hadoop.
  2. The input and output files are placed in the distributed file system (HDFS), and the Hadoop Common libraries are used to initialize Hadoop.
  3. The map and reduce functions are executed on the data.
  4. Tasks are scheduled by the JobTracker.
  5. Tasks are executed by the TaskTrackers.
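
A rough sketch of steps 1 and 2 from the client's point of view: a driver class wires the mapper and reducer sketched earlier into a Job, points it at input and output paths in HDFS, and submits it; scheduling and execution of the individual tasks (steps 4 and 5) are then handled by the framework. The class name and paths below are assumptions for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");            // step 1: job submission by the client
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);      // map function (step 3)
    job.setReducerClass(WordCount.IntSumReducer.class);       // reduce function (step 3)
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Step 2: input and output live in the distributed file system (hypothetical paths).
    FileInputFormat.addInputPath(job, new Path("/user/demo/input"));
    FileOutputFormat.setOutputPath(job, new Path("/user/demo/output"));

    // Steps 4 and 5: scheduling and execution of tasks are handled by the framework.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Once compiled and packaged into a jar, such a driver is typically launched with the hadoop jar command on a cluster node or gateway machine.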