Big Data Hadoop

Course Name: Big Data Hadoop

Big Data Hadoop Training in Pune

Mind Kraftors (SQTL) is a premier Big Data Hadoop training institute in Pune for individual and corporate training. Many MNCs prefer hiring trained professionals from SQTL, especially when it comes to Big Data Hadoop.

Big Data Hadoop Course Duration – 160 Hours

Big Data Hadoop Training Course Prerequisite

Basic knowledge of software programming – HTML, Java, or any other programming language.

Big Data Hadoop Training Course Syllabus

Hadoop Introduction

  • Why we need Hadoop
  • Why Hadoop is in demand in the market nowadays
  • Where expensive SQL-based tools fall short
  • Key points on why Hadoop is the leading tool in the current IT industry
  • Definition of Big Data
  • Hadoop nodes
  • Introduction to Hadoop Release-1
  • Hadoop Daemons in Hadoop Release-1
  • Introduction to Hadoop Release-2
  • Hadoop Daemons in Hadoop Release-2
  • Hadoop Cluster and Racks
  • Hadoop Cluster Demo
  • The two categories of Hadoop projects:
  • New projects built on Hadoop
  • Migration of existing tools and technologies to Hadoop (clients asking for POCs)
  • How an open-source tool (Hadoop) can run, in less time, jobs that take far longer on traditional systems
  • Hadoop Storage – HDFS (Hadoop Distributed file system)
  • Hadoop processing framework (MapReduce / YARN)
  • Alternatives to MapReduce
  • Why NoSQL is in high demand compared to SQL
  • Distributed warehouse for HDFS
  • Most in-demand tools that run on top of the Hadoop ecosystem for specific requirements and scenarios
  • Data import/export tools

Hadoop Installation and Hands-on with a Hadoop Machine

Hadoop Installation

  • Introduction to the Hadoop FS and processing environment UIs
  • How to read and write files (see the HDFS API sketch after this list)
  • Basic Unix commands for Hadoop
  • Hadoop FS shell
  • Hadoop releases practical
  • Hadoop daemons practical
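
The reading and writing of files mentioned above can also be done programmatically. Below is a minimal, hedged sketch using Hadoop's FileSystem Java API; the NameNode address (hdfs://localhost:9000) and the /user/demo path are placeholders for a local pseudo-distributed setup, not values from the course material.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed address of a local pseudo-distributed NameNode; adjust for your cluster.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/hello.txt");

            // Write a small text file to HDFS.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("Hello HDFS\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back and print to stdout.
            try (FSDataInputStream in = fs.open(file)) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }
}
```

The same operations correspond to `hadoop fs -put` and `hadoop fs -cat` in the FS shell, which is what the hands-on sessions use.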

ETL Tool (Pig) Introduction Level-1 (Basics)

  • Pig Introduction
  • Why Pig when MapReduce is already there?
  • How Pig is different from programming languages
  • Pig Data Flow Introduction
  • How schema is optional in Pig
  • Pig Data Types
  • Pig Commands – Load, Store, Describe, Dump (see the sketch after this list)
  • MapReduce jobs started by Pig commands
  • Execution plan
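
To give a feel for Load, Store, Describe, and Dump, here is a small sketch. In practice these commands are usually typed into the Grunt shell as Pig Latin; to keep all examples here in one language, this sketch drives the same statements through Pig's Java embedding API (PigServer). The employees.csv file and its schema are assumptions for illustration only.

```java
import java.util.Iterator;

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.data.Tuple;

public class PigBasics {
    public static void main(String[] args) throws Exception {
        // Local mode runs against the local file system;
        // ExecType.MAPREDUCE would launch real MapReduce jobs on a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // LOAD: read a comma-separated file with an optional schema.
        pig.registerQuery("emp = LOAD 'employees.csv' USING PigStorage(',') "
                + "AS (name:chararray, dept:chararray, salary:int);");

        // DESCRIBE: print the schema of the relation.
        pig.dumpSchema("emp");

        // DUMP: iterate over the relation (this is what actually triggers execution).
        Iterator<Tuple> rows = pig.openIterator("emp");
        while (rows.hasNext()) {
            System.out.println(rows.next());
        }

        // STORE: write the relation back out to a directory.
        pig.store("emp", "emp_out");
    }
}
```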

ETL Tool (Pig) Level-2 (Complex)

  • Pig UDFs (see the Java UDF sketch after this list)
  • Pig Use cases
  • Pig Assignment
  • Complex Use cases on Pig
  • XML Data Processing in Pig
  • Structured Data processing in Pig
  • Semi-structured data processing in Pig
  • Pig Advanced Assignment
  • Real time scenarios on Pig
  • When we should use Pig
  • When we shouldn’t use Pig
  • Live examples of Pig Use cases
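
As a concrete example of a Pig UDF, here is a minimal Java EvalFunc sketch. The package name myudfs, the class name, and the jar name are illustrative only, not from the course material.

```java
package myudfs;

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A trivial Pig UDF that upper-cases a chararray field.
// Register it in a Pig script with:  REGISTER myudfs.jar;
// and call it as:                    B = FOREACH A GENERATE myudfs.UpperCase(name);
public class UpperCase extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return input.get(0).toString().toUpperCase();
    }
}
```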

Hive Warehouse (Introduction to Hive Warehouse and Differentiation between SQL based Data warehouse and Hive) Level-1 (Basics)

  • Hive Introduction
  • Metadata storage and the metastore
  • Introduction to the Derby database
  • Hive Data types
  • HQL (Hive Query Language)
  • DDL, DML, and the sub-languages of Hive
  • Internal, external, and temporary tables in Hive (see the JDBC sketch after this list)
  • Differences between a SQL-based data warehouse and Hive
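
To make the internal vs. external table distinction concrete, the hedged sketch below creates one of each through HiveServer2's JDBC interface. The connection URL, table names, and HDFS location are assumptions for illustration, not part of the course material.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveTables {
    public static void main(String[] args) throws Exception {
        // Load the HiveServer2 JDBC driver (hive-jdbc must be on the classpath).
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder URL for a local, unsecured HiveServer2.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection con = DriverManager.getConnection(url, "", "");
             Statement stmt = con.createStatement()) {

            // Internal (managed) table: Hive owns the data; DROP TABLE deletes it.
            stmt.execute("CREATE TABLE IF NOT EXISTS emp_managed "
                    + "(name STRING, dept STRING, salary INT) "
                    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");

            // External table: Hive only tracks metadata; DROP TABLE keeps the files.
            stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS emp_external "
                    + "(name STRING, dept STRING, salary INT) "
                    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
                    + "LOCATION '/user/demo/emp_external'");

            // A simple HQL query.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT dept, COUNT(*) FROM emp_managed GROUP BY dept")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }
}
```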

Hive Level-2 (Complex)

  • Hive releases
  • Why Hive is not the best solution for OLTP
  • OLAP in Hive
  • Partitioning
  • Bucketing
  • Hive Architecture
  • Thrift Server
  • Hue Interface for Hive
  • How to analyze data using Hive script
  • Differentiation between Hive and Impala
  • UDFs in Hive (see the Java UDF sketch after this list)
  • Complex Use cases in Hive
  • Hive Advanced Assignment
  • Real time scenarios of Hive
  • POC on Pig and Hive, with real-time data sets and problem statements
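
A minimal sketch of a Hive UDF in Java, using the classic UDF base class rather than GenericUDF; the class, function, and table names are illustrative only.

```java
package myudfs;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A trivial Hive UDF that masks all but the last four characters of a string.
// In Hive:  ADD JAR myudfs.jar;
//           CREATE TEMPORARY FUNCTION mask_id AS 'myudfs.MaskId';
//           SELECT mask_id(customer_id) FROM customers;
public class MaskId extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        String s = input.toString();
        if (s.length() <= 4) {
            return new Text(s);
        }
        // Replace every character except the last four with '*'.
        String masked = s.substring(0, s.length() - 4).replaceAll(".", "*")
                + s.substring(s.length() - 4);
        return new Text(masked);
    }
}
```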

MapReduce Level-1 (Basics)

  • How MapReduce works as a processing framework
  • End-to-end execution flow of a MapReduce job
  • Different tasks in a MapReduce job
  • Why the Reducer is optional while the Mapper is mandatory
  • Introduction to the Combiner
  • Introduction to the Partitioner
  • Programming languages for MapReduce
  • Why Java is preferred for MapReduce programming (see the word-count sketch after this list)
  • POC based on Pig, Hive, HDFS, and MapReduce
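
The classic word-count job below illustrates the points above: the Mapper is mandatory, the Reducer is optional, and the same Reducer class can be reused as a Combiner. This is the standard introductory example, not one of the course POCs.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: mandatory task; emits (word, 1) for every token in the input.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: optional task; sums the counts for each word.
    // The same class also serves as the Combiner (a "mini reducer" run on map output).
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // the combiner is optional
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```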

NoSQL Databases and Introduction to HBase Level-1 (Basics)

  • Introduction to NoSQL
  • Why NoSQL when SQL has been in the market for several years
  • NoSQL-based databases in the market
  • CAP Theorem
  • ACID vs. CAP
  • OLTP solutions with different capabilities
  • Which NoSQL-based solution can handle which specific requirements
  • Examples of companies such as Google, Facebook, and Amazon, and other clients using NoSQL-based databases
  • HBase architecture and column families (see the HBase API sketch after this list)
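
A small, hedged sketch of HBase's column-family model through the Java client API (HBase 1.x+ style). The table name customers and family name info are placeholders, and the table is assumed to already exist (e.g. created in the shell with `create 'customers', 'info'`).

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseBasics {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("customers"))) {

            // Write one row keyed by customer id; columns live inside the 'info' family.
            Put put = new Put(Bytes.toBytes("cust-1001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Asha"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), Bytes.toBytes("Pune"));
            table.put(put);

            // Random read by row key: this is what makes real-time lookups possible.
            Result result = table.get(new Get(Bytes.toBytes("cust-1001")));
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(name));
        }
    }
}
```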

MapReduce Advanced and HBase Level-2 (Complex)

  • How to work with MapReduce in real time
  • Complex MapReduce scenarios
  • Introduction to HBase
  • Introduction to other NoSQL-based data models
  • Drawbacks of Hadoop
  • Why Hadoop alone can't handle real-time processing
  • How HBase and other NoSQL-based tools made real-time processing possible on top of Hadoop
  • HBase table and column family structure
  • HBase versioning concept (see the sketch after this list)
  • HBase flexible schema
  • HBase Advanced
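
To show the versioning concept, the sketch below writes two versions of the same cell and reads them back. It assumes the info column family was created with VERSIONS => 3 (the default keeps only the latest version) and uses explicit timestamps purely for illustration.

```java
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseVersions {
    public static void main(String[] args) throws Exception {
        byte[] fam = Bytes.toBytes("info");
        byte[] qual = Bytes.toBytes("city");
        byte[] row = Bytes.toBytes("cust-1001");

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("customers"))) {

            // Two writes to the same cell: each becomes a separate version,
            // kept up to the column family's VERSIONS setting.
            table.put(new Put(row).addColumn(fam, qual, 1L, Bytes.toBytes("Mumbai")));
            table.put(new Put(row).addColumn(fam, qual, 2L, Bytes.toBytes("Pune")));

            // Ask for up to 3 stored versions of the cell instead of only the latest.
            Get get = new Get(row);
            get.setMaxVersions(3);
            Result result = table.get(get);

            // Versions come back newest first, each with its own timestamp.
            List<Cell> cells = result.getColumnCells(fam, qual);
            for (Cell cell : cells) {
                System.out.println(cell.getTimestamp() + " -> "
                        + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    }
}
```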

ZooKeeper and Sqoop

  • Introduction to ZooKeeper
  • How ZooKeeper helps in the Hadoop ecosystem
  • How to load data from relational storage into Hadoop
  • Sqoop basics
  • Sqoop practical implementation
  • Sqoop alternatives
  • Sqoop connectors
  • Quick revision of previous classes to fill gaps and correct misunderstandings

Flume, Oozie and YARN

  • How to load data into Hadoop when it comes from a web server or other storage without a fixed schema
  • How to load unstructured and semi-structured data into Hadoop
  • Introduction to Flume
  • Hands-on with Flume
  • How to load Twitter data into HDFS
  • Introduction to Oozie
  • How to schedule jobs using Oozie
  • What kinds of jobs can be scheduled using Oozie
  • How to schedule time-based jobs
  • Hadoop releases
  • Where to get Hadoop and other components for installation
  • Introduction to YARN
  • Significance of YARN

Hue, Hadoop Releases Comparison, and Real-Time Hadoop Scenarios Level-2 (Complex)

  • Introduction to Hue
  • How Hue is used in real time
  • Hue Use cases
  • Real time Hadoop usage
  • Real time cluster introduction
  • Hadoop Release 1 vs. Hadoop Release 2 in real time
  • Hadoop real time project
  • Major POC based on combination of several tools of Hadoop Ecosystem
  • Comparison between Pig and Hive real time scenarios
  • Real-time problems and frequently faced errors, with solutions

Spark and Scala Level-1 (Basics)

  • Introduction to Spark
  • Introduction to Scala
  • Basic features of Spark and Scala available in Hue
  • Why demand for Spark is increasing in the market
  • How we can use Spark with the Hadoop ecosystem (see the sketch after this list)
  • Data sets for practice
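
A minimal sketch of using Spark with the Hadoop ecosystem, written against the Spark 2.x Java API (the course itself covers Scala, but the Java API keeps all examples on this page in one language). The HDFS path and the master setting are placeholders.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        // local[*] runs Spark in-process; point the master at YARN to use the Hadoop cluster.
        SparkConf conf = new SparkConf().setAppName("spark-wordcount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {

            // Spark reads directly from HDFS paths, which is how it plugs into the Hadoop ecosystem.
            JavaRDD<String> lines = sc.textFile("hdfs://localhost:9000/user/demo/input.txt");

            JavaPairRDD<String, Integer> counts = lines
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    .reduceByKey(Integer::sum);

            counts.collect().forEach(t -> System.out.println(t._1() + "\t" + t._2()));
        }
    }
}
```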

Spark and Scala Level-2 (Complex)

  • Spark use cases with real-time scenarios
  • Spark practical with advanced concepts
  • The Scala platform with complex use cases
  • Real-time project use case examples based on Spark and Scala
  • How we can reduce
Additional Key Features

  • This training program contains 5 POCs and two real-time projects with problem statements and data sets
  • The training is conducted on a 16-node Hadoop cluster
  • We provide several data sets which you can use for further practice on Hadoop
If you are interested in the Big Data Hadoop training course at Mind Kraftors (SQTL), Aundh, Pune, simply fill in the enquiry form below and we shall get back to you with further details such as course fees, upcoming batches, and timings.

Watch this video for 5 solid reasons to learn Big Data Hadoop in 2017.

Check out these popular articles on Big Data Hadoop

Big Data Hadoop – A Promising Career Move in 2017

Big Data Hadoop: Why Software Professionals & Corporate Should Learn This Technology

Top Reasons to Choose Big Data Analytics as a Career Opportunity in 2017
