Phone: +91 9764995383 / +91 9947161767
Training Clicks


Hadoop Developer / Analyst / SPARK + SCALA / Hadoop (Java + Non-Java) Track

HADOOP DEV + SPARK & SCALA + NoSQL + Splunk + HDFS (Storage) + YARN (Hadoop Processing Framework) + MapReduce using Java (Processing Data) + Apache Hive + Apache Pig + HBASE (Real NoSQL) + Sqoop + Flume + Oozie + Kafka With ZooKeeper + Cassandra + MongoDB + Apache Splunk

Best Bigdata Hadoop Training with 2 Real-time Projects with 1 TB Data set

Duration of the Training: 8 to 10 weekends

Who is Hadoop for?

IT professionals who want to move their profile to a technology that is in demand with clients across almost all domains, for the reasons mentioned below:

  • Hadoop is open source (cost saving / cheaper)
  • Hadoop solves Big Data problems that are very difficult or impossible to solve with expensive commercial tools
  • It can process distributed data, so there is no need to store the entire dataset in centralized storage as other tools require
  • Nowadays many existing tools and technologies are seeing job cuts because clients are moving towards a cheaper and more efficient solution: Hadoop
  • Analysts estimate almost 4.4 million Hadoop-related jobs in the market by next year (see the Gartner link below)

Please refer to the link below: –gartner-says.html

Can I Learn Hadoop If I Don’t Know Java?


It is a big myth that someone who doesn’t know Java can’t learn Hadoop. The truth is that only the MapReduce framework needs Java; apart from MapReduce, all other components are based on different paradigms, e.g. Hive is similar to SQL, HBase is similar to an RDBMS, and Pig is script based.

Only MapReduce requires Java, and many organizations have also started hiring for specific skill sets, such as HBase developers or Pig- and Hive-specific roles. Knowing MapReduce as well simply makes you an all-rounder in Hadoop, ready for any requirement.

Why Hadoop?

  • Solution for BigData Problem
  • Open Source Technology
  • Based on open source platforms
  • Contains several tools covering the entire ETL data-processing framework
  • It can process distributed data, with no need to store the entire dataset in centralized storage as SQL-based tools require

Training Syllabus

HADOOP DEV + SPARK & SCALA + NoSQL + Splunk + HDFS (Storage) + YARN (Hadoop Processing Framework) + MapReduce using Java (Processing Data) + Apache Hive + Apache Pig + HBASE (Real NoSQL) + Sqoop + Flume + Oozie + Kafka With ZooKeeper + Cassandra + MongoDB + Apache Splunk

Big Data

Distributed computing
Data management – Industry Challenges
Overview of Big Data
Characteristics of Big Data
Types of data
Sources of Big Data
Big Data examples
What is streaming data?
Batch vs Streaming data processing
Overview of Analytics
Big data Hadoop opportunities

HDFS (Storage)
Why we need Hadoop
Data centers and Hadoop Cluster overview
Overview of Hadoop Daemons
Hadoop Cluster and Racks
Learning Linux required for Hadoop
Hadoop ecosystem tools overview
Understanding the Hadoop configurations and Installation.
HDFS Daemons – Namenode, Datanode, Secondary Namenode
Hadoop FS and Processing Environment’s UIs
Fault Tolerance
High Availability
Block Replication
How to read and write files
Hadoop FS shell commands
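The HDFS topics above include reading and writing files both programmatically and from the FS shell. As a rough illustration, a minimal Scala sketch against the standard org.apache.hadoop.fs API could look like this; the namenode address and paths are placeholders:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsReadWrite {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // fs.defaultFS is a placeholder; in practice it usually comes from core-site.xml
    conf.set("fs.defaultFS", "hdfs://namenode:8020")
    val fs = FileSystem.get(conf)

    // Write a small file to HDFS
    val out = fs.create(new Path("/user/demo/hello.txt"))
    out.writeBytes("hello hdfs\n")
    out.close()

    // Read it back line by line
    val in = fs.open(new Path("/user/demo/hello.txt"))
    scala.io.Source.fromInputStream(in).getLines().foreach(println)
    in.close()
    fs.close()
  }
}
```

From the FS shell, the equivalent operations would be `hdfs dfs -put hello.txt /user/demo/` and `hdfs dfs -cat /user/demo/hello.txt`.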
YARN (Hadoop Processing Framework)
YARN Daemons – Resource Manager, NodeManager etc.
Job assignment & Execution flow
MapReduce using Java (Processing Data)
The introduction of MapReduce.
MapReduce Architecture
Data flow in MapReduce
Understand Difference Between Block and InputSplit
Role of RecordReader
Basic Configuration of MapReduce
MapReduce life cycle
How MapReduce Works
Writing and Executing the Basic MapReduce Program using Java
Submission & Initialization of MapReduce Job.
File Input/Output Formats in MapReduce Jobs
Text Input Format
Key Value Input Format
Sequence File Input Format
NLine Input Format
Map-side Joins
Reducer-side Joins
Word Count Example (or) Election Vote Count
Covers five to ten MapReduce examples with real-time data (see the sketch below).
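The course writes MapReduce jobs in Java; purely as an illustration of the same Mapper/Reducer API, here is a minimal word count sketch written in Scala (Scala 2.13, running on the same JVM and Hadoop classes). Class names and paths are placeholders:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.jdk.CollectionConverters._

// Mapper: emit (word, 1) for every token in the input line
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one  = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { w =>
      word.set(w)
      context.write(word, one)
    }
  }
}

// Reducer: sum the counts for each word
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
    val sum = values.asScala.map(_.get).sum
    context.write(key, new IntWritable(sum))
  }
}

object WordCount {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word count")
    job.setJarByClass(classOf[TokenMapper])
    job.setMapperClass(classOf[TokenMapper])
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))   // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args(1))) // output directory (must not exist)
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}
```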
Apache Hive
Data warehouse basics
OLTP vs OLAP Concepts
Hive Architecture
Metastore DB and Metastore Service
Hive Query Language (HQL)
Managed and External Tables
Partitioning & Bucketing
Query Optimization
Hiveserver2 (Thrift server)
JDBC, ODBC connections to Hive
Hive Transactions
Hive UDFs
Working with Avro Schema and AVRO file format
Hands-on with multiple real-time datasets (see the sketch below).
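Since the module covers HiveServer2 and JDBC connectivity, a minimal Scala sketch of a Hive JDBC session is shown below; the host, credentials, table and query are placeholders:

```scala
import java.sql.DriverManager

object HiveJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Hive JDBC driver (hive-jdbc jar must be on the classpath)
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // HiveServer2 JDBC URL; host, port and database are placeholders
    val conn = DriverManager.getConnection("jdbc:hive2://hiveserver-host:10000/default", "hiveuser", "")
    val stmt = conn.createStatement()

    // Example HQL: a partitioned table and a simple aggregate query
    stmt.execute(
      """CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE)
        |PARTITIONED BY (sale_date STRING)
        |STORED AS ORC""".stripMargin)

    val rs = stmt.executeQuery("SELECT sale_date, SUM(amount) FROM sales GROUP BY sale_date")
    while (rs.next()) println(s"${rs.getString(1)} -> ${rs.getDouble(2)}")

    rs.close(); stmt.close(); conn.close()
  }
}
```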

Apache Pig
Advantage of Pig over MapReduce
Pig Latin (Scripting language for Pig)
Schema and Schema-less data in Pig
Structured, semi-structured data processing in Pig
Pig UDFs
Pig vs Hive Use case
Hands-on with two more examples: daily use case data analysis (Google data) and analysis on a date/time dataset
Introduction to HBASE
Basic Configurations of HBASE
Fundamentals of HBase
What is NoSQL?
HBase Data Model
Table and Row.
Column Family and Column Qualifier.
Cell and its Versioning
Categories of NoSQL Databases
Key-Value Database
Document Database
Column Family Database
HBASE Architecture
Region Servers
How HBase differs from RDBMS
HDFS vs. HBase
Client-side buffering or bulk uploads
HBase Designing Tables
HBase Operations
Live Dataset
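As a small illustration of the HBase client operations listed above (put/get against a table with a column family), here is a minimal Scala sketch using the standard HBase client API; the table, column family and values are placeholders:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseDemo {
  def main(args: Array[String]): Unit = {
    // Cluster details are read from hbase-site.xml on the classpath
    val conn  = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val table = conn.getTable(TableName.valueOf("users"))

    // Write one cell: row key "u1", column family "info", qualifier "name"
    val put = new Put(Bytes.toBytes("u1"))
    put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Asha"))
    table.put(put)

    // Read it back
    val result = table.get(new Get(Bytes.toBytes("u1")))
    val name   = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
    println(s"name = $name")

    table.close(); conn.close()
  }
}
```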
Sqoop
Sqoop commands
Sqoop practical implementation
Importing data to HDFS
Importing data to Hive
Exporting data to RDBMS
Sqoop connectors
Flume
Flume commands
Configuration of Source, Channel and Sink
Fan-out flume agents
How to load data into Hadoop that is coming from a web server or other storage
How to load streaming data from Twitter into HDFS
Oozie
Action Node and Control Flow node
Designing workflow jobs
How to schedule jobs using Oozie
How to schedule jobs which are time based
Oozie Conf file
Scala
Syntax formation, Datatypes, Variables
Classes and Objects
Basic Types and Operations
Functional Objects
Built-in Control Structures
Functions and Closures
Composition and Inheritance
Scala’s Hierarchy
Packages and Imports
Working with Lists, Collections
Abstract Members
Implicit Conversions and Parameters
For Expressions Revisited
The Scala Collections API
Modular Programming Using Objects
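To give a quick flavour of the Scala topics above (case classes, functions and closures, collections, for-expressions), here is a tiny self-contained sketch; the data is invented purely for illustration:

```scala
// Case class: a concise data type with built-in equals/hashCode/toString
case class Employee(name: String, dept: String, salary: Double)

object ScalaBasics {
  def main(args: Array[String]): Unit = {
    val staff = List(
      Employee("Asha", "BigData", 90000),
      Employee("Ravi", "BigData", 75000),
      Employee("Meena", "QA", 60000)
    )

    // Function value + closure over the `bonus` variable
    val bonus = 0.10
    val withBonus: Employee => Double = e => e.salary * (1 + bonus)

    // For-expression over a collection with a guard
    val bigDataPay = for (e <- staff if e.dept == "BigData") yield withBonus(e)
    println(bigDataPay)

    // Common collection operations: groupBy and map
    val byDept = staff.groupBy(_.dept).map { case (d, es) => d -> es.map(_.salary).sum }
    println(byDept)
  }
}
```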
Spark
Architecture and Spark APIs
Spark components
Spark master
Significance of Spark context
Concept of Resilient distributed datasets (RDDs)
Properties of RDD
Creating RDDs
Transformations in RDD
Actions in RDD
Saving data through RDD
Key-value pair RDD
Invoking Spark shell
Loading a file in shell
Performing some basic operations on files in Spark shell
Spark application overview
Job scheduling process
DAG scheduler
RDD graph and lineage
Life cycle of spark application
How to choose between the different persistence levels for caching RDDs
Submit in cluster mode
Web UI – application monitoring
Important spark configuration properties
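For the Spark core topics above (SparkContext, RDD creation, transformations, actions, caching, saving), here is a minimal Scala word count sketch; the master setting and HDFS paths are placeholders that spark-submit would normally supply in cluster mode:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddWordCount {
  def main(args: Array[String]): Unit = {
    // App name and master are placeholders; in cluster mode they come from spark-submit
    val sc = new SparkContext(new SparkConf().setAppName("rdd-word-count").setMaster("local[*]"))

    val lines = sc.textFile("hdfs:///user/demo/input.txt")   // create an RDD from a file
    val counts = lines
      .flatMap(_.split("\\s+"))                              // transformation
      .map(word => (word, 1))                                // key-value pair RDD
      .reduceByKey(_ + _)                                    // transformation
      .cache()                                               // pick a persistence level

    counts.take(10).foreach(println)                         // action
    counts.saveAsTextFile("hdfs:///user/demo/word-counts")   // saving data through an RDD
    sc.stop()
  }
}
```

Submitted to a cluster, the same code would be packaged as a jar and launched through spark-submit with a YARN master rather than the hard-coded local master.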
Spark SQL overview
Spark SQL demo
SchemaRDD and data frames
Joining, Filtering and Sorting Dataset
Spark SQL example program demo and code walk through
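And for the Spark SQL topics (DataFrames, joining, filtering, sorting), a small Scala sketch built on an in-memory dataset; the column names and values are invented for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-sql-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Build small DataFrames in place of real datasets
    val orders = Seq(("o1", "c1", 250.0), ("o2", "c2", 40.0), ("o3", "c1", 90.0))
      .toDF("order_id", "customer_id", "amount")
    val customers = Seq(("c1", "Asha"), ("c2", "Ravi")).toDF("customer_id", "name")

    // Joining, filtering and sorting
    val result = orders.join(customers, "customer_id")
      .filter($"amount" > 50)
      .groupBy($"name")
      .agg(sum($"amount").as("total"))
      .orderBy(desc("total"))

    result.show()
    spark.stop()
  }
}
```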
Kafka With ZooKeeper
What is Kafka
Cluster architecture With Hands On
Basic operation
Integration with spark
Integration with Camel
Additional Configuration
Security and Authentication
Apache Kafka With Spring Boot Integration
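For the Kafka module, a minimal Scala producer sketch using the standard Kafka clients API; the broker address and topic name are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerDemo {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")  // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Send a few messages to a (placeholder) topic; a consumer or Spark job reads them downstream
    (1 to 5).foreach { i =>
      producer.send(new ProducerRecord[String, String]("events", s"key-$i", s"event number $i"))
    }
    producer.flush()
    producer.close()
  }
}
```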
Apache Splunk
Introduction & Installing Splunk
Play with Data and Feed the Data
Searching & Reporting
Visualizing Your Data
Advanced Splunk Concepts
Cassandra + MongoDB
Introduction to NoSQL
What is NoSQL & NoSQL Data Types
System Setup Process
MongoDB Introduction
MongoDB Installation
Database Creation in MongoDB
ACID and the CAP Theorem
What is JSON and what are its features?
JSON and XML Difference
CRUD Operations – Create, Read, Update, Delete
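As a small illustration of the MongoDB CRUD operations listed above, here is a minimal Scala sketch using the MongoDB Java (sync) driver; the connection string, database, collection and field names are placeholders:

```scala
import com.mongodb.client.MongoClients
import com.mongodb.client.model.Filters
import org.bson.Document

object MongoCrudDemo {
  def main(args: Array[String]): Unit = {
    val client = MongoClients.create("mongodb://localhost:27017") // placeholder URI
    val col = client.getDatabase("training").getCollection("students")

    // Create
    col.insertOne(new Document("name", "Asha").append("course", "Hadoop"))
    // Read
    val doc = col.find(Filters.eq("name", "Asha")).first()
    if (doc != null) println(doc.toJson)
    // Update
    col.updateOne(Filters.eq("name", "Asha"), new Document("$set", new Document("course", "Spark")))
    // Delete
    col.deleteOne(Filters.eq("name", "Asha"))

    client.close()
  }
}
```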
Cassandra Introduction
Cassandra – Different Data Supports
Cassandra – Architecture in Detail
Cassandra’s SPOF (Single Point of Failure) & Replication Factor
Cassandra – Installation & Different Data Types
Database Creation in Cassandra
Tables Creation in Cassandra
Cassandra Database and Table Schema and Data
Update, Delete, Insert Data in Cassandra Table
Insert Data From File in Cassandra Table
Add & Delete Columns in Cassandra Table
Cassandra Collections
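Finally, for the Cassandra topics above (keyspace and table creation, inserts, updates, collection columns), here is a minimal Scala sketch using the DataStax Java driver 4.x; the contact point, datacenter name, keyspace and table are placeholders:

```scala
import java.net.InetSocketAddress
import com.datastax.oss.driver.api.core.CqlSession

object CassandraDemo {
  def main(args: Array[String]): Unit = {
    val session = CqlSession.builder()
      .addContactPoint(new InetSocketAddress("127.0.0.1", 9042)) // placeholder node
      .withLocalDatacenter("datacenter1")                        // placeholder datacenter name
      .build()

    // Keyspace with a replication factor, then a table with a collection column
    session.execute(
      """CREATE KEYSPACE IF NOT EXISTS training
        |WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""".stripMargin)
    session.execute(
      """CREATE TABLE IF NOT EXISTS training.students (
        |  id int PRIMARY KEY, name text, courses set<text>)""".stripMargin)

    // Insert, update and read back
    session.execute("INSERT INTO training.students (id, name, courses) VALUES (1, 'Asha', {'Hadoop'})")
    session.execute("UPDATE training.students SET courses = courses + {'Spark'} WHERE id = 1")
    val row = session.execute("SELECT name, courses FROM training.students WHERE id = 1").one()
    println(s"${row.getString("name")} -> ${row.getSet("courses", classOf[String])}")

    session.close()
  }
}
```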