

Learning Hadoop 2

An introduction to storing, structuring, and analyzing data at scale with Hadoop

Description

Hadoop emerged in response to the proliferation of data collected by organizations, offering a robust solution for storing, processing, and analyzing what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to perform these tasks at scale, distributed across clusters that can span thousands of machines.


Learning Hadoop 2 introduces you to the powerful system synonymous with Big Data, demonstrating how to create an instance and leverage the Hadoop ecosystem's many components to store, process, manage, and query massive data sets with confidence.


We open this course by providing an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, and Hive, before installing and configuring our Hadoop environment. We also take a look at Hue, a graphical user interface for Hadoop.


We will then discover HDFS, the distributed file system Hadoop uses to store data. We will learn how to import and export data, both manually and automatically. Afterward, we turn our attention toward running computations using MapReduce and get to grips with Hadoop’s scripting language, Pig. Lastly, we will siphon data from HDFS into Hive and demonstrate how it can be used to structure and query data sets.


Curriculum

  • The Hadoop Ecosystem
    The Course Overview
    This video offers an overview of the course.
    1:52
    Overview of HDFS and YARN
    This video will introduce you to the basic concepts of Hadoop Distributed File System (HDFS) and Yet Another Resource Negotiator (YARN), which are the two core components of Hadoop. • First, we will cover HDFS, the file system Hadoop uses to store data • Next, we will cover YARN, the component of Hadoop that allocates resources such as CPU time and memory to submitted jobs
    7:25
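    As a quick taste of the two components from the command line (not part of the video; this assumes a running cluster such as the Quickstart VM installed later in the course):
      hdfs dfs -ls /          # browse the root of the distributed file system
      yarn application -list  # list jobs YARN is currently managing resources for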
    Overview of Sqoop and Flume
    An introduction to the basic concepts of Sqoop and Flume, two tools for the automation of data import into Hadoop. • First, we will talk about Sqoop • Next, we go over Flume
    3:18
    Overview of MapReduce
    An introduction to the basic concepts of MapReduce, the computation engine of Hadoop. • Discuss the history and concept of MapReduce • Look at the word count example
    3:39
    Overview of Pig
    An introduction to the basic concepts of Pig, a scripting language for Hadoop. • Discuss what Pig is • Take a look at the “word count” example
    3:05
    Overview of Hive
    An introduction to the basic concepts of Hive, Hadoop’s data warehousing solution. • Cover the basic concept of Hive • Take a look at internal versus external tables • Understand how Hive works with metadata • Discuss HiveQL
    6:34
  • Installing and Configuring Hadoop
    Downloading and Installing Hadoop
    Get a working Hadoop installation onto your laptop or server; you will need it in order to follow the rest of the course. • Download the Quickstart VM from Cloudera.com • Start the VM
    2:54
    Exploring Hue
    Exploring Hue, a GUI for Hadoop, to get familiar with the interface. • Navigate to the Hue page • Explore the file browser and query editor drop-downs • Create a new user
    5:25
  • Data Import and Export
    Manual Import
    This video will cover how to get data into HDFS manually. • Use Hue to pull data from the local file system to HDFS • Use the command line to move data from the local file system onto HDFS
    4:34
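    A minimal command-line sketch of this kind of manual import (the file name, HDFS paths, and the cloudera user are illustrative placeholders, not values from the video):
      # Create a target directory in HDFS and copy a local file into it
      hdfs dfs -mkdir -p /user/cloudera/imports
      hdfs dfs -put ~/sales.csv /user/cloudera/imports/
      # Confirm the file arrived and peek at its contents
      hdfs dfs -ls /user/cloudera/imports
      hdfs dfs -cat /user/cloudera/imports/sales.csv | head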
    Importing from Databases Using Sqoop
    This video will explain how to get data from databases into HDFS. • Create a database in MySQL and load data • Use Sqoop command line to transfer data to HDFS
    6:28
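    A sketch of what such a Sqoop import can look like (the connection string, credentials, and table name are placeholders):
      # Pull one MySQL table into HDFS; Sqoop runs the transfer as a MapReduce job
      sqoop import \
        --connect jdbc:mysql://localhost/retail_db \
        --username retail_user --password '********' \
        --table orders \
        --target-dir /user/cloudera/orders \
        --num-mappers 1
      # Sqoop writes the rows as part-m-* files in the target directory
      hdfs dfs -ls /user/cloudera/orders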
    Using Flume to Import Streaming Data
    This video will cover how to import streaming data using the Flume tool. • Modify the Flume Agent configuration file • Create a text file in the local spooling directory and check to make sure Flume imports it to HDFS
    5:08
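    A sketch of the kind of spooling-directory agent configuration this lesson modifies (agent, directory, and HDFS path names are illustrative); saved as spool-agent.conf, it could be started with: flume-ng agent --name agent1 --conf-file spool-agent.conf
      # Name the source, channel, and sink for an agent called agent1
      agent1.sources  = src1
      agent1.channels = ch1
      agent1.sinks    = sink1
      # Watch a local spooling directory for new files
      agent1.sources.src1.type     = spooldir
      agent1.sources.src1.spoolDir = /home/cloudera/spool
      agent1.sources.src1.channels = ch1
      # Buffer events in memory
      agent1.channels.ch1.type = memory
      # Deliver the events to HDFS as plain text files
      agent1.sinks.sink1.type          = hdfs
      agent1.sinks.sink1.hdfs.path     = /user/cloudera/flume/events
      agent1.sinks.sink1.hdfs.fileType = DataStream
      agent1.sinks.sink1.channel       = ch1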
  • Using MapReduce and Pig
    Coding "Word Count" in MapReduce
    This video will explore how to build “Word Count” in Eclipse, save it to a .jar, and run it as a MapReduce job. • Open Eclipse and use it to import the “Word Count” code • Save the .jar to the local file system • Run the code in MapReduce, check the progress of the job, and view the result
    5:56
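    A sketch of running such a job once the .jar has been exported (the jar name, main class, and HDFS paths are illustrative; Hadoop's bundled hadoop-mapreduce-examples jar includes a wordcount that can be run the same way):
      # Stage some input text in HDFS
      hdfs dfs -mkdir -p /user/cloudera/wordcount/input
      hdfs dfs -put ~/sample.txt /user/cloudera/wordcount/input/
      # Run the exported jar; the output directory must not already exist
      hadoop jar ~/wordcount.jar WordCount \
        /user/cloudera/wordcount/input /user/cloudera/wordcount/output
      # Results land in part-r-* files
      hdfs dfs -cat /user/cloudera/wordcount/output/part-r-00000 | head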
    Coding "Word Count" in Pig
    Coding the same word counting program, but this time in Pig. • Open the Pig Script Editor in Hue and build our script • Save the script for future use and run it • Check the progress of the job in Hue and view the result
    2:31
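    A sketch of a word-count script in Pig Latin (paths are illustrative); the same statements can be pasted into Hue's Pig Script Editor or run from the shell:
      # Tokenize each line into words, group by word, and count each group
      pig -e "
        lines  = LOAD '/user/cloudera/wordcount/input' AS (line:chararray);
        words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
        grpd   = GROUP words BY word;
        counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
        STORE counts INTO '/user/cloudera/wordcount/pig_output';
      "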
    Performing Common ETL Functions in Pig
    This video will discuss how to use Pig to perform common Extract, Transform, and Load functions on data. • Filter out certain data from a dataset and save the result • Append one dataset to another in an identical format using Union • Join one dataset to another using a common column in each
    8:49
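    A sketch of the three operations in Pig Latin, using made-up sales and product files (schemas and paths are illustrative, not the course's data set):
      pig -e "
        a = LOAD '/user/cloudera/sales_jan.csv' USING PigStorage(',')
              AS (id:int, product:chararray, quantity:int);
        b = LOAD '/user/cloudera/sales_feb.csv' USING PigStorage(',')
              AS (id:int, product:chararray, quantity:int);
        -- Filter: keep only rows with a positive quantity
        a_clean = FILTER a BY quantity > 0;
        -- Union: append one data set to another with an identical schema
        all_sales = UNION a_clean, b;
        -- Join: attach a category to each row via the common product column
        products = LOAD '/user/cloudera/products.csv' USING PigStorage(',')
              AS (product:chararray, category:chararray);
        joined = JOIN all_sales BY product, products BY product;
        STORE joined INTO '/user/cloudera/etl_output' USING PigStorage(',');
      "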
    Using User-defined Functions in Pig
    This video will explore how to use predefined code called User Defined Functions (UDFs) in Pig scripts. • Identify whether two UDF repositories (Piggybank and DataFu) are installed • Register the Stats UDF and define a Quartile function to use it • Write the script and run the code, resulting in a document that shows the minimum, median, and max values for Quantity in our data
    5:59
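    A sketch of registering DataFu and computing quartiles over a quantity column (the jar path, input file, and schema are illustrative, and the exact UDF the course registers may differ):
      pig -e "
        REGISTER /usr/lib/pig/datafu.jar;
        DEFINE Quartile datafu.pig.stats.Quantile('0.0','0.25','0.5','0.75','1.0');
        sales = LOAD '/user/cloudera/sales.csv' USING PigStorage(',')
              AS (id:int, product:chararray, quantity:int);
        grpd  = GROUP sales ALL;
        -- Quantile expects the bag to be sorted on the value being ranked
        stats = FOREACH grpd {
                  sorted = ORDER sales BY quantity;
                  GENERATE Quartile(sorted.quantity);
                };
        DUMP stats;
      "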
  • Using Hive
    Importing Data from HDFS into Hive
    How to get data from HDFS into Hive. • Create a database in Hive • Import data into an internal table (the default) • Import data into an external table
    4:58
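    A sketch of the two table styles in HiveQL (table names, columns, and HDFS paths are illustrative); dropping a managed table deletes its data files, while dropping an external table leaves them in place:
      hive -e "
        -- Internal (managed) table: Hive takes ownership of the data files
        CREATE TABLE sales_managed (id INT, product STRING, quantity INT)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
        LOAD DATA INPATH '/user/cloudera/imports/sales.csv' INTO TABLE sales_managed;
        -- External table: Hive only points at an existing HDFS directory
        CREATE EXTERNAL TABLE sales_external (id INT, product STRING, quantity INT)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
          LOCATION '/user/cloudera/external/sales';
      "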
    Importing Data Directly from a Database
    This video will cover how to get data into Hive from a database without going to HDFS first. • Use Sqoop from the command line to move the data • Check the data browser to see if the right directory was created in Hive • Use "select * from table" to see the data in the table
    2:24
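    A sketch of a direct-to-Hive Sqoop import (connection details and table names are placeholders):
      # --hive-import creates the Hive table and loads it in a single step
      sqoop import \
        --connect jdbc:mysql://localhost/retail_db \
        --username retail_user --password '********' \
        --table customers \
        --hive-import --hive-table customers \
        --num-mappers 1
      # Confirm the rows landed
      hive -e "SELECT * FROM customers LIMIT 10;"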
    Performing Basic Queries in Hive
    Using queries in Hive to find information. • Using the basic Select From Where query • Combining two tables using Union • Creating a new table from the results of a query
    6:59
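    A sketch of the three query patterns in HiveQL, reusing the illustrative tables from the sketches above:
      hive -e "
        -- Basic SELECT ... FROM ... WHERE
        SELECT product, quantity FROM sales_managed WHERE quantity > 10;
        -- Combine two tables with the same columns (UNION ALL keeps duplicates)
        SELECT * FROM (
          SELECT id, product, quantity FROM sales_managed
          UNION ALL
          SELECT id, product, quantity FROM sales_external
        ) combined;
        -- Create a new table from the result of a query
        CREATE TABLE big_orders AS
          SELECT * FROM sales_managed WHERE quantity > 100;
      "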
    Putting It All Together
    A quick summary of what the viewer has learned in the entire course. • Review the Hadoop Ecosystem chart • See a graphic of how structured and unstructured data are imported into Hadoop • Introduce the term "Data Lake" and understand that we can now make one
    2:16

Skills

  • Apache Hadoop
  • Apache Pig
