Software Development / AI & Machine Learning

Become a data scientist in the tech industry! Comprehensive data mining and machine learning course with Python & Spark.

67 Lessons

8 hours, 44 minutes

All Levels

Data Scientists enjoy one of the top-paying jobs, with an average salary of $120,000 according to Glassdoor and Indeed. That's just the average! And it's not just about money - it's interesting work too!
If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists in the tech industry - and prepare you for a move into this hot career path. This comprehensive course includes 67 lectures spanning almost 9 hours of video, and most topics include hands-on Python code examples you can use for reference and for practice. I'll draw on my 9 years of experience at Amazon and IMDb to guide you through what matters, and what doesn't.
The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers. We'll cover the machine learning and data mining techniques real employers are looking for, including:
Regression analysis
K-Means Clustering
Principal Component Analysis
Train/Test and cross validation
Bayesian Methods
Decision Trees and Random Forests
Multivariate Regression
Multi-Level Models
Support Vector Machines
Reinforcement Learning
Collaborative Filtering
K-Nearest Neighbor
Bias/Variance Tradeoff
Ensemble Learning
Term Frequency / Inverse Document Frequency
Experimental Design and A/B Tests
...and much more! There's also an entire section on machine learning with Apache Spark, which lets you scale up these techniques to "big data" analyzed on a computing cluster.
If you're new to Python, don't worry - the course starts with a crash course. If you've done some programming before, you should pick it up quickly. This course shows you how to get set up on Microsoft Windows-based PCs; the sample code will also run on macOS or Linux desktop systems, but I can't provide OS-specific support for them.
Each concept is introduced in plain English, avoiding confusing mathematical notation and jargon. It’s then demonstrated using Python code you can experiment with and build upon, along with notes you can keep for future reference.
If you’re a programmer looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry – this course will teach you the basic techniques used by real-world industry data scientists. I think you'll enjoy it!

- Getting Started
  - Course Resources (download): Download this .zip file to access materials that I reference throughout the course.
  - Introduction (2:45): What to expect in this course, who it's for, and the general format we'll follow.
  - Installing Enthought Canopy (8:43): We'll walk through installing the Python scientific computing IDE used in this course - Enthought Canopy - along with the Python packages needed to run the scripts used in this course.
  - Python Basics, Part 1 (15:59): In a crash course on Python and what's different about it, we'll cover the importance of whitespace in Python scripts, how to import Python modules, and Python data structures including lists, tuples, and dictionaries.
  - Python Basics, Part 2 (9:42): In part 2 of our Python crash course, we'll cover functions, boolean expressions, and looping constructs in Python.
  - Running Python Scripts (3:56): This course presents Python examples in the form of IPython Notebooks, but we'll cover the other ways to run Python code: interactively from the Python shell, or running stand-alone Python script files.
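As a taste of the Python crash course material, here's a minimal sketch of the three data structures it covers - lists, tuples, and dictionaries (the ratings data is made up for illustration):

```python
# Lists are mutable, ordered sequences.
ratings = [3.5, 4.0, 5.0, 2.0]
ratings.append(4.5)

# Tuples are immutable - handy for fixed records.
movie = ("Star Wars", 1977)

# Dictionaries map keys to values.
counts = {}
for r in ratings:
    counts[r] = counts.get(r, 0) + 1

print(sorted(ratings))   # [2.0, 3.5, 4.0, 4.5, 5.0]
print(movie[0])          # Star Wars
print(counts[4.0])       # 1
```

If snippets like this already look familiar, you can safely skim the crash course and jump ahead.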

- Statistics and Probability Refresher, and Python Practice
  - Types of Data (6:59): We cover the differences between continuous and discrete numerical data, categorical data, and ordinal data.
  - Mean, Median, Mode (5:27): A refresher on mean, median, and mode - and when it's appropriate to use each.
  - Using Mean, Median, and Mode in Python (8:31): We'll use mean, median, and mode in some real Python code, and set you loose to write some code of your own.
  - Variation and Standard Deviation (11:13): We'll cover how to compute the variance and standard deviation of a data distribution, with some examples in Python.
  - Probability Density Function; Probability Mass Function (3:28): Introducing the concepts of probability density functions (PDFs) and probability mass functions (PMFs).
  - Common Data Distributions (7:46): We'll show examples of continuous, normal, exponential, binomial, and Poisson distributions using IPython.
  - Percentiles and Moments (12:34): We'll look at some examples of percentiles and quartiles in data distributions, then move on to the concept of the first four moments of data sets.
  - A Crash Course in matplotlib (13:47): An overview of different tricks in matplotlib for creating graphs of your data, using different graph types and styles.
  - Covariance and Correlation (11:32): The concepts of covariance and correlation, used to look for relationships between different sets of attributes, with some examples in Python.
  - Exercise: Conditional Probability (11:04): We cover the concepts and equations behind conditional probability, and use it to try to find a relationship between age and purchases in some fabricated data using Python.
  - Exercise Solution: Conditional Probability of Purchase by Age (2:19): Here we'll go over my solution to the previous lecture's exercise: changing our fabricated data to have no real correlation between age and purchases, and seeing whether you can detect that using conditional probability.
  - Bayes' Theorem (5:24): An overview of Bayes' theorem, and an example of using it to uncover misleading statistics surrounding the accuracy of drug testing.
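To give a taste of the Bayes' theorem lecture, here's the drug-testing arithmetic sketched in plain Python - the accuracy and prevalence numbers below are illustrative assumptions, not taken from the lecture:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Suppose a test is 99% sensitive and 99% specific, for a drug
# only 0.5% of people actually use.
p_user = 0.005              # P(user): prior probability
p_pos_given_user = 0.99     # sensitivity
p_pos_given_clean = 0.01    # false positive rate

# Total probability of a positive test result:
p_pos = p_pos_given_user * p_user + p_pos_given_clean * (1 - p_user)

# Probability that someone who tested positive actually uses the drug:
p_user_given_pos = p_pos_given_user * p_user / p_pos
print(round(p_user_given_pos, 3))   # 0.332
```

Even with a "99% accurate" test, a positive result only means about a one-in-three chance of actual drug use - exactly the kind of misleading statistic the lecture unpacks.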

- Predictive Models
  - Linear Regression (11:02): We introduce the concept of linear regression and how it works, and use it to fit a line to some sample data using Python.
  - Polynomial Regression (8:05): We cover the concepts of polynomial regression, and use it to fit a more complex page speed / purchase relationship in Python.
  - Multivariate Regression, and Predicting Car Prices (8:07): Multivariate models let us predict some value given more than one attribute. We cover the concept, then use it to build a model in Python to predict car prices based on their age, mileage, and model. We'll also get our first look at the pandas library in Python.
  - Multi-Level Models (4:37): We'll just cover the concept of multi-level modeling, as it is a very advanced topic - but you'll get the ideas and challenges behind it.
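Fitting a line to data is less mysterious than it sounds. Here's a sketch of ordinary least squares for a single feature, using the closed-form slope/intercept equations and some fabricated page-speed data (the course itself uses scipy for this):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Fabricated data: purchases fall as page load time rises.
page_speeds = [1.0, 2.0, 3.0, 4.0, 5.0]
purchases   = [95.0, 85.0, 75.0, 65.0, 55.0]

slope, intercept = fit_line(page_speeds, purchases)
print(slope, intercept)   # -10.0 105.0
```

The negative slope quantifies the relationship: each extra second of load time costs about ten purchases in this made-up data set.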

- Machine Learning with Python
  - Supervised vs. Unsupervised Learning, and Train/Test (8:58): The concepts of supervised and unsupervised machine learning, and how to evaluate a machine learning model's ability to predict new values using the train/test technique.
  - Using Train/Test to Prevent Overfitting a Polynomial Regression (5:48): We'll apply train/test to a real example using Python.
  - Bayesian Methods: Concepts (4:00): We'll introduce the concept of Naive Bayes and how we might apply it to the problem of building a spam classifier.
  - Implementing a Spam Classifier with Naive Bayes (8:06): We'll actually write a working spam classifier, using real email training data and a surprisingly small amount of code!
  - K-Means Clustering (7:24): K-Means is a way to identify things that are similar to each other. It's a case of unsupervised learning, which could result in clusters you never expected!
  - Clustering People Based on Income and Age (5:15): We'll apply K-Means clustering to find interesting groupings of people based on their age and income.
  - Measuring Entropy (3:10): Entropy is a measure of the disorder in a data set - we'll learn what that means, and how to compute it mathematically.
  - Decision Trees: Concepts (8:44): Decision trees can automatically create a flow chart for making some decision, based on machine learning! Let's learn how they work.
  - Decision Trees: Predicting Hiring Decisions (9:48): We'll create a decision tree and an entire "random forest" to predict hiring decisions for job candidates.
  - Ensemble Learning (6:00): Random forests were an example of ensemble learning; we'll cover other techniques for combining the results of many models to create a better result than any one of them could produce on its own.
  - Support Vector Machines (SVM) Overview (4:28): Support vector machines are an advanced technique for classifying data that has multiple features. They treat those features as dimensions, and partition this higher-dimensional space using "support vectors."
  - Using SVM to Cluster People Using scikit-learn (5:37): We'll use scikit-learn to easily classify people using a C-Support Vector Classifier.
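The train/test idea that opens this section is simple enough to sketch in a few lines: hold back some of your data, train only on the rest, and evaluate on what the model has never seen. A minimal version (scikit-learn's train_test_split does this for you in the course):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Shuffle a copy of the data and split off a held-out test set."""
    shuffled = data[:]                      # copy; don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))   # 80 20

# Nothing lost, nothing duplicated:
assert sorted(train + test) == data
```

A model that scores well on `train` but poorly on `test` is overfitting - which is exactly what the polynomial regression lecture demonstrates.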

- Recommender Systems
  - User-Based Collaborative Filtering (7:58): One way to recommend items is to look for other people similar to you based on their behavior, and recommend stuff they liked that you haven't seen yet.
  - Item-Based Collaborative Filtering (8:16): The shortcomings of user-based collaborative filtering can be solved by flipping it on its head: looking at relationships between items instead of relationships between people.
  - Finding Movie Similarities (9:09): We'll use the real-world MovieLens data set of movie ratings to take a first crack at finding movies that are similar to each other - the first step in item-based collaborative filtering.
  - Improving the Results of Movie Similarities (8:00): Our initial results for movies similar to Star Wars weren't very good. Let's figure out why, and fix it.
  - Making Movie Recommendations to People (10:23): We'll implement a complete item-based collaborative filtering system that uses real-world movie ratings data to recommend movies to any user.
  - Improve the Recommender's Results (5:30): As a student exercise, try some of my ideas - or some ideas of your own - to make the results of our item-based collaborative filter even better.
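At the heart of item-based collaborative filtering is a similarity measure between items, computed from how users rated them. Here's a sketch using cosine similarity and fabricated ratings (the movie names and numbers are made up; the course works with the real MovieLens data):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two rating vectors: 1.0 = identical taste."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Each vector holds four users' ratings of one movie.
star_wars      = [5.0, 4.0, 5.0, 3.0]
empire_strikes = [5.0, 5.0, 4.0, 3.0]
romance_flick  = [1.0, 2.0, 1.0, 5.0]

# The two space operas rate together; the romance doesn't.
print(cosine_similarity(star_wars, empire_strikes) >
      cosine_similarity(star_wars, romance_flick))   # True
```

Scale that pairwise comparison up to every pair of movies and you have the similarity matrix the recommender lectures build on.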

- More Data Mining and Machine Learning Techniques
  - K-Nearest-Neighbors: Concepts (3:45): KNN is a very simple supervised machine learning technique; we'll quickly cover the concept here.
  - Using KNN to Predict a Rating for a Movie (12:30): We'll apply the simple KNN technique to a more complicated problem: finding the movies most similar to a given movie using just its genre and rating information, then using those "nearest neighbors" to predict the movie's rating.
  - Dimensionality Reduction; Principal Component Analysis (5:45): Data that includes many features or many different vectors can be thought of as having many dimensions. Often it's useful to reduce those dimensions to something more easily visualized, for compression, or just to distill the most important information from a data set - that is, the information that contributes the most to the data's variance. Principal Component Analysis and Singular Value Decomposition do just that.
  - PCA Example with the Iris Data Set (9:06): We'll use scikit-learn's built-in PCA system to reduce the four-dimensional Iris data set down to two dimensions, while still preserving most of its variance.
  - Data Warehousing Overview: ETL and ELT (9:06): Cloud-based data storage and analysis systems like Hadoop, Hive, Spark, and MapReduce are turning the field of data warehousing on its head. Instead of extracting, transforming, and then loading data into a data warehouse, the transformation step is now more efficiently done on a cluster after the data has already been loaded. With computing and storage resources so cheap, this new approach now makes sense.
  - Reinforcement Learning (12:45): We'll describe the concept of reinforcement learning - including Markov Decision Processes, Q-Learning, and Dynamic Programming - all using a simple example of developing an intelligent Pac-Man.
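KNN really is as simple as the lecture promises: find the k most similar items, then average their labels. A sketch with fabricated two-feature movie vectors (the course version uses real genre and popularity data):

```python
import math

def knn_predict(query, neighbors, k=3):
    """Predict a rating as the mean rating of the k nearest feature vectors.

    neighbors: list of (feature_vector, rating) pairs.
    """
    by_distance = sorted(
        neighbors,
        key=lambda item: math.dist(query, item[0]),   # Euclidean distance
    )
    nearest = by_distance[:k]
    return sum(rating for _, rating in nearest) / k

movies = [
    ([1.0, 1.0], 3.0),
    ([1.1, 0.9], 3.5),
    ([0.9, 1.2], 2.5),
    ([5.0, 5.0], 9.0),   # a very different movie, far away in feature space
]
print(knn_predict([1.0, 1.0], movies, k=3))   # 3.0
```

The faraway movie never makes it into the k nearest neighbors, so it doesn't skew the prediction - that locality is the whole point of KNN.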

- Dealing with Real-World Data
  - Bias/Variance Tradeoff (6:16): Bias and variance both contribute to overall error; understand these components of error and how they relate to each other.
  - K-Fold Cross-Validation to Avoid Overfitting (10:56): We'll introduce the concept of K-Fold Cross-Validation to make train/test even more robust, and apply it to a real model.
  - Data Cleaning and Normalization (7:11): Cleaning your raw input data is often the most important - and time-consuming - part of your job as a data scientist!
  - Cleaning Web Log Data (10:57): In this example, we'll try to find the top-viewed web pages on a web site - and see how much data pollution turns that into a very difficult task!
  - Normalizing Numerical Data (3:23): A brief reminder: some models require input data to be normalized, or within the same range as each other. Always read the documentation on the techniques you are using.
  - Detecting Outliers (7:01): A review of how outliers can affect your results, and how to identify and deal with them in a principled manner.
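One principled way to handle outliers, as a preview of that lecture: flag anything more than a chosen number of standard deviations from the mean. The threshold and the income figures below are illustrative assumptions:

```python
import statistics

def outliers(data, num_stdevs=2.0):
    """Return values more than num_stdevs sample standard deviations from the mean."""
    mean = statistics.mean(data)
    stdev = statistics.stdev(data)
    return [x for x in data if abs(x - mean) > num_stdevs * stdev]

# Six ordinary incomes, plus one billionaire-style value polluting the data.
incomes = [45_000, 52_000, 48_000, 61_000, 55_000, 50_000, 10_000_000]
print(outliers(incomes))   # [10000000]
```

Whether you then drop, cap, or separately model such values depends on your problem - the point is to detect them deliberately rather than let them silently distort your means and fits.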

- Apache Spark: Machine Learning on Big Data
  - Installing Spark - Part 1 (7:03): We'll present an overview of the steps needed to install Apache Spark on your desktop in standalone mode, and get started by installing a Java Development Kit on your system.
  - Installing Spark - Part 2 (13:30): We'll install Spark itself, along with all the associated environment variables and ancillary files and settings needed for it to function properly.
  - Spark Introduction (9:11): A high-level overview of Apache Spark - what it is, and how it works.
  - Spark and the Resilient Distributed Dataset (RDD) (11:43): We'll go into more depth on the core of Spark - the RDD object - and what you can do with it.
  - Introducing MLLib (5:10): A quick overview of MLLib's capabilities, and the new data types it introduces to Spark.
  - Decision Trees in Spark (16:01): We'll take the same problem from our earlier decision tree lecture - predicting hiring decisions for job candidates - but implement it using Spark and MLLib!
  - K-Means Clustering in Spark (11:08): We'll take the same example of clustering people by age and income from our earlier K-Means lecture - but solve it in Spark!
  - TF / IDF (6:44): We'll introduce the concept of TF-IDF (Term Frequency / Inverse Document Frequency) and how it applies to search problems, in preparation for using it with MLLib.
  - Searching Wikipedia with Spark (8:12): Let's use TF-IDF, Spark, and MLLib to create a rudimentary search engine for real Wikipedia pages!
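The TF-IDF idea behind the Wikipedia search lectures fits in a few lines of plain Python - a term matters when it's frequent within a document but rare across the corpus. This sketch shows the concept only, not the MLLib implementation the course uses at scale (the sentences are made up):

```python
import math

def tf_idf(term, doc, corpus):
    """TF-IDF score of a term in one tokenized document, relative to a corpus."""
    tf = doc.count(term) / len(doc)                       # term frequency
    docs_with_term = sum(1 for d in corpus if term in d)  # document frequency
    idf = math.log(len(corpus) / docs_with_term)          # inverse document frequency
    return tf * idf

corpus = [
    "the force is strong with this one".split(),
    "the dark side of the force".split(),
    "the cat sat on the mat".split(),
]

# "force" is distinctive (2 of 3 documents); "the" appears everywhere,
# so its IDF - and its score - collapses to zero.
print(tf_idf("force", corpus[0], corpus) > tf_idf("the", corpus[0], corpus))   # True
```

Swap the toy corpus for millions of Wikipedia pages and you need a cluster - which is exactly where Spark and MLLib come in.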

- Experimental Design
  - A/B Testing Concepts (8:24): Running controlled experiments on your website usually involves a technique called the A/B test. We'll learn how they work.
  - T-Tests and P-Values (6:00): How to determine the significance of an A/B test's results, and measure the probability that those results arose from random chance alone, using t-tests, the t-statistic, and the p-value.
  - Hands-on With T-Tests (6:05): We'll fabricate A/B test data for several scenarios, and measure the t-statistic and p-value for each using Python.
  - Determining How Long to Run an Experiment (3:25): Some A/B tests just don't affect customer behavior one way or another. How do you know how long to let an experiment run before giving up?
  - A/B Test Gotchas (9:27): There are many limitations associated with running short-term A/B tests - novelty effects, seasonal effects, and more can lead you to the wrong decisions. We'll discuss the forces that may produce misleading A/B test results so you can watch out for them.
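The t-statistic at the center of these lectures is just arithmetic on the two groups' means and variances. Here's Welch's two-sample form from scratch, with fabricated A/B data - the course itself uses scipy.stats.ttest_ind, which also gives you the p-value:

```python
import math
import statistics

def t_statistic(a, b):
    """Welch's two-sample t-statistic: difference in means, scaled by its noise."""
    var_a = statistics.variance(a)
    var_b = statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / \
           math.sqrt(var_a / len(a) + var_b / len(b))

# Fabricated per-user revenue for a control group and a treatment group.
control   = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0]
treatment = [10.9, 11.2, 10.8, 11.0, 11.1, 11.0]

t = t_statistic(treatment, control)
print(round(t, 1))   # a large |t| (here about 12) suggests a real difference
```

A t-statistic near zero means the difference between the groups is indistinguishable from noise; converting |t| to a p-value (via the t-distribution) is how you make the significance call.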

- You Made It!
  - More to Explore (3:00): Where to go from here - recommendations for books, websites, and career advice to get you into the data science job you want.

- Machine Learning
- Apache Spark
- Data Analysis
- Python