Introduction to Machine Learning

Description

This course will provide you with a foundational understanding of machine learning models (logistic regression, multilayer perceptrons, convolutional neural networks, neural models for natural language processing, and more) and demonstrate how these models can solve complex problems in a variety of industries, from medical diagnostics to image recognition to text prediction. In addition, we have designed practice exercises that will give you hands-on experience implementing these models on real data sets. These practice exercises will teach you how to implement machine learning algorithms with PyTorch, an open-source library used by leading tech companies in the machine learning field (e.g., Google, NVIDIA, Coca-Cola, eBay, Snapchat, Uber, and many more).

What you will learn

Simple Introduction to Machine Learning

The focus of this module is to introduce the concepts of machine learning with as little mathematics as possible. We will introduce basic concepts in machine learning, including logistic regression, a simple but widely used machine learning (ML) method. Also covered is the multilayer perceptron (MLP), a fundamental neural network. The concept of deep learning is then discussed and related to these simpler models.
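
For a rough sense of how these two models look in code, below is a minimal, illustrative PyTorch sketch of a logistic regression model and a small MLP for binary classification. The input size of 20 features, the hidden width of 64, and the random data are arbitrary assumptions for illustration, not values taken from the course.

    import torch
    import torch.nn as nn

    n_features = 20  # arbitrary example input dimension

    # Logistic regression: a single linear layer; the sigmoid is folded
    # into the loss below for numerical stability.
    logistic_regression = nn.Linear(n_features, 1)

    # A small multilayer perceptron: the same idea, but with one hidden
    # layer and a nonlinearity in between.
    mlp = nn.Sequential(
        nn.Linear(n_features, 64),
        nn.ReLU(),
        nn.Linear(64, 1),
    )

    loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits

    x = torch.randn(8, n_features)           # a batch of 8 random examples
    y = torch.randint(0, 2, (8, 1)).float()  # random binary labels
    print(loss_fn(logistic_regression(x), y))  # loss of the untrained logistic model
    print(loss_fn(mlp(x), y))                  # loss of the untrained MLP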

Basics of Model Learning

In this module we will discuss the mathematical basis of learning deep networks. We’ll first work through how the problem of learning a deep network can be framed as minimizing a mathematical function (the loss). After defining this mathematical goal, we will introduce validation methods for estimating the real-world performance of the learned networks. We will then discuss how gradient descent, a classical optimization technique, can be used to pursue this goal. Finally, we will discuss both why and how stochastic gradient descent is used in practice to learn deep networks.
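
To make the minimization view concrete, here is a hedged sketch of a stochastic gradient descent training loop in PyTorch. The linear model, synthetic data, batch size, and learning rate are placeholder assumptions for illustration only.

    import torch
    import torch.nn as nn

    # Placeholder model and synthetic data, just to illustrate the loop.
    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    X = torch.randn(256, 10)  # synthetic inputs
    y = torch.randn(256, 1)   # synthetic targets

    batch_size = 32
    for epoch in range(5):
        # Stochastic gradient descent: update on small random mini-batches
        # rather than computing the gradient over the full data set.
        perm = torch.randperm(X.size(0))
        for i in range(0, X.size(0), batch_size):
            idx = perm[i:i + batch_size]
            optimizer.zero_grad()                    # clear old gradients
            loss = loss_fn(model(X[idx]), y[idx])    # mini-batch loss
            loss.backward()                          # gradients of the loss
            optimizer.step()                         # one gradient descent step

In practice, a held-out validation set (not shown above) would be evaluated after each epoch to estimate real-world performance.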

Image Analysis with Convolutional Neural Networks

This week will cover model training, as well as transfer learning and fine-tuning. In addition to covering the fundamentals of a CNN and how it is applied, careful attention is given to the intuition behind CNNs, with the goal of building a conceptual understanding.
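
As one common way transfer learning appears in code, the sketch below loads a pretrained convolutional network from torchvision, freezes its features, and replaces the final layer for a new task. The choice of ResNet-18, a 10-class output, and a recent torchvision version are assumptions for illustration, not the course's specific setup.

    import torch.nn as nn
    from torchvision import models

    # Load a CNN pretrained on ImageNet (assumes a recent torchvision).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Transfer learning: freeze the pretrained convolutional features...
    for param in model.parameters():
        param.requires_grad = False

    # ...and replace the final classification layer for the new task
    # (10 classes here is an arbitrary example).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Fine-tuning would then train only the new layer, or later unfreeze
    # some earlier layers and train them with a small learning rate.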

Recurrent Neural Networks for Natural Language Processing

This week will cover the application of neural networks to natural language processing (NLP), from simple neural models to more complex ones. The fundamental concept of word embeddings is discussed, as well as how embeddings are learned and used within models for several NLP applications. A wide range of neural NLP models are also discussed, including recurrent neural networks and, in particular, long short-term memory (LSTM) models.
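
The sketch below illustrates, in PyTorch, how word embeddings and an LSTM typically fit together in a simple text classifier. The vocabulary size, embedding dimension, hidden size, number of classes, and random token ids are made-up example values, not details from the course.

    import torch
    import torch.nn as nn

    class LSTMClassifier(nn.Module):
        """Toy text classifier: word embeddings -> LSTM -> linear output."""

        def __init__(self, vocab_size=10000, embed_dim=100,
                     hidden_dim=128, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)  # word embeddings
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):
            embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
            _, (hidden, _) = self.lstm(embedded)   # final hidden state summarizes the sequence
            return self.classifier(hidden[-1])     # class scores

    model = LSTMClassifier()
    token_ids = torch.randint(0, 10000, (4, 12))  # 4 fake sentences of 12 token ids each
    print(model(token_ids).shape)                 # torch.Size([4, 2])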

What’s included