Deep Learning

CS 398 / IE 398, Spring 2019

Instructor: Justin Sirignano
Teaching Assistant: Logan Courtney

What is Deep Learning?

Deep learning has revolutionized image recognition, speech recognition, and natural language processing. There's also growing interest in applying deep learning to science, engineering, medicine, and finance.

At a high level, deep neural networks are stacks of nonlinear operations, typically with millions of parameters. This produces a highly flexible and powerful class of models which has proved effective in many applications. The design of network architectures and optimization methods has been the focus of intense research.
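
For a concrete picture, here is a minimal PyTorch sketch of such a stack; the layer sizes and depth are illustrative assumptions, not a prescribed architecture:

    import torch.nn as nn

    # A deep network stacks affine maps with nonlinearities in between.
    # The sizes below are illustrative (e.g., 784 inputs, 10 output classes).
    model = nn.Sequential(
        nn.Linear(784, 256),  # affine layer: 784 inputs -> 256 hidden units
        nn.ReLU(),            # elementwise nonlinearity
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, 10),   # output scores for 10 classes
    )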

Course overview

Topics include convolutional neural networks, recurrent neural networks, and deep reinforcement learning. Homeworks cover image classification, video recognition, and deep reinforcement learning, with models trained using TensorFlow and PyTorch. A large amount of GPU resources is provided to the class. See the Syllabus for more details.

Mathematical analysis of neural networks, reinforcement learning, and stochastic gradient descent algorithms will also be covered in lectures. (However, there will be no proofs on the homeworks or the midterm.)

IE 398 Deep Learning is cross-listed with CS 398.

This course is part of the Deep Learning sequence:

  • IE 398 Deep Learning (undergraduate version)
  • IE 534 Deep Learning
  • IE 598 Deep Learning II

Computational resources

A large amount of GPU resources is provided to the class: 25,000 hours for 40 students. Graphics processing units (GPUs) can massively parallelize the training of deep learning models. This is a unique opportunity for students to develop sophisticated deep learning models at large scale.
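
In PyTorch, for example, running a model on a GPU is a small change to the code. The sketch below uses an illustrative linear model and a random batch (placeholders, not homework code) and falls back to the CPU when no GPU is present:

    import torch
    import torch.nn as nn

    # Select the GPU if one is available; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Illustrative model and batch; the homeworks use CNNs and image data.
    model = nn.Linear(784, 10).to(device)     # parameters now live on the device
    x = torch.randn(64, 784, device=device)   # a batch allocated on the device
    y = model(x)                              # forward pass runs on the GPU if present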

Code

Extensive TensorFlow and PyTorch code is provided to students.

Datasets, Code, and Notes

MNIST Dataset

CIFAR10 Dataset

Introduction to running jobs on Blue Waters

Blue Waters Help Document for the Class

Recommended articles on deep learning

PyTorch Class Tutorial

PyTorch Website

Course Notes for Weeks 1-4

Project List

Practice Midterm Exam

Lecture Slides: Lecture 1, Lecture 2-3, Lecture 4-5, Lecture 6, Lecture 8, Lecture 10, GAN Lecture Slides, Lecture 11, Code for Distributed Training, Lecture 12, Deep Learning Image Ranking Lecture, Action Recognition Lecture

Homeworks

  • HW1: Implement and train a logistic regression model from scratch in Python for the MNIST dataset (no PyTorch). The logistic regression model should be trained on the Training Set using stochastic gradient descent. It should achieve 90-93% accuracy on the Test Set. For full credit, submit via Compass (1) the code and (2) a paragraph (in a PDF document) which states the Test Accuracy and briefly describes the implementation. (A minimal sketch of this kind of from-scratch implementation appears after this list.) Due Monday, January 28 at 5:00 PM.
  • HW2: Implement and train a neural network from scratch in Python for the MNIST dataset (no PyTorch). The neural network should be trained on the Training Set using stochastic gradient descent. It should achieve 97-98% accuracy on the Test Set. For full credit, submit via Compass (1) the code and (2) a paragraph (in a PDF document) which states the Test Accuracy and briefly describes the implementation. Due Wednesday, February 6 at 5:00 PM.
  • HW3: Implement and train a convolutional neural network from scratch in Python for the MNIST dataset (no PyTorch). You should write your own code for convolutions (e.g., do not use SciPy's convolution function). The convolutional network should have a single hidden layer with multiple channels. It should achieve at least 96% accuracy on the Test Set. For full credit, submit via Compass (1) the code and (2) a paragraph (in a PDF document) which states the Test Accuracy and briefly describes the implementation. Due Thursday, February 14 at 5:00 PM.
  • HW4: Train a deep convolutional network on a GPU with PyTorch for the CIFAR10 dataset. The convolutional network should (A) use dropout, (B) be trained with RMSprop or ADAM, and (C) use data augmentation. For 10% extra credit, compare dropout test accuracy (i) using the heuristic prediction rule and (ii) using Monte Carlo simulation. For full credit, the model should achieve 80-90% Test Accuracy. Submit via Compass (1) the code and (2) a paragraph (in a PDF document) which reports the results and briefly describes the model architecture. Due Tuesday, February 26 at 5:00 PM.
  • HW5: Implement a deep residual neural network for CIFAR100. Homework #5 Details.
  • HW6: Generative adversarial networks (GANs). Homework Link. Due Wednesday, April 10 at 5:00 PM.
  • HW7: Natural Language Processing A. Parts I and II of the NLP assignment.
  • HW8: Natural Language Processing B. Part III of the NLP assignment.
  • HW9: Implement a deep learning model for image ranking. Homework Details.
  • HW10: Video recognition I. Homework Link
  • HW11 (not assigned this year): Deep reinforcement learning on Atari games I. 2017 version of this homework.
  • HW12 (not assigned this year): Deep reinforcement learning on Atari games II. 2017 version of this homework.
  • Final Project: See Syllabus for a list of possible final projects.
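
As referenced in the HW1 item above, here is a minimal NumPy sketch of the kind of from-scratch implementation the early homeworks ask for: multiclass logistic regression trained with stochastic gradient descent. The hyperparameters, function names, and data handling are placeholder assumptions, not the required implementation.

    import numpy as np

    def softmax(z):
        # Numerically stable softmax over the class scores.
        z = z - np.max(z)
        e = np.exp(z)
        return e / e.sum()

    def train_logistic_regression(X, y, num_classes=10, lr=0.1, epochs=5):
        # X: (n, d) array of pixel values scaled to [0, 1]; y: (n,) integer labels.
        n, d = X.shape
        W = np.zeros((num_classes, d))
        b = np.zeros(num_classes)
        for _ in range(epochs):
            for i in np.random.permutation(n):   # one sample per SGD step
                p = softmax(W @ X[i] + b)        # predicted class probabilities
                p[y[i]] -= 1.0                   # gradient of cross-entropy w.r.t. scores
                W -= lr * np.outer(p, X[i])      # gradient step on the weights
                b -= lr * p                      # gradient step on the bias
        return W, b

    def predict(W, b, X):
        return np.argmax(X @ W.T + b, axis=1)

Applied to MNIST pixels scaled to [0, 1], a model along these lines typically lands in the 90% accuracy range targeted in HW1; HW2 replaces the single linear map with a stack of layers and backpropagates through them.
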
Examples of what will be implemented in the Homeworks

In HW10, a deep learning model is trained to predict the action occurring in a video using only the raw pixels in the sequence of frames. The five most likely actions according to the deep learning model are reported (selected from a total of 400 possible actions).
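
Reporting the top five predictions from a model's output scores takes only a few lines of PyTorch; the sketch below uses random scores as a stand-in for the model's actual output:

    import torch

    # Stand-in for the model's output scores over 400 possible actions.
    logits = torch.randn(400)
    probs = torch.softmax(logits, dim=0)   # convert scores to probabilities
    top5 = torch.topk(probs, k=5)          # the five most likely actions

    for prob, idx in zip(top5.values, top5.indices):
        print(f"action {idx.item()}: probability {prob.item():.3f}")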

In HW11 and HW12, a deep learning model learns to play Atari video games using only the raw pixels in the sequence of frames (much as a human would learn).