ECE Illinois, ECE 543: Statistical Learning Theory, Spring 2019 Project Papers
Shubh Gupta - PAC-Bayesian analysis in Gaussian Processes
Pulkit Katdare - Sample complexity of linear quadratic regulators
Haochen Hua - EM Algorithm in Gaussian Mixture Models
Vishal Rana - Privacy-preserving Prediction
Siqi Miao - Generalization Bounds of Neural Networks
Liming Wang - Bandit-aided boosting
Hassan Dbouk - On the convergence of ADAM
Aravind Sankar - PAC-learnability of influence functions in social networks
Forest Yang - Spectrally normalized margin bounds for neural networks
Heng Sheng Chang - Concentration Bounds for Single Parameter Adaptive Cont
Xuechao Wang - Information Theoretic Guarantees for ERM
Dariush Kari - Learning Through the Lens of the Information Bottleneck
Sourya Basu - Rademacher complexity for adversarially robust generalization
Joshua Hanson - Learning Volterra Series via RKHS Methods
Yifeng Chu - Correspondence between f-divergence and surrogate loss in binary classification
Tiancheng Zhao - Bandit-aided boosting
Ben Rabe - EM Algorithm Applied to Gaussian Mixture Models
Bochao Li - Contextual Decision Processes with Low Bellman Rank are PAC Learnable
Hsin Po Wang - Uniform, nonparametric, non-asymptotic confidence sequences
Yuqi Li - Learning bounds in the compressed domain
Amr Martini - Minimax Bounds for Online Learning Algorithms
Katherine Tsai - Generalization Bounds for Uniformly Stable Algorithms
Alan Yang - Non-Convex Follow the Perturbed Leader
Haoliang Yue - Data-Dependent Stability of Stochastic Gradient Descent
Sohrab Madani - Online Non-convex Games with an Optimization Oracle
Runcheng Huang - A Learning Algorithm Interpolating the Data is Optimal for Nonparametric Regression and Prediction with Square Loss
Lian Yu - Online nonconvex optimization
Akshayaa Magesh - Thompson sampling for contextual bandits with linear payoffs
Hieu Huynh - Accelerating the AdaBoost algorithm using multi-armed bandits