This talk will be presented by Rahim Entezari, a researcher at CSH and TU Graz, on Friday, June 17 at 3 pm in room 201 at the Complexity Science Hub.
Title: Generalization in Neural Networks
Abstract:
Learning with deep neural networks has enjoyed huge empirical success in recent years across a wide variety of tasks. Although training such networks is a complex, non-convex optimization problem, simple methods such as stochastic gradient descent (SGD) are able to recover good solutions that minimize the training error.
More surprisingly, the networks learned this way exhibit good generalization behavior. Understanding generalization is one of the fundamental unsolved problems in deep learning. The problem has been studied extensively in machine learning, with a rich history going back more than 50 years. However, most existing theories fail when applied to modern deep networks. In this talk we will take a practical look at neural networks through the lens of in-distribution and out-of-distribution generalization.