Rahim Entezari will present his research, pursued with Behnam Neyshabur, Hanie Sedghi, and Olga Saukh, at Deep Learning: Classics and Trends (DLCT) on March 11, 2022.
In this work, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it.
We further provide a preliminary theoretical result to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
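To make the conjecture concrete, the following is a minimal sketch (not the paper's actual experimental code) of the quantity involved: the loss barrier along the linear interpolation between two sets of weights, and how permuting hidden units of a functionally identical network creates a barrier that disappears once the permutation is undone. The network, data, and helper names here are illustrative assumptions, using a toy two-layer ReLU model in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, X):
    """Two-layer ReLU network: X -> ReLU(X W1) W2."""
    W1, W2 = params
    return np.maximum(X @ W1, 0) @ W2

def mse(params, X, y):
    return float(np.mean((forward(params, X) - y) ** 2))

def permute_hidden(params, perm):
    """Permute hidden units; the network function is unchanged."""
    W1, W2 = params
    return (W1[:, perm], W2[perm, :])

def barrier(pa, pb, X, y, n=21):
    """Max excess loss on the linear path over the linear baseline
    between the endpoint losses (the 'barrier' of the conjecture)."""
    la, lb = mse(pa, X, y), mse(pb, X, y)
    worst = -np.inf
    for a in np.linspace(0.0, 1.0, n):
        mid = tuple((1 - a) * wa + a * wb for wa, wb in zip(pa, pb))
        worst = max(worst, mse(mid, X, y) - ((1 - a) * la + a * lb))
    return worst

# Toy setup: model B is model A with its hidden units shuffled,
# so A and B compute the same function but have different weights.
X = rng.normal(size=(64, 5))
pa = (rng.normal(size=(5, 8)), rng.normal(size=(8, 1)))
y = forward(pa, X)
perm = rng.permutation(8)
pb = permute_hidden(pa, perm)

naive = barrier(pa, pb, X, y)                                # typically > 0
aligned = barrier(pa, permute_hidden(pb, np.argsort(perm)), X, y)  # ~ 0
```

Naively averaging the misaligned weights raises the loss above both endpoints, while undoing the permutation removes the barrier entirely; the conjecture is that an analogous aligning permutation exists between independently trained SGD solutions.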