On June 13, 2022, Rahim Entezari and the CSH will host the “2nd Workshop on Efficient Machine Learning.”
The idea of running the workshop in Vienna was inspired by, and organized in collaboration with, Jonathan Frankle (Harvard faculty / MosaicML co-founder).
If you are interested in attending, please register on the event’s webpage: https://sites.google.com/view/efficientml
Abstract:
Today’s world needs orders-of-magnitude more efficient ML to address environmental and energy crises, optimize resource consumption, and improve sustainability. With the end of Moore’s Law and Dennard scaling, we can no longer expect more and faster transistors for the same cost and power budget. This is particularly problematic given the growing data volumes collected by widely deployed sensors and systems, the ever-larger models we train, and the fact that many ML models have to run on edge devices to minimize latency, preserve privacy, and save energy.
Algorithmic efficiency in deep learning is becoming essential for achieving the desired speedups, alongside efficient hardware implementations and compiler optimizations for common math operations. Current research highlights sparsity, model and data augmentation, the search for efficient network architectures, algorithmic training speed-ups, and new nature-inspired local forms of computation as promising directions for building efficient ML systems.
In this workshop, we would like to discuss and celebrate recent advances in efficient ML and sketch the way forward.
Speakers
Jonathan Frankle | Incoming professor at Harvard / MosaicML
Olga Saukh | CSH Vienna
Mostafa Dehghani | Google Brain
Dan Alistarh | IST Austria
Amirhossein Habibian | Qualcomm AI Research