The talk by Ramin Hasani will take place online from 15:00 to 16:00 on September 10, 2020.
If you are interested in participating, you can join here via Zoom.
Neural networks with continuous-time (CT) hidden state representations have become remarkably popular within the machine learning community. This is due to their strong approximation capability in modeling time series, their adaptive computation modality, and their memory and parameter efficiency. In his talk, Ramin Hasani will discuss how this family of neural networks works and why it generalizes well across different application domains. Ramin will also touch on the shortcomings of CT models and suggest some workarounds.
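To give a flavor of the model class the talk covers: in a continuous-time recurrent network, the hidden state evolves according to an ordinary differential equation rather than a discrete update. The sketch below is a minimal, generic illustration (not the specific architecture from the talk); the dynamics `dh/dt = -h/tau + tanh(W h + U x + b)` and all parameter names are illustrative assumptions, integrated here with a simple explicit Euler step.

```python
import numpy as np

def ct_rnn_step(h, x, W, U, b, tau=1.0, dt=0.1):
    """One explicit-Euler step of a generic continuous-time RNN.

    Hidden-state dynamics (illustrative, not the talk's exact model):
        dh/dt = -h / tau + tanh(W h + U x + b)
    """
    dhdt = -h / tau + np.tanh(W @ h + U @ x + b)
    return h + dt * dhdt

# Toy setup: 4 hidden units driven by a 2-dimensional input signal.
rng = np.random.default_rng(0)
hidden, inputs = 4, 2
W = rng.normal(scale=0.5, size=(hidden, hidden))
U = rng.normal(scale=0.5, size=(hidden, inputs))
b = np.zeros(hidden)

h = np.zeros(hidden)
for t in range(50):
    x = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    h = ct_rnn_step(h, x, W, U, b)
```

The leak term `-h/tau` keeps the state bounded, which is one reason CT formulations behave stably on long time series; smaller `dt` trades compute for a more faithful integration of the same dynamics.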
Ramin Hasani is a postdoctoral associate at the Computer Science and Artificial Intelligence Lab (CSAIL) at the Massachusetts Institute of Technology (MIT), jointly with the Institute of Computer Engineering at TU Wien. Ramin's primary research focus is on the foundational properties of deep models in continuous-time control and robotics environments.
Ramin obtained his PhD degree with honors in computer science from TU Wien. Before that, he completed his MSc in Electronic Engineering at Politecnico di Milano, Italy, and his BSc in Electrical Engineering at the Ferdowsi University of Mashhad, Iran.