Ramin Hasani

Co-founder & CEO at Liquid AI | Research Affiliate at CSAIL MIT

CV


Quick Updates

[2 Accepted Papers] [ICML 2024] “Large Scale Dataset Distillation with Domain Shift” and “State-free Inference of SSMs” are now accepted at ICML 2024.

[2 Accepted Papers] [ICLR 2024]  “Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control” & “Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation” are now accepted at ICLR 2024.

[2 Accepted Papers] [ICRA 2024]  “Overparametrization helps offline-to-online generalization of closed-loop control from pixels” & “Learning with Chemical versus Electrical Synapses – Does it Make a Difference?” are now accepted at ICRA 2024.

[2 Accepted Papers] [NeurIPS 2023]  “Gigastep – One Billion Steps per Second Multi-agent Reinforcement Learning” & “On the Size and Approximation Error of Distilled Datasets” are now accepted at NeurIPS 2023.

[Accepted Paper] [CoRL 2023] “Measuring Interpretability of Neural Policies with Disentangled Representations” is now accepted for Oral presentation (Top 6%) at CoRL 2023.

[2 Accepted Papers] [ICML 2023] “On the forward-invariance of neural ODEs” and “Dataset distillation with convexified implicit gradients” are now accepted to ICML 2023!

[TEDx Talk] “AI that understands what it does!” at TEDxMIT, April 22nd, 2023.

[Invited Talk] “Generalist AI Systems in Finance” at the New York AI in Finance Summit, April 20th, 2023.

[TEDx Talk] “What is a Generalist Artificial Intelligence?” at TEDxBoston, March 6th, 2023.

[Accepted Paper] [L4DC 2023] “Learning Stability Attention in Vision-based End-to-end Driving Policies” is now accepted at L4DC 2023.

[Accepted Paper] [ICLR 2023] “Liquid Structural State-Space Models” is now accepted to ICLR 2023. [link]

[Accepted Paper] [IEEE T-RO 2023] “BarrierNet: Differentiable Control Barrier Functions for Learning of Safe Robot Control” is now accepted for publication in the IEEE Transactions on Robotics.

[Invited Lecture at MIT] I gave a lecture on “The modern era of statistics” at the MIT Introduction to Deep Learning course on Jan 12th, 2023. [link]

[Accepted Paper] [ICRA 2023] “Infrastructure-based End-to-End Learning and Prevention of Driver Failure” is now accepted to ICRA 2023. [link]

[TEDx Talk] “Liquid Neural Networks” at TEDxMIT, December 4th, 2022. [link]

[Keynote Talk] “Generalist AI Models” at Vanguard’s 5th Artificial Intelligence and Machine Learning Summit (Oct 16th, 2022)

[New PrePrint] Achieving state-of-the-art performance in sequence modeling with Liquid State-Space Models (Liquid-S4) [link]

[New Patent] Sparse Closed-form Neural Algorithms for Out-of-Distribution Generalization on Edge Robots [US Provisional Patent Case No. 63/415,382] (October 12th, 2022)

[New PrePrint] Interpreting Neural Policies with Disentangled Tree Representations [link]

[New Software and PrePrint] PyHopper: Hyperparameter Optimization [link]

[New PrePrint] On the Forward Invariance of Neural ODEs [link]

[2 Accepted Papers] [NeurIPS 2022] Two papers accepted to NeurIPS 2022. Both explore neural network properties in the infinite-width limit, with my brilliant Ph.D. student Noel Loo at MIT.

[New Patent]  Systems and Methods for Efficient Dataset Distillation Using Non-Deterministic Feature Approximation [US Provisional Patent Case No. 63/390,952] (July 20th, 2022)

[New PrePrint] Are All Vision Models Created Equal? A Study of the Open-Loop to Closed-Loop Causality Gap (2022) [link]

[Invited Talk] “Achieving Causality and Out-of-distribution Robustness via Liquid Neural Networks,” Centre for Autonomous and Cyber-Physical Systems at Cranfield University, UK (Oct 7th, 2022)

[Invited Talk] “Liquid Neural Networks” Stanford Intelligent Systems Laboratory: SISL, Stanford University (July 18th, 2022)

[Invited Talk] “Liquid Neural Networks” at the Vectors of Cognitive AI Workshop, Intel AI Labs, CA (May 17th, 2022)

[Keynote Talk] “Liquid Neural Networks” at the Council of Scientific Society Presidents (CSSP), Spring Leadership Workshop, Role of Artificial Intelligence on Science and Quality of Life [link] (May 2nd, 2022)

[Accepted Paper] [ICRA 2022] Latent Imagination Improves Real-World Deep Reinforcement Learning

[AWARD] Hyperion Research 2022 HPC Innovation Excellence Award for the invention of Liquid Machine Learning [link]

[Accepted Paper] [AAAI 2022] GoTube: Guarantee the Safety of Continuous-depth Neural Models

[2 Accepted Papers] [NeurIPS 2021] 2 papers accepted to NeurIPS 2021! Causal Navigation and Sparse Flows.

[Seminar Talk] “Liquid Neural Networks” at the MIT Center for Brains, Minds, and Machines (CBMM), Oct 5th, 2021. [link]

[Keynote Talk] “Liquid Neural Networks for Autonomous Driving” at the IJCAI 2021 Artificial Intelligence for Autonomous Driving Workshop, August 20th, 2021. [link]

[Accepted Paper] [ICML 2021] Our paper “On-Off Center-Surround Receptive Fields for Robust Image Classification” has been accepted for publication at the 38th International Conference on Machine Learning (ICML 2021). [link]

[Recent Invited Talks]

“Liquid Time-Constant Networks”,
Synthesis of Models and Systems Seminar at Simons Institute, UC Berkeley, CA, 3.22.21 [link]

“Understanding Liquid Time-Constant Networks”,
MIT Lincoln Laboratory Machine Learning Special Interest Group (LL-MLSIG) Seminar Series, 3.25.21

“Liquid Neural Networks”
MIT Open Learning, MIT Horizon, Cambridge, MA, 4.8.21

“What Is a Liquid Time-Constant Network?”,
Northeastern University, Boston, MA, 3.14.21 [link]

[New Preprint] A new preprint of our work on comparing model-based to model-free agents in autonomous racing environments is out! [link]

[Accepted Paper] [ICRA 2021] Our work “Adversarial training is not ready for robot learning” has been accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2021. [link]

[Press MIT News] article about our research: “Liquid” machine-learning system adapts to changing conditions.
The new type of neural network could aid decision-making in autonomous driving and medical diagnosis. (Jan 28th, 2021) [link]

[2 Accepted Papers] [AAAI 2021] Our papers “Liquid time-constant networks” and “On the verification of Neural ODEs” have been accepted for publication at the 35th AAAI Conference on Artificial Intelligence. [link]

[Cover of Nature MI] Our paper got featured on the cover of the October 2020 Issue of Nature Machine Intelligence Journal [link] [pdf]

[Position Update] I joined the Distributed Robotics Lab (DRL) at CSAIL MIT as a postdoctoral associate. [link]

[New Paper Out] “Learning Long-term Dependencies in Irregularly-sampled Time Series” [Paper][code]

[Accepted Paper] [Nature Machine Intelligence] “Neural Circuit Policies Enabling Auditable Autonomy” got accepted for publication in Nature Machine Intelligence. [link]

[Ph.D. Thesis Award Nomination] My Ph.D. dissertation has been nominated for the TÜV Austria 2020 Science Award. [About the Award] [video]

[Accepted Paper] [ICML 2020] “A Natural Lottery Ticket Winner: Reinforcement Learning with Ordinary Neural Circuits” got accepted to the 2020 International Conference on Machine Learning (ICML) [link]

[Accepted Paper] [Journal of Autonomous Robots 2020] “Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection” got accepted to the journal of Autonomous Robots, August 2020. [link]

[Ph.D. dissertation] Check out my Ph.D. dissertation here: [link]

[Ph.D. studies – Done!] Completed my Ph.D. degree with honors on May 5th, 2020.

[Medium Article] Curious about some mysterious facts about neural ODEs? Read my Medium article: The Overlooked Side of Neural ODEs. [link]

[Accepted Paper] ICRA 2020 – We introduced a new regularization scheme to obtain state-stable recurrent neural networks in control environments. The paper will be presented at ICRA 2020 (May 29th – June 4th) in Paris, France.

[TEDx Talk] Watch my latest TEDxCluj talk entitled “A journey inside a neural network”. [link]

[MIT] I am currently a research scholar in Daniela Rus’s robotics lab at MIT CSAIL. [My MIT CSAIL webpage]

[Accepted Paper] ICRA 2019 – We proposed a new brain-inspired neural network design methodology for interpretable and noise-robust robotic control [link]

[Accepted Paper] IJCNN 2019 – We proposed a new method to interpret LSTM networks [link]

[TEDx Talk] My TEDxVienna talk entitled “Simple Artificial Brains to Govern Complex Tasks” has been officially released by TEDx. Watch it [here].

[Accepted Paper] We proposed a new method to interpret LSTM networks. The paper will be presented at the NeurIPS (NIPS) 2018 Workshop on Interpretability and Robustness (IRASL). [Paper]

[Interview] Read my interview with Vera Steiner at TEDxVienna here

[TEDx Talk 2018] I gave my first TEDx talk at TEDxVienna on October 20th, 2018. [link]

[Interview] Read my interview with TrendingTopics about my research and perspectives on AI [link]

[Press] Press coverage of our research on “Neuronal Circuit Policies”: [TU Wien] [EurekAlert] [i-programmer] [techxplore] [NewsGuard] [Motherboard Vice]

[AAAI-IAAI 2019] One paper, “A machine learning suite for machine’s health monitoring,” accepted for oral presentation. [link]

[Interview] Read my interview with Futurezone in German [link]

[AI Talk Sep 2018] I gave a talk on “AI and Neuroscience” at the “BrainStorms” event. [link]

[Accepted Papers] Two journal papers published in Philosophical Transactions of the Royal Society B: Biological Sciences. [Publications]

[RSS 2018] In a recently published paper at the Robotics Science and Systems (RSS) 2018 Conference, we showed “How to control robots with brainwaves and hand gestures” [MIT Press release][Paper]

At ICML & IJCAI 2018 in Stockholm, I presented two papers: one at the Explainable AI (XAI-18) workshop and one at the DISE1 workshop.

[Press] Press releases on our recent NIPS Deep RL Symposium 2017 paper: [Motherboard Vice] [Phys.org] [TU Wien]

My student, Mathias Lechner, won the 2017 Best Master Thesis Award (“Distinguished Young-Alumnus Award”) at the Faculty of Informatics, TU Wien. [link]

I presented a paper at the Deep Reinforcement Learning Symposium and two papers at the workshop on Worm’s Neural Information Processing, both at NIPS 2017.

I was a visiting research scholar at MIT CSAIL, working with Daniela Rus on interpretable machine-learning algorithms for autonomous systems.

I co-chaired a NIPS 2017 workshop on Worm’s Neural Information Processing (WNIP).

I attended ICML 2017 in Sydney and presented at WCB 2017. [slides]

I also attended IJCAI 2017 in Melbourne and presented at BOOM 2017 [slides] [poster], where I won the Best Poster Award! [link]

I participated in the Deep Learning Indaba 2017 in Johannesburg, South Africa.


About me

I am currently an AI Scientist at CSAIL MIT. Prior to that, I was jointly a Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at CSAIL MIT (11/2021–7/2023). Before that, I worked with Daniela Rus as a Postdoctoral Associate at CSAIL MIT (10/2020–12/2021). I completed my Ph.D. (with distinction) in Computer Science at TU Wien, Austria, in May 2020. My Ph.D. dissertation on Liquid Neural Networks was co-advised by Prof. Radu Grosu (TU Wien) and Prof. Daniela Rus (MIT).

My research focuses on flexible decision-making algorithms.