[2 Accepted Papers] [NeurIPS 2022] Two papers accepted to NeurIPS 2022. Both explore neural network properties in the infinite-width limit, with my brilliant student Noel Loo at MIT.
[Keynote Talk] “Generalist AI Models” at the Vanguard 5th Artificial Intelligence and Machine Learning Summit (Oct 16th, 2022)
[Invited Talk] “Achieving Causality and Out-of-distribution Robustness via Liquid Neural Networks,” Centre for Autonomous and Cyber-Physical Systems at Cranfield University, UK (Oct 7th, 2022)
[Invited Talk] “Liquid Neural Networks,” Stanford Intelligent Systems Laboratory (SISL), Stanford University (July 18th, 2022)
[Invited Talk] “Liquid Neural Networks” at the Vectors of Cognitive AI Workshop, Intel AI Labs, CA (May 17th, 2022)
[Keynote Talk] “Liquid Neural Networks” at the Council of Scientific Society Presidents (CSSP), Spring Leadership Workshop, Role of Artificial Intelligence on Science and Quality of Life [link] (May 2nd, 2022)
[Accepted Paper] [ICRA 2022] Latent Imagination Improves Real-World Deep Reinforcement Learning
[Award] Hyperion Research 2022 HPC Innovation Excellence Award for the invention of Liquid Machine Learning [link]
[Accepted Paper] [AAAI 2022] GoTube: Guarantee the Safety of Continuous-depth Neural Models
[Seminar Talk] “Liquid Neural Networks,” MIT Center for Brains, Minds and Machines (CBMM) (Oct 5th, 2021) [link]
[Keynote Talk] “Liquid Neural Networks for Autonomous Driving” at the IJCAI 2021 Artificial Intelligence for Autonomous Driving Workshop (Aug 20th, 2021) [link]
[Accepted Paper] [ICML 2021] Our paper “On-Off Center-Surround Receptive Fields for Robust Image Classification” was accepted for publication at the 38th International Conference on Machine Learning (ICML), 2021. [link]
[Recent Invited Talks]
“Liquid Time-Constant Networks,” Synthesis of Models and Systems Seminar, Simons Institute, UC Berkeley, CA (Mar 22nd, 2021) [link]
“Understanding Liquid Time-Constant Networks,” MIT Lincoln Laboratory Machine Learning Special Interest Group (LL-MLSIG) Seminar Series (Mar 25th, 2021)
“Liquid Neural Networks,” MIT Open Learning, MIT Horizon, Cambridge, MA (Apr 8th, 2021)
“What Is a Liquid Time-Constant Network?” Northeastern University, Boston, MA (Mar 14th, 2021) [link]
[New Preprint] A new preprint of our work on comparing model-based to model-free agents in autonomous racing environments is out! [link]
[Accepted Paper] [ICRA 2021] Our work “Adversarial Training is Not Ready for Robot Learning” has been accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2021. [link]
[Press MIT News] article about our research: “Liquid” machine-learning system adapts to changing conditions.
The new type of neural network could aid decision-making in autonomous driving and medical diagnosis. (Jan 28th, 2021) [link]
[2 Accepted Papers] [AAAI 2021] Our papers “Liquid Time-Constant Networks” and “On the Verification of Neural ODEs” have been accepted for publication at the 35th AAAI Conference on Artificial Intelligence. [link]
[Position Update] I joined the Distributed Robotics Lab (DRL) at MIT CSAIL as a postdoctoral associate. [link]
[Accepted Paper] [Nature Machine Intelligence] “Neural Circuit Policies Enabling Auditable Autonomy” got accepted for publication in Nature Machine Intelligence. [link]
[Accepted Paper] [ICML 2020] “A Natural Lottery Ticket Winner: Reinforcement Learning with Ordinary Neural Circuits” got accepted to the 2020 International Conference on Machine Learning (ICML) [link]
[Accepted Paper] [Autonomous Robots 2020] “Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection” was accepted to the journal Autonomous Robots, August 2020. [link]
[PhD dissertation] Check out my PhD dissertation here: [link]
[PhD studies – Done!] Completed my PhD degree with honors on May 5th, 2020
[Accepted Paper] [ICRA 2020] We introduced a new regularization scheme to obtain state-stable recurrent neural networks in control environments. The paper will be presented at ICRA 2020 (May 29th – June 4th) in Paris, France.
[TED Talk] watch my latest TEDxCluj talk entitled “A journey inside a neural network” [link]
[Accepted Paper] [ICRA 2019] We proposed a new brain-inspired neural network design methodology for interpretable and noise-robust robotic control. [link]
[Accepted Paper] [IJCNN 2019] We proposed a new method to interpret LSTM networks. [link]
[TED Talk] My TEDxVienna talk, “Simple Artificial Brains to Govern Complex Tasks,” has been officially released by TEDx. Watch it [here].
[Interview] Read my interview with Vera Steiner at TEDxVienna here
[Accepted Paper] [AAAI-IAAI 2019] One paper, “A Machine Learning Suite for Machine’s Health Monitoring,” accepted for oral presentation. [link]
[Accepted Papers] Two journal papers published in Philosophical Transactions of the Royal Society B: Biological Sciences. [Publications]
I took part in the Deep Learning Indaba 2017 in Johannesburg, South Africa.
I am a Principal AI and Machine Learning Scientist at the Vanguard Group and a research affiliate at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT). Before that, I was a Postdoctoral Associate at MIT CSAIL, working with Daniela Rus. I completed my Ph.D. (with distinction) in Computer Science at TU Wien, Austria, in May 2020. My Ph.D. dissertation on Liquid Neural Networks was co-advised by Prof. Radu Grosu (TU Wien) and Prof. Daniela Rus (MIT).
My research focuses on developing and understanding interpretable deep learning and decision-making algorithms.