The overall aim of our research at the AISys lab is to build the foundations for synthesizing truly autonomous systems. To that end, we create novel, theoretically principled algorithmic methods, grounded in mathematics, for learning concepts and policies through appropriate representations, so that systems can act rationally and transfer their knowledge and experience across different tasks and environments.
AISys Core Research Areas
- Causal AI: Structure Learning, Inference, Transfer Learning, Representation Learning
- Causal AI provides a set of tools and principles that allow us to reason about counterfactual questions by combining data with structural invariances of the environment, i.e., what would have happened had reality been different, even when no data from that alternate world is available (see the first sketch after this list).
- We develop theoretical and practical Causal AI approaches, typically for high-dimensional, low-sample-size scenarios, in the application domains we focus on (computer systems and healthcare).
- Statistical ML: Multi-objective Optimization, Transfer Learning, Reinforcement Learning
- We develop principled methods for learning transferable concepts that enable knowledge transfer across tasks and environments.
- We develop multi-objective optimization methods for hyper-parameter optimization and neural architecture search.
- We develop transfer learning methods that scale well to high-dimensional, low-sample-size scenarios.
- Trustworthy AI: ML Security, Robustness, Explainability
- With the overall goal of building reliable and trustworthy machine learning systems, we develop methods to explain decisions made by ML models.
- We also enhance model robustness and shield ML systems against adversarial attacks by developing extensible defense mechanisms.
- ML for Systems: Computer Architecture, On-device ML Systems, Highly-Configurable & Composable Systems
- To build reliable, scalable, and performant computer systems, we apply our theoretical work in machine learning and causality to address computer systems problems.
- We develop principled methods to optimize the performance of highly configurable systems across the stack (software, middleware, hardware).
- We develop methods for understanding the performance behavior of highly configurable and composable systems, so that we can reason about and predict system qualities (e.g., throughput, energy consumption) and navigate performance trade-offs both at design time and at runtime (see the second sketch after this list).
- We develop novel machine learning methods that learn from historical data to enable more efficient design space exploration in computer systems across the stack, from software and compilers to computer architecture and hardware.
- Robot Learning: Causal Reinforcement Learning, Autonomous Robots, Autonomous Space Rovers, Adaptive Systems
- We aim to develop the next generation of autonomous systems (on-device, embedded, heterogeneous, cloud-native, robotics) that can perceive, reason about, and react to complex real-world environments and users with high precision and efficiency.
- In particular, we are interested in Causal Reinforcement Learning, where a robot can employ observational data to draw causal conclusions about the efficacy of actions and estimate the effects of those actions in the counterfactual world via counterfactual and causal effect estimation.
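To make the counterfactual reasoning mentioned under Causal AI concrete, here is a minimal sketch of the standard abduction-action-prediction procedure on a toy linear structural causal model. The variables (a binary treatment T and an outcome Y), the mechanisms, and the coefficients are hypothetical assumptions chosen for illustration, not models or code from our projects.

```python
# Toy counterfactual query on a hand-written structural causal model (SCM).
# Everything here (T, Y, the linear mechanism) is a made-up illustration.
import numpy as np

rng = np.random.default_rng(0)

# Structural mechanisms: T := 1[U_t > 0],  Y := 2*T + U_y
u_t = rng.normal(size=1000)
u_y = rng.normal(size=1000)
t = (u_t > 0).astype(float)   # observed binary treatment
y = 2.0 * t + u_y             # observed outcome

# Counterfactual for one observed unit: what would Y have been had T been 0?
i = 0
u_y_i = y[i] - 2.0 * t[i]     # 1) abduction: recover the exogenous noise for unit i
t_cf = 0.0                    # 2) action: intervene, do(T = 0)
y_cf = 2.0 * t_cf + u_y_i     # 3) prediction: push the noise through the modified model

print(f"observed: T={t[i]:.0f}, Y={y[i]:.2f}; counterfactual Y under do(T=0): {y_cf:.2f}")
```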
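The design-time performance trade-offs mentioned under ML for Systems can be illustrated with a small Pareto-filtering sketch over measured configurations; the configuration names and the throughput/energy numbers below are hypothetical placeholders, not measurements from any of our systems.

```python
# Select Pareto-optimal configurations with respect to throughput (maximize)
# and energy consumption (minimize). All names and numbers are placeholders.
configs = {
    "cfg_A": {"throughput": 120.0, "energy": 40.0},
    "cfg_B": {"throughput": 150.0, "energy": 55.0},
    "cfg_C": {"throughput": 110.0, "energy": 40.0},  # dominated by cfg_A
    "cfg_D": {"throughput": 150.0, "energy": 70.0},  # dominated by cfg_B
}

def dominates(a, b):
    """True if config a is at least as good as b on both objectives and strictly better on one."""
    no_worse = a["throughput"] >= b["throughput"] and a["energy"] <= b["energy"]
    strictly_better = a["throughput"] > b["throughput"] or a["energy"] < b["energy"]
    return no_worse and strictly_better

pareto = [
    name for name, m in configs.items()
    if not any(dominates(other, m) for other_name, other in configs.items() if other_name != name)
]

print("Pareto-optimal configurations:", pareto)  # expected: ['cfg_A', 'cfg_B']
```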