I am an assistant professor in computer science at UofSC and a visiting researcher at Google. My research involves developing theories in Causal AI and Statistical ML and applying them to solve problems in computer systems, autonomous robots, and software engineering. I am particularly interested in Representation Learning, Causal Transfer Learning, Reinforcement Learning, Trustworthy AI, and ML for Systems.

Prior to joining the faculty at the University of South Carolina, I was a postdoc in the School of Computer Science at Carnegie Mellon University, working with Christian Kaestner; before that, I was a postdoc in the Department of Computing at Imperial College London. I received my Ph.D. from the School of Computing at Dublin City University, where I was advised by Claus Pahl.


08/20/21 A postdoc position (up to 3 years) is available at AISys on Causal AI for Systems. Please apply here.
08/09/21 I am thrilled that Causal Performance Debugging for Highly-Configurable Systems has been funded by NSF (♡). This is a collaborative project on Causal AI for Systems with Christian Kaestner (CMU) and Baishakhi Ray (Columbia), with total funding of $1,200,000.
07/29/21 We have released a demo of our NASA RASPBERRY-SI project on AI-based autonomy for the Europa Lander, which will search for life on Jupiter's moon Europa; thanks, NASA (♡).
06/22/21 I am thrilled that RTG: Mathematical Foundation of Data Science at the University of South Carolina has been funded by NSF (♡). This is a collaborative training project with my brilliant colleagues in mathematics, Wolfgang Dahmen, Linyuan Lu (PI), Wuchen Li, and Qi Wang, on the mathematical foundations of AI and ML.
06/18/21 A new way to ‘see’: A story about our NSF SmartSight project on AI for Social Good has been published in UofSC's research magazine.
06/04/21 I am delighted that I (together with Valerie Issarny) will serve as PC co-chair of SEAMS 2023, co-located with ICSE 2023 in Melbourne. Meanwhile, please do consider submitting to SEAMS 2022!
05/10/21 I am so delighted to announce that I am now a Visiting Researcher at Google! I will be working on Causal Representation Learning, Adversarial ML, and Self-Supervised Learning.
04/15/21 Accelerating Recursive Partition-Based Causal Structure Learning, a generic causal structure learning method that scales to high-dimensional problems, has been accepted at AAMAS 2021.
03/22/21 Scalable Causal Transfer Learning, a method for identifying causal invariances, suitable for high-dimensional, low-sample-size problems such as those in biomedicine, is out.
01/18/21 We have a website for CSCE 212: Introduction to Computer Architecture that I teach this semester.
01/15/21 White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems will appear at ICSE'21. Congratulations, Miguel Velez!
12/28/20 An interview on AI in Deep Space Missions with The Post and Courier newsletter.
12/12/20 Shahriar Iqbal presented our recent work on CADET in the Workshop on ML for Systems at NeurIPS 2020.
11/07/20 Vijay Chidambaram (UT Austin), Neeraja Yadwadkar (Stanford), Ivo Jimenez (UC Santa Cruz), Romain Jacob (ETH Zurich), and I launched JSys (Journal of Systems Research)—a new diamond open-access journal for the systems community.
11/06/20 An interview on AI in Space about our recent NASA RASPBERRY SI project.

Older News…

Current Projects


NASA RASPBERRY-SI (2020–) Jianhai Su Pooyan Jamshidi

In this collaborative project (USC, CMU, York, UArk, NASA JPL/Ames/GRC), we develop technologies that enable learning-based autonomous planning and adaptation of space landers.


Causal AI: Structure Learning and Transfer Learning (2018–) Mohammad Ali Javidian Pooyan Jamshidi

In this project, we address the following important questions from theoretical and empirical perspectives: (i) How can we learn reliable causal structures from data, and how can we use the learned structures to identify causal invariances across environments? (ii) How can we use causal structure learning and counterfactual reasoning based on causal invariances to solve systems problems? In particular, we are keen to apply these theoretical advances to explainable, guided performance debugging of highly-configurable systems.


ATHENA (2019–) Ying Meng Jianhai Su Pooyan Jamshidi

Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. In this project, we propose Athena, an extensible framework for building effective defenses against adversarial attacks on machine learning systems.


FlexiBO (2018–) Md Shahriar Iqbal Jianhai Su Pooyan Jamshidi

One of the key challenges in designing machine learning systems is striking the right balance among several objectives, which are often incommensurable and conflicting. For example, when designing deep neural networks (DNNs), one often has to trade off multiple objectives, such as accuracy, energy consumption, and inference time. We developed FlexiBO, a flexible Bayesian optimization method, to address this challenge.


BRASS MARS (2016–) Jianhai Su Pooyan Jamshidi

How do you build a software system that can function for a century without being touched by a human engineer? This is the herculean task undertaken by DARPA's Building Resource Adaptive Software Systems (BRASS) program. At UofSC, together with several collaborators at CMU, we have been developing learning mechanisms, integrated with quantitative planning, that adapt and reconfigure robots to overcome environmental changes at runtime.