Ph.D. Candidate
Department of Computer Science
University of California, Davis
Email: zhetao#ucdavis.edu | zhe#zhe-tao.com
I’m a Ph.D. candidate in the Davis Automated Reasoning Group at UC Davis, advised by Prof. Aditya V. Thakur. I’m interested in programming languages, automated reasoning and machine learning.
09/2025 Paper accepted to NeurIPS 2025 as Spotlight!
09/2024 Paper accepted to NeurIPS 2024!
09/2024 Paper accepted to Correctness 2024!
06/2024 Tutorial on Provable Repair of Deep Neural Networks at PLDI 2024!
05/2024 Paper accepted to SAIV 2024!
04/2023 Paper accepted to PLDI 2023!
12/2022 Paper accepted to STTT 2023!
02/2021 Paper accepted to USENIX Security 2021!
Provable Gradient Editing of Deep Neural Networks
Zhe Tao and Aditya V. Thakur
Advances in Neural Information Processing Systems 39: Annual Conference on Neural Information Processing Systems (NeurIPS 2025). December 2025.
NeurIPS 2025 Spotlight
Provable Editing of Deep Neural Networks using Parametric Linear Relaxation
Zhe Tao and Aditya V. Thakur
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems (NeurIPS 2024). December 2024.
NeurIPS 2024
Towards Verifying Exact Conditions for Implementations of Density Functional Approximations
Sameerah Helal, Zhe Tao, Cindy Rubio-González, Francois Gygi and Aditya V. Thakur
SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis.
Correctness@SC 2024
Provable Repair of Vision Transformers
Stephanie Nawas, Zhe Tao and Aditya V. Thakur
The 7th International Symposium on AI Verification (SAIV 2024). 2024.
SAIV 2024
Architecture-Preserving Provable Repair of Deep Neural Networks
Zhe Tao, Stephanie Nawas, Jacqueline Mitchell and Aditya V. Thakur
44th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2023). June 2023.
PLDI 2023
SyReNN: A Tool for Analyzing Deep Neural Networks
Matthew Sotoudeh†, Zhe Tao† and Aditya V. Thakur
† Equal contribution.
International Journal on Software Tools for Technology Transfer (STTT 2023).
STTT 2023
DICE*: A Formally Verified Implementation of DICE Measured Boot
Zhe Tao, Aseem Rastogi, Naman Gupta, Kapil Vaswani and Aditya V. Thakur
30th USENIX Security Symposium (USENIX Security 2021). August 2021.
USENIX Security 2021
Algorithms and Applications for Provable Repair of Deep Neural Networks. (Co-host)
45th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2024). June 2024. Copenhagen, Denmark.
PLDI 2024
Architecture-Preserving Provable Repair of Deep Neural Networks.
44th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2023). June 2023. Orlando, Florida, United States.
PLDI 2023
DICE*: A Formally Verified Implementation of DICE Measured Boot.
30th USENIX Security Symposium (USENIX Security 2021). August 2021. (Virtual).
USENIX Security 2021
APRNN is the first architecture-preserving provable V-polytope repair tool for DNNs. It supports fully-connected, convolutional, pooling, and residual layers with activation functions that have linear pieces. APRNN scales to large networks such as ResNet152 and VGG19, achieves low drawdown and high generalization, and introduces no inference overhead.
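The core idea behind V-polytope repair can be illustrated in miniature (this is a simplified conceptual sketch, not the APRNN API; the weights, vertices, and spec below are made up for illustration): because a linear layer is affine in its input, enforcing a linear output specification at the vertices of a V-polytope enforces it over the entire polytope by convexity, so a minimal weight change can be found with a linear program.

```python
# Toy V-polytope repair of a single linear layer (conceptual sketch only).
# Spec: output >= 0 on the polytope spanned by the vertices in V.
# We find the smallest (inf-norm) weight change d such that the edited
# layer satisfies the spec at every vertex -- and hence, by convexity,
# on the whole polytope.
import numpy as np
from scipy.optimize import linprog

W = np.array([1.0, -2.0])   # hypothetical last-layer weights (1 output, 2 features)
b = 0.0
V = np.array([[1.0, 0.0],   # vertices of the input V-polytope
              [0.0, 1.0],
              [1.0, 1.0]])

# LP variables: [d1, d2, t], where t bounds max(|d1|, |d2|); minimize t.
c = np.array([0.0, 0.0, 1.0])
A_ub, b_ub = [], []
for i in range(2):          # encode |d_i| <= t as two inequalities
    row = np.zeros(3); row[i] = 1.0; row[2] = -1.0
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(3); row[i] = -1.0; row[2] = -1.0
    A_ub.append(row); b_ub.append(0.0)
for v in V:                 # (W + d) @ v + b >= 0  <=>  -d @ v <= W @ v + b
    A_ub.append(np.array([-v[0], -v[1], 0.0]))
    b_ub.append(W @ v + b)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 3)
W_repaired = W + res.x[:2]
# After the repair, every vertex (hence every polytope point) satisfies the spec.
print(W_repaired, (V @ W_repaired + b).min())
```

The real APRNN formulation handles multiple layers and nonlinear activations; the sketch above only captures the vertex-based convexity argument for a single affine layer.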
SyReNN is a library for analyzing deep neural networks by enumerating the linear regions of piecewise-linear functions (e.g., ReLU).
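The linear-region idea is easiest to see on a one-dimensional input line (a minimal sketch with made-up weights, not the SyReNN API): along the line x(t) = lo + t·(hi − lo), each ReLU pre-activation is affine in t, so the points where a pre-activation crosses zero partition the segment into intervals on which the network is exactly linear.

```python
# Enumerate the linear regions of a one-hidden-layer ReLU net restricted
# to a 1-D input segment (conceptual sketch, weights are illustrative).
import numpy as np

W1 = np.array([[1.0], [-1.0], [2.0]])   # hidden weights (3 neurons, 1 input)
b1 = np.array([-0.5, 0.3, -1.0])

lo, hi = np.array([0.0]), np.array([1.0])   # segment endpoints in input space

def relu_breakpoints(W1, b1, lo, hi):
    """t in (0, 1) where some pre-activation crosses zero along the line."""
    c = W1 @ lo + b1          # pre-activation at t = 0
    d = W1 @ (hi - lo)        # slope of each pre-activation in t
    ts = set()
    for ci, di in zip(c, d):
        if di != 0:
            t = -ci / di      # zero crossing of ci + t * di
            if 0 < t < 1:
                ts.add(t)
    return sorted(ts)

bps = relu_breakpoints(W1, b1, lo, hi)
endpoints = [0.0] + bps + [1.0]   # linear-region boundaries along the segment
print(endpoints)
```

Between consecutive endpoints every ReLU keeps a fixed on/off pattern, so the network restricted to that interval is a single affine function; SyReNN generalizes this enumeration to two-dimensional input subspaces and deeper networks.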
DICE* is a formally verified implementation of the DICE measured-boot protocol, written in F* and proved functionally correct, memory-safe, and resistant to timing- and cache-based side channels. A key component of DICE* is a verified certificate-creation library for a fragment of X.509.