2025 Amazon Trainium Fellows
We are proud to introduce the
14 recipients of the Amazon Trainium Fellowship at UCLA
. As part of Amazon’s
$110M
Build on Trainium
investment program, these scholars are at the forefront of accelerating AI innovation.
By leveraging the
AWS Trainium
architecture, our fellows are pushing the boundaries of high-performance computing—from optimizing large-scale distributed systems to advancing complex quantum circuit simulations. This partnership empowers UCLA’s brightest minds to move beyond the constraints of traditional GPU systems, fostering a new era of open-source contribution and state-of-the-art AI research.

Yuanzhou Chen
ADVISOR: Wei Wang
Computer Science
I previously worked on the theory of reinforcement learning and deep learning. For the past three years, my research has mainly focused on the application of AI to scientific discovery. Currently, I’m exploring the development of implicit, high-level thinking for causal LMs, aiming to address both the context-window-length and reasoning-depth bottlenecks for more scalable and intelligent LLMs.
My website

Quan Do
ADVISOR: Jens Palsberg
Computer Science
Quan Do’s research focuses on the intersection of quantum computing, compiler design, and hardware-aware optimization. He aims to bridge the gap between high-level programming abstractions and the physical constraints of modern computational hardware.
My website

Honglin He
ADVISOR: Bolei Zhou
Computer Science
Honglin He’s research centers on Embodied AI and the development of large-scale simulation platforms. His work aims to bridge the gap between virtual perception and physical interaction.
My website

Zifan He
ADVISOR: Jason Cong
Computer Science
My research focuses on algorithm-hardware co-design for efficient machine learning inference, with an emphasis on memory optimization to enable high-quality and low-latency decoding for large language models. I am particularly interested in mapping model memory structures to the system and hardware memory hierarchy of Trainium devices, combined with intelligent control mechanisms for efficient streaming-context processing.
My website

Wenbo Hu
ADVISOR: Kai-Wei Chang
Computer Science
My primary research interest lies at the intersection of vision, language, and agentic AI. In particular, I have worked on 2D and 3D vision-language models (VLMs) for multimodal understanding, reasoning, reconstruction, generation, and embodied tasks. I build scalable and efficient multimodal systems that can run on various machines, including GPUs and Trainium.
My website

Heyang Jiang
ADVISOR: Baharan Mirzasoleiman
Computer Science
My research interests lie in the domains of Deep Learning and Machine Learning, with a specific focus on their applications in diffusion and visual generation.
My website

Weikai Li
ADVISOR: Yizhou Sun
Computer Science
My research focuses on AI for chip design and interpretable AI, particularly large language models and graph neural networks. Supported by Trainium, I explore cognitive modeling with LLMs and investigate how contextual framing influences an LLM’s perceived worldview.
My website

Ben Limpanukorn
ADVISOR: Miryung Kim
Computer Science
Ben’s research focuses on reducing developer effort for software testing and engineering through automated testing and program synthesis. Building on his work in extracting repair rules and synthesizing custom mutations to test complex systems, he is currently investigating the potential to automate the synthesis/transpilation of NKI kernels for AWS Trainium.
My website

Yifeng Liu
ADVISOR: Quanquan Gu
Computer Science
My research interests lie in developing efficient algorithms for training LLMs, ranging from optimization, scaling laws, and architecture design in the pretraining stage to reinforcement learning with verifiable rewards (RLVR) in post-training.
My website

Siyuan Miao
ADVISOR: Lei He
Electrical and Computer Engineering
Siyuan Miao’s interests lie at the intersection of computer architecture, AI compilers, and efficient deep learning. His research focuses on optimizing the deployment of large-scale neural networks on specialized hardware.
My website

Po-Nien Kung
ADVISOR: Nanyun Peng
Computer Science
My primary research interests center around natural language processing and machine learning. The core motivation behind my research is to develop general-purpose AI systems that can seamlessly integrate into everyday life. These systems need to be efficient, interpretable, controllable, and aligned with human preferences.
My website

Yidou Weng
ADVISOR: Guy Van den Broeck
Computer Science
My research develops tractable probabilistic models for controllable text generation, combining formal guarantees with neural flexibility. I am currently exploring building reliable world models for embodied AI, where scaling surrogate model training on platforms like Trainium enables principled planning under complex, structured constraints.
My website

Yun Zhang
ADVISOR: Jiaqi Ma
Civil & Environmental Engineering
My research focuses on human-centric and trustworthy physical AI agents that can perceive, reason, and act in complex real-world environments. I develop scalable Vision–Language–Action models for robotics and autonomous systems, leveraging large multimodal models and Amazon Trainium to enable reliable long-horizon decision making.
My website

Hengguang Zhou
ADVISOR: Cho-Jui Hsieh
Computer Science
My research focuses on advancing post-training methods for multimodal large language models, particularly on eliciting reasoning and agentic behaviors through reinforcement learning and improving how models understand and interpret the physical world from diverse sensory inputs, enabled by scaled experimentation on Trainium.
My website
