Hello! I am an Assistant Professor in the Computer Science and Engineering (CSE) Department at the University of California, Riverside (UCR). I received my Ph.D. from Carnegie Mellon University, where I was advised by Professor Guy Blelloch. Before that, I received my Bachelor’s degree in Computer Science from Tsinghua University. [CV]

My research interests include broad topics in the theory and practice of parallel computing, including algorithms, data structures, frameworks, implementations, programming tools and their applications. My work involves both designing algorithms with improved asymptotic bounds and developing efficient solutions to large-scale real-world problems. You can find more information about our research group here.

I am teaching CS142 in Winter 2024.

I will be teaching CS218 in Spring 2024.

This is the homepage of my lovely husband.

Our Research Group

Yan Gu and I are leading the Parallel Algorithms Lab at UCR (UCRPAL). We are also part of the Algorithms and Computational Biology Lab (i.e., the theory group) at UCR.

If you are interested in a Ph.D. position in our group, or working on a Master’s Project with me, you can find more information here and here.


Our group also runs the annual UCR Programming Challenge (UCRPC), which usually takes place at the beginning of the Fall quarter. We welcome everyone inside and outside UCR to join!

View My Publication List:

Google Scholar   DBLP   By Year

Ph.D. Thesis:

Join-based Parallel Balanced Binary Trees. [Thesis]

Selected Research Topics:

A full list of my publications can be found here.

I have been working on parallel and concurrent tree structures, both in theory and in practice. In particular, I developed the join-based algorithms on trees, which build all other tree algorithms on a single primitive function, join. Apart from join itself, all tree algorithms are generic across multiple balancing schemes. The tree structure is highly parallel, work-efficient, safe for concurrency, persistent (and functional), and supports a full interface of commonly used functions. This tree data structure was later referred to as the P-tree (Parallel tree).

The tree structure is implemented in a library called PAM, which has been applied to and tested in various applications. See my publication list with tags #join-based tree algorithms, #the PAM library, and #P-trees, or see all related publications here.

Download PAM   Tutorial Video   Tutorial Slides

Related Wikipedia Pages:

The join-based algorithms are also used in Hackage (the Haskell central package archive) Data.Set and Data.Map, and C-trees (a compressed purely-functional search tree structure) in Aspen (a graph-streaming framework).

I’m also interested in designing parallel algorithms from sequential iterative algorithms. The goal is to identify the “true dependences” among the iterations/objects in an algorithm, and to find efficient algorithms to parallelize them. These include algorithms in the randomized incremental construction (RIC) framework, dynamic programming algorithms, greedy algorithms, graph algorithms, etc. We show that many of these algorithms are highly parallel, with shallow dependence depth. See my publication list with tag #parallelizing sequential iterative algorithms, or see all related publications here.
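A classic example of this idea is greedy maximal independent set (MIS) over a random vertex order: a vertex truly depends only on its earlier-ranked neighbors, so all currently "ready" vertices can be decided together in rounds. The sketch below is a sequential simulation of those rounds (hypothetical names, not a published implementation; each round would be one parallel step, and the number of rounds is the dependence depth, which is O(log n) w.h.p. for a random order).

```cpp
#include <vector>

// Vertex states for greedy MIS.
enum State { UNDECIDED, IN_MIS, OUT };

// graph: adjacency lists; priority[v]: v's rank in the random order
// (smaller rank = processed earlier by the sequential greedy algorithm).
// Fills `state` with the same MIS the sequential greedy would produce,
// and returns the number of rounds taken (the dependence depth).
int greedy_mis_rounds(const std::vector<std::vector<int>>& graph,
                      const std::vector<int>& priority,
                      std::vector<State>& state) {
  int n = (int)graph.size();
  state.assign(n, UNDECIDED);
  int rounds = 0, undecided = n;
  while (undecided > 0) {
    ++rounds;
    // A vertex is "ready" when no earlier-ranked neighbor is undecided.
    // Ready vertices are pairwise non-adjacent, so in a real parallel
    // implementation this loop is a fully parallel step.
    std::vector<int> winners;
    for (int v = 0; v < n; ++v) {
      if (state[v] != UNDECIDED) continue;
      bool ready = true;
      for (int u : graph[v])
        if (state[u] == UNDECIDED && priority[u] < priority[v]) {
          ready = false;
          break;
        }
      if (ready) winners.push_back(v);
    }
    for (int v : winners) {
      state[v] = IN_MIS; --undecided;
      for (int u : graph[v])
        if (state[u] == UNDECIDED) { state[u] = OUT; --undecided; }
    }
  }
  return rounds;
}
```

The output is identical to the sequential greedy algorithm's; only the schedule changes, which is exactly the point: the sequential order over-serializes, and the true dependence structure is much shallower.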

I have been working on many graph algorithms in both the sequential and parallel settings. These include single-source shortest paths (SSSP), graph metric embedding, strongly connected components (SCC), biconnected components, and applications in data mining and computational biology. See my publication list with tag #graph algorithms, or see all related publications here.

I have been working on algorithms in computational geometry, most of them parallel. Some are also write-efficient. The topics include range, segment, interval, and rectangle queries in low dimensions, Delaunay triangulation, closest pair, smallest enclosing disk, convex hull, nearest neighbor search, etc. Many of them are based on efficient parallel tree structures, while the others use a parallel randomized incremental construction framework that we proposed. See my publication list with tag #geometry algorithms, or see all related publications here.

I have been working on write-efficient algorithms, which optimize the overall (asymptotic) cost in a setting where writes to memory are more expensive than reads. This research is motivated by the new Non-Volatile Memory (NVM) technology. See my publication list with tag #write-efficient algorithms, or see all related publications here.
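One common way to formalize this setting (a sketch of the asymmetric cost model, stated here for illustration) is to charge unit cost per read and a parameter ω > 1 per write:

```latex
% Asymmetric cost model: reads cost 1, writes cost \omega > 1.
% An algorithm performing R reads and W writes has total cost
Q \;=\; R \;+\; \omega \cdot W
% so trading extra reads for fewer writes can reduce Q when \omega is large.
```

Under this measure, two algorithms with the same classical work can differ by up to a factor of ω, which is what makes redesigning algorithms to reduce writes worthwhile on NVM.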

Other Topics

I have also been working on several other research topics, including parallel sorting and semisorting, data mining on social networks, and computational biology.