In the era of "big data," numerical linear algebra operations on large matrices underlie many applications, ranging from data mining to large-scale simulations and machine learning. Monte Carlo methods, which are based on statistical sampling, exhibit many attractive properties, including fast approximate results, memory efficiency, reduced matrix element accesses, intrinsic parallelism, and inherent fault tolerance. As a result, there has been a recent increase of interest in using Monte Carlo methods to carry out computations on large matrices.
We first revisit the classical Ulam-von Neumann method for solving linear operator equations and derive a necessary and sufficient condition for its convergence, clarifying a rule of thumb about the use of the Neumann series that has stood in Monte Carlo theory for the 60 years since Ulam and von Neumann proposed the original method. Nevertheless, the slow convergence of the Ulam-von Neumann scheme limits its applicability to general linear systems. We then turn to embedding Monte Carlo sampling into Krylov subspace methods, where the extreme eigenvalues/eigenvectors of the large coefficient matrix are estimated and continuously refined over the course of the iterations via importance sampling. These approximate eigenvectors are then injected into the Krylov subspace for deflation, accelerating the convergence of the linear solvers. The resulting Krylov subspace solver with Monte Carlo deflation is memory-bounded, matrix pass-efficient, and can be implemented efficiently on CPU-coprocessor (CPU-GPU or CPU-Xeon Phi) architectures. Monte Carlo methods for approximating dominant eigenvalues/eigenvectors, computing low-rank approximations, and providing fault tolerance in numerical linear algebra, as well as applications in bioinformatics, image segmentation, and matrix completion, are also discussed.
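To illustrate the random-walk idea behind the Ulam-von Neumann method, the sketch below estimates the solution of x = Hx + b by sampling terms of the Neumann series x = Σₖ Hᵏb with killed random walks. It is a minimal sketch, assuming ρ(|H|) < 1 so the series converges; the function name, the fixed kill probability, and the choice of transition probabilities proportional to |H_ij| are illustrative assumptions, not the exact scheme or convergence condition analyzed in the talk.

```python
import numpy as np

def ulam_von_neumann(H, b, n_walks=20_000, p_kill=0.2, seed=0):
    """Estimate x solving x = H x + b via killed random walks.

    Sketch only: assumes the Neumann series sum_k H^k b converges.
    Each walk starts at row i, tallies w * b[state] at every visited
    state, and moves to column j with probability |H_ij| / s_i, where
    s_i is the i-th absolute row sum; the weight update keeps the
    estimator unbiased despite sampling from |H| and killing walks.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    s = np.abs(H).sum(axis=1)                      # absolute row sums
    safe = np.where(s == 0, 1.0, s)                # avoid divide-by-zero
    P = np.abs(H) / safe[:, None]                  # transition probabilities
    x = np.zeros(n)
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, w = i, 1.0
            while True:
                acc += w * b[state]                # add this Neumann-series term
                if s[state] == 0 or rng.random() < p_kill:
                    break                          # walk is absorbed
                nxt = rng.choice(n, p=P[state])
                # Importance-weight correction for the sampled transition.
                w *= np.sign(H[state, nxt]) * s[state] / (1.0 - p_kill)
                state = nxt
        x[i] = acc / n_walks
    return x
```

Averaging over many independent walks per component gives x without ever factoring the matrix, and each component of x can be estimated independently and in parallel, which is the source of the memory efficiency and intrinsic parallelism noted above.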
Dr. Yaohang Li is an Associate Professor in the Department of Computer Science at Old Dominion University. His research interests are in computational biology and scientific computing. He received his Ph.D. and M.S. degrees in Computer Science from Florida State University in 2003 and 2000, respectively. After graduation, he worked at Oak Ridge National Laboratory as a research associate for a short period of time. Before joining ODU, he was an associate professor in the Computer Science Department at North Carolina A&T State University. He is the recipient of a Ralph E. Powe Award and an NSF CAREER Award.