Low rank approximation of positive semidefinite symmetric matrices using Gaussian elimination and volume sampling
Positive semidefinite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense, so algorithms that solve systems with such a matrix can be very costly. A core idea for reducing computational complexity is to approximate the matrix by one of low rank. The optimal and well-understood choice is based on the eigenvalue decomposition of the matrix; unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination, but they require pivoting. We show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formulas lead to ratios of elementary symmetric polynomials of the eigenvalues. We discuss some new and old bounds and include several examples where an expected error norm can be computed exactly.
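The Gaussian-elimination approach the abstract refers to can be sketched as a partial Cholesky factorization with greedy diagonal pivoting; the second helper computes the elementary symmetric polynomials of the eigenvalues that enter the error formulas. Function names, and the greedy pivoting rule, are illustrative assumptions here, not the paper's exact algorithm.

```python
import numpy as np

def pivoted_cholesky(A, k):
    """Rank-k approximation A ~ L @ L.T of a symmetric PSD matrix via
    Gaussian elimination with greedy diagonal pivoting (an illustrative
    sketch, not necessarily the paper's exact scheme)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, k))
    d = np.diag(A).copy()      # residual diagonal of the Schur complement
    piv = []
    for j in range(k):
        p = int(np.argmax(d))  # pivot: largest remaining diagonal entry
        piv.append(p)
        # eliminate column p of the current Schur complement
        L[:, j] = (A[:, p] - L[:, :j] @ L[p, :j]) / np.sqrt(d[p])
        d = d - L[:, j] ** 2   # update residual diagonal
        d[piv] = 0.0           # never pivot on the same index twice
    return L, piv

def elementary_symmetric(eigs):
    """Return e_0, ..., e_n of the given eigenvalues via Vieta's formulas:
    prod_i (x + lam_i) = sum_j e_j(lam) * x**(n - j)."""
    return np.poly(-np.asarray(eigs, dtype=float))
```

With k = n the factorization is exact for a positive definite matrix; for k < n, L @ L.T is a low-rank approximation of the kind whose averaged error the abstract characterizes through ratios of the e_j.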