
On the Nyström method for approximating a Gram matrix for improved kernel-based learning. (English) Zbl 1222.68186

Summary: A problem for many kernel-based methods is that the amount of computation required to find the solution scales as \(O(n^{3})\), where \(n\) is the number of training examples. We develop and analyze an algorithm to compute an easily interpretable low-rank approximation to an \(n \times n\) Gram matrix \(G\) such that computations of interest may be performed more rapidly. The approximation is of the form \(\tilde{G}_{k} = CW_{k}^{+}C^{T}\), where \(C\) is a matrix consisting of a small number \(c\) of columns of \(G\) and \(W_{k}\) is the best rank-\(k\) approximation to \(W\), the matrix formed by the intersection of those \(c\) columns of \(G\) with the corresponding \(c\) rows of \(G\). An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we use a judiciously chosen, data-dependent nonuniform probability distribution. Let \(\|\cdot\|_{2}\) and \(\|\cdot\|_{F}\) denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let \(G_{k}\) be the best rank-\(k\) approximation to \(G\). We prove that by choosing \(O(k/\epsilon^{4})\) columns, \[ \| G-CW_{k}^{+}C^{T}\|_{\xi} \leq \| G-G_{k}\|_{\xi} + \epsilon \sum_{i=1}^{n} G_{ii}^{2}, \] both in expectation and with high probability, for both \(\xi = 2, F\), and for all \(k: 0 \leq k \leq \text{rank}(W)\). This approximation can be computed using \(O(n)\) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nyström method from integral equation theory are discussed.
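A minimal NumPy sketch of the approximation described in the summary may help make the construction concrete. It assumes column-sampling probabilities \(p_i \propto G_{ii}^{2}\) (consistent with the \(\epsilon \sum_i G_{ii}^{2}\) term in the error bound) and the common \(1/\sqrt{c\,p_i}\) rescaling of sampled columns; these conventions and the function name are illustrative, not a verbatim transcription of the paper's algorithm.

```python
import numpy as np

def nystrom_approximation(G, c, k, rng=None):
    """Sketch of the column-sampling Nystrom approximation G ~ C W_k^+ C^T.

    Columns are drawn with the data-dependent probabilities p_i = G_ii^2 / sum_j G_jj^2
    (assumed here as the nonuniform distribution the summary refers to), and the sampled
    columns are rescaled by 1/sqrt(c * p_i).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = G.shape[0]

    # Nonuniform, data-dependent sampling probabilities.
    diag_sq = np.diag(G) ** 2
    p = diag_sq / diag_sq.sum()

    # Sample c column indices with replacement and form the rescaled C and W.
    idx = rng.choice(n, size=c, replace=True, p=p)
    scale = 1.0 / np.sqrt(c * p[idx])
    C = G[:, idx] * scale                               # n x c
    W = G[np.ix_(idx, idx)] * np.outer(scale, scale)    # c x c

    # Best rank-k approximation to W via its eigendecomposition,
    # then its pseudoinverse W_k^+.
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(vals)[::-1][:k]
    vals_k, vecs_k = vals[order], vecs[:, order]
    keep = np.abs(vals_k) > 1e-12
    Wk_pinv = (vecs_k[:, keep] / vals_k[keep]) @ vecs_k[:, keep].T

    # Low-rank approximation \tilde{G}_k = C W_k^+ C^T.
    return C @ Wk_pinv @ C.T
```

The point of the construction is that only the \(c\) sampled columns of \(G\) are ever needed densely, so the approximation can be formed without materializing or decomposing the full \(n \times n\) Gram matrix.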

MSC:

68T05 Learning and adaptive systems in artificial intelligence
65F30 Other matrix algorithms (MSC2010)