Abstract: Two optimal orthogonalization processes are devised to orthogonalize, possibly approximately, the columns of a very large and possibly sparse matrix A ∈ ℂ^{n×k}. Algorithmically, the aim is at each step to optimally decrease the nonorthogonality of all the columns of A. One process relies on translated small-rank corrections. The other is a polynomial orthogonalization process for performing the Löwdin orthogonalization. The steps rely on iterative methods, preferably combined with preconditioning, which can have a dramatic effect on how fast the nonorthogonality decreases. The speed of orthogonalization depends on how bunched the singular values of A are, modulo the number of steps taken. These methods put the steps of the Gram–Schmidt orthogonalization process into perspective regarding their (lack of) optimality. The constructions are entirely operator theoretic and can be extended to infinite-dimensional Hilbert spaces.
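
For illustration only, here is a minimal numerical sketch of the common target of both processes, the Löwdin (symmetric) orthogonalization Q = A(A*A)^{-1/2}, computed in two standard ways: directly from a thin SVD, and with the classical Newton–Schulz iteration as a stand-in for a polynomial orthogonalization process. The code, function names, and parameter choices are assumptions of this sketch and are not taken from the paper; in particular, Newton–Schulz is a well-known polynomial scheme for the unitary polar factor, not necessarily the process devised here. Its convergence is fast exactly when the singular values of A are bunched together, matching the dependence stated above.

    # Sketch (not the paper's algorithm): Löwdin orthogonalization of the
    # columns of A via SVD and via the Newton–Schulz polynomial iteration.
    import numpy as np

    def loewdin_svd(A):
        # Closest matrix with orthonormal columns in the Frobenius norm:
        # Q = A (A^* A)^{-1/2} = U V^*, where A = U S V^* is a thin SVD.
        U, _, Vh = np.linalg.svd(A, full_matrices=False)
        return U @ Vh

    def loewdin_newton_schulz(A, steps=30):
        # Polynomial iteration X <- X (3I - X^* X)/2; it converges to the
        # unitary polar factor once the singular values of the scaled start
        # lie in (0, sqrt(3)), and converges quickly when they are bunched.
        X = A / np.linalg.norm(A, 2)          # scale so all singular values <= 1
        k = A.shape[1]
        for _ in range(steps):
            X = X @ (1.5 * np.eye(k) - 0.5 * (X.conj().T @ X))
        return X

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 20))
    Q1, Q2 = loewdin_svd(A), loewdin_newton_schulz(A)
    print(np.linalg.norm(Q1.conj().T @ Q1 - np.eye(20)))  # ~1e-15: orthonormal columns
    print(np.linalg.norm(Q1 - Q2))                        # small once the iteration has converged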