Local Convergence of Adaptively Regularized Tensor Methods

Published in arXiv preprint, 2025

We extend the convergence theory of high-order tensor methods to fully adaptive settings, providing the first sharp local rates for pth-order adaptive regularization methods (ARp) without requiring knowledge of the Lipschitz constant. Dealing with nonconvex local models, however, brings new challenges: we demonstrate that for p > 2, if the step is taken as the global minimizer of the subproblem, then iterations are not guaranteed to be eventually successful, even asymptotically.
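For reference, the analysis follows the standard adaptive-regularization (ARp) template; the notation below is the usual one from that literature rather than anything fixed by this abstract. At the iterate $x_k$, the step $s_k$ (approximately) minimizes the regularized $p$th-order Taylor model

$$
m_k(s) \;=\; \sum_{j=0}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j \;+\; \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
$$

and, in one common variant, the iteration is declared successful when the ratio $\rho_k = \frac{f(x_k) - f(x_k + s_k)}{f(x_k) - m_k(s_k)}$ of achieved to predicted decrease exceeds a threshold $\eta \in (0,1)$, in which case $\sigma_k$ is decreased; otherwise the step is rejected and $\sigma_k$ is increased. The claim above concerns exactly this success test: with the global minimizer as $s_k$, success need not eventually hold at every iteration.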

The upshot is that one cannot simply take the global minimizer of the subproblem. We show that the "right" local minimizer must be selected to preserve pth-order local convergence; otherwise, the superlinear rate degrades. We also confirm that adaptive tensor methods handle degenerate problems, provided the order p is sufficiently large.
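To make the minimizer-selection point concrete, here is a small one-dimensional illustration (the coefficients are hypothetical and purely illustrative, not taken from the paper): with a tiny gradient but a strongly negative third derivative, the regularized $p = 3$ model has a local minimizer near $s = 0$ and a far-away global minimizer, so stepping to the global one abandons the neighborhood where the fast local rate applies.

```python
import numpy as np

# Illustrative 1-D regularized third-order (p = 3) model
#   m(s) = g*s + (1/2)*h*s**2 + (1/6)*t*s**3 + (sigma/4)*s**4.
# All coefficients below are hypothetical, chosen so the model is nonconvex.

def model_value(s, g, h, t, sigma):
    return g * s + 0.5 * h * s**2 + (t / 6.0) * s**3 + (sigma / 4.0) * s**4

def local_minimizers(g, h, t, sigma):
    """All local minimizers of m: real roots of m'(s) with m''(s) > 0."""
    # m'(s) = sigma*s**3 + (t/2)*s**2 + h*s + g  (highest degree first)
    roots = np.roots([sigma, t / 2.0, h, g])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return [s for s in real if h + t * s + 3.0 * sigma * s**2 > 0]

# Tiny gradient, strongly negative third-order term: a second, distant
# minimizer of the model appears far from the current point.
g, h, t, sigma = 1e-3, 1.0, -9.0, 1.0

mins = local_minimizers(g, h, t, sigma)
near = min(mins, key=abs)                                       # "right" step
glob = min(mins, key=lambda s: model_value(s, g, h, t, sigma))  # global step
print(f"local minimizer near 0: {near:+.5f}")   # ~ -0.001, a sensible step
print(f"global minimizer:       {glob:+.5f}")   # ~ +4.27, a huge step
```

Restricting the step to the model minimizer nearest the current point, rather than the global one, is the kind of selection rule the result above calls for; the precise rule analyzed in the paper may differ.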