Levenberg-Marquardt algorithm with numeric Jacobians
1. Examples
If you prefer to jump directly to example code, see the MRPT/samples/optimize-lm example described in section 4 below.
2. Introduction
This page first describes the Levenberg-Marquardt optimization algorithm, then shows how to use its implementation within the mrpt-math
C++ library. All the source code discussed here, the implementation of the algorithm itself, and the examples, are available for download within the MRPT packages.
The following notation and algorithm have been extracted from the report [6].
3. Algorithm description
The Levenberg-Marquardt (LM) method consists of an iterative least-squares minimization of a cost function, based on a modification of the Gauss-Newton method. Let's state the problem formally before defining the algorithm. We will assume that derivatives of the cost function are not available in closed form, so they will be approximated by finite differences.
Let $\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^m$ be a vector of $m$ scalar error functions of the parameter vector:

$$\mathbf{f}(\mathbf{x}) = \left( f_1(\mathbf{x}), \ldots, f_m(\mathbf{x}) \right)^\top$$

with:

- $\mathbf{x} \in \mathbb{R}^n$ the vector of $n$ unknown parameters, and
- $f_i(\mathbf{x})$, $i = 1, \ldots, m$, the individual scalar error (residual) functions.

The function to be minimized is the total squared error:

$$F(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{m} f_i(\mathbf{x})^2 = \frac{1}{2} \mathbf{f}(\mathbf{x})^\top \mathbf{f}(\mathbf{x})$$

We define the Jacobian of the error functions as the $m \times n$ matrix:

$$\left( \mathbf{J}(\mathbf{x}) \right)_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j}, \qquad i = 1, \ldots, m, \quad j = 1, \ldots, n$$

The Hessian of the error function is the $n \times n$ matrix of its second derivatives, which for small residuals can be approximated from the Jacobian alone (Gauss-Newton approximation):

$$\mathbf{H}(\mathbf{x}) \approx \mathbf{J}(\mathbf{x})^\top \mathbf{J}(\mathbf{x})$$

If we do not have closed-form expressions for the derivatives needed for the Jacobian, we can estimate them from finite differences, using some small increment $\Delta_j$ for each individual variable $x_j$:

$$\frac{\partial f_i(\mathbf{x})}{\partial x_j} \approx \frac{f_i(\mathbf{x} + \Delta_j \mathbf{e}_j) - f_i(\mathbf{x})}{\Delta_j}$$

where $\mathbf{e}_j$ denotes the $j$-th unit vector.
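For illustration, this is what such a forward-difference Jacobian estimate can look like in code. This is a self-contained sketch, not MRPT code; the helper name numericJacobian and the toy residual function are made up for this example:

```cpp
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Forward-difference estimate of the m x n Jacobian of f at x,
// using one increment per variable (increments[j] plays the role of Delta_j).
std::vector<std::vector<double>> numericJacobian(
    const std::function<std::vector<double>(const std::vector<double>&)>& f,
    const std::vector<double>& x, const std::vector<double>& increments)
{
    const std::vector<double> f0 = f(x);  // f(x)
    const size_t m = f0.size(), n = x.size();
    std::vector<std::vector<double>> J(m, std::vector<double>(n));
    for (size_t j = 0; j < n; j++)
    {
        std::vector<double> xp = x;
        xp[j] += increments[j];  // x + Delta_j * e_j
        const std::vector<double> fj = f(xp);
        for (size_t i = 0; i < m; i++)
            J[i][j] = (fj[i] - f0[i]) / increments[j];
    }
    return J;
}

int main()
{
    // Toy residual vector f(x) = [x0^2 - 2, x0 + x1, sin(x1)]  (illustrative only)
    auto f = [](const std::vector<double>& x) {
        return std::vector<double>{x[0] * x[0] - 2, x[0] + x[1], std::sin(x[1])};
    };
    const std::vector<double> x = {1.0, 0.5};
    const auto J = numericJacobian(f, x, {1e-6, 1e-6});
    for (const auto& row : J) std::printf("%8.4f %8.4f\n", row[0], row[1]);
}
```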
Then, the LM method minimizes the following linear approximation of the actual error function:

$$\mathbf{f}(\mathbf{x} + \mathbf{h}) \approx \boldsymbol{\ell}(\mathbf{h}) = \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x}) \, \mathbf{h}$$

where the gradient of $F(\mathbf{x})$ is given by:

$$\mathbf{g} = \mathbf{J}(\mathbf{x})^\top \mathbf{f}(\mathbf{x})$$

Now, denote as $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots$ the sequence of iterative estimations of the optimal parameter vector, starting at the initial guess $\mathbf{x}_0$, where each step is obtained from:

$$\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{h}_{lm}, \qquad \left( \mathbf{J}^\top \mathbf{J} + \lambda \mathbf{I} \right) \mathbf{h}_{lm} = -\mathbf{g}$$

The damping factor $\lambda$ is adapted at each iteration: large values produce small steps close to the steepest-descent direction, while small values make the method behave like Gauss-Newton. Typically $\lambda$ is decreased after a step that reduces the error and increased after a step that does not.
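To make the iteration concrete, here is a minimal self-contained sketch of a damped LM loop built from the pieces above. It is written against Eigen as an assumption of this example (it is not the MRPT implementation), and the toy residuals and the halving/doubling schedule for lambda are illustrative choices:

```cpp
#include <Eigen/Dense>
#include <iostream>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// Residual vector f(x) for a toy 2-D problem (illustrative only).
VectorXd residuals(const VectorXd& x)
{
    VectorXd f(2);
    f[0] = x[0] * x[0] + x[1] - 11.0;
    f[1] = x[0] + x[1] * x[1] - 7.0;
    return f;
}

// Forward-difference Jacobian, as in the previous snippet.
MatrixXd numericJacobian(const VectorXd& x, double delta = 1e-6)
{
    const VectorXd f0 = residuals(x);
    MatrixXd J(f0.size(), x.size());
    for (int j = 0; j < x.size(); j++)
    {
        VectorXd xp = x;
        xp[j] += delta;
        J.col(j) = (residuals(xp) - f0) / delta;
    }
    return J;
}

int main()
{
    VectorXd x(2);
    x << 0.0, 0.0;         // initial guess
    double lambda = 1e-3;  // initial damping factor

    for (int iter = 0; iter < 100; iter++)
    {
        const VectorXd f = residuals(x);
        const MatrixXd J = numericJacobian(x);
        const MatrixXd H = J.transpose() * J;  // Gauss-Newton Hessian approximation
        const VectorXd g = J.transpose() * f;  // gradient of F(x)

        // Solve the damped normal equations: (H + lambda*I) h = -g
        const MatrixXd A = H + lambda * MatrixXd::Identity(x.size(), x.size());
        const VectorXd h = A.ldlt().solve(-g);

        if (h.norm() < 1e-10) break;  // converged

        // Accept the step only if it reduces the error, and adapt lambda.
        if (residuals(x + h).squaredNorm() < f.squaredNorm())
        {
            x += h;
            lambda *= 0.5;  // good step: move towards Gauss-Newton
        }
        else
        {
            lambda *= 2.0;  // bad step: move towards gradient descent
        }
    }
    std::cout << "x* = " << x.transpose() << std::endl;
}
```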
4. C++ Implementation
The LM algorithm is implemented in the C++ template class mrpt::math::CLevenbergMarquardtTempl<T>, and there is an example in MRPT/samples/optimize-lm, which is described next.
The type mrpt::math::CLevenbergMarquardt is actually a shortcut for the template instantiation mrpt::math::CLevenbergMarquardtTempl<double>.
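Below is a minimal usage sketch in the spirit of that sample. It is not a verbatim copy: the one-dimensional cost function is illustrative, and the exact execute() signature and TResultInfo field names should be checked against the headers of your MRPT version:

```cpp
#include <mrpt/math/CLevenbergMarquardt.h>
#include <cmath>
#include <iostream>

using namespace mrpt::math;

// Error (cost) function: receives the current estimate x and the fixed data y,
// and returns the vector of errors in out_f. Here it is a 1-D toy function.
static void myCostFunction(
    const CVectorDouble& x, const CVectorDouble& y, CVectorDouble& out_f)
{
    out_f.resize(1);
    out_f[0] = std::cos(x[0]) + 0.1 * x[0] * x[0];  // illustrative only
}

int main()
{
    CVectorDouble optimal_x, initial_x, increments_x, y;

    initial_x.resize(1);
    initial_x[0] = 1.4;  // initial guess

    increments_x.resize(1);
    increments_x[0] = 0.0001;  // finite-difference increments (Delta_j)

    CLevenbergMarquardt::TResultInfo info;
    CLevenbergMarquardt lm;
    lm.execute(optimal_x, initial_x, myCostFunction, increments_x, y, info);

    std::cout << "Iterations executed: " << info.iterations_executed << "\n"
              << "Final squared error: " << info.final_sqr_err << "\n"
              << "Optimal x: " << optimal_x[0] << std::endl;
}
```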
The image below represents the resulting path from the initial guess to the minimum for this example; the equation displayed in it is the one-dimensional error (cost) function being minimized.