What numerical methods are there for least squares fitting?

I have the following code in Matlab:

% Read the measured values from a file
fileID = fopen('input.txt','r');
formatSpec = '%f';
y = fscanf(fileID, formatSpec);
fclose(fileID);

% Build the time axis: one sample every 0.1 s
step = 0.1;
x0 = step;
xn = length(y)*step;
x = x0:step:xn;

% Model with two free coefficients, a and b
fitfunc = 'a + exp(x/b)+x^2/3+x';
startPoints = [-1 -1];

[f2, f2_info] = fit(x', y, fitfunc, 'StartPoint', startPoints)
disp('Coefficient values: ');
coeffvalues(f2)
disp('Forecast value at 600 s: ');
f2(600)

This code fits the data and produces a forecast for the 600th second of the process.

The task is as follows: this code needs to run on an embedded device, so I need to convert it to C++ code. I see several possible approaches:

  1. Search for automatic MATLAB-to-C++ converters.
  2. Find a ready-made fitting library in C++ and rewrite the code around it.
  3. Program the iterative fitting process myself, which means diving into the theory.

Option 1: I tried MATLAB Coder, which automatically converts MATLAB code to C. Unfortunately, MATLAB Coder does not support the fit function.

Option 2 is out: third-party libraries need a lot of memory, and the device simply does not have enough for such luxuries.

That leaves option 3, but I don't understand the theory well. I dug into the MATLAB sources (I looked at the source code of the fit function), but it is written in a heavily object-oriented style that is beyond my competence.

I then asked a question on enSO, where people suggested that the least squares method (OLS) is most likely what is used here.

I searched the Russian- and English-language web, but what I found were mostly analytical solutions, and only for polynomials. Analytical solutions have a drawback: if you change the fitting function, you have to re-derive the derivatives and hard-code them again. So I need a numerical method, so that I can quickly try the fit with different functions.

Then I seem to have found what I need: the Levenberg-Marquardt method. I read the articles on Wikipedia and on machinelearning.ru, but I can't figure out how to apply it in practice.

Maybe there are knowledgeable people here who could briefly describe the Levenberg-Marquardt algorithm as applied to my fitting function 'a + exp(x/b)+x^2/3+x'? Or suggest some other numerical method for finding the coefficients of this nonlinear function.

Author: Дух сообщества, 2016-06-22

1 answer

Just in case, I'll write out the problem statement and where everything comes from, using the two-dimensional case as an example:

  1. You have measurement data Xi, Yi: points taken during calibration and so on.
  2. You have a set of basis functions Fn with which you want to represent the desired dependence; the set of functions is required to be orthogonal. The simplest sets are X^0, X^1, X^2, ... or sin^n(X), cos^n(X).

Problem: you want to find coefficients C0, C1, ..., Cn such that the function Sum(Cn*Fn(Xi)) passes as close as possible to Yi for all i.

Solution: substitute every known Xi into your model, subtract Yi from each resulting value, square it, add everything up, and minimize that sum:

(Sum(Cn*Fn(X0)) - Y0)^2 + (Sum(Cn*Fn(X1)) - Y1)^2 + ... + (Sum(Cn*Fn(Xi)) - Yi)^2 -> min
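In C++ (the asker's target language), that sum for the model from the question, f(x) = a + exp(x/b) + x^2/3 + x, might look like the following minimal sketch; the function names here are illustrative, not from any library:

#include <cmath>
#include <cstddef>
#include <vector>

// The model from the question: f(x) = a + exp(x/b) + x^2/3 + x.
// Note: undefined at b = 0 because of the division inside exp().
double model(double x, double a, double b) {
    return a + std::exp(x / b) + x * x / 3.0 + x;
}

// Sum of squared residuals S(a, b) = sum_i (f(x_i) - y_i)^2,
// i.e. the quantity being minimized.
double sumOfSquares(const std::vector<double>& x,
                    const std::vector<double>& y,
                    double a, double b) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double r = model(x[i], a, b) - y[i];
        s += r * r;
    }
    return s;
}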

To minimize it, take the partial derivatives with respect to each Cn and set them all to zero. This gives you a system of n equations; solving it yields the desired Cn.
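Since the asker specifically wants to change the fitting function without re-deriving derivatives by hand, the partial derivatives can be approximated numerically. A sketch using central differences, reusing model() from the snippet above (the step size h is an assumption you would tune to your parameter scale):

// Numerical partials of the model with respect to a and b via central
// differences; changing model() then requires no hand-derived math.
void jacobianRow(double x, double a, double b,
                 double& dfda, double& dfdb) {
    const double h = 1e-6;  // assumed step size; tune to parameter scale
    dfda = (model(x, a + h, b) - model(x, a - h, b)) / (2.0 * h);
    dfdb = (model(x, a, b + h) - model(x, a, b - h)) / (2.0 * h);
}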

You can solve the resulting system in any way you like. It is often ill-conditioned, especially when the data contain errors, so it is worth choosing methods that cope with that. The Gauss-Newton method for solving such systems is suitable. The "method with the choice of the pivot element" turns out to be just Gaussian elimination with pivoting; it is also worth trying, but it seems to cope worse with ill-conditioned systems. Wikipedia says that the Levenberg-Marquardt method you picked is originally an extension of the Gauss-Newton method.
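Putting the pieces together, here is one possible shape of a Levenberg-Marquardt loop for the two-parameter model, built on model(), sumOfSquares(), and jacobianRow() from the sketches above. With only two coefficients the damped 2x2 system can be solved by Cramer's rule, so no linear-algebra library is needed (which matters given the asker's memory constraints). The iteration limit, tolerances, and damping schedule are illustrative assumptions, not canonical values:

// One Levenberg-Marquardt variant: solve
//   (J^T J + lambda * diag(J^T J)) * delta = -J^T r
// and adapt lambda depending on whether the step helped.
bool levenbergMarquardt(const std::vector<double>& x,
                        const std::vector<double>& y,
                        double& a, double& b) {
    double lambda = 1e-3;                     // initial damping (assumed)
    double prevS = sumOfSquares(x, y, a, b);
    for (int iter = 0; iter < 200; ++iter) {
        // Accumulate J^T J (2x2, symmetric) and J^T r (2x1) over all points.
        double JTJ00 = 0, JTJ01 = 0, JTJ11 = 0, JTr0 = 0, JTr1 = 0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double da, db;
            jacobianRow(x[i], a, b, da, db);
            double r = model(x[i], a, b) - y[i];
            JTJ00 += da * da;  JTJ01 += da * db;  JTJ11 += db * db;
            JTr0  += da * r;   JTr1  += db * r;
        }
        // Damp the diagonal, then solve the 2x2 system by Cramer's rule.
        double A00 = JTJ00 * (1.0 + lambda);
        double A11 = JTJ11 * (1.0 + lambda);
        double det = A00 * A11 - JTJ01 * JTJ01;
        if (std::fabs(det) < 1e-30) return false;        // singular system
        double d0 = (-JTr0 * A11 + JTr1 * JTJ01) / det;
        double d1 = (-JTr1 * A00 + JTr0 * JTJ01) / det;
        double S = sumOfSquares(x, y, a + d0, b + d1);
        if (S < prevS) {              // step improved the fit: accept it
            a += d0;  b += d1;
            lambda *= 0.5;            // trust the Gauss-Newton direction more
            if (prevS - S < 1e-12 * (1.0 + S)) return true;  // converged
            prevS = S;
        } else {
            lambda *= 2.0;            // reject the step, increase damping
        }
    }
    return false;  // no convergence within the iteration budget
}

Called with the question's start point (a = -1, b = -1), a successful run leaves the fitted coefficients in a and b, and the forecast for the 600th second is then model(600.0, a, b).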

Author: Andrey Golikov, 2016-06-22 09:23:20