MATLAB:Fitting
Latest revision as of 13:36, 27 September 2017

This document contains examples of polynomial fitting, general linear regression, and nonlinear regression. In each section, there will be example code that may come in useful for later courses. The example code is based on the existence of a file in the same directory called Cantilever.dat that contains two columns of data - the first is an amount of mass (in kg) placed at the end of a beam and the second is a displacement, measured in inches, at the end of the beam. For EGR 103, this file is:

0.000000        0.005211
0.113510002     0.158707
0.227279999     0.31399
0.340790009     0.474619
0.455809998     0.636769
0.569320007     0.77989
0.683630005     0.936634
0.797140015     0.999986

Polynomial Fitting

Polynomial fits are those where the dependent data is related to some set of integer powers of the independent variable. MATLAB's built-in polyfit command can determine the coefficients of a polynomial fit.

Example Code

In the example code below, N determines the order of the fit. Not much else would ever need to change.

%% Initialize the workspace
clear; format short e
figure(1); clf

%% Load and manipulate the data
load Cantilever.dat

Force = Cantilever(:,1) * 9.81;
Displ = Cantilever(:,2) * 2.54 / 100;

%% Rename and create model data
x = Force;
y = Displ;
xmodel = linspace(min(x), max(x), 100);

%% Determine the polynomial coefficients
N = 1;
P = polyfit(x, y, N)

%% Generate estimates and model
yhat   = polyval(P, x);
ymodel = polyval(P, xmodel);

%% Calculate statistics
% Compute sum of the squares of the data residuals
St = sum(( y - mean(y) ).^2)

% Compute sum of the squares of the estimate residuals
Sr = sum(( y - yhat ).^2)

% Compute the coefficient of determination
r2 = (St - Sr) / St

%% Generate plots
plot(x,      y,      'ko',...
     x,      yhat,   'ks',...
     xmodel, ymodel, 'k-');
 xlabel('Independent Value')
 ylabel('Dependent Value')
 title('Dependent vs. Independent and Model')
 legend('Data', 'Estimates', 'Model', 'location', 'best')
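For readers who want to check the workflow above outside of MATLAB, here is a minimal sketch of the same steps in Python with NumPy. The data are made up (an exact line, so the coefficient of determination should come out essentially 1); the variable names mirror the MATLAB code.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # made-up independent data
y = 0.8 * x + 0.005                        # exact line, so r2 should be ~1

N = 1                                      # order of the fit
P = np.polyfit(x, y, N)                    # highest-order coefficient first
yhat = np.polyval(P, x)                    # estimates at the data points

St = np.sum((y - y.mean())**2)             # sum of squares of data residuals
Sr = np.sum((y - yhat)**2)                 # sum of squares of estimate residuals
r2 = (St - Sr) / St
```

As in MATLAB, `polyfit` returns the coefficients with the highest power first, so `P[0]` is the slope and `P[-1]` is the constant term.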

General Linear Regression

General linear regression involves finding some set of coefficients for fits that can be written as:

\( \hat{y}(x)=\sum_{j=1}^{M}a_j\phi_j(x) \)

where the \(a_j\) are the coefficients of the fit and the \(\phi_j\) are the specific functions of the independent variable that make up the fit.
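Each column of the regression matrix holds one basis function \(\phi_j\) evaluated at the data points, and the least-squares coefficients solve the overdetermined system \(A\,a\approx y\) (which is what MATLAB's backslash operator computes in the example code below). A sketch of the same computation in Python with NumPy, using a made-up 'trig with offset' data set:

```python
import numpy as np

x = np.linspace(0.0, 6.0, 40)              # made-up independent data
y = 1.5*np.cos(x) - 0.7*np.sin(x) + 2.0    # exact 'trig with offset' data

# Each column of A is one basis function phi_j evaluated at the data points.
A = np.column_stack([np.cos(x), np.sin(x), np.ones_like(x)])

# Least-squares solution of A a ~= y (the analogue of MATLAB's A\y).
a, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noiseless data the recovered coefficients match the generating values exactly (to rounding).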

Example Code

In the example code below, there are several examples of general linear fits of one variable.

%% Initialize the workspace
clear; format short e
figure(1); clf

%% Load and manipulate the data
load Cantilever.dat

Force = Cantilever(:,1) * 9.81;
Displ = Cantilever(:,2) * 2.54 / 100;

%% Rename and create model data
x = Force;
y = Displ;
xmodel = linspace(min(x), max(x), 100);

%% Define model equation and A matrix
model = 'linear'
switch model
    case 'linear'
        yeqn = @(coefs, x) coefs(1)*x.^1 + coefs(2)*x.^0;
        A    =                     [x.^1            x.^0];
    case 'quadratic'
        yeqn = @(coefs, x) coefs(1)*x.^2 + coefs(2)*x.^1 + coefs(3)*x.^0;
        A    =                     [x.^2            x.^1            x.^0];
    case 'line through origin'
        yeqn = @(coefs, x) coefs(1)*x.^1;
        A    =                     [x.^1];
    case 'trig'
        yeqn = @(coefs, x) coefs(1)*cos(x) + coefs(2)*sin(x);
        A    =                     [cos(x)            sin(x)];
    case 'trig with offset'
        yeqn = @(coefs, x) coefs(1)*cos(x) + coefs(2)*sin(x) + coefs(3)*x.^0;
        A    =                     [cos(x)            sin(x)            x.^0];
    otherwise
        error('Don''t know the model...')
end

%% Determine the function coefficients
MyCoefs = A\y

%% Generate estimates and model
yhat   = yeqn(MyCoefs, x);
ymodel = yeqn(MyCoefs, xmodel);

%% Calculate statistics
% Compute sum of the squares of the data residuals
St = sum(( y - mean(y) ).^2)

% Compute sum of the squares of the estimate residuals
Sr = sum(( y - yhat ).^2)

% Compute the coefficient of determination
r2 = (St - Sr) / St

%% Generate plots
plot(x,      y,      'ko',...
     x,      yhat,   'ks',...
     xmodel, ymodel, 'k-');
 xlabel('Independent Value')
 ylabel('Dependent Value')
 title('Dependent vs. Independent and Model')
 legend('Data', 'Estimates', 'Model', 'location', 'best')

Multidimensional

General linear fitting works for multidimensional functions as well. In the following code, there are two independent variables and one dependent. The data set is built at the start of the code (rather than loaded):

%% Initialize the workspace
clear; format short e
figure(1); clf

%%% Create data set - this takes the place of loading the set 
%%% for this demonstration; otherwise, the end of this step
%%% should have x1m, x2m, and ym defined as matrices
[x1m, x2m] = meshgrid(0:.1:1, 0:.15:3);
ym = x1m + x1m.*x2m + x2m.^2 + 2*cos(2*pi*x1m);

%% Rename and create vectors and model data
x1v = x1m(:);
x2v = x2m(:);
yv  = ym(:);
[x1model, x2model] = meshgrid(...
    linspace(min(x1v), max(x1v), 50),...
    linspace(min(x2v), max(x2v), 50));

%% Define model equation and A matrix
model = 'plane'
switch model
    case 'plane'
        yeqn = @(coefs, x1e, x2e) coefs(1)*x1e.^1 + coefs(2)*x2e.^1 + coefs(3)*x2e.^0;
        A    =                            [x1v.^1            x2v.^1            x2v.^0];
    case 'General 2nd'
        yeqn = @(c, x1e, x2e) c(1)*x1e.^2 + c(2)*x1e.*x2e + c(3)*x2e.^2 + c(4)*x1e + c(5)*x2e + c(6)*x1e.^0;
        A    =                    [x1v.^2        x1v.*x2v        x2v.^2        x1v        x2v        x1v.^0];
    case 'Right'
        yeqn = @(c, x1e, x2e) c(1)*x1e.^2 + c(2)*x1e.*x2e + c(3)*x2e.^2 + c(4)*x1e + c(5)*x2e + c(6)*x1e.^0 + c(7)*cos(2*pi*x1e);
        A    =                    [x1v.^2        x1v.*x2v        x2v.^2        x1v        x2v        x1v.^0        cos(2*pi*x1v)];

end

%% Determine the function coefficients
MyCoefs = A\yv

%% Generate estimates and model
yhat   = yeqn(MyCoefs, x1v, x2v);
ymodel = yeqn(MyCoefs, x1model, x2model);
        
%% Calculate statistics
St = sum((yv - mean(yv)).^2)
Sr = sum((yv - yhat).^2)
r2 = (St - Sr) / St

%% Generate plots
% Original data
figure(1); clf
meshc(x1m, x2m, ym);
xlabel('x1'); ylabel('x2'); zlabel('y Data')
% Model data
figure(2)
meshc(x1model, x2model, ymodel)
xlabel('x1'); ylabel('x2'); zlabel('y Model')
% Residuals
figure(3)
meshc(x1m, x2m, (ym-yeqn(MyCoefs, x1m, x2m)))
xlabel('x1'); ylabel('x2'); zlabel('Residuals')

Linearized Models

There are three primary nonlinear models which, through a transformation of variables, may yield a linear relationship between the transformed variables: the exponential model, the power-law model, and the saturation-growth model. In the subsections that follow, these are addressed individually by showing the modeling equation, describing a transformation mechanism, and showing the transformed equation in the form of

\( \eta=P(1)*\xi^1+P(2)*\xi^0\,\! \)

where \(\xi\) is the transformed version of the independent variable \(x\) and \(\eta\) is the transformed version of the dependent variable \(y\). In each of the three cases below, then, the polyfit command can be used, with the transformed variables, to find the coefficients of the straight-line fit.

Exponential Model

Exponential models come from relationships where the derivative of the dependent variable is proportional to the value of the dependent variable itself. That is:

\(\begin{align} \frac{dy}{dx}&=ky \end{align}\)

Separating the variables and integrating both sides yields:

\(\begin{align} \frac{1}{y}dy&=k~dx\\ \int_{y_0}^{y}\frac{1}{y}dy&=\int_{x_0}^{x}k~dx\\ \left[\ln(y)\right]_{y_0}^{y}&= \left[kx\right]_{x_0}^{x}\\ \ln(y)-\ln(y_0)&=kx-kx_0\\ \ln\left(\frac{y}{y_0}\right)&=k~(x-x_0) \end{align}\)

Exponentiating both sides gives:

\(\begin{align} \frac{y}{y_0}&=e^{k(x-x_0)}=e^{-kx_0}e^{kx}\\ y&=\frac{y_0}{e^{kx_0}}e^{kx} \end{align}\)

Simplifying the constants yields

\(y=ae^{kx}\,\!\)

where \(a\) is some constant and \(k\) is called the growth rate. Transform by taking the logarithm of each side. Using the natural logarithm is most useful:

\(\begin{align} y&=ae^{kx}\\ \ln\left(y\right)&=\ln\left(ae^{kx}\right)\\ \ln\left(y\right)&=k*x^1+\ln\left(a\right)*x^0\,\! \end{align}\)

meaning the transformed measurement \(\ln\left(y\right)\) can be fit to a straight line function of \(x\). That is, using the transformations:

\(\begin{align} \xi&=x & \eta&=\ln\left(y\right) \end{align}\)

The relationship between the slope and intercept for the transformed variables and the constants in the model equation are:

\(\begin{align} P(1)&=k & P(2)&=\ln\left(a\right)\\ k&=P(1) & a&=e^{P(2)} \end{align}\)
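These relationships can be verified numerically. The following sketch (in Python with NumPy, on made-up noiseless exponential data) fits a line to \(\ln(y)\) versus \(x\) and recovers the original constants:

```python
import numpy as np

a_true, k_true = 2.0, 0.5                  # made-up model constants
x = np.linspace(0.1, 3.0, 30)
y = a_true * np.exp(k_true * x)            # noiseless exponential data

P = np.polyfit(x, np.log(y), 1)            # straight-line fit of ln(y) vs x
k = P[0]                                   # slope is the growth rate
a = np.exp(P[1])                           # exponentiate the intercept
```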

Power Law Model

Power Law models come from situations where the derivative of the dependent variable is proportional to the ratio of the dependent variable to the independent variable. That is:

\( \frac{dy}{dx}=k\frac{y}{x}\,\! \)

where here \(k\) is the constant of proportionality. Separating the variables and integrating both sides yields:

\(\begin{align} \frac{1}{y}dy&=\frac{k}{x}dx\\ \int_{y_0}^{y}\frac{1}{y}dy&=\int_{x_0}^{x}\frac{k}{x}dx\\ \left[\ln(y)\right]_{y_0}^{y}&= \left[k\ln(x)\right]_{x_0}^{x}\\ \ln(y)-\ln(y_0)&=k(\ln(x)-\ln(x_0))\\ \ln\left(\frac{y}{y_0}\right)&=k~\ln\left(\frac{x}{x_0}\right)=\ln\left(\left(\frac{x}{x_0}\right)^k\right) \end{align}\)

Exponentiating both sides gives:

\(\begin{align} \frac{y}{y_0}&=\left(\frac{x}{x_0}\right)^k\\ y&=\frac{y_0}{x_0^k}x^k \end{align}\)

The constants can be simplified to reveal a relationship where the dependent variable is proportional to some power of the independent variable. That is:

\(y=ax^{k}\,\!\)

where \(a\) is some constant and \(k\) is called the scaling exponent. Transform by taking the logarithm of each side. Unlike the transformation of the exponential model, there is generally no mathematical advantage to using one logarithm over another. There is, however, a graphical advantage if the base-10 logarithm is used since MATLAB's semilogy, semilogx, and loglog plots use the base-10 logarithm.

\(\begin{align} y&=ax^k\\ \log_{10}\left(y\right)&=\log_{10}\left(ax^k\right)\\ \log_{10}\left(y\right)&=k*\left(\log_{10}\left(x\right)\right)^1+\log_{10}\left(a\right)*\left(\log_{10}\left(x\right)\right)^0\,\! \end{align}\)

meaning the transformed measurement \(\log_{10}\left(y\right)\) can be fit to a straight line function of \(\log_{10}\left(x\right)\). That is, using the transformations:

\(\begin{align} \xi&=\log_{10}\left(x\right) & \eta&=\log_{10}\left(y\right) \end{align}\)

The relationship between the slope and intercept for the transformed variables and the constants in the model equation are:

\(\begin{align} P(1)&=k & P(2)&=\log_{10}\left(a\right)\\ k&=P(1) & a&=10^{P(2)} \end{align}\)


Saturation-Growth Model

Saturation-Growth models come from situations where the derivative of the dependent variable is proportional to the square of the ratio of the dependent variable to the independent variable. That is:

\( \frac{dy}{dx}=k\frac{y^2}{x^2}\,\! \)

where \(k\) is again the constant of proportionality. Separating the variables and integrating both sides yields:

\(\begin{align} \frac{1}{y^2}dy&=\frac{k}{x^2}dx\\ \int_{y_0}^{y}\frac{1}{y^2}dy&=\int_{x_0}^{x}\frac{k}{x^2}dx\\ \left[-\frac{1}{y}\right]_{y_0}^{y}&= \left[-\frac{k}{x}\right]_{x_0}^{x}\\ -\frac{1}{y}+\frac{1}{y_0}&=-\frac{k}{x}+\frac{k}{x_0}\\ \frac{1}{y}&=\frac{k}{x}+\frac{1}{y_0}-\frac{k}{x_0} \end{align}\)

By finding a common denominator for the right side of this equation and then inverting both sides, you can obtain:

\(\begin{align} \frac{1}{y}&=\frac{kx_0y_0+xx_0-kxy_0}{xx_0y_0}\\ y&=\frac{xx_0y_0}{kx_0y_0+x(x_0-ky_0)} \end{align}\)

Getting this into the form preferred by Chapra[1] requires a bit of work. First, divide the numerator and the denominator by \(x_0-ky_0\):

\(\begin{align} y&=\frac{x\frac{x_0y_0}{x_0-ky_0}}{k\frac{x_0y_0}{x_0-ky_0}+x}\\ \end{align}\)

Then simply rename the constants to

\(\begin{align} \alpha_3&=\frac{x_0y_0}{x_0-ky_0} & \beta_3&=k\frac{x_0y_0}{x_0-ky_0} \end{align}\)

to yield the version in the Chapra book on p. 301:

\(\begin{align} y&=\alpha_3\frac{x}{\beta_3+x}\\ \end{align}\)

This form is more convenient for several reasons. First, in the limit as the independent variable goes to infinity, the dependent variable approaches \(\alpha_3\). Second, when the independent variable is equal to \(\beta_3\), the dependent variable will be half that value. This means that \(\alpha_3\) is the limiting value of \(y\) and \(\beta_3\) represents a measure of the rate at which it gets there.

With respect to the transformation, note that several lines ago there was already a linear relationship - specifically, between the inverses of the dependent and independent variables. That gives a hint as to how to transform the Chapra version of the equation - just flip it:

\(\begin{align} y&=\alpha_3\frac{x}{\beta_3+x}\\ \frac{1}{y}&=\frac{\beta_3+x}{\alpha_3x}\\ \frac{1}{y}&=\frac{\beta_3}{\alpha_3}*\left(\frac{1}{x}\right)^1+\frac{1}{\alpha_3}*\left(\frac{1}{x}\right)^0 \end{align}\)

meaning the transformed measurement \(1/y\) can be fit to a straight line function of \(1/x\). That is, using the transformations:

\(\begin{align} \xi&=\frac{1}{x} & \eta&=\frac{1}{y} \end{align}\)

The relationship between the slope and intercept for the transformed variables and the constants in the model equation are:

\(\begin{align} P(1)&=\frac{\beta_3}{\alpha_3} & P(2)&=\frac{1}{\alpha_3}\\ \alpha_3&=\frac{1}{P(2)} & \beta_3&=P(1)\alpha_3=\frac{P(1)}{P(2)} \end{align}\)
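This transformation can also be verified numerically. The following sketch (Python with NumPy, made-up noiseless saturation-growth data) fits a line between the reciprocals of the variables and recovers \(\alpha_3\) and \(\beta_3\):

```python
import numpy as np

alpha3, beta3 = 4.0, 1.2                   # made-up model constants
x = np.linspace(0.5, 10.0, 30)
y = alpha3 * x / (beta3 + x)               # noiseless saturation-growth data

P = np.polyfit(1.0/x, 1.0/y, 1)            # straight-line fit of 1/y vs 1/x
alpha_fit = 1.0 / P[1]                     # alpha_3 = 1/P(2)
beta_fit  = P[0] / P[1]                    # beta_3  = P(1)/P(2)
```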

Example Code

Note in the example code below that there is more work done in the switch statement than in the above examples. This is primarily because the type of linearization will determine not only the model equation but also the transformation equations into and out of the linearized regime.

%% Initialize the workspace
clear; format short e
figure(1); clf
 
%% Load and manipulate the data
load Cantilever.dat
%  Remove first point since x=0 there...
Cantilever = Cantilever(2:end, :);

Force = Cantilever(:,1) * 9.81;
Displ = Cantilever(:,2) * 2.54 / 100;
 
%% Rename and create model data
x = Force;
y = Displ;
xmodel = linspace(min(x), max(x), 100);

%% Define the model equation; transform the variables; find the linearized fit; transform back
model = 'exponential'
switch model
    case 'exponential'
        yeqn = @(coefs, x) coefs(1).*exp(coefs(2).*x);
        xi  = x;
        eta = log(y);
        P = polyfit(xi, eta, 1);
        MyCoefs(1) = exp(P(2));
        MyCoefs(2) = P(1)
    case 'power law'
        yeqn = @(coefs, x) coefs(1).*x.^coefs(2);
        xi  = log10(x);
        eta = log10(y);
        P = polyfit(xi, eta, 1);
        MyCoefs(1) = 10^(P(2));
        MyCoefs(2) = P(1)
    case 'sat growth'
        yeqn = @(coefs, x) coefs(1).*x./(coefs(2)+x);
        xi  = 1./x;
        eta = 1./y;
        P = polyfit(xi, eta, 1);
        MyCoefs(1) =    1/P(2);
        MyCoefs(2) = P(1)/P(2)
    otherwise
        error('Unknown linearization')
end

 
%% Generate estimates and model
yhat   = yeqn(MyCoefs, x);
ymodel = yeqn(MyCoefs, xmodel);
 
%% Calculate statistics
% Compute sum of the squares of the data residuals
St = sum(( y - mean(y) ).^2)
 
% Compute sum of the squares of the estimate residuals
Sr = sum(( y - yhat ).^2)
 
% Compute the coefficient of determination
r2 = (St - Sr) / St
 
%% Generate plots
plot(x,      y,      'ko',...
     x,      yhat,   'ks',...
     xmodel, ymodel, 'k-');
 xlabel('Independent Value')
 ylabel('Dependent Value')
 title('Dependent vs. Independent and Model')
 legend('Data', 'Estimates', 'Model', 'location', 'best')

Nonlinear Regression

Nonlinear regression is both more powerful and more sensitive than linear regression. For inherently nonlinear fits, it will also produce a better \(S_r\) value than linearization, since the nonlinear regression process minimizes the \(S_r\) of the actual data rather than that of the transformed values. The sensitivity comes into play because the fminsearch command finds local minima, which are not necessarily global minima. A good starting guess will work wonders.
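The core idea - minimize the sum of squared residuals of the untransformed data directly - can be illustrated without fminsearch (which uses the Nelder-Mead simplex method). The pure-Python sketch below substitutes a coarse grid search for the simplex search on made-up exponential data; bracketing the minimum well plays the same role here that a good initial guess plays for fminsearch:

```python
import math

# Made-up exponential data: y = 2 * exp(0.5 * x)
xs = [0.2 * i for i in range(1, 16)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

def fSSR(a, k):
    """Sum of squared residuals between the data and the model a*exp(k*x)."""
    return sum((y - a * math.exp(k * x))**2 for x, y in zip(xs, ys))

# Coarse grid search over (a, k) in place of fminsearch's simplex search.
best = min(((fSSR(a/100, k/100), a/100, k/100)
            for a in range(100, 301, 5)      # a from 1.00 to 3.00
            for k in range(10, 101, 5)),     # k from 0.10 to 1.00
           key=lambda t: t[0])
Sr, a_fit, k_fit = best
```

With noiseless data and the true constants on the grid, the search lands exactly on \(a=2\), \(k=0.5\) with \(S_r=0\); fminsearch refines continuously instead of over a grid.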

Example Code

In the code below, note that the use of the variable fSSR was taken from Section 14.5 of the Chapra book[1].

%% Initialize the workspace
clear; format short e
figure(1); clf

%% Load and manipulate the data
load Cantilever.dat

Force = Cantilever(:,1) * 9.81;
Displ = Cantilever(:,2) * 2.54 / 100;

%% Rename and create model data
x = Force;
y = Displ;
xmodel = linspace(min(x), max(x), 100);

%% Define model equation and initial guesses
model = 'linear'
switch model
    case 'linear'
        yeqn = @(coefs, x) coefs(1)*x.^1 + coefs(2)*x.^0;
        InitGuess = [0 0]
    case 'trig with offset'
        yeqn = @(coefs, x) coefs(1)*cos(x*coefs(3)) + coefs(2)*sin(x*coefs(3)) + coefs(4);
        InitGuess = [0 0 100 0]
    otherwise
        error('Don''t know the model...')
end

%% Determine the function coefficients
fSSR = @(coefs, x, y) sum(( y - yeqn(coefs, x) ).^2)
[MyCoefs, Sr] = fminsearch(@(MyCoefsDummy) fSSR(MyCoefsDummy, x, y), InitGuess)

%% Generate estimates and model
yhat   = yeqn(MyCoefs, x);
ymodel = yeqn(MyCoefs, xmodel);

%% Calculate statistics
% Compute sum of the squares of the data residuals
St = sum(( y - mean(y) ).^2)

% Compute sum of the squares of the estimate residuals
Sr = sum(( y - yhat ).^2)

% Compute the coefficient of determination
r2 = (St - Sr) / St

%% Generate plots
plot(x,      y,      'ko',...
     x,      yhat,   'ks',...
     xmodel, ymodel, 'k-');
 xlabel('Independent Value')
 ylabel('Dependent Value')
 title('Dependent vs. Independent and Model')
 legend('Data', 'Estimates', 'Model', 'location', 'best')

Questions

Post your questions by editing the discussion page of this article. Edit the page, then scroll to the bottom and add a question by putting in the characters *{{Q}}, followed by your question and finally your signature (with four tildes, i.e. ~~~~). Using the {{Q}} will automatically put the page in the category of pages with questions - other editors hoping to help out can then go to that category page to see where the questions are. See the page for Template:Q for details and examples.

External Links

References