Posted to user@commons.apache.org by koshino kazuhiro <ko...@ri.ncvc.go.jp> on 2007/12/12 10:39:31 UTC

[math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Hello,

Can I get standard errors on the estimated parameters when using
GaussNewtonEstimator or LevenbergMarquardtEstimator?

I think those values are very important for validating the estimated
parameters.

Is using the classes in java.lang.reflect.* the only way to get standard errors?

Kind Regards,

Koshino

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@commons.apache.org
For additional commands, e-mail: user-help@commons.apache.org


Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by koshino kazuhiro <ko...@ri.ncvc.go.jp>.
Hello Luc,

Thanks for your interest in this issue and for introducing me to your 
JIRA system.
I have created a JIRA ticket for it.

Kind Regards,

Koshino

> Hi Koshino,
> 
> Sorry for this very late reply.
> 
> Could you please open a ticket in our JIRA issue tracking system with 
> your request or enhancement? This will prevent your suggestion from 
> being forgotten and will automatically keep you informed as it is 
> processed. The tracking system is here: https://issues.apache.org/jira/browse/MATH
> You will have to sign up (it's free) to be able to create a new issue.
> 
> Thanks
> Luc

-- 
Kazuhiro Koshino



Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by Luc Maisonobe <Lu...@free.fr>.
Hi Koshino,

Sorry for this very late reply.

Could you please open a ticket in our JIRA issue tracking system with 
your request or enhancement? This will prevent your suggestion from being 
forgotten and will automatically keep you informed as it is processed. 
The tracking system is here: https://issues.apache.org/jira/browse/MATH
You will have to sign up (it's free) to be able to create a new issue.

Thanks
Luc

koshino kazuhiro wrote:
> Hello,
> 
> The following code is one idea for obtaining errors on the estimated 
> parameters. (I ignore consistency with GaussNewtonEstimator via the 
> Estimator interface.)
> 
> This code is based on the example in "[math] Example using 
> EstimationProblem, WeightedMeasurement from apache.commons.math?".
> 
> Kind Regards,
> Koshino
> 
> import org.apache.commons.math.estimation.EstimatedParameter;
> import org.apache.commons.math.estimation.EstimationException;
> import org.apache.commons.math.estimation.LevenbergMarquardtEstimator;
> import org.apache.commons.math.estimation.SimpleEstimationProblem;
> import org.apache.commons.math.estimation.WeightedMeasurement;
> 
> // To calculate covariance matrix
> import org.apache.commons.math.linear.RealMatrix;
> import org.apache.commons.math.linear.MatrixUtils;
> 
> public class QuadraticFitter extends SimpleEstimationProblem {
> 
>     public static void main(String[] args) {
> 
>         try {
>             // create the uninitialized fitting problem
>             QuadraticFitter fitter = new QuadraticFitter();
> 
> //            // register the sample points (perfect quadratic case)
> //            fitter.addPoint(1.0, 5.0, 0.2);
> //            fitter.addPoint(2.0, 10.0, 0.4);
> //            fitter.addPoint(3.0, 17.0, 1.0);
> //            fitter.addPoint(4.0, 26.0, 0.8);
> //            fitter.addPoint(5.0, 37.0, 0.3);
> 
>             // register the sample points (practical case)
>             fitter.addPoint(1.0, 1.0, 0.2);
>             fitter.addPoint(2.0, 3.0, 0.4);
>             fitter.addPoint(3.0, 2.0, 1.0);
>             fitter.addPoint(4.0, 6.0, 0.8);
>             fitter.addPoint(5.0, 4.0, 0.3);
>             double centerX = 3.0;
> 
>             // solve the problem, using a Levenberg-Marquardt algorithm
>             // with default settings
>             LevenbergMarquardtEstimator estimator =
>               new LevenbergMarquardtEstimator();
>             estimator.estimate(fitter);
>             System.out.println("RMS after fitting = "
>                              + estimator.getRMS(fitter));
> 
>             // ******************************************
>             // If Jacobian and chi-square are available,
>             // we obtain errors on estimated parameters.
>             // ******************************************
>             RealMatrix jacobian = estimator.getJacobian();
>             RealMatrix transJ = jacobian.transpose();
>             RealMatrix covar = transJ.multiply(jacobian).inverse();
>             double chisq = estimator.getChiSquare();
>             int nm = fitter.getMeasurements().length;
>             int np = fitter.getUnboundParameters().length;
>             int dof = nm - np;
>             for (int i = 0; i < np; ++i) {
>                 // standard error of parameter i:
>                 // sqrt(covar_ii * chisq / dof)
>                 System.out.println(Math.sqrt(covar.getEntry(i, i)
>                                              * chisq / dof));
>             }
> 
>             // display the results
>             System.out.println("a = " + fitter.getA()
>                     + ", b = " + fitter.getB()
>                     + ", c = " + fitter.getC());
>             System.out.println("y (" + centerX + ") = "
>                                + fitter.theoreticalValue(centerX));
> 
>         } catch (EstimationException ee) {
>             System.err.println(ee.getMessage());
>         }
> 
>     }
>   // skip constructor and methods
> }





Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by koshino kazuhiro <ko...@ri.ncvc.go.jp>.
Hello,

The following code is one idea for obtaining errors on the estimated 
parameters. (I ignore consistency with GaussNewtonEstimator via the 
Estimator interface.)

This code is based on the example in "[math] Example using 
EstimationProblem, WeightedMeasurement from apache.commons.math?".

Kind Regards,
Koshino

import org.apache.commons.math.estimation.EstimatedParameter;
import org.apache.commons.math.estimation.EstimationException;
import org.apache.commons.math.estimation.LevenbergMarquardtEstimator;
import org.apache.commons.math.estimation.SimpleEstimationProblem;
import org.apache.commons.math.estimation.WeightedMeasurement;

// To calculate covariance matrix
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.MatrixUtils;

public class QuadraticFitter extends SimpleEstimationProblem {

     public static void main(String[] args) {

         try {
             // create the uninitialized fitting problem
             QuadraticFitter fitter = new QuadraticFitter();

//            // register the sample points (perfect quadratic case)
//            fitter.addPoint(1.0, 5.0, 0.2);
//            fitter.addPoint(2.0, 10.0, 0.4);
//            fitter.addPoint(3.0, 17.0, 1.0);
//            fitter.addPoint(4.0, 26.0, 0.8);
//            fitter.addPoint(5.0, 37.0, 0.3);

             // register the sample points (practical case)
             fitter.addPoint(1.0, 1.0, 0.2);
             fitter.addPoint(2.0, 3.0, 0.4);
             fitter.addPoint(3.0, 2.0, 1.0);
             fitter.addPoint(4.0, 6.0, 0.8);
             fitter.addPoint(5.0, 4.0, 0.3);
             double centerX = 3.0;

             // solve the problem, using a Levenberg-Marquardt algorithm
             // with default settings
             LevenbergMarquardtEstimator estimator =
               new LevenbergMarquardtEstimator();
             estimator.estimate(fitter);
             System.out.println("RMS after fitting = "
                              + estimator.getRMS(fitter));

             // ******************************************
             // If the Jacobian and chi-square are available,
             // we can obtain errors on the estimated parameters.
             // ******************************************
             RealMatrix jacobian = estimator.getJacobian();
             RealMatrix transJ = jacobian.transpose();
             RealMatrix covar = transJ.multiply(jacobian).inverse();
             double chisq = estimator.getChiSquare();
             int nm = fitter.getMeasurements().length;
             int np = fitter.getUnboundParameters().length;
             int dof = nm - np;
             for (int i = 0; i < np; ++i) {
                 // standard error of parameter i:
                 // sqrt(covar_ii * chisq / dof)
                 System.out.println(Math.sqrt(covar.getEntry(i, i)
                                              * chisq / dof));
             }

             // display the results
             System.out.println("a = " + fitter.getA()
                     + ", b = " + fitter.getB()
                     + ", c = " + fitter.getC());
             System.out.println("y (" + centerX + ") = "
                                + fitter.theoreticalValue(centerX));

         } catch (EstimationException ee) {
             System.err.println(ee.getMessage());
         }

     }
   // skip constructor and methods
}

> Hello,
> 
>> Oooops. Sorry, my answer does not match your question. You asked about 
>> parameters and I answered about observations (measurements).
> 
> The term "standard errors" in my sentences was ambiguous.
> I should have said covariance matrix rather than standard errors.
> 
> Your suggestion for RMS is very important because my measurements were 
> degraded by statistical noise.
> 
>> So the proper answer is no, there is no way to get any information 
>> about errors on parameters yet. One method to have an estimate of the 
>> quality of the result is to check the eigenvalues related to the 
>> parameters. Perhaps we could look at this after having included a 
>> Singular Value Decomposition algorithm ? Is this what you want or do 
>> you have some other idea ?
> 
> In LevenbergMarquardtEstimator.java (revision 560660),
> a Jacobian matrix is used to estimate the parameters.
> Using the Jacobian, could we obtain a covariance matrix, i.e. 
> errors on the parameters?
> covariance matrix = (Jt * J)^(-1)
> where J, Jt and ^(-1) denote the Jacobian, the transpose of J and 
> matrix inversion, respectively.
> 
> Kind Regards,
> 
> Koshino




Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by koshino kazuhiro <ko...@ri.ncvc.go.jp>.
Hello,

> Oooops. Sorry, my answer does not match your question. You asked about 
> parameters and I answered about observations (measurements).

The term "standard errors" in my sentences was ambiguous.
I should have said covariance matrix rather than standard errors.

Your suggestion for RMS is very important because my measurements were 
degraded by statistical noise.

> So the proper answer is no: there is no way to get any information about 
> errors on the parameters yet. One way to estimate the quality of the 
> result is to check the eigenvalues related to the parameters. 
> Perhaps we could look at this after having included a Singular Value 
> Decomposition algorithm? Is this what you want, or do you have some 
> other idea?

In LevenbergMarquardtEstimator.java (revision 560660),
a Jacobian matrix is used to estimate the parameters.
Using the Jacobian, could we obtain a covariance matrix, i.e. 
errors on the parameters?
covariance matrix = (Jt * J)^(-1)
where J, Jt and ^(-1) denote the Jacobian, the transpose of J and 
matrix inversion, respectively.
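
The covariance recipe above can be sketched in plain Java with no Commons 
Math dependency. The straight-line model, the data, and the class/method 
names below are all invented for illustration (this is not Commons Math 
API); for the quadratic problem in this thread the Jacobian would simply 
have three columns:

```java
// Sketch (not Commons Math API): standard errors of a 2-parameter
// linear fit y = a*x + b from the normal matrix N = Jt*J, where the
// Jacobian J has rows [x_i, 1].  Errors: sqrt(C_ii * chisq / dof)
// with C = N^(-1).
public class CovarianceSketch {

    // returns {sigma_a, sigma_b}
    static double[] standardErrors(double[] x, double[] y) {
        int nm = x.length, np = 2;
        double sxx = 0, sx = 0, sxy = 0, sy = 0;
        for (int i = 0; i < nm; i++) {
            sxx += x[i] * x[i];
            sx  += x[i];
            sxy += x[i] * y[i];
            sy  += y[i];
        }
        // invert the 2x2 normal matrix N = [[sxx, sx], [sx, nm]]
        double det = sxx * nm - sx * sx;
        double c00 = nm / det, c01 = -sx / det, c11 = sxx / det;
        // least-squares estimates from the normal equations
        double a = c00 * sxy + c01 * sy;
        double b = c01 * sxy + c11 * sy;
        // chi-square = sum of squared residuals (unit weights here)
        double chisq = 0;
        for (int i = 0; i < nm; i++) {
            double r = y[i] - (a * x[i] + b);
            chisq += r * r;
        }
        int dof = nm - np;
        return new double[] { Math.sqrt(c00 * chisq / dof),
                              Math.sqrt(c11 * chisq / dof) };
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5};
        double[] y = {1.1, 2.9, 3.2, 4.8, 5.1};
        double[] se = standardErrors(x, y);
        System.out.println("sigma_a = " + se[0] + ", sigma_b = " + se[1]);
    }
}
```

The standard errors come out as the square roots of the diagonal of 
(Jt * J)^(-1), scaled by chisq / dof.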

Kind Regards,

Koshino

> Luc


-- 
Kazuhiro Koshino, PhD
National Cardiovascular Center Research Institute
5-7-1 Fujishirodai, Suita, Osaka 565-8565, JAPAN
Tel: +81-6-6833-5012 (ex.2559)
Fax: +81-6-6835-5429
E-mail: koshino@ri.ncvc.go.jp



Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by Luc Maisonobe <Lu...@free.fr>.
luc.maisonobe@free.fr wrote:
> Quoting koshino kazuhiro <ko...@ri.ncvc.go.jp>:
> 
>> Hello,
>>
>> Can I get standard errors on the estimated parameters when using
>> GaussNewtonEstimator or LevenbergMarquardtEstimator?
> 
> Yes. All estimators implement the Estimator interface, which includes a getRMS
> method in addition to the estimate method.

Oooops. Sorry, my answer does not match your question. You asked about 
parameters and I answered about observations (measurements).

So the proper answer is no: there is no way to get any information about 
errors on the parameters yet. One way to estimate the quality of the 
result is to check the eigenvalues related to the parameters. 
Perhaps we could look at this after having included a Singular Value 
Decomposition algorithm? Is this what you want, or do you have some 
other idea?
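
To make the eigenvalue check concrete: for a 2-parameter problem, the 
eigenvalues of the symmetric normal matrix Jt * J (the squared singular 
values of J) can be computed in closed form. The sketch below is plain 
Java with invented numbers and names, not Commons Math API; a near-zero 
eigenvalue flags a poorly determined direction in parameter space:

```java
// Sketch: eigenvalues of the 2x2 normal matrix N = Jt * J.
// Small eigenvalues mark poorly determined parameter combinations
// (they are the squares of the singular values of J).
public class ConditioningSketch {

    // eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]],
    // largest first
    static double[] eigen2x2(double a, double b, double d) {
        double tr = a + d;
        double disc = Math.sqrt((a - d) * (a - d) + 4 * b * b);
        return new double[] { (tr + disc) / 2, (tr - disc) / 2 };
    }

    public static void main(String[] args) {
        // nearly collinear Jacobian columns -> nearly singular N
        double[] lam = eigen2x2(2.0, 1.999, 2.0);
        System.out.println("eigenvalues: " + lam[0] + ", " + lam[1]
                           + ", condition number: " + lam[0] / lam[1]);
    }
}
```

With nearly collinear Jacobian columns the condition number blows up, 
which is exactly the situation where individual parameter values are 
meaningless even though the residuals look fine.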

Luc

> the method (because it is your problem that defines the model, the
> parameters, and the observations).
> If you call it before calling estimate, it will use the initial values of the
> parameters; if you call it after having called estimate, it will use the
> adjusted values.
> 
> Here is what the javadoc says about this method:
> 
>    * Get the Root Mean Square value, i.e. the root of the arithmetic
>    * mean of the square of all weighted residuals. This is related to the
>    * criterion that is minimized by the estimator as follows: if
>    * <em>c</em> is the criterion, and <em>n</em> is the number of
>    * measurements, then the RMS is <em>sqrt (c/n)</em>.
>> I think that those values are very important to validate estimated
>> parameters.
> 
> It may sometimes be misleading. If your model is wrong and too
> "flexible", and if your observations are bad (measurement errors), then you may
> adjust too many parameters and have the bad model closely follow the bad
> measurements, giving you artificially low residuals. Then you may think
> everything is perfect, which is false. This is the same kind of problem
> known as "Gibbs oscillations" in polynomial fitting when you use too high
> a degree.
> 
> Luc





Re: [math] Standard errors in estimated parameters using LevenbergMarquardtEstimator

Posted by lu...@free.fr.
Quoting koshino kazuhiro <ko...@ri.ncvc.go.jp>:

> Hello,
>
> Can I get standard errors on the estimated parameters when using
> GaussNewtonEstimator or LevenbergMarquardtEstimator?

Yes. All estimators implement the Estimator interface, which includes a getRMS
method in addition to the estimate method. You should provide your problem to
the method (because it is your problem that defines the model, the
parameters, and the observations).
If you call it before calling estimate, it will use the initial values of the
parameters; if you call it after having called estimate, it will use the
adjusted values.

Here is what the javadoc says about this method:

   * Get the Root Mean Square value, i.e. the root of the arithmetic
   * mean of the square of all weighted residuals. This is related to the
   * criterion that is minimized by the estimator as follows: if
   * <em>c</em> is the criterion, and <em>n</em> is the number of
   * measurements, then the RMS is <em>sqrt (c/n)</em>.
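
That relation can be sketched in a few self-contained lines of plain Java 
(the residuals, weights, and class name below are invented for 
illustration, not Commons Math API):

```java
// Sketch of the getRMS relation: RMS = sqrt(c / n), where the
// criterion c is the sum of squared weighted residuals and n is the
// number of measurements.
public class RmsSketch {

    static double rms(double[] residuals, double[] weights) {
        double c = 0;  // criterion minimized by the estimator
        for (int i = 0; i < residuals.length; i++) {
            // weighted residual: sqrt(w_i) * r_i
            double wr = Math.sqrt(weights[i]) * residuals[i];
            c += wr * wr;
        }
        return Math.sqrt(c / residuals.length);
    }

    public static void main(String[] args) {
        double[] r = {0.5, -0.2, 0.1};
        double[] w = {1.0, 1.0, 1.0};
        System.out.println("RMS = " + rms(r, w));
    }
}
```
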
>
> I think those values are very important for validating the estimated
> parameters.

It may sometimes be misleading. If your model is wrong and too
"flexible", and if your observations are bad (measurement errors), then you may
adjust too many parameters and have the bad model closely follow the bad
measurements, giving you artificially low residuals. Then you may think
everything is perfect, which is false. This is the same kind of problem
known as "Gibbs oscillations" in polynomial fitting when you use too high
a degree.

Luc

>
> Is using the classes in java.lang.reflect.* the only way to get standard errors?
>
> Kind Regards,
>
> Koshino
>


