Posted to user@commons.apache.org by Italo Maia <it...@hotmail.com> on 2012/07/05 18:30:18 UTC
[math]
Thanks Gilles! I was looking in the wrong place. Any suggestions on examples for these classes (a math function example would be very nice)? I've found this link (very helpful) but I don't know what to code in the gradient method. In ParametricUnivariateFunction.value I just returned my function's output with the params as arguments (plus x). For gradient, I'm in a pinch.
PS: the mailing list was refusing my mails from my other email account (I don't know why), so I'm responding through here.
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Whoops! Where you read new double[n][n], it should actually be new double[n][2].
> From: italomaia@hotmail.com
> To: user@commons.apache.org
> Subject: RE: [math]
> Date: Thu, 5 Jul 2012 21:19:17 +0000
>
>
> Here you go: http://pastebin.com/UR0GV7ST
>
>
>
> > Unfortunately I can't provide the matrix data. : /
>
> > Date: Thu, 5 Jul 2012 23:06:18 +0200
> > From: gilles@harfang.homelinux.org
> > To: user@commons.apache.org
> > Subject: Re: [math]
> >
> > On Thu, Jul 05, 2012 at 08:35:28PM +0000, Italo Maia wrote:
> > >
> > > No juice. Hell! The initial function I'm trying to fit is:
> > >
> > > f(t, a, b, c) = a * t^b * exp(-c*t)
> > >
> > > I took the log of it to make it linear:
> > >
> > > log f(t, a, b, c) = log(a) + b*log(t) - c*t
> > >
> > > I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
> >
> > Please show the code.
> >
> >
> > Regards,
> > Gilles
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: user-unsubscribe@commons.apache.org
> > For additional commands, e-mail: user-help@commons.apache.org
> >
>
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Fri, Jul 06, 2012 at 10:45:40PM +0000, Italo Maia wrote:
>
> Had this to calculate the rsquared:
>
> OLSMultipleLinearRegression regression = new OLSMultipleLinearRegression();
> regression.newSampleData(curve_totals, data);
> System.out.println("rsquared:" + regression.calculateRSquared());
>
> Where curve_totals holds the values calculated with Fnc.fnc and the fitted a, b and c.
>
> Used this as reference: http://commons.apache.org/math/userguide/stat.html
>
> Is that right?
The method computes what it says. Whether it is right for your purpose is
for you to decide...
>
> By the way, there is a typo in the link: double rSquared = regression.caclulateRSquared();
> Where could I report it?
For those little things, here is fine, thanks. [Fixed in revision 1358535.]
Regards,
Gilles
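For the archive: calculateRSquared() reports the coefficient of determination, R^2 = 1 - SSres/SStot. A minimal dependency-free sketch of that quantity (the class name and the sample y/yHat arrays are made up purely for illustration; with the real fit, yHat would come from Fnc.fnc with the fitted a, b and c):

```java
// Sketch of what calculateRSquared() computes: R^2 = 1 - SSres/SStot,
// where SStot is the total sum of squares about the mean of y.
// The y and yHat arrays are invented numbers, only to show the formula.
public class RSquared {
    static double rSquared(double[] y, double[] yHat) {
        double mean = 0;
        for (double v : y) mean += v;
        mean /= y.length;
        double ssRes = 0, ssTot = 0;
        for (int i = 0; i < y.length; i++) {
            ssRes += (y[i] - yHat[i]) * (y[i] - yHat[i]); // residual sum of squares
            ssTot += (y[i] - mean) * (y[i] - mean);       // total sum of squares
        }
        return 1 - ssRes / ssTot;
    }

    public static void main(String[] args) {
        double[] y    = {1.0, 2.0, 3.0, 4.0};   // "observed" (made up)
        double[] yHat = {1.1, 1.9, 3.2, 3.8};   // "predicted" (made up)
        System.out.println("rsquared: " + rSquared(y, yHat)); // close to 0.98
    }
}
```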
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Had this to calculate the rsquared:
OLSMultipleLinearRegression regression = new OLSMultipleLinearRegression();
regression.newSampleData(curve_totals, data);
System.out.println("rsquared:" + regression.calculateRSquared());
Where curve_totals holds the values calculated with Fnc.fnc and the fitted a, b and c.
Used this as reference: http://commons.apache.org/math/userguide/stat.html
Is that right?
By the way, there is a typo in the link: double rSquared = regression.caclulateRSquared();
Where could I report it?
> Date: Sat, 7 Jul 2012 00:29:51 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> On Fri, Jul 06, 2012 at 09:39:30PM +0000, Italo Maia wrote:
> >
> > Hmmm, so my assumption that my previous values for a, b and c were the best was wrong. I calculated the residuals and they are indeed smaller. Many thanks for that!
>
> I wouldn't take the difference too seriously, given that the data are not
> really close to the curve. The errors seem quite large.
>
> > Any tips on calculating the r-squared?
>
> No.
>
> Gilles
>
> >
> > Date: Fri, 6 Jul 2012 22:05:26 +0200
> > From: gilles@harfang.homelinux.org
> > To: user@commons.apache.org
> > Subject: Re: [math]
> >
> > Hi.
> >
> > If you are using the function
> >
> > a * Math.pow(t, b) * Math.exp(-c * t)
> >
> > the gradient is:
> >
> > { Math.pow(t, b) * Math.exp(-c * t),
> > a * Math.log(t) * Math.pow(t, b) * Math.exp(-c * t),
> > -a * t * Math.pow(t, b) * Math.exp(-c * t) }
> >
> > > // No idea what goes here. Nothing seems to work.
> >
> > Well, the gradient (partial derivatives w.r.t the parameters) is the thing
> > that will work; the attached figure shows the data and the function that
> > fits it with
> > a = 1.097378664278161
> > b = 0.4273818336149512
> > c = 0.01457006142420487
> >
> > >
> > > a, b and c for this example should be: A: 1.0782 B: 0.4583 C: 0.0166
> >
> > The fit is slightly better with the values found by "CurveFitter"
> > (the "LevenbergMarquardt" algorithm actually).
> >
> > Regards,
> > Gilles
> >
> >
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Fri, Jul 06, 2012 at 09:39:30PM +0000, Italo Maia wrote:
>
> Hmmm, so my assumption that my previous values for a, b and c were the best was wrong. I calculated the residuals and they are indeed smaller. Many thanks for that!
I wouldn't take the difference too seriously, given that the data are not
really close to the curve. The errors seem quite large.
> Any tips on calculating the r-squared?
No.
Gilles
>
> Date: Fri, 6 Jul 2012 22:05:26 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> Hi.
>
> If you are using the function
>
> a * Math.pow(t, b) * Math.exp(-c * t)
>
> the gradient is:
>
> { Math.pow(t, b) * Math.exp(-c * t),
> a * Math.log(t) * Math.pow(t, b) * Math.exp(-c * t),
> -a * t * Math.pow(t, b) * Math.exp(-c * t) }
>
> > // No idea what goes here. Nothing seems to work.
>
> Well, the gradient (partial derivatives w.r.t the parameters) is the thing
> that will work; the attached figure shows the data and the function that
> fits it with
> a = 1.097378664278161
> b = 0.4273818336149512
> c = 0.01457006142420487
>
> >
> > a, b and c for this example should be: A: 1.0782 B: 0.4583 C: 0.0166
>
> The fit is slightly better with the values found by "CurveFitter"
> (the "LevenbergMarquardt" algorithm actually).
>
> Regards,
> Gilles
>
>
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Hmmm, so my assumption that my previous values for a, b and c were the best was wrong. I calculated the residuals and they are indeed smaller. Many thanks for that!
Any tips on calculating the r-squared?
Date: Fri, 6 Jul 2012 22:05:26 +0200
From: gilles@harfang.homelinux.org
To: user@commons.apache.org
Subject: Re: [math]
Hi.
If you are using the function
a * Math.pow(t, b) * Math.exp(-c * t)
the gradient is:
{ Math.pow(t, b) * Math.exp(-c * t),
a * Math.log(t) * Math.pow(t, b) * Math.exp(-c * t),
-a * t * Math.pow(t, b) * Math.exp(-c * t) }
> // No idea what goes here. Nothing seems to work.
Well, the gradient (partial derivatives w.r.t the parameters) is the thing
that will work; the attached figure shows the data and the function that
fits it with
a = 1.097378664278161
b = 0.4273818336149512
c = 0.01457006142420487
>
> a, b and c for this example should be: A: 1.0782 B: 0.4583 C: 0.0166
The fit is slightly better with the values found by "CurveFitter"
(the "LevenbergMarquardt" algorithm actually).
Regards,
Gilles
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hi.
If you are using the function
a * Math.pow(t, b) * Math.exp(-c * t)
the gradient is:
{ Math.pow(t, b) * Math.exp(-c * t),
a * Math.log(t) * Math.pow(t, b) * Math.exp(-c * t),
-a * t * Math.pow(t, b) * Math.exp(-c * t) }
> // No idea what goes here. Nothing seems to work.
Well, the gradient (partial derivatives w.r.t the parameters) is the thing
that will work; the attached figure shows the data and the function that
fits it with
a = 1.097378664278161
b = 0.4273818336149512
c = 0.01457006142420487
>
> a, b and c for this example should be: A: 1.0782 B: 0.4583 C: 0.0166
The fit is slightly better with the values found by "CurveFitter"
(the "LevenbergMarquardt" algorithm actually).
Regards,
Gilles
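For the archive, the gradient above can be verified numerically. A dependency-free sketch (the class name and the sample t, a, b, c values are made up; in real code value/gradient would implement ParametricUnivariateFunction):

```java
// Sketch of the gradient Gilles gives above for
// f(t; a, b, c) = a * t^b * exp(-c * t), as plain static methods so the
// formulas can be checked against central finite differences without the
// Commons Math jar.  The sample values below are arbitrary.
public class GradientCheck {
    static double value(double t, double a, double b, double c) {
        return a * Math.pow(t, b) * Math.exp(-c * t);
    }

    static double[] gradient(double t, double a, double b, double c) {
        double common = Math.pow(t, b) * Math.exp(-c * t);
        return new double[] {
            common,                    // df/da
            a * Math.log(t) * common,  // df/db
            -a * t * common            // df/dc
        };
    }

    public static void main(String[] args) {
        double t = 2.5, a = 1.1, b = 0.45, c = 0.015, h = 1e-6;
        double[] g = gradient(t, a, b, c);
        // Central finite differences with respect to each parameter.
        double[] fd = {
            (value(t, a + h, b, c) - value(t, a - h, b, c)) / (2 * h),
            (value(t, a, b + h, c) - value(t, a, b - h, c)) / (2 * h),
            (value(t, a, b, c + h) - value(t, a, b, c - h)) / (2 * h)
        };
        for (int i = 0; i < 3; i++) {
            if (Math.abs(g[i] - fd[i]) > 1e-5) {
                throw new AssertionError("gradient mismatch at index " + i);
            }
        }
        System.out.println("gradient matches finite differences");
    }
}
```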
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
a, b and c for this example should be: A: 1.0782 B: 0.4583 C: 0.0166
From: italomaia@hotmail.com
To: user@commons.apache.org
Subject: RE: [math]
Date: Fri, 6 Jul 2012 16:24:21 +0000
A full working example attached.
> Date: Fri, 6 Jul 2012 11:53:05 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> On Thu, Jul 05, 2012 at 10:02:29PM +0000, Italo Maia wrote:
> >
> > Oh my. Fair enough. Here is some sample data.
> >
> > http://pastebin.com/MkQrE8d2
>
> See below.
>
> >
> > The values of a, b and c for this sample data, for best fitting, are:
> > A: 1.0782 B: 0.4583 C: 0.0166
> >
> > When everything is working, I'll publish something about the code. CurveFitter seems rather devoid of love.
> >
> > > Date: Thu, 5 Jul 2012 23:52:31 +0200
> > > From: gilles@harfang.homelinux.org
> > > To: user@commons.apache.org
> > > Subject: Re: [math]
> > >
> > > Hello.
> > >
> > > On Thu, Jul 05, 2012 at 09:19:17PM +0000, Italo Maia wrote:
> > > >
> > > > Here you go: http://pastebin.com/UR0GV7ST
> > > >
> > >
> > > I'd think that it would be better not to use such a site, since it seems
> > > that the contents will be removed at some point, leading to this thread
> > > being impossible to follow in the archive.
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Here.
>
> > > [Maybe other people on the ML could give their opinion on this aspect.]
> > >
> > > The subject of this thread is not very clear either. :-}
> > >
> > > >
> > > > Unfortunately I can't provide the matrix data. : /
> > >
> > > So, how am I supposed to know what is going on?
> > > Clearly if you define the "gradient" method as on the above page, it cannot
> > > work.
> > >
> > > Please provide, in an attached file, a working example, showing what you
> > > tried and what result you obtained.
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> And here.
>
>
> Thanks,
> Gilles
>
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
A full working example attached.
> Date: Fri, 6 Jul 2012 11:53:05 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> On Thu, Jul 05, 2012 at 10:02:29PM +0000, Italo Maia wrote:
> >
> > Oh my. Fair enough. Here is some sample data.
> >
> > http://pastebin.com/MkQrE8d2
>
> See below.
>
> >
> > The values of a, b and c for this sample data, for best fitting, are:
> > A: 1.0782 B: 0.4583 C: 0.0166
> >
> > When everything is working, I'll publish something about the code. CurveFitter seems rather devoid of love.
> >
> > > Date: Thu, 5 Jul 2012 23:52:31 +0200
> > > From: gilles@harfang.homelinux.org
> > > To: user@commons.apache.org
> > > Subject: Re: [math]
> > >
> > > Hello.
> > >
> > > On Thu, Jul 05, 2012 at 09:19:17PM +0000, Italo Maia wrote:
> > > >
> > > > Here you go: http://pastebin.com/UR0GV7ST
> > > >
> > >
> > > I'd think that it would be better not to use such a site, since it seems
> > > that the contents will be removed at some point, leading to this thread
> > > being impossible to follow in the archive.
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Here.
>
> > > [Maybe other people on the ML could give their opinion on this aspect.]
> > >
> > > The subject of this thread is not very clear either. :-}
> > >
> > > >
> > > > Unfortunately I can't provide the matrix data. : /
> > >
> > > So, how am I supposed to know what is going on?
> > > Clearly if you define the "gradient" method as on the above page, it cannot
> > > work.
> > >
> > > Please provide, in an attached file, a working example, showing what you
> > > tried and what result you obtained.
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> And here.
>
>
> Thanks,
> Gilles
>
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Thu, Jul 05, 2012 at 10:02:29PM +0000, Italo Maia wrote:
>
> Oh my. Fair enough. Here is some sample data.
>
> http://pastebin.com/MkQrE8d2
See below.
>
> The values of a, b and c for this sample data, for best fitting, are:
> A: 1.0782 B: 0.4583 C: 0.0166
>
> When everything is working, I'll publish something about the code. CurveFitter seems rather devoid of love.
>
> > Date: Thu, 5 Jul 2012 23:52:31 +0200
> > From: gilles@harfang.homelinux.org
> > To: user@commons.apache.org
> > Subject: Re: [math]
> >
> > Hello.
> >
> > On Thu, Jul 05, 2012 at 09:19:17PM +0000, Italo Maia wrote:
> > >
> > > Here you go: http://pastebin.com/UR0GV7ST
> > >
> >
> > I'd think that it would be better not to use such a site, since it seems
> > that the contents will be removed at some point, leading to this thread
> > being impossible to follow in the archive.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here.
> > [Maybe other people on the ML could give their opinion on this aspect.]
> >
> > The subject of this thread is not very clear either. :-}
> >
> > >
> > > Unfortunately I can't provide the matrix data. : /
> >
> > So, how am I supposed to know what is going on?
> > Clearly if you define the "gradient" method as on the above page, it cannot
> > work.
> >
> > Please provide, in an attached file, a working example, showing what you
> > tried and what result you obtained.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
And here.
Thanks,
Gilles
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Oh my. Fair enough. Here is some sample data.
http://pastebin.com/MkQrE8d2
The values of a, b and c for this sample data, for best fitting, are:
A: 1.0782 B: 0.4583 C: 0.0166
When everything is working, I'll publish something about the code. CurveFitter seems rather devoid of love.
> Date: Thu, 5 Jul 2012 23:52:31 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> Hello.
>
> On Thu, Jul 05, 2012 at 09:19:17PM +0000, Italo Maia wrote:
> >
> > Here you go: http://pastebin.com/UR0GV7ST
> >
>
> I'd think that it would be better not to use such a site, since it seems
> that the contents will be removed at some point, leading to this thread
> being impossible to follow in the archive.
> [Maybe other people on the ML could give their opinion on this aspect.]
>
> The subject of this thread is not very clear either. :-}
>
> >
> > Unfortunately I can't provide the matrix data. : /
>
> So, how am I supposed to know what is going on?
> Clearly if you define the "gradient" method as on the above page, it cannot
> work.
>
> Please provide, in an attached file, a working example, showing what you
> tried and what result you obtained.
>
> Regards,
> Gilles
>
> >
> > > Date: Thu, 5 Jul 2012 23:06:18 +0200
> > > From: gilles@harfang.homelinux.org
> > > To: user@commons.apache.org
> > > Subject: Re: [math]
> > >
> > > On Thu, Jul 05, 2012 at 08:35:28PM +0000, Italo Maia wrote:
> > > >
> > > > No juice. Hell! The initial function I'm trying to fit is:
> > > >
> > > > f(t, a, b, c) = a * t^b * exp(-c*t)
> > > >
> > > > I took the log of it to make it linear:
> > > >
> > > > log f(t, a, b, c) = log(a) + b*log(t) - c*t
> > > >
> > > > I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
> > >
> > > Please show the code.
> > >
> > >
> > > Regards,
> > > Gilles
> > >
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hello.
On Thu, Jul 05, 2012 at 09:19:17PM +0000, Italo Maia wrote:
>
> Here you go: http://pastebin.com/UR0GV7ST
>
I'd think that it would be better not to use such a site, since it seems
that the contents will be removed at some point, leading to this thread
being impossible to follow in the archive.
[Maybe other people on the ML could give their opinion on this aspect.]
The subject of this thread is not very clear either. :-}
>
> Unfortunately I can't provide the matrix data. : /
So, how am I supposed to know what is going on?
Clearly if you define the "gradient" method as on the above page, it cannot
work.
Please provide, in an attached file, a working example, showing what you
tried and what result you obtained.
Regards,
Gilles
>
> > Date: Thu, 5 Jul 2012 23:06:18 +0200
> > From: gilles@harfang.homelinux.org
> > To: user@commons.apache.org
> > Subject: Re: [math]
> >
> > On Thu, Jul 05, 2012 at 08:35:28PM +0000, Italo Maia wrote:
> > >
> > > No juice. Hell! The initial function I'm trying to fit is:
> > >
> > > f(t, a, b, c) = a * t^b * exp(-c*t)
> > >
> > > I took the log of it to make it linear:
> > >
> > > log f(t, a, b, c) = log(a) + b*log(t) - c*t
> > >
> > > I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
> >
> > Please show the code.
> >
> >
> > Regards,
> > Gilles
> >
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Here you go: http://pastebin.com/UR0GV7ST
Unfortunately I can't provide the matrix data. : /
> Date: Thu, 5 Jul 2012 23:06:18 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> On Thu, Jul 05, 2012 at 08:35:28PM +0000, Italo Maia wrote:
> >
> > No juice. Hell! The initial function I'm trying to fit is:
> >
> > f(t, a, b, c) = a * t^b * exp(-c*t)
> >
> > I took the log of it to make it linear:
> >
> > log f(t, a, b, c) = log(a) + b*log(t) - c*t
> >
> > I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
>
> Please show the code.
>
>
> Regards,
> Gilles
>
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Thu, Jul 05, 2012 at 08:35:28PM +0000, Italo Maia wrote:
>
> No juice. Hell! The initial function I'm trying to fit is:
>
> f(t, a, b, c) = a * t^b * exp(-c*t)
>
> I took the log of it to make it linear:
>
> log f(t, a, b, c) = log(a) + b*log(t) - c*t
>
> I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
Please show the code.
Regards,
Gilles
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
No juice. Hell! The initial function I'm trying to fit is:
f(t, a, b, c) = a * t^b * exp(-c*t)
I took the log of it to make it linear:
log f(t, a, b, c) = log(a) + b*log(t) - c*t
I was using the log form to do the fitting in Python with SciPy. With CurveFitter, should I do the same?
> Date: Thu, 5 Jul 2012 22:18:04 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> On Thu, Jul 05, 2012 at 06:16:11PM +0000, Italo Maia wrote:
> >
> > Some "context" below:
> >
> > Did you have a look at the classes in the package
> >
> > "org.apache.commons.math3.optimization" ?
> >
> > No, I did not. Let's see...
> >
> >
> > Which function?
> >
> > This little devil:
> >
> > http://dpaste.com/hold/767050/
> >
> > public static double fnc(double t, double a, double b, double c){
> > return Math.log(a) + b * Math.log(t) - c * t;
> >
> > }
> >
> > I have t in the matrix (first column); the second column holds the observed values. I need to fit a, b and c.
> > === END
> >
> > Well, the derivatives don't seem to be working.
> >
> > double da = 1/a;
> > double db = b/t;
> > double dc = -c;
> >
>
> Then try
> 1/a
> log(t)
> -t
>
>
> Regards,
> Gilles
>
> >
> > > Date: Thu, 5 Jul 2012 19:21:46 +0200
> > > From: gilles@harfang.homelinux.org
> > > To: user@commons.apache.org
> > > Subject: Re: [math]
> > >
> > > Hi.
> > >
> > > >
> > > > Thanks Gilles! I was looking in the wrong place. Any suggestions on examples for these classes (a math function example would be very nice)? I've found this link (very helpful) but I don't know what to code in the gradient method. In ParametricUnivariateFunction.value I just returned my function's output with the params as arguments (plus x). For gradient, I'm in a pinch.
> > >
> > > And I'm lacking context (sorry, I deleted your previous email from my
> > > inbox)...
> > >
> > > Anyways, the "gradient(double x, double ... parameters)" method should
> > > return the partial derivatives with respect to the _parameters_. So, for
> > > example:
> > > ---
> > > public class ParamFuncExample implements ParametricUnivariateFunction {
> > > public double value(double x, double ... p) {
> > > return p[0] * x + p[1];
> > > }
> > >
> > > public double[] gradient(double x, double ... p) {
> > > return new double[] { x, 1 };
> > > }
> > > }
> > > ---
> > >
> > >
> > > HTH,
> > > Gilles
> > >
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Thu, Jul 05, 2012 at 06:16:11PM +0000, Italo Maia wrote:
>
> Some "context" below:
>
> Did you have a look at the classes in the package
>
> "org.apache.commons.math3.optimization" ?
>
> No, I did not. Let's see...
>
>
> Which function?
>
> This little devil:
>
> http://dpaste.com/hold/767050/
>
> public static double fnc(double t, double a, double b, double c){
> return Math.log(a) + b * Math.log(t) - c * t;
>
> }
>
> I have t in the matrix (first column); the second column holds the observed values. I need to fit a, b and c.
> === END
>
> Well, the derivatives don't seem to be working.
>
> double da = 1/a;
> double db = b/t;
> double dc = -c;
>
Then try
1/a
log(t)
-t
Regards,
Gilles
>
> > Date: Thu, 5 Jul 2012 19:21:46 +0200
> > From: gilles@harfang.homelinux.org
> > To: user@commons.apache.org
> > Subject: Re: [math]
> >
> > Hi.
> >
> > >
> > > Thanks Gilles! I was looking in the wrong place. Any suggestions on examples for these classes (a math function example would be very nice)? I've found this link (very helpful) but I don't know what to code in the gradient method. In ParametricUnivariateFunction.value I just returned my function's output with the params as arguments (plus x). For gradient, I'm in a pinch.
> >
> > And I'm lacking context (sorry, I deleted your previous email from my
> > inbox)...
> >
> > Anyways, the "gradient(double x, double ... parameters)" method should
> > return the partial derivatives with respect to the _parameters_. So, for
> > example:
> > ---
> > public class ParamFuncExample implements ParametricUnivariateFunction {
> > public double value(double x, double ... p) {
> > return p[0] * x + p[1];
> > }
> >
> > public double[] gradient(double x, double ... p) {
> > return new double[] { x, 1 };
> > }
> > }
> > ---
> >
> >
> > HTH,
> > Gilles
> >
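For the archive, the corrected derivatives {1/a, log(t), -t} packaged the way the fitter expects them. The interface is re-declared locally only so this sketch compiles without the Commons Math jar; in real code you would implement org.apache.commons.math3.analysis.ParametricUnivariateFunction instead (the class name and sample values are made up):

```java
// Local stand-in for the Commons Math interface, so the sketch is
// self-contained.
interface ParametricUnivariateFunction {
    double value(double x, double... p);
    double[] gradient(double x, double... p);
}

public class LogFnc implements ParametricUnivariateFunction {
    // log f(t) = log(a) + b*log(t) - c*t, with p = {a, b, c}
    public double value(double t, double... p) {
        return Math.log(p[0]) + p[1] * Math.log(t) - p[2] * t;
    }

    // Partial derivatives with respect to the parameters, not t:
    // d/da = 1/a, d/db = log(t), d/dc = -t
    public double[] gradient(double t, double... p) {
        return new double[] { 1 / p[0], Math.log(t), -t };
    }

    public static void main(String[] args) {
        double[] g = new LogFnc().gradient(2.0, 1.0, 0.5, 0.01);
        System.out.println(g[0] + " " + g[1] + " " + g[2]);
        // prints 1.0, log(2) = 0.6931..., and -2.0
    }
}
```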
RE: [math]
Posted by Italo Maia <it...@hotmail.com>.
Some "context" below:
Did you have a look at the classes in the package
"org.apache.commons.math3.optimization" ?
No, I did not. Let's see...
Which function?
This little devil:
http://dpaste.com/hold/767050/
public static double fnc(double t, double a, double b, double c){
return Math.log(a) + b * Math.log(t) - c * t;
}
I have t in the matrix (first column); the second column holds the observed values. I need to fit a, b and c.
=== END
Well, the derivatives don't seem to be working.
double da = 1/a;
double db = b/t;
double dc = -c;
> Date: Thu, 5 Jul 2012 19:21:46 +0200
> From: gilles@harfang.homelinux.org
> To: user@commons.apache.org
> Subject: Re: [math]
>
> Hi.
>
> >
> > Thanks Gilles! I was looking in the wrong place. Any suggestions on examples for these classes (a math function example would be very nice)? I've found this link (very helpful) but I don't know what to code in the gradient method. In ParametricUnivariateFunction.value I just returned my function's output with the params as arguments (plus x). For gradient, I'm in a pinch.
>
> And I'm lacking context (sorry, I deleted your previous email from my
> inbox)...
>
> Anyways, the "gradient(double x, double ... parameters)" method should
> return the partial derivatives with respect to the _parameters_. So, for
> example:
> ---
> public class ParamFuncExample implements ParametricUnivariateFunction {
> public double value(double x, double ... p) {
> return p[0] * x + p[1];
> }
>
> public double[] gradient(double x, double ... p) {
> return new double[] { x, 1 };
> }
> }
> ---
>
>
> HTH,
> Gilles
>
Re: [math]
Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hi.
>
> Thanks Gilles! I was looking in the wrong place. Any suggestions on examples for these classes (a math function example would be very nice)? I've found this link (very helpful) but I don't know what to code in the gradient method. In ParametricUnivariateFunction.value I just returned my function's output with the params as arguments (plus x). For gradient, I'm in a pinch.
And I'm lacking context (sorry, I deleted your previous email from my
inbox)...
Anyways, the "gradient(double x, double ... parameters)" method should
return the partial derivatives with respect to the _parameters_. So, for
example:
---
public class ParamFuncExample implements ParametricUnivariateFunction {
public double value(double x, double ... p) {
return p[0] * x + p[1];
}
public double[] gradient(double x, double ... p) {
return new double[] { x, 1 };
}
}
---
HTH,
Gilles
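For the archive, a sketch of how a fitter consumes the two methods of ParamFuncExample: value() supplies the residuals and gradient() fills the Jacobian. For this linear model a single Gauss-Newton step recovers the least-squares parameters exactly; the data points and class name are made up, and Commons Math's LevenbergMarquardtOptimizer adds damping and iteration on top of the same ingredients:

```java
// One Gauss-Newton step for the linear example p[0]*x + p[1].
// The invented data lie exactly on y = 2x + 1, so the step lands on
// p = {2, 1} from any starting guess (the model is linear in p).
public class OneGaussNewtonStep {
    static double value(double x, double... p) { return p[0] * x + p[1]; }
    static double[] gradient(double x, double... p) { return new double[] { x, 1 }; }

    public static void main(String[] args) {
        double[] xs = {0, 1, 2, 3};
        double[] ys = {1, 3, 5, 7};   // exactly y = 2x + 1 (made up)
        double[] p  = {0, 0};         // arbitrary starting guess

        // Normal equations (J^T J) dp = J^T r, with J the 4x2 Jacobian
        // built from gradient() and r the residuals from value().
        double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
        for (int i = 0; i < xs.length; i++) {
            double[] g = gradient(xs[i], p);
            double r = ys[i] - value(xs[i], p);
            a11 += g[0] * g[0]; a12 += g[0] * g[1]; a22 += g[1] * g[1];
            b1  += g[0] * r;    b2  += g[1] * r;
        }
        // Solve the 2x2 system by Cramer's rule and update p.
        double det = a11 * a22 - a12 * a12;
        p[0] += (a22 * b1 - a12 * b2) / det;
        p[1] += (a11 * b2 - a12 * b1) / det;
        System.out.println("p0 = " + p[0] + ", p1 = " + p[1]); // p0 = 2.0, p1 = 1.0
    }
}
```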