Posted to dev@commons.apache.org by gi...@harfang.homelinux.org on 2011/10/07 11:59:27 UTC

Re: svn commit: r1179928 - in [...]

On Fri, Oct 07, 2011 at 03:20:40AM -0000, psteitz@apache.org wrote:
> Author: psteitz
> Date: Fri Oct  7 03:20:39 2011
> New Revision: 1179928
> 
> URL: http://svn.apache.org/viewvc?rev=1179928&view=rev
> Log:
> Javadoc fixes.
> 
> Modified:
> [...]
>     commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java

> [...]

> Modified: commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java
> URL: http://svn.apache.org/viewvc/commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java?rev=1179928&r1=1179927&r2=1179928&view=diff
> ==============================================================================
> --- commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java (original)
> +++ commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java Fri Oct  7 03:20:39 2011
> @@ -21,7 +21,11 @@ import org.apache.commons.math.analysis.
>  import org.apache.commons.math.util.FastMath;
>  
>  /**
> - * Sinc function.
> + * Sinc function, defined by <pre><code>
> + *
> + * sinc(x) = 1 if abs(x) < 1e-9;
> + *           sin(x) / x; otherwise
> + * </code></pre>

I would not document the first part of the alternative since it is an
implementation detail.  "1e-9" was chosen just because, with "double"
precision, the function value will be indistinguishable from 1. Strictly
speaking it is not part of the definition of "sinc". [This part of the
implementation could even be removed if it is deemed that we lose more time
doing the check than we gain when the user asks for the value at points below
1e-9.]

Also, there probably should not be a ";" after the statements.


Regards,
Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
> > [...]
> >> The reason that I wanted to point out the top coding was that I
> >> thought it might be possible that results that were equal, but had
> >> different internal representations, could be returned over the
> >> interval (-1E-9, 1E-9), where the actual value is close to but less
> >> than 1.  This could affect computations involving the returned
> >> values.  I have not been able to demonstrate this (and suspect that
> >> I may in fact be misreading the JLS on this issue - i.e., on method
> >> return you have to map to the standard value set, so returned values
> >> have to have the same internal representation), so I am fine leaving
> >> out the reference to the cut point.
> > Do you mean the numbers representation used inside the CPU? I think that
> > this is beyond the scope of a Java library.
> 
> It would be relevant if method return values were not required to be
> among the standard value set (the JLS does not actually say this,
> but I suspect it is true.  Maybe someone who actually knows can
> educate me).  The spec allows JVMs to use extended value sets
> internally.  What is required is that equals works as an equivalence
> relation over whatever value set is used internally, with
> equivalence classes equal to the values in the standard value set,
> with the one exception that -0, +0 are different values, but in the
> same equivalence class (i.e. equal).  If method return values were
> not forced to be in the standard value set, it could happen that
> values that were equivalent under equals (mapping to the same
> standard value) but different in storage could be returned and
> reused.

How can I look at these values?
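
[One way to look at them, for what it's worth, is to compare the raw bit
patterns of the two results. The snippet below is only an illustration using
the standard library, not code from this thread; the sample point 1e-10 is
arbitrary, chosen inside the shortcut interval.]

    public class BitPatternCheck {
        public static void main(String[] args) {
            double x = 1e-10;                  // inside the shortcut interval
            double full = Math.sin(x) / x;     // definitional formula
            double shortcut = 1d;              // value returned by the shortcut
            // Identical bit patterns mean both code paths return the same double.
            System.out.println(Long.toHexString(Double.doubleToRawLongBits(full)));
            System.out.println(Long.toHexString(Double.doubleToRawLongBits(shortcut)));
            System.out.println(full == shortcut);
        }
    }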

> That could result in subsequent computations using the
> values diverging (in the non-top-coded case).  I can't demonstrate
> this in the present case, so either it does not happen or my
> understanding of what is legal internally is incorrect (could well
> be the case).
> 
> When it comes to top-coding (especially around 0 and 1), I always
> like to document when I do it in numerical algorithms, because
> otherwise I am (as we are here) counting on categorical statements
> about value sets / error propagation in every execution environment
> that the code is expected to run under.   Looks safe enough in this
> case, and this discussion has gone on too long already, so I will
> drop it.

I think that the current contents of "Sinc.java" should be fine for
everyone now. All the information is there, in one form or another.


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.
On 10/9/11 9:38 AM, Gilles Sadowski wrote:
> On Sun, Oct 09, 2011 at 07:45:51AM -0700, Phil Steitz wrote:
>> On 10/9/11 5:09 AM, Gilles Sadowski wrote:
>>> On Sun, Oct 09, 2011 at 10:32:38AM +0200, Sébastien Brisard wrote:
>>>>> Answering with a few examples:
>>>>> x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>>>> x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
>>>>> x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
>>>>> x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
>>>>> x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
>>>>> x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
>>>>> x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
>>>>> x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
>>>>> x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
>>>>> x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
>>>>>
>>>>> Thus: below 9.765625000000004E-9, the value of the definitional formula
>>>>> (without shortcut, indicated by "f=" in the above table) is strictly the
>>>>> same as the CM implementation (with shortcut, indicated by "s=" in the above
>>>>> table) in that they are both the double value "1.0".
>>>>>
>>>>> [I still don't understand what you mean by "(despite all being equal to 1
>>>>> under double equals)".]
>>>>>
>>>>> What the implementation returns is, within double precision, the result of
>>>>> the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
>>>>> to document any special case, not even x = 0: The fact that without the
>>>>> shortcut check, we'd get NaN does not mean that the implementation departs
>>>>> from the definition, but rather that it correctly simulates it (at double
>>>>> precision).
>>>>> [However, if we assume that the doc should integrate numerical
>>>>> considerations, it doesn't hurt to add the extra prose (in which case it
>>>>> becomes necessary, IMHO, to explain the "1e-9").]
>>>>>
>>>>> Maybe I should also add a "SincTest" class where a unit test would check the
>>>>> strict equality of returned values from the actual division and from the
>>>>> shortcut implementation?
>>>>>
>>>>>
>>>>> Gilles
>>>>>
>>>> I think the 1E-9 value is actually a mathematically provable value,
>>>> since sin(x)/x is an alternating series, so the difference
>>>> |sin(x)/x-1| is bounded from above by the next term in the Taylor
>>>> expansion, namely x^2/6. Replacing sinc(x) by one is therefore
>>>> rigorous provided this error remains below one ulp of one. This leads
>>>> to the following condition x^2/6 < 1E-16, which is actually less
>>>> strong than |x| < 1E-9.
>>>> So I personally think that this is indeed an implementation detail. If
>>>> you look at the implementation of any basic functions, there are
>>>> *always* boundary cases, with different formulas for different ranges
>>>> of the argument. These cases are not advertised anywhere in the doc
>>>> (and we should be thankful of that, in my opinion).
>>>> As a user of sinc (and spherical Bessel functions in general, for
>>>> diffraction problems), the only thing I really care about is:
>>>> reliability near zero. How this reliability is enforced is really none
>>>> of my concerns.
>>>> One further point. If you were to try and implement the next spherical
>>>> Bessel function, you would find that the analytical floating-point
>>>> (without short cut, using Gilles' terminology) estimate near zero does
>>>> *not* behave well, because, I think, of catastrophic cancellations. So
>>>> in this case, you *have* to carry out a test on x, and if x is too
>>>> small, you have to replace the analytical expression of j1 by its
>>>> Taylor expansion, in order to remain within one ulp of the expected
>>>> value. The boundary is found using the preceding line of reasoning
>>>> (alternating series). In this case, this is still an implementation
>>>> detail, but *not* an optimization one-- it is a *requirement*!
>>> I made some changes (revision 1180588), following the above argument which
>>> concurs with my original remark in this thread (implementation detail).
>>> Is this now satisfactory?
>> No, please at least point out that we define the value at 0 to be
>> 1.  There is no reason to link to Wikipedia,
> This gives more than just the definition (e.g. inspiration for unit tests).
>
>> when you can directly
>> provide the formula, including treatment of 0.
> Done in revision 1180642.
>
>> The reason that I wanted to point out the top coding was that I
>> thought it might be possible that results that were equal, but had
>> different internal representations, could be returned over the
>> interval (-1E-9, 1E-9), where the actual value is close to but less
>> than 1.  This could affect computations involving the returned
>> values.  I have not been able to demonstrate this (and suspect that
>> I may in fact be misreading the JLS on this issue - i.e., on method
>> return you have to map to the standard value set, so returned values
>> have to have the same internal representation), so I am fine leaving
>> out the reference to the cut point.
> Do you mean the numbers representation used inside the CPU? I think that
> this is beyond the scope of a Java library.

It would be relevant if method return values were not required to be
among the standard value set (the JLS does not actually say this,
but I suspect it is true.  Maybe someone who actually knows can
educate me).  The spec allows JVMs to use extended value sets
internally.  What is required is that equals works as an equivalence
relation over whatever value set is used internally, with
equivalence classes equal to the values in the standard value set,
with the one exception that -0, +0 are different values, but in the
same equivalence class (i.e. equal).  If method return values were
not forced to be in the standard value set, it could happen that
values that were equivalent under equals (mapping to the same
standard value) but different in storage could be returned and
reused.  That could result in subsequent computations using the
values diverging (in the non-top-coded case).  I can't demonstrate
this in the present case, so either it does not happen or my
understanding of what is legal internally is incorrect (could well
be the case).

When it comes to top-coding (especially around 0 and 1), I always
like to document when I do it in numerical algorithms, because
otherwise I am (as we are here) counting on categorical statements
about value sets / error propagation in every execution environment
that the code is expected to run under.   Looks safe enough in this
case, and this discussion has gone on too long already, so I will
drop it.

Phil
>
> As for the numbers returned from the above table they are _strictly_ the
> same (see the unit test, where the tolerance is zero).
>
>
> Gilles
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Sun, Oct 09, 2011 at 07:45:51AM -0700, Phil Steitz wrote:
> On 10/9/11 5:09 AM, Gilles Sadowski wrote:
> > On Sun, Oct 09, 2011 at 10:32:38AM +0200, Sébastien Brisard wrote:
> >>> Answering with a few examples:
> >>> x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> >>> x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
> >>> x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
> >>> x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
> >>> x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
> >>> x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
> >>> x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
> >>> x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
> >>> x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
> >>> x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
> >>>
> >>> Thus: below 9.765625000000004E-9, the value of the definitional formula
> >>> (without shortcut, indicated by "f=" in the above table) is strictly the
> >>> same as the CM implementation (with shortcut, indicated by "s=" in the above
> >>> table) in that they are both the double value "1.0".
> >>>
> >>> [I still don't understand what you mean by "(despite all being equal to 1
> >>> under double equals)".]
> >>>
> >>> What the implementation returns is, within double precision, the result of
> >>> the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
> >>> to document any special case, not even x = 0: The fact that without the
> >>> shortcut check, we'd get NaN does not mean that the implementation departs
> >>> from the definition, but rather that it correctly simulates it (at double
> >>> precision).
> >>> [However, if we assume that the doc should integrate numerical
> >>> considerations, it doesn't hurt to add the extra prose (in which case it
> >>> becomes necessary, IMHO, to explain the "1e-9").]
> >>>
> >>> Maybe I should also add a "SincTest" class where a unit test would check the
> >>> strict equality of returned values from the actual division and from the
> >>> shortcut implementation?
> >>>
> >>>
> >>> Gilles
> >>>
> >> I think the 1E-9 value is actually a mathematically provable value,
> >> since sin(x)/x is an alternating series, so the difference
> >> |sin(x)/x-1| is bounded from above by the next term in the Taylor
> >> expansion, namely x^2/6. Replacing sinc(x) by one is therefore
> >> rigorous provided this error remains below one ulp of one. This leads
> >> to the following condition x^2/6 < 1E-16, which is actually less
> >> strong than |x| < 1E-9.
> >> So I personally think that this is indeed an implementation detail. If
> >> you look at the implementation of any basic functions, there are
> >> *always* boundary cases, with different formulas for different ranges
> >> of the argument. These cases are not advertised anywhere in the doc
> >> (and we should be thankful of that, in my opinion).
> >> As a user of sinc (and spherical Bessel functions in general, for
> >> diffraction problems), the only thing I really care about is:
> >> reliability near zero. How this reliability is enforced is really none
> >> of my concerns.
> >> One further point. If you were to try and implement the next spherical
> >> Bessel function, you would find that the analytical floating-point
> >> (without short cut, using Gilles' terminology) estimate near zero does
> >> *not* behave well, because, I think, of catastrophic cancellations. So
> >> in this case, you *have* to carry out a test on x, and if x is too
> >> small, you have to replace the analytical expression of j1 by its
> >> Taylor expansion, in order to remain within one ulp of the expected
> >> value. The boundary is found using the preceding line of reasoning
> >> (alternating series). In this case, this is still an implementation
> >> detail, but *not* an optimization one-- it is a *requirement*!
> >
> > I made some changes (revision 1180588), following the above argument which
> > concurs with my original remark in this thread (implementation detail).
> > Is this now satisfactory?
> 
> No, please at least point out that we define the value at 0 to be
> 1.  There is no reason to link to Wikipedia,

This gives more than just the definition (e.g. inspiration for unit tests).

> when you can directly
> provide the formula, including treatment of 0.

Done in revision 1180642.

> The reason that I wanted to point out the top coding was that I
> thought it might be possible that results that were equal, but had
> different internal representations, could be returned over the
> interval (-1E-9, 1E-9), where the actual value is close to but less
> than 1.  This could affect computations involving the returned
> values.  I have not been able to demonstrate this (and suspect that
> I may in fact be misreading the JLS on this issue - i.e., on method
> return you have to map to the standard value set, so returned values
> have to have the same internal representation), so I am fine leaving
> out the reference to the cut point.

Do you mean the numbers representation used inside the CPU? I think that
this is beyond the scope of a Java library.

As for the numbers returned from the above table they are _strictly_ the
same (see the unit test, where the tolerance is zero).


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.
On 10/9/11 5:09 AM, Gilles Sadowski wrote:
> On Sun, Oct 09, 2011 at 10:32:38AM +0200, Sébastien Brisard wrote:
>>> Answering with a few examples:
>>> x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
>>> x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
>>> x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
>>> x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
>>> x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
>>> x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
>>> x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
>>> x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
>>> x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
>>> x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
>>>
>>> Thus: below 9.765625000000004E-9, the value of the definitional formula
>>> (without shortcut, indicated by "f=" in the above table) is strictly the
>>> same as the CM implementation (with shortcut, indicated by "s=" in the above
>>> table) in that they are both the double value "1.0".
>>>
>>> [I still don't understand what you mean by "(despite all being equal to 1
>>> under double equals)".]
>>>
>>> What the implementation returns is, within double precision, the result of
>>> the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
>>> to document any special case, not even x = 0: The fact that without the
>>> shortcut check, we'd get NaN does not mean that the implementation departs
>>> from the definition, but rather that it correctly simulates it (at double
>>> precision).
>>> [However, if we assume that the doc should integrate numerical
>>> considerations, it doesn't hurt to add the extra prose (in which case it
>>> becomes necessary, IMHO, to explain the "1e-9").]
>>>
>>> Maybe I should also add a "SincTest" class where a unit test would check the
>>> strict equality of returned values from the actual division and from the
>>> shortcut implementation?
>>>
>>>
>>> Gilles
>>>
>> I think the 1E-9 value is actually a mathematically provable value,
>> since sin(x)/x is an alternating series, so the difference
>> |sin(x)/x-1| is bounded from above by the next term in the Taylor
>> expansion, namely x^2/6. Replacing sinc(x) by one is therefore
>> rigorous provided this error remains below one ulp of one. This leads
>> to the following condition x^2/6 < 1E-16, which is actually less
>> strong than |x| < 1E-9.
>> So I personally think that this is indeed an implementation detail. If
>> you look at the implementation of any basic functions, there are
>> *always* boundary cases, with different formulas for different ranges
>> of the argument. These cases are not advertised anywhere in the doc
>> (and we should be thankful of that, in my opinion).
>> As a user of sinc (and spherical Bessel functions in general, for
>> diffraction problems), the only thing I really care about is:
>> reliability near zero. How this reliability is enforced is really none
>> of my concerns.
>> One further point. If you were to try and implement the next spherical
>> Bessel function, you would find that the analytical floating-point
>> (without short cut, using Gilles' terminology) estimate near zero does
>> *not* behave well, because, I think, of catastrophic cancellations. So
>> in this case, you *have* to carry out a test on x, and if x is too
>> small, you have to replace the analytical expression of j1 by its
>> Taylor expansion, in order to remain within one ulp of the expected
>> value. The boundary is found using the preceding line of reasoning
>> (alternating series). In this case, this is still an implementation
>> detail, but *not* an optimization one-- it is a *requirement*!
>
> I made some changes (revision 1180588), following the above argument which
> concurs with my original remark in this thread (implementation detail).
> Is this now satisfactory?

No, please at least point out that we define the value at 0 to be
1.  There is no reason to link to Wikipedia, when you can directly
provide the formula, including treatment of 0.

The reason that I wanted to point out the top coding was that I
thought it might be possible that results that were equal, but had
different internal representations, could be returned over the
interval (-1E-9, 1E-9), where the actual value is close to but less
than 1.  This could affect computations involving the returned
values.  I have not been able to demonstrate this (and suspect that
I may in fact be misreading the JLS on this issue - i.e., on method
return you have to map to the standard value set, so returned values
have to have the same internal representation), so I am fine leaving
out the reference to the cut point.

Phil
>
>
> Gilles
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Sun, Oct 09, 2011 at 10:32:38AM +0200, Sébastien Brisard wrote:
> >
> > Answering with a few examples:
> > x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> > x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
> > x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
> > x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
> > x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
> > x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
> > x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
> > x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
> > x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
> > x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
> >
> > Thus: below 9.765625000000004E-9, the value of the definitional formula
> > (without shortcut, indicated by "f=" in the above table) is strictly the
> > same as the CM implementation (with shortcut, indicated by "s=" in the above
> > table) in that they are both the double value "1.0".
> >
> > [I still don't understand what you mean by "(despite all being equal to 1
> > under double equals)".]
> >
> > What the implementation returns is, within double precision, the result of
> > the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
> > to document any special case, not even x = 0: The fact that without the
> > shortcut check, we'd get NaN does not mean that the implementation departs
> > from the definition, but rather that it correctly simulates it (at double
> > precision).
> > [However, if we assume that the doc should integrate numerical
> > considerations, it doesn't hurt to add the extra prose (in which case it
> > becomes necessary, IMHO, to explain the "1e-9").]
> >
> > Maybe I should also add a "SincTest" class where a unit test would check the
> > strict equality of returned values from the actual division and from the
> > shortcut implementation?
> >
> >
> > Gilles
> >
> I think the 1E-9 value is actually a mathematically provable value,
> since sin(x)/x is an alternating series, so the difference
> |sin(x)/x-1| is bounded from above by the next term in the Taylor
> expansion, namely x^2/6. Replacing sinc(x) by one is therefore
> rigorous provided this error remains below one ulp of one. This leads
> to the following condition x^2/6 < 1E-16, which is actually less
> strong than |x| < 1E-9.
> So I personally think that this is indeed an implementation detail. If
> you look at the implementation of any basic functions, there are
> *always* boundary cases, with different formulas for different ranges
> of the argument. These cases are not advertised anywhere in the doc
> (and we should be thankful of that, in my opinion).
> As a user of sinc (and spherical Bessel functions in general, for
> diffraction problems), the only thing I really care about is:
> reliability near zero. How this reliability is enforced is really none
> of my concerns.
> One further point. If you were to try and implement the next spherical
> Bessel function, you would find that the analytical floating-point
> (without short cut, using Gilles' terminology) estimate near zero does
> *not* behave well, because, I think, of catastrophic cancellations. So
> in this case, you *have* to carry out a test on x, and if x is too
> small, you have to replace the analytical expression of j1 by its
> Taylor expansion, in order to remain within one ulp of the expected
> value. The boundary is found using the preceding line of reasoning
> (alternating series). In this case, this is still an implementation
> detail, but *not* an optimization one-- it is a *requirement*!


I made some changes (revision 1180588), following the above argument which
concurs with my original remark in this thread (implementation detail).
Is this now satisfactory?


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Sébastien Brisard <se...@m4x.org>.
>
> Answering with a few examples:
> x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
> x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
> x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
> x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
> x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
> x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
> x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
> x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
> x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
> x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01
>
> Thus: below 9.765625000000004E-9, the value of the definitional formula
> (without shortcut, indicated by "f=" in the above table) is strictly the
> same as the CM implementation (with shortcut, indicated by "s=" in the above
> table) in that they are both the double value "1.0".
>
> [I still don't understand what you mean by "(despite all being equal to 1
> under double equals)".]
>
> What the implementation returns is, within double precision, the result of
> the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
> to document any special case, not even x = 0: The fact that without the
> shortcut check, we'd get NaN does not mean that the implementation departs
> from the definition, but rather that it correctly simulates it (at double
> precision).
> [However, if we assume that the doc should integrate numerical
> considerations, it doesn't hurt to add the extra prose (in which case it
> becomes necessary, IMHO, to explain the "1e-9").]
>
> Maybe I should also add a "SincTest" class where a unit test would check the
> strict equality of returned values from the actual division and from the
> shortcut implementation?
>
>
> Gilles
>
I think the 1E-9 value is actually a mathematically provable value,
since sin(x)/x is an alternating series, so the difference
|sin(x)/x-1| is bounded from above by the next term in the Taylor
expansion, namely x^2/6. Replacing sinc(x) by one is therefore
rigorous provided this error remains below one ulp of one. This leads
to the following condition x^2/6 < 1E-16, which is actually less
strong than |x| < 1E-9.
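
[For illustration only, a small sketch of that check; the loop values are
arbitrary and none of this is code from CM. It prints the observed deviation
of sin(x)/x from 1, the truncation bound x^2/6, and whether that bound stays
below one ulp of 1.]

    public class SincThresholdCheck {
        public static void main(String[] args) {
            double ulpOfOne = Math.ulp(1.0);    // about 2.22e-16
            for (double x = 1e-15; x < 2e-2; x *= 5) {
                double observed = Math.abs(Math.sin(x) / x - 1);
                double bound = x * x / 6;       // next term of the alternating series
                System.out.printf("x=%.3e  |sinc(x)-1|=%.3e  x^2/6=%.3e  belowUlp=%b%n",
                                  x, observed, bound, bound < ulpOfOne);
            }
        }
    }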
So I personally think that this is indeed an implementation detail. If
you look at the implementation of any basic functions, there are
*always* boundary cases, with different formulas for different ranges
of the argument. These cases are not advertised anywhere in the doc
(and we should be thankful of that, in my opinion).
As a user of sinc (and spherical Bessel functions in general, for
diffraction problems), the only thing I really care about is:
reliability near zero. How this reliability is enforced is really none
of my concerns.
One further point. If you were to try and implement the next spherical
Bessel function, you would find that the analytical floating-point
(without short cut, using Gilles' terminology) estimate near zero does
*not* behave well, because, I think, of catastrophic cancellations. So
in this case, you *have* to carry out a test on x, and if x is too
small, you have to replace the analytical expression of j1 by its
Taylor expansion, in order to remain within one ulp of the expected
value. The boundary is found using the preceding line of reasoning
(alternating series). In this case, this is still an implementation
detail, but *not* an optimization one-- it is a *requirement*!
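
[Again purely as an illustration, a hypothetical guarded j1 along those lines;
the method name, the 1e-4 cutoff, and the sample points are made up for the
example and are not taken from CM.]

    public class SphericalBesselSketch {
        /** Hypothetical j1 with a small-argument guard against catastrophic cancellation. */
        public static double j1(double x) {
            if (Math.abs(x) < 1e-4) {            // illustrative cutoff only
                return x / 3 - x * x * x / 30;   // leading terms of the Taylor expansion
            }
            return Math.sin(x) / (x * x) - Math.cos(x) / x;
        }

        public static void main(String[] args) {
            System.out.println(j1(1e-6));   // Taylor branch
            System.out.println(j1(0.5));    // analytical branch
        }
    }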

Sébastien

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Sat, Oct 08, 2011 at 02:29:21PM -0700, Phil Steitz wrote:
> On 10/8/11 2:12 PM, Gilles Sadowski wrote:
> > On Sat, Oct 08, 2011 at 07:21:23AM -0700, Phil Steitz wrote:
> >> On 10/8/11 5:05 AM, Gilles Sadowski wrote:
> >>> Hi Phil.
> >>>
> >>>>  Can you live with r1180315?
> >>> [I guess that you are talking to me.]
> >>>
> >>> I still stand with the arguments of my other post about this "1e-9" constant
> >>> being confusing for the "non numerics-aware" users.
> >>> However, I can understand that we may want to also document the departure
> >>> from the math definition incurred by numerical considerations.  So, I'd
> >>> propose to add:
> >>>   "The direct assignment to 1 for values below 1e-9 is an efficiency
> >>>    optimization on the ground that the result of the full computation
> >>>    is indistinguishable from 1 due to the limited accuracy of the floating
> >>>    point representation."
> >> We are also *defining* the value to be 1 at 0.  I think the current
> >> doc is pretty clear and I do want to keep the threshold in there,
> >> since it does affect what is being returned and really amounts to
> >> part of the definition.  Adding extra prose above is OK with me, but
> >> I think it's clear enough as is.
> > If you understand that "it does affect what is being returned" then it is
> > surely not clear enough as is, because, to the contrary, it does *not*
> > affect what is being returned: for any value 0 < x < 1e-9, the function returns 1.
> 
> Would it actually return the same bits if we just set the value at 0
> to 1 and evaluated sin(x)/x for the rest of the values?  In that
> case, I agree, we just need to doc the value at 0.  But IIUC what is
> going on, the values returned inside the recoded interval will be
> different from what the definitional formula would return (despite all
> being equal to 1 under double equals).

Answering with a few examples:
x=1.000000000000000000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=5.000000000000001000e-15 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=2.500000000000000400e-14 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=1.250000000000000200e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=6.250000000000001000e-13 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=3.125000000000000500e-12 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=1.562500000000000400e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=7.812500000000003000e-11 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=3.906250000000001400e-10 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=1.953125000000000700e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=9.765625000000004000e-09 f=1.000000000000000000e+00 s=1.000000000000000000e+00
x=4.882812500000002000e-08 f=9.999999999999996000e-01 s=9.999999999999996000e-01
x=2.441406250000001000e-07 f=9.999999999999900000e-01 s=9.999999999999900000e-01
x=1.220703125000000700e-06 f=9.999999999997516000e-01 s=9.999999999997516000e-01
x=6.103515625000004000e-06 f=9.999999999937912000e-01 s=9.999999999937912000e-01
x=3.051757812500002000e-05 f=9.999999998447796000e-01 s=9.999999998447796000e-01
x=1.525878906250001000e-04 f=9.999999961194893000e-01 s=9.999999961194893000e-01
x=7.629394531250005000e-04 f=9.999999029872346000e-01 s=9.999999029872346000e-01
x=3.814697265625002600e-03 f=9.999975746825600000e-01 s=9.999975746825600000e-01
x=1.907348632812501400e-02 f=9.999393681227797000e-01 s=9.999393681227797000e-01

Thus: below 9.765625000000004E-9, the value of the definitional formula
(without shortcut, indicated by "f=" in the above table) is strictly the
same as the CM implementation (with shortcut, indicated by "s=" in the above
table) in that they are both the double value "1.0".

[I still don't understand what you mean by "(despite all being equal to 1
under double equals)".]

What the implementation returns is, within double precision, the result of
the mathematical definition of sinc: sin(x) / x. Hence, there is no *need*
to document any special case, not even x = 0: The fact that without the
shortcut check, we'd get NaN does not mean that the implementation departs
from the definition, but rather that it correctly simulates it (at double
precision).
[However, if we assume that the doc should integrate numerical
considerations, it doesn't hurt to add the extra prose (in which case it
becomes necessary, IMHO, to explain the "1e-9").]

Maybe I should also add a "SincTest" class where a unit test would check the
strict equality of returned values from the actual division and from the
shortcut implementation?
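
[Roughly, such a test could look like the sketch below. It assumes the trunk
Sinc class has a no-argument constructor and a double value(double) method,
and the sampled points are arbitrary; the actual SincTest added to CM, if any,
may differ.]

    import org.apache.commons.math.analysis.function.Sinc;
    import org.apache.commons.math.util.FastMath;
    import org.junit.Assert;
    import org.junit.Test;

    public class SincShortcutTest {
        @Test
        public void testShortcutEqualsDefinition() {
            Sinc sinc = new Sinc();
            for (double x = 1e-15; x < 2e-2; x *= 5) {
                double definition = FastMath.sin(x) / x;   // full computation
                // Zero tolerance: the shortcut must return exactly the same double.
                Assert.assertEquals(definition, sinc.value(x), 0d);
            }
        }
    }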


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.
On 10/8/11 2:12 PM, Gilles Sadowski wrote:
> On Sat, Oct 08, 2011 at 07:21:23AM -0700, Phil Steitz wrote:
>> On 10/8/11 5:05 AM, Gilles Sadowski wrote:
>>> Hi Phil.
>>>
>>>>  Can you live with r1180315?
>>> [I guess that you are talking to me.]
>>>
>>> I still stand with the arguments of my other post about this "1e-9" constant
>>> being confusing for the "non numerics-aware" users.
>>> However, I can understand that we may want to also document the departure
>>> from the math definition incurred by numerical considerations.  So, I'd
>>> propose to add:
>>>   "The direct assignment to 1 for values below 1e-9 is an efficiency
>>>    optimization on the ground that the result of the full computation
>>>    is indistinguishable from 1 due to the limited accuracy of the floating
>>>    point representation."
>> We are also *defining* the value to be 1 at 0.  I think the current
>> doc is pretty clear and I do want to keep the threshold in there,
>> since it does affect what is being returned and really amounts to
>> part of the definition.  Adding extra prose above is OK with me, but
>> I think it's clear enough as is.
> If you understand that "it does affect what is being returned" then it is
> surely not clear enough as is, because, to the contrary, it does *not*
> affect what is being returned: for any value 0 < x < 1e-9, the function returns 1.

Would it actually return the same bits if we just set the value at 0
to 1 and evaluated sin(x)/x for the rest of the values?  In that
case, I agree, we just need to doc the value at 0.  But IIUC what is
going on, the values returned inside the recoded interval will be
different from what the definitional formula would return (despite all
being equal to 1 under double equals).

Phil  
> I'll add the additional prose.
>
>
> Gilles
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
On Sat, Oct 08, 2011 at 07:21:23AM -0700, Phil Steitz wrote:
> On 10/8/11 5:05 AM, Gilles Sadowski wrote:
> > Hi Phil.
> >
> >>  Can you live with r1180315?
> > [I guess that you are talking to me.]
> >
> > I still stand with the arguments of my other post about this "1e-9" constant
> > being confusing for the "non numerics-aware" users.
> > However, I can understand that we may want to also document the departure
> > from the math definition incurred by numerical considerations.  So, I'd
> > propose to add:
> >   "The direct assignment to 1 for values below 1e-9 is an efficiency
> >    optimization on the ground that the result of the full computation
> >    is indistinguishable from 1 due to the limited accuracy of the floating
> >    point representation."
> 
> We are also *defining* the value to be 1 at 0.  I think the current
> doc is pretty clear and I do want to keep the threshold in there,
> since it does affect what is being returned and really amounts to
> part of the definition.  Adding extra prose above is OK with me, but
> I think it's clear enough as is.

If you understand that "it does affect what is being returned" then it is
surely not clear enough as is, because, to the contrary, it does *not*
affect what is being returned: for any value 0 < x < 1e-9, the function returns 1.
I'll add the additional prose.


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.
On 10/8/11 5:05 AM, Gilles Sadowski wrote:
> Hi Phil.
>
>>  Can you live with r1180315?
> [I guess that you are talking to me.]
>
> I still stand with the arguments of my other post about this "1e-9" constant
> being confusing for the "non numerics-aware" users.
> However, I can understand that we may want to also document the departure
> from the math definition incurred by numerical considerations.  So, I'd
> propose to add:
>   "The direct assignment to 1 for values below 1e-9 is an efficiency
>    optimization on the ground that the result of the full computation
>    is indistinguishable from 1 due to the limited accuracy of the floating
>    point representation."

We are also *defining* the value to be 1 at 0.  I think the current
doc is pretty clear and I do want to keep the threshold in there,
since it does affect what is being returned and really amounts to
part of the definition.  Adding extra prose above is OK with me, but
I think it's clear enough as is.

Phil
>
> Is that OK with you?
>
>
> Regards,
> Gilles
>
> P.S. I also cannot live with the missing "@" in the {@code ...} tag
>      construct. ;-)
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hi Phil.

>  Can you live with r1180315?

[I guess that you are talking to me.]

I still stand with the arguments of my other post about this "1e-9" constant
being confusing for the "non numerics-aware" users.
However, I can understand that we may want to also document the departure
from the math definition incurred by numerical considerations.  So, I'd
propose to add:
  "The direct assignment to 1 for values below 1e-9 is an efficiency
   optimization on the ground that the result of the full computation
   is indistinguishable from 1 due to the limited accuracy of the floating
   point representation."

Is that OK with you?


Regards,
Gilles

P.S. I also cannot live with the missing "@" in the {@code ...} tag
     construct. ;-)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.
 Can you live with r1180315?

Phil

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
> >> 
> >> /**
> >> - * Sinc function.
> >> + * Sinc function, defined by <pre><code>
> >> + *
> >> + * sinc(x) = 1 if abs(x) < 1e-9;
> >> + *           sin(x) / x; otherwise
> >> + * </code></pre>
> > 
> > I would not document the first part of the alternative since it is an
> > implementation detail.  "1e-9" was chosen just because, with "double"
> > precision, the function value will be indistinguishable from 1. Strictly
> > speaking it is not part of the definition of "sinc". [This part of the
> > implementation could even be removed if it is deemed that we lose more time
> > doing the check than we gain when the user asks for the value at points below
> > 1e-9.]
> > 
> > Also, there probably should not be a ";" after the statements.
> 
> If it a) makes a difference in the returned result and b) is correct (up to double equality),

I don't understand what you write here.

What I meant above is:
 For values < 1e-9, it does not make a difference at double precision whether
 the computation is done in full (sin(x) / x) or by assigning "1" directly.

> I think we should leave the check in

Yes, of course; the suggestion to remove it was a mistake (see below).

> and document it.   We should always document top-coding or other departures from the formulas we define things by. 

I don't see why we should give a wrong definition, since it makes no difference. The
definition of "sinc" is[1]:
  sinc(x) = sin(x) / x

To compute it, care must be taken not to divide by zero; thus the simplest
operational definition is[2]:
  sinc(x) = 1           if x = 0
            sin(x) / x  otherwise

The one in CM is an efficiency improvement (to avoid evaluation of a sine
and a division when the result is indistinguishable from the known result).

The rationale for the shortcut could be a code comment, but as a user-level
comment, it is confusing.
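
[Put as code rather than user-level doc, the whole thing fits in a few lines.
This is a sketch only; the actual Sinc.java in CM may be organized differently.]

    import org.apache.commons.math.util.FastMath;

    /** Sketch of a sinc evaluation with the shortcut rationale as a code comment. */
    public class SincSketch {
        public static double value(double x) {
            // Shortcut: for |x| < 1e-9 the full computation sin(x) / x is
            // indistinguishable from 1 in double precision, so skip the sine
            // evaluation and the division (this also avoids 0/0 at x = 0).
            if (FastMath.abs(x) < 1e-9) {
                return 1d;
            }
            return FastMath.sin(x) / x;
        }
    }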


Gilles

> 
> I am fine changing the semicolons (which are being used as punctuation) to commas or even dropping them altogether. 
> 
> Phil

[1] http://en.wikipedia.org/wiki/Sinc_function
[2] http://mathworld.wolfram.com/SincFunction.html

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: svn commit: r1179928 - in [...]

Posted by Phil Steitz <ph...@gmail.com>.



On Oct 7, 2011, at 2:59 AM, gilles@harfang.homelinux.org wrote:

> On Fri, Oct 07, 2011 at 03:20:40AM -0000, psteitz@apache.org wrote:
>> Author: psteitz
>> Date: Fri Oct  7 03:20:39 2011
>> New Revision: 1179928
>> 
>> URL: http://svn.apache.org/viewvc?rev=1179928&view=rev
>> Log:
>> Javadoc fixes.
>> 
>> Modified:
>> [...]
>>    commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java
> 
>> [...]
> 
>> Modified: commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java
>> URL: http://svn.apache.org/viewvc/commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java?rev=1179928&r1=1179927&r2=1179928&view=diff
>> ==============================================================================
>> --- commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java (original)
>> +++ commons/proper/math/trunk/src/main/java/org/apache/commons/math/analysis/function/Sinc.java Fri Oct  7 03:20:39 2011
>> @@ -21,7 +21,11 @@ import org.apache.commons.math.analysis.
>> import org.apache.commons.math.util.FastMath;
>> 
>> /**
>> - * Sinc function.
>> + * Sinc function, defined by <pre><code>
>> + *
>> + * sinc(x) = 1 if abs(x) < 1e-9;
>> + *           sin(x) / x; otherwise
>> + * </code></pre>
> 
> I would not document the first part of the alternative since it is an
> implementation detail.  "1e-9" was chosen just because, with "double"
> precision, the function value will be indistinguishable from 1. Strictly
> speaking it is not part of the definition of "sinc". [This part of the
> implementation could even be removed if it is deemed that we lose more time
> doing the check than we gain when the user asks for the value at points below
> 1e-9.]
> 
> Also, there probably should not be a ";" after the statements.

If it a) makes a difference in the returned result and b) is correct (up to double equality), I think we should leave the check in and document it.   We should always document top-coding or other departures from the formulas we define things by. 

I am fine changing the semicolons (which are being used as punctuation) to commas or even dropping them altogether. 

Phil
> 
> 
> Regards,
> Gilles
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org