Posted to dev@lucenenet.apache.org by laimis <gi...@git.apache.org> on 2015/06/18 03:58:32 UTC

[GitHub] lucenenet pull request: use reduced precision float base class

GitHub user laimis opened a pull request:

    https://github.com/apache/lucenenet/pull/150

    use reduced precision float base class

    We noticed that running the tests on 32-bit CPUs was introducing floating-point rounding issues: tests were failing because the same calculation could produce different results when executed multiple times within a single test. After some discussion on the mailing list it was determined that 32-bit machines use different instructions than 64-bit machines to process floating-point numbers. Intermediate results carried at higher precision into subsequent operations eventually lead to rounding differences once the final calculation is returned as a float.
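    
    As a rough illustration of the kind of discrepancy involved (this snippet is illustrative only, not code from this PR), the same float expression can come out differently depending on whether an intermediate value stays in an extended-precision x87 register or is truncated back to 32 bits:
    
    using System;
    
    // Illustration only: whether the two results actually differ depends on the JIT,
    // the instruction set used (x87 vs. SSE) and the FPU precision mode.
    class FloatPrecisionDemo
    {
        static void Main()
        {
            float a = 1.0f / 3.0f;   // 0.33333334f
            float b = 3.0f;
    
            // On a 32-bit x87 JIT the intermediate (a * b) may be held at extended precision.
            float viaRegisters = a * b - 1.0f;
    
            // An explicit cast forces the intermediate back down to 32-bit precision.
            float viaMemory = (float)(a * b) - 1.0f;
    
            // These can disagree on 32-bit x87 (on the order of 3e-8 vs. 0); with SSE they typically match.
            Console.WriteLine(viaRegisters == viaMemory);
        }
    }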
    
    From all the reading and the several approaches I have tried so far, the only way I was able to consistently make the floating-point issues go away is to P/Invoke _controlfp_s and set the floating-point precision to 24 bits.
    
    I got the idea from https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/; see the section "Floating-point settings (runtime)", last paragraph. _controlfp comes up in other suggestions around this issue as well.
    
    Here is the information on _controlfp_s: https://msdn.microsoft.com/en-us/library/c9676k6h.aspx. It also seems there are equivalent functions in *nix environments, although I haven't tried or researched them in more depth yet.
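    
    For reference, the P/Invoke boils down to something like the sketch below. The constants come from the CRT's float.h and the signature from the MSDN page above; the DLL name (msvcrt.dll) is an assumption and may need to match whichever CRT is actually loaded, so treat this as an illustration rather than the exact code in the branch:
    
    using System.Runtime.InteropServices;
    
    internal static class FloatingPointPrecision
    {
        // Precision-control mask and 24-bit (single) precision, as defined in float.h.
        private const uint _MCW_PC = 0x00030000;
        private const uint _PC_24 = 0x00020000;
    
        // DLL name is an assumption; adjust to whichever CRT exports _controlfp_s.
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern int _controlfp_s(out uint currentControl, uint newControl, uint mask);
    
        // Force the x87 precision control down to 24 bits so intermediates round like floats.
        public static void SetTo24BitPrecision()
        {
            uint unused;
            _controlfp_s(out unused, _PC_24, _MCW_PC);
        }
    }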
    
    For tests that rely on floating-point calculations being consistent, a variant of LuceneTestCase is used that sets the precision mask to 24 bits before each test run. I put this in the test project, as I don't think this code belongs in the core itself.
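    
    The base-class variant is essentially the sketch below (the names are made up for the example, and the real class derives from LuceneTestCase): an NUnit [SetUp] hook drops the precision to 24 bits before each test, and a [TearDown] hook, shown here for completeness, puts the previous setting back:
    
    using System.Runtime.InteropServices;
    using NUnit.Framework;
    
    // Illustrative sketch of a reduced-precision test base class.
    public abstract class ReducedFloatPrecisionTestCase
    {
        private const uint _MCW_PC = 0x00030000;  // precision-control mask (float.h)
        private const uint _PC_24 = 0x00020000;   // 24-bit (single) precision
    
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern int _controlfp_s(out uint currentControl, uint newControl, uint mask);
    
        private uint _previousControl;
    
        [SetUp]
        public void SetReducedPrecision()
        {
            // A mask of zero reads the current control word without changing it ...
            _controlfp_s(out _previousControl, 0, 0);
            // ... then the precision-control bits are dropped to 24-bit precision.
            uint unused;
            _controlfp_s(out unused, _PC_24, _MCW_PC);
        }
    
        [TearDown]
        public void RestorePrecision()
        {
            // Put the precision-control bits back to whatever was in effect before the test.
            uint unused;
            _controlfp_s(out unused, _previousControl & _MCW_PC, _MCW_PC);
        }
    }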

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/laimis/lucenenet controlfp_s

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucenenet/pull/150.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #150
    
----
commit c11e3683b9abcc67893e8a885066107d9f703f80
Author: Laimonas Simutis <la...@gmail.com>
Date:   2015-06-06T18:20:27Z

    use reduced precision float base class

----



[GitHub] lucenenet pull request: use reduced precision float base class

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/lucenenet/pull/150

