Posted to github@arrow.apache.org by "rok (via GitHub)" <gi...@apache.org> on 2023/04/16 22:39:52 UTC

[GitHub] [arrow] rok commented on issue #22806: [C++] vendor a half precision floating point library

rok commented on issue #22806:
URL: https://github.com/apache/arrow/issues/22806#issuecomment-1510507313

   I've looked at some libraries to see what is usually used:
   * [TensorFlow](https://github.com/tensorflow/tensorflow/blob/master/third_party/FP16/workspace.bzl) - uses FP16
   * [Numpy](https://github.com/numpy/numpy/blob/7e86c2aadfc6dcee12274db698407275cede1b63/numpy/core/src/common/half.hpp#L2) seems to implement its own half-precision arithmetic.
   * [PyTorch](https://github.com/pytorch/pytorch/tree/be0b12ece576c86c5f059d15a64dcba0eb886ddd/third_party) - uses FP16
   * [FP16](https://github.com/Maratyszcza/FP16/) - seems to be [composed](https://github.com/Maratyszcza/FP16/tree/master/third-party) mostly of [half](https://half.sourceforge.net/) plus a few functions taken from other places.
   
   Half [seems pretty complete](https://half.sourceforge.net/namespacehalf__float.html), so I think either it or [FP16](https://github.com/Maratyszcza/FP16) would be a good candidate.
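   
   For reference, a rough sketch of what using each might look like (untested; the type and function names below are taken from each project's headers rather than verified against Arrow's build, so treat this as an assumption, not a vetted integration):
   
   ```cpp
   #include <cstdint>
   
   #include <half.hpp>  // half (half.sourceforge.net), header-only
   #include <fp16.h>    // FP16 (Maratyszcza/FP16), header-only
   
   int main() {
     // half provides a drop-in arithmetic type with overloaded operators;
     // arithmetic is emulated internally via single precision.
     half_float::half a(1.5f), b(0.25f);
     float sum = a + b;  // half converts implicitly to float
   
     // FP16 provides plain conversion helpers between IEEE binary16 bit
     // patterns and single-precision floats, but no arithmetic type.
     uint16_t bits = fp16_ieee_from_fp32_value(sum);
     float roundtripped = fp16_ieee_to_fp32_value(bits);
   
     return roundtripped == sum ? 0 : 1;
   }
   ```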
   
   cc @pitrou @bkietz 

