Posted to github@arrow.apache.org by "viirya (via GitHub)" <gi...@apache.org> on 2023/02/12 19:52:25 UTC

[GitHub] [arrow-rs] viirya commented on pull request #3690: Allow precision loss on multiplying decimal arrays

viirya commented on PR #3690:
URL: https://github.com/apache/arrow-rs/pull/3690#issuecomment-1427117474

   > This might be a dumb question, but can we not do something with precision statically instead of per-value. i.e. if multiplying two decimal numbers with precision of 10 and scale 5, the output should have precision of 20 and scale 10. The plan can then intentionally truncate inputs using division if necessary?
   
   Hmm, I think one reason is that we cannot know statically whether truncation will actually be needed before multiplying the values. If it turns out not to be needed, truncating the inputs early would lose precision that could otherwise have been kept (see the sketch below).
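   
   A minimal sketch of this point in plain Rust, using bare `i128` arithmetic rather than any arrow-rs API; the `Decimal128(10, 5)` inputs and the 18-digit target precision are assumptions chosen only for illustration. It shows that whether a product overflows the target depends on the concrete values, so it can only be detected per value:
   
   ```rust
   // Hypothetical stand-alone illustration; not arrow-rs code. Two
   // Decimal128(10, 5) inputs give a raw product of scale 10 (5 + 5)
   // and up to 20 digits (10 + 10), but whether a given product
   // exceeds an assumed 18-digit result precision depends on the
   // actual values.
   fn fits_precision(value: i128, precision: u32) -> bool {
       // A precision-p decimal holds integers in (-10^p, 10^p).
       value.abs() < 10i128.pow(precision)
   }
   
   fn main() {
       // Both pairs are valid Decimal128(10, 5) values (scaled by 10^5).
       let small = (1_23456i128, 2_00000i128);         // 1.23456 * 2.00000
       let large = (99999_99999i128, 99999_99999i128); // 99999.99999 ^ 2
   
       for (a, b) in [small, large] {
           let product = a * b; // raw product, scale 10
           // Only the large pair overflows 18 digits; truncating the
           // small pair's inputs up front would have lost digits for
           // no benefit.
           println!("{product} fits 18 digits: {}", fits_precision(product, 18));
       }
   }
   ```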
   
   > This might be another dumb thought, but if you want truncating arithmetic, why not just cast to a floating point representation? It'll be faster and better supported? I guess I had thought the only reason to still bother with decimal precision these days was if you couldn't tolerate approximation?
   
   A major reason for this work is to provide compatibility with existing queries. If our users happen to use decimals for whatever reason and rely on this truncation behavior, they may not tolerate a different behavior such as having their queries fail. Besides, since truncation does not always happen but is only a possibility, they might tolerate approximation in the rare cases where it does occur, but not across all values (this is my own thinking, not the voice of real users).
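   
   On the floating-point question, a rough illustration in plain Rust (the 20-digit value is chosen arbitrarily, and this is not arrow-rs code) of why the cast approximates every value rather than only the rare overflowing ones: `f64` carries roughly 15-16 significant decimal digits, so a 20-digit decimal cannot even round-trip through it.
   
   ```rust
   // Hypothetical illustration: casting a 20-digit decimal (stored as
   // i128, scale 0 for simplicity) to f64 loses digits immediately,
   // before any arithmetic happens.
   fn main() {
       let exact: i128 = 12_345_678_901_234_567_891;
       let as_float = exact as f64;
       let round_trip = as_float as i128;
   
       println!("exact:      {exact}");
       println!("round trip: {round_trip}");
       // The two differ: the approximation applies to every value,
       // not just those whose products would overflow.
       assert_ne!(exact, round_trip);
   }
   ```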


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org