Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/07/30 07:35:55 UTC

[GitHub] [spark] attilapiros commented on pull request #29090: [SPARK-32293] Fix inconsistency between Spark memory configs and JVM option

attilapiros commented on pull request #29090:
URL: https://github.com/apache/spark/pull/29090#issuecomment-665952684


   Thanks @holdenk for looking into this. 
   
   And what about logging a warning when no unit is given?
   
   Like:
   "Memory setting without explicit unit (${value}) is taken to be in MB by default! For details check SPARK-32293."
   
   This way, in case of a problem, we give the user a hint about the root cause.
   This error would mostly surface at the beginning of the application: once the number is multiplied by 1024, the result is so huge that it triggers an allocation which is hard to satisfy (not impossible, but in client mode going up, for example, from 1 GB to 1 TB is a big jump).
   
   In these cases the exception is thrown by the failed memory allocation, not by the config parsing itself, so by itself it gives no pointer back to the misconfigured value.
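   
   For illustration, here is a rough Scala sketch of what I have in mind. It is a standalone toy, not Spark's actual config parsing code; MemoryParamParser and parseToMb are made-up names:
   
       import org.slf4j.LoggerFactory
       
       object MemoryParamParser {
         private val log = LoggerFactory.getLogger(getClass)
       
         // Parses a memory setting such as "512m", "2g", or a bare "1024".
         // A bare number is assumed to be MB, and the proposed warning is logged.
         def parseToMb(value: String): Long = {
           val trimmed = value.trim.toLowerCase
           if (trimmed.nonEmpty && trimmed.forall(_.isDigit)) {
             log.warn(s"Memory setting without explicit unit ($value) is " +
               "interpreted as MB by default! For details see SPARK-32293.")
             trimmed.toLong
           } else {
             val (num, unit) = (trimmed.init.toLong, trimmed.last)
             unit match {
               case 'k' => num / 1024            // rounds down for sub-MB values
               case 'm' => num
               case 'g' => num * 1024
               case 't' => num * 1024 * 1024
               case _   =>
                 throw new IllegalArgumentException(s"Invalid memory string: $value")
             }
           }
         }
       }
   
   With something like this, a bare setting such as "1024" would still work exactly as before, but it would leave a trace in the logs that points the user at SPARK-32293.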


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


