Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/07/30 14:03:29 UTC

[GitHub] [spark] tgravescs commented on pull request #29090: [SPARK-32293] Fix inconsistency between Spark memory configs and JVM option

tgravescs commented on pull request #29090:
URL: https://github.com/apache/spark/pull/29090#issuecomment-666384190


   Sorry, I just read @holdenk's comment:
   
   >> if someone has a script that's been using the default behaviour of k already 
   
   I thought the default is bytes in most cases; was there somewhere we are using k? If not, I'm less worried, as I mentioned above, because if someone specifies the size in bytes and we append an M, it will most likely fail as too large.
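   To illustrate the reasoning (a minimal sketch, not Spark's actual code: the helper name and suffix logic are hypothetical), if a user writes a memory size as a raw byte count and a default `M` (MiB) suffix is appended, the resulting JVM heap request is inflated by a factor of 2^20 and the JVM would refuse to start, so the mistake surfaces immediately rather than silently misconfiguring memory:

```python
# Hypothetical helper: append a default unit only when the value
# carries no suffix of its own.
def with_default_unit(size: str, unit: str = "M") -> str:
    return size if size[-1].isalpha() else size + unit

raw_bytes = str(4 * 1024**3)            # user meant 4 GiB, written in bytes
jvm_arg = "-Xmx" + with_default_unit(raw_bytes)
print(jvm_arg)                          # "-Xmx4294967296M" ~ 4 EiB of heap,
                                        # far beyond any machine, so startup fails fast
```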


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org