Posted to reviews@spark.apache.org by vanzin <gi...@git.apache.org> on 2018/11/30 17:32:07 UTC

[GitHub] spark pull request #23055: [SPARK-26080][PYTHON] Skips Python resource limit...

Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r237941472
  
    --- Diff: docs/configuration.md ---
    @@ -190,6 +190,8 @@ of the most common options to set are:
         and it is up to the application to avoid exceeding the overhead memory space
         shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
         is added to executor resource requests.
    +
    +    NOTE: This configuration is not supported on Windows.
    --- End diff --
    
    This is a little misleading: on Windows the extra memory is still added to the resource requests; it is only the process-level memory limit that is not implemented.
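    
    For background, that limit is enforced in the PySpark worker through Python's `resource` module, which is POSIX-only and does not exist on Windows. A minimal sketch of the guard, assuming the environment variable name used to pass the configured limit down to the worker (not the exact worker.py code):
    
        import os
        
        # The 'resource' module is POSIX-only; the import fails on Windows,
        # which is why the process-level limit cannot be applied there.
        try:
            import resource
            has_resource_module = True
        except ImportError:
            has_resource_module = False
        
        # Assumed env var name for illustration: the executor passes the
        # configured PySpark memory to the worker via its environment.
        memory_limit_mb = int(os.environ.get("PYSPARK_EXECUTOR_MEMORY_MB", "-1"))
        
        if memory_limit_mb > 0 and has_resource_module:
            # Cap the worker's total address space at the configured amount.
            new_limit = memory_limit_mb * 1024 * 1024
            soft, hard = resource.getrlimit(resource.RLIMIT_AS)
            if soft == resource.RLIM_INFINITY or new_limit < soft:
                resource.setrlimit(resource.RLIMIT_AS, (new_limit, new_limit))
        # On Windows this block is skipped entirely, but the memory was
        # already counted into the executor's resource request on the JVM
        # side, so the request itself is unaffected.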


---
