Posted to reviews@spark.apache.org by pankajarora12 <gi...@git.apache.org> on 2015/02/25 19:45:35 UTC

[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

GitHub user pankajarora12 opened a pull request:

    https://github.com/apache/spark/pull/4770

    [CORE][YARN] SPARK-6011: Used Current Working directory for sparklocaldirs instead of Application Directory so that spark-local-files gets deleted when executor exits abruptly.

    Spark uses the current application directory to save shuffle files for all executors. But when an executor is killed abruptly, the shutdown hook in DiskBlockManager.scala does not get a chance to run, and these files remain until the application exits.
    
    This causes out-of-disk-space errors for long-running or continuously running applications.
    In this fix I use the current working directory, which is inside the executor's directory, to save shuffle files instead of the application's directory, so that YARN clears those directories when the executor is killed.
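    For context, the two strategies being compared can be sketched roughly as follows (a simplified sketch, not the actual Spark code; `env` stands in for the container environment that YARN sets up):
    
    ```scala
    // Simplified sketch of the two strategies discussed in this PR.
    
    // Existing behavior: resolve local dirs from the YARN-provided env vars
    // (LOCAL_DIRS on Hadoop 2.x, YARN_LOCAL_DIRS on 0.23.x).
    def yarnLocalDirs(env: Map[String, String]): String =
      env.get("YARN_LOCAL_DIRS")
        .orElse(env.get("LOCAL_DIRS"))
        .getOrElse("")
    
    // Proposed behavior: use the container's current working directory,
    // which YARN deletes when the container exits.
    def cwdLocalDir(): String =
      Option(System.getProperty("user.dir")).getOrElse("")
    ```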
    
    -Pankaj

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/pankajarora12/spark master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/4770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4770
    
----
commit d6bfba3d7b9236a02a7e91233f8e512bea761af0
Author: pankaj.arora <pa...@guavus.com>
Date:   2014-05-31T11:11:05Z

    [SPARK-1979] Added Error Handling if user passes application params with --arg

commit 7c838862d69095ad9ccf9fa8e3ff9a582b0e647d
Author: pankaj arora <pa...@guavus.com>
Date:   2015-02-25T17:33:50Z

    Merge upstream

commit 3db9a19baec2f7d891f0b1a18d89907d633c3c02
Author: pankaj arora <pa...@guavus.com>
Date:   2015-02-25T18:21:47Z

    [CORE] SPARK-6011: Used Current Working directory for sparklocaldirs instead of Application Directory so that spark-local-files gets deleted when executor exits abruptly.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25379441
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    Can you suggest the correct way to delete those files when an executor dies and there is no ExternalShuffleService?




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25367786
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    Hi Sean,
    
    What I have understood from http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/ is:
    
    the container directory from which the executor is launched, created by the node manager, is inside the YARN local dirs. So it automatically fulfills that criterion.
    
    Please correct me if I am wrong.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25369938
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    @vanzin AFAIK, if an executor is lost you cannot reuse its data. Correct me if I am wrong.
    
    And user.dir is inside the same directory that you configured YARN to use for its local dirs.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25380094
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    If the problem is with shuffle files accumulating, as I suggested before, my understanding is that `ContextCleaner` would take care of this. Maybe your application is not releasing RDDs for garbage collection, in which case the cleaner wouldn't be able to do much. Or maybe the cleaner has a bug, or wasn't supposed to do that in the first place.
    
    But the point here is that your patch is not correct. It breaks two existing features.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25370157
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    In that case you will not be writing shuffle files to this directory.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76062964
  
    Yes, do you mind closing this PR? I think the same underlying issue of temp file cleanup is discussed in https://issues.apache.org/jira/browse/SPARK-5836 and I think the discussion could take place there.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25373313
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    The code is in `DiskBlockManager.scala`. It's the same code whether you're using the external shuffle service or not. As I said, the external service just tracks location of shuffle files (e.g. "this block id is in file /blah"). That code is in `network/shuffle`.
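    The "tracks location only" idea can be illustrated with a minimal sketch (hypothetical, not the actual `network/shuffle` code; the class and method names here are made up for illustration):
    
    ```scala
    import scala.collection.mutable
    
    // Hypothetical sketch of an external shuffle service's bookkeeping:
    // it records where each block lives on disk, while the executor's
    // BlockManager is what actually writes the files.
    class ShuffleIndex {
      private val blockToFile = mutable.Map.empty[String, String]
    
      // Executors register the files they wrote.
      def register(blockId: String, path: String): Unit =
        blockToFile(blockId) = path
    
      // The service serves a block by looking up its file; this lookup is
      // useless if the file was deleted out from under it (e.g. by
      // container cleanup, as this patch would cause).
      def lookup(blockId: String): Option[String] = blockToFile.get(blockId)
    }
    ```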




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76057426
  
    Also, using multiple disks for each executor provides speed, but the failure of any one of those disks will fail every executor on that node.
    
    On the other hand, using one disk per executor, with as many disks available as there are executors, provides fault tolerance against disk failures.
    
    I think this is quite debatable.
    
    On Thu, Feb 26, 2015 at 2:30 AM, Pankaj Arora <pa...@gmail.com>
    wrote:
    
    > I thought about that case too. Since we will be having many executors on
    > one node. So yarn will use different local dir for launching each executor
    > and that will use up other disks too.
    >
    > On Thu, Feb 26, 2015 at 1:39 AM, Marcelo Vanzin <no...@github.com>
    > wrote:
    >
    >> There is also a second thing that is broken by this patch:
    >> YARN_LOCAL_DIRS can actually be multiple directories, as the name
    >> implies. The BlockManager uses that to distribute shuffle files across
    >> multiple disks, to speed things up. After this change, everything will end
    >> up on the same disk.
    >>
    >> —
    >> Reply to this email directly or view it on GitHub
    >> <https://github.com/apache/spark/pull/4770#issuecomment-76045346>.
    >>
    >
    >





[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25368698
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    Perhaps, but that's not the directory we're looking for in this code. We want the local dirs; the deleted comments explain where that value comes from. I don't see how this fixes the problem you reported, though. You might have a look at the conversation happening now at https://github.com/apache/spark/pull/4759#issuecomment-76026644 ; I think shuffle files are kept on purpose in some instances, but I am not clear whether this is one of them.
    
    @vanzin I know I am invoking you a lot today, but your thoughts would be good here too.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25367198
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    No, I'm pretty certain you can't make this change. You're ignoring the setting for YARN's directories and just using the JVM's working directory? Why?




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76028290
  
    Can one of the admins verify this patch?




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25377996
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    So what I understood is:
    1. The ExternalShuffleService is a separate process, since it must keep running even when an executor dies.
    2. It is per node and serves all executors on that node.
    3. If one executor on a node dies, its blocks will be served by another executor on that node.
    4. It does not read files directly; it just keeps a mapping from blockId to filename.
    
    If the above is correct, how does it serve blocks if all the executors on a particular node die?
    
    Am I wrong somewhere in my understanding?
    --pankaj





[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/4770




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25369132
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    No, please do not make this change; it's not correct. We do want to use those env variables, which are set by YARN and configurable (so, for example, users can tell apps to use a fast local disk to store shuffle data instead of whatever disk hosts home directories).
    
    And you do not want the executor's files to disappear when it dies, because you may be able to reuse shuffle data written by that executor and save the work of re-computing it.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25370530
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    Yes you are. The external shuffle service only tracks the location of shuffle files. The executor's `BlockManager` is still writing the actual files to disk.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76045346
  
    There is also a second thing that is broken by this patch: `YARN_LOCAL_DIRS` can actually be multiple directories, as the name implies. The BlockManager uses that to distribute shuffle files across multiple disks, to speed things up. After this change, everything will end up on the same disk.
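    The multi-disk behavior being described can be sketched like this (a hypothetical illustration, not the actual BlockManager code; the function name is made up): the comma-separated list is split, and each file is deterministically assigned to one of the resulting directories.
    
    ```scala
    // Hypothetical sketch: spreading shuffle files across the configured
    // local dirs, as the BlockManager does conceptually. A single-directory
    // value (like user.dir) would send every file to the same disk.
    def pickLocalDir(localDirs: String, fileName: String): String = {
      val dirs = localDirs.split(",").filter(_.nonEmpty)
      require(dirs.nonEmpty, "no local dirs configured")
      // Non-negative modulo so negative hashCodes still index safely.
      val idx = ((fileName.hashCode % dirs.length) + dirs.length) % dirs.length
      dirs(idx)
    }
    ```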




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76057882
  
    > So yarn will use different local dir for launching each executor
    
    But after your patch, a single executor is tied to a single local disk; you're removing that feature. Disk failures are not an issue here: if one disk can fail, more than one can also fail, and the block manager will figure things out.
    
    I don't see how any of this is debatable.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25369644
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    So e.g. in Spark run via yarn-client
    I can see directory structures like
    {yarn.nodemanager.local-dirs}/nm-local-dir/usercache/admin/appcache/application_1424859293845_0003/container_1424859293845_0003_01_000001/ -- this is the current working directory, since the executor was launched from it.
    Spark, however, uses {yarn.nodemanager.local-dirs}/nm-local-dir/usercache/admin/appcache/application_1424859293845_0003/ to write shuffle files, and that directory only gets deleted when the application shuts down.
    Also, regarding the approach in #4759 (comment): it will not work if the executor gets killed without letting the shutdown hook trigger.
    
    -pankaj





[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76056141
  
    I thought about that case too. Since we will have many executors on
    one node, YARN will use a different local dir to launch each executor,
    so the other disks will still get used.
    
    On Thu, Feb 26, 2015 at 1:39 AM, Marcelo Vanzin <no...@github.com>
    wrote:
    
    > There is also a second thing that is broken by this patch: YARN_LOCAL_DIRS
    > can actually be multiple directories, as the name implies. The BlockManager
    > uses that to distribute shuffle files across multiple disks, to speed
    > things up. After this change, everything will end up on the same disk.
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/4770#issuecomment-76045346>.
    >
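
    The disk-spreading concern quoted above can be sketched like this (hypothetical helper names, only conceptually mirroring what the BlockManager does; not the actual Spark code):

    ```scala
    // Sketch: LOCAL_DIRS may list several disks, and the block manager
    // hashes each file name to pick one, spreading shuffle I/O across
    // disks. Collapsing to the single container CWD loses that spread.
    object DiskSpreadSketch {
      // Split a comma-separated dir list, dropping empty entries.
      def parseDirs(localDirs: String): Array[String] =
        localDirs.split(",").map(_.trim).filter(_.nonEmpty)

      // Pick a directory for a file: non-negative hash modulo dir count.
      def dirFor(fileName: String, dirs: Array[String]): String = {
        val idx = (fileName.hashCode % dirs.length + dirs.length) % dirs.length
        dirs(idx)
      }
    }
    ```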





[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25370059
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    > AFAIK if an executor gets lost you cannot reuse its data. Correct me if I am wrong.
    
    Not true if you're using the external shuffle service.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by pankajarora12 <gi...@git.apache.org>.
Github user pankajarora12 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25373106
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    Can you please point me to where in the code that is done?
    Thanks,
    Pankaj




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/4770#issuecomment-76036782
  
    @pankajarora12 Data generated by `DiskBlockManager` cannot be deleted since it may be used by other executors when using the external shuffle service. You may be able to optimize to delete these things when not using the external service, but that sounds like the wrong approach.
    
    If it's the shuffle data that is accumulating, maybe the right fix is for the block manager to properly clean up shuffle files that are not used anymore. The executor doesn't have enough information for that, as far as I can tell, and the driver would need to tell executors when RDDs are gc'ed so that their shuffle data can be cleaned up. Maybe that's already done, even, but I don't know.
    
    Well, long story short, this change seems to break dynamic allocation and anything that uses the external shuffle service, so it really cannot be pushed.




[GitHub] spark pull request: [CORE][YARN] SPARK-6011: Used Current Working ...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25378739
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    The way I understand it, the shuffle service can serve the files. But the executor still writes them directly - the write does not go through the shuffle service, and those files are written to the directories set up by `createLocalDirs` in DiskBlockManager.scala.
    
    There's even a comment alluding to that in the `doStop` method:
    
        // Only perform cleanup if an external service is not serving our shuffle files.
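
    The `doStop` logic being referenced can be sketched as follows (names and structure are approximate, not the exact DiskBlockManager code): local dirs are only removed when no external shuffle service is serving the files they contain.

    ```scala
    import java.io.File

    // Sketch: cleanup is skipped when an external shuffle service may
    // still need the files, which is why the files written by a killed
    // executor can outlive it.
    class CleanupSketch(externalShuffleServiceEnabled: Boolean,
                        localDirs: Seq[File]) {
      def doStop(): Unit = {
        // Only perform cleanup if an external service is not serving our shuffle files.
        if (!externalShuffleServiceEnabled) {
          // Recursively delete each configured local directory tree.
          def delete(f: File): Unit = {
            Option(f.listFiles()).foreach(_.foreach(delete))
            f.delete()
          }
          localDirs.foreach(delete)
        }
      }
    }
    ```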

