Posted to reviews@spark.apache.org by morenn520 <gi...@git.apache.org> on 2017/05/05 11:30:58 UTC

[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

GitHub user morenn520 opened a pull request:

    https://github.com/apache/spark/pull/17872

    [SPARK-20608] allow standby namenodes in spark.yarn.access.namenodes (PR in master branch)

    Related Jira:
    https://issues.apache.org/jira/browse/SPARK-20608
    
    Description:
    See PR (in branch-2.1): https://github.com/apache/spark/pull/17870/


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/morenn520/spark SPARK-20608-master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17872
    
----
commit ace85cab22bec7ba365340fb3fd3f8d28095fb62
Author: Chen Yuechen <ch...@qiyi.com>
Date:   2017-05-05T11:27:00Z

    [SPARK-20608] allow standby namenodes in spark.yarn.access.namenodes

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115200282
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -22,6 +22,8 @@ import scala.util.Try
     
     import org.apache.hadoop.conf.Configuration
     import org.apache.hadoop.fs.{FileSystem, Path}
    +import org.apache.hadoop.ipc.RemoteException
    +import org.apache.hadoop.ipc.StandbyException
    --- End diff --
    
    These two lines of imports can be merged into one line.
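
    For reference, the merged form of the two imports shown in the diff would be the single Scala line:

        import org.apache.hadoop.ipc.{RemoteException, StandbyException}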


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115183873
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    A Hadoop-compatible FS is not necessarily HDFS; we can configure wasb, adls and others. Also, wasb and adls support fetching delegation tokens through the common FS API, so we should avoid mentioning "Namenode", which only exists in HDFS.
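
    A minimal sketch of a more filesystem-neutral message, reusing the dst and logWarning names from the diff above (the exact wording is illustrative, not what was committed):

        // Refer to the destination filesystem URI rather than an HDFS-specific "Namenode",
        // since dst may point at wasb, adls or another Hadoop-compatible FS.
        logWarning(s"Failed to get token for destination filesystem $dst", e)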


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by vanzin <gi...@git.apache.org>.
Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    Yes, I already explained this in the discussion on the bug. The very fact that you're getting an exception from the standby namenode means you're not actually getting the delegation token, which makes this change pointless.


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115191861
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    I refactored it. Please review and give some more advice :)


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115184514
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    In our tests, there are two possible exceptions when spark.yarn.access.namenodes=hdfs://activeNamenode,hdfs://standbyNamenode:
    1) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
    2) Caused by: org.apache.hadoop.ipc.StandbyException: Operation category WRITE is not supported in state standby
    Maybe RemoteException should be caught in a better way.
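
    A minimal sketch of one way to treat both forms uniformly, assuming a hypothetical isStandby helper inside the provider (not part of this patch) and Hadoop's RemoteException.unwrapRemoteException:

        import java.io.IOException
        import org.apache.hadoop.ipc.{RemoteException, StandbyException}

        /** True if e is a StandbyException, either thrown directly or wrapped in a
         *  RemoteException as it comes back over Hadoop IPC. */
        def isStandby(e: IOException): Boolean = e match {
          case _: StandbyException => true
          case re: RemoteException =>
            // unwrapRemoteException returns the wrapped exception when its class name
            // matches one of the given types; otherwise it returns the RemoteException itself.
            re.unwrapRemoteException(classOf[StandbyException]).isInstanceOf[StandbyException]
          case _ => false
        }

    The two catch cases in the diff could then collapse into a single "case e: IOException if isStandby(e)" handler.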


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    @jerryshao done.


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    I think @vanzin is saying this is not the right change.


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/17872


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115182668
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    It's not accurate to say "Namenode" here, because we may configure other, non-HDFS filesystems.


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115183402
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    Hmm... this is actually fetching tokens from a Hadoop FS, inside HadoopFSCredentialProvider; doesn't that mean it is exactly HDFS?


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115186429
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    What I mean is that if the "RemoteException" is caused by something else, it is not correct to log it as "Namenode ${dst} is in state standby".


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    Can one of the admins verify this patch?


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    @srowen @jerryshao @steveloughran This is the latest PR. #17870 is deprecated.


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    This change may conflict with #17723, but I think it is easy to resolve. CC @mgummelt.


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by steveloughran <gi...@git.apache.org>.
Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r114985141
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -81,8 +90,15 @@ private[security] class HadoopFSCredentialProvider
         sparkConf.get(PRINCIPAL).flatMap { renewer =>
           val creds = new Credentials()
           hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -        val dstFs = dst.getFileSystem(hadoopConf)
    -        dstFs.addDelegationTokens(renewer, creds)
    +        try {
    +          val dstFs = dst.getFileSystem(hadoopConf)
    +          dstFs.addDelegationTokens(renewer, creds)
    +        } catch {
    +          case e: StandbyException =>
    +            logWarning(s"Namenode ${dst} is in state standby", e)
    +          case e: RemoteException =>
    --- End diff --
    
    I'd suggested adding a handler for UnknownHostException too, but now I think that could hide problems with client config. Best to leave as is.


[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by morenn520 <gi...@git.apache.org>.
Github user morenn520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115188391
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    You are right. I will refactor the exception log.


[GitHub] spark issue #17872: [SPARK-20608] allow standby namenodes in spark.yarn.acce...

Posted by steveloughran <gi...@git.apache.org>.
Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/17872
  
    At a glance, the patch LGTM.
    
    



[GitHub] spark pull request #17872: [SPARK-20608] allow standby namenodes in spark.ya...

Posted by jerryshao <gi...@git.apache.org>.
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17872#discussion_r115184099
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala ---
    @@ -48,9 +50,16 @@ private[security] class HadoopFSCredentialProvider
         val tmpCreds = new Credentials()
         val tokenRenewer = getTokenRenewer(hadoopConf)
         hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
    -      val dstFs = dst.getFileSystem(hadoopConf)
    -      logInfo("getting token for: " + dst)
    -      dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      try {
    +        val dstFs = dst.getFileSystem(hadoopConf)
    +        logInfo("getting token for: " + dst)
    +        dstFs.addDelegationTokens(tokenRenewer, tmpCreds)
    +      } catch {
    +        case e: StandbyException =>
    +          logWarning(s"Namenode ${dst} is in state standby", e)
    --- End diff --
    
    Also, for the "RemoteException" case below, how do you know the "RemoteException" is actually a standby exception?

