Posted to reviews@spark.apache.org by harishreedharan <gi...@git.apache.org> on 2014/07/21 07:46:45 UTC

[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

GitHub user harishreedharan opened a pull request:

    https://github.com/apache/spark/pull/1506

    SPARK-2582. Make Block Manager Master pluggable.

    This patch makes BlockManagerMaster a trait, makes the current BlockManagerMaster one of the
    possible implementations, and renames it to StandaloneBlockManagerMaster. An additional (as yet
    undocumented) configuration parameter is added that selects which BlockManagerMaster
    implementation to use. Later, when we add BlockManagerMasters that write metadata to HDFS or
    replicate it, we can add other values that map to those implementations.
    
    There is no change in current behavior. We must also ensure that other implementations use the
    current Akka actor, so the code in the BlockManager does not need to care which implementation is
    used on the BlockManagerMaster side. I am not sure how to enforce this, but it is not a major
    concern: since the mechanism need not be fully pluggable, the only options would be part of Spark
    itself, so enforcement should be fairly easy.
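
The refactoring described above can be sketched roughly as follows. This is a hedged illustration only: the simplified trait, the `create` helper, and its `masterType` parameter are stand-ins for the patch's actual code, which wires in Spark's real SparkConf and Akka actor machinery.

```scala
// Illustrative sketch of a pluggable BlockManagerMaster, not the patch itself.
trait BlockManagerMaster {
  def registerBlockManager(id: String): Unit
  def stop(): Unit
}

// The current implementation, renamed as the description proposes.
class StandaloneBlockManagerMaster extends BlockManagerMaster {
  override def registerBlockManager(id: String): Unit = ()
  override def stop(): Unit = ()
}

object BlockManagerMaster {
  // Select an implementation from a configuration value. Mirroring the
  // patch, only one option exists today, so every value maps to it.
  def create(masterType: String): BlockManagerMaster = masterType match {
    case _ => new StandaloneBlockManagerMaster
  }
}
```

A later HDFS-backed or replicating implementation would become another `case` arm keyed on its configuration value.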

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/harishreedharan/spark pluggable-BMM

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/1506.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1506
    
----
commit 840b3cec3383fb8a1943c863f4db313d694f8922
Author: Hari Shreedharan <ha...@gmail.com>
Date:   2014-07-21T05:30:59Z

    SPARK-2582. Make Block Manager Master pluggable.
    
    This patch makes BlockManagerMaster a trait, makes the current BlockManagerMaster one of the
    possible implementations, and renames it to StandaloneBlockManagerMaster. An additional (as yet
    undocumented) configuration parameter is added that selects which BlockManagerMaster
    implementation to use. Later, when we add BlockManagerMasters that write metadata to HDFS or
    replicate it, we can add other values that map to those implementations.
    
    There is no change in current behavior. We must also ensure that other implementations use the
    current Akka actor, so the code in the BlockManager does not need to care which implementation is
    used on the BlockManagerMaster side. I am not sure how to enforce this, but it is not a major
    concern: since the mechanism need not be fully pluggable, the only options would be part of Spark
    itself, so enforcement should be fairly easy.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/1506



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

Posted by pwendell <gi...@git.apache.org>.
Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/1506#issuecomment-56289168
  
    Hey @harishreedharan, thanks for submitting, but I'd like to close this PR for now pending a more complete design proposal for how external implementations of the block storage service would work. There are a bunch of other challenges in decoupling the block storage service from the SparkContext... it's definitely an interesting idea longer term, but one that would need a thorough design and consensus. Making this pluggable would signal to the community that we want to head in this direction, so I'd propose reaching consensus on that before proceeding.
    
    For streaming specifically I know you and @tdas are working on some more specific mechanisms to provide H/A in that case.






[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

Posted by hsaputra <gi...@git.apache.org>.
Github user hsaputra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1506#discussion_r15251850
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
    @@ -215,9 +215,15 @@ object SparkEnv extends Logging {
           "MapOutputTracker",
           new MapOutputTrackerMasterActor(mapOutputTracker.asInstanceOf[MapOutputTrackerMaster], conf))
     
    -    val blockManagerMaster = new BlockManagerMaster(registerOrLookup(
    -      "BlockManagerMaster",
    -      new BlockManagerMasterActor(isLocal, conf, listenerBus)), conf)
    +    val blockManagerMasterType = conf.get("spark.blockmanager.type", "standalone")
    +    var blockManagerMaster: BlockManagerMaster = null
    +    blockManagerMasterType match {
    +      case _ =>
    +        // Since currently only one option exists, this is what is to be done in any case.
    +        blockManagerMaster = new StandaloneBlockManagerMaster(registerOrLookup(
    --- End diff --
    
    So if I want to use a different BlockManagerMaster implementation, I have to modify this code to
    support the new type?
    
    It is preferable to use a fully qualified class name to allow pluggability: as long as the
    implementing class is on the classpath, it should be possible to use a different implementation
    of the BlockManagerMaster.
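
The fully-qualified-class-name approach suggested here could look roughly like this. The trait, the `BlockManagerMasterLoader` object, and the no-arg-constructor requirement are illustrative assumptions, not the patch's actual code; a real version would pass Spark's SparkConf and actor reference to the constructor.

```scala
// Illustrative trait; Spark's real BlockManagerMaster has a wider API.
trait BlockManagerMaster {
  def name: String
}

class StandaloneBlockManagerMaster extends BlockManagerMaster {
  override def name: String = "standalone"
}

object BlockManagerMasterLoader {
  // Instantiate whichever implementation the configuration names, as long
  // as it is on the classpath, has a no-arg constructor, and extends the
  // BlockManagerMaster trait.
  def load(fqcn: String): BlockManagerMaster =
    Class.forName(fqcn)
      .getDeclaredConstructor()
      .newInstance()
      .asInstanceOf[BlockManagerMaster]
}
```

With this shape, `spark.blockmanager.type` would carry an FQCN instead of a keyword, and no `match` needs editing to add an implementation.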



[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

Posted by harishreedharan <gi...@git.apache.org>.
Github user harishreedharan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1506#discussion_r15252048
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
    @@ -215,9 +215,15 @@ object SparkEnv extends Logging {
           "MapOutputTracker",
           new MapOutputTrackerMasterActor(mapOutputTracker.asInstanceOf[MapOutputTrackerMaster], conf))
     
    -    val blockManagerMaster = new BlockManagerMaster(registerOrLookup(
    -      "BlockManagerMaster",
    -      new BlockManagerMasterActor(isLocal, conf, listenerBus)), conf)
    +    val blockManagerMasterType = conf.get("spark.blockmanager.type", "standalone")
    +    var blockManagerMaster: BlockManagerMaster = null
    +    blockManagerMasterType match {
    +      case _ =>
    +        // Since currently only one option exists, this is what is to be done in any case.
    +        blockManagerMaster = new StandaloneBlockManagerMaster(registerOrLookup(
    --- End diff --
    
    I thought about doing that, but it does not give us a way to force the implementation to use the
    Akka BlockManagerMasterActor, which the Block Managers would continue to use. If we could somehow
    enforce that, then just using the FQCN would be a good idea.
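
One hypothetical way to combine FQCN loading with a forced shared actor would be for the base type itself to own the actor reference and the send path. Everything below (`MasterEndpoint`, `send`) is an invented stand-in for the Akka BlockManagerMasterActor reference, not Spark's API:

```scala
// Stand-in for the shared Akka actor reference; echoes messages back
// in place of a real ask.
class MasterEndpoint {
  def ask(msg: Any): Any = msg
}

// The abstract base owns the endpoint and makes the send path final, so
// every implementation is constructed with, and routed through, the same
// actor regardless of how it was loaded.
abstract class BlockManagerMaster(protected val endpoint: MasterEndpoint) {
  final def send(msg: Any): Any = endpoint.ask(msg)
}

class StandaloneBlockManagerMaster(endpoint: MasterEndpoint)
  extends BlockManagerMaster(endpoint)
```

Reflection-loaded implementations would then be required to accept the endpoint as a constructor argument, which is checkable at load time.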



[GitHub] spark pull request: SPARK-2582. Make Block Manager Master pluggabl...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/1506#issuecomment-49573405
  
    Can one of the admins verify this patch?

