Posted to issues@spark.apache.org by "Davies Liu (JIRA)" <ji...@apache.org> on 2016/04/01 21:18:25 UTC

[jira] [Commented] (SPARK-13352) BlockFetch does not scale well on large block

    [ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222199#comment-15222199 ] 

Davies Liu commented on SPARK-13352:
------------------------------------

After more investigation, it turns out that the block fetcher in 1.6+ is about two times slower than the one in 1.5: it took 44 seconds to fetch a 289 MB block, versus 22 seconds in 1.5.

In 1.5:
{code}
16/04/01 11:58:33 DEBUG BlockManager: Getting block taskresult_5 from memory
16/04/01 11:58:34 DEBUG TransportClient: Sending fetch chunk request 0 to localhost/127.0.0.1:54202
16/04/01 11:58:35 DEBUG Cleaner0: java.nio.ByteBuffer.cleaner(): available
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: removing block taskresult_5
16/04/01 11:58:56 DEBUG BlockManager: Removing block taskresult_5
16/04/01 11:58:56 DEBUG MemoryStore: Block taskresult_5 of size 289281861 dropped from memory (free 2222933912)
16/04/01 11:58:56 INFO BlockManagerInfo: Removed taskresult_5 on localhost:54202 in memory (size: 275.9 MB, free: 2.1 GB)
16/04/01 11:58:56 DEBUG BlockManagerMaster: Updated info of block taskresult_5
16/04/01 11:58:56 DEBUG BlockManager: Told master about block taskresult_5
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: Done removing block taskresult_5, response is true
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: Sent response: true to AkkaRpcEndpointRef(Actor[akka://sparkDriver/temp/$I])
{code}

In 1.6 or master:
{code}
16/04/01 11:55:47 DEBUG BlockManager: Getting remote block taskresult_5 as bytes
16/04/01 11:55:47 DEBUG BlockManager: Getting remote block taskresult_5 from BlockManagerId(driver, localhost, 54181)
16/04/01 11:55:47 DEBUG TransportClientFactory: Creating new connection to localhost/127.0.0.1:54181
16/04/01 11:55:47 DEBUG ResourceLeakDetector: -Dio.netty.leakDetectionLevel: simple
16/04/01 11:55:47 DEBUG TransportClientFactory: Connection to localhost/127.0.0.1:54181 successful, running bootstraps...
16/04/01 11:55:47 DEBUG TransportClientFactory: Successfully created connection to localhost/127.0.0.1:54181 after 31 ms (0 ms spent in bootstraps)
16/04/01 11:55:47 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 262144
16/04/01 11:55:47 DEBUG BlockManager: Level for block taskresult_5 is StorageLevel(true, true, false, false, 1)
16/04/01 11:55:47 DEBUG BlockManager: Getting block taskresult_5 from memory
16/04/01 11:55:48 DEBUG TransportClient: Sending fetch chunk request 0 to localhost/127.0.0.1:54181
16/04/01 11:55:58 DEBUG Cleaner0: java.nio.ByteBuffer.cleaner(): available
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: removing block taskresult_5
16/04/01 11:56:31 DEBUG BlockManager: Removing block taskresult_5
16/04/01 11:56:31 DEBUG MemoryStore: Block taskresult_5 of size 289281861 dropped from memory (free 2851511312)
16/04/01 11:56:31 INFO BlockManagerInfo: Removed taskresult_5 on localhost:54181 in memory (size: 275.9 MB, free: 2.7 GB)
16/04/01 11:56:31 DEBUG BlockManagerMaster: Updated info of block taskresult_5
16/04/01 11:56:31 DEBUG BlockManager: Told master about block taskresult_5
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: Done removing block taskresult_5, response is true
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: Sent response: true to 192.168.0.143:54179
{code}
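
For reference, a fetch like the one in the logs can be timed with a micro-benchmark along these lines (just a sketch, assuming it runs inside a Spark test suite with a local SparkContext sc, the same setup as the test in the description; the block id and ~289 MB size are only picked to match the logs above):

{code}
import java.nio.ByteBuffer

import org.apache.spark.storage.{StorageLevel, TaskResultBlockId}

// Sketch only: assumes a local SparkContext `sc` inside a Spark test suite,
// so that the private[spark] sc.env is accessible.
val bm = sc.env.blockManager
val blockId = TaskResultBlockId(5)

// Store a block of roughly the same size as taskresult_5 above (~289 MB).
bm.putBytes(blockId, ByteBuffer.allocate(289 << 20), StorageLevel.MEMORY_AND_DISK_SER)

// Time the fetch: getRemoteBytes goes through the block transfer service,
// which is why the logs show TransportClient activity even on one machine.
val start = System.nanoTime()
val result = bm.getRemoteBytes(blockId)
val seconds = (System.nanoTime() - start) / 1e9
println(s"fetched ${result.map(_.limit()).getOrElse(0)} bytes in $seconds seconds")
{code}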

> BlockFetch does not scale well on large block
> ---------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>
> BlockManager.getRemoteBytes() performs poorly on large blocks
> {code}
>   test("block manager") {
>     val N = 500 << 20  // 500 MB
>     val bm = sc.env.blockManager
>     val blockId = TaskResultBlockId(0)
>     val buffer = ByteBuffer.allocate(N)
>     buffer.limit(N)
>     bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>     // getRemoteBytes fetches through the block transfer service,
>     // even though the block was just stored locally.
>     val result = bm.getRemoteBytes(blockId)
>     assert(result.isDefined)
>     assert(result.get.limit() === (N))
>   }
> {code}
> Here are the runtimes for different block sizes:
> {code}
> 50M      3 seconds
> 100M     7 seconds
> 250M    33 seconds
> 500M     2 min
> {code}
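
Side note on the table above: converting those runtimes into throughput makes the non-linear behaviour easier to see (back-of-the-envelope only, derived from the reported numbers):

{code}
// Approximate throughput implied by the reported runtimes.
val reported = Seq((50, 3.0), (100, 7.0), (250, 33.0), (500, 120.0)) // (MB, seconds)
reported.foreach { case (mb, secs) =>
  println(f"$mb%4d MB  ->  ${mb / secs}%5.1f MB/s")
}
// Roughly 16.7, 14.3, 7.6 and 4.2 MB/s: about a 4x drop from 50 MB to 500 MB.
{code}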


