Posted to issues@beam.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/11/13 01:15:01 UTC

[jira] [Work logged] (BEAM-5775) Make the Spark runner not serialize data unless Spark is spilling to disk

     [ https://issues.apache.org/jira/browse/BEAM-5775?focusedWorklogId=165257&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-165257 ]

ASF GitHub Bot logged work on BEAM-5775:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Nov/18 01:14
            Start Date: 13/Nov/18 01:14
    Worklog Time Spent: 10m 
      Work Description: chamikaramj commented on issue #6714: [BEAM-5775] Spark: implement a custom class to lazily encode values for persistence.
URL: https://github.com/apache/beam/pull/6714#issuecomment-438091517
 
 
   R: @iemejia can you please review or forward to someone who is familiar with this code.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 165257)
    Time Spent: 20m  (was: 10m)

> Make the Spark runner not serialize data unless Spark is spilling to disk
> -------------------------------------------------------------------------
>
>                 Key: BEAM-5775
>                 URL: https://issues.apache.org/jira/browse/BEAM-5775
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>            Reporter: Mike Kaplinskiy
>            Assignee: Amit Sela
>            Priority: Minor
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, for storage level MEMORY_ONLY, Beam does not coder-ify the data. This lets Spark keep the data in memory, avoiding the serialization round trip. Unfortunately, the logic is fairly coarse: as soon as you switch to MEMORY_AND_DISK, Beam coder-ifies the data even though Spark might have chosen to keep the data in memory, incurring the serialization overhead.
>  
> Ideally, Beam would serialize the data lazily, only as Spark chooses to spill to disk. This would be a change in behavior when using Beam, but luckily Spark has a solution for folks who want data serialized in memory: MEMORY_AND_DISK_SER will keep the data serialized.
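
The PR referenced above (apache/beam#6714) describes "a custom class to lazily encode values for persistence". The Java sketch below illustrates that general idea only; the class and method names are hypothetical and not taken from the PR. The live object is kept as long as Spark holds the partition in memory, and the Beam Coder runs only when Java serialization fires, which is what happens when Spark spills the block to disk, replicates it, or ships it over the network.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import org.apache.beam.sdk.coders.Coder;

    /**
     * Hypothetical sketch: a holder that defers Beam encoding until Java
     * serialization is actually triggered. While the block stays in memory
     * the live object is returned directly and no coder runs.
     */
    class LazilyEncodedValue<T> implements Serializable {
      private transient T value;     // live object; skipped by Java serialization
      private final Coder<T> coder;  // Beam coders are themselves Serializable
      private byte[] bytes;          // populated only when writeObject runs

      LazilyEncodedValue(T value, Coder<T> coder) {
        this.value = value;
        this.coder = coder;
      }

      /** Returns the value, decoding it first if this instance was deserialized. */
      T get() throws IOException {
        if (value == null && bytes != null) {
          value = coder.decode(new ByteArrayInputStream(bytes));
        }
        return value;
      }

      /** Called by Java serialization; this is where the encoding cost is paid. */
      private void writeObject(ObjectOutputStream out) throws IOException {
        if (bytes == null) {
          ByteArrayOutputStream bos = new ByteArrayOutputStream();
          coder.encode(value, bos);
          bytes = bos.toByteArray();
        }
        out.defaultWriteObject();    // writes 'coder' and 'bytes', skips 'value'
      }
    }

With a wrapper like this, an RDD cached at MEMORY_AND_DISK would hold wrapper instances rather than pre-encoded bytes, so partitions that never leave memory never pay the coder round trip; users who still want serialized in-memory storage can keep using MEMORY_AND_DISK_SER.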



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)