Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/04/17 20:59:58 UTC

[jira] [Assigned] (SPARK-6986) Make SerializationStream/DeserializationStream understand key/value semantic

     [ https://issues.apache.org/jira/browse/SPARK-6986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai reassigned SPARK-6986:
-------------------------------

    Assignee: Yin Huai

> Make SerializationStream/DeserializationStream understand key/value semantic
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-6986
>                 URL: https://issues.apache.org/jira/browse/SPARK-6986
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core, SQL
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Blocker
>
> Our existing Java and Kryo serializers are both general-purpose serializers. They treat every object individually and encode each object's type into the underlying stream. In Spark, it is common to serialize a collection of records that all have the same type (for example, the records of a DataFrame). In these cases, we do not need to write out the type of every record, and we can take advantage of the type information to build a specialized serializer. To do so, it seems we need to extend the SerializationStream/DeserializationStream interface so that a SerializationStream/DeserializationStream has more information about the objects passed in (for example, whether an object is a key/value pair, a key, or a value).
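To illustrate the proposal above, here is a minimal Java sketch of what such a key/value-aware stream interface could look like. All names here (KVSerializationStream, StringIntStream) are illustrative assumptions for this sketch, not Spark's actual API; the point is only that once the stream knows it is receiving keys and values of fixed types, it can omit the per-record type tag that a general-purpose serializer must write.

```java
import java.io.*;

// Hypothetical extension of a serialization stream: besides a generic
// writeObject, the stream exposes writeKey/writeValue so an implementation
// specialized for a known record type can skip per-record type information.
interface KVSerializationStream {
    void writeObject(Object obj) throws IOException;  // general path: type tag + payload
    void writeKey(Object key) throws IOException;     // key of a key/value pair
    void writeValue(Object value) throws IOException; // value of a key/value pair
}

// Toy specialized stream for records with String keys and Integer values.
// Because the key/value types are fixed, no type tag is written per record.
class StringIntStream implements KVSerializationStream {
    private final DataOutputStream out;

    StringIntStream(OutputStream os) { this.out = new DataOutputStream(os); }

    public void writeObject(Object obj) throws IOException {
        // Fallback for untyped objects: a one-byte tag plus a string payload
        // (placeholder standing in for a general-purpose serializer).
        out.writeByte(0);
        out.writeUTF(String.valueOf(obj));
    }

    public void writeKey(Object key) throws IOException {
        out.writeUTF((String) key);      // raw key bytes, no type tag
    }

    public void writeValue(Object value) throws IOException {
        out.writeInt((Integer) value);   // raw 4-byte value, no type tag
    }
}
```

A shuffle writer that knows it is writing (String, Integer) pairs would call writeKey/writeValue alternately, and the matching deserialization stream would read back fixed-format records without consulting any embedded type information.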



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org