Posted to issues@spark.apache.org by "David Lemieux (JIRA)" <ji...@apache.org> on 2014/05/23 21:30:02 UTC

[jira] [Created] (SPARK-1916) SparkFlumeEvent with body bigger than 1020 bytes are not read properly

David Lemieux created SPARK-1916:
------------------------------------

             Summary: SparkFlumeEvent with body bigger than 1020 bytes are not read properly
                 Key: SPARK-1916
                 URL: https://issues.apache.org/jira/browse/SPARK-1916
             Project: Spark
          Issue Type: Bug
          Components: Streaming
    Affects Versions: 0.9.0
            Reporter: David Lemieux


The readExternal implementation on SparkFlumeEvent reads only the first 1020 bytes of the event body when streaming data from Flume.

This means that any event sent to Spark via Flume is processed correctly if the body is small, but fails if the body is bigger than 1020 bytes.
Considering that the default max size for a Flume Avro event is 32K, the implementation should be updated to read the full body.

The following thread is related: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-using-Flume-body-size-limitation-tt6127.html
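The truncation described above is the classic java.io pitfall: a single InputStream.read(buf) call may return fewer bytes than requested (ObjectInputStream in particular hands back at most one internal block-data chunk, roughly 1 KB, per call), whereas DataInput.readFully(buf) loops until the buffer is completely filled. The following is a minimal standalone sketch of that behavior, not the actual Spark code; the class and method names are illustrative only:

```java
import java.io.*;

public class ReadVsReadFully {

    // Serialize `size` raw bytes the way a writeExternal-style method
    // would with out.write(body).
    static byte[] serialize(int size) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.write(new byte[size]);
        }
        return bos.toByteArray();
    }

    // Buggy pattern: a single read() may return far fewer than
    // buf.length bytes; the return value is the count actually read.
    static int readOnce(byte[] serialized, int size) throws IOException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(serialized))) {
            byte[] buf = new byte[size];
            return in.read(buf);
        }
    }

    // Fixed pattern: readFully() blocks until the entire buffer is
    // filled (or throws EOFException if the stream ends first).
    static int readAll(byte[] serialized, int size) throws IOException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(serialized))) {
            byte[] buf = new byte[size];
            in.readFully(buf);
            return buf.length;
        }
    }

    public static void main(String[] args) throws IOException {
        int size = 32 * 1024; // default max Flume Avro event body
        byte[] data = serialize(size);
        System.out.println("single read() returned: " + readOnce(data, size));
        System.out.println("readFully()   filled:   " + readAll(data, size));
    }
}
```

Replacing the single read(buf) with readFully(buf) in SparkFlumeEvent.readExternal would therefore fix the truncation for bodies up to the Flume maximum.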



--
This message was sent by Atlassian JIRA
(v6.2#6252)