Posted to issues@flink.apache.org by "Stephan Ewen (JIRA)" <ji...@apache.org> on 2014/12/14 23:28:13 UTC

[jira] [Resolved] (FLINK-1305) Flink's hadoop compatibility layer cannot handle NullWritables

     [ https://issues.apache.org/jira/browse/FLINK-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stephan Ewen resolved FLINK-1305.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 0.8-incubating

Fixed via 13968cd4de446b4f565a094554380eb8559b6cf9

> Flink's hadoop compatibility layer cannot handle NullWritables
> --------------------------------------------------------------
>
>                 Key: FLINK-1305
>                 URL: https://issues.apache.org/jira/browse/FLINK-1305
>             Project: Flink
>          Issue Type: Bug
>          Components: Hadoop Compatibility
>    Affects Versions: 0.7.0-incubating
>            Reporter: Sebastian Schelter
>            Assignee: Robert Metzger
>            Priority: Critical
>             Fix For: 0.8-incubating
>
>
> NullWritable is a special object that is commonly used in Hadoop applications. NullWritable does not provide a public constructor, only a singleton factory method. Therefore Flink fails when users try to read NullWritables from Hadoop sequence files.
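
The pattern described above can be sketched with a self-contained mock (the real class is org.apache.hadoop.io.NullWritable; MockNullWritable and NullWritableDemo below are hypothetical names for illustration, not Flink's actual fix): a private constructor plus a static get() accessor means any generic deserializer that instantiates Writables reflectively via a no-arg constructor will fail.

```java
// Mimics Hadoop's NullWritable: private constructor, singleton accessor only.
// (Mock for illustration; the real class is org.apache.hadoop.io.NullWritable.)
class MockNullWritable {
    private static final MockNullWritable INSTANCE = new MockNullWritable();

    private MockNullWritable() {}              // no public constructor

    public static MockNullWritable get() {     // singleton factory method
        return INSTANCE;
    }
}

public class NullWritableDemo {
    public static void main(String[] args) {
        // The supported way: use the singleton accessor.
        System.out.println("singleton ok: " + (MockNullWritable.get() != null));

        // What a generic deserializer calling the no-arg constructor
        // reflectively would hit: the constructor is private, so
        // instantiation throws instead of producing an instance.
        try {
            MockNullWritable.class.getDeclaredConstructor().newInstance();
            System.out.println("reflective instantiation: succeeded");
        } catch (ReflectiveOperationException e) {
            System.out.println("reflective instantiation: failed ("
                    + e.getClass().getSimpleName() + ")");
        }
    }
}
```

A fix along these lines must special-case such types and obtain the instance via the factory method (or an equivalent singleton-aware path) rather than a reflective constructor call.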



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)