Posted to issues@flink.apache.org by "Yiannis Gkoufas (JIRA)" <ji...@apache.org> on 2015/02/03 01:46:35 UTC
[jira] [Commented] (FLINK-1303) HadoopInputFormat does not work with Scala API
[ https://issues.apache.org/jira/browse/FLINK-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302582#comment-14302582 ]
Yiannis Gkoufas commented on FLINK-1303:
----------------------------------------
Since I am a total beginner in Scala, may I ask how the example in the docs would be rewritten in Scala?
// Create the Flink wrapper around the Hadoop InputFormat,
// specifying the key and value types and the Hadoop job.
HadoopInputFormat<LongWritable, Text> hadoopIF =
    new HadoopInputFormat<LongWritable, Text>(
        new TextInputFormat(), LongWritable.class, Text.class, job
    );
TextInputFormat.addInputPath(job, new Path(inputPath));
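For what it's worth, a rough sketch of what the Java snippet above might look like in Scala, assuming the Java HadoopInputFormat wrapper is used directly from the Scala API (package path and setup are my assumptions from the 0.9-era Hadoop compatibility module; note this very issue reports that the wrapper currently fails at runtime from the Scala API because of Tuple2 type extraction):

```scala
// Hypothetical Scala translation of the Java example above -- not a
// verified fix, since FLINK-1303 reports this path is broken at runtime.
import org.apache.flink.hadoopcompatibility.mapreduce.HadoopInputFormat
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

val job = Job.getInstance()

// Java's Foo.class becomes classOf[Foo] in Scala; generics use [K, V].
val hadoopIF = new HadoopInputFormat[LongWritable, Text](
  new TextInputFormat, classOf[LongWritable], classOf[Text], job
)

// inputPath is assumed to be defined elsewhere, as in the Java example.
TextInputFormat.addInputPath(job, new Path(inputPath))
```

The main mechanical changes are classOf[...] in place of .class literals and square brackets for the type parameters; the wrapper and Hadoop calls are otherwise the same.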
> HadoopInputFormat does not work with Scala API
> ----------------------------------------------
>
> Key: FLINK-1303
> URL: https://issues.apache.org/jira/browse/FLINK-1303
> Project: Flink
> Issue Type: Sub-task
> Components: Scala API
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Fix For: 0.9
>
>
> It fails because the HadoopInputFormat uses the Flink Tuple2 type. Because of this, type extraction fails at runtime.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)