Posted to issues@flink.apache.org by "Stephan Ewen (JIRA)" <ji...@apache.org> on 2016/08/05 12:26:20 UTC
[jira] [Commented] (FLINK-4316) Make flink-core independent of Hadoop
[ https://issues.apache.org/jira/browse/FLINK-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409384#comment-15409384 ]
Stephan Ewen commented on FLINK-4316:
-------------------------------------
An alternative would be to copy the Hadoop {{Writable}} type (a simple interface) into {{flink-core}}. We did that before; it works well and is a much easier solution (no reflection work), but it comes with two issues:
- We need to carry that class in the Flink codebase
- There will be multiple versions of {{Writable}}, and in theory these could lead to class cast exceptions.
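For reference, the {{Writable}} contract that would be copied is tiny: two methods mirroring {{org.apache.hadoop.io.Writable}}. A minimal sketch of the interface and a round-trip through it (class and field names here are illustrative, not actual Flink code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableSketch {

    /** Two-method serialization contract, mirroring org.apache.hadoop.io.Writable. */
    public interface Writable {
        void write(DataOutput out) throws IOException;
        void readFields(DataInput in) throws IOException;
    }

    /** Example implementation holding a single int (illustrative only). */
    public static class IntValue implements Writable {
        public int value;

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(value);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            value = in.readInt();
        }
    }

    /** Serializes an int through IntValue and deserializes it again. */
    public static int roundTrip(int x) {
        try {
            IntValue original = new IntValue();
            original.value = x;

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            original.write(new DataOutputStream(bytes));

            IntValue copy = new IntValue();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray())));
            return copy.value;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("round trip: " + roundTrip(42));
    }
}
```

Because the interface is this small, carrying a copy in {{flink-core}} is cheap; the cost is purely the duplicate-class issue described above.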
> Make flink-core independent of Hadoop
> -------------------------------------
>
> Key: FLINK-4316
> URL: https://issues.apache.org/jira/browse/FLINK-4316
> Project: Flink
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.1.0
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> We want to gradually reduce the hard and heavy mandatory dependencies on Hadoop. Hadoop will still be part of (most) Flink downloads, but the API projects should not have a hard dependency on Hadoop.
> I suggest starting with {{flink-core}}, because it only depends on Hadoop for the {{Writable}} type, to support seamless handling of Hadoop types.
> I propose moving all {{WritableTypeInfo}}-related classes to the {{flink-hadoop-compatibility}} project and accessing them via reflection in the {{TypeExtractor}}.
> That way, {{Writable}} types will be supported out of the box if users have the {{flink-hadoop-compatibility}} project on the classpath.
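The reflective lookup described above would follow the usual check-the-classpath pattern. A minimal sketch, assuming the {{WritableTypeInfo}} class name shown below (the actual fully-qualified name and lookup method in Flink may differ):

```java
public class ReflectiveTypeInfoLookup {

    // Hypothetical fully-qualified name of the class that would move to
    // flink-hadoop-compatibility; used here only for illustration.
    private static final String WRITABLE_TYPE_INFO =
            "org.apache.flink.api.java.typeutils.WritableTypeInfo";

    /** Returns true if the named class can be loaded from the classpath. */
    public static boolean isOnClasspath(String className) {
        try {
            // Load without initializing, so merely probing has no side effects.
            Class.forName(className, false,
                    ReflectiveTypeInfoLookup.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (isOnClasspath(WRITABLE_TYPE_INFO)) {
            System.out.println("flink-hadoop-compatibility present: Writable types supported");
        } else {
            System.out.println("no Hadoop compatibility on classpath: skipping Writable support");
        }
    }
}
```

The {{TypeExtractor}} would take the first branch and instantiate the type info reflectively when the compatibility jar is present, and fall through to its normal type analysis otherwise.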
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)