Posted to dev@flink.apache.org by "Petr Novotnik (JIRA)" <ji...@apache.org> on 2017/04/10 06:15:41 UTC
[jira] [Created] (FLINK-6285) resolve hadoop-compatibility confusion
Petr Novotnik created FLINK-6285:
------------------------------------
Summary: resolve hadoop-compatibility confusion
Key: FLINK-6285
URL: https://issues.apache.org/jira/browse/FLINK-6285
Project: Flink
Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Petr Novotnik
As of Flink 1.2.0, the binary distribution no longer includes the classes from the `hadoop-compatibility` dependency.
```
flink-1.2.0> for i in lib/*.jar; jar tf $i | grep WritableTypeInfo; end
flink-1.2.0 [1]> # no output; the [1] in the fish prompt is grep's non-zero exit status
```
Therefore, it is necessary to copy the compatibility jar into Flink's installation `lib/` directory (or a sub-directory of it) if one wishes to use Hadoop input formats. Merely packaging the compatibility jar into an application's "fat jar" does not suffice, because the code in [TypeExtractor#createHadoopWritableTypeInfo](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java#L1988) relies on being able to see the compatibility classes through the classloader that loaded `TypeExtractor` itself. On YARN this does not appear to be the case (e.g. when running the application through `flink run -m yarn-cluster ...`).
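To make the failure mode concrete, here is a minimal sketch (not Flink's actual implementation; only the `WritableTypeInfo` class name, matching the grep above, is taken from the issue, the rest is a stand-in): resolving the class through `TypeExtractor`'s own classloader cannot succeed when the compatibility classes exist only in the user-code classloader that loaded the fat jar.

```
// Illustrative sketch only: reproduce the lookup through the classloader
// that loaded TypeExtractor itself, as the linked code does.
import org.apache.flink.api.java.typeutils.TypeExtractor;

public class WritableLookupSketch {
    public static void main(String[] args) {
        try {
            Class<?> clazz = Class.forName(
                    "org.apache.flink.api.java.typeutils.WritableTypeInfo",
                    false,
                    TypeExtractor.class.getClassLoader());
            System.out.println("found: " + clazz);
        } catch (ClassNotFoundException e) {
            // On YARN, flink-core (and thus TypeExtractor) lives in the
            // system classloader, while the fat jar's classes sit in a child
            // user-code classloader, so this lookup fails even though the
            // compatibility classes are on the application's classpath.
            System.out.println("hadoop-compatibility not visible: " + e);
        }
    }
}
```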
* Ideally, we'd fix the class loading issue so that Flink's installation does not need to be altered to accommodate the needs of a particular application (a sketch of one possible shape follows this list).
* Alternatively, we could ship the hadoop-compatibility jar as part of the binary distribution and provide corresponding instructions; [1] and [2] seem like good places for them.
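For the first option, one purely hypothetical shape of a fix (an assumption on my part, not a committed design) would be to consult the context classloader, which on YARN is usually the user-code classloader, before falling back to `TypeExtractor`'s own loader:

```
// Hypothetical sketch of the first option, not an actual Flink patch:
// prefer the context (user-code) classloader, then fall back to the
// classloader that defined TypeExtractor.
import org.apache.flink.api.java.typeutils.TypeExtractor;

public class WritableTypeInfoResolver {
    static final String NAME =
            "org.apache.flink.api.java.typeutils.WritableTypeInfo";

    static Class<?> resolve() throws ClassNotFoundException {
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        if (context != null) {
            try {
                // the context classloader can usually see the fat jar
                return Class.forName(NAME, false, context);
            } catch (ClassNotFoundException ignored) {
                // fall through to the defining classloader
            }
        }
        return Class.forName(NAME, false, TypeExtractor.class.getClassLoader());
    }
}
```

Whether the context classloader is reliably set to the user-code classloader in all deployment modes is an open question; the sketch is only meant to make the bullet point concrete.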
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/hadoop_compatibility.html
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/migration.html
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)