Posted to commits@beam.apache.org by "Tim Robertson (JIRA)" <ji...@apache.org> on 2018/05/10 19:09:00 UTC

[jira] [Comment Edited] (BEAM-4260) Document usage for hcatalog 1.1

    [ https://issues.apache.org/jira/browse/BEAM-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468635#comment-16468635 ] 

Tim Robertson edited comment on BEAM-4260 at 5/10/18 7:08 PM:
--------------------------------------------------------------

I've found what I believe is a reasonable workaround that allows {{HCatalogIO}} to read when run in an environment providing Hive 1.1 on the system classpath (e.g. Cloudera 5.12.2 running Beam on Spark).

When building a Beam app, you can do the following:

*1. Include the following Hive 1.2 jars in an über jar*
{code}
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-hcatalog</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hive.hcatalog</groupId>
    <artifactId>hive-hcatalog-core</artifactId>
    <version>1.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-metastore</artifactId>
    <version>1.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>1.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-common</artifactId>
    <version>1.2</version>
</dependency>
{code}

*2. Relocate (only) the following Hive packages*
{code}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>${maven-shade-plugin.version}</version>
    <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>shaded</shadedClassifierName>
                <relocations>
                    <!-- Important: Do not relocate org.apache.hadoop.hive -->
                    <relocation>
                        <pattern>org.apache.hadoop.hive.conf</pattern>
                        <shadedPattern>h12.org.apache.hadoop.hive.conf</shadedPattern>
                    </relocation>
                    <relocation>
                        <pattern>org.apache.hadoop.hive.ql</pattern>
                        <shadedPattern>h12.org.apache.hadoop.hive.ql</shadedPattern>
                    </relocation>
                    <relocation>
                        <pattern>org.apache.hadoop.hive.metastore</pattern>
                        <shadedPattern>h12.org.apache.hadoop.hive.metastore</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>
{code}
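One caveat worth noting (not part of my tested setup, so treat it as optional): if any of the bundled dependencies ship signed jars, the usual shade-plugin signature-file exclusions may also be needed to avoid a {{SecurityException}} at runtime. The filter block would go inside the plugin {{<configuration>}} above:

{code}
<filters>
    <filter>
        <!-- Strip signature files from all bundled artifacts so the
             shaded über jar does not fail signature verification -->
        <artifact>*:*</artifact>
        <excludes>
            <exclude>META-INF/*.SF</exclude>
            <exclude>META-INF/*.DSA</exclude>
            <exclude>META-INF/*.RSA</exclude>
        </excludes>
    </filter>
</filters>
{code}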

This has only been tested reading {{SequenceFile}}- and {{ORC}}-backed tables with Beam 2.4.0 on Spark 2.3 on YARN in a Cloudera CDH 5.12.2 managed environment (i.e. from a CDH gateway with Cloudera providing Hive). Note that I tried the same approach with Hive 2.x versions and it failed with many {{NoSuchMethodError}}s. Version 1.2 is close enough to 1.1 that it appears to bring in only the methods Beam needs.
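For context, the read path exercised in the test above is just the standard {{HCatalogIO}} usage; a minimal sketch follows (the metastore URI, database and table names are placeholders, and {{pipeline}} is an already-created {{Pipeline}}):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.beam.sdk.io.hcatalog.HCatalogIO;
import org.apache.beam.sdk.values.PCollection;
import org.apache.hive.hcatalog.data.HCatRecord;

// Point the HCatalog source at the (1.1) metastore provided by the cluster
Map<String, String> configProperties = new HashMap<>();
configProperties.put("hive.metastore.uris", "thrift://metastore-host:9083");

PCollection<HCatRecord> records = pipeline.apply(
    HCatalogIO.read()
        .withConfigProperties(configProperties)
        .withDatabase("default")   // placeholder database
        .withTable("my_table"));   // placeholder table
{code}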

I believe this is worth documenting somewhere for others. Is this comment enough, or should it go in a {{README.md}} or similar? Or can others recommend a better approach?


was (Author: timrobertson100):
I'm afraid I haven't been able to use {{HCatalogIO}} successfully for reading in Cloudera 5 yet.  My approach has been to build an app against 2.3, shading/relocating the Hive/HCatalog packages, and then talk over Thrift to a 1.1 server (running on Spark 2.3.0-cloudera2 and Beam 2.4.0).  I'm less optimistic now that it can be done, and I suspect that even if some tests pass it could prove fragile, with e.g. {{ORCFile}} support having been repackaged.  I now presume this would need code changes in {{HCatalogIO}} to support older versions through shimming, or possibly even multiple versions of {{HCatalogIO}}.

Before proceeding with further investigation I'd welcome other thoughts on the merits of supporting 1.x at all - I could be alone in being affected.

I suggest we keep an eye on the forthcoming CDH release and ensure Beam is indeed compatible with it. CDH 6 is [currently targeting 2.1.1|https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/hive/hive/2.1.1-cdh6.x-SNAPSHOT/].

> Document usage for hcatalog 1.1
> -------------------------------
>
>                 Key: BEAM-4260
>                 URL: https://issues.apache.org/jira/browse/BEAM-4260
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-hcatalog
>    Affects Versions: 2.4.0
>            Reporter: Tim Robertson
>            Assignee: Tim Robertson
>            Priority: Minor
>
> The {{HCatalogIO}} does not work with environments providing Hive Server 1.x, which is in widespread use - as an example, the latest Cloudera (5.14.2) provides 1.1.x
>  
> The {{HCatalogIO}} marks its Hive dependencies as provided, so I believe the intention was to be open to multiple versions.
>  
> The issues come from the following:  
>  - use of {{HCatUtil.getHiveMetastoreClient(hiveConf)}} while previous versions used the [now deprecated|https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java#L586] {{getHiveClient(HiveConf hiveConf)}}  
>  - Changes to the signature of {{RetryingMetaStoreClient.getProxy(...)}}
>  
> Given this doesn't work in a major Hadoop distro, and will not until the next CDH release later in 2018 (i.e. widespread adoption only expected in 2019), I think it would be worthwhile providing a fix/workaround.
> I _think_ building for 2.3 and relocating in your own app might be a workaround, although I'm still testing it.  If that is successful I'd propose adding it to the project README or a separate markdown file linked from the README.
> Does that sound like a reasonable approach?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)