Posted to issues@hbase.apache.org by "Sean Busbey (JIRA)" <ji...@apache.org> on 2018/04/25 22:16:00 UTC

[jira] [Comment Edited] (HBASE-20334) add a test that expressly uses both our shaded client and the one from hadoop 3

    [ https://issues.apache.org/jira/browse/HBASE-20334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16453175#comment-16453175 ] 

Sean Busbey edited comment on HBASE-20334 at 4/25/18 10:15 PM:
---------------------------------------------------------------

here's my plan:

* start single node hadoop cluster via CLI MiniCluster ([docs for 2.7|http://hadoop.apache.org/docs/r2.7.6/hadoop-project-dist/hadoop-common/CLIMiniCluster.html], [docs for 3.0.2|http://hadoop.apache.org/docs/r3.0.2/hadoop-project-dist/hadoop-common/CLIMiniCluster.html])
* start HBase [Standalone-over-HDFS mode|http://hbase.apache.org/book.html#standalone.over.hdfs]
* load example TSV file to HDFS via Hadoop CLI
* import from tsv using hbase-shaded-mapreduce
* use utility program to scan result of import and compare it to data in HDFS, using shaded hbase client and hadoop dependencies (shaded hadoop client in the case of Hadoop 3)
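The steps above might look roughly like this from a shell. Jar paths, versions, the table name, and the column mapping are all assumptions for illustration, not settled choices:

```shell
# 1. Start a single-node Hadoop cluster via the CLI MiniCluster
#    (jar path/version is an assumption; see the CLI MiniCluster docs linked above)
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.2-tests.jar \
    minicluster -format

# 2. Start HBase in standalone-over-HDFS mode
#    (assumes hbase-site.xml already points hbase.rootdir at the minicluster's HDFS)
bin/start-hbase.sh

# 3. Load an example TSV file into HDFS via the Hadoop CLI
bin/hadoop fs -mkdir -p /example
bin/hadoop fs -put example.tsv /example/

# 4. Import from TSV using the shaded mapreduce artifact
#    (assumes the shaded jar's Driver resolves the "importtsv" program name,
#    as the unshaded hbase-mapreduce jar does; table/columns are made up)
echo "create 'test_table', 'f'" | bin/hbase shell -n
bin/hadoop jar hbase-shaded-mapreduce-2.0.0.jar importtsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,f:c1 test_table /example
```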

two questions I'd like feedback on (though I'll just pick something if needed):

a) where does the utility program live? is it in our code repo? do I generate it in the test? hbase-downstreamer?

b) where does this test go? I could make it a yetus plugin. that would let us choose running it in precommit in addition to nightly if we wanted. Or I could just add it as a non-yetus step to our nightly builds, ala the "check source artifact" one.
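Wherever the utility program ends up living, the classpath split from the last step of the plan could be sketched like this. The tool class and jar names are hypothetical; `hadoop-client-api`/`hadoop-client-runtime` are the Hadoop 3 shaded client artifacts, while Hadoop 2 has no shaded equivalent:

```shell
# Hypothetical verification tool compiled against only shaded artifacts;
# it scans the imported table and diffs it against the TSV still in HDFS.

# Hadoop 3: HBase shaded client + Hadoop's shaded client jars
java -cp hbase-shaded-client-2.0.0.jar:hadoop-client-api-3.0.2.jar:hadoop-client-runtime-3.0.2.jar:compare-tool.jar \
    org.example.CompareImportToTsv test_table /example/example.tsv

# Hadoop 2: HBase shaded client + the regular (unshaded) hadoop classpath
java -cp "hbase-shaded-client-2.0.0.jar:$(bin/hadoop classpath):compare-tool.jar" \
    org.example.CompareImportToTsv test_table /example/example.tsv
```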



> add a test that expressly uses both our shaded client and the one from hadoop 3
> -------------------------------------------------------------------------------
>
>                 Key: HBASE-20334
>                 URL: https://issues.apache.org/jira/browse/HBASE-20334
>             Project: HBase
>          Issue Type: Sub-task
>          Components: hadoop3, shading
>    Affects Versions: 2.0.0
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>            Priority: Major
>
> Since we're making a shaded client that bleeds out of our namespace and into Hadoop's, we should ensure that we can show our clients coexisting. Even if it's just an IT that successfully talks to both us and HDFS via our respective shaded clients, that'd be a big help in keeping us proactive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)