Posted to issues@hbase.apache.org by "Zheng Hu (JIRA)" <ji...@apache.org> on 2018/11/07 03:31:00 UTC

[jira] [Commented] (HBASE-21445) CopyTable by bulkload will write hfile into yarn's HDFS

    [ https://issues.apache.org/jira/browse/HBASE-21445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677620#comment-16677620 ] 

Zheng Hu commented on HBASE-21445:
----------------------------------

The patch is quite simple: just use HBase's filesystem instead of the default filesystem when resolving the bulk load's relative path. When running CopyTable we merge the yarn conf files into the hbase conf directory, so hbase's defaultFs may be overridden by yarn's defaultFs.
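
For illustration, here is a minimal sketch of the idea (not the attached HBASE-21445.v1.patch): qualify the bulk-load output path against the filesystem backing hbase.rootdir rather than the job's default filesystem, so a yarn-side fs.defaultFS cannot redirect it. The helper name qualifyBulkOutputPath is made up for this example.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HConstants;

public class BulkOutputPathSketch {

  /**
   * Resolve the (possibly relative) bulk output dir against the HBase
   * cluster's filesystem (the one backing hbase.rootdir) instead of
   * FileSystem.get(conf), which may point at the yarn cluster's HDFS
   * once yarn's configs override fs.defaultFS.
   */
  static Path qualifyBulkOutputPath(Configuration conf, String bulkOutputDir)
      throws IOException {
    // Filesystem backing hbase.rootdir, i.e. the HBase cluster's HDFS.
    Path rootDir = new Path(conf.get(HConstants.HBASE_DIR));
    FileSystem hbaseFs = rootDir.getFileSystem(conf);

    // Qualify the bulk output path against the HBase filesystem, not the
    // job's default filesystem.
    return new Path(bulkOutputDir).makeQualified(hbaseFs.getUri(),
        hbaseFs.getWorkingDirectory());
  }
}
{code}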

> CopyTable by bulkload will write hfile into yarn's HDFS 
> --------------------------------------------------------
>
>                 Key: HBASE-21445
>                 URL: https://issues.apache.org/jira/browse/HBASE-21445
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 1.5.0, 1.3.3, 2.2.0, 2.0.3, 1.4.9, 2.1.2
>
>         Attachments: HBASE-21445.v1.patch
>
>
> When using CopyTable with bulkload, I found that all hfiles are written to our Yarn cluster's HDFS, and loading the hfiles into the HBase cluster fails because the yarn cluster and the hbase cluster use different HDFS instances.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)