Posted to issues@ozone.apache.org by "Pratyush Bhatt (Jira)" <ji...@apache.org> on 2023/05/23 02:47:00 UTC

[jira] [Created] (HDDS-8673) HBase ImportTsv doesn't take ofs:// as a FS

Pratyush Bhatt created HDDS-8673:
------------------------------------

             Summary: HBase ImportTsv doesn't take ofs:// as a FS
                 Key: HDDS-8673
                 URL: https://issues.apache.org/jira/browse/HDDS-8673
             Project: Apache Ozone
          Issue Type: Task
          Components: Ozone Filesystem
    Affects Versions: 1.4.0
            Reporter: Pratyush Bhatt


While running the following ImportTsv bulk load command:
{noformat}
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles table_dau3f3374e ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv{noformat}
The job fails with:
{noformat}
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_72cbb1f1-d9b6-46a4-be39-e27a427c5842, expected: hdfs://ns1{noformat}
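The "Wrong FS" message comes from Hadoop's {{FileSystem.checkPath}}, which rejects any path whose scheme/authority differs from the filesystem it is invoked on (here the default filesystem, {{hdfs://ns1}}). A minimal, dependency-free model of that comparison (the {{WrongFsCheck}} class and its method are illustrative, not Hadoop's actual code):

```java
import java.net.URI;

// Simplified model of org.apache.hadoop.fs.FileSystem.checkPath():
// the real method compares the path's scheme/authority against the
// filesystem instance it is invoked on and throws on a mismatch.
public class WrongFsCheck {

    /** Throws, like checkPath, when the path does not belong to fsUri. */
    static void checkPath(URI fsUri, URI path) {
        String pathScheme = path.getScheme();
        String pathAuthority = path.getAuthority();
        // A scheme-less (relative) path is taken to belong to the filesystem.
        if (pathScheme == null) {
            return;
        }
        if (!pathScheme.equalsIgnoreCase(fsUri.getScheme())
                || (pathAuthority != null
                    && !pathAuthority.equalsIgnoreCase(fsUri.getAuthority()))) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://ns1");  // fs.defaultFS of the cluster
        URI partitions = URI.create(
            "ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_x");  // hbase.fs.tmp.dir path
        try {
            checkPath(defaultFs, partitions);
            System.out.println("path accepted");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```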
Complete trace:
{noformat}
server-resourcemanager-3.1.1.7.1.8.3-339.jar:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-registry-3.1.1.7.1.8.3-339.jar
2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/bin/../lib/hadoop/lib/native
2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.version=5.4.0-135-generic
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:user.name=hrt_qa
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hrt_qa
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hwqe/hadoopqe
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.memory.free=108MB
2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.memory.max=228MB
2023-05-22 17:01:19,927|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Client environment:os.memory.total=145MB
2023-05-22 17:01:19,930|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ozn-lease16-1.ozn-lease16.root.hwx.site:2181,ozn-lease16-2.ozn-lease16.root.hwx.site:2181,ozn-lease16-3.ozn-lease16.root.hwx.site:2181 sessionTimeout=30000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$16/169226355@454e763
2023-05-22 17:01:19,942|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2023-05-22 17:01:19,952|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 4194304 Bytes
2023-05-22 17:01:19,963|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=
2023-05-22 17:01:19,983|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO zookeeper.Login: Client successfully logged in.
2023-05-22 17:01:20,003|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.Login: TGT refresh thread started.
2023-05-22 17:01:20,009|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.Login: TGT valid starting at:        Mon May 22 13:07:24 UTC 2023
2023-05-22 17:01:20,010|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.Login: TGT expires:                  Tue May 23 13:07:24 UTC 2023
2023-05-22 17:01:20,010|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.Login: TGT refresh sleeping until: Tue May 23 08:54:46 UTC 2023
2023-05-22 17:01:20,010|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2023-05-22 17:01:20,033|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.ClientCnxn: Opening socket connection to server ozn-lease16-1.ozn-lease16.root.hwx.site/172.27.16.139:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2023-05-22 17:01:20,039|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.104.25.105:47830, server: ozn-lease16-1.ozn-lease16.root.hwx.site/172.27.16.139:2181
2023-05-22 17:01:20,045|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:20 INFO zookeeper.ClientCnxn: Session establishment complete on server ozn-lease16-1.ozn-lease16.root.hwx.site/172.27.16.139:2181, sessionid = 0x10b4fe100361851, negotiated timeout = 30000
2023-05-22 17:01:22,655|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:22 INFO mapreduce.HFileOutputFormat2: bulkload locality sensitive enabled
2023-05-22 17:01:22,655|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:22 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table table_dau3f3374e
2023-05-22 17:01:22,733|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:22 INFO mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count for all tables
2023-05-22 17:01:23,261|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:23 INFO client.ConnectionImplementation: Closing master protocol: MasterService
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_72cbb1f1-d9b6-46a4-be39-e27a427c5842, expected: hdfs://ns1
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:788)
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:647)
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:864)
2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:661)
2023-05-22 17:01:23,264|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:598)
2023-05-22 17:01:23,264|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:546)
2023-05-22 17:01:23,264|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:754)
2023-05-22 17:01:23,264|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
2023-05-22 17:01:23,264|INFO|MainThread|machine.py:203 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:767)
2023-05-22 17:01:23,635|INFO|MainThread|machine.py:232 - run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exit Code: 1
2023-05-22 17:01:23,635|INFO|MainThread|machine.py:238 - run()|Command /opt/cloudera/parcels/CDH/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles table_dau3f3374e ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv failed after 0 retries {noformat}
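Judging from the stack trace, {{HFileOutputFormat2.configurePartitioner}} places the partitions file under {{hbase.fs.tmp.dir}} but qualifies it against the default filesystem ({{FileSystem.get(conf)}}, bound to {{fs.defaultFS = hdfs://ns1}}) rather than the filesystem that owns the path. A hedged, dependency-free sketch of the usual remedy pattern, letting the path's own scheme win (in Hadoop the equivalent would be {{path.getFileSystem(conf)}} followed by {{fs.makeQualified(path)}}; the {{QualifyAgainstOwnFs}} class below is illustrative only):

```java
import java.net.URI;

// Sketch of the remedy pattern: qualify a path against the filesystem
// that owns it, not against fs.defaultFS, so a path on a secondary
// filesystem (here ofs://) keeps its own root.
public class QualifyAgainstOwnFs {

    /** Qualifies a possibly-relative path against the filesystem that owns it. */
    static URI qualify(URI defaultFs, URI path) {
        // If the path carries its own scheme, its own filesystem wins;
        // otherwise fall back to the default filesystem.
        URI root = (path.getScheme() != null) ? path : defaultFs;
        String p = path.getPath();
        if (!p.startsWith("/")) {
            p = "/" + p;
        }
        return URI.create(root.getScheme() + "://" + root.getAuthority() + p);
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://ns1");
        URI ofsPath = URI.create("ofs://ozone1/vol1/bucket1/hbase/bulkload");
        // Stays on ofs://ozone1 instead of being forced onto hdfs://ns1.
        System.out.println(qualify(defaultFs, ofsPath));
    }
}
```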



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org