Posted to issues@impala.apache.org by "Michael Smith (Jira)" <ji...@apache.org> on 2022/12/03 00:28:00 UTC

[jira] [Resolved] (IMPALA-11767) Hudi tests fail on Ozone with INVALID_VOLUME_NAME org.apache.hadoop.ozone.om.exceptions.OMException: Bucket or Volume name cannot start with a period or dash

     [ https://issues.apache.org/jira/browse/IMPALA-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Smith resolved IMPALA-11767.
------------------------------------
    Fix Version/s: Impala 4.3.0
       Resolution: Fixed

> Hudi tests fail on Ozone with INVALID_VOLUME_NAME org.apache.hadoop.ozone.om.exceptions.OMException: Bucket or Volume name cannot start with a period or dash
> -------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-11767
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11767
>             Project: IMPALA
>          Issue Type: Bug
>    Affects Versions: Impala 4.1.1
>            Reporter: Andrew Sherman
>            Assignee: Michael Smith
>            Priority: Critical
>              Labels: ozone
>             Fix For: Impala 4.3.0
>
>
> The test failures look like:
> {code}
> query_test/test_scanners.py:404: in test_hudiparquet
>     self.run_test_case('QueryTest/hudi-parquet', vector)
> common/impala_test_suite.py:766: in run_test_case
>     user=test_section.get('USER', '').strip() or None)
> common/impala_test_suite.py:688: in __exec_in_impala
>     result = self.__execute_query(target_impalad_client, query, user=user)
> common/impala_test_suite.py:1042: in __execute_query
>     return impalad_client.execute(query, user=user)
> common/impala_connection.py:215: in execute
>     return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:189: in execute
>     handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:365: in __execute_query
>     handle = self.execute_query_async(query_string, user=user)
> beeswax/impala_beeswax.py:359: in execute_query_async
>     handle = self.__do_rpc(lambda: self.imp_service.query(query,))
> beeswax/impala_beeswax.py:522: in __do_rpc
>     raise ImpalaBeeswaxException(self.__build_error_message(b), b)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> E    INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
> E    MESSAGE: AnalysisException: Failed to load metadata for table: 'hudi_non_partitioned'
> E   CAUSED BY: TableLoadingException: Loading file and block metadata for 1 paths for table functional_parquet.hudi_non_partitioned: failed to load 1 paths. Check the catalog server log for more details.
> {code}
> The catalog logs contain:
> {code}
> E1124 12:41:25.643142 15712 ParallelFileMetadataLoader.java:171] Loading file and block metadata for 1 paths for table functional_parquet.hudi_non_partitioned encountered an error loading data for path ofs://localhost:9862/impala/test-warehouse/hudi_parquet
> Java exception follows:
> java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: Error checking path :ofs://localhost:9862/impala/test-warehouse/hudi_parquet/year=2015, under folder: ofs://localhost:9862/impala/test-warehouse/hudi_parquet
> 	at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:552)
> 	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:513)
> 	at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:86)
> 	at org.apache.impala.catalog.ParallelFileMetadataLoader.loadInternal(ParallelFileMetadataLoader.java:168)
> 	at org.apache.impala.catalog.ParallelFileMetadataLoader.load(ParallelFileMetadataLoader.java:120)
> 	at org.apache.impala.catalog.HdfsTable.loadFileMetadataForPartitions(HdfsTable.java:781)
> 	at org.apache.impala.catalog.HdfsTable.loadFileMetadataForPartitions(HdfsTable.java:744)
> 	at org.apache.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:719)
> 	at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1268)
> 	at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1162)
> 	at org.apache.impala.catalog.TableLoader.load(TableLoader.java:144)
> 	at org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:245)
> 	at org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:242)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hudi.exception.HoodieException: Error checking path :ofs://localhost:9862/impala/test-warehouse/hudi_parquet/year=2015, under folder: ofs://localhost:9862/impala/test-warehouse/hudi_parquet
> 	at org.apache.hudi.hadoop.HoodieROTablePathFilter.accept(HoodieROTablePathFilter.java:177)
> 	at org.apache.impala.util.HudiUtil.lambda$filterFilesForHudiROPath$0(HudiUtil.java:35)
> 	at java.util.ArrayList.removeIf(ArrayList.java:1415)
> 	at org.apache.impala.util.HudiUtil.filterFilesForHudiROPath(HudiUtil.java:35)
> 	at org.apache.impala.catalog.FileMetadataLoader.load(FileMetadataLoader.java:212)
> 	at org.apache.impala.catalog.ParallelFileMetadataLoader.lambda$loadInternal$1(ParallelFileMetadataLoader.java:162)
> 	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> 	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
> 	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> 	at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:322)
> 	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
> 	at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:66)
> 	at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:36)
> 	at org.apache.impala.catalog.ParallelFileMetadataLoader.loadInternal(ParallelFileMetadataLoader.java:162)
> 	... 13 more
> Caused by: org.apache.hudi.exception.HoodieIOException: Could not check if dataset ofs://localhost:9862/ is valid dataset
> 	at org.apache.hudi.exception.DatasetNotFoundException.checkValidDataset(DatasetNotFoundException.java:55)
> 	at org.apache.hudi.common.table.HoodieTableMetaClient.<init>(HoodieTableMetaClient.java:101)
> 	at org.apache.hudi.common.table.HoodieTableMetaClient.<init>(HoodieTableMetaClient.java:88)
> 	at org.apache.hudi.common.table.HoodieTableMetaClient.<init>(HoodieTableMetaClient.java:84)
> 	at org.apache.hudi.hadoop.HoodieROTablePathFilter.accept(HoodieROTablePathFilter.java:138)
> 	... 26 more
> Caused by: INVALID_VOLUME_NAME org.apache.hadoop.ozone.om.exceptions.OMException: Bucket or Volume name cannot start with a period or dash
> 	at org.apache.hadoop.ozone.client.rpc.RpcClient.verifyVolumeName(RpcClient.java:702)
> 	at org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:507)
> 	at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:162)
> 	at org.apache.hadoop.fs.ozone.BasicRootedOzoneClientAdapterImpl.getFileStatus(BasicRootedOzoneClientAdapterImpl.java:609)
> 	at org.apache.hadoop.fs.ozone.BasicRootedOzoneFileSystem.getFileStatusAdapter(BasicRootedOzoneFileSystem.java:900)
> 	at org.apache.hadoop.fs.ozone.BasicRootedOzoneFileSystem.getFileStatus(BasicRootedOzoneFileSystem.java:884)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
> 	at org.apache.hadoop.fs.ozone.BasicRootedOzoneFileSystem.exists(BasicRootedOzoneFileSystem.java:948)
> 	at org.apache.hudi.common.io.storage.HoodieWrapperFileSystem.exists(HoodieWrapperFileSystem.java:459)
> 	at org.apache.hudi.exception.DatasetNotFoundException.checkValidDataset(DatasetNotFoundException.java:48)
> 	... 30 more
> {code}
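From the stack trace one can sketch what appears to be happening: `HoodieROTablePathFilter.accept` walks up from the partition directory (`.../hudi_parquet/year=2015`) checking each ancestor for a `.hoodie` metadata folder via `DatasetNotFoundException.checkValidDataset`. Once the walk reaches the filesystem root `ofs://localhost:9862/`, the probed path becomes `/.hoodie`, whose first component rooted Ozone (`ofs://`) parses as a volume name; `RpcClient.verifyVolumeName` rejects names starting with a period, producing the INVALID_VOLUME_NAME above. A minimal sketch of that path arithmetic (hypothetical helper names, not Hudi's actual code):

```python
# Sketch: why Hudi's ancestor walk ends up probing an Ozone volume
# named ".hoodie". Simplified illustration, not Hudi internals.
from posixpath import join

ROOT = "/"  # path part of ofs://localhost:9862/

def ancestors(path):
    """Yield path and each parent up to the root, mimicking the
    upward walk in HoodieROTablePathFilter.accept (simplified)."""
    while True:
        yield path
        if path == ROOT:
            return
        path = path.rsplit("/", 1)[0] or ROOT

def hoodie_marker(path):
    """The .hoodie metadata path Hudi probes for a candidate base path."""
    return join(path, ".hoodie")

table = "/impala/test-warehouse/hudi_parquet/year=2015"
probes = [hoodie_marker(p) for p in ancestors(table)]
# The final probe is "/.hoodie": under ofs:// its first component would
# be an Ozone volume named ".hoodie", which fails name validation.
print(probes[-1])
```

Under this reading, the fix is to stop the walk (or tolerate the validation error) before probing the bucket/volume root.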



--
This message was sent by Atlassian Jira
(v8.20.10#820010)